\section{Introduction} A way to study knots and links is by studying their regular projections on a plane, called `diagrams'. One big advancement in Low-dimensional Topology was the `discretization' of link isotopy through the well-known moves on knot and link diagrams discovered by K. Reidemeister in 1927 \cite{Rd1}, see Fig.~\ref{reidem}. \begin{wrapfigure}{R}{0.30\textwidth} \centering \includegraphics[width=0.25\textwidth]{reidem.eps} \caption{The Reidemeister moves} \label{reidem} \end{wrapfigure} The crucial one is the Reidemeister III move, which in terms of dual planar graphs corresponds to the so-called `star-triangle relation'. A second advancement was the `algebraization' of link isotopy, by representing knots and links via braids. Braids are both geometric-topological and algebraic objects; geometric as sets of interwinding, non-intersecting descending strands, algebraic as elements of the so-called Artin braid groups \cite{Ar1,Ar2}. In the middle of Fig.~\ref{closures} we can see an example of a braid on 3 strands. The aim of this article is to detail the connection between links and braids, which is marked by two fundamental results: the {\it Alexander theorem}, whereby links are represented via braids, and the {\it Markov theorem}, which provides the braid equivalence that reflects link isotopy. See Theorems~\ref{alex} and~\ref{markov}. In 1984, these theorems played a key role in the discovery of a novel link invariant, the Jones polynomial \cite{Jo1,Jo2}. \section{From braids to links} \begin{wrapfigure}{L}{0.6\textwidth} \centering \includegraphics[width=0.58\textwidth]{closures.eps} \caption{A braid on 3 strands and two closures} \label{closures} \end{wrapfigure} More rigorously, a classical braid on $n$ strands is, geometrically, a homeomorphic image of $n$ arcs in the interior of a thickened rectangle, starting from $n$ top points and running monotonically down to $n$ corresponding bottom points, that is, with no local maxima or minima. 
Each one of the two sets of endpoints can be assumed collinear and the two sets coplanar. For an example see the middle illustration of Fig.~\ref{closures}. So a braid can be identified with the braid diagram on the projection plane of its endpoints. Moreover, two braids are {\it isotopic} if they differ by isotopy moves of their arcs that preserve the braid structure. These isotopies comprise the Reidemeister moves II and III for braids and planar isotopies, which include the change of relative heights of two non-adjacent crossings, as well as small shifts of endpoints that preserve their order. The Reidemeister I move cannot apply, as the kink introduces local maxima and minima. In the set of braids, isotopic elements are considered equal. The interaction between braids and links takes place through their diagrams. An operation that connects braids and links is the `closure' operation. It is realized by joining with simple non-interwinding arcs the corresponding pairs of endpoints of the braid. The result is an oriented link diagram that winds around a specified axis, the {\it braid axis}, in the same sense, say counterclockwise. The link orientation is induced by the top-to-bottom direction of the braid. There are many ways of specifying the braid axis. Fig.~\ref{closures} illustrates two of them: in the left-hand illustration the braid axis can be considered to be perpendicular to the braid projection plane, piercing it at one point, its `trace'; in the right-hand illustration the braid axis is a horizontal line parallel to and behind the projection plane of the braid. The two closures are the {\it planar closure} and the {\it vertical closure} respectively. Clearly, any two closures of the same braid are isotopic. \section{From links to braids} Now the following question arises naturally: can one always do the converse? That is, given an oriented link diagram, can one turn it into an isotopic closed braid? 
\begin{wrapfigure}{L}{0.45\textwidth} \centering \includegraphics[width=0.4\textwidth]{tooth.eps} \caption{Alexander's braiding of an opposite arc} \label{tooth} \end{wrapfigure} Note that our diagram may already be in closed braid form for some choice of braid axis. Yet, with a different specified axis it may not be braided any more. Imagine in Fig.~\ref{closures} the axis trace to be placed in some other region of the plane. The answer to the above question is `yes' and the idea is quite simple. Indeed, we first specify a point on the plane of the diagram, the trace of the braid axis, and we define a `good' direction around this point. We then subdivide the arcs of the link diagram into smaller arcs, by marking the transition from `good' arcs to `opposite' arcs, according to whether or not they agree with the good direction. The good arcs are left alone. \begin{wrapfigure}{R}{0.23\textwidth} \centering \includegraphics[width=0.18\textwidth]{subdivision.eps} \caption{Subdividing an opposite arc} \label{subdivision} \end{wrapfigure} \noindent An opposite arc is subdivided further if needed (Fig.~\ref{subdivision}), so that each subarc can be part of the boundary of a `sliding triangle' across the axis trace. See Fig.~\ref{tooth}. The sliding triangle of a subarc does not intersect any other arcs of the link diagram. If the subarc lies over other arcs of the diagram, so does its triangle. Similarly, if it lies under other arcs then its triangle also lies under these arcs. (The notion of the sliding triangle was first introduced by Reidemeister \cite{Rd1} in the general context of his study of isotopy, not of braiding, and he called these isotopy-generating moves {\it $\Delta$-moves}.) So, by an isotopy move we can replace the opposite subarc by the other two sides of its sliding triangle, which are good arcs, and the opposite subarc is eliminated. The opposite subarcs are finitely many, so the process terminates with a closed braid. 
The algorithm outlined above is J.W. Alexander's proof of his homonymous theorem \cite{Al}: \begin{thm}[J.W. Alexander, 1923] \label{alex} Every oriented link diagram may be isotoped to a closed braid. \end{thm} Yet, the idea of turning a knot into a braid goes back to Brunn in 1897 \cite{Br}, who observed that any knot has a projection with a single multiple point; from this it follows immediately (by appropriate small perturbations) that the diagram can be brought to closed braided form. In 1974 Joan Birman made Alexander's braiding algorithm more technical \cite{Bi1}, with the purpose of providing a rigorous proof of the Markov theorem (see Section~\ref{braidequiv}). Another proof of the Alexander theorem, very appealing to the imagination, is the one by Hugh Morton (\cite{Mo}, 1986): instead of keeping the braid axis fixed and isotoping the arcs of the link diagram to reach a closed braid form, he keeps the diagram fixed and `threads' the braid axis (in the form of a simple closed curve projected on the plane of the link diagram) through the arcs of the diagram, so that each subarc winds around the axis in the counterclockwise sense. In the same paper Morton uses a more technical version of his braiding algorithm for proving the Markov theorem. A novel approach to the Alexander theorem is due to Shuji Yamada (\cite{Ya}, 1987), using the auxiliary concept of the `Seifert circles'. The Seifert circles of a link diagram are obtained by smoothing all crossings according to the orientations of their arcs. Yamada introduces grouping operations on the system of Seifert circles, which correspond to isotopies on the link diagram and which terminate with a braided form. 
\begin{wrapfigure}{R}{0.60\textwidth} \centering \includegraphics[width=0.57\textwidth]{basicbraiding.eps} \caption{An $L$-braiding move results in a pair of corresponding braid strands} \label{basicbraiding} \end{wrapfigure} An additional value of Yamada's braiding algorithm is that it implies equality between the Seifert number and the braid index of the link. Pierre Vogel gave in 1990 a more sophisticated braiding algorithm based on the one by Yamada, where trees are used for measuring complexity \cite{Vo}. Vogel's algorithm was then used by Pawel Traczyk for proving the Markov theorem (\cite{Tr}, 1992). A further approach to the Alexander theorem was made by the author in the 1990's \cite{La1,La2,LR1}. This algorithm results in open braids that one can `read' directly. Here we mark the local maxima and minima of the link diagram with respect to the height function, and the opposite subarcs are now called {\it up-arcs}, so that each up-arc has an orthogonal sliding triangle of the same type, `under' or `over', as the up-arc. \begin{wrapfigure}{L}{0.6\textwidth} \centering \includegraphics[width=0.57\textwidth]{twistxings.eps} \caption{A half twist and a full twist on crossings} \label{twistxings} \end{wrapfigure} The elimination of an up-arc is an {\it $L$-braiding move}: it consists in cutting the arc at some point, say the uppermost, and then pulling the two ends, the upper upward and the lower downward, keeping them aligned, so as to obtain a pair of corresponding braid strands, both running entirely {\it over\/} the rest of the diagram or entirely {\it under\/} it, according to the type of the up-arc. Fig.~\ref{basicbraiding} illustrates the abstraction of an `over' $L$-braiding move and Fig.~\ref{breg} an example of applying the $L$-braiding algorithm. The closure of the resulting tangle is a link diagram, obviously isotopic to the original one, since from the up-arc we created a stretched loop isotopic to the original up-arc. 
Any type of closure can apply here too. \begin{wrapfigure}{R}{0.7\textwidth} \centering \includegraphics[width=0.65\textwidth]{breg.eps} \caption{An example of applying the $L$-braiding algorithm} \label{breg} \end{wrapfigure} However, if we want to apply the braiding algorithm to the closure of a braid and obtain the initial braid back, it is convenient to apply the vertical closure. Finally, a really elegant version of the above braiding algorithm is given by the author and Louis H. Kauffman (\cite{KL1}, 2004): it uses the basic $L$-braiding move described above for up-arcs with no crossings, but it braids the up-arcs with crossings simply by rotating them on their projection plane, as illustrated in Fig.~\ref{twistxings}. Each algorithm for proving the Alexander theorem has its own flavour and its own advantages and disadvantages, but they are all based on the same idea: to eliminate, one by one, the arcs of the diagram that have the wrong sense with respect to the chosen braid axis and to replace them by admissible ones. \section{The braid group} \begin{wrapfigure}{R}{0.65\textwidth} \centering \includegraphics[width=0.6\textwidth]{gauss.eps} \caption{Gauss' handwritten study of braids} \label{gauss} \end{wrapfigure} The systematic theory of braids was introduced by E. Artin in 1925 \cite{Ar1,Ar2}. He started with the geometric definition and then studied the group structure of the set of equivalence classes of braids on $n$ strands, after analyzing the braid isotopies with respect to the height function. These equivalence classes are also called {\it braids} and they form the {\it classical braid group $B_n$}, with concatenation as the group operation. 
The group $B_n$ has a presentation with finitely many generators $\sigma_i$, $i=1,\ldots,n-1$, and a finite set of defining relations (due to Artin and Chow): \[ \sigma_{i}\sigma_{i+1}\sigma_{i}= \sigma_{i+1}\sigma_{i}\sigma_{i+1} \quad \text{for}\quad i=1,\ldots,n-2, \qquad \text{and}\qquad \sigma_i\sigma_j = \sigma_j\sigma_i \quad \text{for}\quad \vert i-j\vert >1. \] The generators $\sigma_i$ resemble the elementary transpositions $s_i$ of the symmetric group $S_n$. They can be viewed as elementary braids of one crossing between the consecutive strands $i$ and $i+1$, also carrying the topological information of which strand crosses over the other, see Fig.~\ref{brgenrs}. The most important braid relation is the braided Reidemeister III move (all arrows down), while the Reidemeister II move is a direct consequence of the fact that the generators $\sigma_i$ are invertible. \begin{wrapfigure}{L}{0.20\textwidth} \scalebox{0.65}{ \begin{tikzpicture}[scale=1] \braid[number of strands =1] 1 ; \end{tikzpicture} \begin{tikzpicture}[scale=1] \node [label={[shift={(0.0, 0.3)}]$\cdots$}] {}; \end{tikzpicture} \begin{tikzpicture}[scale=1] \braid[number of strands =1] 1 ; \end{tikzpicture} \quad \begin{tikzpicture}[scale=1] \braid s_1^{-1} ; \end{tikzpicture} \quad \begin{tikzpicture}[scale=1] \braid[number of strands =1] 1 ; \end{tikzpicture} \begin{tikzpicture}[scale=1] \node [label={[shift={(0.0, 0.3)}]$\cdots$}] {}; \end{tikzpicture} \begin{tikzpicture}[scale=1] \braid[number of strands =1] 1 ; \end{tikzpicture} } \caption{The braid generator $\sigma_i$} \label{brgenrs} \end{wrapfigure} The group $B_n$ surjects onto the symmetric group $S_n$ by sending the generator $\sigma_i$ to the elementary transposition $s_i$. The group $S_n$ has the extra relations $s_i^2 = 1$, $i=1,\ldots,n-1$, which are responsible for its finite order. The permutation induced by a braid tells us the number of components of its closure link, by counting the disjoint cycles of the permutation. 
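To illustrate the last point, here is a minimal Python sketch (our own illustration, not code from the article; the function names are hypothetical). A braid word in $B_n$ is encoded as a list of nonzero integers, $+i$/$-i$ standing for $\sigma_i^{\pm 1}$; the crossing sign does not affect the induced permutation, whose number of disjoint cycles equals the number of components of the closure link.

```python
def braid_permutation(n, word):
    """Permutation induced by a braid word in B_n: p[k] is the top
    position of the strand ending at bottom position k (0-indexed).
    The sign of each letter is irrelevant for the permutation."""
    p = list(range(n))
    for g in word:
        i = abs(g) - 1                    # sigma_i acts on strands i, i+1
        p[i], p[i + 1] = p[i + 1], p[i]
    return p

def closure_components(n, word):
    """Number of components of the closure = number of disjoint cycles."""
    p = braid_permutation(n, word)
    seen, cycles = [False] * n, 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            k = start
            while not seen[k]:
                seen[k] = True
                k = p[k]
    return cycles

# sigma_1^3 in B_2 closes to the trefoil (one component),
# sigma_1^2 in B_2 closes to the Hopf link (two components):
assert closure_components(2, [1, 1, 1]) == 1
assert closure_components(2, [1, 1]) == 2
```

The empty word in $B_n$ closes to the $n$-component unlink, which the cycle count also recovers.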
Braid groups play a central role in many areas of mathematics. A complete reference on braid groups and related topics is the text by Kassel and Turaev \cite{KT}. It is worth noting that C.F. Gauss was also thinking about the concept of a braid, probably while studying interwinding curves in space in the context of his theory of electrodynamics. In Fig.~\ref{gauss} we see a handwritten note of Gauss, page~283 of his Handbuch~7, containing a sketch of a braid that closes to a 3-component link, a coding table, as well as a curve configuration winding around two points on its projection plane. \section{Link isotopy and braid equivalence}\label{braidequiv} We next consider equivalence relations in the set of all braids that correspond, upon closure, to link isotopy. This problem was first studied by A.A. Markov \cite{Ma}, once the Alexander theorem and Artin's algebraic structure of the braid group were available. \begin{wrapfigure}{R}{0.65\textwidth} \centering \includegraphics[width=0.6\textwidth]{conjugation.eps} \caption{Closure of braid conjugation and braid stabilization induces isotopy} \label{conjugation} \end{wrapfigure} Closing a braid $a$ and a conjugate of $a$ by a braid generator yields two links that differ by Reidemeister II \& III moves taking place in the closure part of the conjugate of $a$. See Fig.~\ref{conjugation}. Similarly, adding a new strand at the end of the braid, which crosses once over or under the last strand, corresponds in the closures to a kink, that is, to a Reidemeister I move. This move is called the `stabilization move'. See Fig.~\ref{conjugation}. Conjugations in all braid groups and stabilization moves are independent from each other and must figure in any braid equivalence that reflects link isotopy. The question is whether any other, hidden moves are needed for capturing the complexity of link isotopy. This is not the case, as we see from the following: \begin{thm}[A.A. 
Markov, 1936] \label{markov} Two oriented links are isotopic if and only if any two corresponding braids differ by a finite sequence of braid relations and the moves: \begin{center} (i) \ Conjugation $\sigma_i^{-1} a \sigma_i \sim a$ and \ (ii) \ Stabilization $a \sigma_n^{\pm 1} \sim a$ for any $a \in B_n$ and for all $n\in {\mathbb N}$. \end{center} \end{thm} The statement of the theorem originally included {\it three moves}: the two local moves above and another, more global one, the exchange move, which generalizes the Reidemeister II move. The sketch of proof by A.A. Markov \cite{Ma} used Alexander's braiding algorithm. Soon afterwards, N.~Weinberg reduced the exchange move to the {\it two moves} of the statement \cite{Wei}. The interest in the braid equivalence was rekindled by Joan Birman, after she followed the talk of an unknown speaker. Birman produced a rigorous proof of the theorem, filling in all details, by using a more technical version of Alexander's braiding algorithm \cite{Bi1}. A few years later Daniel Bennequin gave a different proof using 3-dimensional contact topology \cite{Ben}. In 1984 the Jones polynomial was discovered \cite{Jo1,Jo2}, a new powerful link invariant, whose construction used a representation of the braid group in the Temperley--Lieb algebra and the Alexander and Markov theorems. This discovery led to new approaches to the Markov theorem. Hugh Morton gave a new proof using his threading algorithm \cite{Mo}, Pawel Traczyk proved the Markov theorem using Vogel's algorithm \cite{Tr}, and Joan Birman revisited the theorem with William Menasco using Bennequin's ideas \cite{BM}. Finally, in the 1990's, the author discovered a more geometric braid equivalence move, the $L$-move, and proved with Colin P. 
Rourke a {\it one-move} analogue of the Markov theorem, whose proof used the braiding moves described earlier \cite{La1, La2, LR1}: \begin{thm} \label{L} Two oriented links are isotopic if and only if any two corresponding braids differ by a finite sequence of braid relations and the $L$-moves. \end{thm} \begin{wrapfigure}{L}{0.7\textwidth} \centering \includegraphics[width=0.62\textwidth]{Lmoves.eps} \caption{An $L_u$-move and an $L_o$-move at the same point} \label{Lmoves} \end{wrapfigure} An {\it $L$-move} resembles an $L$-braiding move: it consists in cutting a braid arc at some point and then pulling the two ends, the upper downward and the lower upward, keeping them aligned, so as to obtain a new pair of corresponding braid strands, both running entirely {\it over\/} the rest of the braid or entirely {\it under\/} it, according to the type of the move, denoted $L_o$ or $L_u$ respectively. Fig.~\ref{Lmoves} illustrates an example with both types of $L$-moves taking place at the same point of a braid. The closure of the resulting braid differs from the closure of the initial one by a stretched loop. View also Fig.~\ref{symmetry} for an abstract illustration of the similarity between the $L$-braiding move and the $L$-move. \begin{wrapfigure}{R}{0.65\textwidth} \centering \includegraphics[width=0.6\textwidth]{algL.eps} \caption{The $L$-moves have algebraic expressions} \label{algL} \end{wrapfigure} The $L$-moves are geometric. However, as we see in the middle illustration of Fig.~\ref{algL}, using braid isotopy an $L$-move can also be viewed as introducing a crossing inside the braid `box', so in the closure it creates a stretched Reidemeister I kink. This way of viewing the $L$-moves shows that they generalize the stabilization moves. It also renders them local and leads to the observation that they have algebraic expressions, as is clear from Fig.~\ref{algL}. 
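As a toy consistency check (again our own sketch with hypothetical names, not code from the article), one can verify on braid words that both Markov moves of Theorem~\ref{markov} preserve the number of components of the closure, computed as the number of cycles of the induced permutation. Of course this is only a very weak shadow of the theorem, which concerns the full isotopy class.

```python
# Braid words are lists of nonzero integers: +i / -i stand for sigma_i^{+1/-1}.

def closure_cycles(n, word):
    """Components of the closure of a braid word in B_n:
    the number of disjoint cycles of the induced permutation."""
    p = list(range(n))
    for g in word:
        i = abs(g) - 1
        p[i], p[i + 1] = p[i + 1], p[i]
    seen, cycles = [False] * n, 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            k = start
            while not seen[k]:
                seen[k] = True
                k = p[k]
    return cycles

a = [1, -2, 1]                    # a braid word in B_3
conjugated = [-1] + a + [1]       # move (i):  sigma_1^{-1} a sigma_1, still in B_3
stabilized = a + [3]              # move (ii): a sigma_3, now viewed in B_4

assert closure_cycles(3, conjugated) == closure_cycles(3, a)
assert closure_cycles(4, stabilized) == closure_cycles(3, a)
```

Note that stabilization adds a strand but merges it into an existing component, which is why the cycle count is unchanged.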
Furthermore, it follows from Theorem~\ref{L} that conjugation can be achieved by braid relations and $L$-moves. \smallbreak Neither the Markov theorem nor Theorem~\ref{L} is easy to prove. For proving the `only if' part one needs first to take two diagrams of the two isotopic links and produce corresponding braids, using some braiding algorithm, and then show that the two braids are Markov equivalent (resp. $L$-move equivalent). In practice this means that any choices made on a given link diagram when applying the braiding algorithm correspond to Markov equivalent (resp. $L$-move equivalent) braids, and that if two link diagrams differ by an isotopy move the corresponding braids are also Markov equivalent (resp. $L$-move equivalent). For this analysis it is crucial to have the isotopy moves local and the braiding moves independent from each other. In this way one can always assume to have done almost all braiding in the otherwise identical diagrams in question and to be left only with the isotopy move or algorithmic choice by which they differ. Then the two braid diagrams are directly comparable. \begin{wrapfigure}{L}{0.65\textwidth} \centering \includegraphics[width=0.6\textwidth]{symmetry.eps} \caption{The symmetry of the braiding and the $L$-move} \label{symmetry} \end{wrapfigure} The braiding algorithm of \cite{La1,La2,LR1} is particularly appropriate for proving the Markov theorem or its equivalent Theorem~\ref{L}, after enhancing it with some extra technicalities \cite{La1,La2,LR1} for ensuring independence of the sequence of the braiding moves. This is because the $L$-braiding moves and the $L$-moves are simple and have a basic symmetric interconnection, as illustrated in Fig.~\ref{symmetry}, so they comprise, in fact, one very fundamental uniform move, enabling one to trace easily how the algorithmic choices and the isotopy moves on diagrams affect the final braids. 
\section{Extensions to other diagrammatic settings} Given a diagrammatic knot theory, there are deep interrelations between the diagrammatic isotopy in this theory, the braid structures and the corresponding braid equivalences. More precisely, both the isotopy moves that are allowed and the moves that are forbidden in the setting determine the corresponding braid isotopy, the way the closure of a braid is realized, and also the corresponding braid equivalence moves. On the other hand, braid equivalence theorems are important for understanding the structure and classification of knots and links in various settings and for constructing invariants of knots and links by algebraic means, for example via Markov traces on quotient algebras of the corresponding braid groups. The $L$-braiding and the $L$-moves provide a uniform and flexible ground for formulating and proving braiding and braid equivalence theorems in any diagrammatic setting. Indeed, their simple and fundamental nature, together with the fact that the $L$-moves are geometric and can be localized, are the reasons that they can adapt to all diagrammatic categories where the notions of braid and diagrammatic isotopy are defined. This is particularly useful in settings where algebraic braid structures are not immediately apparent. Indeed, the statements are first geometric and then gradually turn algebraic, if algebraic braid structures are available. \smallbreak The $L$-move techniques were first employed for proving braiding and braid equivalence theorems for classical knots in $3$-manifolds with or without boundary; namely in knot complements, in closed, connected, oriented (c.c.o.) $3$-manifolds, which are obtained from the $3$-sphere, $S^3$, via the `surgery technique', as well as in handlebodies. \begin{wrapfigure}{R}{0.72\textwidth} \centering \includegraphics[width=.65\textwidth]{mfds.eps} \caption{(a) a mixed link and a geometric mixed braid for link complements and c.c.o. 
3-manifolds; (b) a mixed link and a geometric mixed braid for handlebodies} \label{mfds} \end{wrapfigure} The idea here is to fix in $S^3$ a closed braid representation of the $3$-manifold and then represent knots and braids in the $3$-manifold as {\it mixed links} and {\it mixed braids} in $S^3$, which contain the {\it fixed part}, representing the $3$-manifold, and the {\it moving part}, representing the knot/braid in the $3$-manifold. View Fig.~\ref{mfds} for concrete examples. Then, knot isotopy in the $3$-manifold is translated into mixed link isotopy in $S^3$, which applies only to the moving part. In the case of c.c.o. $3$-manifolds we have the isotopy moves for the knot complements, as well as extra isotopy moves, the {\it band moves}, related to the surgery description of the manifold. The mixed braid equivalence for knot complements comprises $L$-moves which take place only on the moving parts of mixed braids (see Fig.~\ref{Lmfds}(a)), while in the case of c.c.o. $3$-manifolds we also have the extra {\it braid band moves}. Then the $L$-move braid equivalences turn into algebraic statements with the use of the algebraic mixed braids \cite{La3}; see Fig.~\ref{Lmfds}(b) for an example. \begin{wrapfigure}{L}{0.65\textwidth} \centering \includegraphics[width=0.6\textwidth]{Lmfds.eps} \caption{(a) An $L$-move in a mixed braid; (b) an algebraic mixed braid} \label{Lmfds} \end{wrapfigure} For the case of a handlebody we have the same setting as for a knot complement. The difference here is that a knot may not pass beyond the boundary of the handlebody from either end, and this is reflected both in the definition of the closure of a mixed braid and in the corresponding braid equivalence. Namely, the closure is realized by simple closing arcs, slightly tilted at the extremes, which run `over' or `under' the rest of the diagram, and different choices may lead to non-isotopic closures. Furthermore, in the mixed braid equivalence some `loop' conjugations are not allowed. 
Details on the above can be found in \cite{LR1,La3,LR2,DL,HL,La4,La5}. See also \cite{Su} and \cite{Sk}. \smallbreak The next application of the $L$-move methods was in virtual knot theory. Virtual knot theory was introduced by Louis~H. Kauffman \cite{Kau2} and is an extension of classical knot theory. In this extension one adds a `virtual' crossing that is neither an over-crossing nor an under-crossing. Fig.~\ref{vmoves} illustrates the diagrammatic moves that contain virtual crossings. In this theory we have the {\it virtual forbidden moves}, F1 and F2, with two real crossings and one virtual. We also have the virtual braid group \cite{Kau2,KL1,Bar}, which extends the classical braid group. The forbidden moves make it harder to braid a virtual knot diagram, so the idea of rotating crossings that contain up-arcs before braiding a diagram (recall Fig.~\ref{twistxings}) comes in handy in this setting. The interpretation of an $L$-move as introducing an in-box crossing proved crucial in the search for the types of $L$-moves needed in this setting, as they are related to the types of kinks allowed in the given isotopy. \begin{wrapfigure}{R}{0.55\textwidth} \centering \includegraphics[width=0.5\textwidth]{vmoves.eps} \caption{Virtual moves: allowed and forbidden} \label{vmoves} \end{wrapfigure} So, we have $L$-moves introducing a real or a virtual crossing, facing to the right or to the left of the braid. Moreover, the presence of the forbidden moves in the theory leads to the requirement that the strands of an $L$-move cross the other strands of the virtual braid only virtually, and also to a type of virtual $L$-move coming from a `trapped' virtual kink. The above led in \cite{KL2} to formulations of virtual braid equivalence theorems for the virtual braid group, for the welded braid group \cite{FRR} and for some analogues of these structures, complementing the prior results of Seiichi Kamada in \cite{Ka}, where the more global exchange moves are used. 
\begin{wrapfigure}{L}{0.77\textwidth} \centering \includegraphics[width=0.73\textwidth]{virtualL.eps} \caption{Types of virtual $L$-moves} \label{virtualL} \end{wrapfigure} Furthermore, the $L$-move techniques have been used by Vassily Manturov and Hang Wang for formulating a Markov-type theorem for free links \cite{mawa}, and by Carmen Caprau and co-authors for obtaining a braid equivalence for virtual singular braids \cite{capega}, as well as for virtual trivalent braids \cite{cadiposa,cacoda}. \smallbreak \begin{wrapfigure}{R}{0.5\textwidth} \centering \includegraphics[width=0.45\textwidth]{smoves.eps} \caption{Singular moves: allowed and forbidden} \label{smoves} \end{wrapfigure} Singular knot theory is related to Vassiliev's theory of knot invariants. Fig.~\ref{smoves} illustrates the diagrammatic moves in the theory as well as the {\it singular forbidden moves}, SF1 and SF2. The singular crossings together with the real crossings and their inverses generate the `singular braid monoid', introduced in different contexts by Baez~\cite{Ba}, Birman~\cite{Bi2} and Smolin~\cite{Sm}. Braiding a singular knot diagram becomes particularly simple by using the idea of rotating singular crossings that contain up-arcs before braiding (Fig.~\ref{twistxings}) and the $L$-braiding moves. An algebraic singular braid equivalence is proved by Bernd Gemein in \cite{Ge} and, assuming this result, in \cite{La4} the $L$-move analogue is formulated. Clearly, there is no $L$-move introducing a singular crossing, as the closure of such a move would contract to a kink with a singular crossing, and this is not an isotopy move in the theory. Also, there is no conjugation by a singular crossing, since it is not an invertible element in the monoid; yet, we can talk about `commuting' in the singular braid monoid: $ab \sim ba$. 
\smallbreak \begin{wrapfigure}{L}{0.6\textwidth} \centering \includegraphics[width=0.55\textwidth]{knotoid.eps} \caption{A knotoid and its two closures to knots} \label{knotoid} \end{wrapfigure} Another very interesting diagrammatic category is the theory of knotoids and braidoids. The theory of knotoids was introduced by Vladimir Turaev in 2012 \cite{Tu}. A knotoid diagram is an open curve in an oriented surface, having finitely many self-intersections that are endowed with under/over data, and with its two endpoints possibly lying in different regions of the diagram. For an example see the middle illustration of Fig.~\ref{knotoid}. The theory of knotoids is a complex diagrammatic theory, and its complexity lies in the {\it knotoid forbidden moves}, $\Phi_+$ and $\Phi_-$, which prevent the endpoints from slipping under or over other arcs of the diagram. See Fig.~\ref{forbidoid}. \begin{wrapfigure}{R}{0.45\textwidth} \centering \includegraphics[width=0.4\textwidth]{forbidoid.eps} \caption{The forbidden moves for knotoids} \label{forbidoid} \end{wrapfigure} The theory of spherical knotoids (i.e., knotoids in the two-sphere) extends the theory of classical knots and also proposes a new diagrammatic approach to classical knots, which arise via the `overpass' or `underpass' closures, see Fig.~\ref{knotoid}. This approach promises a reduction in the computational complexity of knot invariants \cite{Tu}. On the other hand, the natural map from planar to spherical knotoids is surjective but not injective. This means that planar knotoids provide a much richer combinatorial structure than the spherical ones. This fact has interesting implications in the study of proteins \cite{GDBS,GGLDSK}. 
\begin{wrapfigure}{L}{0.4\textwidth} \centering \includegraphics[width=0.35\textwidth]{braidoid_ing.eps} \caption{(a) A braidoid; (b) $L$-braidoiding at an endpoint} \label{braidoids} \end{wrapfigure} Recently, the theory of braidoids, which extends the classical braid theory, has been introduced and developed \cite{GL1,GL2}. A `braidoid' is like a classical braid, but two of its strands terminate at the endpoints. For an example see Fig.~\ref{braidoids}(a). The forbidden moves play a role in the algorithm for turning planar knotoids into braidoids, and they affect the definition of the closure operation on braidoids, in which the endpoints do not participate. Namely, we close corresponding braidoid ends using vertical arcs with slightly tilted extremes, running over or under the rest of the diagram, and this needs to be specified in advance, as different choices may result in non-isotopic knotoids (due to the forbidden moves). For turning a planar knotoid into a braidoid, we use the $L$-braiding moves for up-arcs not containing an endpoint, and the analogous moves illustrated in Fig.~\ref{braidoids}(b) (with choice `o' in the figure) for the ones that contain an endpoint. For a braidoid equivalence we use the $L$-moves, which can take place at any point of a braidoid except for the endpoints. We note that for the braidoids we do not yet have an appropriate algebraic structure, so the $L$-move equivalence is the best we can have so far. It is worth adding at this point that, in \cite{GK1}, Neslihan G\"ug\"umc\"u and Louis Kauffman give a faithful lifting of a planar knotoid to a space curve, such that the endpoints remain attached to two parallel lines. In \cite{KoLa1} Dimitrios Kodokostas and the author make the observation that this interpretation of planar knotoids is related to the knot theory of the handlebody of genus two; these ideas are further explored in \cite{KoLa2}, where the notion of `rail knotoid' is introduced. 
Further, in \cite{La7} the $L$-move techniques are applied to long knots. \smallbreak Finally, in \cite{Sch} Nancy Scherich provides a computer-implemented, grid-diagrammatic proof of the Alexander theorem, based on the $L$-braiding moves. Another result of analogous flavour is a Markov-type theorem for ribbon torus-links in ${\mathbb R}^4$ by Celeste Damiani \cite{Da}. \smallbreak Surveys on many of the above results are included in \cite{La4,La5,GKL}, while a more complete presentation is to appear in \cite{La6}.
\section{Introduction} In this paper, we study circle actions on oriented manifolds with discrete fixed point sets. Let the circle act on a compact oriented manifold $M$ with a discrete fixed point set. At each fixed point, there are non-zero integers, called \emph{weights} (also called \emph{rotation numbers}). In this paper, we prove properties of the weights at the fixed points and derive results on the manifold. One result is that we can associate a multigraph to $M$, and the manifold can be described by this multigraph. Finally, we specialize to the case of dimension 4. In dimension 4, we classify the weights at the fixed points and prove the corresponding existence result. Moreover, we give a necessary and sufficient condition for a multigraph to be realized as a multigraph associated to a 4-dimensional oriented $S^1$-manifold. Consider a circle action on a compact oriented manifold. Assume that the fixed point set is non-empty and finite. For the classification of such an action, one may want to begin either with small numbers of fixed points, or with low dimensions. Note that having an isolated fixed point implies that the dimension of the manifold is even. First, let us begin with small numbers of fixed points. If there is one fixed point, then the manifold must be a point. On the other hand, if there are two fixed points, then any even dimension is possible, as there is an example of a rotation of an even dimensional sphere with two fixed points. Building on this example, in any even dimension greater than two we can realize any even number of fixed points, since we can perform equivariant sums of rotations of even dimensional spheres.
This is in sharp contrast with $S^1$-actions on other types of manifolds; for instance, an almost complex (and hence complex or symplectic) manifold $M$ equipped with an $S^1$-action\footnote{Throughout the paper, if the circle acts on an almost complex, complex, or symplectic manifold, we assume that the action preserves the almost complex, complex, or symplectic structure, respectively.} having two fixed points must have either $\dim M=2$ or $\dim M=6$; for the classification results for other types of manifolds, see \cite{Jan2}, \cite{Kos1}, and \cite{PT}. The situation is quite different when the number of fixed points is odd. If there is an odd number of fixed points, then the dimension of the manifold must be a multiple of four; see Corollary \ref{c27}. Let us specialize to the case of three fixed points. The complex, quaternionic, and octonionic (Cayley) projective planes $\mathbb{CP}^2$, $\mathbb{HP}^2$, and $\mathbb{OP}^2$ admit circle actions with three fixed points; they have real dimensions 4, 8, and 16, respectively. On the other hand, to the author's knowledge, it is not known whether, in dimensions other than 4, 8, and 16, there exists a manifold admitting a circle action with three fixed points. Similar to the case of two fixed points, if we assume an almost complex structure on the manifold, three fixed points can only occur in dimension 4 \cite{Jan2}. Note that among the spaces above, only $\mathbb{CP}^2$ admits an almost complex structure (and complex or symplectic structures). For the classification results for other types of manifolds, see \cite{Jan1}. Second, let us begin with low dimensions. In dimension two, the classification is rather trivial. Among compact oriented surfaces, only the 2-sphere $S^2$ and the 2-torus $\mathbb{T}^2$ admit non-trivial circle actions. Any circle action on $S^2$ has two fixed points and any circle action on $\mathbb{T}^2$ is fixed point free; see Lemma \ref{l212}. We discuss classification results in dimension four.
Before we discuss our main result, let us discuss results for other types of manifolds and related results. The classification of holomorphic vector fields on complex surfaces was carried out by Carrell, Howard, and Kosniowski \cite{CHK}. For a 4-dimensional Hamiltonian $S^1$-space, subsequent to the work by Ahara and Hattori \cite{AH} and Audin \cite{Au}, Karshon classified such a space up to equivariant symplectomorphism, in terms of a multigraph associated to $M$ \cite{Ka}. Note that in Section \ref{s4}, we shall associate a multigraph to an oriented manifold $M$, and our notion of multigraphs generalizes the multigraphs for 4-dimensional Hamiltonian $S^1$-spaces. A multigraph determines the weights at the fixed points. In dimension 4, for a complex manifold or for a symplectic manifold, the weights at the fixed points determine the manifold uniquely. When the fixed point set is discrete, our main result generalizes the classification of weights at fixed points from complex and symplectic manifolds to oriented manifolds. However, being oriented is a very weak condition on a manifold, and therefore uniqueness fails to hold for oriented manifolds, since we can perform an equivariant sum of a manifold with another manifold that is fixed point free. Somewhat related are classifications of circle actions on 4-manifolds from different perspectives. For instance, circle actions on homotopy 4-spheres \cite{MY1}, \cite{MY2}, \cite{F1}, \cite{Pa} or on simply connected 4-manifolds \cite{F2}, \cite{Y} have been considered. In addition, Fintushel classified 4-dimensional oriented $S^1$-manifolds in terms of orbit data \cite{F3}. Third, there is another point of view on circle actions on oriented manifolds with discrete fixed point sets, namely Petrie's conjecture, which asserts that if a homotopy $\mathbb{CP}^n$ admits a non-trivial $S^1$-action, then its total Pontryagin class is the same as that of $\mathbb{CP}^n$ \cite{P}.
In other words, the existence of a non-trivial $S^1$-action is enough to determine the characteristic class of such a manifold. Petrie's conjecture has been proved to hold in dimensions up to 8 \cite{D}, \cite{Ja}. To state our classification result, we introduce some terminology. Let the circle act on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. Let $p$ be a fixed point. Then the tangent space at $p$ decomposes into $n$ two-dimensional irreducible $S^1$-equivariant real vector spaces \begin{center} $T_pM=\bigoplus_{i=1}^n L_i$. \end{center} Each $L_i$ is isomorphic to a one-dimensional $S^1$-equivariant complex space on which the action is given as multiplication by $g^{w_p^i}$, where $g\in S^1$ and $w_p^i$ is a non-zero integer. The $w_p^i$ are called \textbf{weights} at $p$. Though the sign of each weight is not well-defined, the sign of the product of the weights at $p$ is well-defined. We orient each $L_i$ so that every weight is positive. Let $\epsilon(p)=+1$ if the orientation given on $\bigoplus_{i=1}^n L_i$ this way agrees with the orientation on $T_pM$ and $\epsilon(p)=-1$ otherwise. We call $\epsilon(p)$ the \textbf{sign} of $p$. Denote the fixed point data at $p$ by $\Sigma_p=\{\epsilon(p), w_p^1, \cdots, w_p^n\}$. By the fixed point data $\Sigma_M$ of $M$, we mean the collection $\displaystyle \cup_{p \in M^{S^1}} \Sigma_p$ of the fixed point data at each fixed point $p$. To avoid possible confusion with weights, when we write the sign at $p$ inside $\Sigma_p$, we shall only write the sign of $\epsilon(p)$ and omit the 1. We give an example. Let the circle act on $S^{2n}$ by \begin{center}$g \cdot (z_1,\cdots,z_n,x)=(g^{a_1} z_1,\cdots, g^{a_n} z_n, x)$, \end{center} for any $g \in S^1 \subset \mathbb{C}$, where the $z_i$ are complex numbers and $x$ is a real number such that $\sum_{i=1}^n |z_i|^2+x^2=1$, and the $a_i$ are positive integers for $1 \leq i \leq n$. The action has two fixed points, $p=(0,\cdots,0,1)$ and $q=(0,\cdots,0,-1)$.
Near $p$, the action is described as $g \cdot (z_1,\cdots,z_n)=(g^{a_1} z_1,\cdots, g^{a_n} z_n)$. Therefore, the weights at $p$ are $\{a_1,\cdots,a_n\}$. Similarly, the weights at $q$ are $\{a_1,\cdots,a_n\}$. On the other hand, we have that $\epsilon(p)=-\epsilon(q)$; see Theorem \ref{t29}. The fixed point data of the circle action on $S^{2n}$ is therefore $\{+,a_1,\cdots,a_n\} \cup \{-,a_1,\cdots,a_n\}$. With the notion of weights, consider a circle action on a 4-dimensional compact oriented manifold $M$ and assume that the fixed point set is discrete. As we have seen, the classification of the fixed point data for oriented manifolds is in general harder than for complex or symplectic manifolds. To the author's knowledge, the fixed point data of an $S^1$-action on an oriented 4-manifold is known only if the number of fixed points is at most three; see \cite{L2} for the case of three fixed points. In this paper, we completely determine the fixed point data of $M$ with an arbitrary number of fixed points. Note that given a circle action on a manifold, we can always make the action effective by quotienting out by the subgroup $\mathbb{Z}_k$ that acts trivially. This amounts to dividing all the weights by $k$. We prove that for a circle action on a 4-dimensional oriented manifold with a discrete fixed point set, the fixed point data of the manifold can be achieved by simple combinatorics. A combinatorial format of the main result can be stated as follows. \begin{theorem} \label{t11} Let the circle act effectively on a 4-dimensional compact oriented manifold $M$ with a discrete fixed point set. Then the fixed point data of $M$ can be achieved in the following way: begin with the empty set, and apply a combination of the following steps. \begin{enumerate} \item Add $\{+,a,b\}$ and $\{-,a,b\}$, where $a$ and $b$ are relatively prime positive integers. \item Replace $\{+,c,d\}$ by $\{+,c,c+d\}$ and $\{+,d,c+d\}$.
\item Replace $\{-,e,f\}$ by $\{-,e,e+f\}$ and $\{-,f,e+f\}$. \end{enumerate}\end{theorem} For instance, if there are 2 fixed points, the only possibility is that Step (1) occurs exactly once, and hence the fixed point data is $\{+,a,b\}$ and $\{-,a,b\}$ for some positive integers $a$ and $b$. If there are 3 fixed points, Step (1) needs to occur once, and exactly one of Step (2) and Step (3) must occur once; if Step (2) occurs, the fixed point data is $\{+,a,a+b\} \cup \{+,b,a+b\} \cup \{-,a,b\}$, and if Step (3) occurs, the fixed point data is $\{+,a,b\} \cup \{-,a,a+b\} \cup \{-,b,a+b\}$ for some positive integers $a$ and $b$; see Theorem \ref{t71}. Moreover, we prove the existence part: given fixed point data achieved by the combinatorics as in Theorem \ref{t11}, we construct a manifold equipped with a circle action having a discrete fixed point set, with the desired fixed point data. In our construction, Step (1) in Theorem \ref{t11} corresponds to adding $S^4$ equipped with a rotation, and Step (2) and Step (3) each correspond to blowing up a fixed point. A geometric format of the main result, which proves the existence of such a manifold $M$ in Theorem \ref{t11}, can be stated as follows. \begin{theorem} \label{t12} Let the circle act on a 4-dimensional compact connected oriented manifold $M$ with a discrete fixed point set. Then there exists a 4-dimensional compact connected oriented manifold $M'$ that is an equivariant sum of blow-ups of rotations on $S^4$'s, such that $M$ and $M'$ have the same fixed point data. More precisely, $M'$ is an equivariant sum along free orbits of blow-ups of rotations of $S^4$'s, where $S^1$ acts on each $S^4$ by $g \cdot (z_1,z_2,x)=(g^az_1,g^bz_2,x)$ for any $g \in S^1 \subset \mathbb{C}$, for some positive integers $a$ and $b$, and the blow-up is in the sense of Lemma \ref{l51}.
\end{theorem} We adapt the notion of blow-up because we shall identify a neighborhood of a fixed point in $M$ with a neighborhood of $0$ in $\mathbb{C}^2$, where we have a complex structure and can blow up in the usual sense. We shall blow up equivariantly and only do so at a fixed point. Suppose that we blow up a fixed point $p$ whose fixed point data is $\{\pm,a,b\}$ for some positive integers $a$ and $b$. On the blown-up manifold $\widetilde{M}$, there is a naturally extended circle action. Moreover, instead of the fixed point $p$, there are two fixed points $p_1$ and $p_2$, whose fixed point data are $\{\pm,a,a+b\}$ and $\{\pm,b,a+b\}$, respectively. For details, see Section \ref{s5}. As a corollary of Theorem \ref{t11} and Theorem \ref{t12}, we classify the signature of a 4-dimensional compact oriented manifold equipped with a circle action having a discrete fixed point set. We give a proof in Section \ref{s7}. \begin{cor} \label{c13} Let the circle act on a 4-dimensional compact oriented manifold $M$ with $k$ fixed points. Then $k \geq 2$, and the signature of $M$ satisfies $2-k \leq \textrm{sign}(M) \leq k-2$ and $\textrm{sign}(M) \equiv k \mod 2$. Moreover, given any pair $(j,k)$ of integers $j$ and $k$ such that $k \geq 2$, $2-k \leq j \leq k-2$, and $j \equiv k \mod 2$, there exists a 4-dimensional compact connected oriented manifold $M$ equipped with a circle action having $k$ fixed points, whose signature satisfies $\textrm{sign}(M)=j$. \end{cor} The idea of the proof of Theorem \ref{t11} and Theorem \ref{t12} is as follows. Given a weight $w$, there exist two fixed points $p_1$ and $p_2$ that lie in the same connected component $Z$ of $M^{\mathbb{Z}_w}$, where $M^{\mathbb{Z}_w}$ denotes the set of points fixed by the $\mathbb{Z}_w$-action. Here $\mathbb{Z}_w$ acts on $M$ as a subgroup of $S^1$. We begin with the largest weight: suppose that $p_1$ and $p_2$ carry the largest weight. Then we prove that two cases occur.
In one case, we show that the fixed point data of $M$ can be obtained from another manifold $M'$ by blowing up at a fixed point, and in the other case we show that the fixed point data of $M$ can be obtained from an equivariant sum of another manifold $M''$ with $S^4$. In the former case, the fixed point data at $p_1$ and $p_2$ is achieved by blowing up a fixed point in $M'$. In the latter case, a rotation of $S^4$ has fixed points whose fixed point data is the same as that of $p_1$ and $p_2$. Therefore, these manifolds $M'$ and $M''$ have fewer fixed points and smaller largest weights. Now, on $M'$ or $M''$, pick the largest weight. The classification problem thus reduces to the existence of another manifold with fewer fixed points and with a smaller largest weight. We continue this process to reduce the classification of the fixed point data to the existence of a semi-free $S^1$-action with a discrete fixed point set, which does exist. Let us discuss what kinds of invariants can be determined from Theorem \ref{t11}. For this, suppose that there exists a 4-dimensional compact oriented manifold $M$ equipped with a circle action having a discrete fixed point set. Then we can perform an equivariant sum of $M$ with another manifold $M'$ that is fixed point free. The resulting manifold is in general not (equivariantly) diffeomorphic to $M$. Therefore, Theorem \ref{t11} cannot determine whether two manifolds with the same fixed point data are (equivariantly) diffeomorphic to each other or not. On the other hand, Theorem \ref{t11} does determine some invariants of a given manifold $M$: the signature, the Pontryagin number $\int_M p_1$, and the Euler characteristic of $M$. In particular, let $b_1$ and $b_2$ be the number of times Step (2) and Step (3) occur in Theorem \ref{t11}, respectively. Then the signature of $M$ is $b_1-b_2$.
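The construction in Theorem \ref{t11} is purely combinatorial, so it can be simulated directly. The following Python sketch (ours, not part of the paper; the function names are our own) implements the three steps and recovers the signature both as $b_1-b_2$ and, via equation (1), as $\sum_p \epsilon(p)$:

```python
# Simulate the combinatorics of Theorem 1.1: fixed point data is a list of
# triples (eps, a, b) with eps = +1 or -1 and positive integer weights a, b.
from math import gcd

def step1(data, a, b):
    """Step (1): add {+,a,b} and {-,a,b} for relatively prime positive a, b."""
    assert a > 0 and b > 0 and gcd(a, b) == 1
    return data + [(+1, a, b), (-1, a, b)]

def step2(data, i):
    """Step (2): replace {+,c,d} by {+,c,c+d} and {+,d,c+d} (a blow-up)."""
    eps, c, d = data[i]
    assert eps == +1
    return data[:i] + [(+1, c, c + d), (+1, d, c + d)] + data[i + 1:]

def step3(data, i):
    """Step (3): replace {-,e,f} by {-,e,e+f} and {-,f,e+f}."""
    eps, e, f = data[i]
    assert eps == -1
    return data[:i] + [(-1, e, e + f), (-1, f, e + f)] + data[i + 1:]

def signature(data):
    """Equation (1): sign(M) is the sum of the signs eps(p)."""
    return sum(eps for eps, _, _ in data)

# 2 fixed points: Step (1) once, giving {+,a,b} and {-,a,b}.
D2 = step1([], 2, 3)
# 3 fixed points: Step (1) once, then Step (2) once on the + point.
D3 = step2(D2, 0)
```

Each application of Step (2) raises the sum of signs by one and each application of Step (3) lowers it by one, so `signature(D3)` returns $1 = b_1 - b_2$, matching the discussion above.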
The signature of any circle action on a 4-dimensional almost complex manifold satisfies $|\textrm{sign}(M)| \leq |k-4|$ ($|\textrm{sign}(M)|=|k-4|$ if $M$ is complex or symplectic), where $k$ is the number of fixed points. We can see this in the following way: if $M$ is almost complex, then the sign of each weight at a fixed point is well-defined. For $0 \leq i \leq 2$, there exists at least one fixed point with $i$ negative weights. A fixed point with $i$ negative weights contributes $(-1)^i$ to the signature of $M$. On the other hand, if $M$ is complex or symplectic, then there is a unique fixed point with no negative weight and there is a unique fixed point with 2 negative weights. However, Corollary \ref{c13} shows that there exists a 4-dimensional oriented manifold equipped with a circle action having $k$ fixed points whose signature satisfies $|\textrm{sign}(M)|=k-2$. A natural question arising from Theorem \ref{t11} is whether an analogous result holds in higher dimensions. \begin{que} \label{q1} Let the circle act on a compact oriented manifold with a discrete fixed point set. Can we classify the fixed point data? Which fixed point data are possible? What can we say about the manifold? \end{que} More precisely, it is natural to ask if there exist minimal models (manifolds) and operations for classifying fixed point data. \begin{que} \label{q2} Let the circle act on a compact oriented manifold $M$ with a discrete fixed point set. What are the minimal models (manifolds) and operations needed to construct a manifold $M'$ with the same fixed point data as $M$? \end{que} As in dimension 4, spheres, the equivariant sum, and the blow-up operation are possible candidates for this problem. This paper is organized as follows. For the description, let the circle act on a compact oriented manifold $M$ with a discrete fixed point set. In Section \ref{s2}, we recall background, prove properties of weights, and derive results from these properties.
In Section \ref{s3}, we prove properties that weight representations of $M$ at the fixed points satisfy, in terms of isotropy submanifolds. One of these, Lemma \ref{l34}, plays a crucial role in the proof of Theorem \ref{t11} and Theorem \ref{t12}. In Section \ref{s4}, we associate a multigraph to $M$. In Section \ref{s5}, we discuss the blow-up operation, which also plays a key role in the proof. In Section \ref{s6}, with all of these, we prove Theorem \ref{t11} and Theorem \ref{t12} together. In Section \ref{s7}, we classify circle actions with few fixed points and prove Corollary \ref{c13} as applications of Theorem \ref{t11}. In Section \ref{s8}, as another application, we show that a certain kind of multigraph behaves like a manifold. The motivation of this paper is \cite{L2}, where Li classifies the weights at the fixed points of a circle action on a 4-dimensional compact orientable manifold with 3 fixed points. The author tried the case of 4 fixed points and found that a pattern emerges. The author therefore developed a theory to deal with any number of fixed points (and any dimension), and was able to classify the case of any number of fixed points. After writing this paper, the author was informed that circle actions on 4-dimensional manifolds were classified a long time ago (see \cite{F1}, \cite{F2}, \cite{F3}, and \cite{Y}), and that Theorem \ref{t11} of this paper can be extracted by some combinatorics from those papers; the weights at any fixed point are encoded in the Seifert invariants of the orbit space (see \cite{F3}). The author was also informed after writing this paper that the author's combinatorial technique (see the proof of the main theorems in Section \ref{s6}) was used by Pao \cite{Pa}, who only considers homotopy 4-spheres. However, our proof is different from those papers (their main tool is orbit data), and those results and techniques are restricted to dimension 4.
In this paper, we develop a theory that can be used to study circle actions in arbitrary dimension; see Sections \ref{s2}, \ref{s3}, \ref{s4}, and \ref{s5}. The author's combinatorial method follows from Lemma \ref{l34}, which we obtain as a corollary of Lemma \ref{l31}, which holds in arbitrary dimension. Similarly, the association of a multigraph to a manifold (Section \ref{s4}) can be used in any dimension. Finally, the author would like to thank the anonymous referee for valuable comments and suggestions. \section{Circle action on oriented manifold with discrete fixed point set} \label{s2} Let the circle act on a compact oriented manifold with a discrete fixed point set. In this section, we recall background, prove properties that the weights at the fixed points satisfy, and look at their by-products. For a compact oriented manifold $M$, one defines the signature operator on $M$. The Atiyah-Singer index theorem states that the topological index of the operator is equal to the analytical index of the operator. With the definitions of weights and the sign at fixed points in the Introduction, as an application to a compact oriented $S^1$-manifold with a discrete fixed point set, we have the following formula: \begin{theo} \emph{[Atiyah-Singer index theorem]} \cite{AS} \label{t21} Let the circle act on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. Then the signature of $M$ is \begin{center} $\displaystyle{\textrm{sign}(M) = \sum_{p \in M^{S^1}} \epsilon(p) \prod_{i=1}^{n} \frac{(1+t^{w_p^i})}{(1-t^{w_p^i})}}$ \end{center} and is a constant, where $t$ is an indeterminate. \end{theo} In particular, taking $t=0$ in Theorem \ref{t21}, we have \begin{equation} \label{eq:1} \displaystyle{\textrm{sign}(M) = \sum_{p \in M^{S^1}} \epsilon(p).} \end{equation} One of the important operations in the proof of Theorem \ref{t11} is the equivariant sum, which connects two $S^1$-manifolds equivariantly. Let us discuss the equivariant sum more precisely.
For $i=1,2$, let $M_i$ be a $2n$-dimensional compact connected oriented manifold equipped with an effective circle action and with a discrete fixed point set. Then for each $i$, there exists a free orbit in $M_i$. For each $i$, by the slice theorem, there exists a tubular neighborhood $S^1 \times D^{2n-1}$ in $M_i$ containing the free orbit. Using the tubular neighborhoods, we can connect $M_1$ and $M_2$ equivariantly to construct a new manifold $M$, which is compact, connected, oriented, and equipped with a circle action. Moreover, by reversing the orientations of the $M_i$ and gluing if necessary, $M$ has the fixed point data $\pm \Sigma_{M_1} \cup \pm \Sigma_{M_2}$, where $-\Sigma_{M_i}$ denotes the fixed point data of $M_i$ with the orientation reversed, i.e., if $\Sigma_{M_i}=\cup_{p \in M_i^{S^1}} \{\epsilon(p),w_p^1,\cdots,w_p^n\}$, then $-\Sigma_{M_i}=\cup_{p \in M_i^{S^1}} \{-\epsilon(p),w_p^1,\cdots,w_p^n\}$. For the complete proof, one may look at Lemma 2.2 and Proposition 2.4 of \cite{L2}. \begin{lem} \cite{L2} \label{l23} Let the circle act effectively on a $2n$-dimensional compact connected oriented manifold $M_i$ with a discrete fixed point set, for $i=1,2$, where $n>1$. Then we can construct a $2n$-dimensional compact connected oriented manifold $M$ equipped with a circle action, whose fixed point data is $\pm \Sigma_{M_1} \cup \pm \Sigma_{M_2}$. \end{lem} In \cite{K}, Kobayashi shows that the fixed point set of an $S^1$-action on a compact orientable manifold is also orientable. \begin{lem} \cite{K} \label{l24} Let the circle act on a compact orientable manifold $M$. Then its fixed point set is orientable. \end{lem} Let $w>1$ be a positive integer. Given an effective circle action on a compact oriented manifold $M$, the group $\mathbb{Z}_w$ acts on $M$, as a subgroup of $S^1$, and the set $M^{\mathbb{Z}_w}$ of points that are fixed by the $\mathbb{Z}_w$-action is a union of smaller dimensional submanifolds. In \cite{HH}, H. Herrera and R.
Herrera prove a result on the orientability of $M^{\mathbb{Z}_w}$. \begin{lem} \label{l25} \cite{HH} Let the circle act effectively on a $2n$-dimensional compact oriented manifold. Consider $\mathbb{Z}_w \subset S^1$ and its corresponding action on $M$. If $w$ is odd, then the fixed point set $M^{\mathbb{Z}_w}$ is orientable. If $w$ is even and a connected component $Z$ of $M^{\mathbb{Z}_w}$ contains a fixed point of the $S^1$-action, then $Z$ is orientable. \end{lem} We shall discuss consequences of Theorem \ref{t21}. \begin{lem} \label{l22} Let the circle act on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. Let $w$ be the smallest weight that occurs among all the fixed points. Then the number of times the weight $w$ occurs at the fixed points of sign $+1$, counted with multiplicity, is equal to the number of times the weight $w$ occurs at the fixed points of sign $-1$, counted with multiplicity. \end{lem} \begin{proof} By Theorem \ref{t21}, the signature of $M$ is \begin{center} $\displaystyle{\textrm{sign}(M) = \sum_{p \in M^{S^1}} \epsilon(p) \prod_{i=1}^{n} \frac{ (1+t^{w_p^i})}{(1-t^{w_p^i})} = \sum_{p \in M^{S^1}} \epsilon(p) \prod_{i=1}^{n} [ (1+t^{w_p^i}) ( \sum_{j=0}^{\infty} t^{j w_p^i} )].}$ \end{center} Let $w$ be the smallest (positive) weight. At each fixed point $p$, \begin{center} $\displaystyle \prod_{i=1}^{n} [(1+t^{w_p^i}) ( \sum_{j=0}^{\infty} t^{j w_p^i} )] = 1+ 2N_p(w)t^w + t^{w+1}f_p(t)$, \end{center} where $N_p(w)$ is the number of times the weight $w$ occurs at $p$ and $f_p(t)$ is a formal power series. Therefore, \begin{center} $\displaystyle \textrm{sign}(M) = \sum_{p \in M^{S^1}} \epsilon(p) [1+ 2N_p(w)t^w + t^{w+1}f_p(t)]$. \end{center} The signature of $M$ is independent of the indeterminate $t$ and is a constant. Collecting $t^w$-terms, we have \begin{center} $\sum_{p \in M^{S^1}} \epsilon(p) N_p(w)=0$.
\end{center} \end{proof} With Lemma \ref{l25}, the following lemma is obtained as an application of Lemma \ref{l22}. \begin{lem} \label{l26} Let the circle act on a compact oriented manifold with a discrete fixed point set. For each positive integer $w$, the number of times $w$ occurs as a weight among all the fixed points, counted with multiplicity, is even. \end{lem} \begin{proof} Consider the set $M^{\mathbb{Z}_w}$ of points that are fixed by the $\mathbb{Z}_w$-action, where $\mathbb{Z}_w$ acts on $M$ as a subgroup of $S^1$. Let $Z$ be a connected component of $M^{\mathbb{Z}_w}$ that contains an $S^1$-fixed point. By Lemma \ref{l25}, $Z$ is orientable. Fix an orientation of $Z$. The circle action on $M$ restricts to a circle action on $Z$. The circle action on $Z$ has $Z \cap M^{S^1}$ as its fixed point set. The smallest weight of the $S^1$-action on $Z$ is $w$. By applying Lemma \ref{l22} to the $S^1$-action on $Z$, the number of times the weight $w$ occurs at the fixed points of $Z \cap M^{S^1}$ is even. \end{proof} An immediate consequence of Lemma \ref{l26} is that if there is an odd number of fixed points, then the dimension of the manifold is a multiple of 4. \begin{cor} \label{c27} Let the circle act on a compact oriented manifold $M$. If the number of fixed points is odd, the dimension of the manifold is divisible by four. \end{cor} \begin{proof} Assume on the contrary that $\dim M=2n$ is not divisible by four. Then $n$ is odd. Let $k$ be the number of fixed points. Then the total number $nk$ of weights among all the fixed points, counted with multiplicity, is odd. On the other hand, by Lemma \ref{l26}, for each positive integer $w$, the number of times $w$ occurs as a weight among all the fixed points, counted with multiplicity, is even. In particular, the total number $nk$ of weights among all the fixed points, counted with multiplicity, must be even, which is a contradiction.
\end{proof} If there are two fixed points, then the fixed point data is classified. \begin{theorem} \label{t28} \cite{Kos2}, \cite{L2} Let the circle act on a compact oriented manifold with two fixed points $p$ and $q$. Then the weights at $p$ and $q$ are equal and $\epsilon(p)=-\epsilon(q)$. \end{theorem} The following result concerns the vanishing of the signature of a manifold equipped with a certain type of circle action. This will be used in the proof of Theorem \ref{t11}. \begin{theorem} \label{t29} Let the circle act on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. Suppose that the weights at each fixed point are $\{a_1,\cdots,a_n\}$ for some positive integers $a_1,\cdots,a_n$. Then the numbers of fixed points $p$ with $\epsilon(p)=+1$ and with $\epsilon(p)=-1$ are equal. In particular, the signature of $M$ vanishes. \end{theorem} \begin{proof} By Theorem \ref{t21}, the signature of $M$ is \begin{center} $\displaystyle{\textrm{sign}(M) = \sum_{p \in M^{S^1}} \epsilon(p) \prod_{i=1}^{n} \frac{(1+t^{w_p^i})}{(1-t^{w_p^i})}}=\sum_{p \in M^{S^1}} \epsilon(p) \prod_{i=1}^{n} \frac{(1+t^{a_i})}{(1-t^{a_i})}$. \end{center} Since the signature of $M$ is a constant, we must have $\textrm{sign}(M)=0$, and the numbers of fixed points $p$ with $\epsilon(p)=+1$ and with $\epsilon(p)=-1$ are equal. \end{proof} An $S^1$-action is called semi-free if the action is free outside the fixed point set. In \cite{TW}, Tolman and Weitsman classify symplectic semi-free circle actions with discrete fixed point sets. Li reproves this in \cite{L1}. The case of almost complex manifolds is dealt with in \cite{Jan3}. For a semi-free circle action, all the weights at the fixed points are 1. We classify semi-free actions on oriented manifolds with discrete fixed point sets, and solve the existence issue. \begin{theorem} \label{t210} Let the circle act semi-freely on a compact oriented manifold $M$ with a discrete fixed point set.
Then there is an even number of fixed points, and the signature of $M$ vanishes. Moreover, given any positive integer $k$, there exists a semi-free circle action on a compact oriented manifold $M$ with $2k$ fixed points. \end{theorem} \begin{proof} In Theorem \ref{t29}, take $a_i=1$ for all $i$. The numbers of fixed points $p$ with $\epsilon(p)=+1$ and with $\epsilon(p)=-1$ are equal. In particular, there is an even number of fixed points. Moreover, the signature of $M$ vanishes. For the latter part, consider a rotation of $S^{2n}$ as in the Introduction with $a_i=1$ for all $i$. It has two fixed points, the north pole and the south pole, and its fixed point data is $\{+,1,\cdots,1\}$ and $\{-,1,\cdots,1\}$; the action is semi-free. Take $k$ copies of such a manifold, and take an equivariant sum along free orbits of the manifolds in the sense of Lemma \ref{l23}. \end{proof} We quickly review the classification of $S^1$-actions on compact oriented surfaces. This will be used in the proof of Theorem \ref{t11}. Given a manifold $M$, let $\chi(M)$ be the Euler number of $M$. \begin{theo} \cite{K} \label{t211} Let the circle act on a compact oriented manifold $M$. Then \begin{center} $\displaystyle \chi(M)=\sum_{Z \subset M^{S^1}} \chi(Z)$. \end{center} \end{theo} The Euler number of a compact oriented surface $M$ is $2-2g$, where $g$ is the genus of $M$. The Euler number of a point is 1. Therefore, the following lemma holds. \begin{lem} \label{l212} Let $M$ be a compact connected oriented surface of genus $g$. \begin{enumerate} \item If $g=0$, i.e., $M$ is the 2-sphere $S^2$, then any circle action on it has two fixed points. \item If $g=1$, i.e., $M$ is the 2-torus $\mathbb{T}^2$, then any circle action on it is fixed point free. \item If $g>1$, then $M$ does not admit a non-trivial circle action. \end{enumerate} \end{lem} \begin{proof} Suppose that $M$ admits a non-trivial circle action.
Since $\dim M=2$, the fixed point set is either empty or a finite set of points. By Theorem \ref{t211}, \begin{center} $\displaystyle \chi(M)=\sum_{p \in M^{S^1}} 1$, \end{center} since the Euler number of a point is 1. This implies that $\chi(M) \geq 0$. On the other hand, $\chi(M)=2-2g$. This implies that $g=0$ or 1. If $g=0$, then $M$ is the 2-sphere and it has two fixed points, since $2=\chi(M)=\sum_{p \in M^{S^1}} 1$. If $g=1$, then $M$ is the 2-torus and it has no fixed points, since $0=\chi(M)=\sum_{p \in M^{S^1}} 1$. \end{proof} \section{Weight representations} \label{s3} In this section, we investigate properties that the weights at the fixed points satisfy, in terms of isotropy submanifolds. Our main goal in this section is Lemma \ref{l34}, which will play a crucial role in the proof of Theorem \ref{t11}. For this, we need to introduce some technical terminology. Let the circle act on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. For each $p \in M$, denote by $\alpha_p^M$ the orientation on $T_pM$ given by the orientation on $M$. Let $w$ be a positive integer. As a subgroup of $S^1$, $\mathbb{Z}_w$ acts on $M$. Let $M^{\mathbb{Z}_w}$ be the set of points fixed by the $\mathbb{Z}_w$-action and $Z$ a connected component of $M^{\mathbb{Z}_w}$. Assume that $Z$ contains an $S^1$-fixed point $p$, i.e., $Z \cap M^{S^1} \neq \emptyset$. By Lemma \ref{l25}, $Z$ is orientable. Choose an orientation of $Z$. Since $M$ is oriented, the normal bundle $NZ$ of $Z$ is also orientable. Take the orientation on $NZ$ so that the orientation of $T_pZ \bigoplus N_pZ$ is the orientation of $T_pM$. Denote by $\alpha_p^N$ and $\alpha_p^Z$ the orientations on $N_pZ$ and $T_pZ$, respectively. Let $p\in Z \cap M^{S^1}$ be an $S^1$-fixed point. As explained in the Introduction, the tangent space at $p$ decomposes into $n$ two-dimensional irreducible $S^1$-equivariant real vector spaces $L_1,\cdots,L_n$.
Without loss of generality, by permuting the $L_i$'s, assume that $L_1,\cdots,L_m$ are the summands of $N_pZ$ and $L_{m+1},\cdots,L_n$ are the summands of $T_pZ$, where $2(n-m)$ is the dimension of $Z$, i.e., \begin{center} $T_pM=L_1 \bigoplus \cdots \bigoplus L_n$, $N_pZ=L_1 \bigoplus \cdots \bigoplus L_m$, and $T_pZ=L_{m+1} \bigoplus \cdots \bigoplus L_n$. \end{center} For each $i$, $L_i$ is isomorphic to a one-dimensional $S^1$-equivariant complex space, on which the action is given by multiplication by $g^{w_p^i}$, where $g \in S^1$ and $w_p^i$ is a non-zero integer. As in the Introduction, for each $i$, give an orientation on $L_i$ so that $w_p^i$ is positive. The choice of the orientation on each $L_i$ that makes each weight $w_p^i$ positive induces orientations on $N_pZ$, $T_pZ$, and hence on $T_pM$. Denote these orientations by $\beta_p^N$, $\beta_p^Z$, and $\beta_p^M$, respectively. Finally, define three signs, which compare the two orientations constructed on each of $N_pZ$, $T_pZ$, and $T_pM$, in the following way: \begin{Definition} \label{d31} \begin{enumerate}[(1)] \item $\epsilon_p^N=+1$ if the two orientations $\alpha_p^N$ and $\beta_p^N$ on the normal bundle $N_pZ$ of $Z$ at $p$ agree and $\epsilon_p^N=-1$ otherwise. \item $\epsilon_p^Z=+1$ if the two orientations $\alpha_p^Z$ and $\beta_p^Z$ on the tangent space $T_pZ$ of $Z$ at $p$ agree and $\epsilon_p^Z=-1$ otherwise. \item $\epsilon_p^M=+1$ if the two orientations $\alpha_p^M$ and $\beta_p^M$ on the tangent space $T_pM$ of $M$ at $p$ agree and $\epsilon_p^M=-1$ otherwise. \end{enumerate} \end{Definition} By definition, $\epsilon(p)=\epsilon_p^M$, where $\epsilon(p)$ is the sign of $p$ introduced in the Introduction. Moreover, it follows that $\epsilon_p^M=\epsilon_p^Z \cdot \epsilon_p^N$.
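The multiplicativity $\epsilon_p^M=\epsilon_p^Z \cdot \epsilon_p^N$ is a general fact about comparing orientations of a direct sum. The following Python sketch (an illustrative toy linear model, not part of the argument; orientations are encoded as ordered bases in ambient coordinates and compared by determinant signs) checks it in $\mathbb{R}^4=\mathbb{R}^2\oplus\mathbb{R}^2$:

```python
# Toy model: an orientation of R^k is an ordered basis; two bases give the
# same orientation iff their determinants (in ambient coordinates) have the
# same sign. We check eps_M = eps_Z * eps_N for T_pM = T_pZ + N_pZ.
from itertools import permutations

def det(M):
    # Leibniz formula; fine for the tiny matrices used here.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        term = (-1) ** inversions
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

def cmp_orient(A, B):
    """+1 if the bases A and B give the same orientation, -1 otherwise."""
    return 1 if det(A) * det(B) > 0 else -1

def block_diag(A, B):
    n, m = len(A), len(B)
    return [[A[i][j] if i < n and j < n else
             (B[i - n][j - n] if i >= n and j >= n else 0)
             for j in range(n + m)] for i in range(n + m)]

std = [[1, 0], [0, 1]]
flip = [[0, 1], [1, 0]]          # the opposite orientation of R^2

alpha_Z, alpha_N = std, std      # alpha-orientations on T_pZ and N_pZ
beta_Z, beta_N = std, flip       # beta-orientations; the normal one reversed

eps_Z = cmp_orient(alpha_Z, beta_Z)
eps_N = cmp_orient(alpha_N, beta_N)
eps_M = cmp_orient(block_diag(alpha_Z, alpha_N), block_diag(beta_Z, beta_N))
assert eps_M == eps_Z * eps_N == -1
```
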
The next lemma states that if two $S^1$-fixed points $p$ and $q$ lie in the same connected component of $M^{\mathbb{Z}_w}$ for some positive integer $w$, then the weights at $p$ and the weights at $q$ are closely related. \begin{lemma} \label{l31} Let the circle act on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. Fix a positive integer $w$. Let $\{p,q\} \subset Z \cap M^{S^1}$, where $Z$ is a connected component of $M^{\mathbb{Z}_w}$ such that $\dim Z=2n-2m$. Fix an orientation of $Z$. Rearrange the weights at $p$ so that $\{w_p^1,\cdots,w_p^m\}$ are the weights on the normal bundle $N_pZ$ of $Z$ at $p$ and $\{w_p^{m+1},\cdots,w_p^n\}$ are the weights on the tangent space $T_pZ$, and similarly for $q$. Then there exist a permutation $\sigma \in S_m$ and $\epsilon \in \{-1,1\}^m$ such that $w_p^i \equiv \epsilon(i)w_q^{\sigma(i)} \mod w$ for any $1\leq i \leq m$ and $\epsilon_p^N=\epsilon_q^N \cdot (-1)^{\epsilon_{p,q}^-}$, where $\epsilon_{p,q}^-$ denotes the number of $i$'s such that $\epsilon(i)=-1$. \end{lemma} \begin{proof} Since $TZ$ is oriented, $NZ$ is an oriented $\mathbb{Z}_w$-bundle over $Z$. Therefore, the $\mathbb{Z}_w$-representations of $N_pZ$ and $N_qZ$ are isomorphic, since $Z$ is connected. These representations are given by $\{\epsilon_p^N \cdot w_p^1, w_p^2, \cdots, w_p^m\} \mod w$ and $\{\epsilon_q^N \cdot w_q^1, w_q^2, \cdots, w_q^m\} \mod w$. Since they are isomorphic, there exist a permutation $\sigma \in S_m$ and $\epsilon \in \{-1,1\}^m$ such that $w_p^i \equiv \epsilon(i) w_q^{\sigma(i)} \mod w$ for each $i$. Moreover, $\epsilon_p^N=\epsilon_q^N \cdot (-1)^{\epsilon_{p,q}^-}$. \end{proof} We illustrate Lemma \ref{l31} with an example. \begin{exa} Let $S^1$ act on $\mathbb{CP}^4$ by \begin{center} $g \cdot [z_0:z_1:z_2:z_3:z_4]=[z_0:g z_1:g^2 z_2:g^3 z_3:g^4 z_4]$, \end{center} where $\mathbb{CP}^4$ is equipped with the standard orientation coming from the standard complex structure.
The action has 5 fixed points, $p_0=[1:0:0:0:0]$, $p_1=[0:1:0:0:0]$, $p_2=[0:0:1:0:0]$, $p_3=[0:0:0:1:0]$, and $p_4=[0:0:0:0:1]$. Near $p_0$, $z_0 \neq 0$ and hence $\displaystyle \frac{z_i}{z_0}$ with $1 \leq i \leq 4$ are local coordinates near $p_0$. Therefore, the local action of $S^1$ near $p_0$ is given by \begin{center} $\displaystyle g \cdot (\frac{z_1}{z_0},\frac{z_2}{z_0},\frac{z_3}{z_0},\frac{z_4}{z_0})=(\frac{g \cdot z_1}{z_0},\frac{g^2 \cdot z_2}{z_0},\frac{g^3 \cdot z_3}{z_0},\frac{g^4 \cdot z_4}{z_0})=(g\cdot\frac{z_1}{z_0},g^2 \cdot \frac{z_2}{z_0},g^3 \cdot \frac{z_3}{z_0},g^4 \cdot \frac{z_4}{z_0})$. \end{center} Therefore, the weights at $p_0$ as complex $S^1$-representations are $\{1,2,3,4\}$. Since all the weights are positive, as real $S^1$-representations, $\epsilon(p_0)=+1$ and the fixed point data at $p_0$ is $\{+,1,2,3,4\}$. Similarly, the weights at $p_3$ as complex $S^1$-representations are $\{-3,-2,-1,1\}$. Since there are 3 negative weights, as real $S^1$-representations, $\epsilon(p_3)=-1$ and the fixed point data at $p_3$ is $\{-,1,1,2,3\}$. Now, the group $\mathbb{Z}_3$ acts on $\mathbb{CP}^4$, and one of the connected components $Z$ of the set $M^{\mathbb{Z}_3}$ of points fixed by the $\mathbb{Z}_3$-action is $\mathbb{CP}^1=S^2$, which contains $p_0$ and $p_3$. Equip $Z$ with the standard orientation from $\mathbb{CP}^4$. Since the weight of $T_{p_0}Z$ ($T_{p_3}Z$) as a complex representation is $3$ ($-3$), we have $\epsilon_{p_0}^Z=+1$ ($\epsilon_{p_3}^Z=-1$, respectively). Since $\epsilon(p_0)=+1$ ($\epsilon(p_3)=-1$) and $\epsilon_{p_0}^Z=+1$ ($\epsilon_{p_3}^Z=-1$), and $\epsilon(p_0)=\epsilon_{p_0}^Z \cdot \epsilon_{p_0}^N$ ($\epsilon(p_3)=\epsilon_{p_3}^Z \cdot \epsilon_{p_3}^N$), we have that $\epsilon_{p_0}^N=+1$ ($\epsilon_{p_3}^N=+1$, respectively).
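The congruence bookkeeping of Lemma \ref{l31} for the pair $(p_0,p_3)$ can be checked mechanically. The following Python sketch (an illustrative aid, not part of the example; it uses the weights $\{1,2,4\}$ of $N_{p_0}Z$ and $\{1,1,2\}$ of $N_{p_3}Z$, read off from the data above) matches the weights up to sign modulo $w=3$ and verifies the sign relation:

```python
# Illustrative check of Lemma l31 in the CP^4 example (not part of the proof).
w = 3                     # we restrict to the Z_3-action
N_p0 = [1, 2, 4]          # weights of N_{p_0}Z
N_p3 = [1, 1, 2]          # weights of N_{p_3}Z

# Match each weight at p_0 with a weight at p_3 up to sign modulo w,
# recording epsilon(i) = +1 or -1 for each matched pair.
remaining = list(N_p3)
signs = []
for a in N_p0:
    for b in remaining:
        if (a - b) % w == 0:
            signs.append(+1)
            remaining.remove(b)
            break
        if (a + b) % w == 0:
            signs.append(-1)
            remaining.remove(b)
            break

eps_minus = signs.count(-1)   # this is epsilon_{p_0,p_3}^- of Lemma l31
assert eps_minus == 2         # the pairings 1~1, 2~-1, 4~-2
eps_N_p0 = eps_N_p3 = +1      # the signs computed in the example
assert eps_N_p0 == eps_N_p3 * (-1) ** eps_minus
```
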
Alternatively, since the weights at $N_{p_0}Z$ ($N_{p_3}Z$) as complex representations are $\{1,2,4\}$ ($\{-2,-1,1\}$), we have $\epsilon_{p_0}^N=+1$ ($\epsilon_{p_3}^N=+1$, respectively). To sum up, we have \begin{enumerate} \item $\Sigma_{p_0}=\{+,1,2,3,4\}$, $\epsilon_{p_0}^Z=+1$, $\epsilon_{p_0}^N=+1$, and the weights of $N_{p_0}Z$ are $\{1,2,4\}$. \item $\Sigma_{p_3}=\{-,1,1,2,3\}$, $\epsilon_{p_3}^Z=-1$, $\epsilon_{p_3}^N=+1$, and the weights of $N_{p_3}Z$ are $\{1,1,2\}$. \end{enumerate} The weights $\{1,2,4\}$ of $N_{p_0}Z$ and the weights $\{1,1,2\}$ of $N_{p_3}Z$ are equal modulo 3 up to sign; \begin{center} $1 \equiv 1 \mod 3, 2 \equiv -1 \mod 3$, and $4 \equiv -2 \mod 3$. \end{center} For the last two pairs, a weight at $p_0$ with positive sign is paired with a weight at $p_3$ with negative sign. Therefore, the quantity $\epsilon_{p_0,p_3}^-$ of Lemma \ref{l31} is equal to 2. By Lemma \ref{l31}, we must have $1=\epsilon_{p_0}^N=\epsilon_{p_3}^N \cdot (-1)^{\epsilon_{p_0,p_3}^-}=1 \cdot (-1)^2=1$, and this computation confirms Lemma \ref{l31} in this example. \end{exa} We shall discuss applications of Lemma \ref{l31}. \begin{lemma} \label{l32} Let the circle act on a compact oriented manifold $M$ with a discrete fixed point set. Let $w$ be a positive integer. Suppose that no multiples of $w$ occur as weights, other than $w$ itself. Then for any connected component $Z$ of $M^{\mathbb{Z}_w}$, the number of $S^1$-fixed points $p$ in $Z$ with $\epsilon_p^Z=+1$ equals the number of those with $\epsilon_p^Z=-1$. Moreover, we can pair points in $Z\cap M^{S^1}$ such that \begin{enumerate} \item If $(p,q)$ is a pair, then $\epsilon_p^Z=-\epsilon_q^Z$. \item If $\{w_p^1,\cdots,w_p^m\}$ and $\{w_q^1,\cdots,w_q^m\}$ are the weights of $N_pZ$ and $N_qZ$, then there exist a bijection $\sigma_m:\{1,\cdots,m\}\rightarrow\{1,\cdots,m\}$ and $\epsilon \in \{-1,1\}^m$ such that $w_p^i \equiv \epsilon(i) w_q^{\sigma_m(i)} \mod w$ for any $1 \leq i \leq m$.
Moreover, $\epsilon_p^M=\epsilon_q^M \cdot (-1)^{\epsilon_{p,q}^-+1}$, where $\epsilon_{p,q}^-$ is the number of $i$'s with $\epsilon(i)=-1$. \end{enumerate} \end{lemma} \begin{proof} Consider the induced action of $S^1/\mathbb{Z}_w=S^1$ on $Z$. Since no multiples of $w$ occur as weights, every weight on the tangent space $T_pZ$ of $Z$ at a point $p$ that is fixed by the induced $S^1$-action is $1$. Apply Theorem \ref{t29} with $a_i=1$ to the induced $S^1$-action on $Z$. It follows that the number of fixed points $p$ with $\epsilon_p^Z=+1$ equals the number of those with $\epsilon_p^Z=-1$. Therefore, we can pair points in $Z\cap M^{S^1}$ so that if $(p,q)$ is a pair, then $\epsilon_p^Z=-\epsilon_q^Z$. By Lemma \ref{l31}, $\epsilon_p^N=\epsilon_q^N \cdot (-1)^{\epsilon_{p,q}^-}$. With $\epsilon_p^M=\epsilon_p^Z \cdot \epsilon_p^N$ and $\epsilon_q^M=\epsilon_q^Z \cdot \epsilon_q^N$, the lemma follows from Lemma \ref{l31}. \end{proof} \begin{lemma} \label{l33} Let the circle act effectively on a 4-dimensional compact oriented manifold $M$ with a discrete fixed point set. Suppose that a fixed point $p$ has weights $\{a,w\}$, where $w>1$. Then there exists a unique fixed point $q$ such that $\{p,q\}\subset S^2 \subset M^{\mathbb{Z}_w}$. Let $b$ be the remaining weight at $q$. \begin{enumerate} \item If $\epsilon(p)=\epsilon(q)$, then $a \equiv -b \mod w$. \item If $\epsilon(p)=-\epsilon(q)$, then $a \equiv b \mod w$. \end{enumerate} \end{lemma} \begin{proof} Since $w>1$ and the action is effective, if $Z$ is a connected component of $M^{\mathbb{Z}_w}$ that contains $p$, then $\dim Z=2$. The induced action of $S^1/\mathbb{Z}_w=S^1$ on $Z$ has $p$ as a fixed point. By Lemma \ref{l212}, it follows that $Z$ is the 2-sphere $S^2$ and it contains precisely two fixed points $p,q$. By applying Theorem \ref{t28} to the induced $S^1$-action on $Z$, we have $\epsilon_p^Z=-\epsilon_q^Z$. First, suppose that $\epsilon(p)=\epsilon(q)$.
Then by Lemma \ref{l32}, since $\epsilon_p^M=\epsilon_q^M \cdot (-1)^{\epsilon_{p,q}^-+1}$, we have that $\epsilon_{p,q}^-=1$. Since $a$ and $b$ are the weights of the normal bundle of $Z$ at $p$ and $q$, respectively, by Lemma \ref{l32}, we have that $a \equiv -b \mod w$. Second, suppose that $\epsilon(p)=-\epsilon(q)$. Then by Lemma \ref{l32}, since $\epsilon_p^M=\epsilon_q^M \cdot (-1)^{\epsilon_{p,q}^-+1}$, we have that $\epsilon_{p,q}^-=0$. Therefore, we have that $a \equiv b \mod w$. \end{proof} Let us consider the biggest weight among the weights over all the fixed points. If it is strictly bigger than 1, then Lemma \ref{l33} has the following consequence that we will use in the proof of Theorem \ref{t11} and Theorem \ref{t12} in Section \ref{s6}. \begin{lemma} \label{l34} Let the circle act effectively on a 4-dimensional compact oriented manifold $M$ with a discrete fixed point set. Assume that the biggest weight $w$ is bigger than 1. If a fixed point $p$ has weight $w$, then there exists a unique fixed point $q$ such that $\{p,q\} \subset S^2 \subset M^{\mathbb{Z}_w}$. Moreover, if $a$ and $b$ are the remaining weights at $p$ and $q$, respectively, then the following holds: \begin{enumerate} \item If $\epsilon(p)=\epsilon(q)$, then $a+b=w$. \item If $\epsilon(p)=-\epsilon(q)$, then $a=b$. \end{enumerate} \end{lemma} \begin{proof} Since the action is effective and $w>1$ is the biggest weight, $a<w$ and $b<w$. Therefore, by Lemma \ref{l33}, the lemma follows. \end{proof} \section{Multigraphs} \label{s4} In this section, we associate a multigraph to a compact oriented manifold equipped with a circle action having a discrete fixed point set. 
In particular, we assign one with the following properties: each fixed point is a vertex; each vertex has exactly $n$ edges, where $2n$ is the dimension of the manifold; each edge is labelled by a positive integer so that the labels of the edges at a vertex are the weights at the corresponding fixed point; each vertex has a sign; and there are no self-loops. If a fixed point $p$ has weight $w$, then there exists another fixed point $q$ that has weight $w$. Therefore, we can draw an edge $e$ between $p$ and $q$ and assign the label $w$ to the edge $e$. \begin{Definition} \label{d41} A \textbf{multigraph} $\Gamma$ is an ordered pair $\Gamma=(V,E)$, where $V$ is a set of vertices and $E$ is a multiset of unordered pairs of vertices, called \textbf{edges}. A multigraph is called \textbf{signed} if every vertex has sign $+$ or $-$. A multigraph is called \textbf{labelled} if every edge $e$ is labelled by a positive integer $w(e)$, called the \textbf{label}, or the \textbf{weight}, of the edge, i.e., there exists a map from $E$ to the set of positive integers. Let $\Gamma$ be a labelled multigraph. The \textbf{weights} at a vertex $v$ consist of the labels (weights) $w(e)$ of the edges $e$ at $v$. A multigraph $\Gamma$ is called \textbf{$n$-regular} if every vertex has $n$ edges. \end{Definition} The following proposition shows that given a circle action on a compact oriented manifold with a discrete fixed point set, we can assign a signed, labelled multigraph that does not have any loops. \begin{pro} \label{p42} Let the circle act effectively on a $2n$-dimensional compact oriented manifold $M$ with a discrete fixed point set. Then there exists a signed, labelled, $n$-regular multigraph $\Gamma$ associated to $M$ with the following properties. \begin{enumerate} \item The set $V$ of vertices is the set $M^{S^1}$ of the fixed points. \item The labels of the edges at a vertex $p$ are the weights at the corresponding fixed point $p$.
\item The sign $\epsilon(p)$ of a vertex $p$ is the sign $\epsilon(p)$ of the corresponding fixed point $p$. \item If there is an edge $e$ between two vertices $p_1$ and $p_2$ with label $w$, then the corresponding fixed points $p_1$ and $p_2$ lie in the same connected component $Z$ of $M^{\mathbb{Z}_w}$. \item The multigraph $\Gamma$ does not have any loops. \end{enumerate} \end{pro} \begin{proof} Assign each fixed point $p$ a vertex and also denote it by $p$. To each vertex $p$, assign a sign $\epsilon(p)$. These prove (1) and (3). Let $w$ be a positive integer and $Z$ a connected component of $M^{\mathbb{Z}_w}$, the set of points in $M$ that are fixed by the $\mathbb{Z}_w$-action. Assume that $Z$ contains an $S^1$-fixed point $p$. Then by Lemma \ref{l25}, $Z$ is orientable. Pick an orientation on $Z$. The $S^1$-action on $M$ restricts to an $S^1$-action on $Z$. Moreover, the smallest weight of the $S^1$-action on $Z$ is $w$. Applying Lemma \ref{l26} to the $S^1$-action on $Z$, it follows that the number of times the weight $w$ occurs at points fixed by the $S^1$-action on $Z$ with $\epsilon_p^Z=+1$ equals the number of times it occurs at points with $\epsilon_p^Z=-1$. Therefore, by this recipe, if a fixed point $p_1$ in $Z$ with $\epsilon_{p_1}^Z=+1$ has weight $w$ and a fixed point $p_2$ in $Z$ with $\epsilon_{p_2}^Z=-1$ has weight $w$, then we can draw an edge $e$ between the vertices $p_1$ and $p_2$, and assign the label $w$ to $e$. At each fixed point, each weight is used to draw exactly one edge. Repeat this for each positive integer $w$ and each connected component of the set $M^{\mathbb{Z}_w}$. This proves (2), (4), and (5), and that the multigraph is $n$-regular; hence the proposition follows. \end{proof} \section{Blow-up} \label{s5} Another key ingredient to prove Theorem \ref{t11} and Theorem \ref{t12} is a blow-up type operation. Blowing up the origin 0 in $\mathbb{C}^n$ is an operation that replaces the point 0 by the set of all straight complex lines through it.
In this section, we shall introduce a blow-up type operation for an isolated fixed point of a circle action on a 4-dimensional oriented manifold by identifying a neighborhood of the fixed point with a neighborhood of the origin $0$ in $\mathbb{C}^2$. For this, let the circle act on a 4-dimensional oriented manifold $M$ and let $p$ be an isolated fixed point. Then we can identify a neighborhood $U$ of $p$ with a neighborhood $V$ of $0$ in $\mathbb{C}^2$, where the local action of $S^1$ near $p$ can be identified with a circle action near $0$ in $\mathbb{C}^2$ by \begin{center} $g \cdot (z_1,z_2)=(g^{-a}z_1,g^b z_2)$, \end{center} for some positive integers $a$ and $b$ that are the weights at $p$. We can choose the signs by reversing the orientation of $M$ if necessary. Now let us blow up $0$ in $\mathbb{C}^2$ in the usual sense. Since the neighborhood $U$ of $p$ is identified with the neighborhood $V$ of $0$ in $\mathbb{C}^2$, we make the corresponding change on the neighborhood $U$ of $p$. We shall call this procedure \textbf{blow up}, since what it does geometrically is the same as the blow-up in complex geometry. We blow up equivariantly, so that there is a natural extended circle action on the blown up manifold. In this case, the equivariant blow up replaces the fixed point $p$ by $\mathbb{CP}^1=S^2$, and the extended $S^1$-action on the $\mathbb{CP}^1$ has two fixed points $p_1$ and $p_2$, whose fixed point data are $\{\epsilon(p),a,a+b\}$ and $\{\epsilon(p),b,a+b\}$, respectively. In other words, from $M$, by the blow up at $p$, we can construct a new manifold $M'$ equipped with a circle action whose fixed point data is the same as that of $M$ with $\{\epsilon(p),a,b\}$ replaced by $\{\epsilon(p),a,a+b\}\cup\{\epsilon(p),b,a+b\}$. To state our technical lemma, we need the following theorem, which is an application of the equivariant tubular neighborhood theorem (the slice theorem) to a fixed point of a group action on a manifold.
\begin{theo} (The Local Linearization Theorem) \cite{GGK} \label{t51} Let a compact Lie group $G$ act on a manifold $M$ and let $p \in M^G$ be a fixed point. Then there exists a $G$-equivariant diffeomorphism from a neighborhood of the origin in $T_pM$ onto a neighborhood of $p$ in $M$. \end{theo} With Theorem \ref{t51}, we are ready to state our technical lemma, which allows us to blow up a fixed point of an $S^1$-action. \begin{lemma} \label{l51} Let the circle act on a 4-dimensional compact connected oriented manifold $M$ with a discrete fixed point set. \begin{enumerate}[(1)] \item Suppose that a fixed point $p$ has fixed point data $\{-,a,b\}$ for some positive integers $a$ and $b$. Then we can construct a 4-dimensional compact connected oriented manifold $\widetilde{M}$ equipped with a circle action such that the fixed point data of $\widetilde{M}$ is $(\Sigma_M \setminus \{-,a,b\}) \cup \{-,a,a+b\} \cup \{-,b,a+b\}$. \item Suppose that a fixed point $p$ has fixed point data $\{+,a,b\}$ for some positive integers $a$ and $b$. Then we can construct a 4-dimensional compact connected oriented manifold $\widetilde{M}$ equipped with a circle action such that the fixed point data of $\widetilde{M}$ is $(\Sigma_M \setminus \{+,a,b\}) \cup \{+,a,a+b\} \cup \{+,b,a+b\}$. \end{enumerate} \end{lemma} \begin{proof} Assume Case (1), i.e., $\epsilon(p)=-1$. By Theorem \ref{t51}, a neighborhood of $p$ in $M$ is $S^1$-equivariantly diffeomorphic to a neighborhood of $0$ in $T_pM$. The tangent space at $p$ decomposes into 2 two-dimensional irreducible $S^1$-equivariant real vector spaces $L_1$ and $L_2$, on each of which the action is given by multiplication by $g^a$ and $g^b$ for any $g \in S^1$, respectively. Since $\epsilon(p)=-1$, this implies that $T_pM$ is isomorphic to $\mathbb{C}^2$, on which each $g \in S^1 \subset \mathbb{C}$ acts on $\mathbb{C}^2$ by either $g \cdot (z_1,z_2)=(g^{-a}z_1,g^bz_2)$ or $g \cdot (z_1,z_2)=(g^{a}z_1,g^{-b}z_2)$.
By changing the orientations of both copies of $\mathbb{C}$'s in the latter case, we may assume that a neighborhood $U$ of $p$ is equivariantly diffeomorphic to a neighborhood $V$ of $0$ in $\mathbb{C}^2$ on which each $g \in S^1 \subset \mathbb{C}$ acts by \begin{center} $g \cdot (z_1,z_2)=(g^{-a}z_1,g^bz_2)$. \end{center} Denote the orientation preserving equivariant diffeomorphism by $\phi$. Next, we blow up the origin $0$ in $\mathbb{C}^2$ that corresponds to $p$. Call the blown up space $\widetilde{V}$. The blow up replaces 0 in $V$ by the set of all straight lines through it. The blown up space $\widetilde{V}$ is described as \begin{center} $\widetilde{V}=\{(z,l)|z \in l\}\subset V \times \mathbb{CP}^1$. \end{center} The space can also be described by the equation \begin{center} $\widetilde{V}=\{((z_1,z_2),[w_1:w_2])|(z_1,z_2) \in V, w_1z_2-w_2z_1=0\}$. \end{center} With this description, the action of $S^1$ on $V$ extends to act on $\widetilde{V}$ by \begin{center} $g \cdot ((z_1,z_2),[w_1:w_2])=((g^{-a}z_1,g^b z_2), [g^{-a}w_1:g^b w_2])$. \end{center} The extended action of $S^1$ on $\widetilde{V}$ has two fixed points $p_1=((0,0),[1:0])$ and $p_2=((0,0),[0:1])$. Now, we compute the fixed point data at $p_1$ and $p_2$. Consider $p_1$. Let $u=\frac{w_2}{w_1}$. Since $z_2=z_1\frac{w_2}{w_1}=z_1 u$, $z_1$ and $u$ become local coordinates near $p_1=((0,0),[1:0])$. Now, the extended action of $S^1$ near $p_1$ is given by \begin{center} $\displaystyle g\cdot(z_1,u)=g\cdot(z_1,\frac{w_2}{w_1})=(g^{-a}z_1,\frac{g^bw_2}{g^{-a}w_1})=(g^{-a}z_1,g^{a+b}u)$. \end{center} Hence, as complex $S^1$-representations, the weights at $p_1$ are $\{-a,a+b\}$. As real $S^1$-representations, the local fixed point data at $p_1$ is therefore $\{-,a,a+b\}$. Next, consider $p_2$. Let $v=\frac{w_1}{w_2}$. In this case $z_2$ and $v$ become local coordinates near $p_2$. 
The extended action of $S^1$ near $p_2$ is given by \begin{center} $\displaystyle g\cdot(z_2,v)=g\cdot(z_2,\frac{w_1}{w_2})=(g^bz_2,\frac{g^{-a}w_1}{g^bw_2})=(g^bz_2,g^{-a-b}v)$. \end{center} The weights at $p_2$ as complex $S^1$-representations are $\{b,-a-b\}$ and hence as real $S^1$-representations, the local fixed point data at $p_2$ is $\{-,b,a+b\}$. Note that $\phi$ is a diffeomorphism from $U \setminus \{p\}$ to $\widetilde{V} \setminus E$, where $E$ is the exceptional divisor. Consider the manifold $\widetilde{M}=((M\setminus\{p\})\sqcup \widetilde{V})/\phi$. The extended action of $S^1$ on the manifold $\widetilde{M}$ has the fixed point set $(M^{S^1}\setminus \{p\}) \cup \{p_1,p_2\}$ and hence the fixed point data $(\Sigma_M \setminus \{-,a,b\}) \cup \{-,a,a+b\} \cup \{-,b,a+b\}$. This proves the first part. Assume Case (2), i.e., $\epsilon(p)=1$. In this case, reverse the orientation of $M$. This reverses the sign of $\epsilon(q)$ for each fixed point $q$. In particular, $\epsilon(p)=-1$. Now, proceed as in the first case; we blow up the fixed point $p$ to replace the fixed point $p$ with $\mathbb{CP}^1$, on which we have an extended $S^1$-action that has two fixed points $p_1$ and $p_2$. The fixed points $p_1$ and $p_2$ have fixed point data $\{-,a,a+b\}$ and $\{-,b,a+b\}$, respectively. We reverse the orientation of $M$ back to its original orientation and this completes the proof. \end{proof} \section{Proof of the main result: Theorem \ref{t11} and Theorem \ref{t12}} \label{s6} We are ready to prove Theorem \ref{t11} and Theorem \ref{t12}. \begin{proof} [Proof of Theorem \ref{t11} and Theorem \ref{t12}] Associate to $M$ a signed, labelled, 2-regular multigraph $\Gamma$ without any loop as in Proposition \ref{p42}. We begin with the biggest label (weight) of an edge among all the edges of the multigraph $\Gamma$. Let $e$ be an edge whose weight $w$ is the biggest among all the weights of the edges of the multigraph $\Gamma$. Assume that $w>1$. 
By (4) of Proposition \ref{p42}, the vertices (fixed points) $p_1$ and $p_2$ of the edge $e$ lie in the same connected component $Z$ of $M^{\mathbb{Z}_w}$. Let $a$ and $b$ be the remaining weights at $p_1$ and $p_2$, respectively. By Lemma \ref{l34}, $Z=S^2$, and either \begin{enumerate} \item $\epsilon(p_1)=\epsilon(p_2)$ and $a+b=w$, or \item $\epsilon(p_1)=-\epsilon(p_2)$ and $a=b<w$. \end{enumerate} First, suppose that Case (1) holds; $\epsilon(p_1)=\epsilon(p_2)$ and $a+b=w$, i.e., $\Sigma_{p_1}=\{\epsilon(p_1),a,a+b\}$ and $\Sigma_{p_2}=\{\epsilon(p_1),b,a+b\}$. In this case, if we can construct a 4-dimensional compact connected oriented manifold $M'$, equipped with a circle action whose fixed point data is $(\Sigma_M \setminus (\{\epsilon(p_1),a,a+b\} \cup \{\epsilon(p_2),b,a+b\})) \cup \{\epsilon(p_1),a,b\}$, then by Lemma \ref{l51}, we can blow up the fixed point whose fixed point data is $\{\epsilon(p_1),a,b\}$ to construct a 4-dimensional compact connected oriented manifold $M''$ equipped with a circle action whose fixed point data $\Sigma_{M''}$ is the same as $\Sigma_M$. Therefore, the classification of the fixed point data of $M$ reduces to the existence of a 4-dimensional compact connected oriented manifold $M'$ equipped with a circle action whose fixed point data is $(\Sigma_M \setminus (\{\epsilon(p_1),a,a+b\} \cup \{\epsilon(p_2),b,a+b\})) \cup \{\epsilon(p_1),a,b\}$. Figure \ref{fig1} illustrates that, by the blow-up operation of Lemma \ref{l51} at the fixed point $p$ in $M'$, from the manifold $M'$ whose multigraph is $\Gamma'$ we obtain a manifold $M''$ whose multigraph is the same as the multigraph $\Gamma$ associated to $M$.
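On the level of fixed point data, the blow-up of Lemma \ref{l51} is the replacement $\{\epsilon,a,b\} \mapsto \{\epsilon,a,a+b\} \cup \{\epsilon,b,a+b\}$, and the reduction in Case (1) is its inverse. A minimal Python sketch of this bookkeeping (illustrative only; the example data comes from a rotation of $S^4$ as in Theorem \ref{t71}):

```python
# Bookkeeping of Lemma l51 on fixed point data (illustrative sketch):
# blowing up a fixed point (eps, a, b) replaces it by (eps, a, a+b) and
# (eps, b, a+b), as computed above in the coordinates (z_1, u) and (z_2, v).

def blow_up(fixed_point_data, p):
    eps, a, b = p
    data = list(fixed_point_data)
    data.remove(p)
    return data + [(eps, a, a + b), (eps, b, a + b)]

# Example: blowing up the negative fixed point of a rotation of S^4 with
# data {+,1,1} u {-,1,1} yields 3 fixed points with data
# {+,1,1} u {-,1,2} u {-,1,2}, consistent with Theorem t71.
S4 = [(+1, 1, 1), (-1, 1, 1)]
blown_up = blow_up(S4, (-1, 1, 1))
assert sorted(blown_up) == sorted([(+1, 1, 1), (-1, 1, 2), (-1, 1, 2)])
```

Note that the blow-up changes $\sum_p \epsilon(p)$ by $\epsilon(p)$, since one fixed point of sign $\epsilon(p)$ is replaced by two of the same sign.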
\begin{figure} \centering \begin{subfigure}[b][6.5cm][s]{.4\textwidth} \centering \vfill \begin{tikzpicture}[state/.style ={circle, draw}] \node[state] (a) {}; \node[state] (b) [above right=of a] {$p_1$}; \node[state] (c) [above=of b] {$p_2$}; \node[state] (d) [above left=of c] {}; \path (a) edge node[right] {$a$} (b); \path (b) edge node [left] {$w=a+b$} (c); \path (c) edge node [left] {$b$} (d); \end{tikzpicture} \vfill \caption{$\Gamma$}\label{fig1-1} \end{subfigure} \begin{subfigure}[b][6.5cm][s]{.4\textwidth} \centering \vfill \begin{tikzpicture}[state/.style ={circle, draw}] \node[state] (a) {}; \node[state] (b) [above right=of a] {$p$}; \node[state] (c) [above left=of b] {}; \path (a) edge node[right] {$a$} (b); \path (b) edge node [right] {$b$} (c); \end{tikzpicture} \vfill \caption{$\Gamma'$}\label{fig1-2} \vspace{\baselineskip} \end{subfigure}\qquad \caption{Case (1)}\label{fig1} \end{figure} When $\epsilon(p_1)=\epsilon(p_2)=+1$ (i.e., when $\epsilon(p)=+1$), this corresponds to Step (2) in Theorem \ref{t11}. When $\epsilon(p_1)=\epsilon(p_2)=-1$ (i.e., when $\epsilon(p)=-1$), this corresponds to Step (3) in Theorem \ref{t11}. Second, suppose that Case (2) holds; $\epsilon(p_1)=-\epsilon(p_2)$ and $a=b$, i.e., $\Sigma_{p_1}=\{\epsilon(p_1),a,w\}$ and $\Sigma_{p_2}=\{-\epsilon(p_1),a,w\}$. Then there are two possibilities: \begin{enumerate}[(a)] \item There exists one more edge $e'$ between $p_1$ and $p_2$ with label $a$ (Figure \ref{fig2}(A)). \item There exist other fixed points $p_3$ and $p_4$ such that there is an edge $e_1$ with label $a$ between $p_1$ and $p_3$, and there is an edge $e_2$ with label $a$ between $p_2$ and $p_4$ (Figure \ref{fig2}(B)). \end{enumerate} Assume that Case (a) holds. 
In this case, if we can construct a 4-dimensional compact connected oriented manifold $M'$ equipped with a circle action whose fixed point data is $\Sigma_M \setminus (\{\epsilon(p_1),a,w\} \cup \{\epsilon(p_2),a,w\})$, then by Lemma \ref{l23}, we can perform an equivariant sum of $M'$ and a circle action on $S^4$ to construct a 4-dimensional compact connected oriented manifold $M''$ equipped with a circle action whose fixed point data is the same as $\Sigma_M$. Here, $S^1$ acts on $S^4$ by $g \cdot (z_1,z_2,x)=(g^a z_1,g^w z_2,x)$, where $g \in S^1 \subset \mathbb{C}$, $z_i \in \mathbb{C}$ and $x \in \mathbb{R}$. The fixed point data of the circle action on $S^4$ is $\{+,a,w\} \cup \{-,a,w\}$. Therefore, the classification of the fixed point data of $M$ reduces to the existence of a 4-dimensional compact connected oriented manifold $M'$ equipped with a circle action whose fixed point data is $\Sigma_M \setminus (\{\epsilon(p_1),a,w\} \cup \{\epsilon(p_2),a,w\})$. This corresponds to Step (1) in Theorem \ref{t11}. Assume that Case (b) holds. Let $a_3$ and $a_4$ be the remaining weights at $p_3$ and $p_4$, respectively, i.e., $\Sigma_{p_3}=\{\epsilon(p_3),a,a_3\}$ and $\Sigma_{p_4}=\{\epsilon(p_4),a,a_4\}$. Since $p_1$ and $p_3$ are connected by the edge $e_1$ whose label is $a$, by (4) of Proposition \ref{p42}, $p_1$ and $p_3$ lie in the same connected component of $M^{\mathbb{Z}_a}$. Suppose that $a>1$. If we apply Lemma \ref{l33} with $a$ here in the role of $w$ there ($w$ here in the role of $a$ there, and $a_3$ here in the role of $b$ there), then we have that $w \equiv -a_3 \mod a$ if $\epsilon(p_1) =\epsilon(p_3)$, and $w \equiv a_3 \mod a$ if $\epsilon(p_1) \neq \epsilon(p_3)$. Equivalently, $-\epsilon(p_1)w \equiv \epsilon(p_3) a_3 \mod a$. This relation is trivial when $a=1$. Hence it holds for any $a$. Similarly, repeating this argument for the edge $e_2$ between $p_2$ and $p_4$ whose label is also $a$, we have $-\epsilon(p_2)w \equiv \epsilon(p_4) a_4 \mod a$.
In Case (b), we redraw the edges $e_1$ between $p_1$ and $p_3$ and $e_2$ between $p_2$ and $p_4$ in the following way. Instead of the edge $e_1$ with label $a$ between $p_1$ and $p_3$, there is an edge $e_1'$ with label $a$ between $p_1$ and $p_2$. Instead of the edge $e_2$ with label $a$ between $p_2$ and $p_4$, there is an edge $e_2'$ with label $a$ between $p_3$ and $p_4$. Except for the redrawing of these two edges, the other edges of $\Gamma$ remain the same. See Figure \ref{fig2}(B) for the multigraph $\Gamma$ and Figure \ref{fig2}(C) for the new multigraph $\Gamma'$. Since our purpose is to classify the fixed point data, changing the edges, i.e., the isotropy submanifolds, does not cause any problem. On the other hand, the new multigraph $\Gamma'$ must still satisfy Lemma \ref{l33} in order to be realized as a multigraph associated to a 4-dimensional compact connected oriented manifold equipped with a circle action having a discrete fixed point set, as the original multigraph $\Gamma$ does. Now, in the new multigraph $\Gamma'$, $p_1$ and $p_2$ are connected by the two edges $e$ with label $w$ and $e_1'$ with label $a$. Since $\epsilon(p_1)=-\epsilon(p_2)$, if we apply Lemma \ref{l33} for the edge $e$ whose label is $w$, then we have $a \equiv a \mod w$. If we apply Lemma \ref{l33} for the edge $e_1'$ whose label is $a$, we have $w \equiv w \mod a$. Therefore, Lemma \ref{l33} holds for the edges $e$ and $e_1'$. Previously, since there were the edge $e_1$ with label $a$ between $p_1$ and $p_3$ and the edge $e_2$ with label $a$ between $p_2$ and $p_4$, by Lemma \ref{l33}, we had $-\epsilon(p_1)w \equiv \epsilon(p_3) a_3 \mod a$ and $-\epsilon(p_2)w \equiv \epsilon(p_4) a_4 \mod a$, respectively. Therefore, we had $-\epsilon(p_3) a_3 \equiv \epsilon(p_4) a_4 \mod a$, since $\epsilon(p_1)=-\epsilon(p_2)$.
Now, by Lemma \ref{l33} for the edge $e_2'$ between $p_3$ and $p_4$ whose label is $a$ in the new multigraph $\Gamma'$, we must have $-\epsilon(p_3) a_3 \equiv \epsilon(p_4) a_4 \mod a$, and this confirms that Lemma \ref{l33} holds for the new multigraph $\Gamma'$. With the redrawing of the edges, we now fall into the situation of Case (a), so we proceed as in Case (a). Therefore, Case (b) also corresponds to Step (1) in Theorem \ref{t11}. In the three Cases (1), (2)(a), and (2)(b) above, the classification problem of the fixed point data of $M$ reduces to the existence of a 4-dimensional compact connected oriented manifold $M'$ equipped with a circle action that has fewer fixed points. Repeat the process above whenever the biggest weight $w$ of an edge among all the edges is bigger than 1. The problem then reduces to the existence of a 4-dimensional compact connected oriented manifold $M'''$ equipped with a circle action whose weights in the fixed point data are all 1, i.e., a semi-free circle action. By Theorem \ref{t29}, the number $k$ of fixed points $p$ in $M'''$ with $\epsilon(p)=+1$ equals the number of those with $\epsilon(p)=-1$. Such a manifold $M'''$ can be constructed as an equivariant sum (along free orbits, in the sense of Lemma \ref{l23}) of $k$ copies of $S^4$, each of which is equipped with the circle action $g \cdot (z_1,z_2,x)=(g z_1,g z_2,x)$. The fixed point data of each $S^4$ is $\{+,1,1\} \cup \{-,1,1\}$. This also corresponds to Step (1) in Theorem \ref{t11}.
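The combinatorial content of the argument can be rehearsed directly on fixed point data. The following Python sketch (illustrative only; it uses the step names of Theorem \ref{t11} and the identity $\mathrm{sign}(M)=\sum_{p} \epsilon(p)$ of equation \ref{eq:1}) tracks how each step changes the fixed point data and the signature:

```python
# Fixed point data is modelled as a list of triples (eps, a, b).

def step1(data, a, b):
    """Step (1): equivariant sum with a rotation of S^4, adjoining
    {+,a,b} and {-,a,b}."""
    return data + [(+1, a, b), (-1, a, b)]

def blow_up(data, point):
    """Steps (2)/(3): replace (eps,a,b) by (eps,a,a+b) and (eps,b,a+b);
    Step (2) when eps = +1, Step (3) when eps = -1."""
    eps, a, b = point
    data = list(data)
    data.remove(point)
    return data + [(eps, a, a + b), (eps, b, a + b)]

def signature(data):
    # equation eq:1: sign(M) is the sum of the signs eps(p)
    return sum(eps for eps, _, _ in data)

data = step1([], 1, 2)             # {+,1,2} u {-,1,2}
assert signature(data) == 0        # Step (1) never changes the signature
data = blow_up(data, (+1, 1, 2))   # Step (2) raises the signature by 1
assert signature(data) == 1
data = blow_up(data, (-1, 1, 2))   # Step (3) lowers it by 1
assert signature(data) == 0 and len(data) == 4
```

The resulting data $\{+,1,3\} \cup \{+,2,3\} \cup \{-,1,3\} \cup \{-,2,3\}$ is an instance of Case (1) of Theorem \ref{t72}; in particular, Step (1) preserves the signature while Steps (2) and (3) change it by $\pm 1$.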
\end{proof} \begin{figure} \begin{subfigure}[b][3.5cm][s]{.17\textwidth} \centering \vfill \begin{tikzpicture}[state/.style ={circle, draw}] \node[state] (a) {$p_1$}; \node[state] (b) [above=of a] {$p_2$}; \path (a) [bend right =20]edge node[right] {$w$} (b); \path (b) [bend right =20]edge node[left] {$a$} (a); \end{tikzpicture} \vfill \caption{Case(2)(a)}\label{fig2-1} \vspace{\baselineskip} \end{subfigure}\qquad \begin{subfigure}[b][3.5cm][s]{.3\textwidth} \centering \vfill \begin{tikzpicture}[state/.style ={circle, draw}] \node[state] (a) {$p_1$}; \node[state] (b) [right=of a] {$p_3$}; \node[state] (c) [above=of a] {$p_2$}; \node[state] (d) [right=of c] {$p_4$}; \node[state] (e) [right=of b] {}; \node[state] (f) [right=of d] {}; \path (a) edge node[above] {$a$} (b); \path (c) edge node[left] {$w$} (a); \path (d) edge node[above] {$a$} (c); \path (b) edge node[above] {$a_3$} (e); \path (d) edge node[above] {$a_4$} (f); \end{tikzpicture} \vfill \caption{Case(2)(b)$\Gamma$}\label{fig2-2} \vspace{\baselineskip} \end{subfigure}\qquad \begin{subfigure}[b][3.5cm][s]{.3\textwidth} \centering \vfill \begin{tikzpicture}[state/.style ={circle, draw}] \node[state] (a) {$p_1$}; \node[state] (c) [right=of a] {$p_3$}; \node[state] (b) [above=of a] {$p_2$}; \node[state] (d) [above=of c] {$p_4$}; \node[state] (e) [right=of c] {}; \node[state] (f) [right=of d] {}; \path (a) [bend right =20]edge node[right] {$w$} (b); \path (b) [bend right =20]edge node[left] {$a$} (a); \path (c) edge node [right] {$a$} (d); \path (c) edge node[above] {$a_3$} (e); \path (d) edge node[above] {$a_4$} (f); \end{tikzpicture} \vfill \caption{Case(2)(b)$\Gamma'$}\label{fig2-3} \vspace{\baselineskip} \end{subfigure}\qquad \caption{Case (2)}\label{fig2} \end{figure} \section{$S^1$-actions on 4-manifolds with few fixed points and proof of Corollary \ref{c13}} \label{s7} In this section, from Theorem \ref{t11}, we discuss the classification of $S^1$-actions on 4-dimensional compact oriented manifolds with 
few fixed points. From Theorem \ref{t11}, it is straightforward to classify the fixed point data when there are few fixed points. The case of two fixed points is given in Theorem \ref{t28} for any dimension. \begin{theorem} \label{t71} Let the circle act on a 4-dimensional compact connected oriented manifold $M$ with 3 fixed points. Then the fixed point data of $M$ is the same as that of a blow-up at a fixed point of a rotation on $S^4$ with two fixed points, i.e., the fixed point data of $M$ is $\{\mp,a,b\}$, $\{\pm,a,a+b\}$, $\{\pm,b,a+b\}$ for some positive integers $a$ and $b$, with $\mathrm{sign}(M)=\pm 1$. \end{theorem} \begin{proof} The fixed point data of $M$ is obtained by the combinatorics in Theorem \ref{t11}. Since there are 3 fixed points, Step (1) must occur precisely once; if Step (1) occurred more than once, there would be at least 4 fixed points. Thus we have $\{+,a,b\} \cup \{-,a,b\}$ for some positive integers $a$ and $b$. Next, to have 3 fixed points, exactly one of Step (2) and Step (3) must occur, exactly once. If Step (2) occurs, then it replaces $\{+,a,b\}$ by $\{+,a,a+b\} \cup \{+,b,a+b\}$ and hence the fixed point data of $M$ is $\{+,a,a+b\} \cup \{+,b,a+b\} \cup \{-,a,b\}$. Similarly, if Step (3) occurs, then the fixed point data of $M$ is $\{+,a,b\} \cup \{-,a,a+b\} \cup \{-,b,a+b\}$. The conclusion on the signature of $M$ follows immediately from equation \ref{eq:1}. \end{proof} \begin{theorem} \label{t72} Let the circle act on a 4-dimensional compact connected oriented manifold $M$ with 4 fixed points. Then precisely one of the following holds for the fixed point data $\Sigma_M$ of $M$. \begin{enumerate} \item $\Sigma_M=\{+,a,b\} \cup \{-,a,b\} \cup \{+,c,d\} \cup \{-,c,d\}$ for some positive integers $a,b,c$, and $d$. \item $\Sigma_M=\pm (\{-,a,b\} \cup \{+,a,a+b\} \cup \{+,b,a+2b\} \cup \{+,a+b,a+2b\})$ for some positive integers $a$ and $b$. \end{enumerate} Moreover, in Case (1) $\textrm{sign}(M)=0$ and in Case (2) $\textrm{sign}(M)=\pm 2$.
\end{theorem} \begin{proof} The proof of the theorem is similar to that of Theorem \ref{t71}. The fixed point data of $M$ is obtained by the combinatorics in Theorem \ref{t11}, and Step (1) in Theorem \ref{t11} must occur at least once. If Step (1) occurs twice, then the fixed point data of $M$ is $\Sigma_M=\{+,a,b\} \cup \{-,a,b\} \cup \{+,c,d\} \cup \{-,c,d\}$ for some positive integers $a,b,c$, and $d$, which is Case (1) of the theorem. Suppose that Step (1) occurs exactly once. Assume that we apply Step (2) in Theorem \ref{t11}. Then we have $\{+,a,a+b\} \cup \{+,b,a+b\} \cup \{-,a,b\}$ for some positive integers $a$ and $b$. If we apply Step (2) to $\{+,a,a+b\}$ or $\{+,b,a+b\}$, then this yields Case (2) of the theorem. If we apply Step (3) to $\{-,a,b\}$, then the fixed point data of $M$ is $\{+,a,a+b\} \cup \{+,b,a+b\} \cup \{-,a,a+b\} \cup \{-,b,a+b\}$, which is Case (1) of the theorem. The case where we first apply Step (3) is similar. In each case, the conclusion on the signature of $M$ follows immediately from equation \ref{eq:1}. \end{proof} It was not previously known whether a manifold with fixed point data as in Case (2) of Theorem \ref{t72} exists, since the fixed point data for the case of 4 fixed points had not been classified. Our Theorem \ref{t12} proves the existence of a manifold realizing Case (2) of Theorem \ref{t72}. Note that the fixed point data of Case (2) in Theorem \ref{t72} cannot be realized as the fixed point data of a circle action on a complex manifold or a symplectic manifold, since with 4 fixed points the signature of such a manifold must vanish in either case \cite{Jan3}. We end this section with a proof of Corollary \ref{c13}. \begin{proof} [Proof of Corollary \ref{c13}] Equation \ref{eq:1} states that \begin{center} $\displaystyle \textrm{sign}(M)=\sum_{p \in M^{S^1}} \epsilon(p)$. \end{center} Since $\epsilon(p)$ is either $+1$ or $-1$, it follows that $\textrm{sign}(M) \equiv k \mod 2$.
Since there are finitely many fixed points, Step (1) in Theorem \ref{t11} must occur at least once. Moreover, Step (1), Step (2), and Step (3) in Theorem \ref{t11} only increase the number of fixed points with sign $+1$ and/or $-1$ and never decrease it. Therefore, not all fixed points can have $\epsilon(p)=+1$ ($\epsilon(p)=-1$). Hence we have $\textrm{sign}(M) \leq k-2$ ($\textrm{sign}(M) \geq 2-k$). This proves the first part. For the second part, we first prove that for any integer $k \geq 2$, there exists a 4-dimensional compact connected oriented $S^1$-manifold $M$ with $k$ fixed points such that $\textrm{sign}(M)=k-2$ ($\textrm{sign}(M)=2-k$); $k-1$ fixed points have sign $+1$ ($-1$) and one fixed point has sign $-1$ ($+1$, respectively). We prove this by induction on $k$. When $k=2$, the rotation on $S^4$ given by $g \cdot (z_1,z_2,x)=(g z_1, g z_2,x)$ provides the base case; it has 2 fixed points, its fixed point data is $\{+,1,1\}\cup\{-,1,1\}$, and hence $\textrm{sign}(S^4)=0$. Assume that the claim holds for $k=i+2$; we prove it for $k=i+3$. Take any 4-dimensional compact connected oriented $S^1$-manifold $M'$ with $i+2$ fixed points such that $\textrm{sign}(M')=i$ ($\textrm{sign}(M')=-i$). Take a fixed point $p$ whose sign is $+1$ ($-1$); such a fixed point $p$ exists by Theorem \ref{t11}. Let the fixed point data at $p$ be $\Sigma_p=\{+,a,b\}$ ($\Sigma_p=\{-,a,b\}$) for some positive integers $a$ and $b$. By Lemma \ref{l51}, by blowing up $M'$ at the fixed point $p$ we can construct a 4-dimensional compact connected oriented manifold $\widetilde{M'}$ equipped with a circle action such that the fixed point data of $\widetilde{M'}$ is $(\Sigma_{M'} \setminus \{+,a,b\}) \cup \{+,a,a+b\} \cup \{+,b,a+b\}$ ($(\Sigma_{M'} \setminus \{-,a,b\}) \cup \{-,a,a+b\} \cup \{-,b,a+b\}$). This creates one more fixed point with sign $+1$ ($-1$).
Therefore, the manifold $\widetilde{M'}$ has $i+3$ fixed points; $i+2$ fixed points have sign $+1$ ($-1$) and one fixed point has sign $-1$ ($+1$). Since $\textrm{sign}(\widetilde{M'})=\sum_{p \in \widetilde{M'}^{S^1}} \epsilon(p)$, the signature of $\widetilde{M'}$ is $\textrm{sign}(\widetilde{M'})=i+1$ ($\textrm{sign}(\widetilde{M'})=-i-1$). This proves the claim. Suppose that we are given a pair $(j,k)$ of integers such that $k \geq 2$, $2-k \leq j \leq k-2$, and $j \equiv k \mod 2$. By the claim above, there exists a 4-dimensional compact connected oriented $S^1$-manifold $M'$ with $|j|+2$ fixed points such that $\textrm{sign}(M')=j$ ($-j$). Take a connected sum of $M'$ and $\frac{k-|j|-2}{2}$ copies of $S^4$ (note that $k-|j|-2$ is even), on each of which the action is given by $g \cdot (z_1,z_2,x)=(g z_1, g z_2,x)$ and hence the fixed point data is $\{+,1,1\} \cup \{-,1,1\}$. Here, the connected sum is taken along free orbits of the manifolds in the sense of Lemma \ref{l23}. The resulting manifold $M$ is a 4-dimensional compact connected oriented manifold equipped with a circle action that has $k$ fixed points. Since $M'$ has $|j|+1$ fixed points with sign $+1$ ($-1$) and one fixed point with sign $-1$ ($+1$), and each rotation on $S^4$ has one fixed point with sign $+1$ and one fixed point with sign $-1$, the manifold $M$ has $|j|+1+\frac{k-|j|-2}{2}$ fixed points with sign $+1$ ($-1$) and $1+\frac{k-|j|-2}{2}$ fixed points with sign $-1$ ($+1$). Again, since $\textrm{sign}(M)=\sum_{p \in M^{S^1}} \epsilon(p)$, the signature of $M$ is $\textrm{sign}(M)=j$ ($-j$, respectively). \end{proof} \begin{rem} For the second part of Corollary \ref{c13}, the construction of such a manifold is not unique. For instance, suppose that we wish to construct an example with 6 fixed points and with signature 2.
Then one may take two copies of a manifold with signature 1 as in Theorem \ref{t71} and take their connected sum in the sense of Lemma \ref{l23}, or blow up a rotation on $S^4$ twice at fixed points with sign $+1$ and take a connected sum with another rotation on $S^4$. The fixed point data of the former manifold is $\{-,a,b\} \cup \{+,a,a+b\} \cup \{+,b,a+b\} \cup \{-,c,d\} \cup \{+,c,c+d\} \cup \{+,d,c+d\}$ for some positive integers $a,b,c$, and $d$, whereas for the latter it is $\{-,e,f\} \cup \{+,e,e+f\} \cup \{+,f,e+2f\} \cup \{+,e+f,e+2f\} \cup \{+,g,h\} \cup \{-,g,h\}$ for some positive integers $e,f,g$, and $h$. \end{rem} \section{Graphs as manifolds} \label{s8} In Section \ref{s4}, we associated a signed, labelled multigraph without loops to a compact oriented manifold equipped with a circle action having a discrete fixed point set. The multigraph is $n$-regular, where $2n$ is the dimension of the manifold. Moreover, it satisfies the equal modulo property for the weights of edges in the sense of Lemma \ref{l31}. Now we may ask: under what conditions does a multigraph behave like a manifold? That is, when can a multigraph be realized as the multigraph associated to a manifold equipped with a circle action? We answer this question when the multigraph is 2-regular. \begin{Definition} \label{d81} Let $\Gamma$ be a signed, labelled, 2-regular multigraph that does not have any loops. \begin{enumerate} \item The multigraph $\Gamma$ is called \textbf{effective}, if for every vertex $v$, its edges $e_1$ and $e_2$ have relatively prime labels. \item The multigraph $\Gamma$ is said to satisfy the \textbf{equal modulo property}, if for any edge $e$ between two vertices $v_1$ and $v_2$, where $e_i$ denotes the remaining edge at $v_i$, we have $-\epsilon(v_1) w(e_1) \equiv \epsilon(v_2)w(e_2) \mod w(e)$.
\item The multigraph $\Gamma$ is said to satisfy the \textbf{minimal property}, if whenever an edge $e$ has the smallest label among all edges, its vertices $v_1$ and $v_2$ satisfy $\epsilon(v_1)=-\epsilon(v_2)$. \end{enumerate} \end{Definition} With Definition \ref{d81}, we provide a necessary and sufficient condition for a 2-regular multigraph to be realized as the multigraph associated to a circle action on an oriented 4-manifold. \begin{theorem} \label{t82} Let $\Gamma$ be an effective, signed, labelled, 2-regular multigraph that does not have any loops. Suppose that $\Gamma$ satisfies the equal modulo property and the minimal property. Then there exists a 4-dimensional compact connected oriented manifold $M$ equipped with an effective circle action having a discrete fixed point set such that the corresponding multigraph is $\Gamma$. \end{theorem} \begin{proof} The proof is analogous to the proofs of Theorem \ref{t11} and Theorem \ref{t12} in Section \ref{s6}. Let $e$ be an edge whose label $w(e)$ is the largest among all the labels of edges, and let $v_1$ and $v_2$ be its vertices. Suppose that $w(e)>1$. If $\epsilon(v_1)=\epsilon(v_2)$, then by the equal modulo property of $\Gamma$, the remaining edges $e_i$ at $v_i$ satisfy $w(e_1)+w(e_2)=w(e)$. As in Case (1) of the proof of Theorem \ref{t11}, the existence of $M$ whose multigraph is $\Gamma$ reduces to the existence of a manifold $M'$ whose multigraph $\Gamma'$ (Figure \ref{fig1-2}) is the same as $\Gamma$ (Figure \ref{fig1-1}) with the edge $e$ shrunk to a vertex (Figure \ref{fig1}). Similarly, in the case that $\epsilon(v_1) \neq \epsilon(v_2)$, we proceed as in Case (2) of the proof of Theorem \ref{t11} in Section \ref{s6}; the existence of $M$ with multigraph $\Gamma$ reduces to the existence of a manifold $M'$ with a multigraph $\Gamma'$ that has fewer vertices than $\Gamma$. At each step, the minimal property is preserved. Finally, after finitely many such steps, the label of every edge is 1.
Such a multigraph is realized as the multigraph associated to an equivariant sum, in the sense of Lemma \ref{l23}, of copies of $S^4$, on each of which the action is given by $g \cdot (z_1,z_2,x)=(gz_1,gz_2,x)$. \end{proof} As an application of Theorem \ref{t82}, we give a necessary and sufficient condition for a collection of fixed point data to be realized as the fixed point data of a circle action on an oriented 4-manifold. \begin{theorem} \label{t83} Let $\Sigma=\cup_{p} \{\epsilon(p),w_p^1,w_p^2\}$ be a finite collection, where $\epsilon(p)=\pm 1$ and the $w_p^i$ are positive integers. Then there exists a 4-dimensional compact connected oriented manifold $M$ equipped with an effective circle action whose fixed point data is $\Sigma$ if and only if there exists a multigraph $\Gamma=(V,E)$ such that $\cup_{v \in V} \{\epsilon(v),w(e_1(v)),w(e_2(v))\}=\Sigma$ and $\Gamma$ satisfies the conditions in Theorem \ref{t82}. \end{theorem}
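The combinatorial reduction steps used throughout this section are easy to simulate directly on fixed point data. The following sketch (the function names are ours, purely for illustration) encodes Step (1) and the blow-up replacement $\{\epsilon,a,b\} \mapsto \{\epsilon,a,a+b\} \cup \{\epsilon,b,a+b\}$ of Steps (2) and (3), and checks the signature formula $\textrm{sign}(M)=\sum_{p} \epsilon(p)$ of equation \ref{eq:1} on Case (2) of Theorem \ref{t72} with $a=b=1$:

```python
# Fixed point data is represented as a list of triples (sign, a, b),
# with sign = +1 or -1 and positive integer weights a, b.
# Function names are illustrative, not from the paper.

def step1(data, a=1, b=1):
    """Step (1): create a pair of fixed points {+,a,b} and {-,a,b}."""
    return data + [(+1, a, b), (-1, a, b)]

def blow_up(data, p):
    """Steps (2)/(3): replace {eps,a,b} by {eps,a,a+b} and {eps,b,a+b}."""
    eps, a, b = data[p]
    return data[:p] + data[p + 1:] + [(eps, a, a + b), (eps, b, a + b)]

def signature(data):
    """Equation (1): sign(M) is the sum of epsilon(p) over fixed points."""
    return sum(eps for (eps, _, _) in data)

data = step1([])          # {+,1,1} u {-,1,1}: the rotation on S^4, signature 0
data = blow_up(data, 0)   # 3 fixed points: Theorem t71 with a = b = 1, signature 1
data = blow_up(data, 1)   # 4 fixed points: Case (2) of Theorem t72, signature 2
```

The final `data` is $\{-,1,1\} \cup \{+,1,2\} \cup \{+,1,3\} \cup \{+,2,3\}$, which is Case (2) of Theorem \ref{t72} with $a=b=1$ and signature $2$.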
\section*{Acknowledgments} The authors would like to thank the reviewers for their comments that helped improve the manuscript. This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-PhD-2021-01-002). \section{Conclusion} We presented an effective novel fine-tuning strategy, Prompt Regularization (ProReg), for deploying VLMs in downstream tasks. As its name implies, ProReg is designed to make the best of two worlds: pretrained general knowledge and task-specific knowledge, where the former is acquired by using prompts. The highlight of ProReg is the proposed sample-wise adaptive weight that trades off the training losses from the two worlds; this weight is theoretically justified. Therefore, we believe that ProReg has great potential for helping practitioners fine-tune their own models efficiently. In extensive evaluations on three types of challenging OOD benchmarks, ProReg significantly outperformed zero-shot prompting, prompt tuning, conventional fine-tuning, and other state-of-the-art methods. \section{Introduction}\label{sec:1} \input{images/Figure1} Consider what you do first when you want to train a vision model for a task. Most likely, you will download an off-the-shelf foundation model~\cite{bommasani2021opportunities}, \textit{e.g.}, ResNet~\cite{resnet} or CLIP~\cite{clip}, pretrained on a large-scale dataset such as ImageNet~\cite{imagenet} or image-text pairs collected from the Internet~\cite{clip}; then, remove its classifier head, plug your own task-specific head into the penultimate layer, and finally fine-tune on your own task data. Such a ``pretrain, fine-tune'' paradigm has become a nearly ubiquitous standard for the CV community---from classification to generation~\cite{stylegan2,biggan}, regions to pixels~\cite{maskrcnn}, and single modality to multi-modality~\cite{UpDn,ViLBERT}.
The underlying empirical principle is that the pretrained model, used as initialization, plays a regularizing role that reduces the variance of the fine-tuned model~\cite{sutskever2013importance}. Despite this beneficial regularization, the pretrained knowledge can have a negative impact, especially when the downstream task data is limited or biased~\cite{negtransfer,yang2021causal}: the early exposed encyclopedic or generic features from the pretrained model may mislead the fine-tuning to focus on task-unrelated attributes, resulting in a biased fine-tuned model. Figure~\ref{fig:figure1} shows three types of biases. Figure~\ref{fig:figure1}(a), contextual bias: images of the training and test sets contain distinct backgrounds, \textit{e.g.}, if most of the training ``bird'' images have a ``sky'' background, the fine-tuned model will misclassify a ``bird on ground'' as ``dog''. Figure~\ref{fig:figure1}(c), image style gap: the test image style is unseen during training, \textit{e.g.}, if training images are from the art painting, cartoon, and real domains, the fine-tuned model is able to classify an in-distribution art painting ``dog'' but is confused when tested on a sketch ``dog''. Figure~\ref{fig:figure1}(e), language bias: for every question type, the train and test sets have different prior distributions of answers, \textit{e.g.}, if most VQA training ``bananas'' images are ``yellow'', the model wrongly answers ``yellow'' to the question ``what color are the bananas?'' given an image of green bananas. Recently, the NLP community presented a tuning-free paradigm called ``pretrain, prompt, predict''~\cite{promptsurvey}, which has been quickly migrated to CV tasks~\cite{coop,frozen,cpt} by using a pretrained multi-modal model~\cite{vilt,clip,vinvl}. For example, image classification can be cast into the cloze prompt ``a photo of a \texttt{[CLASS]}'', where the prediction is the class word whose fill-in sentence is most similar to the query image.
The similarity can be calculated directly from the pretrained model in a zero-shot fashion. As the prompt is a rule-based query that has nothing to do with the downstream statistics, the prediction is expected to be independent of the downstream domain and to faithfully respect the pretrained knowledge. Yet, relying too much on the general knowledge also hurts domain-specific generalization. For example, as shown in Figure~\ref{fig:figure1}(b), although the prompt can correctly focus on the foreground object, it is less discriminative in distinguishing between ``rat'' and ``cat'' in domain-specific animal images; in Figure~\ref{fig:figure1}(d), the prompt's prediction confidence is not as discriminative as that of fine-tuning; in Figure~\ref{fig:figure1}(f), prompt-based VQA is too general to perform the downstream task of counting ``bananas''. To this end, ``prompt tuning'' has been proposed to fine-tune the token embeddings in a prompt using the task data~\cite{promptsurvey,liu2021gpt,han2021ptr,coop}. For example, the prefix ``a photo of a'' before the cloze blank \texttt{[CLASS]} can be replaced with a set of tunable parameters. Prompt tuning is essentially fine-tuning with a fixed backbone and a tunable head (\textit{i.e.}, the prompt prefix). Therefore, it still inherits the above drawbacks of biased fine-tuning. In this paper, we present a new fine-tuning paradigm, called \emph{Prompt Regularization} (ProReg), for a \emph{just-right} knowledge transfer from the pretrained model to the fine-tuned model. As expected, ProReg yields a model that is biased neither towards the pretrained knowledge nor towards the downstream knowledge. We formulate the downstream knowledge as the ground-truth annotations in downstream tasks, and represent the pretrained knowledge with the ``soft'' labels of the downstream data generated by prompting the pretrained model. We then propose the ProReg loss to enable learning from both kinds of knowledge.
It is worth noting that, different from traditional knowledge distillation, which uses a constant weight $\lambda \in (0,1)$ to trade off the contributions of the two kinds of knowledge, $\mathcal{L}_\text{kd}=(1-\lambda)\cdot \mathcal{L}_\text{ce}+\lambda\cdot \mathcal{L}_\text{kl}$, we propose a sample-wise adaptive weight to achieve a good trade-off between them (Section~\ref{sec:proreg}). The proposed weight inspects whether the task-specific knowledge or the pretrained general knowledge dominates the optimization process for each fine-tuning sample, each of which indeed requires a different ratio of the two knowledge types. We show that the estimation of this ratio evolves during the training process and can be automatically calculated on-the-fly. We implement ProReg on top of two off-the-shelf large-scale pretrained models, CLIP~\cite{clip} and ViLT~\cite{vilt}, which demonstrates that ProReg is applicable to different vision-language models that adopt masked language modeling or contrastive learning as pretraining tasks. We conduct extensive evaluations of ProReg on various out-of-distribution benchmarks, including BAR~\cite{LfF}, NICO~\cite{nico}, PACS~\cite{PACS}, and DomainNet~\cite{DomainNet} for image classification and VQA-CP~\cite{VQACP-GVQA} for visual question answering. We demonstrate that: 1) ProReg consistently outperforms zero-shot prompting, conventional fine-tuning, and prompt tuning on all the datasets; 2) ProReg achieves compelling performance in both out-of-distribution and in-distribution settings. Thus, readers can feel free to use ProReg regardless of the discrepancy between the training and testing distributions. \section{Related Work} \noindent\textbf{Vision-Language Models~(VLM).} Most existing VLMs use 1) masked language modeling~\cite{vilt,visualbert,ViLBERT}, 2) image-text matching~\cite{lxmert,PixelBERT}, or 3) contrastive learning~\cite{clip} as their pretraining objectives.
Recently, there has been a line of work adapting existing VLMs to downstream tasks. The conventional fine-tuning paradigm adds an additional classifier on top of the visual backbone (Linear Probe~\cite{clip}) or an additional feature adapter (CLIP-Adapter~\cite{gao2021clip}). Prompt-based learning, which tunes the prompt to maximize the ground-truth token, has gained popularity, \textit{e.g.}, CoOp~\cite{coop} and CoCoOp~\cite{zhou2022cocoop}. ProGrad~\cite{ProGrad} bridges the generalization gap by matching the gradient of the prompt to the general knowledge. ProDA~\cite{ProDA} introduces prompt distribution learning to adapt VLMs to downstream classification tasks. As discussed in Section~\ref{sec:1}, both of these fine-tuning paradigms may result in biased downstream models; in this work, we aim to fine-tune a debiased VLM for downstream tasks. \noindent\textbf{OOD Generalization}. In the real world, the test distribution may shift from the training distribution, as in domain generalization~\cite{ben2007analysis,tzeng2017adversarial}, long-tailed classification~\cite{menon2020long,tang2020long}, contextual bias~\cite{LfF,nico}, and language bias~\cite{SCR,advreg,rubi,CF-VQA}. WiSE~\cite{WiSE} shares the same goal of improving OOD performance, and ensembles the weights of the zero-shot and fine-tuned models. However, it requires the fine-tuned model and the pretrained model to have the same architecture. Differently, our ProReg is free of this requirement and allows modification of the architecture. \input{images/ProReg} \section{The ProReg Pipeline} \subsection{Pre-trained Model} In this paper, we adopt two state-of-the-art VLMs as our backbones, \textit{i.e.}, the Contrastive Language-Image Pre-training model (CLIP~\cite{clip}) for image classification tasks and the Vision-and-Language Transformer (ViLT~\cite{vilt}) for VQA tasks. CLIP is trained in a contrastive manner on a large collection of images paired with natural language descriptions.
The associated image and text pairs are regarded as positive pairs, while mismatched image and text pairs are regarded as negative ones. The contrastive loss aims to maximize the cosine similarity of the positive pairs in the batch while minimizing the cosine similarity of the negative pairs. ViLT consists of stacked blocks, each of which includes a multi-headed self-attention layer and a multi-layer perceptron layer. It is pretrained with two pretext tasks: Image Text Matching (ITM) and Masked Language Modeling (MLM). For ITM, ViLT selects half of the sentences and replaces each of them with a mismatched sentence. Then, a binary classifier is added to predict whether the image and the sentence match each other. MLM first randomly replaces $15\%$ of the input text tokens with masked tokens, \textit{e.g.}, \texttt{[MASK]}, and then the model is optimized to predict the masked tokens given the other non-masked tokens and the image features. For classification tasks, CLIP-based models show superior performance over ViLT-based ones, so we only report the CLIP results in the main paper; the image classification results of ViLT can be found in the Appendix. For VQA tasks, the input text varies from sample to sample, which makes it difficult for the CLIP model to infer and optimize; as a result, we only implement ProReg on ViLT models. Note that our approach is applicable to broader vision-language models whose pretraining pretext tasks include ITM and MLM objectives or a contrastive objective. \subsection{Prompt-based Zero-shot Inference}\label{sec:promptdesign} A proper prompt can adapt the pre-trained model to downstream tasks without fine-tuning. In this section, we illustrate how CLIP performs zero-shot inference for image classification tasks and how ViLT can leverage carefully designed prompts to perform zero-shot prediction for VQA tasks. \input{tables/prompt_example} \noindent\textbf{Image Classification}.
We formulate image classification as an image-text matching problem where the text is designed by a simple static prompt template. Take the action recognition dataset BAR~\cite{LfF} in Figure~\ref{fig:proreg}(a) as an example. We first compute the text embedding $\mathbf{w}_i$ of ``a person is \texttt{[CLASS]}.'', where ``\texttt{[CLASS]}'' is replaced by the $i$-th class name. Then the probability distribution of the image feature $\mathbf{x}$ over the $K$ classes is given by: \begin{equation}\label{eq:clipzs} f(y=i|\mathbf{x};\theta)=\frac{\exp(\langle\mathbf{x},\mathbf{w}_i\rangle\!/\tau)}{\sum_{j=1}^K\exp(\langle\mathbf{x},\mathbf{w}_j\rangle\!/\tau)} \end{equation} where $\tau$ is the temperature learned by CLIP. \noindent\textbf{Visual Question Answering (VQA)}. The design of prompts for the VQA task is less straightforward because the text inputs in VQA (\textit{i.e.}, questions) and in the pretraining tasks (\textit{i.e.}, captions) have different forms. Notice that the VQA dataset consists of two types of questions: open-ended questions (\textit{e.g.}, ``what color is the cake?'') and closed-ended questions (\textit{e.g.}, ``any brown apples in the picture?''). We convert the question into a statement and name this design the \textbf{Question-to-Statement (Q2S)} prompt. Table~\ref{tab:promptexample} shows some prompt examples. Specifically, we use ITM for closed-ended questions and MLM for open-ended questions. For example, for a closed-ended question like ``any brown apples in the picture?'', the prompt statement is generated as ``some brown apples in the picture.''. The text embedding together with the visual feature is fed into the ITM head to predict whether they match each other; a high match score corresponds to the answer ``yes'' and a low score to ``no''. For an open-ended question like ``what color is the cake?'', the prompt is generated as ``the color of the cake is \texttt{[MASK]}.''.
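Concretely, the zero-shot scoring of Eq.~\eqref{eq:clipzs} is a temperature-scaled softmax over cosine similarities between the image feature and the class-prompt embeddings; the same softmax-over-candidates scoring applies to the MLM answer search for open-ended questions. A minimal NumPy sketch (the random features, dimensions, and $\tau$ value are placeholders, not CLIP's actual weights):

```python
import numpy as np

# Illustrative sketch of Eq. (1): a temperature-scaled softmax over the
# cosine similarities between an image feature x and K class-prompt
# embeddings w_1, ..., w_K.  All inputs here are random placeholders.
def zero_shot_probs(x, W, tau=0.01):
    x = x / np.linalg.norm(x)                          # L2-normalize image feature
    W = W / np.linalg.norm(W, axis=1, keepdims=True)   # L2-normalize text embeddings
    logits = W @ x / tau                               # <x, w_i> / tau
    logits -= logits.max()                             # numerical stability
    e = np.exp(logits)
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=8)           # placeholder image feature
W = rng.normal(size=(5, 8))      # placeholder embeddings of 5 class prompts
p = zero_shot_probs(x, W)        # probability distribution over the 5 classes
```

Since the softmax is monotone, the predicted class is simply the prompt with the largest cosine similarity to the image.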
Similar to Eq.~\eqref{eq:clipzs}, we search the candidate answer set to find the answer with the highest probability. \subsection{Sample-wise Adaptive Trade-off Weight}\label{sec:proreg} In this section, we illustrate how to implement ProReg for CLIP models; ProReg for ViLT models can be implemented analogously (see the Appendix). Figure~\ref{fig:proreg}(b) and (c) compare current fine-tuning approaches. Figure~\ref{fig:proreg}(b) shows the pipeline of ``pretrain, prompt, fine-tune'', where the classification head is generated by optimizing the continuous prompt. We refer to the ``pretrain, fine-tune'' paradigm in Figure~\ref{fig:proreg}(c) as ``\textit{Conventional Fine-tuning}'', which adds a classification head for prediction. The classification head can be randomly initialized or initialized by feeding the hand-crafted prompt to the text encoder. The common characteristic is that both paradigms use only the ground-truth labels as supervision. As discussed in Section~\ref{sec:1}, both of these fine-tuning paradigms may result in biased downstream models, especially when the task data is limited and biased~\cite{ifsl,negtransfer,yang2021causal}. We present a new fine-tuning paradigm, dubbed \emph{Prompt Regularization} (ProReg), to transfer the knowledge from the pretraining domain to the downstream domain. ProReg considers two types of supervision for optimization: task-specific downstream knowledge and task-agnostic general knowledge. \noindent \textbf{Task-specific downstream knowledge}. The ground-truth labels serve as the task-specific knowledge and allow the model $f(\cdot;\theta)$ to be adapted to the downstream task.
The cross-entropy $\mathcal{L}_\text{ce}$ of an image $\mathbf{x}$ is obtained as: \begin{equation} \mathcal{L}_\text{ce}(\theta) = -\sum_i \mathbf{y}_i \log f_i(\mathbf{x};\theta), \end{equation} where $\mathbf{y}$ denotes the one-hot ground-truth vector, the subscript $i$ denotes the $i$-th label, and $f(\cdot;\theta)$ is the classification model initialized by the pretrained model. \noindent \textbf{Task-agnostic general knowledge}. Compared to the large-scale datasets for pretraining, the task-specific data for the downstream task may be limited or even biased. As a result, taking only the ground-truth annotations as supervision may lead to biased fine-tuning. In order to achieve debiased fine-tuning, we also use the task-agnostic pretrained knowledge as regularization. We take the zero-shot prompt prediction of the pretrained model, $\mathbf{y}^\text{zs}$, as the regularization knowledge, and use the Kullback-Leibler divergence ($\mathcal{L}_\text{kl}$) between the fine-tuned model prediction and the zero-shot prompt prediction as the regularization term: \begin{equation}\label{eq:klloss} \mathcal{L}_\text{kl}(\theta) = - \sum_i \mathbf{y}^\text{zs}_i\log \frac{f_i(\mathbf{x};\theta)}{\mathbf{y}^\text{zs}_i}. \end{equation} To fine-tune a model that is biased neither towards the downstream domain nor towards the pretrained knowledge domain, a straightforward approach is knowledge distillation (KD)~\cite{hinton2015distilling}, which combines the two terms with a constant weight $\lambda \in (0,1)$: \begin{equation}\label{eq:totalloss1} \mathcal{L}_\text{kd}=(1-\lambda)\cdot\mathcal{L}_\text{ce} +\lambda\cdot \mathcal{L}_\text{kl}, \end{equation} where we omit $\theta$ for simplicity. However, this simple solution overlooks the fact that the trade-off between the two kinds of knowledge varies across samples and evolves during training. In the next part, we analyze the trade-off from the viewpoint of task-agnostic knowledge decomposition and propose a dynamic weight to balance the two loss terms for each training sample.
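For reference, the fixed-weight KD baseline of Eq.~\eqref{eq:totalloss1} can be sketched in a few lines of NumPy (all names and random inputs are illustrative; $\mathbf{y}$ is the one-hot label and $\mathbf{y}^\text{zs}$ the zero-shot prompt prediction):

```python
import numpy as np

# Minimal sketch of the KD baseline in Eq. (4): a constant lambda trades off
# cross-entropy against the KL regularizer towards the zero-shot prediction.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_loss(f, y):
    return -np.sum(y * np.log(f))            # cross-entropy with one-hot y

def kl_loss(f, yzs):
    return -np.sum(yzs * np.log(f / yzs))    # KL(y_zs || f), Eq. (3)

def kd_loss(z, y, yzs, lam=0.5):
    f = softmax(z)
    return (1 - lam) * ce_loss(f, y) + lam * kl_loss(f, yzs)

rng = np.random.default_rng(0)
z = rng.normal(size=5)               # model logits (placeholder)
y = np.eye(5)[0]                     # one-hot ground-truth label
yzs = softmax(rng.normal(size=5))    # zero-shot prompt prediction
loss = kd_loss(z, y, yzs)
```

At $\lambda=0$ this reduces to plain fine-tuning and at $\lambda=1$ to pure regularization towards the prompt prediction, which makes the limitation of a single constant $\lambda$ for all samples apparent.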
\noindent\textbf{Decomposition of task-agnostic general knowledge.} We decompose the regularization term $\mathcal{L}_\text{kl}$ in Eq.~\eqref{eq:klloss} as $\mathcal{L}_\text{kl}=\mathcal{L}_\text{ce}+(\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})$. The first term $\mathcal{L}_\text{ce}$ aims to learn the task-specific knowledge. The second term $(\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})$ is the key component for learning supplementary knowledge that is provided by the task-agnostic general knowledge but not included in the task-specific downstream knowledge. To better understand the contributions of the two kinds of knowledge, we calculate their gradients \textit{w.r.t.} the logit $\mathbf{z}_t$ of the model $f$ on class $t$: \begin{small} \begin{equation}\label{eq:kl_id} \nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}=f_t-\mathbf{y}_t, \end{equation} \end{small} \begin{small} \begin{equation}\label{eq:kl_ood} \nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})=(f_t-\mathbf{y}^\text{zs}_t)-(f_t-\mathbf{y}_t)=\mathbf{y}_t-\mathbf{y}^\text{zs}_t. \end{equation} \end{small} We have the following observations. First, the two gradients always have opposite signs, so learning one kind of knowledge well will inevitably lead to the deterioration of the other. If $t$ is the true class, \textit{i.e.}, $\mathbf{y}_t=1$, then $\nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}=f_t-1 < 0$ while $\nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})=1-\mathbf{y}^\text{zs}_t > 0$. If $t$ is a false class, \textit{i.e.}, $\mathbf{y}_t=0$, then $\nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}=f_t > 0$ while $\nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})=-\mathbf{y}^\text{zs}_t < 0$. This first observation reveals the conflict between task-specific knowledge learning and supplementary knowledge learning, which motivates us to balance the learning processes of the two.
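The two gradient identities in Eq.~\eqref{eq:kl_id} and Eq.~\eqref{eq:kl_ood} are easy to verify numerically by central finite differences; the sketch below (random placeholder logits and zero-shot prediction) checks both:

```python
import numpy as np

# Finite-difference check of Eq. (5) and Eq. (6): for softmax output f,
# d L_ce / d z_t = f_t - y_t  and  d (L_kl - L_ce) / d z_t = y_t - y_zs_t.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def L_ce(z, y):
    return -np.sum(y * np.log(softmax(z)))

def L_kl(z, yzs):
    f = softmax(z)
    return -np.sum(yzs * np.log(f / yzs))

rng = np.random.default_rng(1)
z = rng.normal(size=4)                 # placeholder logits
t = 2
y = np.eye(4)[t]                       # one-hot ground truth at class t
yzs = softmax(rng.normal(size=4))      # placeholder zero-shot prediction
f = softmax(z)

h = 1e-6
e_t = np.eye(4)[t] * h                 # perturbation of the t-th logit
g_ce = (L_ce(z + e_t, y) - L_ce(z - e_t, y)) / (2 * h)
g_kl = (L_kl(z + e_t, yzs) - L_kl(z - e_t, yzs)) / (2 * h)
g_sup = g_kl - g_ce                    # gradient of (L_kl - L_ce)
```

The check also exhibits the sign conflict: for the true class, `g_ce` is negative while `g_sup` is positive.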
Second, the gradient $\mathbf{y}_t-\mathbf{y}^\text{zs}_t$ for learning the supplementary knowledge is constant across optimization steps, while its magnitude varies from sample to sample. In comparison, the gradient $f_t-\mathbf{y}_t$ for learning the task-specific knowledge keeps changing during training ($f_t$ is updated after each optimization step). The above analysis motivates us to trade off the two kinds of knowledge in a sample-wise and dynamic manner, dubbed ProReg: \begin{equation}\label{eq:totalloss0} \mathcal{L}_\text{ProReg}=\mathcal{L}_\text{ce}+w\cdot (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})= (1-w) \cdot \mathcal{L}_\text{ce} + w\cdot \mathcal{L}_\text{kl}. \end{equation} The sample-wise trade-off weight $w$ aims to prevent the model from being biased towards either the task-specific knowledge or the supplementary knowledge. There are two typical cases that affect the determination of $w$. If $|\nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}|$ is much larger than $|\nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})|$, the task-specific knowledge will dominate the overall optimization direction for this sample; we assign a larger weight $w$ to the term $(\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})$ to emphasize the supplementary knowledge and prevent the model from being biased towards the downstream knowledge. On the contrary, if $|\nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})|$ is much larger than $|\nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}|$, the task-specific downstream knowledge might be neglected; we assign a smaller weight $w$ to $(\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})$ to guarantee the learning of the downstream knowledge. From the above analysis, the trade-off weight $w$ should be proportional to $|{\nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}}|/|{\nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})}|$ to balance the two kinds of knowledge.
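Putting the pieces together, the full objective of Eq.~\eqref{eq:totalloss_f}, with the sample-wise weight $w=f_t/(f_t+\mathbf{y}^\text{zs}_t)$ derived in the next paragraph and the global strength $\alpha$, can be sketched as follows (an illustrative NumPy version with placeholder inputs, not the actual training code):

```python
import numpy as np

# Illustrative sketch of the ProReg objective: the sample-wise weight
# w = f_t / (f_t + y_zs_t), t the ground-truth class, combined as
# (1 - w) * L_ce + alpha * w * L_kl.  Inputs are placeholders.
def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def proreg_loss(z, y, yzs, alpha=2.0):
    f = softmax(z)
    t = int(np.argmax(y))               # ground-truth class index
    w = f[t] / (f[t] + yzs[t])          # sample-wise trade-off weight in (0, 1)
    ce = -np.sum(y * np.log(f))
    kl = -np.sum(yzs * np.log(f / yzs))
    return (1 - w) * ce + alpha * w * kl

rng = np.random.default_rng(0)
z = rng.normal(size=5)               # placeholder model logits
y = np.eye(5)[1]                     # one-hot ground-truth label
yzs = softmax(rng.normal(size=5))    # placeholder zero-shot prompt prediction
loss = proreg_loss(z, y, yzs)
```

Note that $w$ lies strictly between 0 and 1 whenever $f_t$ and $\mathbf{y}^\text{zs}_t$ are positive, so both loss terms always receive a positive weight, and $w$ is recomputed from the current $f_t$ for every sample at every step.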
From Eq.~\eqref{eq:kl_id} and Eq.~\eqref{eq:kl_ood}, we have $|{\nabla_{\mathbf{z}_t} \mathcal{L}_\text{ce}}|/|{\nabla_{\mathbf{z}_t} (\mathcal{L}_\text{kl}-\mathcal{L}_\text{ce})}|=\frac{\mathbf{y}_t-f_t}{\mathbf{y}_t-\mathbf{y}^\text{zs}_t}$. To simplify the analysis, we consider the binary classification case and assume the ground-truth label is $t$, \textit{i.e.}, $\mathbf{y}_t=1$. For the true class $t$, we have $\frac{\mathbf{y}_t-f_t}{\mathbf{y}_t-\mathbf{y}^\text{zs}_t}=\frac{1-f_t}{1-\mathbf{y}^\text{zs}_t} \propto \frac{\mathbf{y}^\text{zs}_t}{f_t}$. For the false class $(1-t)$, we have $\frac{\mathbf{y}_{1\!-\!t}-f_{1\!-\!t}}{\mathbf{y}_{1\!-\!t}-\mathbf{y}^\text{zs}_{1\!-\!t}}=\frac{0-(1-f_t)}{0-(1-\mathbf{y}^\text{zs}_t)} \propto \frac{\mathbf{y}^\text{zs}_t}{f_t}$. From Eq.~\eqref{eq:totalloss0}, we also require $w\in [0,1]$ to keep the coefficients of $\mathcal{L}_\text{ce}$ and $\mathcal{L}_\text{kl}$ positive. Therefore, we design the trade-off weight as \begin{equation} w=\frac{f_t}{f_t+\mathbf{y}^\text{zs}_t} \propto \frac{\mathbf{y}^\text{zs}_t}{f_t}, \end{equation} where $t$ is the ground-truth label. In our implementation, in addition to the sample-wise trade-off weight $w$, we further introduce a hyper-parameter $\alpha>0$ on $\mathcal{L}_\text{kl}$ in Eq.~\eqref{eq:totalloss0} to control the strength of the trade-off globally. Our ProReg is formulated as: \begin{equation}\label{eq:totalloss_f} \mathcal{L}_\text{ProReg}=(1-w) \cdot \mathcal{L}_\text{ce} + \alpha \cdot w\cdot \mathcal{L}_\text{kl}. \end{equation} \section{Experiments} \subsection{Datasets and Implementation Details} \input{images/dataset} \noindent\textbf{BAR}~\cite{LfF} is a real-world image dataset for action classification, where the action is biased toward the place. Examples of the ``climbing'' class are shown in Figure~\ref{fig:dataset}(a): most training images have rocky backgrounds, while the test images have snowy ones.
\noindent\textbf{NICO}~\cite{nico} is designed for Out-of-Distribution (OOD) image classification, with 2 subsets (animal and vehicle), 19 classes, and 188 contexts. For each class, we select 3 contexts for the training set and 3 different ones for the test set, \textit{e.g.}, the training contexts for ``sheep'' are ``in water'', ``on road'' and ``on snow'', while the test contexts are ``at sunset'', ``on grass'' and ``in forest''. Please refer to the Appendix for more details of our setting. \noindent\textbf{PACS}~\cite{PACS} covers the photo, sketch, cartoon and painting domains. Figure~\ref{fig:dataset}(b) shows some examples. The model is trained and validated on any three seen domains and then tested on the remaining unseen domain. \noindent\textbf{DomainNet}~\cite{DomainNet} consists of images from 345 classes covering the ``sketch'', ``real'', ``clipart'' and ``painting'' domains. Here, we use the ``sketch'' domain as the ID dataset, and use ``clipart'' and ``painting'' as the OOD datasets. Please refer to the Appendix for more details. \noindent\textbf{VQA-CP}~\cite{VQACP-GVQA} is proposed to examine the generalizability of VQA models when the language prior varies significantly between the training and test splits. Figure~\ref{fig:dataset}(c) shows answer distributions of the training and test sets, \textit{e.g.}, most ``what color'' questions are answered as ``white'' in the training set but as ``black'' in the test set. \noindent\textbf{Experimental Details.} For ViLT-based models, we followed the original fine-tuning settings in \cite{vilt}, adopting the ViLT-B/32 model with the AdamW~\cite{AdamW} optimizer for $10$ epochs on all datasets. For CLIP-based models, we used the ViT-B/32 backbone and adopted the ViLT fine-tuning settings, including the training epochs, optimizer, warmup schedule and image pre-processing, \textit{etc}. $\alpha$ is set to 2 for all experiments. See the Appendix for more details.
\noindent\textbf{Fine-tuning Baselines.} For CLIP-based models, we compared ProReg with 6 baselines: (1) zero-shot CLIP~\cite{clip}; (2) linear probe~\cite{clip}; (3) prompt tuning, \textit{i.e.}, CoOp~\cite{coop}; (4) an advanced CLIP fine-tuning method, CLIP-Adapter~\cite{gao2021clip}; (5) conventional fine-tuning with a randomly initialized classification head, denoted as FT; and (6) conventional fine-tuning with a classification head initialized from the text encoder through prompting, denoted as FT++. For ViLT-based models, we compared ViLT-ProReg with three baselines: (1) zero-shot ViLT; (2) conventional fine-tuning with a randomly initialized classification head (FT); and (3) conventional fine-tuning with a classification head initialized from the text encoder through prompting (FT++). Please see the Appendix for more details. \noindent\textbf{Evaluation Metrics.} To evaluate unbiasedness, we report the in-domain (ID) accuracy, the out-of-domain (OOD) accuracy, and their harmonic mean on the NICO and DomainNet datasets. Specifically, the ID test set has the same distribution as the training set, while the distribution of the OOD test set differs from the training one. A debiased model should perform well on both ID and OOD testing and, as a result, should also have the highest harmonic mean. \subsection{Main Results}\label{sec:main_result} \input{tables/BAR_DomainNet} \input{tables/NICO} \noindent\textbf{Image Classification.} Results are shown in Tables~\ref{tab:bar_result}, \ref{tab:nico_result}, \ref{tab:pacs_result} and \ref{tab:domain_net_result}. Thanks to the powerful pretraining knowledge and hand-crafted prompts, the zero-shot CLIP model achieves strong performance. In particular, on the NICO Vehicle subset (Table~\ref{tab:nico_result}), zero-shot CLIP attains a harmonic mean of $95.8\%$.
After training on the downstream data, ProReg demonstrates clear advantages over the other fine-tuning methods on the OOD test sets, \textit{e.g.}, on the BAR test set, ProReg outperforms its counterparts by at least $1.2\%$. Not surprisingly, on the NICO and DomainNet datasets, the other fine-tuning methods gain significant improvements in in-distribution accuracy at the cost of out-of-distribution accuracy relative to zero-shot performance, \textit{e.g.}, the recently proposed CoOp~\cite{coop} increases the ID accuracy from $93.3\%$ to $99.3\%$ on the NICO Animal subset (from $58.2\%$ to $70.9\%$ on DomainNet), but the OOD accuracy decreases from $92.5\%$ to $86.1\%$ (from $63.7\%$ to $56.6\%$ on DomainNet). In comparison, our ProReg obtains a good trade-off between ID and OOD performance, thus achieving the best harmonic means. Table~\ref{tab:pacs_result} reports the results on PACS. Zero-shot CLIP shows strong performance on the Photo (P) and Cartoon (C) domains, \textit{e.g.}, it achieves $99.4\%$ on the photo domain. Besides the best average accuracy, our ProReg significantly outperforms the other fine-tuning methods on the difficult unseen domains Sketch (S) and Cartoon (C), \textit{e.g.}, ProReg gains $1.6\%$ and $2.6\%$ on the sketch and cartoon domains compared to the FT++ method. These results show the power of ProReg to overcome diverse domain biases. \input{tables/pacs} \input{tables/vqacp} \noindent\textbf{Visual Question Answering.} Since the input text for VQA varies from sample to sample, it is difficult for the CLIP model to infer and optimize. We therefore implement ProReg only on ViLT models and compare it with the zero-shot prompt model, FT, and FT++.
Although the zero-shot prompt model has no access to the training data, thanks to the pretraining knowledge and our proposed question-to-statement (Q2S) prompt design, it achieves an impressive accuracy of $43.62\%$, making it a strong baseline. ProReg remains strong under language bias, \textit{e.g.}, it achieves an overall accuracy of $54.89\%$, an impressive improvement of $+8.55\%$ over conventional fine-tuning with a classification head initialized through prompting (FT++). More interestingly, conventional promptless fine-tuning (FT) fails on ``Yes/No'' questions, while FT++ fails on ``Other''. In comparison, ProReg performs relatively well on all three question types. \subsection{Ablation Studies}\label{sec:abla} We further conduct ablation studies to answer the following questions. \input{images/alpha} \noindent\textit{Q1: What is the effect of the hyper-parameter $\alpha$?} We conducted experiments on the DomainNet dataset by varying $\alpha$ in Eq.~\eqref{eq:totalloss_f}; the results are shown in Figure~\ref{fig:alpha}. As expected, as $\alpha$ increases, the fine-tuned model is encouraged to learn more from the pretrained knowledge. As a result, the OOD performance increases while the ID accuracy drops. \input{images/distill_main} \noindent\textit{Q2: Can we blend the knowledge using conventional distillation with a constant trade-off weight?} No, its performance is worse than ProReg fine-tuning. We implemented the traditional knowledge distillation strategy by varying the trade-off weight $\lambda$ as described in Eq.~\eqref{eq:totalloss1}. Figure~\ref{fig:distill_main} shows the results on the NICO Vehicle and VQA-CP datasets.
We observed that ProReg achieves superior performance to traditional knowledge distillation, \textit{e.g.}, on the VQA-CP dataset, even with the optimal $\lambda$, traditional KD (blue line) reaches at best $53.15\%$, a $-1.74\%$ gap to ProReg fine-tuning (red line). \input{images/ensemble_main} \noindent\textit{Q3: Can we directly ensemble the zero-shot model and the traditionally fine-tuned model?} No; it not only achieves worse performance than ProReg, but also doubles the inference time and the number of parameters. In Figure~\ref{fig:ensemble_main}, we investigate whether combining the knowledge by ensembling the conventionally fine-tuned model and the zero-shot CLIP model can yield superior results. Specifically, given the prediction $\mathbf{y}^\text{ft}$ of the fine-tuned model and the prediction $\mathbf{y}^\text{zs}$ of the zero-shot model, the ensembled prediction is $ \mathbf{y}^\text{ens} = (1-\lambda)\cdot \mathbf{y}^\text{ft} + \lambda \cdot \mathbf{y}^\text{zs},$ where $\lambda \in [0,1]$. Empirical results on the VQA-CP and NICO Vehicle datasets are plotted in Figure~\ref{fig:ensemble_main}(a), where the highest harmonic mean, obtained with $\lambda=0.5$ (blue line), still underperforms our ProReg (red line). On the VQA-CP dataset, the ensemble reaches its optimal accuracy of $53.48\%$ with $\lambda=0.5$ (blue line), which is still surpassed by our ProReg result ($54.89\%$, red line). Moreover, the ensemble doubles the inference time and the number of parameters. \noindent\textbf{Qualitative results.} \input{images/failure_case_vqa} Figure~\ref{fig:failure_vqa}(c) and (d) show failure cases of the conventionally fine-tuned model and the zero-shot ViLT model. For the first question, ``What is on the bridge?'', the fine-tuned model predicts ``nothing'' while the right answer is ``train''.
We conjecture that dataset bias misleads the fine-tuned model, \textit{i.e.}, many answers in the VQA-CP training set are ``nothing''. The second example is a sports-related question that is highly domain-specific; the general knowledge learned by the zero-shot prompt model can hardly answer it. Meanwhile, our ProReg inherits knowledge from both domains and gives the right answer. Figure~\ref{fig:failure_vqa}(c) and (d) also show failure cases of the conventionally fine-tuned model and the zero-shot ViLT model on the BAR dataset. In Figure~\ref{fig:failure_vqa}(c), a ``climb'' image is mis-classified as ``vault'' by the conventionally fine-tuned model, which can be attributed to dataset bias: the backgrounds of most ``climb'' training images are rocks. In Figure~\ref{fig:failure_vqa}(d), a ``vault'' image is recognized as ``dive'' by the zero-shot ViLT model; we conjecture that knowledge of pole vaulting is not common in the pretrained domain. In both cases, ProReg predicts the right answers by inheriting knowledge from both the downstream data and the pretraining.
\section{Introduction} \label{sec:intro} Expanding and characterizing the population of known exoplanets with measured masses and orbital periods is crucial to painting a more complete picture of planet formation and evolution. A census of diverse exoplanets sheds light on worlds radically different from Earth and can provide insight into how these planets---and those orbiting the Sun---formed. Ground-based radial velocity (RV) surveys measure the Doppler shifts of stellar spectra to discover exoplanets and characterize their orbits and masses. These surveys have provided landmark discoveries that shaped our understanding of the formation and architectures of other worlds \citep[e.g.,][]{Mayor95,Marcy02,Tamuz08}. Doppler planet searches take time to accumulate the time series measurements that trace out planetary orbits. The Keck Planet Survey \citep{Cumming08} used eight years of RVs from Keck-HIRES \citep{Vogt94} to make the first broad measurement of giant planet occurrence ($\ensuremath{M \sin i} \geq 0.1 M_J$). This survey discovered an increase in the abundance of giant planets for orbits near the water-ice line and found that about 10\% of Sun-like stars have giant planets with semi-major axes of $<$3 au. The survey only reported planet detections for orbital periods shorter than 2000 days, the observational baseline of the survey. Extrapolating based on the detection of partial orbits, \cite{Cumming08} estimated that $\sim$20\% of such stars have a giant planet orbiting within 20 au. Other teams of astronomers have surveyed the Northern and Southern skies in parallel with the Keck search. \cite{Mayor11} used 8 years of precise HARPS RVs supplemented by additional RVs from CORALIE to measure occurrence patterns in the population of giant planets that are similar to those described above. They found that the planet mass function is ``bottom heavy''.
That is, low-mass planets (0.3--30 $M_\earth$) are significantly more common than giant planets, a finding consistent with measurements from Keck Observatory by \cite{Howard10_Science}. Since then, the HARPS team has continued to discover increasingly long-period and low-mass planets \citep{Udry17, Rickman19}. Two other `legacy' planet searches have contributed significantly to our knowledge of giant planets. \cite{Wittenmyer20} used data from a subset of the stars surveyed by the 18-year Anglo-Australian Planet Search, which has also uncovered a number of cold giant planets \citep{Wittenmyer17, Kane19}, to measure a significant increase in giant planet occurrence at $\sim$1 au and a constant occurrence for orbits in the range $\sim$1--6 au. Similarly, the McDonald Observatory planet search has been operating for more than 20 years using the 2.7-m Harlan J. Smith Telescope, and has contributed valuable discoveries of long-period giant planets \citep[e.g.,][]{Robertson2012, Endl2016, Blunt19}. We are now in the fourth decade of Doppler planet searches. As we begin to discover planets with orbital periods comparable to Saturn's, we can answer questions that require a rigorous accounting of giant planets spanning a large range of orbital distances. What is the mass versus semi-major axis distribution of planets out to 10 au? How abundant are cold gas giants beyond the water-ice line, and what can this abundance tell us about planet formation across protoplanetary disks? The California Legacy Survey (CLS, Rosenthal et al. 2021) is uniquely suited for this work. As an unbiased radial velocity survey of 719 stars over three decades, the CLS is an excellent sample for a variety of occurrence measurements, particularly for cold gas giants. In this paper, we explore giant planet occurrence as a function of orbital separation. In Section 2, we review the star and planet catalog of the California Legacy Survey.
In Section 3, we describe our methods for computing planet occurrence. Section 4 describes the patterns of planet occurrence that we observe in the sample. In Section 5, we discuss our findings and their context. We summarize our work in Section 6. \section{Survey Review} \label{sec:survey} The California Legacy Survey is a Doppler search for planets orbiting a well-defined sample of nearby FGKM stars conducted by the California Planet Search team \citep[CPS;][]{Howard10}. Paper I in this series (Rosenthal et al. 2021) describes the CLS in detail, including the stellar sample, the search methodology, and the resulting planet sample upon which this paper and forthcoming works in the CLS paper series build. The CLS stellar sample was selected specifically to make the measurements reported here---planet occurrence measurements, especially of giant planets with orbits out to 10 au and beyond---and it approximates a random sample of nearby stars. In particular, stars were selected for CLS observations independent of whether planets were known to orbit them. Stars were also selected independent of their metallicity or other factors that might make them more or less likely to harbor planets. CLS builds on Doppler measurements from the Keck Planet Search \citep{Cumming08}, a touchstone Doppler survey of 585 stars observed with HIRES at the W.\,M.\ Keck Observatory during 1996--2004. We continued to observe those stars and an additional 134 stars at Keck Observatory through 2020. CLS also includes observations of a subset of these stars made with the Hamilton spectrometer at Lick Observatory during 1988--2011, high-cadence Keck-HIRES observations of 235 magnetically inactive stars as part of the Eta-Earth Survey \citep{Howard10_Science}, and high-cadence Lick-APF observations of 135 of those stars \citep{Fulton16, Hirsch21}. The average star has been observed for 22 years and has 71 RVs with a precision of $\sim$2\,m s$^{-1}$\xspace. 
While these stars do not have homogeneous observing histories, our search methodology accounts for this by incorporating the search completeness of each star's individual dataset. (A Doppler survey that is completely homogeneous in the number, precision, and temporal spacing of measurements is infeasible given the three-decade history of this planet search---indeed, this survey spans an era longer than the time during which extrasolar planets orbiting Sun-like stars have been known!) By the metric of ``Doppler survey \'etendue'' (number of stars surveyed $\times$ typical time series duration), CLS is the largest planet search to date at the $\sim$m s$^{-1}$\xspace\ level. Our search methodology (described in Rosenthal et al. 2021) involves an automated, iterative periodogram-based search for Keplerian signals with uniform vetting to identify false positives. This methodology detected 177 planets orbiting the 719 stars in the CLS stellar sample. The algorithm is sensitive to orbital periods much longer than the baseline of our dataset, with the longest-period signals detected as partial orbits. The search was also sensitive to orbital segments seen only as linear and parabolic trends in an RV time series. Only six such detected trends in our sample are not associated with known stellar binaries and are potentially consistent with planetary-mass companions. Thus, nearly all orbital signals were resolved or partially resolved as Keplerian signals. To characterize survey completeness for each star in the survey, we conducted injection-recovery tests of synthetic Doppler planet signals over a range of injected masses, orbital periods, and orbital geometries. Detected planets and CLS survey completeness are shown in Figure \ref{fig:sample}. We refer the reader to Rosenthal et al. (2021) for the full stellar sample and planet catalog. \begin{figure*}[ht!]
\begin{center} \includegraphics[width = \textwidth]{all_contours.pdf} \caption{California Legacy Survey planet catalog and survey-averaged search completeness contours in semi-major axis and \ensuremath{M \sin i}. The 3$\%$ and 1$\%$ search completeness contours are highlighted in white.} \label{fig:sample} \end{center} \end{figure*} The CLS stellar sample has a median metallicity of [Fe/H]\xspace = 0.0\,dex, a median stellar mass of 1.0\,\ensuremath{M_{\odot}}\xspace, and a small number of evolved stars (subgiants). These are good heuristics for verifying that we successfully constructed a blind occurrence survey, since a bias toward known giant planet hosts could manifest as a metal-rich sample \citep{Fischer05, Santos04a}, a particularly massive sample, or an excess of evolved stars \citep{Johnson11}. \section{Methods} \label{sec:occurrence} The primary goal of this work is to measure planet occurrence. Many studies of RV or transit surveys use the intuitive occurrence measurement method known as ``inverse detection efficiency'' \citep{Howard12, Petigura13b}. In this procedure, one estimates occurrence in a region of parameter space by counting the planets found in that region, with each planet weighted by the local search completeness. One can measure the search completeness map of a survey by injecting many synthetic signals into each dataset and computing the fraction of signals in a given region that are recovered by the search algorithm in use. Inverse detection efficiency is actually a specific case of a Poisson likelihood method, in which one models an observed planet catalog as the product of an underlying Poisson process and an empirical completeness map \citep{Foreman-Mackey14}. This can be done with a parametric occurrence rate density model, like a broken power law, or a non-parametric density model, with a piecewise-constant step function.
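As a concrete illustration, the inverse detection efficiency estimator can be written in a few lines. The completeness map and detections below are invented toy values, not CLS data; only the 719-star sample size is taken from the survey:

```python
import numpy as np

def inverse_detection_efficiency(detections, completeness, n_stars):
    """Occurrence estimate: each detected planet counts as 1/Q(omega),
    i.e., it is up-weighted by the local search completeness."""
    q = np.array([completeness(a) for a in detections])
    return np.sum(1.0 / q) / n_stars

# toy completeness map: detectability falls with semi-major axis (au)
completeness = lambda a: np.clip(1.0 - 0.25 * np.log10(a / 0.05), 0.01, 1.0)
detections = [0.5, 1.2, 4.0, 9.0]   # semi-major axes of detected planets (au)
rate = inverse_detection_efficiency(detections, completeness, n_stars=719)
```

Because each weight $1/Q$ exceeds unity, the estimate is always at least the raw detection count divided by the number of stars surveyed.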
In this paper, we used the Poisson likelihood method to model the occurrence of giant planets, taking measurement uncertainty into account. We used the hierarchical Bayesian methodology outlined in \cite{Hogg10} and \cite{Foreman-Mackey14} to evaluate our occurrence likelihood. Given an observed population of planets with orbital and \ensuremath{M \sin i}\ posteriors $\{\bm{\omega}\}$ and associated survey completeness map $Q(\bm{\omega})$, and assuming that our observed planet catalog is generated by a set of independent Poisson process draws, we evaluated a Poisson likelihood for a given occurrence model $\Gamma(\bm{\omega} | \bm{\theta})$, where $\Gamma$ is an occurrence density $\frac{\mathrm{d}^2N}{\mathrm{dln}(a)\mathrm{dln}(\ensuremath{M \sin i})}$ and $\bm{{\theta}}$ is a vector of model parameters. The observed occurrence $\hat{\Gamma}(\bm{\omega} | \bm{\theta})$ of planets in our survey can be modeled as the product of the measured survey completeness and an underlying occurrence model, \begin{equation} \hat{\Gamma}(\bm{\omega} | \bm{\theta}) = Q(\bm{\omega})\Gamma(\bm{\omega} | \bm{\theta}). \\ \end{equation} The Poisson likelihood for an observed population of objects is \begin{equation} \mathcal{L} = e^{-\int \hat{\Gamma}(\bm{\omega} | \bm{\theta}) \,d\bm{\omega}} \prod_{k=1}^{K} \hat{\Gamma}(\bm{\omega}_k | \bm{\theta}),\\ \end{equation} where $K$ is the number of observed objects, and $\bm{\omega}_k$ is a vector of parameters that completely describe the $k$th planet's orbit. In our case, the two relevant parameters are $\ensuremath{M \sin i}$ and semi-major axis $a$, taken from the broader set that includes eccentricity, time of inferior conjunction, and argument of periastron. The Poisson likelihood can be understood as the product of the probability of detecting an observed set of objects (the product term in Equation 2) and the probability of observing no additional objects in the considered parameter space (the exponentiated integral). 
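For a piecewise-constant rate model in a single dimension, the likelihood of Equation 2 reduces to a simple expression: the integral becomes a completeness-weighted sum of bin heights, and each detection contributes $\ln\hat{\Gamma}$ evaluated in its bin. A toy one-dimensional sketch (semi-major axis only, ignoring the mass axis and measurement uncertainty; all numbers are illustrative):

```python
import numpy as np

def poisson_loglike(theta, bin_edges, completeness, det_axes):
    """log L = sum_k ln[Q(a_k) Gamma(a_k)] - integral of Q(a) Gamma(a) dln(a),
    with Gamma a step function of height theta[i] over each ln(a) bin."""
    ln_edges = np.log(bin_edges)
    widths = np.diff(ln_edges)                      # dln(a) per bin
    centers = np.exp(0.5 * (ln_edges[:-1] + ln_edges[1:]))
    # expected number of detections (completeness evaluated at bin centers)
    expected = np.sum(completeness(centers) * theta * widths)
    # each detected planet contributes ln(Q * Gamma) in its bin
    idx = np.searchsorted(bin_edges, det_axes) - 1
    observed = np.sum(np.log(completeness(np.asarray(det_axes)) * theta[idx]))
    return observed - expected

edges = np.array([0.1, 1.0, 10.0])   # two bins in semi-major axis (au)
Q = lambda a: np.clip(1.0 - 0.2 * np.log10(np.asarray(a) / 0.1), 0.05, 1.0)
ll = poisson_loglike(np.array([0.05, 0.15]), edges, Q, det_axes=[0.5, 2.0, 6.0])
```

Maximizing this quantity over the bin heights recovers the non-parametric occurrence estimate; sampling it (e.g., with \texttt{emcee}) yields the posteriors shown later.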
Equations 1 and 2 serve as the foundation for our occurrence model but do not take into account uncertainty in measurements of planetary orbits and minimum masses. To account for these uncertainties, we used \texttt{RadVel} and \texttt{emcee} to empirically sample the orbital posteriors of each system \citep{Fulton18, DFM13}. We hierarchically modeled the orbital posteriors by summing our occurrence model over many posterior samples for each planet in our catalog. The hierarchical Poisson likelihood is therefore approximated as \begin{equation} \mathcal{L}\approx e^{-\int \hat{\Gamma}(\bm{\omega} | \bm{\theta}) \,d\bm{\omega}} \prod_{k=1}^{K} \frac{1}{N_k} \sum_{n=1}^{N_k} \frac{\hat{\Gamma}(\bm{\omega}_k^n | \bm{\theta})}{p(\bm{\omega}_k^n | \bm{\alpha})},\\ \end{equation} where $N_k$ is the number of posterior samples for the $k$th planet in our survey and $\bm{\omega}_k^n$ is the $n$th sample of the $k$th planet's posterior. $p(\bm{\omega} | \bm{\alpha})$ is our prior on the individual planet posteriors. We placed linear-uniform priors on \ensuremath{M \sin i}\ and log-uniform priors on $a$. We used \texttt{emcee} to sample our hierarchical Poisson likelihood. We used two different occurrence frameworks to model our planet population. The first is a non-parametric model across bins uniformly spaced in ln(\ensuremath{M \sin i}) and ln($a$), with a set of steps $\bm{\Delta}$ of heights $\bm{\theta}$. We define this framework with the occurrence function \begin{equation} \Gamma_N(\bm{\omega} | \bm{\theta}) = \theta_n | \bm{\omega} \in \Delta_n.
\\ \end{equation} The second framework is a broken power law as a function of semi-major axis, defined with the function \begin{equation} \Gamma_B(a | C, \beta, a_0, \gamma) = C (a/\mathrm{au})^{\beta}(1 - e^{-(a/a_0)^{\gamma}}), \\ \end{equation} where $C$ is a normalization constant, $\beta$ is the occurrence power law index beyond the breaking point, $a_0$ determines the semi-major axis location of the breaking point, and $\beta + \gamma$ is the power law index within the breaking point. This model assumes a giant planet mass function that does not change with respect to semi-major axis. We fit this model to our population in order to explore whether giant planet occurrence falls off beyond the water-ice line. \section{Results} \label{sec:results} \begin{figure*}[ht!] \begin{center} \includegraphics[width = 0.8\textwidth]{hist_11x1_1014_fancy_mode.pdf} \caption{Non-parametric occurrence rates for semi-major axes of 0.03--30 au and planets with minimum masses (\ensuremath{M \sin i}) of 30--6000 $M_\earth$, assuming uniform occurrence across ln(\ensuremath{M \sin i}). The dashed blue line represents a planet count in each semi-major axis bin without correcting for completeness; bold lines and dots show the maximum posterior values for the Poisson likelihood model; vertical lines represent 15.9--84.1$\%$ confidence intervals (except for the last bin, which is not separated from zero and shows 0--68.2$\%$); and transparent steps show draws from the occurrence posterior. We see a clear enhancement around 1--10 au, and a tentative falloff beyond that range.} \label{fig:semi_dist} \end{center} \end{figure*} \begin{figure}[ht!] \includegraphics[width = 0.49\textwidth]{hist_11x1_1014_hierarchy_and_broken.pdf} \caption{Our broken power law model, juxtaposed with our non-parametric model and measurements from \cite{Fernandes19} and \cite{Wittenmyer20}. The transparent curves represent draws from the broken power law posterior.
We find that the power law index beyond the break is $\sim$2.5$\sigma$-separated from zero, implying an occurrence falloff beyond the water-ice line. \citet{Cumming08} performed a power-law fit to the occurrence rates of planets orbiting only within 3 au; the light dotted blue line represents an extrapolation to wider separations.} \label{fig:power_law} \end{figure} \begin{figure}[ht!] \includegraphics[width = 0.5\textwidth]{corner_broken_powerlaw.pdf} \caption{Broken power law posterior. $C$ is a normalization constant, $\beta$ is the power law index beyond the break, $a_0$ determines the location of the break in units of au, and $\beta + \gamma$ is the power law index within the break. The index beyond the break $\beta$ is $\sim99.1\%$-separated from zero.} \label{fig:corner} \end{figure} \subsection{Enhancement for giant planets} \label{sec:sub-jovians} Figure \ref{fig:semi_dist} shows occurrence rates as a function of semi-major axis for planets with masses between 30 $M_\earth$\ and 6000 $M_\earth$, derived using the non-parametric model described in \S \ref{sec:occurrence} and assuming uniform occurrence across ln(\ensuremath{M \sin i}). We confirmed the previous result from \citet{Wright09}, \citet{Cumming08}, \citet{Fernandes19}, and \citet{Wittenmyer20} that giant planet occurrence is enhanced by a factor of four beyond 1 au compared to within 1 au. Specifically, planets more massive than 30 $M_\earth$\ are 2--4 times more common at orbital distances of 1--3 au than at 0.1--0.3 au. Using our broken power law model, we find a median power law slope inside the break of 0.72$^{+0.16}_{-0.20}$, which is 2 $\sigma$ higher than the power law slope measured by \citet{Cumming08} (0.26$\pm$0.1).
This difference likely arises because the single power law model of \citet{Cumming08}, which was limited to planets orbiting inside 3 au, is pulled to lower values by neglecting a flattening or turnover in occurrence at long orbital periods. \subsection{Distribution of giant planets beyond 3 au} \label{sec:semi-dist} Due to low completeness beyond our observational baselines, our occurrence results beyond 10 au are highly uncertain. However, we can estimate occurrence trends with the broken power law model described in \S \ref{sec:occurrence}. Figure \ref{fig:power_law} shows the broken power law results juxtaposed with the non-parametric results, and Figure \ref{fig:corner} presents the posteriors for the parametric model parameters. The medians and 68th-percentile credible intervals for the broken power law model are listed in Table \ref{tab:broken}. Both assume uniform occurrence across ln(\ensuremath{M \sin i}). We find that 99.4\% of the posterior samples are consistent with a plateauing or declining occurrence rate beyond a peak around $3.6^{+2.0}_{-1.8}$ au. We find that the power law index beyond the peak is $\beta = -0.86^{+0.41}_{-0.41}$. This suggests a much shallower decline relative to the estimates of \cite{Fernandes19} but is also potentially discrepant with the constant prediction of \cite{Wittenmyer20}, as our model still measures a falloff. The results of our non-parametric fit are less clear, with integrated occurrence rates of $14.1^{+2.0}_{-1.8}$ and $8.9^{+3.0}_{-2.4}$ giant planets per 100 stars between 2--8 au and 8--32 au, respectively. This suggests a fall-off in occurrence beyond 8 au with 1.5$\sigma$ confidence.
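For reference, the broken power law of Equation 5 can be evaluated directly with the median parameters listed in Table \ref{tab:broken}; the sketch below reproduces the qualitative shape, rising roughly as $a^{\beta+\gamma}$ inside the break and declining as $a^{\beta}$ beyond it (the normalization is taken at face value and carries the units of the fit):

```python
import numpy as np

def gamma_broken(a, C=350.0, beta=-0.86, a0=3.6, gamma=1.59):
    """Broken power-law occurrence density, Eq. 5:
    Gamma_B(a) = C * a^beta * (1 - exp(-(a / a0)^gamma))."""
    a = np.asarray(a, dtype=float)
    return C * a**beta * (1.0 - np.exp(-(a / a0)**gamma))

a_grid = np.logspace(-1.5, 1.5, 400)   # 0.03--30 au
dens = gamma_broken(a_grid)
peak_au = a_grid[np.argmax(dens)]      # the density peaks at a few au
```

The peak of the density sits near the fitted break location $a_0$, consistent with the quoted $3.6^{+2.0}_{-1.8}$ au.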
\begin{deluxetable}{lr} \tabletypesize{\large} \tablecaption{Broken Power-Law Model Parameters\label{tab:broken}} \tablehead{ \colhead{Parameter} & \colhead{Value} } \startdata $C$ & $350^{+580}_{-220}$ \\ $\beta$ & $-0.86^{+0.41}_{-0.41}$ \\ $a_0$ & $3.6^{+2.0}_{-1.8}$ au \\ $\gamma$ & $1.59^{+0.36}_{-0.33}$ \\ \enddata \end{deluxetable} \subsection{Comparing sub- and super-Jovians} \label{sec:mass-function} Figure \ref{fig:semi_dist_sub} compares non-parametric occurrence rates for giant planets more and less massive than 300 $M_\earth$. We find a quantitatively similar occurrence enhancement around 1--10 au for both the sub-Jovian-mass and Jovian-mass planets. However, we lack the sensitivity to measure the occurrence rate of sub-Jovian mass planets beyond 10 au, to assess whether they exhibit the fall-off in occurrence at large orbital separations seen when examining occurrence across both mass ranges. The sub-Jovian planets are more common than the super-Jovian planets across a wide range of separations, particularly beyond the water ice line. We find a similar enhancement for sub-Saturns below 150 $M_\earth$, implying that this occurrence enhancement is independent of planet mass. \begin{figure}[ht!] \includegraphics[width = 0.49\textwidth]{hist_super_sub_Jupiters.pdf} \caption{A comparison between sub- and super-Jovian occurrence. Steps and dots show maximum posterior values, and vertical lines show 15.9--84.1$\%$ confidence intervals. The sub-Jovians are consistently more common than the super-Jovians, and both populations are enhanced beyond 1 au. Combining these two populations produces the same trends seen when we assume uniform occurrence across all masses.} \label{fig:semi_dist_sub} \end{figure} We more concretely measured occurrence as a function of mass by performing a non-parametric fit to our sample within 1--5 au. 
Figure \ref{fig:beyond-ice-mass-function} shows occurrence as a function of \ensuremath{M \sin i}\ within 30--3000 $M_\earth$, in four steps. This figure shows that our assumption of a uniform ln(\ensuremath{M \sin i}) distribution beyond the ice line is valid up to 900 $M_\earth$, but the distribution falls off with $\sim$2$\sigma$ significance above 900 $M_\earth$. If this is also true beyond 5 au, where low completeness prevents us from making a similar measurement, then we may be underestimating broad giant planet occurrence in our lowest-completeness region of parameter space, beyond 10 au. This is because our only detections in that regime are more massive than 300 $M_\earth$, and all but one of them are more massive than 900 $M_\earth$. \begin{figure}[ht!] \includegraphics[width = 0.49\textwidth]{beyond_ice_line_mass_function.pdf} \caption{Planet occurrence within 1--5 au with respect to \ensuremath{M \sin i} . Steps and dots show maximum posterior values, and vertical lines show 15.9--84.1$\%$ confidence intervals. The mass function is constant within 30--900 $M_\earth$, and falls off beyond 900 $M_\earth$ .} \label{fig:beyond-ice-mass-function} \end{figure} \subsection{Occurrence with respect to stellar mass and metallicity} \label{sec:mass-and-metallicity} In addition to measuring occurrence with respect to semi-major axis and \ensuremath{M \sin i}, we measured the broad occurrence rate of giant planets more massive than 100 $M_\earth$\ and within 1--5 au with respect to host-star mass and metallicity. \added{We chose a lower limit of 100 $M_\earth$\ instead of 30 $M_\earth$\ in order to restrict our analysis to search-complete regions within 1--5 au, since 30 $M_\earth$\ planets are effectively undetectable beyond 3 au.} For each of these two stellar properties, we computed occurrence across six divisions, in steps of 0.2 \ensuremath{M_{\odot}}\xspace across 0.3--1.5 \ensuremath{M_{\odot}}\xspace and 0.15 dex across -0.5--0.4 dex respectively. 
Figure \ref{fig:giants_mass} shows occurrence with respect to host-star mass, while Figure \ref{fig:giants_metallicity} shows occurrence with respect to host-star [Fe/H]. Both of our measurements agree with prior results. \cite{Johnson10}\added{, whose stellar sample was excluded from CLS due to its bias toward giant planet hosts,} measured giant planet occurrence across stellar mass and found an increase in occurrence with increasing stellar mass beginning near 1 \ensuremath{M_{\odot}}\xspace. \added{\cite{Wittenmyer20b} independently found an increase in giant planet occurrence beyond 1 \ensuremath{M_{\odot}}\xspace.} We see the same phenomenon in our sample, as presented in Figure \ref{fig:giants_mass}. Similarly, \cite{Fischer05} found that giant planet occurrence increases with increasing [Fe/H] beyond 0.1 dex, \added{as did \cite{Reffert15} and \cite{Wittenmyer16}.} We see the same transition near 0.1 dex in Figure \ref{fig:giants_metallicity}. \begin{figure}[ht!] \includegraphics[width = 0.49\textwidth]{stellar_mass_occurrence.pdf} \caption{Occurrence of giant planets more massive than 100 $M_\earth$\ and within 1--5 au as a function of host star mass, in six splits. Steps and dots show maximum posterior values, and vertical lines show 15.9--84.1$\%$ confidence intervals. There is an increase in occurrence beyond roughly 1 \ensuremath{M_{\odot}}\xspace, which is in agreement with \cite{Johnson10}'s original measurement of giant planet occurrence versus host-star mass.} \label{fig:giants_mass} \end{figure} \begin{figure}[ht!] \includegraphics[width = 0.49\textwidth]{stellar_metallicity_occurrence.pdf} \caption{Occurrence of giant planets more massive than 100 $M_\earth$\ and within 1--5 au as a function of host star metallicity, in six splits. Steps and dots show maximum posterior values, and vertical lines show 15.9--84.1$\%$ confidence intervals. 
There is a clear increase in occurrence beyond roughly 0.1 dex, which is in agreement with \cite{Fischer05}'s original report of a correlation between giant planet occurrence and host-star metallicity.} \label{fig:giants_metallicity} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Comparison to previous RV surveys} \label{sec:rv-surveys} The last few years have seen a number of RV studies examining the population of long-period planets. \cite{Fernandes19} probed planet occurrence as a function of orbital period by extracting planetary minimum masses and periods, as well as completeness contours, from a catalog plot shown in \cite{Mayor11}, which presented a HARPS \citep{Mayor03} and CORALIE \citep{Baranne96} blind radial velocity survey of 822 stars and 155 planets over 10 years (corresponding to a 4.6 au circular orbit around a Solar-mass star). \cite{Mayor11}, who did not publish their HARPS and CORALIE RVs, measured giant planet occurrence as a function of orbital period out to 4000 days, in the range of the water-ice line. \cite{Fernandes19} pushed out to low-completeness regimes and estimated a sharp falloff in occurrence beyond the water-ice line. They measured an integrated occurrence rate of 1.44$\pm$0.54 giant planets (0.1--20 $M_J$) per 100 stars for separations between 3.8 and 7.1 au. Our results indicate a much higher occurrence rate for the same planets at those separations; 15.5$^{+3.2}_{-3.0}$ giant planets per 100 stars. The treatment of partial orbits in \cite{Mayor11} is unclear, and they only measured occurrence with respect to orbital period out to 3000 days ($\sim$4 au). If \cite{Mayor11} under-reported partial orbits beyond this period in their sample or overestimated sensitivity to partial orbits, then that could explain the large discrepancy between this work and \citet{Fernandes19} at separations beyond 10 au. 
In contrast, \cite{Wittenmyer20}, which drew from the Anglo-Australian Planet Search \citep{Tinney01} to construct a blind survey of 203 stars and 38 giant planets over 18 years, found that giant planet occurrence is roughly constant beyond the water-ice line, out to almost 10 au. \cite{Wittenmyer20} reports an occurrence rate of $6.9^{+4.2}_{-2.1}$ giant planets \deleted{(0.1--20 $M_J$)} \added{> 0.3 $M_J$} per 100 stars with periods between 3000 and 10,000 days ($\approx$4--9 au). Our integrated occurrence rate in the same region of parameter space is slightly higher at $12.6^{+2.6}_{-2.0}$ giant planets per 100 stars but it is consistent to within 1 $\sigma$ with the \cite{Wittenmyer20} result. \subsection{Comparison to Kepler survey} \label{sec:kepler} \cite{Foreman-Mackey16} performed an automated search for long-period transiting exoplanets in a set of archival \textit{Kepler} light curves of G and K stars. For planets between 1.5--9 au and 0.01--20 $M_J$ and using a probabilistic mass-radius relationship drawn from \cite{ChenK16}, they found an occurrence rate density of $\frac{\mathrm{d}^2N}{\mathrm{dln}(a)\mathrm{dln}(M)} = 0.068 \pm 0.019$. We applied our occurrence model to the same parameter space and found $\frac{\mathrm{d}^2N}{\mathrm{dln}(a)\mathrm{dln}(\ensuremath{M \sin i})} = 0.0173 \pm 0.0022$. The \textit{Kepler} measurement is $2.66\sigma$ separated from our measurement. We are far less sensitive to planets in the 0.01--0.1 $M_J$ regime than \cite{Foreman-Mackey16}; this may partly explain the discrepancy in our results. \subsection{Comparison to direct imaging surveys} \label{sec:direct-imaging} RV surveys have recently begun to approach baselines long enough to detect and place limits on the frequency of planets like those detected by direct imaging. One caveat is that direct imaging surveys usually target stars younger than 100 Myr, while RV surveys generally target stars older than 1 Gyr. 
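The $2.66\sigma$ offset between the \textit{Kepler} rate density and ours, quoted above, follows from simple error propagation, assuming independent Gaussian uncertainties:

```python
import math

kepler, kepler_err = 0.068, 0.019   # Foreman-Mackey et al. (2016)
cls, cls_err = 0.0173, 0.0022       # this work, same parameter space

# Quadrature-combined uncertainty on the difference of the two rates.
separation = abs(kepler - cls) / math.hypot(kepler_err, cls_err)
print(f"{separation:.2f} sigma")    # ~2.65 sigma with these rounded inputs
```

The small difference from the quoted 2.66$\sigma$ reflects rounding of the published rate densities.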
Young planets retain significant heat from their formation and are bright in the infrared wavelengths covered by direct imaging surveys. However, young stars also tend to be active and rapidly rotating, which makes precise RV work difficult. Because of this, there is minimal overlap between planets that have been detected by direct imaging and planets that have been detected by radial velocity measurements. We can still compare rates across these detection methods by making the assumption that giant planet occurrence does not change as host stars age beyond $\sim$10 Myr, once protoplanetary disks have dissipated. We compared our occurrence model to the results of two direct imaging surveys of nearby stars. \cite{Biller13} imaged 80 stars in nearby moving groups and detected a small number of brown dwarf companions but no planetary-mass companions. They used stellar evolution and planet formation models to estimate constraints on cold giant occurrence from their nondetections and sensitivity. More recently, \cite{Nielsen19} imaged 300 stars and detected six planets and three brown dwarfs. Figure \ref{fig:di_comp} compares these results to our occurrence measurements in their respective regions of parameter space. Our measurements are compatible with the limits placed on planets with masses 1--20 $M_J$ and separations between 10--50 au by \cite{Biller13}, depending on their assumed stellar evolutionary model that determines the expected brightness of young giant planets. Our measurement for planets with masses 5--14 $M_J$ orbiting between 10--100 au is in excellent agreement with the results of \cite{Nielsen19}. The only shared quality of our modeling methods is a Poisson counting likelihood. With the caveat of small number statistics, this is a remarkable benchmark for comparing exoplanet occurrence across independent search methods. \begin{figure*}[ht!] 
\includegraphics[width = 0.49\textwidth]{biller_comp.pdf} \includegraphics[width = 0.49\textwidth]{nielsen_comp.pdf} \caption{Occurrence rate comparison to direct imaging studies. \emph{Left}: Frequency of cool, massive companions with the direct imaging study of \citet{Biller13}. While they did not detect any planets in their survey they were able to put upper limits on the frequency of companions using assumptions of either hot-start (COND) or cold-start (DUSTY) models for planetary formation and infrared brightness. \emph{Right}: Same as left, but compared with the results of \citet{Bowler15} and \citet{Nielsen19} for the mass and separation limits specified in the x-axis label. The gray shading represents the 95$\%$ upper limit on occurrence from \citet{Bowler15}.} \label{fig:di_comp} \end{figure*} \begin{figure*}[ht!] \includegraphics[width = 0.49\textwidth]{cassan_comp.pdf} \includegraphics[width = 0.49\textwidth]{clanton_comp.pdf} \caption{\emph{Left:} Occurrence rate comparison with the microlensing survey of \citet{Cassan12}. We plot the 1 $\sigma$ limits from \citet{Cassan12} as the shaded blue region. The occurrence rate posterior from this work is plotted in black. \emph{Right:} Occurrence rate comparison with the combined analysis of \citet{Clanton16}. The occurrence rate posterior from this work is plotted in black. The 1 $\sigma$ limits from \citet{Clanton16} are indicated by the shaded red region. \citet{Clanton16} combine constraints from direct imaging, microlensing, and previous radial velocity studies.} \label{fig:ml_comp} \end{figure*} \subsection{Comparison to gravitational microlensing surveys} \label{sec:microlensing} We compare our model to the microlensing surveys of \cite{Cassan12} and \cite{Clanton16}. Like all gravitational lensing surveys, these studies assume a broad prior for stellar type based on Galactic observations, a prior that peaks in the M dwarf range. 
Our planet-hosting stars have a much higher median mass than this range, but since the gravitational lensing estimates come purely from a Galactic model prior, we chose to perform this broad comparison across stellar masses with the knowledge that the mass range for the lensing numbers is poorly constrained. Figure \ref{fig:ml_comp} shows that our estimates agree with broad constraints from the pure lensing survey \citep{Cassan12}. On the other hand, the constraints of \cite{Clanton16} strongly disagree with our planet occurrence measurement in the same parameter box. This may be because that study has a significantly better constrained sample of M dwarfs, which would separate their stellar sample from our broader FGKM sample. \deleted{\cite{Montet14} performed an RV survey of M dwarfs} \added{\cite{Endl06}, \cite{Bonfils13}, and \cite{Montet14} performed independent RV surveys of M dwarfs} and all showed that M dwarfs have a significantly lower giant planet occurrence rate than more massive stars. This implies that a survey of M dwarfs should yield a lower giant planet occurrence rate than a broad survey of FGKM stars, and this is exactly what we see in our comparison to \cite{Cassan12}. \subsection{Implications for planet formation} \label{sec:formation} \citet{Cumming08} first identified an enhancement in the occurrence rate of giant planets beyond orbital periods of $\sim$300 days. We expect such enhancements based on planetary migration models \citep{Ida04a}. The orbital period distribution in \cite{Cumming08} predicted a smooth rise in occurrence toward longer orbital periods, but we observed a sharp transition around 1 au, as seen in Figure \ref{fig:semi_dist}. \citet{Ida_Lin08_v} later suggested that additional solid materials due to ices in the protoplanetary disk could augment the formation of gas giant planets and cause a rapid rise in the occurrence rate of these planets beyond the water-ice line.
If increased solids near and beyond the ice line cause a sharp rise in the occurrence rate, then we might expect this rise to be better defined when expressed in a unit more closely related to the temperature in the protoplanetary disk. In Figure \ref{fig:insol_hist}, we plot the occurrence rate as a function of stellar light intensity relative to Earth. The occurrence rate with respect to flux is qualitatively similar to the rate with respect to orbital separation. We do not see strong evidence that the occurrence rate enhancement is any more localized in terms of stellar light intensity relative to Earth. The decline in occurrence for fluxes less than that received by the Earth from the Sun looks more convincing, but all except for the one bin with the highest occurrence near 1 au are consistent with a constant occurrence rate. We can separate the puzzle of gas giant formation into two components: the growth of solid cores that are large enough to undergo runaway gas accretion, and the process of gas accretion onto solid cores. It is currently unclear whether giant planet occurrence increases beyond the ice line because cores form more easily in this region, or because conditions are more favorable for rapid gas accretion onto solid cores. A number of studies \citep[e.g.][]{Morbidelli15,Schoonenberg17,Drazkowska17} have argued that core growth should be enhanced beyond the ice line. If the solid grain sizes and densities beyond the ice line are enhanced during the earliest stages of planet formation, it would facilitate pebble clumping that leads to planetesimal formation and also result in higher pebble accretion rates onto the growing core \citep[e.g.][]{Bitsch19}. It is also possible that gas giants are more common beyond the ice line because it is easier for cores to rapidly grow their gas envelopes in this region.
The rate at which the growing planet's envelope can cool and contract (hence accreting more gas) depends sensitively on its envelope opacity \citep[e.g.][]{Bitsch21}. In a recent study, \cite{Chachan21} used dust evolution models to study the effect of dust opacity and dust-to-gas ratio on giant planet formation in the epoch immediately following the end of core formation. They found that as the disk evolves, decreasing dust opacity beyond the water-ice line allows for higher gas accretion rates in this region. \begin{figure}[ht!] \includegraphics[width = 0.5\textwidth]{hist_11x1_insol.pdf} \caption{Analogous to Figure 2, occurrence with respect to stellar light intensity instead of orbital separation. Here we see a similar enhancement in the occurrence rate of giant planets where the insolation flux is equal to that of Earth and tentative evidence for a fall off in occurrence just beyond that.} \label{fig:insol_hist} \end{figure} \citet{Ida18} recently updated their models with an improved treatment of Type II migration. This mechanism would produce a broad semi-major axis distribution with many giant planets migrating inward to separations less than 1 au. However, \citet{Fernandes19} show that this model does not agree well with the occurrence contrast between the peak and the inner regions of these systems. Our results are in close agreement with those of \citet{Fernandes19} for separations less than 3 au. The multi-core accretion models of \citet{Mordasini18} are also in good agreement with the overall shape of the semi-major axis distribution, but they underestimate the absolute occurrence rate of giant planets. This could be due to the finite number of cores injected into their simulations. One common theme among planet formation models of gas giants is that protoplanets tend to migrate inward, all the way to the inner edge of the disk, on timescales much shorter than the gas dissipation timescale. 
This tends to produce an enhancement of occurrence closer to the star and/or many planets being engulfed by the host star. \citet{Jennings18} attempt to solve this issue by simultaneously modeling the effects of photoevaporation and viscous evolution on the gas disk. They find that, depending on the dominant energy of the evaporating photons, this could clear gaps in the disk that halt Type I migration and create a pile-up of planets at orbital separations between 0.8--4 au. They show that this can produce very strong and narrow enhancements near certain orbital separations, but it is conceivable that the shape of the final semi-major axis distribution would actually be driven by the spectral energy distributions of host stars during the early years of their formation. \citet{Hallatt20} also proposed gap formation in the protoplanetary disk shortly after the formation of gas giant planets as a mechanism to slow or halt migration at preferred orbital separations. Their model requires that the giant planets that form further out in the disk are more massive in order to reproduce the observed enhancements. We expect this to be the case if the dust content of disk envelopes is very low. The observed enhancement in the occurrence rate of sub-Jovian planets near 1--10 au, seen in Figure \ref{fig:semi_dist_sub}, suggests that the processes that drive the formation and pile-up of planets at those orbital distances also apply to these lower-mass planets. It appears just as likely for a gaseous planet to undergo runaway accretion and grow into a Jovian planet as it is to halt that runaway accretion process early and remain in the sub-Saturn regime. Unfortunately, it is difficult to extract significant constraints on planet formation models from semi-major axis distributions alone.
Future planet catalogs produced by Gaia and the Roman Space Telescope will help to measure the precise shape of the occurrence enhancement around 1 au with planet samples several orders of magnitude larger, but the stellar samples will be different from ours. We plan for future works in this series to analyze the host star metallicity, eccentricity, and multiplicity distributions of our sample, in the hopes of uncovering evidence that discriminates between different planet formation models. \section{Conclusion} \label{sec:conclusion} In this work, we utilize the catalog of stars, RV-detected planets, and completeness contours from Rosenthal et al. (2021) to measure giant planet occurrence as a function of semi-major axis. We applied a hierarchical Bayesian technique to incorporate measured search completeness and uncertainties in our observations into uncertainties in our occurrence rates. Our results are consistent with previous studies that have found a strong enhancement in the occurrence rates of these planets around 1 au. We find that the occurrence of planets less massive than Jupiter (30 $\leq$ \ensuremath{M \sin i} $\leq$ 300 $M_\earth$) is enhanced near 1--10 au in concordance with their more massive counterparts. We find that a fall-off in giant planet occurrence at larger orbital distances is favored over models with flat or increasing occurrence, with 2.5 $\sigma$ confidence from our broken power-law model and with 1.5 $\sigma$ confidence from our non-parametric model. Additionally, our occurrence measurements beyond 10 au are consistent with those derived from direct imaging surveys. Finally, we lay out the methodology and groundwork for future studies of giant planet occurrence as a function of planet and host-star properties. With these tools, we plan to study the occurrence rates of giant planets in particular configurations.
Paper III in the CLS series will examine the relationship between giant planets and smaller companions, while Paper IV will split our sample into single-giant and multiple-giant systems and investigate the differences and commonalities between these two groups. These undertakings may provide new insight into the formation and evolution of this class of planets that played a crucial role in sculpting the final architecture of our own Solar System. \facility{Keck:I (HIRES)} \acknowledgments We thank Jay Anderson, G\'asp\'ar Bakos, Mike Bottom, John Brewer, Christian Clanton, Jason Curtis, Fei Dai, Steven Giacalone, Sam Grunblatt, Michelle Hill, Lynne Hillenbrand, Rebecca Jensen-Clem, John A.\ Johnson, Chris McCarthy, Sean Mills, Teo Mo\v{c}nik, Ben Montet, Jack Moriarty, Tim Morton, Phil Muirhead, Sebastian Pineda, Nikolai Piskunov, Eugenio Rivera, Julien Spronck, Jonathan Swift, Guillermo Torres, Jeff Valenti, Sharon Wang, Josh Winn, Judah van Zandt, Ming Zhao, and others who contributed to the observations and analysis required for the CLS project. We acknowledge R.\ P.\ Butler and S.\ S.\ Vogt for many years of contributing to this dataset. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration. We acknowledge RVs stemming from HIRES data in KOA with principal investigators from the LCES collaboration (S.\ S.\ Vogt, R.\ P.\ Butler, and N.\ Haghighipour). We gratefully acknowledge the efforts and dedication of the Keck Observatory staff for support of HIRES and remote observing. We are grateful to the time assignment committees of the Caltech, the University of California, the University of Hawaii, NASA, and NOAO for their generous allocations of observing time. Without their long-term commitment to radial velocity monitoring, these planets would likely remain unknown. 
We thank Ken and Gloria Levy, who supported the construction of the Levy Spectrometer on the Automated Planet Finder, which was used heavily for this research. We thank the University of California and Google for supporting Lick Observatory, and the UCO staff as well as UCO director Claire Max for their dedicated work scheduling and operating the telescopes of Lick Observatory. G.W.H.\ acknowledges long-term support from NASA, NSF, Tennessee State University, and the State of Tennessee through its Centers of Excellence program. A.W.H.\ acknowledges NSF grant 1753582. H.A.K. acknowledges NSF grant 1555095. P.D.\ gratefully acknowledges support from a National Science Foundation (NSF) Astronomy \& Astrophysics Postdoctoral Fellowship under award AST-1903811. The Center for Exoplanets and Habitable Worlds and the Penn State Extraterrestrial Intelligence Center are supported by the Pennsylvania State University and the Eberly College of Science. This research has made use of NASA's Astrophysics Data System. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. Finally, the authors wish to recognize and acknowledge the significant cultural role and reverence that the summit of Maunakea has long had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. \software{All code used in this paper is available at \url{github.com/California-Planet-Search/rvsearch} and \url{github.com/leerosenthalj/CLSII}. This research makes use of GNU Parallel \citep{Tange11}. 
We made use of the following publicly available Python modules: \texttt{astropy} \citep{Astropy-Collaboration13}, \texttt{matplotlib} \citep{Hunter07}, \texttt{numpy/scipy} \citep{numpy/scipy}, \texttt{pandas} \citep{pandas}, \texttt{emcee} \citep{DFM13}, and \texttt{RadVel} \citep{Fulton18}.} \bibliographystyle{aasjournal}
\section{Introduction} The Galactic object MWC\,137 is a peculiar early-type star surrounded by the optical nebula Sh\,2-266 (80$'' \times$ 60$''$) of unclear origin. A large-scale collimated outflow with several knots was recently detected in the light of the [N\,{\sc ii}]\,6583 line \citep{2016A&A...585A..81M}. Moreover, near-infrared spectroscopic observations displayed intense, kinematically broadened CO band emission in both isotopes $^{12}$CO and $^{13}$CO \citep{2013A&A...558A..17O}. The observed enrichment in $^{13}$CO implies that MWC\,137 is an evolved object \citep{2015AJ....149...13M}, and \citet{2016A&A...585A..81M} confirmed its supergiant nature. \section{Observations and Results} We obtained SINFONI $K$-band IFU spectroscopic data of MWC\,137 on 2014 December 30 and 2016 March 19 with high-spatial resolution (FOV of $0.8\arcsec \times 0.8\arcsec$). The continuum subtracted hot CO band images (Fig.\,1, left) display an outer ring (shell?) with $r_{\rm out} = 225$\,mas (dashed circle) and an inner disk or ring (ellipse) with two large blobs (pointed at by the arrows). The major and minor semiaxes are 112.5\,mas and 97.5\,mas, resulting in an inclination of $\sim 30^{\circ}$. These were determined by the position of the maximum intensity of the blobs and the constraint that the disk should be roughly perpendicular to the optical jet. The two blobs show an angular motion of $\sim 10\deg$ within 15 months. This would translate into $\varv_{\rm rot} = 375$\,km\,s$^{-1}$, if we assume a distance of 5.2\,kpc, which is too fast for Keplerian rotation. Observations of the $^{12}$CO(3-2) line at 345\,GHz were obtained with the Atacama Pathfinder EXperiment (APEX) in a region of $3\arcmin\times 3\arcmin$ centered on Sh2-266, with an angular resolution of $20\arcsec$. The cold CO emission comprises a partial shell in the velocity interval [+27.3,+30.3]\,km\,s$^{-1}$ (contours in Fig.\,1, right). 
According to circular galactic rotation models and the velocity field of the Galaxy by \citet{1993A&A...275...67B}, gas at these velocities is located at kinematical distances $d=5-9$\,kpc, in good agreement with the estimates of \citet{2016A&A...585A..81M}. \articlefigure[width=0.9\textwidth]{Kraus_bep2016_poster_fig1.pdf}{fig}{Location and variation of the hot, small-scale (left) and the cold, large-scale (right, contours) CO emission with respect to the jet and the optical nebula.} \acknowledgements Observations were obtained under ESO programs 094.D-0637 and 097.D-0033. We acknowledge support from GA\,\v{C}R (14-21373S), RVO:67985815, and ETAg (IUT40-1).
\section{Introduction} This paper studies inference on the average treatment effect in experiments in which treatment status is determined according to ``matched pairs.'' By a ``matched pairs'' design, we mean that units are sampled i.i.d.\ from the population of interest, paired according to observed, baseline covariates and finally, within each pair, one unit is selected at random for treatment. This method is used routinely in all parts of the sciences. Indeed, commands to facilitate its implementation are included in popular software packages, such as \texttt{sampsi} in Stata. References to a variety of specific examples can be found, for instance, in the following surveys of various field experiments: \cite{donner2000design}, \cite{glennerster2013running}, and \cite{rosenberger2015randomization}. See also \cite{bruhn2009pursuit}, who, based on a survey of selected development economists, report that 56\% of researchers have used such a design at some point. \cite{bai2021inference} develop methods for inference on the average treatment effect in such experiments based on the difference-in-means estimator. In this paper, we pursue the goal of improving upon the precision of this estimator by exploiting observed, baseline covariates that are not used in determining treatment status. To this end, we study a broad class of estimators for the average treatment effect based on a ``doubly robust'' moment condition. The estimators in this framework are distinguished via different ``working models'' for the conditional expectations of potential outcomes under treatment and control given the observed, baseline covariates. Importantly, because of the double-robustness, these ``working models'' need not be correctly specified in order for the resulting estimator to be consistent. 
In this way, the framework permits us to study both finite-dimensional and high-dimensional forms of covariate adjustment without imposing unreasonable restrictions on the conditional expectations themselves. Under high-level conditions on the ``working models'' and their corresponding estimators and a requirement that pairs are formed so that units within pairs are suitably ``close'' in terms of the baseline covariates, we derive the limiting distribution of the covariate-adjusted estimator of the average treatment effect. We further construct an estimator for the variance of the limiting distribution and provide conditions under which it is consistent for this quantity. Using our general framework, we first consider finite-dimensional, linear adjustments. For this class of estimators, our main findings are summarized as follows. First, we find that such adjustments need not lead to improvements in terms of precision upon the unadjusted difference-in-means estimator. This finding echoes similar findings by \cite{yang2001efficiency} and \cite{tsiatis2008covariate} in settings in which treatment is determined by i.i.d.\ coin flips, and \cite{freedman2008regression} in a finite population setting in which treatment is determined according to complete randomization. See \cite{negi2021revisiting} for a succinct treatment of that literature. More surprisingly, we find that this phenomenon persists even if the adjustments are interacted with treatment. In fact, doing so leads to no changes in precision. In this sense, our results diverge from those in \cite{lin2013agnostic}, who found in the same setting studied by \cite{freedman2008regression} that such interactions ensured gains in precision relative to the unadjusted difference-in-means estimator. We show, however, that gains in precision can be ensured by including fixed effects for each of the pairs. 
Similar results have been obtained by \cite{fogarty2018regression} in a finite population framework for the estimation of the sample average treatment effect. Our analysis further reveals that the resulting covariate-adjusted estimator is ``optimal'' among all finite-dimensional, linear adjustments. In particular, further interaction of these adjustments with treatment leads to no further improvements. These results support the simulation-based findings of \cite{bruhn2009pursuit}, who advocate for including fixed effects for each of the pairs when analyzing such experiments. We emphasize, however, that the usual heteroskedasticity-robust standard errors for the corresponding ordinary least squares estimator that na\"ively treats the data (including treatment status) as if it were i.i.d.\ need not be consistent for the limiting variance derived in our analysis. We then use our framework to consider high-dimensional adjustments based on the LASSO. We study, in particular, two estimators of this form. The first estimator is motivated by the observation that the finite-dimensional, linear adjustment that includes fixed effects for each of the pairs is identical to the intercept term in the linear regression of the pairwise differences in outcomes on the pairwise differences in covariates. The first estimator we consider is therefore defined as the intercept term in a LASSO-penalized regression of the pairwise difference in the outcomes on the pairwise differences in the covariates. As with its finite-dimensional counterpart, we show that this estimator is more precise than the unadjusted difference-in-means estimator. The second estimator we consider first obtains an intermediate estimator by using the LASSO to estimate the ``working model'' for the relevant conditional expectations.
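The pairwise-difference regression underlying this construction can be illustrated with a small simulation. Everything below — the data-generating process, the effect size, and the sample size — is an assumption for the sketch, and for simplicity we use the unpenalized regression with a single scalar covariate:

```python
import random

random.seed(0)

tau = 1.5          # true average treatment effect (assumed)
n_pairs = 2000

# Pair adjacent order statistics of X so units within a pair are "close",
# then randomize treatment within each pair, as in a matched pairs design.
xs = sorted(random.gauss(0.0, 1.0) for _ in range(2 * n_pairs))
dy, dx = [], []
for j in range(n_pairs):
    x1, x2 = xs[2 * j], xs[2 * j + 1]
    xt, xc = (x1, x2) if random.random() < 0.5 else (x2, x1)
    yt = tau + 2.0 * xt + random.gauss(0.0, 1.0)   # Y(1) = tau + 2X + eps
    yc = 2.0 * xc + random.gauss(0.0, 1.0)         # Y(0) = 2X + eps
    dy.append(yt - yc)                              # treated-minus-control
    dx.append(xt - xc)

# OLS of pairwise outcome differences on pairwise covariate differences;
# the intercept estimates the average treatment effect.
n = n_pairs
sx, sy = sum(dx), sum(dy)
sxx = sum(v * v for v in dx)
sxy = sum(a * b for a, b in zip(dx, dy))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n
print(f"ATE estimate (intercept): {intercept:.3f}")   # close to tau = 1.5
```

Because treatment is randomized within pairs, the intercept of this regression recovers the average treatment effect even though the covariate adjustment is only a working model.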
In a finite population setting in which treatment is determined according to complete randomization, \cite{cohen2020no-harm} show that such an estimator is necessarily more precise than the unadjusted difference-in-means estimator. When treatment is determined according to ``matched pairs,'' however, this need not be the case for the intermediate estimator. We therefore consider, in an additional step, an estimator based on the finite-dimensional, linear adjustment described above that uses the predicted values for the ``working model'' as the covariates and includes fixed effects for each of the pairs. We show that the resulting estimator improves upon both the intermediate estimator and the unadjusted difference-in-means estimator in terms of precision. Moreover, we provide conditions under which both of these high-dimensional adjustments attain the relevant semi-parametric efficiency bound derived in \cite{armstrong2022asymptotic}. The remainder of our paper is organized as follows. In Section \ref{sec:setup}, we describe our setup and notation. In particular, there we describe the precise sense in which we require that units in each pair are ``close'' in terms of their baseline covariates. In Section \ref{sec:main}, we introduce our general class of estimators based on a ``doubly robust'' moment condition. Under certain high-level conditions on the ``working models'' and their corresponding estimators, we derive the limiting behavior of the covariate-adjusted estimator. In Section \ref{sec:linear}, we use our general framework to study a variety of estimators with finite-dimensional, linear covariate adjustment. In Section \ref{sec:highdim}, we use our general framework to study two estimators with high-dimensional covariate adjustment based on the LASSO. In Section \ref{sec:simulations}, we examine the finite-sample behavior of tests based on these different estimators via a small simulation study.
We find that covariate adjustment can lead to considerable gains in precision. Finally, in Section \ref{sec:empirical}, we apply our methods to reanalyze data from an experiment using a ``matched pairs'' design to study the effect of macroinsurance on microenterprise. \section{Setup and Notation} \label{sec:setup} Let $Y_i \in \mathbf R$ denote the (observed) outcome of interest for the $i$th unit, $D_i \in \{0,1\}$ be an indicator for whether the $i$th unit is treated, and $X_i \in \mathbf R^{k_x}$ and $W_i \in \mathbf R^{k_w}$ denote observed, baseline covariates for the $i$th unit; $X_i$ and $W_i$ will be distinguished below through the feature that only the former will be used in determining treatment assignment. Further denote by $Y_i(1)$ the potential outcome of the $i$th unit if treated and by $Y_i(0)$ the potential outcome of the $i$th unit if not treated. The (observed) outcome and potential outcomes are related to treatment status by the relationship \begin{equation} \label{eq:obsy} Y_i = Y_i(1)D_i + Y_i(0)(1 - D_i)~. \end{equation} For a random variable indexed by $i$, $A_i$, it will be useful to denote by $A^{(n)}$ the random vector $(A_1, \ldots, A_{2n})$. Denote by $P_n$ the distribution of the observed data $Z^{(n)}$, where $Z_i = (Y_i, D_i,X_i,W_i)$, and by $Q_n$ the distribution of $U^{(n)}$, where $U_i = (Y_i(1),Y_i(0),X_i,W_i)$. Note that $P_n$ is determined by \eqref{eq:obsy}, $Q_n$, and the mechanism for determining treatment assignment. We assume throughout that $U^{(n)}$ consists of $2n$ i.i.d.\ observations, i.e., $Q_n = Q^{2n}$, where $Q$ is the marginal distribution of $U_i$. We therefore state our assumptions below in terms of assumptions on $Q$ and the mechanism for determining treatment assignment. Indeed, we will not make reference to $P_n$ in the sequel, and all operations are understood to be under $Q$ and the mechanism for determining the treatment assignment. 
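As a concrete illustration of this sampling framework, the following sketch draws $2n$ i.i.d.\ copies of $U_i = (Y_i(1), Y_i(0), X_i, W_i)$ from a purely hypothetical choice of $Q$ and constructs the observed outcomes via \eqref{eq:obsy}; the assignment mechanism here is only a placeholder, with the ``matched pairs'' mechanism described in the assumptions that follow.

```python
# Sketch of the sampling framework: 2n i.i.d. draws of
# U_i = (Y_i(1), Y_i(0), X_i, W_i) from a hypothetical Q, with the
# observed outcome Y_i = Y_i(1) D_i + Y_i(0) (1 - D_i)  (eq. obsy).
import numpy as np

rng = np.random.default_rng(0)
n = 500                                # number of pairs; 2n units in total
X = rng.uniform(size=2 * n)            # covariates used to form pairs
W = rng.normal(size=2 * n)             # covariates not used to form pairs
Y0 = X + 0.5 * W + rng.normal(size=2 * n)
Y1 = Y0 + 1.0                          # constant treatment effect (illustrative)
D = rng.integers(0, 2, size=2 * n)     # placeholder assignment, not matched pairs
Y = Y1 * D + Y0 * (1 - D)              # observed outcomes via eq. (obsy)
```

Only one of the two potential outcomes is ever observed for a given unit, which is why the analysis below proceeds through moment conditions rather than direct comparisons of $Y_i(1)$ and $Y_i(0)$.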
Our object of interest is the average effect of the treatment on the outcome of interest, which may be expressed in terms of this notation as \begin{equation} \label{eq:ate} \Delta(Q) = E[Y_i(1) - Y_i(0)]~. \end{equation} We now describe our assumptions on $Q$. We restrict $Q$ to satisfy the following mild requirement: \begin{assumption} \label{ass:Q} The distribution $Q$ is such that \vspace{-.3cm} \begin{enumerate}[(a)] \item $0 < E[\mathrm{Var}[Y_i(d) | X_i]]$ for $d \in \{0, 1\}$. \item $E[Y_i^2(d)] < \infty$ for $d \in \{0, 1\}$. \item $E[Y_i(d) | X_i = x]$ and $E[Y_i^2(d) | X_i = x]$ are Lipschitz for $d \in \{0, 1\}$. \end{enumerate} \end{assumption} Next, we describe our assumptions on the mechanism determining treatment assignment. In order to describe these assumptions more formally, we require some further notation to define the relevant pairs of units. The $n$ pairs may be represented by the sets $$\{\pi(2j-1), \pi(2j)\} \text{ for } j = 1, \ldots, n~,$$ where $\pi = \pi_n(X^{(n)})$ is a permutation of $2n$ elements. Because of its possible dependence on $X^{(n)}$, $\pi$ encompasses a broad variety of different ways of pairing the $2n$ units according to the observed, baseline covariates $X^{(n)}$. Given such a $\pi$, we assume that treatment status is assigned as described in the following assumption: \begin{assumption} \label{ass:treatment} Treatment status is assigned so that $(Y^{(n)}(1),Y^{(n)}(0),W^{(n)}) \perp \!\!\! \perp D^{(n)} | X^{(n)}$ and, conditional on $X^{(n)}$, $(D_{\pi(2j-1)},D_{\pi(2j)}), j = 1, \ldots, n$ are i.i.d. and each uniformly distributed over the values in $\{(0,1),(1,0)\}$. \end{assumption} Following \cite{bai2021inference}, our analysis will additionally require some discipline on the way in which pairs are formed. Let $\|\cdot\|_2$ denote the Euclidean norm. 
We will require that units in each pair are ``close'' in the sense described by the following assumption: \begin{assumption} \label{ass:close} The pairs used in determining treatment status satisfy $$\frac{1}{n} \sum_{1 \leq j \leq n} \|X_{\pi(2j)} - X_{\pi(2j-1)}\|_2^r \stackrel{P}{\rightarrow} 0$$ for $r \in \{1, 2\}$. \end{assumption} \noindent It will at times be convenient to require further that units in consecutive pairs are also ``close'' in terms of their baseline covariates. One may view this requirement, which is formalized in the following assumption, as ``pairing the pairs'' so that they are ``close'' in terms of their baseline covariates. \begin{assumption} \label{ass:pairsclose} The pairs used in determining treatment status satisfy $$\frac{1}{n} \sum_{1 \leq j \leq \lfloor \frac{n}{2} \rfloor} \|X_{\pi(4j -k)} - X_{\pi(4j-\ell)}\|_2^2 \stackrel{P}{\rightarrow} 0$$ for any $k \in \{2,3\}$ and $\ell \in \{0,1\}$. \end{assumption} \noindent \cite{bai2021inference} provide results to facilitate constructing pairs satisfying Assumptions \ref{ass:close}--\ref{ass:pairsclose} under weak assumptions on $Q$. In particular, given pairs satisfying Assumption \ref{ass:close}, it is frequently possible to ``re-order'' them so that Assumption \ref{ass:pairsclose} is satisfied. See Theorem 4.3 in \cite{bai2021inference} for further details. As in \cite{bai2021inference}, we highlight the fact that Assumption \ref{ass:pairsclose} will only be used to enable consistent estimation of relevant variances.
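For scalar $X_i$, a simple construction satisfying these requirements is to sort the units by $X_i$, pair adjacent units, and assign treatment by a fair coin flip within each pair, as in Assumption \ref{ass:treatment}. The sketch below implements this for a hypothetical distribution of $X_i$; sorting makes the within-pair distances in Assumption \ref{ass:close} vanish as $n$ grows.

```python
# Matched-pairs construction for scalar X: sort, pair adjacent units,
# and flip one fair coin per pair (Assumption treatment). Adjacent
# sorted units are close, in line with Assumption close.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(size=2 * n)
pi = np.argsort(X)                   # permutation defining the pairs
pairs = pi.reshape(n, 2)             # rows are {pi(2j-1), pi(2j)}
flips = rng.integers(0, 2, size=n)   # one coin flip per pair
D = np.empty(2 * n, dtype=int)
D[pairs[:, 0]] = flips
D[pairs[:, 1]] = 1 - flips

# average within-pair squared distance (the r = 2 case of Assumption close)
dist2 = np.mean((X[pairs[:, 0]] - X[pairs[:, 1]]) ** 2)
```

Because consecutive rows of \texttt{pairs} are also close after sorting, the same construction delivers the ``paired pairs'' needed for Assumption \ref{ass:pairsclose}.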
\section{Main Results} \label{sec:main} To accommodate various forms of covariate-adjusted estimators of $\Delta(Q)$ in a single framework, it is useful to note that it follows from Assumption \ref{ass:treatment} that for any $d \in \{0,1\}$ and any function $m_{d, n}: \mathbf R^{k_x} \times \mathbf R^{k_w} \rightarrow \mathbf R$ such that $E[|m_{d, n}(X_i,W_i)|] < \infty$, \begin{equation} \label{eq:doublemoment} E \left [ 2 I\{D_i = d\}(Y_i - m_{d, n}(X_i,W_i)) + m_{d, n}(X_i,W_i) \right] = E[Y_i(d)]~. \end{equation} We note that \eqref{eq:doublemoment} is just the augmented inverse propensity score weighted (AIPW) moment for $E[Y_i(d)]$ in which the propensity score is $1/2$ and the conditional mean model is $m_{d,n}(X_i,W_i)$. Such a moment condition is therefore ``doubly robust.'' Because the propensity score for the ``matched pairs'' design is exactly $1/2$, we do not require the conditional mean model to be correctly specified, i.e., that $m_{d, n}(X_i,W_i) = E[Y_i(d)|X_i,W_i]$. See, for instance, \cite{robins1995analysis}. Intuitively, $m_{d, n}$ is the ``working model'' which researchers use to estimate $E[Y_i(d) | X_i, W_i]$, and, in light of \eqref{eq:doublemoment}, it may be arbitrarily misspecified. Although $m_{d, n}$ will be identical across $n \geq 1$ for the examples in Section \ref{sec:linear}, the notation permits $m_{d, n}$ to depend on the sample size $n$ in anticipation of the high-dimensional results in Section \ref{sec:highdim}. Based on the moment condition in \eqref{eq:doublemoment}, our proposed estimator of $\Delta(Q)$ is given by \begin{equation} \label{eq:hatdelta} \hat \Delta_n = \hat \mu_n(1) - \hat \mu_n(0)~, \end{equation} where, for $d \in \{0,1\}$, \begin{equation} \label{eq:hatmu} \hat \mu_n(d) = \frac{1}{2n} \sum_{1 \leq i \leq 2n} \left ( 2I\{D_i = d\}(Y_i - \hat m_{d, n}(X_i,W_i)) + \hat m_{d, n}(X_i,W_i) \right )~ \end{equation} and $\hat m_{d, n}$ is a suitable estimator of the ``working model'' $m_{d, n}$ in \eqref{eq:doublemoment}.
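To illustrate the robustness of \eqref{eq:hatdelta}--\eqref{eq:hatmu} to misspecification of the ``working model,'' the following sketch computes the estimator under a hypothetical data-generating process, plugging in deliberately crude choices of $\hat m_{d,n}$; because the propensity score is exactly $1/2$, the estimator remains centered at $\Delta(Q)$.

```python
# Doubly robust moment with propensity 1/2: the covariate-adjusted
# estimator stays consistent under (badly) misspecified working models.
# All data-generating choices below are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
X = rng.uniform(size=2 * n)
W = rng.normal(size=2 * n)
Y0 = np.sin(3 * X) + W + rng.normal(size=2 * n)
Y1 = Y0 + 2.0                               # true Delta(Q) = 2

pi = np.argsort(X)                          # matched pairs on X
pairs = pi.reshape(n, 2)
flips = rng.integers(0, 2, size=n)
D = np.empty(2 * n, dtype=int)
D[pairs[:, 0]], D[pairs[:, 1]] = flips, 1 - flips
Y = Y1 * D + Y0 * (1 - D)

def mu_hat(d, m_d):
    # sample analogue of E[2 1{D = d}(Y - m_d) + m_d]  (eq. hatmu)
    return np.mean(2.0 * (D == d) * (Y - m_d) + m_d)

m1 = 2.0 * W                                # misspecified working model for d = 1
m0 = np.zeros(2 * n)                        # misspecified working model for d = 0
delta_hat = mu_hat(1, m1) - mu_hat(0, m0)
```

The choice of working model does, of course, affect the limiting variance, which is the subject of the results that follow.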
We require some discipline on the behavior of $m_{d, n}$ for $d \in \{0, 1\}$ and $n \geq 1$: \begin{assumption} \label{ass:md} The functions $m_{d, n}$ for $d \in \{0, 1\}$ and $n \geq 1$ satisfy \vspace{-.3cm} \begin{enumerate}[(a)] \item For $d \in \{0, 1\}$, \[\liminf_{n \to \infty} E \left [ \var \left [ Y_i(d) - \frac{1}{2}(m_{1, n}(X_i, W_i) + m_{0, n}(X_i, W_i)) \Bigg | X_i \right ] \right ] > 0~. \] \item For $d \in \{0, 1\}$, \[\lim_{\lambda \to \infty} \limsup_{n \to \infty} E[m_{d, n}^2(X_i, W_i) I \{|m_{d, n}(X_i, W_i)| > \lambda\}] = 0~. \] \item $E[m_{d, n}(X_i, W_i) | X_i = x]$, $E[m_{d, n}^2(X_i, W_i) | X_i = x]$, and $E[m_{d, n}(X_i, W_i) Y_i(d) | X_i = x]$ for $d \in \{0, 1\}$, and $E[m_{1, n}(X_i, W_i) m_{0, n}(X_i, W_i) | X_i = x]$ are Lipschitz uniformly over $n \geq 1$. \end{enumerate} \end{assumption} Assumption \ref{ass:md}(a) rules out degenerate situations. Assumption \ref{ass:md}(b) is a mild uniform integrability assumption on the ``working models.'' If $m_{d, n} \equiv m_d$ for $d \in \{0, 1\}$, then it is satisfied as long as $E[m_d^2(X_i, W_i)] < \infty$. Assumption \ref{ass:md}(c) ensures that units that are ``close'' in terms of the observed covariates are also ``close'' in terms of potential outcomes, uniformly across $n \geq 1$. Theorem \ref{thm:main} below establishes the limit in distribution of $\hat \Delta_n$. We note that the theorem depends on high-level conditions on $m_{d, n}$ and $\hat m_{d, n}$, which will be verified in several examples in the sequel. \begin{theorem} \label{thm:main} Suppose $Q$ satisfies Assumption \ref{ass:Q}, the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}, and $m_{d, n}$ for $d \in \{0, 1\}$ and $n \geq 1$ satisfy Assumption \ref{ass:md}.
Further suppose $\hat m_{d, n}$ satisfies \begin{equation} \label{eq:rate} \frac{1}{\sqrt {2n}} \sum_{1 \leq i \leq 2n} (2D_i - 1)(\hat m_{d, n}(X_i,W_i) - m_{d, n}(X_i,W_i)) \stackrel{P}{\rightarrow} 0~. \end{equation} Then, $\hat \Delta_n$ defined in \eqref{eq:hatdelta} satisfies \begin{equation} \label{eq:normal} \frac{\sqrt n (\hat \Delta_n - \Delta(Q))}{\sigma_n(Q)} \stackrel{d}{\rightarrow} N(0,1)~, \end{equation} where $\sigma_n^2(Q) = \sigma_{1, n}^2(Q) + \sigma_2^2(Q) + \sigma_3^2(Q)$ with \begin{align*} \sigma_{1, n}^2(Q) & = \frac{1}{2}E\left [ \var\left [E[Y_i(1) + Y_i(0)|X_i,W_i] - (m_{1, n}(X_i,W_i) + m_{0, n}(X_i,W_i)) \Big | X_i \right ]\right]\\ \sigma_2^2(Q) & = \frac{1}{2}\var\left [ E\left [Y_i(1) - Y_i(0) \Big | X_i,W_i \right ]\right] \\ \sigma_3^2(Q) & = E[\var[Y_i(1)|X_i,W_i] + \var[Y_i(0)|X_i,W_i]]~. \end{align*} \end{theorem} In order to facilitate the use of Theorem \ref{thm:main} for inference about $\Delta(Q)$, we next provide a consistent estimator of $\sigma_n(Q)$. Define \begin{align*} \tilde Y_i & = Y_i - \frac{1}{2} (\hat m_{1, n}(X_i, W_i) + \hat m_{0, n}(X_i, W_i)) \\ \hat \tau_n^2 & = \frac{1}{n} \sum_{1 \leq j \leq n} (\tilde Y_{\pi(2j - 1)} - \tilde Y_{\pi(2j)})^2 \\ \hat \lambda_n & = \frac{2}{n} \sum_{1 \leq j \leq \lfloor \frac{n}{2} \rfloor} (\tilde Y_{\pi(4j - 3)} - \tilde Y_{\pi(4j - 2)}) (\tilde Y_{\pi(4j - 1)} - \tilde Y_{\pi(4j)}) (D_{\pi(4j - 3)} - D_{\pi(4j - 2)}) (D_{\pi(4j - 1)} - D_{\pi(4j)})~. \end{align*} The variance estimator is given by \begin{equation} \label{eq:var} \hat \sigma_n^2 = \hat \tau_n^2 - \frac{1}{2}(\hat \lambda_n + \hat \Delta_n^2)~. \end{equation} Note that it can be shown, by arguing as in Remark 3.9 of \cite{bai2021inference}, that $\hat \sigma_n^2$ in \eqref{eq:var} is nonnegative. Theorem \ref{thm:var} below establishes the consistency of this estimator and its implications for inference about $\Delta(Q)$.
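The quantities entering \eqref{eq:var} are straightforward to compute. The sketch below does so for a hypothetical data-generating process with the working models set to zero, in which case $\tilde Y_i = Y_i$; the pairs are ordered by sorting on $X_i$, so that consecutive pairs are ``close'' as in Assumption \ref{ass:pairsclose}.

```python
# Computing the variance estimator (eq. var): tau_n^2 from within-pair
# differences and lambda_n from products across consecutive ("paired")
# pairs. Working models are set to zero here, so tilde Y_i = Y_i.
import numpy as np

rng = np.random.default_rng(3)
n = 2000                                   # even number of pairs
X = rng.uniform(size=2 * n)
Y0 = X + rng.normal(size=2 * n)
Y1 = Y0 + 1.0
pi = np.argsort(X)                         # sorted pairs: consecutive pairs close
pairs = pi.reshape(n, 2)
flips = rng.integers(0, 2, size=n)
D = np.empty(2 * n, dtype=int)
D[pairs[:, 0]], D[pairs[:, 1]] = flips, 1 - flips
Y = Y1 * D + Y0 * (1 - D)

tY = Y.copy()                              # tilde Y with m_hat = 0
dY = tY[pairs[:, 0]] - tY[pairs[:, 1]]
dD = D[pairs[:, 0]] - D[pairs[:, 1]]       # +1 or -1
delta_hat = np.mean(dY * dD)               # difference in means via pairs
tau2 = np.mean(dY ** 2)
# (2/n) * sum over the n/2 "pairs of pairs" is a mean over n/2 products
lam = np.mean(dY[0::2] * dY[1::2] * dD[0::2] * dD[1::2])
sigma2_hat = tau2 - 0.5 * (lam + delta_hat ** 2)
```

In this example $\sigma_n^2(Q) = 2$ (only the $\sigma_3^2(Q)$ term is nonzero), and $\hat\sigma_n^2$ is close to that value for moderate $n$.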
In the statement of the theorem, we make use of the following notation: for any scalars $a$ and $b$, $[a \pm b]$ is understood to be $[a - b, a + b]$. \begin{theorem} \label{thm:var} Suppose $Q$ satisfies Assumption \ref{ass:Q}, the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}, and $m_{d, n}$ for $d \in \{0, 1\}$ and $n \geq 1$ satisfy Assumption \ref{ass:md}. Further suppose $\hat m_{d, n}$ satisfies \eqref{eq:rate} and \begin{equation} \label{eq:L2} \frac{1}{2n} \sum_{1 \leq i \leq 2n} (\hat m_{d, n}(X_i,W_i) - m_{d, n}(X_i,W_i))^2 \stackrel{P}{\rightarrow} 0~. \end{equation} Then, \[ \frac{\hat \sigma_n}{\sigma_n(Q)} \stackrel{P}{\rightarrow} 1~. \] Hence, \eqref{eq:normal} holds with $\hat \sigma_n$ in place of $\sigma_n(Q)$. In particular, for any $\alpha \in (0,1)$, \[ P \left \{ \Delta(Q) \in \left [\hat \Delta_n \pm \hat \sigma_n \Phi^{-1} \left ( 1 - \frac{\alpha}{2} \right )\right ] \right\} \rightarrow 1-\alpha~, \] where $\Phi$ is the standard normal c.d.f.\ \end{theorem} \begin{remark} \label{remark:equivalent} An important and immediate implication of Theorem \ref{thm:main} is that $\sigma_n^2(Q)$ is minimized when \begin{eqnarray*} && E[Y_i(0) + Y_i(1)|X_i,W_i] - E[Y_i(0) + Y_i(1)|X_i] = \\ && \hspace{2cm} m_{0, n}(X_i,W_i) + m_{1, n}(X_i,W_i) - E[m_{0, n}(X_i,W_i) + m_{1, n}(X_i,W_i)| X_i] \end{eqnarray*} with probability one. In other words, the ``working model'' for $E[Y_i(0) + Y_i(1)|X_i,W_i]$, given by $m_{0, n}(X_i,W_i) + m_{1, n}(X_i,W_i)$, need only be correct ``on average'' over the variables that are not used in determining the pairs.
For such a choice of $m_{0, n}(X_i,W_i)$ and $m_{1, n}(X_i,W_i)$, $\sigma_n^2(Q)$ in Theorem \ref{thm:main} becomes simply \[ \frac{1}{2}\var\left [ E\left [Y_i(1) - Y_i(0) \Big | X_i,W_i \right ]\right] + E[\var[Y_i(1)|X_i,W_i] + \var[Y_i(0)|X_i,W_i]]~, \] which agrees with the variance obtained in \cite{bai2021inference} when both $X_i$ and $W_i$ are used in determining the pairs. Such a variance also achieves the efficiency bound derived by \cite{armstrong2022asymptotic}. \end{remark} \begin{remark} Following \cite{bai2022inference}, it is straightforward to extend the analysis in this paper to the case with multiple treatment arms and where treatment status is determined using a ``matched tuples'' design, but we do not pursue this further in this paper. \end{remark} \section{Linear Adjustments} \label{sec:linear} In this section, we consider linearly covariate-adjusted estimators of $\Delta(Q)$ based on a set of regressors generated by $X_i \in \mathbf R^{k_x}$ and $W_i \in \mathbf R^{k_w}$. To this end, define $\psi_i = \psi(X_i, W_i)$, where $\psi: \mathbf R^{k_x} \times \mathbf R^{k_w} \to \mathbf R^p$. We impose the following assumptions on the function $\psi$: \begin{assumption} \label{ass:psi} The function $\psi$ is such that \vspace{-.3cm} \begin{enumerate}[(a)] \item no component of $\psi$ is constant and $E[\var[\psi_i | X_i]]$ is nonsingular. \item $\var[\psi_i] < \infty$. \item $E[\psi_i | X_i = x]$, $E[\psi_i \psi_i' | X_i = x]$, and $E[\psi_i Y_i(d) | X_i = x]$ for $d \in \{0, 1\}$ are Lipschitz. \end{enumerate} \end{assumption} \noindent Assumption \ref{ass:psi} is analogous to Assumption \ref{ass:Q}. Note, in particular, that Assumption \ref{ass:psi}(a) rules out situations where $\psi_i$ is a function of $X_i$ only. See Remark \ref{remark:fogarty} for a discussion of the behavior of the covariate-adjusted estimators in such situations.
\subsection{Linear Adjustments without Pair Fixed Effects} Consider the following linear regression model: \begin{equation} \label{eq:naive} Y_i = \alpha + \Delta D_i + \psi_i' \beta + \epsilon_i~. \end{equation} Let $\hat \alpha_n^{\rm naive}$, $\hat \Delta_n^{\rm naive}$, and $\hat \beta_n^{\rm naive}$ denote the OLS estimators of $\alpha$, $\Delta$, and $\beta$ in \eqref{eq:naive}. It follows from direct calculation that \[ \hat \Delta_n^{\rm naive} = \frac{1}{n} \sum_{1 \leq i \leq 2n} (Y_i - \psi_i' \hat \beta_n^{\rm naive}) (2 D_i - 1)~. \] Therefore, $\hat \Delta_n^{\rm naive}$ satisfies \eqref{eq:hatdelta}--\eqref{eq:hatmu} with \[ \hat m_{d, n}(X_i, W_i) = \psi_i' \hat \beta_n^{\rm naive}~. \] Theorem \ref{thm:naive} establishes \eqref{eq:rate} and \eqref{eq:L2} for a suitable choice of $m_{d, n}(X_i, W_i)$ for $d \in \{0, 1\}$ and, as a result, the limiting distribution of $\hat \Delta_n^{\rm naive}$ and the validity of the variance estimator. \begin{theorem} \label{thm:naive} Suppose $Q$ satisfies Assumption \ref{ass:Q} and the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}. Further suppose $\psi$ satisfies Assumption \ref{ass:psi}. Then, as $n \to \infty$, \[ \hat \beta_n^{\rm naive} \stackrel{P}{\to} \beta^{\rm naive} = \var[\psi_i]^{-1} \cov[\psi_i, Y_i(1) + Y_i(0)]~. \] Moreover, \eqref{eq:rate}, \eqref{eq:L2}, and Assumption \ref{ass:md} are satisfied with \[ m_{d, n}(X_i, W_i) = \psi_i' \beta^{\rm naive} \] for $d \in \{0, 1\}$ and $n \geq 1$. \end{theorem} \begin{remark} \label{remark:lin} \cite{freedman2008regression} studies regression adjustment based on \eqref{eq:naive} when treatment is assigned by complete randomization instead of a ``matched pairs'' design. 
In such settings, \cite{lin2013agnostic} proposes adjustment based on the following linear regression model: \begin{equation} \label{eq:lin} Y_i = \alpha + \Delta D_i + (\psi_i - \bar \psi_n)' \gamma + D_i (\psi_i - \bar \psi_n)' \eta + \epsilon_i~, \end{equation} where \[ \bar \psi_n = \frac{1}{2n} \sum_{1 \leq i \leq 2n} \psi_i~. \] Let $\hat \alpha_n^{\rm int}, \hat \Delta_n^{\rm int}, \hat \gamma_n^{\rm int}, \hat \eta_n^{\rm int}$ denote the OLS estimators for $\alpha, \Delta, \gamma, \eta$ in \eqref{eq:lin}. It is straightforward to show $\hat \Delta_n^{\rm int}$ satisfies \eqref{eq:hatdelta}--\eqref{eq:hatmu} with \begin{align*} \hat m_{1, n}(X_i, W_i) & = (\psi_i - \hat \mu_{\psi, n}(1))' (\hat \gamma_n^{\rm int} + \hat \eta_n^{\rm int}) \\ \hat m_{0, n}(X_i, W_i) & = (\psi_i - \hat \mu_{\psi, n}(0))' \hat \gamma_n^{\rm int}~, \end{align*} where \[ \hat \mu_{\psi, n}(d) = \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} \psi_i~. \] It can be shown using similar arguments to those used to establish Theorem \ref{thm:naive} that \eqref{eq:rate} and Assumption \ref{ass:md} are satisfied with \[ m_{d, n}(X_i, W_i) = (\psi_i - E[\psi_i])' \var[\psi_i]^{-1} \cov[\psi_i, Y_i(d)] \] for $d \in \{0, 1\}$ and $n \geq 1$. It thus follows by inspecting the expression for $\sigma^2_n(Q)$ in Theorem \ref{thm:main} that the limiting variance of $\hat \Delta_n^{\rm int}$ is the same as that of $\hat \Delta_n^{\rm naive}$ based on \eqref{eq:naive}. \end{remark} \subsection{Linear Adjustments with Pair Fixed Effects} \label{sec:pfe} Remark \ref{remark:lin} implies that in ``matched pairs'' designs, including interaction terms in the linear regression does not lead to an estimator with lower limiting variance than the one based on the linear regression without interaction terms. It is therefore interesting to study whether there exists a linearly covariate-adjusted estimator with lower limiting variance than the one based on \eqref{eq:naive} and \eqref{eq:lin}. 
To that end, consider instead the following linear regression model: \begin{equation} \label{eq:pfe} Y_i = \Delta D_i + \psi_i' \beta + \sum_{1 \leq j \leq n} \theta_j I \{i \in \{\pi(2j - 1), \pi(2j)\}\} + \epsilon_i~. \end{equation} Let $\hat \Delta_n^{\rm pfe}$, $\hat \beta_n^{\rm pfe}$, and $\hat \theta_{j, n}$, $1 \leq j \leq n$ denote the OLS estimators of $\Delta$, $\beta$, and $\theta_j$, $1 \leq j \leq n$ in \eqref{eq:pfe}. It follows from the Frisch-Waugh-Lovell theorem that \[ \hat \Delta_n^{\rm pfe} = \frac{1}{n} \sum_{1 \leq i \leq 2n} (Y_i - \psi_i' \hat \beta_n^{\rm pfe}) (2 D_i - 1)~. \] Therefore, $\hat \Delta_n^{\rm pfe}$ satisfies \eqref{eq:hatdelta}--\eqref{eq:hatmu} with \[ \hat m_{d, n}(X_i, W_i) = \psi_i' \hat \beta_n^{\rm pfe}~. \] Theorem \ref{thm:pfe} establishes \eqref{eq:rate} and \eqref{eq:L2} for a suitable choice of $m_{d, n}(X_i, W_i), d \in \{0, 1\}$ and, as a result, the limiting distribution of $\hat \Delta_n^{\rm pfe}$ and the validity of the variance estimator. \begin{theorem} \label{thm:pfe} Suppose $Q$ satisfies Assumption \ref{ass:Q} and the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}. Then, as $n \to \infty$, \[ \hat \beta_n^{\rm pfe} \stackrel{P}{\to} \beta^{\rm pfe} = (2 E[\var[\psi_i | X_i]])^{-1} E[\cov[\psi_i, Y_i(1) + Y_i(0) | X_i]]~. \] Moreover, \eqref{eq:rate}, \eqref{eq:L2}, and Assumption \ref{ass:md} are satisfied with \[ m_{d, n}(X_i, W_i) = \psi_i' \beta^{\rm pfe} \] for $d \in \{0, 1\}$ and $n \geq 1$. \end{theorem} \begin{remark} \label{remark:fogarty} When $\psi$ is restricted to be a function of $X_i$ only, then $\hat \Delta_n^{\rm pfe}$ coincides to first order with the unadjusted difference-in-means estimator defined as \begin{equation} \label{eq:na} \hat \Delta_n^{\rm unadj} = \frac{1}{n} \sum_{1 \leq i \leq 2n} Y_i D_i - \frac{1}{n} \sum_{1 \leq i \leq 2n} Y_i (1 - D_i)~.
\end{equation} To see this, suppose further that $\psi$ is Lipschitz and that $\var[Y_i(d)|X_i = x], d \in \{0,1\}$ are bounded. The proof of Theorem \ref{thm:pfe} reveals that $\hat \Delta_n^{\rm pfe}$ and $\hat \beta_n^{\rm pfe}$ coincide with the OLS estimators of the intercept and slope parameters in a linear regression of $(Y_{\pi(2j)} - Y_{\pi(2j-1)})(D_{\pi(2j)} - D_{\pi(2j-1)})$ on a constant and $(\psi_{\pi(2j)} - \psi_{\pi(2j-1)})(D_{\pi(2j)} - D_{\pi(2j-1)})$. Using this observation, it follows by arguing as in Section S.1.1 of \cite{bai2021inference} that \[ \sqrt n (\hat \Delta_n^{\rm pfe} - \Delta(Q)) = \sqrt n (\hat \Delta_n^{\rm unadj} - \Delta(Q)) + o_P(1) ~.\] See also Remark 3.8 of \cite{bai2021inference}. \end{remark} \begin{remark} \label{remark:optimal} Note that $\sigma_n^2(Q)$ in Theorem \ref{thm:main} depends on $m_{d, n}(X_i, W_i), d \in \{0,1\}$ only through $\sigma_{1, n}^2(Q)$. With this in mind, consider the class of all linearly covariate-adjusted estimators based on $\psi_i$, i.e., $m_{d, n}(X_i, W_i) = \psi_i' \beta(d)$. For this specification of $m_{d, n}(X_i, W_i), d \in \{0,1\}$, \[ \sigma_{1, n}^2(Q) = E[(E[Y_i(1) + Y_i(0) | X_i, W_i] - E[Y_i(1) + Y_i(0) | X_i] - (\psi_i - E[\psi_i | X_i])' (\beta(1) + \beta(0)))^2]~. \] It follows that among all such linear adjustments, $\sigma_n^2(Q)$ in \eqref{eq:normal} is minimized when \[ \beta(1) + \beta(0) = 2 \beta^{\rm pfe}~. \] This observation implies that the linear adjustment with pair fixed effects, i.e., $\hat \Delta_n^{\rm pfe}$, yields the optimal linear adjustment in the sense of minimizing $\sigma_n^2(Q)$. Its limiting variance is, in particular, weakly smaller than the limiting variance of the unadjusted difference-in-means estimator defined in \eqref{eq:na}. The same limiting variance is attained by $m_{d, n}(X_i, W_i) = \psi_i' \beta(d) + h_d(X_i)$ for $d \in \{0, 1\}$.
On the other hand, the covariate-adjusted estimators based on \eqref{eq:naive} or \eqref{eq:lin}, i.e., $\hat \Delta_n^{\rm naive}$ and $\hat \Delta_n^{\rm int}$, are in general not optimal among all linearly covariate-adjusted estimators based on $\psi_i$. In fact, the limiting variances of these two estimators may even be larger than that of the unadjusted difference-in-means estimator. Simulation evidence in Section \ref{sec:simulations} illustrates such a phenomenon in an example. In this sense, these estimators suffer from a counterpart to the critique raised by \cite{freedman2008regression}. \end{remark} \begin{remark} Even though $\hat \Delta_n^{\rm pfe}$ can be computed via ordinary least squares estimation of \eqref{eq:pfe}, we emphasize that the usual heteroskedasticity-robust standard errors that na\"ively treat the data (including treatment status) as if it were i.i.d.\ need not be consistent for the limiting variance derived in our analysis. See \cite{bai2022inference} for details. \end{remark} \begin{remark} \label{remark:int-pfe} One can also consider the estimator based on the following linear regression model: \begin{equation} \label{eq:int-pfe} Y_i = \Delta D_i + (\psi_i - \bar \psi_n)' \gamma + D_i (\psi_i - \hat \mu_{\psi, n}(1))' \eta + \sum_{1 \leq j \leq n} \theta_j I \{i \in \{\pi(2j - 1), \pi(2j)\}\} + \epsilon_i~. \end{equation} Let $\hat \Delta_n^{\rm int-pfe}, \hat \gamma_n^{\rm int-pfe}, \hat \eta_n^{\rm int-pfe}$ denote the OLS estimators for $\Delta, \gamma, \eta$ in \eqref{eq:int-pfe}. It is straightforward to show $\hat \Delta_n^{\rm int - pfe}$ satisfies \eqref{eq:hatdelta}--\eqref{eq:hatmu} with \begin{align*} \hat m_{1, n}(X_i, W_i) & = (\psi_i - \hat \mu_{\psi, n}(1))' \hat \eta_n^{\rm int-pfe} \\ \hat m_{0, n}(X_i, W_i) & = (\psi_i - \hat \mu_{\psi, n}(0))' (\hat \eta_n^{\rm int-pfe} - \hat \gamma_n^{\rm int-pfe})~.
\end{align*} Following similar arguments to those used in the proof of Theorem \ref{thm:naive}, we can establish that \eqref{eq:rate} and Assumption \ref{ass:md} are satisfied with \begin{align*} m_{1, n}(X_i, W_i) & = (\psi_i - E[\psi_i])' \eta^{\rm int-pfe} \\ m_{0, n}(X_i, W_i) & = (\psi_i - E[\psi_i])' (\eta^{\rm int-pfe} - \gamma^{\rm int-pfe})~, \end{align*} where \begin{align*} \gamma^{\rm int-pfe} & = (E[\var[\psi_i | X_i]])^{-1} E[\cov[\psi_i, Y_i(1) - Y_i(0) | X_i]] \\ \eta^{\rm int-pfe} & = (E[\var[\psi_i | X_i]])^{-1} E[\cov[\psi_i, Y_i(1) | X_i]]~. \end{align*} Because $2\eta^{\rm int-pfe} - \gamma^{\rm int-pfe} = 2 \beta^{\rm pfe}$, it follows from Remark \ref{remark:optimal} that the limiting variance of $\hat \Delta_n^{\rm int-pfe}$ is identical to the limiting variance of $\hat \Delta_n^{\rm pfe}$. \end{remark} \section{High-Dimensional Adjustments} \label{sec:highdim} In this section, we study covariate adjustments based on high-dimensional regressors. Such settings can arise if the covariates $W_i$ are high-dimensional or if the regressors include many transformations of $X_i$ and $W_i$. To accommodate situations where the dimension of $W_i$ increases with $n$, we add a subscript and denote it by $W_{n, i}$ instead. Let $k_{w, n}$ denote the dimension of $W_{n, i}$. For $n \geq 1$, let $\psi_{n,i} = \psi_n(X_i,W_{n,i})$, where $\psi_n: \mathbf R^{k_x} \times \mathbf R^{k_{w, n}} \to \mathbf R^{p_n}$ and $p_n$ will be permitted below to be possibly much larger than $n$. In what follows, we propose two distinct LASSO-based high-dimensional counterparts to $\hat \Delta_n^{\rm pfe}$ studied in Section \ref{sec:pfe}.
The first method is motivated by the observation in Section \ref{sec:pfe} that $\hat \Delta_n^{\rm pfe}$ satisfies \eqref{eq:hatdelta}--\eqref{eq:hatmu} with \[ \hat m_{d, n}(X_i, W_i) = \psi_i' \hat \beta_n^{\rm pfe}~, \] where $\hat \beta_n^{\rm pfe}$ can be obtained, as described in Remark \ref{remark:fogarty}, through OLS regression of the pairwise differences in the outcomes on a constant and the pairwise differences in the covariates. For our first method, we therefore consider a LASSO-penalized version of this same procedure. As explained further below in Theorem \ref{thm:LASSO}, when, for $d\in\{0,1\}$, $m_{d,n}(X_i,W_i)$ is sufficiently well approximated by a sparse linear function of $\psi_{n, i}$, the resulting estimator, $\hat \Delta_n^{\rm hd-pd}$, is optimal in the sense that it minimizes the limiting variance in Theorem \ref{thm:main}. Moreover, when this is not the case, its limiting variance is still weakly smaller than the limiting variance of the unadjusted difference-in-means estimator. The second is a two-step method in the spirit of \cite{fogarty2018regression}. In the first step, an intermediate estimator, $\hat \Delta_n^{\rm hd}$, is obtained using \eqref{eq:hatdelta} with a ``working model'' obtained through a LASSO-based approximation to $m_{d,n}(X_i,W_i)$. As explained further below in Theorem \ref{thm:LASSO2}, when, for $d\in\{0,1\}$, $m_{d,n}(X_i,W_i)$ is sufficiently well approximated by a sparse linear function of $\psi_{n, i}$, such an estimator is also optimal in the sense that it minimizes the limiting variance in Theorem \ref{thm:main}. When this is not the case, however, for reasons analogous to those put forward in Remark \ref{remark:fogarty}, it need not have a limiting variance weakly smaller than the unadjusted difference-in-means estimator.
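The equivalence between the pair-fixed-effects regression and the pairwise-difference regression is exact in finite samples (by the Frisch-Waugh-Lovell theorem) and is easy to verify numerically. In the sketch below, with an arbitrary hypothetical data-generating process, the coefficient on $D_i$ from OLS estimation of \eqref{eq:pfe} coincides, up to numerical precision, with the intercept from the OLS regression of signed pairwise outcome differences on a constant and signed pairwise covariate differences.

```python
# Frisch-Waugh-Lovell check: Delta-hat from the pair-fixed-effects
# regression (eq. pfe) equals the intercept in the OLS regression of
# signed pairwise outcome differences on a constant and signed
# pairwise covariate differences.
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 3
X = rng.uniform(size=2 * n)
psi = rng.normal(size=(2 * n, p))
pi = np.argsort(X)
pairs = pi.reshape(n, 2)
flips = rng.integers(0, 2, size=n)
D = np.empty(2 * n)
D[pairs[:, 0]], D[pairs[:, 1]] = flips, 1 - flips
Y = 1.0 * D + X + psi @ np.array([0.5, -1.0, 2.0]) + rng.normal(size=2 * n)

# (i) OLS of Y on D, psi, and one dummy per pair
F = np.zeros((2 * n, n))
F[pairs[:, 0], np.arange(n)] = 1.0
F[pairs[:, 1], np.arange(n)] = 1.0
coef = np.linalg.lstsq(np.column_stack([D, psi, F]), Y, rcond=None)[0]
delta_pfe, beta_pfe = coef[0], coef[1:1 + p]

# (ii) OLS of signed pair differences on a constant and signed psi differences
s = D[pairs[:, 0]] - D[pairs[:, 1]]               # +1 or -1
dY = s * (Y[pairs[:, 0]] - Y[pairs[:, 1]])
dpsi = s[:, None] * (psi[pairs[:, 0]] - psi[pairs[:, 1]])
coef2 = np.linalg.lstsq(np.column_stack([np.ones(n), dpsi]), dY, rcond=None)[0]
delta_diff, beta_diff = coef2[0], coef2[1:]
```

Within-pair demeaning (the partialling-out step of Frisch-Waugh-Lovell) produces mirrored observations within each pair, and multiplying through by the sign of the treatment difference reduces the problem to the intercept regression in (ii).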
In a second step, we therefore consider an estimator based on OLS estimation of a version of \eqref{eq:pfe} in which the covariates $\psi_i$ are replaced by the LASSO-based estimates of $m_{d,n}(X_i,W_i)$ for $d \in \{0,1\}$. The resulting estimator, $\hat \Delta_n^{\rm hd-f}$, has limiting variance weakly smaller than that of the intermediate estimator and thus remains optimal in the same sense. Moreover, like $\hat \Delta_n^{\rm hd-pd}$, it too has limiting variance weakly smaller than the unadjusted difference-in-means estimator. Some comparisons between $\hat \Delta_n^{\rm hd-pd}$ and $\hat \Delta_n^{\rm hd-f}$ are described in Remark \ref{remark:comparison}. Before proceeding, we introduce some additional notation that will be required in our formal description of the methods. To this end, define \begin{align*} \mu_d(X_i) & = E[Y_i(d) | X_i] \\ \Psi(X_i) & = E[\psi_{n,i} | X_i] \\ \tilde \psi_{n,i} &= \psi_{n,i} - \Psi(X_i) ~. \end{align*} We denote by $\psi_{n,i,l}$ and $\tilde \psi_{n,i,l}$ the $l$th components of $\psi_{n,i}$ and $\tilde \psi_{n,i}$, respectively. For a vector $a \in \mathbf R^k$ and $0 \leq p \leq \infty$, recall that $$\|a\|_p = \Big ( \sum_{1 \leq l \leq k} |a_l|^p \Big )^{1/p}~,$$ where it is understood that $\|a\|_0 = \sum_{1 \leq l \leq k} I \{a_l \neq 0\}$ and $\|a\|_\infty = \sup_{1 \leq l \leq k} |a_l|$.
Using this notation, we further define $$\Xi_n = \sup_{(x,w) \in \mathrm{supp}(X_i) \times \mathrm{supp}(W_{n,i})} \|\psi_n(x,w)\|_\infty ~.$$ \subsection{First LASSO-based Adjustment} \label{sec:LASSO} Define \begin{equation} \label{eq:LASSO} (\hat \alpha_n^{\rm hd-pd},\hat \beta_{n}^{\rm hd-pd}) \in \argmin_{a \in \mathbf R, b \in \mathbf R^{p_n}} \frac{1}{n} \sum_{1 \leq j \leq n} (\delta_{Y, j} -a- \delta_{\psi, j}' b)^2 + \lambda_{ n}^{\rm hd-pd} \|\hat \Omega_n b\|_1~, \end{equation} where \begin{align*} \delta_{Y, j} & = (D_{\pi(2j - 1)} - D_{\pi(2j)}) (Y_{\pi(2j - 1)} - Y_{\pi(2j)}) \\ \delta_{\psi, j} & = (D_{\pi(2j - 1)} - D_{\pi(2j)}) (\psi_{n, \pi(2j - 1)} - \psi_{n,\pi(2j)})~, \end{align*} $\lambda_{n}^{\rm hd-pd}$ is a penalty parameter that will be disciplined by the assumptions below, $\hat{\Omega}_n = \diag(\hat{\omega}_{n,1},\cdots,\hat{\omega}_{n,p_n})$ is a diagonal matrix, and $\hat{\omega}_{n,l}$ is the penalty loading for the $l$th regressor. For some constants $\underline{c}$ and $\bar{c}$, we require that \begin{align} \label{eq:loadings1} & 0<\underline{c} \leq \liminf_{n \rightarrow \infty} \min_{1 \leq l \leq p_n} \hat{\omega}_{n,l} \leq \limsup_{n \rightarrow \infty} \max_{1 \leq l \leq p_n} \hat{\omega}_{n,l} \leq \bar{c} < \infty \end{align} with probability one. Let $\hat \Delta_n^{\rm hd-pd}$ denote the estimator in \eqref{eq:hatdelta} with $\hat m_{1, n}(X_i, W_{n, i}) = \hat m_{0, n}(X_i, W_{n, i}) = \hat \alpha_{ n}^{\rm hd-pd} + \psi_{n, i}' \hat \beta_{ n}^{\rm hd-pd}$. Because there is no penalty term for $a$ in \eqref{eq:LASSO}, $\hat \Delta_n^{\rm hd-pd} = \hat \alpha_{ n}^{\rm hd-pd}$. Our analysis of this estimator will require the following assumptions.
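The optimization problem in \eqref{eq:LASSO} is an ordinary LASSO with an unpenalized intercept. The sketch below solves it by coordinate descent with the penalty loadings $\hat\Omega_n$ set to the identity; the data-generating process, tuning parameter, and sparse coefficients are hypothetical stand-ins, and the (unpenalized) intercept estimates $\Delta(Q)$.

```python
# Penalized pairwise-difference regression (eq. LASSO) with Omega_n = I,
# solved by coordinate descent; the unpenalized intercept estimates Delta.
# DGP, tuning, and coefficients below are purely illustrative.
import numpy as np

def lasso_with_intercept(Z, y, lam, iters=200):
    """Minimize (1/n) * sum((y - a - Z b)^2) + lam * ||b||_1 over (a, b)."""
    nobs, p = Z.shape
    a, b = 0.0, np.zeros(p)
    c = (Z ** 2).mean(axis=0)
    for _ in range(iters):
        a = np.mean(y - Z @ b)                     # intercept: no penalty
        for l in range(p):
            r = y - a - Z @ b + Z[:, l] * b[l]     # partial residual
            rho = np.mean(Z[:, l] * r)
            b[l] = np.sign(rho) * max(abs(rho) - lam / 2.0, 0.0) / c[l]
    return a, b

rng = np.random.default_rng(5)
n, p = 400, 50
X = rng.uniform(size=2 * n)
psi = rng.normal(size=(2 * n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.0, -1.0, 0.5]                   # sparse signal
pi = np.argsort(X)
pairs = pi.reshape(n, 2)
flips = rng.integers(0, 2, size=n)
D = np.empty(2 * n)
D[pairs[:, 0]], D[pairs[:, 1]] = flips, 1 - flips
Y = 2.0 * D + X + psi @ beta_true + 0.1 * rng.normal(size=2 * n)

s = D[pairs[:, 0]] - D[pairs[:, 1]]                # signed pair differences
delta_Y = s * (Y[pairs[:, 0]] - Y[pairs[:, 1]])
delta_psi = s[:, None] * (psi[pairs[:, 0]] - psi[pairs[:, 1]])
delta_hd_pd, b_hat = lasso_with_intercept(delta_psi, delta_Y, lam=0.05)
```

In practice the penalty parameter and loadings would be chosen as in the assumptions below; the fixed value here is for illustration only.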
In our statement of the assumptions, we will make use of the quantity $(\alpha_{d,n}^{\rm hd-pd},\beta_{d,n}^{\rm hd-pd})$, which will be assumed to satisfy \begin{equation} \label{eq:sparse} s_n^{\rm hd-pd} = \max_{d \in \{0, 1\}} \|\beta_{d,n}^{\rm hd-pd}\|_0 \end{equation} and \begin{equation} \label{eq:foc} \|E[(1,\tilde \psi_{n, i}')' \epsilon_{n, i}^{\rm hd-pd}(d)]\|_\infty = o(\lambda_n^{\rm hd-pd})~, \end{equation} where $$\epsilon_{n, i}^{\rm hd-pd}(d) = Y_i(d) - \alpha_{d,n}^{\rm hd-pd} - \tilde \psi_{n, i}' \beta_{d, n}^{\rm hd-pd}~.$$ It is instructive to note that \eqref{eq:sparse} requires $\beta_{d,n}^{\rm hd-pd}$ to be sparse and \eqref{eq:foc} is the subgradient condition for an $\ell_1$-penalized regression of the outcome $Y_i(d)$ on $\tilde \psi_{n,i}$ when the penalty is of order $o(\lambda_n^{\rm hd-pd})$. If $p_n = o(n)$, then both conditions are satisfied for $\beta_{d, n}^{\rm hd-pd}$ equal to the coefficients of a linear projection of $Y_i(d)$ onto $\tilde \psi_{n,i}$. When $p_n \gg n$, but $E[Y_i(d)|X_i,W_i]$ is approximately sparse in the sense that there exists some sparse $\beta_{d,n}^*$ with $\max_{d \in \{0, 1\}}||\beta_{d,n}^*||_0 \ll n$ such that the approximation error $|E[Y_i(d)|X_i,W_i] - \tilde \psi_{n, i}'\beta_{d,n}^*|$ is sufficiently small, then \eqref{eq:sparse} and \eqref{eq:foc} are satisfied for $\beta_{d,n}^{\rm hd-pd} = \beta_{d,n}^*$. We emphasize, however, that these conditions can still hold when $E[Y_i(d)|X_i,W_i]$ is neither approximately sparse nor linear in $\tilde \psi_{n,i}$. We additionally require that \begin{equation} \label{eq:boundedbeta} \limsup_{n \rightarrow \infty}\max_{d \in \{0, 1\}}||\beta_{d,n}^{\rm hd-pd}||_\infty < \infty~. \end{equation} Further restrictions on $\beta_{d,n}^{\rm hd-pd}$ and $\lambda_n^{\rm hd-pd}$ will be imposed through a combination of the assumptions below. We now proceed with the statement of our assumptions.
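As an aside, condition \eqref{eq:foc} is the population analogue of the sample KKT (subgradient) condition that any LASSO solution satisfies exactly, and the sample version is easy to inspect numerically. A sketch on simulated data (scikit-learn; names and data-generating process ours):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p = 500, 20
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=n)

alpha = 0.05
fit = Lasso(alpha=alpha, fit_intercept=True, tol=1e-12, max_iter=100_000).fit(X, y)
resid = y - fit.predict(X)

# KKT / subgradient condition: |X_l' resid / n| <= alpha for every coordinate,
# with equality (up to solver tolerance) on the active set
grad = np.abs(X.T @ resid) / n
```

Here `grad.max()` is (numerically) at most `alpha`, and the active coordinates bind at `alpha`, mirroring the subgradient condition discussed above.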
The first assumption collects a variety of moment conditions that will be used in our formal analysis: \begin{assumption} \begin{enumerate}[(a)] \item For some $q > 2$ and constant $C_1$, \begin{align*} \sup_{n \geq 1} \max_{1 \leq l \leq p_n} E[|\psi_{n, i, l}|^q | X_i] &\leq C_1 \\ \sup_{n \geq 1} \max_{d \in \{0, 1\}} |\psi_{n,i}'\beta_{d,n}^{\rm hd-pd}| &\leq C_1 \\ \sup_{n \geq 1} |E[Y_i(d)|X_i,W_{n,i}]| &\leq C_1 \end{align*} with probability one. \item For some $c_0$, $\ubar \sigma$, $\bar \sigma$, the following statements hold with probability one: \begin{align*} \max_{d \in \{0, 1\}, 1 \leq l \leq p_n} \frac{1}{2n} \sum_{1 \leq i \leq 2n} E[\epsilon_{n, i}^4(d) | X_i] \leq c_0 &< \infty \\ \sup_{n \geq 1}~ \max_{d \in \{0, 1\}} E[\epsilon_{n, i}^4(d)] \leq c_0 &< \infty \\ \min_{d \in \{0, 1\}} \var[Y_i(d) - \psi_{n,i}'(\beta_{1,n}^{\rm hd-pd} + \beta_{0,n}^{\rm hd-pd})/2] \geq \ubar \sigma^2 &> 0\\ \min_{1 \leq l \leq p_n, d \in \{0, 1\}} \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} \var[\tilde \psi_{n, i, l} \epsilon_{n, i}(d) | X_i] \geq \ubar \sigma^2 &> 0 \\ \min_{1 \leq l \leq p_n} \frac{1}{n} \sum_{1 \leq j \leq n}E[\tilde \psi_{n,\pi(2j),l}^2| D^{(n)},X^{(n)}] E[\epsilon_{n,\pi(2j-1)}^2 D_{\pi(2j-1)}| D^{(n)},X^{(n)}] \geq \ubar \sigma^2 &> 0 \\ \min_{1 \leq l \leq p_n} \frac{1}{n} \sum_{1 \leq j \leq n}E[\tilde \psi_{n,\pi(2j-1),l}^2| D^{(n)},X^{(n)}] E[\epsilon_{n,\pi(2j)}^2 D_{\pi(2j)}| D^{(n)},X^{(n)}] \geq \ubar \sigma^2 &> 0 \\ \min_{1 \leq l \leq p_n, d \in \{0, 1\}} \var[E[\tilde \psi_{n, i, l} \epsilon_{n, i}(d) | X_i]] \geq \ubar \sigma^2 &> 0~. \end{align*} \end{enumerate} \label{ass:LASSO1-moments} \end{assumption} \noindent Assumption \ref{ass:LASSO1-moments}(a)--(b) are standard in the high-dimensional estimation literature; see, for instance, \cite{belloni2017program}. The last four inequalities in Assumption \ref{ass:LASSO1-moments}(b), in particular, permit us to apply the high-dimensional central limit theorem in \citet[Theorem 2.1]{chernozhukov2017central}.
As in the preceding sections, we will additionally require some discipline on the way in which pairs are formed. As before, we will require that units in each pair are ``close'' in the sense described by the first part of the following assumption, but we will additionally require a Lipschitz-like condition that will play the role of Assumption \ref{ass:Q}(c). \cite{bai2021inference} provide algorithms ensuring that part (a) is satisfied with $\zeta_n = O(n^{-1/(2k_x)})$ under weak assumptions on the distribution of $X_i$. \begin{assumption} \label{ass:LASSO1-match} \begin{enumerate}[(a)] \item For some $\zeta_n$, \[ \left ( \frac{1}{n}\sum_{1\leq j \leq n}\left\Vert X_{\pi(2j)} - X_{\pi(2j-1)} \right\Vert_2^2\right )^{1/2} \leq \sqrt{\var[||X_i||_2]} \zeta_n\quad \text{with probability one}~. \] \item For some $L > 0$ and any $x_1$ and $x_2$ in the support of $X_i$, we have \[ |(\Psi(x_1)-\Psi(x_2))'\beta_{d,n}^{\rm hd-pd}| \leq L ||x_1-x_2||_2~. \] \end{enumerate} \end{assumption} We next specify our restrictions on the penalty parameter $\lambda_{n}^{\rm hd-pd}$. \begin{assumption} \label{ass:LASSO1-penalty} \begin{enumerate}[(a)] \item For some $\ell \ell_n \rightarrow \infty$, \[ \lambda_{n}^{\rm hd-pd} = \ell \ell_n \left(\frac{1}{\sqrt n} \Phi^{-1} \left ( 1 - \frac{0.1}{2 \log(n) p_n} \right ) + \zeta_n\right)~, \] where $\zeta_n$ is as in Assumption \ref{ass:LASSO1-match}(a). \item $\Xi_n^2 (\log (p_n \vee n))^7 / n \to 0$, $\ell\ell_n s_n^{\rm hd-pd} \log (p_n \vee n) / \sqrt{n} \to 0$, and $s_n^{\rm hd-pd} \log^{1/2}(p_n \vee n) \ell\ell_n \zeta_n \to 0$. \end{enumerate} \end{assumption} \noindent We note that the first two requirements in Assumption \ref{ass:LASSO1-penalty}(b) allow $p_n$ to be much greater than $n$. If $\zeta_n = O(n^{-1/(2k_x)})$, then the last requirement in Assumption \ref{ass:LASSO1-penalty}(b) implies $s_n^{\rm hd-pd} = o(n^{1/{(2k_x)}})$.
Finally, as is common in the analysis of $\ell_1$-penalized regression, we require a ``restricted eigenvalue'' condition. See, for instance, \cite{belloni2017program}. This assumption permits us to apply \citet[Lemma 4.1]{bickel2009simultaneous} and establish the error bounds for $$|\hat \alpha_n^{\rm hd-pd} - \alpha_n^{\rm hd-pd}|+||\hat \beta_n^{\rm hd-pd} - \beta_n^{\rm hd-pd}||_1 \quad \text{and} \quad \frac{1}{n}\sum_{1\leq j \leq n}\left(\hat \alpha_n^{\rm hd-pd} - \alpha_n^{\rm hd-pd}+\delta_{\psi,j}'(\hat \beta_n^{\rm hd-pd} - \beta_n^{\rm hd-pd})\right)^2,$$ where $\alpha_n^{\rm hd-pd} = \alpha_{1,n}^{\rm hd-pd}-\alpha_{0,n}^{\rm hd-pd}$ and $\beta_n^{\rm hd-pd} = (\beta_{1,n}^{\rm hd-pd}+\beta_{0,n}^{\rm hd-pd})/2$. \begin{assumption} \label{ass:LASSO1-ev} For some $\kappa_1 > 0$, $\kappa_2$ and $\ell_n \to \infty$, the following statements hold with probability approaching one: \begin{align*} \inf_{d \in \{0, 1\}, v \in \mathbf R^{p_n+1}: \|v\|_0 \leq (s_n^{\rm hd-pd}+1) \ell_n} (\|v\|_2^2)^{-1} v' \Bigg ( \frac{1}{n} \sum_{1 \leq j \leq n} \breve \delta_{\psi,j} \breve \delta_{\psi,j}' \Bigg ) v & \geq \kappa_1 \\ \sup_{d \in \{0, 1\}, v \in \mathbf R^{p_n+1}: \|v\|_0 \leq (s_n^{\rm hd-pd}+1) \ell_n} (\|v\|_2^2)^{-1} v' \Bigg ( \frac{1}{n} \sum_{1 \leq j \leq n} \breve \delta_{\psi,j} \breve \delta_{\psi,j}' \Bigg ) v & \leq \kappa_2~, \end{align*} where $\breve \delta_{\psi,j} = (1,\delta_{\psi,j}')'$. \end{assumption} Using these assumptions, the following theorem characterizes the behavior of $\hat \Delta_n^{\rm hd-pd}$: \begin{theorem} \label{thm:LASSO} Suppose $Q$ satisfies Assumption \ref{ass:Q} and the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}. Further suppose Assumptions \ref{ass:LASSO1-moments}--\ref{ass:LASSO1-ev} and \eqref{eq:loadings1} -- \eqref{eq:boundedbeta} hold. 
Then, \eqref{eq:rate}, \eqref{eq:L2}, and Assumption \ref{ass:md} are satisfied with $\hat m_{d, n} = \hat \alpha_{n}^{\rm hd-pd} + \psi_{n, i}' \hat \beta_{n}^{\rm hd-pd}$ and $m_{d, n}(X_i, W_{n, i}) = \alpha_{n}^{\rm hd-pd} + \psi_{n, i}'\beta_{n}^{\rm hd-pd}~$ for $d \in \{0, 1\}$ and $n \geq 1$. Moreover, the variance of $\hat \Delta_n^{\rm hd-pd}$, denoted by $\sigma_n^{\rm hd-pd,2}$, satisfies \begin{align*} \limsup_{n \to \infty} (\sigma_n^{\rm hd-pd,2} - \sigma_n^{\rm na,2}) \leq 0~. \end{align*} If we further assume that the true specification is approximately sparse, i.e., there exists $\beta^*_{d,n}$ such that $||\beta^*_{d,n}||_1 = O(s_n)$, $E[Y_i(d)|X_i,W_i] = \alpha^*_{d,n} + \psi_{n,i}'\beta^*_{d,n} + R_{n,i}$ and $E [R_{n,i}^2] = o(1)$, then $\sigma_n^{\rm hd-pd,2}$ achieves the minimum variance, i.e., \begin{align*} \lim_{n \to \infty} \sigma_n^{\rm hd-pd, 2} = \sigma_2^2(Q) + \sigma_3^2(Q)~. \end{align*} \end{theorem} \begin{remark} If the additional covariates $W_{n,i}$ are fixed-dimensional, and $\psi_{n,i}$ contains sieve bases of $(W_{n,i},X_i)$, then approximate sparsity holds under appropriate smoothness conditions on the conditional expectation. Under these circumstances, Theorem \ref{thm:LASSO} implies the LASSO-based adjustment achieves the minimum variance derived in Remark \ref{remark:equivalent}, which coincides with the semiparametric efficiency bound derived by \cite{armstrong2022asymptotic}. \end{remark} \begin{remark} In practice, we choose \[ \ell\ell_n = \sqrt{\log \log n}/5 \] and replace $\zeta_n$ by \[ \Bigg (\frac{1}{n}\sum_{1\leq j \leq n}\left\Vert X_{\pi(2j)} - X_{\pi(2j-1)} \right\Vert_2^2\Bigg )^{1/2}/\hat{\sigma}_X~, \] where $\hat{\sigma}_X$ is the sample standard deviation of $\{||X_i||_2\}_{1\leq i \leq 2n}$.
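In code, this practical choice of $\lambda_n^{\rm hd-pd}$ amounts to the following (SciPy; the inputs shown are illustrative, and the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def lambda_hd_pd(n, p_n, zeta_hat):
    """Penalty lambda_n^{hd-pd} with the practical choice ll_n = sqrt(log log n) / 5."""
    ll_n = np.sqrt(np.log(np.log(n))) / 5.0
    # Gaussian quantile term from Assumption on the penalty parameter
    quantile = norm.ppf(1.0 - 0.1 / (2.0 * np.log(n) * p_n))
    return ll_n * (quantile / np.sqrt(n) + zeta_hat)

# zeta_hat would be the normalized within-pair distance described above
lam = lambda_hd_pd(n=200, p_n=100, zeta_hat=0.05)
```

With these illustrative inputs the penalty is roughly $0.08$, small enough that the $\ell_1$ term disciplines rather than dominates the fit.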
\end{remark} \begin{remark} \label{remark:loadings} While our theory only requires that the penalty loadings $\hat \omega_{n,l}$, $l = 1, \ldots, p_n$, satisfy \eqref{eq:loadings1}, we recommend employing an iterative estimation procedure outlined by \cite{belloni2017program} to estimate $(\hat \alpha_n^{\rm hd-pd},\hat \beta_{n}^{\rm hd-pd})$, in which the $m$-th step's penalty loadings are estimated based on the $(m-1)$-th step's LASSO estimates. Formally, this iterative procedure is described by the following algorithm: \begin{algorithm} \hspace{1cm} \begin{enumerate} \item[] \underline{Step 0}: Set $\hat \epsilon_{n,j}^{{\rm hd-pd},(0)} = \delta_{Y,j}$. \item[] \hspace{1in} $\vdots$ \item[] \underline{Step $m$}: Compute $\hat \omega_{n,l}^{(m)} = \sqrt{\frac{1}{n}\sum_{1\leq j \leq n} \delta_{\psi,j,l}^2 (\hat \epsilon_{n,j}^{{\rm hd-pd},(m-1)})^2}$, compute $(\hat \alpha_n^{{\rm hd-pd}, (m)},\hat \beta_n^{{\rm hd-pd}, (m)})$ following \eqref{eq:LASSO} with $\hat \omega_{n,l}^{(m)}$ as the penalty loadings, and set $\hat \epsilon_{n,j}^{{\rm hd-pd},(m)} = \delta_{Y,j} - \hat \alpha_n^{{\rm hd-pd}, (m)}-\delta_{\psi,j}'\hat \beta_n^{{\rm hd-pd}, (m)}$. \item[] \hspace{1in} $\vdots$ \item[] \underline{Step $M$}: $\ldots$ \item[] \underline{Step $M+1$}: Set $(\hat \alpha_n^{\rm hd-pd},\hat \beta_n^{\rm hd-pd}) = (\hat \alpha_n^{{\rm hd-pd},(M)},\hat \beta_n^{{\rm hd-pd},(M)})$. \end{enumerate} \end{algorithm} \noindent As suggested by \cite{belloni2017program}, we set $M$ to be 15. We note that the {\tt R} package \textbf{hdm} has a built-in option for this iterative procedure. For this choice of penalty loadings, arguments similar to those in \cite{belloni2017program} can be used to verify \eqref{eq:loadings1} under ``matched pairs'' designs.
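The iteration above can be sketched as follows, again with scikit-learn's \texttt{Lasso} as a stand-in: the loadings are absorbed by rescaling the columns of $\delta_\psi$, and \texttt{alpha} is set to $\lambda/2$ because scikit-learn scales the squared-error term by $1/(2n)$ rather than $1/n$. All names and the simulated inputs are ours.

```python
import numpy as np
from sklearn.linear_model import Lasso

def iterative_lasso(dY, dpsi, lam, M=15):
    """Iterated penalty loadings: omega^(m) computed from step (m-1) residuals."""
    eps = dY.copy()                                   # Step 0 residuals
    for _ in range(M):
        omega = np.sqrt((dpsi ** 2 * eps[:, None] ** 2).mean(axis=0))
        omega = np.maximum(omega, 1e-8)               # guard against zero loadings
        fit = Lasso(alpha=lam / 2, fit_intercept=True).fit(dpsi / omega, dY)
        a, b = fit.intercept_, fit.coef_ / omega      # undo the column rescaling
        eps = dY - a - dpsi @ b
    return a, b

rng = np.random.default_rng(2)
n, p = 200, 30
dpsi = rng.normal(size=(n, p))
dY = 1.0 + dpsi[:, 0] + rng.normal(size=n)            # one relevant regressor
a_hat, b_hat = iterative_lasso(dY, dpsi, lam=0.2)
```

The intercept `a_hat` recovers the treatment effect and `b_hat` is sparse, with most noise coordinates set exactly to zero.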
\end{remark} \subsection{Second LASSO-based Adjustment} \label{sec:LASSO2} For $d \in \{0, 1\}$, define \begin{equation} \label{eq:LASSO2} (\hat \alpha_{d, n}^{\rm hd}, \hat \beta_{d, n}^{\rm hd}) \in \argmin_{a \in \mathbf R, b \in \mathbf R^{p_n}} \frac{1}{n} \sum_{1 \leq i \leq 2n: D_i = d} (Y_i - a - \psi_{n,i}' b)^2 + \lambda_{d, n}^{\rm hd} \|\hat \Omega_n(d) b\|_1~, \end{equation} where $\lambda_{d, n}^{\rm hd}$ is a penalty parameter that will be disciplined by the assumptions below, $\hat{\Omega}_n(d) = \diag(\hat{\omega}_{n,1}(d),\cdots,\break\hat{\omega}_{n,p_n}(d))$ is a diagonal matrix, and $\hat{\omega}_{n,l}(d)$ is the penalty loading for the $l$th regressor. Define $\Omega_n^*(d) = \diag(\omega_{n,1}^*(d),\cdots,\omega_{n,p_n}^*(d))$, where $\omega_{n,l}^{*,2}(d) = \var[\psi_{n, i, l} v_i]$ and $v_i = Y_i - E[Y_i|X_i,W_i]$. For some $\ubar c$ and $\bar c$, we require that \begin{equation} \label{eq:loadings2} 0<\underline{c} \leq \liminf_{n \rightarrow \infty} \min_{1 \leq l \leq p_n} \hat{\omega}_{n,l}(d)/\omega_{n,l}^\ast(d) \leq \limsup_{n \rightarrow \infty} \max_{1 \leq l \leq p_n} \hat{\omega}_{n,l}(d)/\omega_{n,l}^\ast(d) \leq \bar{c} < \infty \end{equation} with probability one. Let $\hat \Delta_n^{\rm hd}$ denote the estimator in \eqref{eq:hatdelta} with $\hat m_{d, n} = \hat \alpha_{d, n}^{\rm hd} + \psi_{n, i}' \hat \beta_{d, n}^{\rm hd}$ for $d \in \{0, 1\}$. Our analysis of this estimator will require the following assumptions. In our statement of the assumptions, we will make use of the quantity $(\alpha_{d, n}^{\rm hd},\beta_{d, n}^{\rm hd})$, which will be assumed to satisfy \begin{equation} \label{eq:sparse2} s_n^{\rm hd} = \max_{d \in \{0, 1\}} \|\beta_{d,n}^{\rm hd}\|_0 \end{equation} and \begin{equation} \label{eq:foc2} \|\Omega_n^\ast(d)^{-1}E[\psi_{n, i} \epsilon_{n, i}^{\rm hd}(d)]\|_\infty + |E[\epsilon_{n, i}^{\rm hd}(d)]|= o\left( \lambda_{d, n}^{\rm hd}\right)~, \end{equation} where \[ \epsilon_{n, i}^{\rm hd}(d) = Y_i(d) - \alpha_{d, n}^{\rm hd} - \psi_{n, i}' \beta_{d, n}^{\rm hd}~.
\] Here, it is useful to recall the discussion after equations \eqref{eq:sparse}--\eqref{eq:foc}. We now proceed with the statement of our assumptions. The first assumption collects a variety of moment conditions that will be used in our formal analysis: \begin{assumption} \label{ass:LASSO2-moments} \begin{enumerate}[(a)] \item For some $q > 2$ and constant $C_1$, \begin{align*} \sup_{n \geq 1} \max_{1 \leq l \leq p_n} E[|\psi_{n, i, l}|^q | X_i] &\leq C_1 \\ \sup_{n \geq 1} \max_{d \in \{0, 1\}} |\psi_{n,i}'\beta_{d,n}^{\rm hd}| &\leq C_1 \\ \sup_{n \geq 1} |E[Y_i(d)|X_i,W_{n,i}]| &\leq C_1 \end{align*} with probability one. \item For some $c_0$, $\ubar \sigma$, $\bar \sigma$, \[ 0<\ubar\sigma^2 \leq \liminf_{n \rightarrow \infty}~ \min_{d \in \{0, 1\}, 1 \leq l \leq p_n} \omega_{n,l}^{*,2}(d) \leq \limsup_{n \rightarrow \infty}~ \max_{d \in \{0, 1\}, 1 \leq l \leq p_n} \omega_{n,l}^{*,2}(d) \leq \bar{\sigma}^2 < \infty~. \] Moreover, the following statements hold with probability one: \begin{align*} \sup_{n \geq 1} \max_{d \in \{0, 1\}}E[(\psi_{n, i}'\beta_{d,n}^{\rm hd})^2] \leq c_0 & < \infty \\ \max_{d \in \{0, 1\}, 1 \leq l \leq p_n} \frac{1}{2n} \sum_{1 \leq i \leq 2n} E[\epsilon_{n, i}^4(d) | X_i] \leq c_0 & < \infty \\ \sup_{n \geq 1}~ \max_{d \in \{0, 1\}} E[\epsilon_{n, i}^4(d)] \leq c_0 & < \infty \\ \min_{d \in \{0, 1\}} \var[Y_i(d) - \psi_{n,i}'(\beta_{1,n}^{\rm hd} + \beta_{0,n}^{\rm hd})/2] \geq \ubar \sigma^2 & > 0 \\ \min_{1 \leq l \leq p_n, d \in \{0, 1\}} \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} \var[\psi_{n, i, l} \epsilon_{n, i}(d) | X_i] \geq \ubar \sigma^2 & > 0 \\ \min_{1 \leq l \leq p_n, d \in \{0, 1\}} \var[E[\psi_{n, i, l} \epsilon_{n, i}(d) | X_i]] \geq \ubar \sigma^2 & > 0~. \end{align*} \end{enumerate} \end{assumption} \noindent The discussion after Assumption \ref{ass:LASSO1-moments} applies to the preceding assumption as well. Our analysis will, as before, also require some discipline on the way in which pairs are formed.
For this purpose, Assumption \ref{ass:close} will suffice, but we will need an additional Lipschitz-like condition, similar to Assumption \ref{ass:LASSO1-match}(b): \begin{assumption} \label{ass:LASSO2-match} For some $L > 0$ and any $x_1$ and $x_2$ in the support of $X_i$, we have \[ |(\Psi(x_1)-\Psi(x_2))'\beta_{d,n}^{\rm hd}| \leq L ||x_1-x_2||_2~. \] \end{assumption} We next specify our restrictions on the penalty parameter $\lambda_{d, n}^{\rm hd}$. \begin{assumption} \label{ass:LASSO2-penalty} \begin{enumerate}[(a)] \item For some $\ell \ell_n \rightarrow \infty$, \[ \lambda_{d, n}^{\rm hd} = \frac{\ell \ell_n }{\sqrt n} \Phi^{-1} \left ( 1 - \frac{0.1}{2 \log(n) p_n} \right )~. \] \item $\Xi_n^2 (\log p_n)^7 / n \to 0$ and $(\ell\ell_n s_n^{\rm hd} \log p_n) / \sqrt{n} \to 0$. \end{enumerate} \end{assumption} We note that Assumption \ref{ass:LASSO2-penalty}(b) permits $p_n$ to be much greater than $n$. Finally, as is common in the analysis of $\ell_1$-penalized regression, we require a ``restricted eigenvalue'' condition. This assumption permits us to apply \citet[Lemma 4.1]{bickel2009simultaneous} and establish error bounds for $|\hat \alpha_{d,n}^{\rm hd} - \alpha_{d,n}^{\rm hd}|+||\hat \beta_{d,n}^{\rm hd} - \beta_{d,n}^{\rm hd}||_1$ and $\frac{1}{n}\sum_{1\leq i\leq 2n}I\{D_i=d\}\left(\hat \alpha_{d,n}^{\rm hd} - \alpha_{d,n}^{\rm hd}+\psi_{n,i}'(\hat \beta_{d,n}^{\rm hd} - \beta_{d,n}^{\rm hd})\right)^2$.
\begin{assumption} \label{ass:LASSO2-ev} For some $\kappa_1 > 0, \kappa_2$ and $\ell_n \to \infty$, the following statements hold with probability approaching one: \begin{align*} \inf_{d \in \{0, 1\}, v \in \mathbf R^{p_n+1}: \|v\|_0 \leq (s_n^{\rm hd}+1) \ell_n} (\|v\|_2^2)^{-1} v' \Bigg ( \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} \breve \psi_{n, i} \breve \psi_{n, i}' \Bigg ) v & \geq \kappa_1 \\ \sup_{d \in \{0, 1\}, v \in \mathbf R^{p_n+1}: \|v\|_0 \leq (s_n^{\rm hd}+1) \ell_n} (\|v\|_2^2)^{-1} v' \Bigg ( \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} \breve \psi_{n, i} \breve \psi_{n, i}' \Bigg ) v & \leq \kappa_2 \\ \inf_{d \in \{0, 1\}, v \in \mathbf R^{p_n+1}: \|v\|_0 \leq (s_n^{\rm hd}+1) \ell_n} (\|v\|_2^2)^{-1} v' \Bigg ( \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} E[\breve \psi_{n, i}\breve \psi_{n, i}' | X_i] \Bigg ) v & \geq \kappa_1 \\ \sup_{d \in \{0, 1\}, v \in \mathbf R^{p_n+1}: \|v\|_0 \leq (s_n^{\rm hd}+1) \ell_n} (\|v\|_2^2)^{-1} v' \Bigg ( \frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} E[\breve \psi_{n, i} \breve \psi_{n, i}' | X_i] \Bigg ) v & \leq \kappa_2~, \end{align*} where $\breve \psi_{n,i} = (1,\psi_{n,i}')'$. \end{assumption} Using these assumptions, the following theorem characterizes the behavior of $\hat \Delta_n^{\rm hd}$: \begin{theorem} \label{thm:LASSO2} Suppose $Q$ satisfies Assumption \ref{ass:Q} and the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}. Further suppose Assumptions \ref{ass:LASSO2-moments}--\ref{ass:LASSO2-ev} and \eqref{eq:loadings2} hold. Then, \eqref{eq:rate}, \eqref{eq:L2}, and Assumption \ref{ass:md} are satisfied with $\hat m_{d, n} = \hat \alpha_{d,n}^{\rm hd}+ \psi_{n, i}' \hat \beta_{d, n}^{\rm hd}$ and \[ m_{d, n}(X_i, W_{n, i}) = \alpha_{d,n}^{\rm hd} + \psi_{n, i}' \beta_{d, n}^{\rm hd} \] for $d \in \{0, 1\}$ and $n \geq 1$. Denote the variance of $\hat \Delta_n^{\rm hd}$ by $\sigma_n^{\rm hd,2}$. 
If the LASSO adjustment is approximately correctly specified, i.e., $E [Y_i(d)|X_i,W_i] = \alpha_{d,n}^{\rm hd} + \psi_{n,i}'\beta^{\rm hd}_{d,n} + R_{n,i}(d)$ and $\max_{d \in \{0, 1\}}E[R_{n,i}^2(d)] = o(1)$, then $\sigma_n^{\rm hd,2}$ achieves the minimum variance, i.e., \begin{align*} \lim_{n \to \infty} \sigma_n^{\rm hd,2} = \sigma_2^2(Q) + \sigma_3^2(Q)~. \end{align*} \end{theorem} \begin{remark} As in Remark \ref{remark:loadings}, we recommend employing an iterative estimation procedure outlined by \cite{belloni2017program} to estimate $(\hat \alpha_{d, n}^{\rm hd},\hat \beta_{d, n}^{\rm hd})$, in which the $m$-th step's penalty loadings are estimated based on the $(m-1)$-th step's LASSO estimates. Formally, this iterative procedure is described by the following algorithm: \begin{algorithm} \hspace{1cm} \begin{enumerate} \item[] \underline{Step 0}: Set $\hat \epsilon_{n,i}^{{\rm hd},(0)}(d) = Y_i$ if $D_i = d$. \item[] \hspace{1in} $\vdots$ \item[] \underline{Step $m$}: Compute $\hat \omega_{n,l}^{(m)}(d) = \sqrt{\frac{1}{n} \sum_{1 \leq i \leq 2n} I \{D_i = d\} \psi_{n, i, l}^2 (\hat \epsilon_{n, i}^{{\rm hd}, (m - 1)}(d))^2}$, compute $(\hat \alpha_{d, n}^{{\rm hd}, (m)},\hat \beta_{d, n}^{{\rm hd}, (m)})$ following \eqref{eq:LASSO2} with $\hat \omega_{n,l}^{(m)}(d)$ as the penalty loadings, and set $\hat \epsilon_{n,i}^{{\rm hd},(m)}(d) = Y_i -\hat \alpha_{d, n}^{{\rm hd},(m)}-\psi_{n,i}'\hat \beta_{d, n}^{{\rm hd}, (m)}$ if $D_i = d$. \item[] \hspace{1in} $\vdots$ \item[] \underline{Step $M$}: $\ldots$ \item[] \underline{Step $M+1$}: Set $(\hat \alpha_{d, n}^{\rm hd},\hat \beta_{d, n}^{\rm hd}) = (\hat \alpha_{d, n}^{{\rm hd},(M)},\hat \beta_{d, n}^{{\rm hd}, (M)})$. \end{enumerate} \end{algorithm} As suggested by \cite{belloni2017program}, we set $M$ to be 15. We note that the {\tt R} package \textbf{hdm} has a built-in option for this iterative procedure. For this choice of penalty loadings, arguments similar to those in \cite{belloni2017program} can be used to verify \eqref{eq:loadings2} under ``matched pairs'' designs.
\end{remark} When the LASSO adjustment is approximately correctly specified, Theorem \ref{thm:LASSO2} shows that $\hat \Delta_n^{\rm hd}$ achieves the minimum variance derived in Remark \ref{remark:equivalent}, and thus is guaranteed to be weakly more efficient than the ATE estimator without any adjustments. On the other hand, when the LASSO adjustment is not approximately correctly specified, $\hat \Delta_n^{\rm hd}$ suffers from \cite{freedman2008regression}'s critique that it may be less efficient than $\hat \Delta_n^{\rm unadj}$. To overcome this problem, we consider an additional step in which we treat the LASSO adjustments $(\psi_{n, i}' \hat \beta_{1, n}^{\rm hd}, \psi_{n, i}'\hat \beta_{0, n}^{\rm hd})$ as linear covariates and rerun a linear regression with pair fixed effects. Such a procedure has also been studied by \cite{cohen2020no-harm} in the setting with low-dimensional covariates and complete randomization. Theorem \ref{thm:refit} below shows the new estimator for the ATE is weakly more efficient than both $\hat \Delta_n^{\rm unadj}$ and $\hat \Delta_n^{\rm hd}$. To state the results, define $\Gamma_{n,i} = (\psi_{n, i}' \beta_{1, n}^{\rm hd}, \psi_{n, i}' \beta_{0, n}^{\rm hd})'$, $\hat \Gamma_{n,i} = (\psi_{n, i}' \hat \beta_{1, n}^{\rm hd}, \psi_{n, i}'\hat \beta_{0, n}^{\rm hd})'$, and $\hat \Delta_n^{\rm hd-f}$ as the estimator in \eqref{eq:pfe} with $\psi_i$ replaced by $\hat \Gamma_{n,i}$. Note that $\hat \Delta_n^{\rm hd-f}$ remains numerically the same if we include the intercept $\hat \alpha_{d,n}^{\rm hd}$ in the definition of $\hat \Gamma_{n,i}$. Following Remark \ref{remark:fogarty}, $\hat \Delta_n^{\rm hd-f}$ is the intercept in the linear regression of $(D_{\pi(2j-1)}-D_{\pi(2j)})(Y_{\pi(2j-1)} - Y_{\pi(2j)})$ on a constant and $(D_{\pi(2j-1)}-D_{\pi(2j)})(\hat \Gamma_{n,\pi(2j-1)} - \hat \Gamma_{n,\pi(2j)})$. Replacing $\hat \Gamma_{n,i}$ by $\hat \Gamma_{n,i} + (\hat \alpha_{1,n}^{\rm hd},\hat \alpha_{0,n}^{\rm hd})'$ will not change the regression estimators.
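The refitting step is ordinary least squares on paired differences, so it can be sketched in a few lines (NumPy; here $\hat\Gamma$ is simulated rather than taken from a first-stage LASSO fit, and all names and the data-generating process are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
# First-stage fitted values for each unit: Gamma[i] = (psi_i' b1_hat, psi_i' b0_hat)
Gamma = rng.normal(size=(2 * n, 2))
D = np.tile([1, 0], n)                       # consecutive units form a pair
Y = 0.5 * D + Gamma @ np.array([0.7, 0.3]) + rng.normal(size=2 * n)  # true ATE = 0.5

# Paired differences of outcomes and of the two adjustment covariates
sgn = D[0::2] - D[1::2]
dY = sgn * (Y[0::2] - Y[1::2])
dG = sgn[:, None] * (Gamma[0::2] - Gamma[1::2])

# OLS of dY on a constant and dG; the intercept is the refitted ATE estimate
Xmat = np.column_stack([np.ones(n), dG])
coef, *_ = np.linalg.lstsq(Xmat, dY, rcond=None)
delta_hat = coef[0]
```

As the text notes, shifting the columns of $\hat\Gamma$ by constants only moves the slope-covariate means, so the intercept-based estimate is unchanged.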
The following assumption will be employed to control $\Gamma_{n,i}$ in our subsequent analysis: \begin{assumption}\label{ass:refit} For some $\kappa_1 > 0$ and $\kappa_2$, \vspace{-.3cm} \begin{align*} & \inf_{n \geq 1} \inf_{v \in \mathbf R^2} ||v||_2^{-2}v' E [\var[\Gamma_{n,i}|X_i]]v \geq \kappa_1 \\ & \sup_{n \geq 1} \sup_{v \in \mathbf R^2} ||v||_2^{-2}v' E [\var[\Gamma_{n,i}|X_i]]v \leq \kappa_2~. \end{align*} \end{assumption} The following theorem characterizes the behavior of $\hat \Delta_n^{\rm hd-f}$: \begin{theorem} \label{thm:refit} Suppose $Q$ satisfies Assumption \ref{ass:Q} and the treatment assignment mechanism satisfies Assumptions \ref{ass:treatment}--\ref{ass:close}. Further suppose Assumptions \ref{ass:LASSO2-moments}--\ref{ass:LASSO2-ev} and \eqref{eq:loadings2} hold. In addition, suppose Assumption \ref{ass:refit} holds. Then, \eqref{eq:rate}, \eqref{eq:L2}, and Assumption \ref{ass:md} are satisfied with $\hat m_{d, n}(X_i, W_{n, i}) = \hat \Gamma_{n, i}' \hat \beta_{n}^{\rm hd-f}$ and \[ m_{d, n}(X_i, W_{n, i}) = \Gamma_{n, i}' \beta_{n}^{\rm hd-f} \] for $d \in \{0, 1\}$ and $n \geq 1$, where $\beta_{n}^{\rm hd-f} = (2E [\var[\Gamma_{n, i}|X_i]])^{-1}E[\cov[\Gamma_{n, i},Y_i(1)+Y_i(0)|X_i]]$. In addition, denote the variance of $\hat \Delta_n^{\rm hd-f}$ as $\sigma_n^{\rm hd-f,2}$. Then, $\sigma_n^{\rm na,2} \geq \sigma_n^{\rm hd-f,2}$ and $\sigma_n^{\rm hd,2}\geq \sigma_n^{\rm hd-f,2}$. \end{theorem} \begin{remark} \label{remark:comparison} We briefly comment on the comparison between the two LASSO-based adjustment methods. First, when the LASSO adjustment is approximately correctly specified, then both methods produce the same adjustment asymptotically, which achieves the minimum variance. Second, when the pseudo-true values in the two methods are different, it is unclear which adjustment is more efficient. 
However, it is possible to use the regression adjustments obtained from both LASSO estimations as regressors in the refitting step in the second method and produce one regression-adjusted ATE estimator which is weakly more efficient than both $\hat \Delta_n^{\rm hd-pd}$ and $\hat \Delta_n^{\rm hd-f}$, provided that the full rank condition in Assumption \ref{ass:refit} holds. Third, the first method tends to select fewer regressors when the dimension of $X_{i}$ is large, as its $\ell_1$ penalty depends on $\zeta_n$. Fourth, the $\ell_1$ penalty of the second method is well studied in the literature. See, for example, \cite{belloni2012sparse}, \cite{belloni2014inference}, and \cite{belloni2017program}. Finally, it is possible to relax the full rank condition in Assumption \ref{ass:refit} by running a ridge regression or truncating the minimum eigenvalue of the Gram matrix in the refitting step in the second method, which is left for future work. \end{remark} \section{Simulations} \label{sec:simulations} In this section, we conduct Monte Carlo experiments to assess the finite-sample performance of the inference methods proposed in the paper. In all cases, we follow \cite{bai2021inference} and consider tests of the hypothesis \begin{align*} H_{0}: \Delta(Q) = \Delta_{0} \text{ versus } H_{1}: \Delta(Q) \neq \Delta_{0} \end{align*} with $\Delta_{0}=0$ at nominal level $\alpha=0.05$. \subsection{Data Generating Processes} We generate potential outcomes for $d\in\{0,1\}$ and $1\leq i\leq2n$ by the equation \begin{equation} Y_{i}(d)=\mu_{d}+m_{d}(X_{i},W_{i})+\sigma_{d}(X_{i},W_{i})\epsilon_{d,i}~,\label{eq:simulpart01} \end{equation} where $\mu_{d}$, $m_{d}\left(X_{i},W_{i}\right)$, $\sigma_{d}\left(X_{i},W_{i}\right)$, and $\epsilon_{d,i}$ are specified in each model as follows. In each of the specifications, $(X_{i},W_{i},\epsilon_{0,i},\epsilon_{1,i})$ are i.i.d. across $i$. The number of pairs $n$ is set to 100 or 200.
The number of replications is 10,000. \begin{description} \item [{Model 1}] $\left(X_{i},W_{i}\right)^\top=\left(\Phi\left(V_{i1}\right),\Phi\left(V_{i2}\right)\right)^\top$, where $\Phi(\cdot)$ is the standard normal distribution function and \[ V_{i}\sim N\left(\left(\begin{array}{l} 0\\ 0 \end{array}\right),\left(\begin{array}{ll} 1 & \rho\\ \rho & 1 \end{array}\right)\right), \] $m_{0}\left(X_{i},W_{i}\right)=\gamma\left(W_{i}-\frac{1}{2}\right)$; $m_{1}\left(X_{i},W_{i}\right)=m_{0}\left(X_{i},W_{i}\right)$; $\epsilon_{d,i}\sim N(0,1)$ for $d=0,1$; $\sigma_{0}\left(X_{i},W_{i}\right)=\sigma_{0}=1$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\sigma_{1}$. We set $\gamma=4$, $\sigma_{1}=1$, $\rho=0.2$. \item [{Model 2}] $\left(X_{i},W_{i}\right)^\top=\left(\Phi\left(V_{i1}\right),V_{i1}V_{i2}\right)^\top$, where $V_{i}$ is the same as in Model 1. $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}\left(W_{i} - \rho\right)+ \gamma_{2}\left(\Phi^{-1}\left(X_{i}\right)^2 - 1\right)$. $\epsilon_{d,i}\sim N(0,1)$ for $d=0,1$; $\sigma_{0}\left(X_{i},W_{i}\right)=\sigma_{0}=1$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\sigma_{1}$. $\left(\gamma_{1},\gamma_{2}\right)^\top=\left(1,2\right)^\top$, $\sigma_{1}=1$, $\rho=0.2$. \item [{Model 3}] The same as in Model 2, except that $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}\left(W_{i} - \rho\right)+ \gamma_{2}\left(\Phi\left(W_{i}\right) - \frac{1}{2}\right) + \gamma_{3}\left(\Phi^{-1}\left(X_{i}\right)^2 - 1\right)$ with $\left(\gamma_{1},\gamma_{2},\gamma_{3}\right)^\top=\left(\frac{1}{4},1,2\right)^\top$. \item [{Model 4}] $\left(X_{i},W_{i}\right)^\top=\left(V_{i1},V_{i1}V_{i2}\right)^\top$, where $V_{i}$ is the same as in Model 1. $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}\left(W_{i} - \rho\right)+ \gamma_{2}\left(\Phi\left(W_{i}\right) - \frac{1}{2}\right) + \gamma_{3}\left(X_{i}^2 - 1\right)$.
$\epsilon_{d,i}\sim N(0,1)$ for $d=0,1$; $\sigma_{0}\left(X_{i},W_{i}\right)=\sigma_{0}=1$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\sigma_{1}$. $\left(\gamma_{1},\gamma_{2},\gamma_{3}\right)^\top=\left(2,1,2\right)^\top$. \item [{Model 5}] The same as in Model 4, except that $m_{1}\left(X_{i},W_{i}\right)= m_{0}\left(X_{i},W_{i}\right) + \left(\Phi\left(X_{i}\right) - \frac{1}{2}\right)$. \item [{Model 6}] The same as in Model 5, except that $\sigma_{0}\left(X_{i},W_{i}\right)=\left(\Phi\left(X_{i}\right)+0.5\right)$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\left(\Phi\left(X_{i}\right)+0.5\right)\sigma_{1}$. \item [{Model 7}] $X_{i}=\left(V_{i1},V_{i2}\right)^\top$ and $W_{i}=\left(V_{i1}V_{i3},V_{i2}V_{i4}\right)^\top$, where $V_{i} \sim N(0,\Sigma)$ with $\text{dim}(V_{i})=4$ and $\Sigma$ consisting of 1 on the diagonal and $\rho$ on all other elements. $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}^\prime\left(W_{i} - \rho\right) + \gamma_{2}^\prime\left(\Phi\left(W_{i}\right) - \frac{1}{2}\right) + \gamma_{3}\left(X_{i1}^2 - 1\right)$ with $\gamma_{1}=\left(2,2\right)^\top,\gamma_{2}=\left(1,1\right)^\top, \gamma_{3}=1$. $\epsilon_{d,i}\sim N(0,1)$ for $d=0,1$; $\sigma_{0}\left(X_{i},W_{i}\right)=\sigma_{0}=1$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\sigma_{1}$. $\sigma_{1}=1$, $\rho=0.2$. \item [{Model 8}] The same as in Model 7, except that $m_{1}\left(X_{i},W_{i}\right)= m_{0}\left(X_{i},W_{i}\right) + \left(\Phi\left(X_{i1}\right) - \frac{1}{2}\right)$. \item [{Model 9}] The same as in Model 8, except that $\sigma_{0}\left(X_{i},W_{i}\right)=\left(\Phi\left(X_{i1}\right) +0.5\right)$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\left(\Phi\left(X_{i1}\right) +0.5\right)\sigma_{1}$.
\item [{Model 10}] $X_{i}=\left(\Phi\left(V_{i1}\right),\cdots,\Phi\left(V_{i4}\right)\right)^\top$ and $W_{i}=\left(V_{i1}V_{i5},V_{i2}V_{i6}\right)^\top$, where $V_{i}\sim N(0,\Sigma)$ with $\text{dim}(V_{i})=6$ and $\Sigma$ consisting of 1 on the diagonal and $\rho$ on all other elements. $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}^\prime\left(W_{i} - \rho\right) + \gamma_{2}^\prime\left(\Phi\left(W_{i}\right) - \frac{1}{2}\right) + \gamma_{3}^\prime\left(\left(\Phi^{-1}\left(X_{i1}\right)^2,\Phi^{-1}\left(X_{i2}\right)^2\right)^\top - 1\right)$ with $\gamma_{1}=\left(1,1\right)^\top,\gamma_{2}=\left(\frac{1}{2},\frac{1}{2}\right)^\top, \gamma_{3}=\left(\frac{1}{2},\frac{1}{2}\right)^\top$. $\epsilon_{d,i}\sim N(0,1)$ for $d=0,1$; $\sigma_{0}\left(X_{i},W_{i}\right)=\sigma_{0}=1$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\sigma_{1}$. $\sigma_{1}=1$, $\rho=0.2$. \item [{Model 11}] The same as in Model 10, except that $m_{1}\left(X_{i},W_{i}\right)= m_{0}\left(X_{i},W_{i}\right) + \frac{1}{4}\sum_{j=1}^{4}\left(X_{ij} - \frac{1}{2}\right)$. \item [{Model 12}] $X_{i}=\left(\Phi\left(V_{i1}\right),\cdots,\Phi\left(V_{i4}\right)\right)^\top$ and $W_{i}=\left(V_{i1}V_{i41},\cdots,V_{i40}V_{i80}\right)^\top$, where $V_{i} \sim N(0,\Sigma)$ with $\text{dim}(V_{i})=80$. $\Sigma$ is the Toeplitz matrix \begin{align*} \Sigma = \begin{pmatrix} 1 & 0.5 & 0.5^2 &\cdots & 0.5^{79} \\ 0.5 & 1 & 0.5 & \cdots & 0.5^{78} \\ 0.5^2 & 0.5 & 1 & \cdots & 0.5^{77} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0.5^{79} & 0.5^{78} & 0.5^{77} & \cdots & 1 \end{pmatrix}.
\end{align*} $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}^\prime \left(W_{i}-\frac{1}{2}\right) + \gamma_{2}^\prime\left(\Phi^{-1}\left(X_{i}\right)^2 - 1\right)$, $\gamma_{1}=\left(\frac{1}{1^2},\frac{1}{2^2},\cdots,\frac{1}{40^2}\right)^\top$ with $\text{dim}(\gamma_{1})=40$, and $\gamma_{2}=\left(\frac{1}{8},\frac{1}{8},\frac{1}{8},\frac{1}{8}\right)^\top$ with $\text{dim}(\gamma_{2})=4$. $\epsilon_{d,i}\sim N(0,1)$ for $d=0,1$; $\sigma_{0}\left(X_{i},W_{i}\right)=\sigma_{0}=1$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\sigma_{1}$ with $\sigma_{1}=1$. \item [{Model 13}] The same as in Model 12, except that $m_{0}\left(X_{i},W_{i}\right)=m_{1}\left(X_{i},W_{i}\right)=\gamma_{1}^\prime\left(W_{i} - \rho\right) + \gamma_{2}^\prime\left(\Phi\left(W_{i}\right) - \frac{1}{2}\right) + \gamma_{3}^\prime\left(\Phi^{-1}\left(X_{i}\right)^2 - 1\right)$, $\gamma_{1}=\left(\frac{1}{1^2},\cdots,\frac{1}{40^2}\right)^\top$, $\gamma_{2}=\frac{1}{8}\left(\frac{1}{1^2},\cdots,\frac{1}{40^2}\right)^\top$, and $\gamma_{3}=\left(\frac{1}{8},\frac{1}{8},\frac{1}{8},\frac{1}{8}\right)^\top$ with $\text{dim}(\gamma_{1})=\text{dim}(\gamma_{2})=40$ and $\text{dim}(\gamma_{3})=4$. \item [{Model 14}] The same as in Model 13, except that $m_{1}\left(X_{i},W_{i}\right)= m_{0}\left(X_{i},W_{i}\right) + \sum_{j=1}^{4}\frac{1}{j^2}\left(X_{ij}- \frac{1}{2}\right)$. \item [{Model 15}] The same as in Model 14, except that $\sigma_{0}\left(X_{i},W_{i}\right)=\left(X_{i1}+0.5\right)$ and $\sigma_{1}\left(X_{i},W_{i}\right)=\left(X_{i1}+0.5\right)\sigma_{1}$. \end{description} It is worth noting that Models 1, 2, 3, 4, 7, 10, 12, and 13 imply homogeneous treatment effects because $m_{1}\left(X_{i},W_{i}\right) = m_{0}\left(X_{i},W_{i}\right)$. Among them, $E[Y_i(d)|X_i,W_i] - E[Y_i(d)|X_i]$ is linear in $W_i$ in Models 1, 2, and 12. Models 5, 8, 11, and 14 have heterogeneous but homoscedastic treatment effects.
In Models 6, 9, and 15, however, the implied treatment effects are both heterogeneous and heteroscedastic. Models 12-15 contain high-dimensional covariates. We follow \cite{bai2021inference} to match pairs. Specifically, if $\text{dim}\left(X_i\right)=1$, we match pairs by sorting $X_i, i = 1, \ldots, 2n$. If $\text{dim}\left(X_i\right)>1$, we match pairs by the permutation $\pi$ calculated using the {\tt R} package \emph{nbpMatching}. For more details, see \citet[Section 4]{bai2021inference}. After matching the pairs, we flip coins to randomly select one unit within each pair for treatment and the other for control. \subsection{Estimation and Inference} \label{sec:sims2} We set $\mu_0=0$ and $\mu_{1}=\Delta$, where $\Delta=0$ and $1/4$ are used to illustrate the size and power, respectively. Rejection probabilities in percentage points are presented. To further illustrate the efficiency gains obtained by regression adjustments, in Figure \ref{fig:bar}, we plot the average standard error reduction in percentage relative to the standard error of the estimator without adjustments for various estimation methods. Specifically, we consider the following adjusted estimators. \begin{enumerate}[(i)] \item NA: the estimator with no adjustments. In this case, our standard error is identical to the adjusted standard error proposed by \cite{bai2021inference}. \item LA: the linear adjustments with regressors $W_i$ but without pair dummies. \item LA2: the linear adjustments with regressors $X_i$ and $W_i$ but without pair dummies. \item LDA: the linear adjustments with regressors $W_i$ and pair dummies. \item HD-PD: the first LASSO-based adjustment. \item HD-F: the second LASSO-based adjustment. \end{enumerate} See Section \ref{sec:sims-details} for the regressors used in the LASSO adjustments. For Models 1-11, we examine the performance of estimators (i)-(vi). For Models 12-15, we assess the performance of estimators (i), (v), and (vi) in high-dimensional settings. 
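As a minimal sketch of the pair-matching step described above (the one-dimensional case only; function names are ours, and the multivariate case would instead use the \emph{nbpMatching} package in {\tt R}), the sorting and within-pair coin flips can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

def match_pairs_1d(x):
    """Match 2n units into n pairs by sorting a scalar covariate:
    consecutive units in sorted order form a pair."""
    order = np.argsort(x)
    return order.reshape(-1, 2)

def assign_within_pairs(pairs, rng):
    """Flip a fair coin within each pair: one unit is treated, the other is control."""
    d = np.zeros(pairs.size, dtype=int)
    for pair in pairs:
        treated = rng.choice(pair)  # randomly pick one index of the pair
        d[treated] = 1
    return d

x = rng.uniform(size=10)            # 2n = 10 units with scalar X_i
pairs = match_pairs_1d(x)           # n = 5 pairs
d = assign_within_pairs(pairs, rng)
# by construction, each pair contains exactly one treated unit
assert all(d[p].sum() == 1 for p in pairs)
```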
Note that the adjustments are misspecified for almost all the models. The only exception is Model 1, for which the linear adjustment in $W_i$ is correctly specified because $m_d(X_i,W_i)$ is just a linear function of $W_i$. \subsection{Simulation Results} Tables \ref{tab:no_LASSO_tab} and \ref{tab:no_LASSO_tab2} report size at the 0.05 level and power of the different methods for Models 1--11 when $n$ is 100 and 200, respectively. Several patterns emerge. First, for all the estimators, the rejection rates under $H_0$ are close to the nominal level even when $n=100$ and with misspecified adjustments. This result is expected because all the estimators take into account the dependence structure arising in MPDs, consistent with the findings in \cite{bai2021inference}. Second, in terms of power, ``LDA'' is higher than ``NA'', ``LA'', and ``LA2'' for all eleven models, as predicted by our theory. This finding confirms that ``LDA'' is the optimal linear adjustment and will not degrade the precision of the ATE estimator. In contrast, we observe that ``LA'' and ``LA2'' in Model 3 are even less powerful than the unadjusted estimator ``NA.'' Figure \ref{fig:bar} further confirms that these two methods inflate the estimation standard error. This result echos Freedman's critique \citep{freedman2008regression} that careless regression adjustments may degrade the estimation precision. Our ``LDA'' addresses this issue because it has been proven to be weakly more efficient than the unadjusted estimator. Third, the improvement of power for ``LDA'' is mainly due to the reduction of estimation standard errors, which can be more than 50\% as shown in Figure \ref{fig:bar} for Models 4--9. This means that the length of the confidence interval of the ``LDA'' estimator is just half of that for the ``NA'' estimator. 
Note that the standard error of the ``NA'' estimator is the one proposed by \cite{bai2021inference}, which has already been adjusted to account for the cross-sectional dependence created in pair matching. The extra 50\% reduction is therefore produced purely by the regression adjustment. For Models 10-11, the reduction of standard error achieved by ``LDA'' is more than 40\% as well. For Model 1, the correct specification of the adjustments leads to all three linear adjustment methods achieving the global minimum asymptotic variance and maximum power. For Model 2, $m_d(X_i,W_i) - E[m_d(X_i,W_i)|X_i] = \gamma (W_i - E[W_i|X_i])$ so that the linear adjustment $\gamma W_i$ satisfies the conditions in Theorem \ref{thm:main}. Therefore, ``LDA'', as the best linear adjustment, is also the best adjustment globally, achieving the global minimum asymptotic variance and maximum power. In contrast, ``LA'' and ``LA2'' are not the best linear adjustments and are therefore less powerful than ``LDA'' because of the omitted pair dummies. Finally, the LASSO-based adjustments have the best power for most models, as they automatically achieve the global minimum asymptotic variance. Compared to ``HD-PD'', ``HD-F'' has slightly better power. Tables \ref{tab:LASSO_tab} and \ref{tab:LASSO_tab2} report the size and power for the LASSO-based adjustments when both $W_i$ and $X_i$ are high-dimensional. We see that the size under the null is close to the nominal 5\%, while the power of the adjusted estimators is higher than that of the unadjusted one. Figure \ref{fig:bar} further illustrates that the reduction of the standard error is more than 30\% for all high-dimensional models. \newcolumntype{L}{>{\raggedright\arraybackslash}X} \newcolumntype{C}{>{\centering\arraybackslash}X} \begin{table}[ht!] 
\caption{Rejection probabilities for Models 1-11 when $n=100$} \vspace{1ex} \centering{}% \begin{tabularx}{1\textwidth}{LCCCCCCCCCCCC} \toprule & \multicolumn{6}{c}{$H_{0}$: $\Delta=0$} & \multicolumn{6}{c}{$H_{1}$: $\Delta=1/4$}\\ \cmidrule(lr){2-7} \cmidrule(lr){8-13} Model & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{LA} & \multicolumn{1}{c}{LA2} & \multicolumn{1}{c}{LDA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{LA} & \multicolumn{1}{c}{LA2} & \multicolumn{1}{c}{LDA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} \\ \midrule 1 & 5.47 & 5.57 & 5.63 & 5.76 & 6.12 & 5.84 & 22.48 & 43.89 & 43.95 & 43.91 & 44.69 & 43.92 \\ 2 & 4.96 & 5.26 & 5.30 & 5.47 & 5.74 & 5.32 & 23.32 & 28.02 & 27.96 & 37.21 & 39.00 & 33.12 \\ 3 & 4.99 & 5.28 & 5.24 & 5.48 & 5.78 & 5.27 & 32.19 & 27.88 & 27.96 & 37.34 & 38.59 & 36.29 \\ 4 & 5.31 & 5.28 & 5.28 & 5.48 & 5.93 & 5.79 & 11.78 & 27.88 & 28.03 & 37.34 & 42.21 & 43.28 \\ 5 & 5.43 & 5.09 & 5.08 & 5.49 & 5.84 & 5.78 & 11.87 & 27.72 & 27.88 & 36.69 & 41.24 & 43.08 \\ 6 & 5.28 & 5.43 & 5.41 & 5.58 & 5.90 & 5.79 & 11.78 & 26.67 & 26.72 & 34.71 & 38.76 & 40.29 \\ 7 & 5.64 & 5.63 & 5.62 & 5.98 & 6.45 & 6.04 & 9.24 & 34.55 & 34.65 & 37.96 & 37.72 & 42.08 \\ 8 & 5.63 & 5.54 & 5.51 & 6.03 & 6.26 & 6.17 & 9.28 & 34.11 & 34.42 & 37.22 & 36.78 & 41.29 \\ 9 & 5.74 & 5.69 & 5.76 & 6.19 & 6.32 & 5.89 & 8.99 & 32.39 & 32.30 & 35.42 & 34.66 & 38.75 \\ 10 & 5.24 & 5.78 & 5.73 & 6.05 & 6.07 & 6.04 & 14.27 & 30.80 & 30.75 & 32.02 & 28.37 & 32.51 \\ 11 & 5.19 & 5.78 & 5.72 & 6.07 & 6.01 & 5.95 & 14.36 & 30.60 & 30.49 & 32.21 & 27.92 & 32.81 \\ \bottomrule \end{tabularx} \label{tab:no_LASSO_tab} \end{table} \begin{table}[ht!] 
\caption{Rejection probabilities for Models 12-15 when $n=100$} \vspace{1ex} \centering{}% \begin{tabularx}{1\textwidth}{LCCCCCC} \toprule & \multicolumn{3}{c}{$H_{0}$: $\Delta=0$} & \multicolumn{3}{c}{$H_{1}$: $\Delta=1/4$}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} \\ \midrule 12 & 5.35 & 6.15 & 6.12 & 22.01 & 39.59 & 42.56 \\ 13 & 5.31 & 6.21 & 6.11 & 21.47 & 39.62 & 42.47 \\ 14 & 5.24 & 6.04 & 6.07 & 21.39 & 38.11 & 41.14 \\ 15 & 5.31 & 6.05 & 6.23 & 20.73 & 35.90 & 38.67 \\ \bottomrule \end{tabularx} \label{tab:LASSO_tab} \end{table} \begin{table}[ht!] \caption{Rejection probabilities for Models 1-11 when $n=200$} \vspace{1ex} \centering{}% \begin{tabularx}{1\textwidth}{LCCCCCCCCCCCC} \toprule & \multicolumn{6}{c}{$H_{0}$: $\Delta=0$} & \multicolumn{6}{c}{$H_{1}$: $\Delta=1/4$}\\ \cmidrule(lr){2-7} \cmidrule(lr){8-13} Model & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{LA} & \multicolumn{1}{c}{LA2} & \multicolumn{1}{c}{LDA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{LA} & \multicolumn{1}{c}{LA2} & \multicolumn{1}{c}{LDA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} \\ \midrule 1 & 5.08 & 5.04 & 5.10 & 5.21 & 5.38 & 5.31 & 38.94 & 70.35 & 70.36 & 70.32 & 70.53 & 70.30 \\ 2 & 5.69 & 5.28 & 5.28 & 5.24 & 5.42 & 5.40 & 40.31 & 49.25 & 49.32 & 65.36 & 65.71 & 57.87 \\ 3 & 5.44 & 5.29 & 5.30 & 5.35 & 5.60 & 5.41 & 56.89 & 49.43 & 49.51 & 64.96 & 65.34 & 62.42 \\ 4 & 5.45 & 5.29 & 5.29 & 5.35 & 5.42 & 5.20 & 18.55 & 49.43 & 49.67 & 64.96 & 67.93 & 69.96 \\ 5 & 5.45 & 5.24 & 5.18 & 5.19 & 5.44 & 5.29 & 18.41 & 48.65 & 48.80 & 64.11 & 66.83 & 69.09 \\ 6 & 5.62 & 5.32 & 5.31 & 5.35 & 5.50 & 5.43 & 18.19 & 46.71 & 46.67 & 61.09 & 63.95 & 65.98 \\ 7 & 5.24 & 5.51 & 5.46 & 5.34 & 5.78 & 5.49 & 11.86 & 60.73 & 60.63 & 65.14 & 64.88 & 69.24 \\ 8 & 5.23 & 
5.49 & 5.47 & 5.35 & 6.00 & 5.65 & 11.84 & 60.00 & 60.10 & 64.93 & 64.02 & 68.02 \\ 9 & 5.30 & 5.58 & 5.57 & 5.66 & 5.73 & 5.81 & 11.90 & 57.25 & 57.28 & 61.61 & 60.98 & 64.88 \\ 10 & 5.34 & 5.19 & 5.15 & 5.25 & 5.33 & 5.31 & 23.95 & 55.49 & 55.44 & 56.64 & 52.05 & 56.43 \\ 11 & 5.41 & 5.36 & 5.32 & 5.34 & 5.53 & 5.41 & 23.88 & 55.01 & 55.05 & 56.31 & 51.87 & 56.18 \\ \bottomrule \end{tabularx} \label{tab:no_LASSO_tab2} \end{table} \begin{table}[ht!] \caption{Rejection probabilities for Models 12-15 when $n=200$} \vspace{1ex} \centering{}% \begin{tabularx}{1\textwidth}{LCCCCCC} \toprule & \multicolumn{3}{c}{$H_{0}$: $\Delta=0$} & \multicolumn{3}{c}{$H_{1}$: $\Delta=1/4$}\\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} & \multicolumn{1}{c}{NA} & \multicolumn{1}{c}{HD-PD} & \multicolumn{1}{c}{HD-F} \\ \midrule 12 & 4.97 & 5.22 & 5.22 & 38.91 & 65.28 & 68.10 \\ 13 & 4.95 & 5.24 & 5.19 & 38.04 & 65.29 & 68.06 \\ 14 & 5.01 & 5.20 & 5.24 & 37.65 & 63.92 & 66.69 \\ 15 & 5.15 & 5.27 & 5.40 & 36.61 & 61.11 & 63.79 \\ \bottomrule \end{tabularx} \label{tab:LASSO_tab2} \end{table} \begin{figure}[ht!] \includegraphics[width=5.5in, height=5.5in]{variance_bar_200.png} \centering \caption{Average Standard Error Reduction in Percentage under $H_{1}$ when $n=200$} \label{fig:bar} \vspace{-1ex} \justify Notes: The figure plots average standard error reduction in percentage achieved by regression adjustments relative to ``NA'' under $H_{1}$ for Models 1-15 when $n=200$. \end{figure} \section{Empirical Illustration} \label{sec:empirical} In this section, we revisit the randomized experiment with a matched pairs design conducted in \cite{groh2016macroinsurance}. In the paper, they examined the impact of macroinsurance on microenterprises. 
Here, we apply the covariate adjustment methods developed in this paper to their data and investigate the average effect of macroinsurance on three outcome variables: the microenterprise owners' loan renewal, their firms' monthly profits, and revenues. The subjects in the experiment are microenterprise owners, who were clients of the largest microfinance institution in Egypt. In the randomization, after an exact match on gender and the institution's branch code, those clients were grouped into pairs by applying an optimal greedy algorithm to 13 additional matching variables. Within each pair, a macroinsurance product was then offered to one randomly assigned client, and the other acted as a control. Based on the pair identities and all the matching variables, we re-order the pairs in our sample according to the procedure described in Section 5.1 of \cite{Jiang2022QTE}. The resulting sample contains 2824 microenterprise owners, that is, 1412 pairs.\footnote{See \cite{groh2016macroinsurance} and \cite{Jiang2022QTE} for more details.} Table \ref{tab:emp_ate} reports the ATEs with the standard errors (in parentheses) estimated by different methods. Among them, ``GM'' corresponds to the method used in \cite{groh2016macroinsurance}.\footnote{\cite{groh2016macroinsurance} estimated the effect by regression with regressors including some baseline covariates and dummies for the pairs. Specifically, for loan renewal, the regressors include a variable ``high chance of renewing loan'' and its interaction with treatment status. For the other two outcome variables, the regressor is the baseline value for the outcome of interest. The standard errors for the ``GM'' ATE estimate are calculated by the usual heteroskedasticity-consistent estimator. 
The ``GM'' results in Table \ref{tab:emp_ate} were obtained by applying the Stata code provided by \cite{groh2016macroinsurance}.} The description of the other methods is similar to that in Section \ref{sec:sims2}.\footnote{To maintain comparability, we keep $X_i$ and $W_i$ the same in all the adjustments for each outcome variable. Specifically, \begin{enumerate}[(i)] \item $X_i$ includes gender and the 13 additional matching variables for all the adjustments. Three of the matching variables are continuous and the others are dummies. \item For loan renewal, $W_i$ includes the baseline value of the loan amount, the high chance of renewing the loan, the interaction between the high chance of renewing the loan and treatment status, and the interactions of these three variables with the three continuous variables and the first three discrete variables in $X_i$. For the other two outcome variables, $W_i$ only includes the baseline value for the outcome of interest and its interactions with the three continuous variables and the first three discrete variables in $X_i$. All the continuous variables in $X_i$, the baseline value of the loan amount, and the baseline values of the other outcome variables are standardized before the regression-adjusted estimators are applied. \end{enumerate} } The results in this table prompt the following four observations. First, in line with the theoretical and simulation results, the standard errors for the covariate-adjusted ATEs are generally lower than those for the ATE estimate without adjustment. This observation holds for almost all the outcome variables and adjustment methods. For example, when the outcome variable is revenue, the standard errors for the covariate-adjusted ATE estimates are at least 9.7\% less than that for the ATE estimate without adjustment. Second, the standard errors for the ATE estimates obtained by the ``GM'' method are mostly higher than those for the ATE estimates obtained by the covariate adjustments. 
In particular, when the outcome variable is loan renewal, the standard errors for the ``GM'' ATE estimates are at least 16.7\% higher than those for all other estimates. This observation may imply that the ``GM'' method is not the most efficient way to estimate the ATE of macroinsurance on loan renewal. Third, the standard errors are mostly similar across all the covariate-adjusted ATE estimates. Among them, the standard errors for the ``LA2'' and ``LDA'' estimates are slightly smaller than those for the other regression-adjusted estimates. Finally, between the two LASSO-based adjustments, ``HD-F'' achieves smaller standard errors. Surprisingly, ``HD-PD'' has the same estimates as ``NA'', which means it selects none of the variables in the adjustments. This result is caused by using a large rule-of-thumb penalty: there are more than 10 matching variables in this application, which leads to low matching quality and thus produces a large penalty for the adjustments. \begin{table}[H] \centering \caption{Impacts of Macroinsurance for Microenterprises} \vspace{1ex} \begin{tabularx}{1\textwidth}{LCCCCCCCC} \toprule Y & n & NA & GM & LA & LA2 & LDA & HD-PD &HD-F\\ \midrule Loan & 1350 & -0.007 & 0.004 & -0.004 & -0.006 & 0.006 & -0.007 & -0.003 \\ renewal & & (0.0180) & (0.0212) & (0.0178) & (0.0177) & (0.0177) & (0.0180) & (0.0177) \\ Profits & 1322 & -85.6 & -50.9 & -35.6 & -46.8 & -40.6 & -85.6 & -55.1 \\ & & (49.4) & (46.4) & (45.7) & (45.3) & (45.6) & (49.4) & (45.7) \\ Revenue & 1318 & -838.6 & -657.6 & -666.8 & -664.7 & -671.3 & -838.6 & -590.1 \\ & & (319.0) & (283.4) & (283.5) & (279.8) & (281.4) & (319.0) & (285.2) \\ \bottomrule \end{tabularx} \\ \vspace{-1ex} \justify Notes: The table reports the ATE estimates of the effect of macroinsurance for microenterprises. Standard errors are in parentheses. \label{tab:emp_ate} \end{table} \clearpage \newpage
\section{Introduction} Artificial intelligence plays an increasingly significant role in modern life. Object detection, a key branch of AI, has been deployed in various commercial applications. Driven by the demanding precision and inference-speed requirements of downstream tasks, plenty of novel networks have been created in pursuit of faster and more accurate performance. Object detection methods are usually separated into~\cite{li-et-al:deep} one-stage and two-stage methods. Two-stage methods, such as~\cite{girshick-et-al:rich} R-CNN,~\cite{girshick-et-al:fast} Fast R-CNN and~\cite{he-et-al:mask} Mask R-CNN, first obtain region proposals and then perform classification and bounding-box regression. One-stage methods such as~\cite{liu-et-al:single} SSD,~\cite{lin-et-al:focal} RetinaNet and~\cite{redmon-et-al:you} YOLO predict bounding boxes and class labels directly, without region proposals. Compared with one-stage methods, two-stage methods usually achieve higher accuracy but lower inference speed. Owing to their faster inference, one-stage methods are preferred by AI engineers in commercial applications. Benefiting from its success,~\cite{bochkovskiy-et-al:YOLOv4} YOLOv4 soon swept across the AI industry and became mainstream thanks to its lighter weight files, marking a milestone in the object detection field. One year later, YOLOv5 surpassed YOLOv4 with better accuracy and speed. Plenty of ingenious networks were created afterwards to further shrink the weight files and make object detection much easier to deploy on edge devices, such as~\cite{ge-et-al:YOLOX} YOLOX and~\cite{ganesh-et-al:YOLO-ReT} YOLO-ReT. 
\begin{figure}[t] \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=77mm]{intro_a.png} \caption{} \label{fig:intro-a} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=77mm]{intro_b.png} \caption{} \label{fig:intro-b} \end{subfigure} \caption{Part (a) shows a Sierpinski triangle, a standard self-similar pattern; part (b) demonstrates the same pattern in fire. Naturally, each part of the fire can be considered a positive sample. The difficulty of drawing a bounding box around the fire is akin to the perplexity of pointing out all the triangles in the Sierpinski triangle.} \label{fig:intro} \end{figure} On the other hand, some common datasets that cover most everyday objects, such as chairs, birds, and cars, have been published publicly to make object detection algorithms easier to apply. However, unlike cars and chairs, special objects such as fire and smoke are hard to detect since they have non-solid shapes; in other words, their outlines vary widely, which makes it difficult to represent their general features in the above-mentioned networks. Most object detection tasks in practical applications seldom consider the intrinsic features of the objects. We observe that a part of a fire is geometrically similar to the whole outline of that fire. As shown in Figure \ref{fig:intro}, the part of the fire in the red block shares a similar contour with the fire in the blue box; this pattern matches the mathematical definition of self-similarity. Since fire is self-similar, any part of the fire can be treated as a fractal mathematically. From this point of view, if such a fractal is detected during training, the corresponding candidate should be considered a positive sample. Furthermore, it is also hard to label these objects in images since there is no commonly recognized standard on how to draw the bounding boxes. 
For example, in Figure \ref{fig:example}, there are two ways of labeling the fire. Such phenomena often happen in fire, smoke, dust, or sea-wave detection; they lead to divergent labeling and consequently affect the detection result. Based on self-similarity, we propose to draw a single bounding box covering the object area in this situation. Our experimental results also verify this labeling rule as a good breakthrough point for solving this ambiguous labeling problem. To the best of our knowledge, we are the first not only to propose an appropriate method for labeling these objects, but also to provide methods to measure the self-similar feature during the training process. \section{Related Works} \subsection{Fire Detection} Current fire detection methods can be classified into traditional methods, CNN classification, object detection,~\cite{kim-et-al:a,jeong-et-al:light,xu-et-al:advances} video-based analysis and~\cite{dunnings-et-al:experimentally} instance segmentation. The traditional method is to analyze pixel color.~\cite{chen-et-al:digital} found that fire usually has a unique flicker frequency; therefore, an area can be inferred as fire if it exhibits a regular mode of pixel-color change. ~\cite{muhammad-et-al:convolutional} used CNN classification to recognize whether an image contains fire. ~\cite{li-et-al:image} shows that machine learning can easily achieve higher accuracy than the traditional method in determining the occurrence of fire in an image. The deep-learning-based object detection method obtains the bounding boxes of the fire and smoke areas. This method requires human effort to label the ground-truth bounding boxes for each image, but it yields the accurate position of the fire and smoke, which benefits commercial applications such as vehicle fire alarms, forest fire alarms, and kitchen fire and smoke alarms. 
\begin{figure}[tbp] \hfill \centering \begin{subfigure}{.25\linewidth} \centering \includegraphics[height=38mm]{example_a.png} \caption{} \label{fig:example_a} \end{subfigure} \hfill \begin{subfigure}{.5\linewidth} \centering \includegraphics[height=38mm]{example_b.png} \caption{} \label{fig:example_b} \end{subfigure} \caption{Two criteria of labeling the fire.} \label{fig:example} \end{figure} \subsection{Self-similar} Self-similar phenomena are as common in nature, e.g., snowflakes, tree branches and coastlines, as they are in mathematics, e.g., the Sierpinski triangle and the Koch curve. Each part of these objects is similar to the whole; in other words, their detail can be extended infinitely with the same pattern, so the structure never changes. In mathematics, self-similarity has a more rigorous definition through fractal geometry. According to the definition, a self-similar pattern can be described by its~\cite{silva-et-al:fractal,rapaport-et-al:on} Hausdorff dimension, as in Equation \ref{eq:hausdorff dimension}. \begin{equation} diam(A_n)^S = \sum_{A\in A_{n+1}}diam(A)^S \label{eq:hausdorff dimension} \end{equation} \begin{itemize} \item $S$: the Hausdorff dimension to be solved. \item $A$: element in a fractal set. \item $A_n$: fractal set. \end{itemize} \begin{equation} \begin{split} H(A,B) = \max\left\lbrace h(A,B),h(B,A)\right\rbrace \\ h(A,B) = \max_{a\in A}\left\lbrace \min_{b\in B}\left\lbrace \lVert a-b\rVert \right\rbrace \right\rbrace \end{split} \label{eq:hausdorff distance} \end{equation} \begin{itemize} \item $H(A,B)$: the Hausdorff distance between set A and set B. \item $h(A,B)$: the directed Hausdorff distance from set A to set B. \end{itemize} \begin{figure}[tbp] \hfill \centering \includegraphics[width=80mm]{hausdorff.png} \caption{Hausdorff distance from contour A to contour B.} \label{fig:hausdorff} \end{figure} \hspace*{\fill} The Hausdorff distance ($H$) is a method to measure the similarity of two contours. 
It calculates the maximum, over the points of one contour, of the minimum distance to the other contour; the definition is given in Equation \ref{eq:hausdorff distance}. The smaller $H$ is, the more similar the two contours are. Equation \ref{eq:hausdorff distance} is illustrated in Figure \ref{fig:hausdorff}: the minimal distance from the $i$th point $a_i$ on contour A to contour B is $\vert\vert\vec {a_{i}b_{j}} \vert\vert$, where $j$ refers to the $j$th point on contour B, and the maximum of these distances $\vert\vert\vec {a_{i}b_{j}} \vert\vert$ is the Hausdorff distance from contour A to contour B. The directed Hausdorff distance is asymmetric; in general, $h(A,B) \neq h(B,A)$. \subsection{Datasets} Our dataset is drawn from~\cite{ko-et-al:modeling,zhang-et-al:wildland,wu-et-al:automatic,enis:http,geng:http} several common datasets. These datasets contain fire, conflagration, smoke, candles, helmets, and natural scenes. We randomly mix these images into a set of 12,000 images, comprising 10,000 training images and 2,000 validation images. All these images are originally unlabeled. \subsection{One-stage Object Detection} One-stage object detection neural networks have been flourishing in recent years. As the most prestigious family in the AI industry, the YOLO series has long been famous for its speed and light weight. Instead of using a sliding window, the YOLO series presets a series of anchors used for detecting objects at different scales. When calculating the bounding-box loss, they use the~\cite{zheng-et-al:enhancing} CIOU to determine the ratio of the intersection area to the union area between the predicted bounding boxes and the ground truths. As shown in Equation \ref{eq:ciou loss}, different from the~\cite{yu-et-al:unitBox} IOU, the CIOU also considers the aspect ratio. Furthermore, YOLOv5 also uses mosaic augmentation to expand the number of samples, which benefits the generalization of the model. 
The YOLOX network uses both mosaic and mix-up augmentation. \begin{equation} L_{IOU} = 1 - IOU + \frac{\rho^2 (b,b^{gt})}{c^2} + \alpha \nu \label{eq:ciou loss} \end{equation} \begin{itemize} \item $L_{IOU}$: CIOU loss. \item $IOU$: Ratio of intersection area over union area between two boxes. \item $\rho\left(b,b^{gt}\right)$: Euclidean distance between the centers of the two boxes. \item $c$: The diagonal of the minimal bounding box that can contain both the predicted box and ground truth. \item $\alpha$: Trade-off parameter. \item $\nu$: Consistency of aspect ratio. \end{itemize} \section{Proposed Methods} \subsection{Loss Function} In the classical loss calculation of object detection, if a predicted bounding box intersects a ground-truth box, the network checks the IOU of the two boxes. The larger the IOU is, the lower the loss; in other words, a high IOU indicates that the network made a correct prediction, since the predicted bounding box mostly overlaps the ground truth. \begin{figure}[tbp] \hfill \centering \includegraphics[width=80mm]{illustration.png} \caption{The predicted bounding boxes should be considered as positive samples.} \label{fig:illustration} \end{figure} However, as we suggest, fire is self-similar. If a predicted bounding box contains a part of the fire, this bounding box still refers to an area of fire; therefore, it is illogical to assert that the network made an incorrect inference when its IOU with the ground truth is low. Instead, the network is producing correct outputs, and in this situation we should revise such a low IOU to a higher one. 
As shown in Figure \ref{fig:illustration}, if a proposed bounding box of fire falls entirely within the area of its corresponding ground-truth box, that is, the intersection area of this proposal ($A_{p,gt}$) equals the area of the proposal itself ($A_{p,gt}$ = $A_{p}$), then we set the classification loss and box-regression loss of this proposal to 0.0001 (we do not set the loss exactly to 0 in order to avoid exploding or vanishing gradients). The corresponding loss function is given in Equation \ref{eq:loss function}. Since we can accept a tiny overflow of the predicted box, we allow the ratio of $A_{p,gt}$ over $A_{p}$ to fall within 0.9998 $\sim$ 1 and still recognize the prediction as correct. \begin{equation} \begin{split} L_{IOU}\!=\!\left\{ \begin{array}{ll} \!1\!-\!IOU\!+\!\frac{\rho^{2}(b,b^{gt})}{c^2}\!+\!\alpha\nu,&\text{otherwise}\\ \!0.0001,&\frac{A_{p,gt}}{A_{p}}\!>\!0.9998\\ \end{array} \right. \end{split} \label{eq:loss function} \end{equation} \begin{itemize} \item $A_{p,gt}$: Intersection area of the predicted bounding box and its corresponding ground-truth bounding box. \item $A_{p}$: Area of the predicted bounding box. \end{itemize} \begin{figure}[tbp] \hfill \centering \includegraphics[width=80mm]{method.png} \caption{Use Hausdorff distance to determine whether the predicted bounding box is correct.} \label{fig:method} \end{figure} \subsection{Hausdorff Distance} Moreover, to rule out predicted bounding boxes whose coordinates fall inside the ground truth but which contain nothing, as shown in Figure \ref{fig:method}, we must determine whether these bounding boxes really contain the objects we want. Thus, we use the Hausdorff distance ($H$) to measure their similarity to the ground truth. The procedure first converts the RGB images into grayscale, then extracts the contours of the connected components of these images separately, and finally measures the $H$ between these connected components. The smaller $H$ is, the more similar the two images are. 
Thus, we crop the predicted bounding box and the ground truth from the original image, obtain the contours in each, and calculate the $H$ from the contour of the predicted box ($C_{p}$) to the contour of the ground-truth box ($C_{gt}$), denoted $H(C_{p},C_{gt})$. Similarly, we calculate $H(C_{gt},C_{p})$, owing to the asymmetry of the directed Hausdorff distance. Afterwards, we compare the larger of $H(C_{p},C_{gt})$ and $H(C_{gt},C_{p})$ with a threshold ($Thre_{H}$) we set, and we revise the $L_{IOU}$ only when $max(H(C_{p},C_{gt}),H(C_{gt},C_{p})) \textless Thre_{H}$. \subsection{Epochs Milestone} When to start using the above method during training is also an important factor. Taking 300 epochs as an example, if we start using this method from the first epoch, the model may struggle to find fire ``fractals'' since it has not yet learned any features of fire; but if we start too late, say at the 275th epoch, the remaining training epochs may not update the weights much, since the parameters have already been well shaped by the previous epochs. Therefore, we must start our method at an appropriate point in the middle of training; we call this time point the epoch milestone ($M_E$). We apply our self-similar loss only if the current epoch ($M_C$) reaches $M_E$. The process of determining this milestone is discussed in the Experiments section. 
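The directed distance $h$ in Equation \ref{eq:hausdorff distance} can be sketched in a few lines of NumPy; this is an illustrative sketch with function names of our own (in practice, one could use {\tt scipy.spatial.distance.directed\_hausdorff} on contours extracted with {\tt cv2.findContours}):

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B): for every point a in A, find its nearest point in B,
    then take the largest of these nearest-neighbour distances."""
    # pairwise Euclidean distances, shape (len(A), len(B))
    diffs = A[:, None, :] - B[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    return dists.min(axis=1).max()

def hausdorff(A, B):
    """Symmetric H(A, B) = max{h(A, B), h(B, A)}."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# two toy "contours" as point sets; note the asymmetry of h
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0]])
# h(A, B) = 1: the point (1, 0) is 1 away from its nearest point in B
# h(B, A) = 2: the point (3, 0) is 2 away from its nearest point in A
print(directed_hausdorff(A, B), directed_hausdorff(B, A), hausdorff(A, B))
```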
\begin{figure}[tbp] \hfill \centering \includegraphics[width=80mm]{flowchart.png} \caption{When the current epoch reaches the epoch milestone, start applying the Hausdorff distance loss to reduce the loss for fractals.} \label{fig:flowchart} \end{figure} \begin{algorithm}[tb] \caption{Hausdorff Distance ($H$)} \label{alg:hausdorff distance} \textbf{Input}: Predicted box, ground truth, original image\\ \textbf{Parameter}: Epoch milestone ($M_E$)\\ \textbf{Output}: IOU loss $L_{IOU}$ \begin{algorithmic}[1] \IF {$M_C$ / total epochs $\geq 1 - M_E$} \IF {$A_{p,gt}$ / $A_{p}$ \textgreater 0.9998} \STATE Pred crop = crop(original image). \STATE GT crop = crop(original image). \STATE $C_{p}$ = findContours(Pred crop). \STATE $C_{gt}$ = findContours(GT crop). \STATE $H(C_{p},C_{gt})$ = Distance ($C_{p}$, $C_{gt}$)\\ $H(C_{gt},C_{p})$ = Distance ($C_{gt}$, $C_{p}$). \IF {$max(H(C_{p},C_{gt}),H(C_{gt},C_{p})) \textless Thre_{H}$} \STATE $L_{IOU}$ = 0.0001. \ELSE \STATE pass. \ENDIF \ELSE \STATE pass. \ENDIF \ELSE \STATE pass. \ENDIF \STATE \textbf{return} $L_{IOU}$ \end{algorithmic} \end{algorithm} The pseudocode applying $M_E$, the loss function, and the Hausdorff distance is given as Algorithm \ref{alg:hausdorff distance}; the corresponding flow chart is shown in Figure \ref{fig:flowchart}. \begin{figure}[t] \centering \begin{subfigure}{\linewidth} \centering \includegraphics[width=77mm]{criteria_a.png} \caption{} \label{fig:criteria-a} \end{subfigure} \begin{subfigure}{\linewidth} \centering \includegraphics[width=77mm]{criteria_b.png} \caption{} \label{fig:criteria-b} \end{subfigure} \caption{The labeling criteria. Part (a) left: fire from different sources; right: fire from the same source. Part (b) left: smoke from different sources; right: smoke from the same source.} \label{fig:criteria} \end{figure} \subsection{Datasets} Labeling self-similar objects is just like labeling all of the triangles in Figure \ref{fig:intro}. 
Their self-similar nature makes it impossible to mark every fractal, since there are infinitely many of them. We propose that it is enough to draw only the largest bounding box: for fire detection, that means drawing one box around an area of fire or smoke from the same source, but using separate bounding boxes when the fire or smoke belongs to two separate burning sources, as shown in Figure \ref{fig:criteria}. \section{Experiments} To verify that our proposal is universally applicable rather than case-specific, we implemented it on two commonly used object detection networks: YOLOv5s and YOLOXs. We expect our method to effectively help the network learn the self-similar feature of fire and smoke, but it might be invalid for objects without self-similar features. \subsection{Data Processing} Because YOLOv5 and YOLOX load images and labels differently, we prepared two annotation formats for the two networks. For YOLOv5s, the annotation files are in "txt" format, one per image. Each line of an annotation file refers to one object, with elements separated by spaces: from left to right, the class index, the normalized x and y coordinates of the center of the bounding box, and the normalized width and height of the bounding box. For YOLOXs, the dataset follows the structure of the PASCAL VOC dataset: each image has a corresponding annotation file in "json" format, and extra text files are needed to store the image names of the training and validation splits. We choose four classes for training: conflagration, smoke, disturb and candle. The smoke class covers black, white, and colored smoke. The disturb class is meant to avoid false detections from fire-like objects such as spotlights, orange work clothes, and car taillights at night. \begin{figure}[tbp] \hfill \centering \includegraphics[width=80mm]{hausdorff_distance.png} \caption{Determining the Hausdorff distance threshold.
Images from left to right show the ground truth and three predictions. The maximum Hausdorff distance between each predicted box and the ground truth is listed below the corresponding image.} \label{fig:hausdorff_distance} \end{figure} \subsection{Hausdorff Distance Threshold} To obtain the Hausdorff distance threshold, we crop some ground truth bounding boxes from the images and manually pick parts of them as "fractals"; we also select some irrelevant areas as negative samples. We then calculate the Hausdorff distance between these samples and the ground truth crops, as shown in Figure \ref{fig:hausdorff_distance}, and take their average value as our threshold. Currently we use 300 as the threshold value. \begin{table} \centering \begin{tabular}{ccc} \hline $M_E$ & Weight Decay & Learning Rate \\ \hline 100\% & 1.00E-03 & 0.1\\ 75\% & 1.00E-03 & 0.1\\ 50\% & 1.00E-03 & 0.1\\ 25\% & 1.00E-03 & 0.1\\ 0\% & 1.00E-03 & 0.1\\ \hline \end{tabular} \caption{Hyper parameters for YOLOv5s} \label{tab:Hyper YOLOv5s} \end{table} \subsection{Training details} We use the YOLOv5s network, and all input images are resized to 640 $\times$ 640. We train for 250 epochs with a batch size of 64 on two GPUs; the other important hyper parameters are shown in Table \ref{tab:Hyper YOLOv5s}. The candidate bounding boxes of each feature map are scaled to match the original image size. Bounding boxes that do not satisfy the Hausdorff distance threshold are filtered out. The epoch milestone denotes the ratio of remaining epochs over total epochs: thus, a 100\% epoch milestone means all epochs use our method, while a 0\% epoch milestone is equivalent to the original YOLOv5s loss calculation.
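Putting the milestone gate, the area check, and the Hausdorff check together, the per-box loss adjustment of Algorithm \ref{alg:hausdorff distance} can be sketched as below. The names and structure are illustrative, not our released code; in particular, since the milestone is the fraction of remaining epochs, the sketch assumes the gate opens once the current epoch enters the last $M_E$ fraction of training.

```python
# Illustrative sketch of the gated IOU loss. THRE_H = 300 and the 0.9998
# overlap-area ratio follow the values stated in the text.
THRE_H = 300.0       # Hausdorff distance threshold (pixels)
AREA_RATIO = 0.9998  # required overlap ratio A_{p,gt} / A_p
FRACTAL_LOSS = 1e-4  # token IOU loss assigned to accepted "fractal" boxes

def gated_iou_loss(l_iou, epoch, total_epochs, milestone,
                   area_ratio, hausdorff):
    """Return the (possibly reduced) IOU loss for one predicted box.

    `milestone` is M_E as a fraction of remaining epochs (assumption: the
    gate opens for the last `milestone` fraction of training)."""
    past_milestone = epoch / total_epochs >= 1.0 - milestone
    is_candidate = area_ratio > AREA_RATIO   # box lies (almost) inside the GT box
    matches_shape = hausdorff < THRE_H       # contours are similar enough
    if past_milestone and is_candidate and matches_shape:
        return FRACTAL_LOSS
    return l_iou

# Late in training, a well-contained, shape-matching box keeps only a token loss:
print(gated_iou_loss(0.8, epoch=240, total_epochs=250, milestone=0.25,
                     area_ratio=0.9999, hausdorff=120.0))  # 0.0001
# Early in training the ordinary IOU loss is kept:
print(gated_iou_loss(0.8, epoch=10, total_epochs=250, milestone=0.25,
                     area_ratio=0.9999, hausdorff=120.0))  # 0.8
```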
\begin{table} \centering \begin{tabular}{ccc} \hline $M_E$ & Weight Decay & Learning Rate \\ \hline 100\% & 5.00E-04 & 0.01\\ 75\% & 5.00E-04 & 0.01\\ 50\% & 5.00E-04 & 0.01\\ 25\% & 5.00E-04 & 0.01\\ 0\% & 5.00E-04 & 0.01\\ \hline \end{tabular} \caption{Hyper parameters for YOLOXs} \label{tab:Hyper YOLOXs} \end{table} For the YOLOXs network, we set the training epochs to 250 and the batch size to 64 with one GPU. The input images are resized to 640 $\times$ 640. The other important hyper parameters are shown in Table \ref{tab:Hyper YOLOXs}. We do not apply any pre-trained model for either network. \begin{table} \centering \begin{tabular}{c|cccc} \hline $M_E$ & Conflagrations & Smoke & Disturb & Candle \\ \hline 100\% & 0.727 & 0.663 & 0.485 & 0.903\\ 75\% & 0.725 & 0.658 & 0.488 & 0.896\\ 50\% & 0.727 & 0.666 & 0.480 & 0.897\\ 25\% & 0.733 & 0.666 & 0.494 & 0.898\\ 0\% & 0.717 & 0.663 & 0.489 & 0.898\\ \hline \end{tabular} \caption{APs for YOLOv5} \label{tab:APs YOLOv5} \end{table} \subsection{Results} We ran the experiments for both the YOLOv5 and YOLOX networks with epoch milestones from 0\% to 100\%; the result at the 0\% epoch milestone is that of the original network, which serves as the reference for evaluating our method. The overall results are shown in Table \ref{tab:APs YOLOv5}. {\bf YOLOv5.} In Table \ref{tab:APs YOLOv5}, all of the APs for conflagration when using our method are higher than those of the original network; the highest AP surpasses the original method by 2.23\%. Most of the APs for smoke have improved, while most of the APs for disturb and candle have decayed. The largest AP improvements for conflagration and smoke occur at the 25\% epoch milestone for YOLOv5.
\begin{table} \centering \begin{tabular}{c|cccc} \hline $M_E$ & Conflagrations & Smoke & Disturb & Candle \\ \hline 100\% & 0.845 & 0.767 & 0.412 & 0.828\\ 75\% & 0.841 & 0.746 & 0.412 & 0.896\\ 50\% & 0.854 & 0.757 & 0.413 & 0.828\\ 25\% & 0.860 & 0.752 & 0.422 & 0.838\\ 0\% & 0.841 & 0.748 & 0.413 & 0.831\\ \hline \end{tabular} \caption{APs for YOLOX} \label{tab:APs YOLOX} \end{table} {\bf YOLOX.} For YOLOX, in Table \ref{tab:APs YOLOX}, all of the APs for conflagration when using our method are higher than those of the original network. Half of the APs for smoke have improved, and half of the APs for disturb and candle have decayed. Again, the largest AP improvements for conflagration and smoke occur at the 25\% epoch milestone. \begin{figure}[tbp] \hfill \centering \includegraphics[width=80mm]{performance.png} \caption{The left part shows the detection results of the original YOLOv5 network, while the right part shows the detection results after applying our self-similar loss, demonstrating the improved detection confidence for smoke and fewer missed fires in this test image.} \label{fig:performance} \end{figure} Besides the improvement in the statistical results, the most encouraging feedback is that some fires and smoke missed by the original model during testing can be successfully recognized after plugging in our method; the prediction confidences for fire and smoke are also improved, as shown in Figure \ref{fig:performance}. \subsection{Analysis} According to our assumption, objects like conflagration and smoke are self-similar, so when our method is applied, the APs of these objects obtain an obvious improvement. The model learns the self-similar feature of the objects, so it does not fall into the trap of the various shapes these objects can take. In fact, our method can be seen as a form of data augmentation. In contrast, for objects like neon lights (disturb) and candles that are not self-similar, applying our loss function might have a negative impact.
When our method is applied, the model may mistakenly take the predicted "fractals" of lights and candles as positive samples, although they are actually negative samples, since lights and candles have no fractals. The epoch milestone is an important training parameter. Our results show that the model performs best when the milestone is 25\%, which indicates that the network first needs some training epochs to learn the basic features of the objects by the general method; once the network has learnt enough rough features, it is time to start learning the self-similar feature. \section{Conclusion} For objects with self-similar features, our proposed self-similar loss method can effectively improve the precision and the confidence of the bounding boxes. To measure self-similarity, our experimental results indicate that the Hausdorff distance is useful for obtaining a more precise model, especially when the training dataset contains self-similar objects. Our method is valid for fire and smoke detection, can be easily transplanted to other tasks such as dust detection and coastline detection, and should significantly reinforce the robustness of models in commercial applications. Moreover, our criteria for labeling self-similar objects should benefit the AI community.
\section{Introduction} The investigations of dust ion acoustic (DIA) solitary structures in four component electron-positron-ion-dust (e-p-i-d) plasmas have received a great deal of attention in the last few years, as e-p-i-d plasmas may be found in numerous cosmic sites such as around pulsars \cite{shukla04}, near the surface of neutron stars \cite{zeldovich1971,shukla04}, in the hot spots on the dust ring in the galactic centre \cite{zurek1985}, in the interstellar medium \cite{zurek1985,higdon09,shukla2008}, in the interior regions of accretion disks near neutron stars and magnetars \cite{dubinov12}, in the Milky Way \cite{shukla2008}, in the magnetosphere and in the ionosphere of the Earth \cite{alfven1981,gusev2000,gusev2001}, in the magnetospheres of Jupiter \cite{merlino2006} and Saturn \cite{horanyi2004}, as well as in laboratory environments \cite{shukla04,dubinov12}. Using the reductive perturbation method, Ghosh and Bharuthram \cite{ghosh08} investigated the nonlinear propagation of small but finite amplitude ion acoustic (IA) solitons and double layers in a collisionless unmagnetized e-p-i-d plasma consisting of cold ions, negatively charged static dust particulates and Boltzmann distributed electrons and positrons. Using Bernoulli's pseudo-potential method, Dubinov \textit{et al.} \cite{dubinov12} elaborated the nonlinear theory of DIA waves in a collisionless unmagnetized four component e-p-i-d plasma consisting of warm ions, negatively charged static dust impurities, isothermal electrons and positrons. Several authors \cite{el-tantawy11a,el-tantawy11b,saini13,banerjee16} investigated small or arbitrary amplitude DIA solitary structures in different e-p-i-d plasma systems. Paul \textit{et al.} \cite{paul17ppr} investigated the existence of different DIA solitary structures in a collisionless unmagnetized four component e-p-i-d plasma consisting of negatively charged static dust grains, adiabatic warm ions, isothermally distributed electrons and positrons.
They reported the existence of solitary waves of both polarities, the coexistence of solitary waves of both polarities, the existence of double layers of both polarities, and the existence of positive potential solitons after the formation of a positive potential double layer. The existence of positive potential solitons after the formation of the double layer confirms the existence of positive potential supersolitons. Again, Paul \& Bandyopadhyay \cite{paul2016} considered the e-p-i-d plasma system of Paul \textit{et al.} \cite{paul17ppr}, but with Cairns \cite{cairns95} distributed nonthermal electrons instead of isothermal electrons. In that paper, Paul \& Bandyopadhyay \cite{paul2016} extensively discussed the DIA solitary structures with the help of qualitatively different compositional parameter spaces showing the nature of existence of different solitary structures, with special emphasis on the existence of solitary structures after the formation of a double layer of the same polarity. Recently, Paul \textit{et al.} \cite{paul17pop} rigorously studied the formation of supersolitons with the help of the phase portraits of the dynamical system describing the nonlinear behaviour of the DIA waves in a four component e-p-i-d plasma consisting of nonthermal electrons and nonthermal positrons. They clearly discussed the transition process of different solitary structures, viz., soliton $\to$ double layer $\to$ supersoliton $\to$ soliton after the formation of the double layer, for increasing values of the Mach number. In the above mentioned works, DIA solitary structures have been considered at the supersonic speed only, i.e., for $U>C_{D}$, where $U$ is the velocity of the wave frame and $C_{D}$ is the linearized velocity of the DIA wave for long-wavelength plane wave perturbation.
However, the numerical observations \cite{baluku10,baluku10a,verheest10} of the solitary structures at the acoustic speed, i.e., for $U=C_{D}$, influenced Das \textit{et al.} \cite{das12mc} to set up a general analytical theory for the existence of the solitary structures at the acoustic speed, i.e., for $U=C_{D} \Leftrightarrow M=M_{c}$, where $M=U/C_{D}$ and $M_{c}$ is the lower bound of the Mach number for the existence of solitary structures, i.e., the solitary structures start to exist for $M>M_{c}$. In fact, Das \textit{et al.} \cite{das12mc} have proved three important results to confirm the existence of solitary structures at the acoustic speed. Das \textit{et al.} \cite{das12mc} investigated dust acoustic (DA) solitary structures at the acoustic speed with the help of the analytical theory developed in the same paper, and they also prescribed a computational scheme to investigate the nature of existence of solitary structures at the acoustic speed. Later, Das \textit{et al.} \cite{das2011existence} investigated DIA solitary structures at the acoustic speed in a collisionless unmagnetized dusty plasma consisting of negatively charged static dust grains, adiabatic warm ions and Cairns distributed nonthermal electrons. They found that the system supports negative potential solitary waves (NPSWs), positive potential solitary waves, negative potential double layers (NPDLs) and negative potential supersolitons at the acoustic speed. They also showed the qualitatively different existence domains of DIA solitary structures at the acoustic speed. Recently, Verheest and Hellberg \cite{verheest2015} investigated the existence of IA and DIA solitary structures at the acoustic speed and found the existence of NPDL and negative potential supersoliton at the acoustic speed.
In the present work, following the analytic theory and the computational scheme as developed by Das \textit{et al.} \cite{das12mc}, we have studied the DIA solitary structures at the acoustic speed with the help of the existence domains and the phase portraits of the dynamical system describing the nonlinear behaviour of the DIA waves in the same plasma system considered by Paul \& Bandyopadhyay \cite{paul2016}. In fact, the present paper is an extension of the published work of Paul \& Bandyopadhyay \cite{paul2016} in the following directions.\\ (i) Instead of considering DIA solitary structures at the supersonic speed ($U>C_{D} \Leftrightarrow M>M_{c}$), we have considered different DIA solitary structures at the acoustic speed ($U=C_{D} \Leftrightarrow M=M_{c}$) in a collisionless unmagnetized four component e-p-i-d plasma consisting of negatively charged static dust grains, adiabatic warm ions, Cairns \cite{cairns95} distributed nonthermal electrons, and isothermal positrons.\\ (ii) For the first time, we have introduced the phase portrait analysis of the dynamical system corresponding to the solitary structures at the acoustic speed ($U=C_{D} \Leftrightarrow M=M_{c}$).\\ (iii) Phase portraits of the dynamical system corresponding to different DIA solitary structures clearly indicate the difference between the different DIA solitary structures at the acoustic speed ($U=C_{D} \Leftrightarrow M=M_{c}$) and the solitary structures at the supersonic speed ($U>C_{D} \Leftrightarrow M>M_{c}$).\\ (iv) We have also considered the case when there is no positron in the system and for this particular case, the system supports NPSWs after the formation of NPDL. The existence of NPSWs after the formation of NPDL confirms the existence of negative potential supersolitons. 
Here we have also discussed the transition process of negative potential solitary structures at the acoustic speed, viz., soliton $\to$ double layer $\to$ supersoliton $\to$ soliton after the formation of the double layer. We have seen that the transition process of different solitary structures at the acoustic speed ($U=C_{D} \Leftrightarrow M=M_{c}$) is the same as the transition mechanism of solitary structures at the supersonic speed ($U>C_{D} \Leftrightarrow M>M_{c}$). \section{\label{sec:basic_eqn}Basic Equations \& Energy Integral} We consider exactly the same plasma system as that of Paul \& Bandyopadhyay \cite{paul2016} and consequently we consider the same set of basic equations of Paul \& Bandyopadhyay \cite{paul2016} to study the nature of existence of DIA solitary structures at the acoustic speed. The following are the governing equations describing the nonlinear behaviour of DIA waves propagating along the $x$-axis in a collisionless unmagnetized multicomponent dusty plasma system consisting of adiabatic warm ions, negatively charged static dust particulates, nonthermally distributed electrons and isothermal positrons. \begin{eqnarray}\label{continuity} \frac{\partial n_{i}}{\partial t}+\frac{\partial}{\partial x}(n_{i}u_{i})=0, \end{eqnarray} \begin{eqnarray}\label{momentum} M_{s}^{2}\bigg(\frac{\partial u_{i}}{\partial t}+u_{i}\frac{\partial u_{i}}{\partial x}\bigg)+\frac{(1-p)\sigma_{ie}}{n_{i}}\frac{\partial p_{i}}{\partial x}+\frac{\partial \phi}{\partial x}=0, \end{eqnarray} \begin{eqnarray}\label{pressure} \frac{\partial p_{i}}{\partial t}+u_{i}\frac{\partial p_{i}}{\partial x}+\gamma p_{i} \frac{\partial u_{i}}{\partial x}=0, \end{eqnarray} \begin{eqnarray}\label{poisson} \frac{\partial^{2} \phi}{\partial x^{2}}=-\frac{M_{s}^{2}-\gamma \sigma_{ie}}{1-p}\bigg(n_{i}-n_{e}+n_{p}-\frac{Z_{d}n_{d0}}{n_{0}}\bigg).
\end{eqnarray} Here $n_{i}$, $n_{e}$, $n_{p}$, $u_{i}$, $p_{i}$, $\phi$, $x $ and $ t $ are, respectively, the number density of ions, the number density of electrons, the number density of positrons, the velocity of the ion fluid, the ion fluid pressure, the electrostatic potential, the spatial variable and time, and these have been normalized by $n_{0}$ ($=n_{i0}+n_{p0}=n_{e0}+Z_{d}n_{d0}$), $n_{0}$, $n_{0}$, $C_{D}$ (linearized velocity of the DIA wave in the present plasma system for long-wavelength plane wave perturbation), $n_{i0}K_{B}T_{i}$, $\Phi=\frac{K_{B}T_{e}}{e}$, $ \lambda_{D} $ (Debye length of the present plasma system) and $\lambda_{D}/C_{D}$, where $n_{e0}$, $n_{i0}$, $n_{p0}$ and $n_{d0}$ are, respectively, the equilibrium number densities of electrons, ions, positrons and dust particulates, $ \gamma(=3) $ is the adiabatic index, $ Z_{d} $ is the number of electrons residing on a dust grain surface, $-e$ is the charge of an electron, $T_{i}$ ($T_{e}$) is the average temperature of ions (electrons) and $K_{B}$ is the Boltzmann constant. The expressions of $M_{s}$ and the four basic parameters $p$, $\mu$, $\sigma_{ie}$, $\sigma_{pe}$ are given by the following equations: \begin{eqnarray}\label{Ms} M_{s}=\sqrt{\gamma\sigma_{ie}+\frac{(1-p)\sigma_{pe}}{p+\mu (1-\beta_{e}) \sigma_{pe}}}, \end{eqnarray} \begin{eqnarray}\label{p} p=\frac{n_{p0}}{n_{0}},~\mu=\frac{n_{e0}}{n_{0}},~\sigma_{ie}=\frac{T_{i}}{T_{e}},~\sigma_{pe}=\frac{T_{p}}{T_{e}}, \end{eqnarray} where $T_{p}$ is the average temperature of positrons and $\beta_{e}$ is the nonthermal parameter associated with the Cairns model \cite{cairns95} for the electron species, and according to Verheest \& Pillay \cite{verheest08}, the physically admissible bounds of $\beta_{e}$ are given by $0 \leq \beta_{e} \leq \frac{4}{7} \approx 0.6$.
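As a quick numerical illustration of equation (\ref{Ms}), the following short script (ours, for orientation only) evaluates $M_{s}$ for the representative parameter values used in the existence-domain discussion below ($p=0.01$, $\mu=0.2$, $\sigma_{ie}=\sigma_{pe}=0.9$); the choice $\beta_{e}=0.2$ is merely an admissible sample value.

```python
import math

def M_s(p, mu, sigma_ie, sigma_pe, beta_e, gamma=3.0):
    """Eq. (5): M_s = sqrt(gamma*sigma_ie
                           + (1-p)*sigma_pe / (p + mu*(1-beta_e)*sigma_pe))."""
    return math.sqrt(gamma * sigma_ie
                     + (1.0 - p) * sigma_pe / (p + mu * (1.0 - beta_e) * sigma_pe))

# Representative values from the existence-domain figures; beta_e = 0.2 is arbitrary.
print(M_s(p=0.01, mu=0.2, sigma_ie=0.9, sigma_pe=0.9, beta_e=0.2))  # ~2.91
```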
The normalized number densities of nonthermal electrons and isothermal positrons are given by \begin{eqnarray}\label{ne} n_{e} = \mu(1-\beta_{e}\phi+\beta_{e}\phi^{2})e^{\phi}, \end{eqnarray} \begin{eqnarray}\label{np} n_{p} = p e^{-\phi / \sigma_{pe}}. \end{eqnarray} The above equations are supplemented by the following unperturbed charge neutrality condition: \begin{eqnarray} n_{i0}+n_{p0}=n_{e0}+Z_{d}n_{d0}. \end{eqnarray} To investigate the steady state arbitrary amplitude DIA solitary structures, we make all the dependent variables depend only on a single variable $ \xi=x-Mt $, where $M$ is the dimensionless velocity of the wave frame normalized by the linearized DIA speed ($C_{D}$) for long-wavelength plane wave perturbation. Using this transformation and applying the boundary conditions:\\ $ \big(n_{i},p_{i},u_{i},\phi,\frac{d\phi}{d\xi}\big)\rightarrow \big(1-p,1,0,0,0\big)\mbox{ as } |\xi|\rightarrow \infty, $\\ we get the following energy integral: \begin{eqnarray}\label{energy_integral} \frac{1}{2}\bigg(\frac{d\phi}{d\xi}\bigg)^{2}+V(\phi)=0, \end{eqnarray} where \begin{eqnarray}\label{V_phi_1} V(\phi) = (M_{s}^{2}-3\sigma_{ie}) \Big[~V_{i} +\frac{p }{1-p} \sigma_{pe} V_{p}-\frac{\mu}{1-p}V_{e}-\frac{1-\mu}{1-p}V_{d}\Big], \end{eqnarray} \begin{eqnarray}\label{V_i_1} V_{i} = M^{2}M_{s}^{2}+\sigma_{ie} -N_{i}\Big[M^{2}M_{s}^{2}+3\sigma_{ie}-2\phi -2\sigma_{ie}N_{i}^{2}\Big], \end{eqnarray} \begin{eqnarray}\label{N_i_1} N_{i}=\frac{n_{i}}{1-p}=\frac{MM_{s}\sqrt{2}}{(\sqrt{\Phi_{M}-\phi}+\sqrt{\Psi_{M}-\phi})}, \end{eqnarray} \begin{eqnarray}\label{Phi_M_1} \Phi_{M} &=& \frac{1}{2}\Big(MM_{s}+\sqrt{3\sigma_{ie}}\Big)^{2}, \\ \Psi_{M} &=& \frac{1}{2}\Big(MM_{s}-\sqrt{3\sigma_{ie}}\Big)^{2}, \end{eqnarray} \begin{eqnarray}\label{V_e_1} V_{e} &=& \big(1+3\beta_{e}-3\beta_{e}\phi+\beta_{e}\phi^{2}\big)e^{\phi}-(1+3\beta_{e}), \\ V_{p} &=& 1-e^{-\phi/\sigma_{pe}}, ~V_{d}=\phi. 
\end{eqnarray} The energy integral (\ref{energy_integral}) can be regarded as the one-dimensional motion of a particle of unit mass whose position is $\phi$ at time $\xi$ with velocity $d\phi/d\xi$ under the action of the force field $-V'(\phi)$. The first term in the energy integral (\ref{energy_integral}) can be regarded as the kinetic energy of a particle of unit mass at position $\phi$ and time $\xi$, whereas $V(\phi)$ is the potential energy of the same particle at that instant. Now, according to Sagdeev \cite{sagdeev66}, for the existence of a positive (negative) potential solitary wave [PPSW] ([NPSW]) solution of (\ref{energy_integral}), we must have the following three conditions: (i) $\phi=0$ is the position of unstable equilibrium of a particle of unit mass associated with the energy integral (\ref{energy_integral}), i.e., $V(0)=V'(0)=0$ and $V''(0)<0$. (ii) $V(\phi_{m}) = 0$, $V'(\phi_{m}) > 0$ ($V'(\phi_{m}) < 0$) for some $\phi_{m} > 0$ ($\phi_{m} < 0$). This condition is responsible for the oscillation of the particle within the interval $\min\{0,\phi_{m}\}<\phi<\max\{0,\phi_{m}\}$. (iii) $V(\phi) < 0$ for all $0 <\phi < \phi_{m}$ ($\phi_{m} < \phi < 0$). This condition is necessary to define the energy integral (\ref{energy_integral}) within the interval $\min\{0,\phi_{m}\}<\phi<\max\{0,\phi_{m}\}$. For the existence of a positive (negative) potential double layer [PPDL] ([NPDL]) solution of (\ref{energy_integral}), the second condition is replaced by $V(\phi_{m}) = 0$, $V'(\phi_{m}) = 0$, $V''(\phi_{m}) < 0$ for some $\phi_{m} > 0$ ($\phi_{m} < 0)$. This condition states that the particle cannot be reflected back from the point $\phi=\phi_{m}$ to the point $\phi = 0$. 
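These conditions are easy to probe numerically. The sketch below (our illustration, not part of the original analysis) codes $V(\phi)$ from equations (\ref{V_phi_1})-(\ref{V_e_1}) and checks by finite differences that $V(0)=V'(0)=0$ holds identically and that the sign of $V''(0)$ flips as $M$ crosses unity, for the representative parameters $p=0.01$, $\mu=0.2$, $\sigma_{ie}=\sigma_{pe}=0.9$ and an arbitrary admissible $\beta_{e}=0.2$.

```python
import math

# Representative parameters; beta_e = 0.2 is an arbitrary admissible value, gamma = 3.
p, mu, s_ie, s_pe, beta = 0.01, 0.2, 0.9, 0.9, 0.2
Ms2 = 3.0 * s_ie + (1.0 - p) * s_pe / (p + mu * (1.0 - beta) * s_pe)  # M_s^2, Eq. (5)

def V(phi, M):
    """Sagdeev pseudopotential V(phi), Eqs. (11)-(16)."""
    Ms = math.sqrt(Ms2)
    Phi_M = 0.5 * (M * Ms + math.sqrt(3.0 * s_ie)) ** 2
    Psi_M = 0.5 * (M * Ms - math.sqrt(3.0 * s_ie)) ** 2
    Ni = M * Ms * math.sqrt(2.0) / (math.sqrt(Phi_M - phi) + math.sqrt(Psi_M - phi))
    Vi = M * M * Ms2 + s_ie - Ni * (M * M * Ms2 + 3.0 * s_ie - 2.0 * phi
                                    - 2.0 * s_ie * Ni ** 2)
    Ve = (1.0 + 3.0 * beta - 3.0 * beta * phi + beta * phi ** 2) * math.exp(phi) \
         - (1.0 + 3.0 * beta)
    Vp = 1.0 - math.exp(-phi / s_pe)
    Vd = phi
    return (Ms2 - 3.0 * s_ie) * (Vi + p / (1.0 - p) * s_pe * Vp
                                 - mu / (1.0 - p) * Ve - (1.0 - mu) / (1.0 - p) * Vd)

h = 1e-3
def d2V0(M):
    """Central-difference estimate of V''(0)."""
    return (V(h, M) - 2.0 * V(0.0, M) + V(-h, M)) / h ** 2

print(abs(V(0.0, 1.2)) < 1e-9,  # V(0) = 0 for any M
      d2V0(1.2) < 0.0,          # V''(0) < 0 above the acoustic speed: structures possible
      d2V0(0.8) > 0.0)          # V''(0) > 0 below it: phi = 0 is a stable equilibrium
```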
Therefore, the necessary condition for the existence of solitary waves and / or double layers of any polarity states that $\phi=0$ is the position of unstable equilibrium of a particle of unit mass associated with the energy integral (\ref{energy_integral}), i.e., $V''(0)<0$ along with $V(0)=0$ and $V'(0)=0$. In other words, $\phi=0$ can be made an unstable position of equilibrium if the potential energy of the said particle attains its maximum value at $\phi=0$. Now, from the condition $V''(0)<0$ we get $M>M_{c}=1$, i.e., the solitary structures (solitary waves and / or double layers) start to exist just above the curve $M = M_{c}=1$. The condition $V''(0)>0$ gives $M<M_{c}=1$. If $M<M_{c}$, the potential energy of the said particle attains its minimum value at $\phi=0$, and consequently $\phi=0$ is the position of stable equilibrium of the particle. In this case, it is impossible to make any oscillation of the particle even when it is slightly displaced from its position of stable equilibrium. Therefore, there is no question of existence of solitary waves or double layers of any polarity for $M<M_{c}$. Now, let us consider the case for which $V''(0)=0 \Leftrightarrow M=M_{c} \Leftrightarrow V''(M_{c},0)=0$. If $V''(M_{c},0)=0$ along with $V'''(M_{c},0)=0$, then $\phi=0$ is a stable or unstable position of equilibrium according to whether $V''''(M_{c},0)>0$ or $V''''(M_{c},0)<0$. If $V''''(M_{c},0)<0$, then solitary structures may exist at $M=M_{c}$ if the other conditions for the existence of solitary structures are fulfilled. If $V''''(M_{c},0)>0$, then there is no question of existence of solitary structures at $M=M_{c}$. But if $V'''(M_{c},0) \neq 0$ along with $V(M_{c},0)= V'(M_{c},0) = V''(M_{c},0) = 0$, then following the analytic theory developed by Das \textit{et al.} \cite{das12mc}, one can easily study the existence of solitary wave and / or double layer solutions of the energy integral (\ref{energy_integral}) at $M = M_{c}$.
If $V(M,0)=V'(M,0)=V''(M_{c},0)=0$, $V'''(M_{c},0) < 0$ ($V'''(M_{c},0) > 0$), $\partial V/\partial M<0$ for all $M > 0$ and for all $\phi > 0$ ($\phi < 0$), Das \textit{et al.} \cite{das12mc} have proved the following important results to confirm the existence of solitary structures at the acoustic speed.\\ \textbf{Result-1:} If there exists at least one value $M_{0}$ of $M$ such that the system supports PPSWs (NPSWs) for all $M_{c}< M < M_{0}$, then there exist either a PPSW (NPSW) or a PPDL (NPDL) at $M=M_{c}$.\\ \textbf{Result-2:} If the system supports only NPSWs (PPSWs) for $M>M_{c}$, then there does not exist PPSW (NPSW) at $M=M_{c}$.\\ \textbf{Result-3:} It is not possible to have coexistence of both positive and negative potential solitary structures at $M=M_{c}$. Again, according to Das \textit{et al.} \cite{das12mc}, the PPDL (NPDL) solution at $M=M_{c}$ is possible only when there exists a PPDL (NPDL) solution in any right neighborhood of $M_{c}$, i.e., PPDL (NPDL) solution at $M=M_{c}$ is possible only when the curve $M=M_{PPDL}$ ($M=M_{NPDL}$) tends to intersect the curve $M=M_{c}$ at some point in the existence domain of the energy integral, where each point of the curve $M=M_{PPDL}$ ($M=M_{NPDL}$) corresponds to a PPDL (NPDL) solution of the energy integral whenever $M_{PPDL} > M_{c}$ ($M_{NPDL} > M_{c}$). From \textbf{Result-1}, \textbf{Result-2} and \textbf{Result-3}, we see that the existence of solitary structures at $M=M_{c}$ depends on the existence of the solitary structures for $M>M_{c}$. Therefore, in the next section, we shall consider qualitatively different existence domains of solitary structures for $M>M_{c}$ to investigate the existence and polarity of the solitary structures at $M=M_{c}$. 
\section{\label{sec:solution_spaces} Different existence domains} From the discussions given in the previous section, we see that we must have a definite idea regarding the existence and the polarity of the solitary structures in the right neighbourhood of $M=M_{c}$ to apply \textbf{Result-1}, \textbf{Result-2} and \textbf{Result-3} of Das \textit{et al.} \cite{das12mc}. Again, differentiating $V$ with respect to $M$, we get the following equation. \begin{eqnarray}\label{diff_V_wrt_M} \frac{\partial V}{\partial M}=-\Bigg\{\sqrt{\frac{M_{s}^{2}M(1-p)\sigma_{pe}}{p+\mu (1-\beta_{e}) \sigma_{pe}}}\Bigg(\sqrt{N_{i}}-\frac{1}{\sqrt{N_{i}}}\Bigg)\Bigg\}^{2}. \end{eqnarray} From equation (\ref{diff_V_wrt_M}), it is simple to check that the following condition holds good. \begin{eqnarray}\label{condition_1} \frac{\partial V}{\partial M}<0~~\mbox{for all}~~M>0. \end{eqnarray} Therefore, all the conditions of \textbf{Result-1}, \textbf{Result-2} and \textbf{Result-3} hold good if $V'''(M_{c},0)\neq 0$. So, to discuss the existence and polarity of the solitary structures at $M=M_{c}$, it is necessary to study the qualitatively different existence domains for $M>M_{c}$. It is also necessary to determine the sign of $V'''(M_{c},0)$. Figures \ref{sol_spc_wrt_beta_e_p=0_pt_00001}(a) - \ref{sol_spc_wrt_beta_e_p=0_pt_1}(a) are qualitatively different existence domains with respect to $\beta_{e}$. These figures show the nature of existence of different solitary structures for $M>M_{c}$ for different values of $p$ whenever $\mu=0.2$ and $\sigma_{ie}=\sigma_{pe}=0.9$. On the other hand, $V'''(M_{c},0)$ is plotted against $\beta_{e}$ in the lower panel (marked as (b)) of each figure. Although figure \ref{sol_spc_wrt_beta_e_p=0_pt_00001}(a) is the existence domain for $p = 0.00001$, qualitatively it represents the existence domain for any $p$ lying within the interval $0 <p < 0.0008$ for any physically admissible value of $\beta_{e}$.
Similarly, figure \ref{sol_spc_wrt_beta_e_p=0_pt_01}(a), figure \ref{sol_spc_wrt_beta_e_p=0_pt_04}(a) and figure \ref{sol_spc_wrt_beta_e_p=0_pt_07}(a) stand for any $p$ lying within the intervals $0.008 \leq p < 0.034$, $0.034 \leq p <0.065$ and $0.065 \leq p < 0.083$, respectively. Finally, figure \ref{sol_spc_wrt_beta_e_p=0_pt_1}(a) represents the existence domain for $p > 0.083$. In the above mentioned figures, P, N, S, and C denote the existence regions of PPSWs, NPSWs, PPSWs after the formation of the PPDL and the region of coexistence of both PPSWs and NPSWs respectively. Here $M_{max}$ is the upper bound of the Mach number $ M $ for the existence of all positive potential solitary structures. Following Das \textit{et al.} \cite{das09,das12}, it is simple to check that $ M_{max} $ is the largest positive root of the equation $ V(\Psi_{M}) = 0 $ subject to the condition $ V(\Psi_{M}) \geq 0 $ for all $ M \leq M_{max} $. In other words, $M$ assumes its upper limit $ M_{max} $ for the existence of all positive potential solitary structures when $\phi$ tends to $\Psi_{M}$, i.e., when ion number density goes to maximum compression. Mach number $M=M_{PPDL}$ ($M=M_{NPDL}$) corresponds to a PPDL (NPDL) solution of the energy integral (\ref{energy_integral}). In our earlier papers \cite{paul17ppr,paul2016,paul17pop}, following Das \textit{et al.} \cite{das09,das12}, we have developed a numerical scheme to find the Mach number $M_{PPDL}$ ($M_{NPDL}$) corresponding to a PPDL (NPDL) solution of the energy integral (\ref{energy_integral}) at some point of the parameter space. To investigate the existence and polarity of different solitary structures at $M>M_{c}$, we have defined the following cut off values of $\beta_{e}$: \begin{description} \item[$\beta_{ea}$] $\beta_{ea}$ is a cut off value of $\beta_{e}$ such that $M_{max}$ (upper bound of the Mach number for the existence of positive potential solitary structures) exists for all $0<\beta_{e} \leq \beta_{ea}$. 
Consequently, $\beta_{e}=\beta_{ea}$ is the upper bound of $\beta_{e}$ for the existence of PPSWs. \item[$\beta_{eb}$] $\beta_{eb}$ is a cut off value of $\beta_{e}$ such that NPDLs start to exist whenever $\beta_{e} \geq \beta_{eb}$, i.e., $\beta_{e}=\beta_{eb}$ is the lower bound of $\beta_{e}$ for the existence of the NPDL solution. In other words, for any $\beta_{e} \geq \beta_{eb}$, there exists a sequence of NPSWs of increasing amplitude which converges to a NPDL at $M = M_{NPDL}$. \item[$\beta_{ec}$] $\beta_{ec}$ is the value of $\beta_{e}$ at which $V'''(M_{c},0)=0$. \end{description} Now, figures \ref{sol_spc_wrt_beta_e_p=0_pt_00001} to \ref{sol_spc_wrt_beta_e_p=0_pt_1} are self-explanatory. For example, consider figure \ref{sol_spc_wrt_beta_e_p=0_pt_01}. From figure \ref{sol_spc_wrt_beta_e_p=0_pt_01}(a) we see that the system supports only PPSWs in the right neighbourhood of the curve $M=M_{c}$ for $0<\beta_{e}<\beta_{eb}$. Now, NPDLs start to exist for $\beta_{e} \geq \beta_{eb}=0.054$ along the curve $M=M_{NPDL}$ and the coexistence of solitary waves of both polarities has been observed in the interval $\beta_{eb}<\beta_{e} \leq \beta_{ea}$. For $\beta_{e}>\beta_{ea}$, the system supports only NPSWs in the right neighbourhood of the curve $M=M_{c}$. Thus, from figure \ref{sol_spc_wrt_beta_e_p=0_pt_01}(a), we have a clear idea about the existence and polarity of the solitary structures in the right neighbourhood of the curve $M=M_{c}$. On the other hand, figure \ref{sol_spc_wrt_beta_e_p=0_pt_01}(b) shows the variation of $V'''(M_{c},0)$ with respect to $\beta_{e}$. Therefore, using \textbf{Result-1}, \textbf{Result-2} and \textbf{Result-3}, one can draw the following conclusions regarding the existence and the polarity of the solitary structures along the curve $M=M_{c}$.\\ (i) For $0 \leq \beta_{e} < \beta_{eb}$, the system supports only PPSWs in the right neighbourhood of $M=M_{c}$ and in this interval of $\beta_{e}$, we have $V'''(M_{c},0)>0$.
So, using \textbf{Result-2} we can conclude that there does not exist any solitary structure at $M=M_{c}$ for $\beta_{e}$ lying within the interval $0 \leq \beta_{e} < \beta_{eb}$.\\ (ii) For $\beta_{eb} < \beta_{e}< \beta_{ec}$, the system supports both PPSWs and NPSWs in the right neighbourhood of $M=M_{c}$ and in this interval of $\beta_{e}$, we have $V'''(M_{c},0)>0$ which indicates the existence of NPSWs at $M=M_{c}$ for $\beta_{eb} < \beta_{e}< \beta_{ec}$ (\textbf{Result-1}).\\ (iii) For $\beta_{ec} < \beta_{e} \leq \beta_{ea}$, the system again supports both PPSWs and NPSWs in the right neighbourhood of $M=M_{c}$, but in this interval of $\beta_{e}$, we have $V'''(M_{c},0)<0$ which indicates the existence of PPSWs at $M=M_{c}$ in the interval $\beta_{ec} < \beta_{e} \leq \beta_{ea}$ (\textbf{Result-1}).\\ (iv) For $\beta_{ea} < \beta_{e} <0.6$, the system supports NPSWs in the right neighbourhood of $M=M_{c}$, but in this interval of $\beta_{e}$, we have $V'''(M_{c},0)<0$. So, we can conclude that there does not exist any solitary structure at the acoustic speed in the interval $\beta_{ea} < \beta_{e} <0.6$ (\textbf{Result-2}).\\ (v) It is simple to check that $V'''(M_{c},0)\Big|_{\beta_{e} = \beta_{ec}}=0$ and $V''''(M_{c},0)\Big|_{\beta_{e} = \beta_{ec}}>0$. Therefore, the potential energy of the pseudo particle of unit mass associated with the energy integral (\ref{energy_integral}) attains a minimum value at $\phi=0$ when $\beta_{e} = \beta_{ec}$, $M=M_{c}$ with $p=0.01$, $\mu=0.2$ and $\sigma_{ie}=\sigma_{pe}=0.9$. In this case, $\phi=0$ is the position of stable equilibrium of the particle. 
So, the particle cannot oscillate even when it is slightly displaced from its stable position of equilibrium, and consequently there is no question of the existence of any solitary structure at the acoustic speed $U=C_{D} \Leftrightarrow M=M_{c}$ when $\beta_{e} = \beta_{ec}$.\\ (vi) From figure \ref{sol_spc_wrt_beta_e_p=0_pt_01}(a), we see that the curve $M=M_{NPDL}$ tends to intersect the curve $M=M_{c}$ at the point $\beta_{e} = \beta_{eb}$ in the existence domain with $p=0.01$, $\mu=0.2$ and $\sigma_{ie}=\sigma_{pe}=0.9$, i.e., there always exists a NPDL solution in any right neighbourhood of $M_{c}$. Therefore, according to Das \textit{et al.} \cite{das12mc}, there must exist a NPDL solution at $M=M_{c}$ at the point $\beta_{e} = \beta_{eb}$ in the existence domain with $p=0.01$, $\mu=0.2$ and $\sigma_{ie}=\sigma_{pe}=0.9$.\\ (vii) There does not exist any PPDL at the acoustic speed. Thus, from figures \ref{sol_spc_wrt_beta_e_p=0_pt_00001} - \ref{sol_spc_wrt_beta_e_p=0_pt_1}, we observe that for $p>0$, there exists a cut off value $p^{(c)}$ of $p$, such that for $0 <p \leq p^{(c)}$, the system supports NPSWs at the acoustic speed for $0 < \beta_{e}< \beta_{ec}$, whereas for $\beta_{ec} < \beta_{e} \leq \beta_{ea}$ the system supports PPSWs at the acoustic speed. Again, there exists a cut off value $p^{(k)}$ of $p$, such that for $p^{(c)} < p \leq p^{(k)}$, the system supports NPSWs at the acoustic speed for $\beta_{eb} < \beta_{e}< \beta_{ec}$, whereas for $\beta_{ec} < \beta_{e} \leq \beta_{ea}$ the system supports PPSWs at the acoustic speed. Here we see that for $M>M_{c}$, the curve $M=M_{NPDL}$ tends to intersect the curve $M=M_{c}$ at $\beta_{e} = \beta_{eb}$ and consequently we have a NPDL solution at the acoustic speed when $\beta_{e}$ assumes the value $\beta_{eb}$.
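The numerical determination of $M_{max}$ described earlier in this section reduces to a one-dimensional root search: scan the Mach number for sign changes of $V(\Psi_{M})$, refine the right-most bracket by bisection, and verify the side condition $V(\Psi_{M})\geq 0$ for all $M\leq M_{max}$. The sketch below is a hedged illustration only: the function \texttt{f} is a toy stand-in for $V(\Psi_{M})$ regarded as a function of $M$ (with an assumed largest root at $M=1.2$), not the actual Sagdeev pseudopotential of this model.

```python
import numpy as np

def largest_positive_root(f, m_lo, m_hi, n_scan=2000, tol=1e-12):
    """Largest positive root of f on [m_lo, m_hi]: grid scan for the
    right-most sign change, then bisection refinement."""
    m = np.linspace(m_lo, m_hi, n_scan)
    v = f(m)
    brackets = np.where(np.sign(v[:-1]) * np.sign(v[1:]) < 0)[0]
    if len(brackets) == 0:
        return None
    a, b = m[brackets[-1]], m[brackets[-1] + 1]
    while b - a > tol:
        c = 0.5 * (a + b)
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    return 0.5 * (a + b)

# Toy stand-in for V(Psi_M) as a function of M (assumption: largest
# positive root at M = 1.2, and f >= 0 for all M <= 1.2).
f = lambda M: (1.2 - M) * (M + 0.5)

M_max = largest_positive_root(f, 0.0, 2.0)
side_condition = bool(np.all(f(np.linspace(0.0, M_max, 500)) >= -1e-9))
```

The same scan-and-bisect pattern applies to locating $M_{PPDL}$ and $M_{NPDL}$, with the root condition replaced by the double-layer conditions on $V$.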
For definiteness, we draw $V(\phi)$ against $\phi$ in figure \ref{phi_vs_vphi_npsw_p=0_pt_01} at the acoustic speed for different values of $\beta_{e}$ lying in the interval $\beta_{eb} \leq \beta_{e}< \beta_{ec}$. From this figure we see that the amplitude of the NPSWs at the acoustic speed increases with decreasing $\beta_{e}$ and this sequence of NPSWs ends with a NPDL at $\beta_{e} = \beta_{eb}$. In figure \ref{phi_vs_vphi_ppsw_p=0_pt_01}, $V(\phi)$ is plotted against $\phi$ for different values of $\beta_{e}$ lying in the interval $\beta_{ec} < \beta_{e} \leq \beta_{ea}$. From this figure we see that the amplitude of the PPSWs at $M=M_{c}$ increases with increasing $\beta_{e}$ lying within the interval $\beta_{ec} < \beta_{e} \leq \beta_{ea}$, whereas at the point $\beta_{e} = \beta_{ec}$ both NPSWs and PPSWs collapse. It is simple to check that the potential energy of the system assumes a minimum value when $\beta_{e} = \beta_{ec}$, i.e., at $\beta_{e} = \beta_{ec}$, $\phi=0$ is a position of stable equilibrium. In fact, at this point $V(M_{c},0)=V'(M_{c},0)=V''(M_{c},0)=V'''(M_{c},0)=0$, whereas $V''''(M_{c},0)>0$. Consequently, there is no question of the existence of any solitary structure at $\beta_{e} = \beta_{ec}$. For a further increment of $p$ beyond $p=p^{(k)}$, there exists a cut off value $p^{(m)}$ of $p$, such that for $p^{(k)} < p \leq p^{(m)}$, the system supports PPSWs at the acoustic speed for $\beta_{ec} < \beta_{e}< \beta_{eb}$, whereas the system does not support any negative potential solitary structure at the acoustic speed for any admissible value of $\beta_{e}$. Finally, for $p>p^{(m)}$, the system does not support any solitary structure at $M=M_{c}$. Again, from figure \ref{sol_spc_wrt_beta_e_p=0_pt_04} and figure \ref{sol_spc_wrt_beta_e_p=0_pt_07}, we see that the system supports PPDLs in a right neighbourhood of $M=M_{c}$, but the system does not support PPDLs at $M=M_{c}$.
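A double layer solution, such as the PPDLs and NPDLs discussed above, requires $V(\phi_{DL})=V'(\phi_{DL})=0$ with $V(\phi)<0$ strictly between $\phi=0$ and $\phi=\phi_{DL}$. This check is easy to automate; the sketch below uses the toy profile $V(\phi)=\phi^{3}(\phi+1)^{2}$, chosen only because it reproduces the qualitative shape of a sonic NPDL (a triple root at the origin and a double root at $\phi_{DL}=-1$), and is not the pseudopotential of the present plasma model.

```python
import numpy as np

def is_double_layer(V, dV, phi_dl, n=400, tol=1e-8):
    """True if V(phi_dl) = V'(phi_dl) = 0 and V < 0 strictly
    between phi = 0 and phi = phi_dl."""
    if abs(V(phi_dl)) > tol or abs(dV(phi_dl)) > tol:
        return False
    interior = np.linspace(phi_dl, 0.0, n + 2)[1:-1]  # endpoints excluded
    return bool(np.all(V(interior) < 0.0))

# Toy sonic NPDL-shaped profile (illustrative only).
V  = lambda p: p**3 * (p + 1.0)**2
dV = lambda p: 3*p**2 * (p + 1.0)**2 + 2*p**3 * (p + 1.0)
```

Here `is_double_layer(V, dV, -1.0)` accepts the double root at $\phi=-1$, while a point such as $\phi=-0.5$, where $V\neq 0$, is rejected.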
Consequently, the present plasma system does not support any positive potential supersoliton at the acoustic speed. Again, since there does not exist any soliton after the formation of a NPDL at the acoustic speed, there does not exist any negative potential supersoliton at the acoustic speed for $p>0$. But for $p=0$, i.e., if there are no positrons in the system, Das \textit{et al.} \cite{das2011existence} found the existence of NPDLs and, most importantly, the existence of negative potential supersolitons at the acoustic speed. Now, in the present work, if we consider $p=0$, i.e., the positron concentration in the system is zero, then the present plasma system reduces to exactly the same system as that of Das \textit{et al.} \cite{das2011existence}. Figure \ref{sol_spc_wrt_mu_@M=Mc_p=0} is the existence domain with respect to $\mu$ at $p=0$ when $\sigma_{ie}=0.9$, i.e., the existence domain with respect to $\mu$ at the acoustic speed $U=C_{D} \Leftrightarrow M=M_{c}$. This existence domain is qualitatively the same as the existence domain given in figure 10 of the paper of Das \textit{et al.} \cite{das2011existence}. In figure \ref{sol_spc_wrt_mu_@M=Mc_p=0}, P stands for the existence region of PPSWs at $M=M_{c}$, N stands for the existence region of NPSWs at $M=M_{c}$, along the curve $\beta_{e}=\beta_{eb}$ we have NPDLs at $M=M_{c}$, NS represents the existence region of NPSWs after the formation of NPDL at $M=M_{c}$ and $V'''(M_{c},0)=0$ along the curve $\beta_{e}=\beta_{ec}$.
To describe figure \ref{sol_spc_wrt_mu_@M=Mc_p=0}, we have defined the following cut off values of $\mu$: \begin{description} \item[$\mu_{p}$] $\mu_{p}$ is a cut off value of $\mu$ such that $M_{max}$ does not exist for any admissible value of $\beta_{e}$ if $\mu$ lies within the interval $0 < \mu < \mu_{p}$, i.e., if $\mu \geq \mu_{p}$, there exists a value $\beta_{e}^{*}$ of $\beta_{e}$ such that $M_{max}$ exists at $\beta_{e}=\beta_{e}^{*}$; moreover, if $\beta_{e}^{*}>0$, then $M_{max}$ exists for all $\beta_{e}$ lying within the interval $0 \leq \beta_{e} < \beta_{e}^{*}$. \item[$\mu_{c}$] $\mu_{c}$ is a cut off value of $\mu$ such that $V''(M_{c},0)=0$ and $V'''(M_{c},0)=0$ at $\mu=\mu_{c}$ and $\beta_{e}=\beta_{ec}$ for fixed values of $\sigma_{ie}$. \item[$\mu_{r}$] $\mu_{r}$ is another cut off value of $\mu$ such that for all $\mu$ lying within $\mu_{r} \leq \mu \leq \mu_{T}$, the curve $M = M_{NPDL}$ tends to intersect the curve $M = M_{c}$ at the point $\beta_{e}=\beta_{eb}$. \item[$\mu_{T}$] $\mu_{T}$ is a physically admissible upper bound of $\mu$. \end{description} Figure \ref{sol_spc_wrt_mu_@M=Mc_p=0} clearly shows that there exist two types of NPSWs at $M=M_{c}$ if $\mu$ lies within the interval $\mu_{r}< \mu < \mu_{T}$. The first type is bounded by the curves $\beta_{e}=\beta_{ec}$ and $\beta_{e}=\beta_{eb}$ and the amplitude of these NPSWs is restricted by the amplitude of the NPDL at the acoustic speed. The second type of NPSWs exists beyond the curve $\beta_{e}=\beta_{eb}$, i.e., after the formation of NPDL at $M=M_{c}$. The amplitude of the NPSWs after the formation of the double layer at $M=M_{c}$ increases with decreasing $\beta_{e}$ and finally attains its maximum value when $\beta_{e}=0$ at $M=M_{c}$. Again, there exists a jump-type discontinuity between the amplitude of the NPSWs at the acoustic speed just before and after the formation of NPDL at $M=M_{c}$ (see figure \ref{profile_supersoliton_p=0}).
Since the existence of solitons after the formation of a double layer confirms the existence of at least one sequence of supersolitons \cite{paul17pop}, we can conclude that whenever there are no positrons in the system, it supports negative potential supersolitons at the acoustic speed. Consequently, there must be a smooth transition of solitary structures at $M=M_{c}$, viz., soliton $\to$ double layer $\to$ supersoliton $\to$ soliton. For the first time, this transition process has been elaborately discussed in the next section with the help of phase portraits of the dynamical system corresponding to the DIA solitary structures at the acoustic speed. \section{\label{sec:Phase_Portraits} Phase Portraits of different solitary structures at the acoustic speed} Before investigating the mechanism of transition of the solitary structures at the acoustic speed, we first describe the phase portraits of the dynamical system corresponding to the different solitary structures at the acoustic speed. It is also necessary to draw a clear distinction between the solitary structures at the acoustic speed, i.e., at $M=M_{c}$, and the solitary structures at the supersonic speed, i.e., for $M>M_{c}$. Differentiating the energy integral (\ref{energy_integral}) with respect to $\phi$, we get the following differential equation: \begin{eqnarray}\label{energy_integral_differentiation} \frac{d^{2}\phi}{d\xi^{2}}+V'(\phi)=0. \end{eqnarray} This equation is equivalent to the following system of differential equations \begin{eqnarray}\label{phase_portraits} \frac{d\phi_{1}}{d\xi}=\phi_{2}~,~\frac{d\phi_{2}}{d\xi}=-V'(\phi_{1})~, \end{eqnarray} where $\phi_{1}=\phi$. In the present paper, we have considered the solitary structures at $M=M_{c}$ with the help of qualitatively different existence domains. Now, we explain their shapes with the help of phase portraits of the system of coupled equations (\ref{phase_portraits}) in the $\phi_{1}-\phi_{2}$ plane.
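The system (\ref{phase_portraits}) is conservative: along any trajectory the quantity $\frac{1}{2}\phi_{2}^{2}+V(\phi_{1})$ is constant, which is what makes the separatrix picture meaningful. As a hedged illustration, the sketch below integrates the system with a classical fourth-order Runge-Kutta step for the toy potential $V(\phi)=\frac{1}{2}\phi^{2}-\frac{1}{3}\phi^{3}$ (a generic choice with a centre at the origin and a saddle at $\phi=1$; it does not reproduce the sonic case, where the origin is a point of inflexion).

```python
import numpy as np

V  = lambda p: 0.5 * p**2 - p**3 / 3.0   # toy potential (illustrative)
dV = lambda p: p - p**2

def rk4_step(state, h):
    """One RK4 step of d(phi1)/dxi = phi2, d(phi2)/dxi = -V'(phi1)."""
    f = lambda s: np.array([s[1], -dV(s[0])])
    k1 = f(state)
    k2 = f(state + 0.5 * h * k1)
    k3 = f(state + 0.5 * h * k2)
    k4 = f(state + h * k3)
    return state + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

state = np.array([0.3, 0.0])             # released at rest inside the well
E0 = 0.5 * state[1]**2 + V(state[0])
for _ in range(20000):                   # integrate up to xi = 20
    state = rk4_step(state, 1.0e-3)
E1 = 0.5 * state[1]**2 + V(state[0])
energy_drift = abs(E1 - E0)              # tiny for a well-resolved RK4 run
```

Sampling many initial conditions of this kind and plotting $(\phi_{1},\phi_{2})$ traces out the closed orbits and separatrices discussed below.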
The fixed point of the dynamical system (\ref{phase_portraits}) is ($\phi_{1}^{*}$, $\phi_{2}^{*}$), where $\phi_{2}^{*}=0$ and $\phi_{1}^{*}$ is given by the equation \begin{eqnarray}\label{phi_1_star} V'(\phi_{1}^{*})=0. \end{eqnarray} This equation gives the value(s) of $\phi_{1}^{*}$ as a function of the physical parameters of the system at the Mach number $M=M_{c}=1$, i.e., $\phi_{1}^{*}$ is a function of $p$, $\mu$, $\beta_{e}$, $\sigma_{ie}$ and $\sigma_{pe}$. So, we can write \begin{eqnarray}\label{phi_1_star_function} \phi_{1}^{*}=\phi_{1}^{*}(p,\mu,\beta_{e},\sigma_{ie},\sigma_{pe}). \end{eqnarray} In the present work, we take $\sigma_{pe}=0.9$, i.e., the average thermal temperature of the positrons is nearly the same as that of the electrons, and we have also considered $\sigma_{ie}=0.9$ (the usual dusty plasma approximation $T_{i}\approx T_{e}$). Therefore, for fixed values of $p$ and $\mu$, the equation (\ref{phi_1_star_function}) reduces to \begin{eqnarray}\label{phi_1_star_function_1} \phi_{1}^{*}=\phi_{1}^{*}(\beta_{e}). \end{eqnarray} To know the value of $\beta_{e}$, we have already drawn the existence domains with respect to $\beta_{e}$ (see figures \ref{sol_spc_wrt_beta_e_p=0_pt_00001}(a), \ref{sol_spc_wrt_beta_e_p=0_pt_01}(a), \ref{sol_spc_wrt_beta_e_p=0_pt_04}(a), \ref{sol_spc_wrt_beta_e_p=0_pt_07}(a), and \ref{sol_spc_wrt_beta_e_p=0_pt_1}(a)) and from these existence domains, we can easily decide the value of $\beta_{e}$ for the existence of the desired solitary structure at the acoustic speed. To describe the phase portraits of the solitary structures at $M=M_{c}$, we consider figures \ref{pp_npsw_p=0_pt_01} - \ref{final_pp_NPDL_p=0_pt_01}. Here we have used the existence domain as shown in figure \ref{sol_spc_wrt_beta_e_p=0_pt_01} to determine the value of $\beta_{e}$ for the existence of the desired solitary structure at the acoustic speed. In figures \ref{pp_npsw_p=0_pt_01}(a) - \ref{final_pp_NPDL_p=0_pt_01}(a), $V(\phi)$ is plotted against $\phi$.
The lower panel (or the panel marked (b)) of each figure shows the phase portrait of the system (\ref{phase_portraits}). In these figures, we have used the values of the parameters as indicated in the figures with $p=0.01$, $\mu=0.2$ and $\sigma_{pe}=\sigma_{ie}=0.9$. The curve $V(\phi)$ and the phase portrait have been drawn on the same horizontal axis $\phi(=\phi_{1})$. The small solid square corresponds to the point of inflexion at the origin, the small solid circle corresponds to a saddle point and the small solid star indicates an equilibrium point other than a saddle point or the point of inflexion of the system (\ref{phase_portraits}). It is simple to check that each maximum (minimum) point of $V(\phi)$ corresponds to a saddle point (an equilibrium point other than a saddle point) of the system (\ref{phase_portraits}). The concept of the point of inflexion in the study of solitons at the acoustic speed is not a new one. Das \textit{et al.} \cite{das12mc} have already mentioned that the origin is the point of inflexion of the system (\ref{phase_portraits}) for solitary structures at the acoustic speed. In fact, if $V(0)=V'(0)=0$, $V''(M_{c},0)=0$ and $V'''(M_{c},0)\neq 0$, the point $\phi=0$ is the point of inflexion which separates the convex part of the curve $V(M_{c},\phi)$ from its concave part. According to Theorems 3 and 4 of Das \textit{et al.} \cite{das12mc}, the origin $(0,0)$ is always a point of inflexion of the system (\ref{phase_portraits}) for solitary structures at the acoustic speed. But in the case of supersonic solitary structures ($M>M_{c}$), the origin $(0,0)$ is not the point of inflexion of the system (\ref{phase_portraits}). In this case, i.e., for the supersonic case, the origin $(0,0)$ is always a saddle point of the system (\ref{phase_portraits}).
This gives a difference between the solitary structures for $M>M_{c}$ and the solitary structures at $M=M_{c}$. Now, there is a one-one correspondence between the separatrix of the phase portrait, shown with a heavy blue line in the lower panel, and the curve $V(\phi)$ against $\phi$ in the upper panel. In fact, this one-one correspondence between the separatrix of the phase portrait and the curve $V(\phi)$ against $\phi$ has been elaborately discussed by Paul \textit{et al.} \cite{paul17pop} for supersonic solitary structures. In this section, we want to discuss the phase portraits of the solitary structures at the acoustic speed and the transition process: solitons $\to$ double layers $\to$ supersolitons $\to$ solitons after the formation of double layer at the acoustic speed when there are no positrons in the system. For the sonic case, i.e., at the acoustic speed, the separatrix corresponding to a solitary structure starts from the point of inflexion (0,0) and ends at the point of inflexion (0,0). This shows that if a separatrix is formed along the positive $\phi$-axis then it is impossible to form another separatrix along the negative direction of the $\phi$-axis, and consequently the coexistence of solitary structures of both polarities is not possible at the acoustic speed. This is also not a new result because Das \textit{et al.} \cite{das12mc} have already proved the following theorem: \textit{\textbf{Theorem 5:} If $V(0)=V'(0)=0$, $V''(M_{c},0)=0$ and $V'''(M_{c},0)\neq 0$, it is not possible to have coexistence of both positive and negative potential solitary structures at $M=M_{c}$.} Therefore, the phase portrait analysis confirms \textbf{Result-3} or \textbf{Theorem-5}. The separatrix corresponding to a solitary structure is shown with a heavy blue line, whereas other separatrices (if they exist) are shown by green lines.
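The taxonomy used in these phase portraits (a maximum of $V$ gives a saddle, a minimum gives a centre-type equilibrium, and $V''=0$ with $V'''\neq 0$ gives a point of inflexion) can be checked directly from finite-difference derivatives of $V$. A minimal sketch, again with the illustrative profile $V(\phi)=\phi^{3}(\phi+1)^{2}$ rather than the pseudopotential of this model (its equilibria sit at $\phi=0$, $-0.6$ and $-1$):

```python
def classify(V, p, h=1e-5, tol=1e-6):
    """Classify an equilibrium p of d^2(phi)/dxi^2 = -V'(phi) using
    central finite differences for V'' and V'''."""
    d2 = (V(p + h) - 2.0 * V(p) + V(p - h)) / h**2
    d3 = (V(p + 2*h) - 2.0 * V(p + h) + 2.0 * V(p - h) - V(p - 2*h)) / (2.0 * h**3)
    if d2 > tol:
        return "centre"       # minimum of V
    if d2 < -tol:
        return "saddle"       # maximum of V
    return "inflexion" if abs(d3) > tol else "degenerate"

# Illustrative potential: triple root at the origin, double root at -1.
V = lambda p: p**3 * (p + 1.0)**2
labels = [classify(V, 0.0), classify(V, -0.6), classify(V, -1.0)]
```

For this toy profile the origin is classified as a point of inflexion, $\phi=-0.6$ as a centre-type equilibrium and $\phi=-1$ as a saddle, mirroring the square, star and circle markers used in the figures.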
The closed curve about an equilibrium point (other than a saddle point or the point of inflexion) contained in at least one separatrix indicates the possibility of a periodic wave solution about that fixed point. Figure \ref{pp_npsw_p=0_pt_01}(a) shows the existence of a NPSW at $M=M_{c}$ and figure \ref{pp_npsw_p=0_pt_01}(b) describes the corresponding phase portrait. Here we see that the system has a point of inflexion at the origin, an equilibrium point at $(-0.33,0)$ and a saddle at $(-1.27,0)$. Again, from figure \ref{pp_npsw_p=0_pt_01}(b), we see that there are two separatrices: (i) the separatrix (as shown by a heavy blue line) that starts and ends at the origin enclosing the non-saddle fixed point and this separatrix corresponds to the negative potential soliton at $M=M_{c}$ and (ii) the separatrix (as shown by a heavy green line) which appears to pass through the saddle point $(-1.27,0)$ and this separatrix contains the separatrix (as shown by a heavy blue line) that starts and ends at the origin. There exist infinitely many closed curves between these two separatrices and each of these closed curves corresponds to a super-nonlinear periodic wave. Thus, figure \ref{pp_npsw_p=0_pt_01}(b) confirms the existence of super-nonlinear periodic waves at the acoustic speed. Figure \ref{pp_ppsw_p=0_pt_01}(a) shows the existence of a PPSW at $M=M_{c}$ and figure \ref{pp_ppsw_p=0_pt_01}(b) describes the corresponding phase portrait. Here we see that the system has a point of inflexion at the origin and an equilibrium point at $(0.39,0)$ which is not a saddle point. From figure \ref{pp_ppsw_p=0_pt_01}(b), we see that there exists only one separatrix (as shown by a heavy blue line) that starts and ends at the origin enclosing the non-saddle fixed point $(0.39,0)$ and consequently this separatrix corresponds to the positive potential soliton at $M=M_{c}$.
Figure \ref{final_pp_NPDL_p=0_pt_01}(b) shows the phase portrait of a NPDL at the acoustic speed and this figure shows that the separatrix corresponding to the double layer solution at the acoustic speed appears to start and end at the point of inflexion $(0,0)$ and again it appears to pass through the saddle point at $(-1.1,0)$ enclosing the non-saddle fixed point $(-0.62,0)$. In figure \ref{final_pp_NPDL_p=0_pt_01}(a), $V(\phi)$ is plotted against $\phi$ at the acoustic speed for the given values of the parameters as indicated in the figure. Figure \ref{final_pp_NPDL_p=0_pt_01}(a) and figure \ref{final_pp_NPDL_p=0_pt_01}(b) together give a one-one correspondence between the separatrix of the phase portrait, shown with a heavy blue line in the lower panel, and the curve $V(\phi)$ against $\phi$ in the upper panel. This mechanism holds good for the formation of PPSWs and also for the formation of NPSWs at the acoustic speed. Now, we are in a position to discuss the transition process of the solitary structures, viz., solitons $\to$ double layers $\to$ supersolitons $\to$ solitons after the formation of double layer at the acoustic speed when there are no positrons in the system. In this case, i.e., for $p=0$, we consider the existence domain as given in figure \ref{sol_spc_wrt_mu_@M=Mc_p=0} to find the values of $\mu$ and $\beta_{e}$ for the existence of the desired solitary structure at the acoustic speed. The one-one correspondence between the separatrix of the phase portrait and the curve $V(\phi)$ against $\phi$, together with the transition process of different solitary structures at the acoustic speed, has been shown through figures \ref{pp_npsw_p=0} - \ref{equilibrium_points_p=0}. Figure \ref{pp_npsw_p=0}(a) shows the existence of a NPSW at $M=M_{c}$ before the formation of NPDL and figure \ref{pp_npsw_p=0}(b) describes the phase portrait of the dynamical system (\ref{phase_portraits}) at $M=M_{c}$ for the values of the parameters as mentioned in the figure.
Here we see that the system has a point of inflexion at the origin, an equilibrium point at $(-0.63,0)$ and a saddle at $(-1.49,0)$. Again, from figure \ref{pp_npsw_p=0}(b), we see that there are two separatrices. (i) The separatrix (as shown by a heavy blue line) that appears to start and end at the origin enclosing the non-saddle fixed point corresponds to the negative potential soliton at $M=M_{c}$. (ii) This blue separatrix is contained in another separatrix (as shown by a heavy green line) that appears to pass through the saddle point $(-1.49,0)$. There exist infinitely many closed curves between these two separatrices and each of these closed curves corresponds to a super-nonlinear periodic wave. Thus, figure \ref{pp_npsw_p=0}(b) confirms the existence of super-nonlinear periodic waves at the acoustic speed. Figure \ref{pp_npdl_p=0}(b) shows the phase portrait of NPDL at the acoustic speed when $\beta_{e}=\beta_{eb}=0.36918$ and this figure shows that the separatrix corresponding to the double layer solution at the acoustic speed appears to start and end at the point of inflexion $(0,0)$ and again it appears to pass through the saddle point at $(-1.41,0)$ enclosing the non-saddle fixed point $(-0.7,0)$. Now we slightly decrease the value of $\beta_{e}$ from $\beta_{eb}$ and draw figure \ref{pp_npsupersoliton_p=0} for $\beta_{e}=\beta_{eb}-0.003$. In figure \ref{pp_npsupersoliton_p=0}(a), $V(\phi)$ is plotted against $\phi$, whereas figure \ref{pp_npsupersoliton_p=0}(b) describes the phase portrait of the dynamical system (\ref{phase_portraits}) at $M=M_{c}$ for the values of the parameters as mentioned in the figure; in each panel the region between $-2$ and $0$ is shown on a larger scale in the inset. Figure \ref{pp_npsupersoliton_p=0}(a) shows that $V(\phi)$ has two consecutive minima at $\phi = -0.81504 \approx -0.82$ and at $\phi = -7.4622 \approx -7.46$.
Consequently, the phase portrait of the system has two non-saddle fixed points as shown in the lower panel of figure \ref{pp_npsupersoliton_p=0}. The separatrix corresponding to the solitary structure that appears to start and end at the point of inflexion $(0,0)$ encloses one non-zero saddle point $(-1.2875,0)\approx (-1.29,0)$, and two non-saddle fixed points $(-0.81504,0)\approx (-0.82,0)$ and $(-7.4622,0)\approx(-7.46,0)$. From the region between $-2$ and $0$ as shown in larger scale in the inset of figure \ref{pp_npsupersoliton_p=0}(b), one can also check that this separatrix also envelops one inner separatrix (shown by a green line) that appears to pass through the saddle point $(-1.2875,0)\approx (-1.29,0)$. Therefore, according to the definition of a supersoliton, this separatrix is associated with a new type of solitary wave at the acoustic speed, namely a supersoliton at the acoustic speed. Thus, figure \ref{pp_npsupersoliton_p=0} confirms the existence of negative potential supersolitons at the acoustic speed. Now, we further reduce the value of $\beta_{e}$ and draw figure \ref{pp_npsw_after_npdl_p=0} for $\beta_{e}=\beta_{eb}-0.1=0.26908$. From figure \ref{pp_npsw_after_npdl_p=0}(b), we see that the phase portrait is qualitatively the same as the phase portrait of NPSW at the acoustic speed (as shown in figure \ref{pp_npsw_p=0}(b)). However, there exists a jump-type discontinuity between the amplitudes of the solitons before and after the formation of NPDL at the acoustic speed (see figure \ref{profile_supersoliton_p=0}). Therefore, for decreasing values of $\beta_{e}$ with $\beta_{e}<\beta_{eb}$, the negative potential supersolitons ultimately reduce to NPSWs after the formation of NPDL.
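The supersoliton signature just described can be read off from $V(\phi)$ alone: between $\phi=0$ and the amplitude, a plain soliton has a single minimum of $V$, whereas a supersoliton has two minima separated by a strictly negative local maximum (the extra saddle enclosed by the separatrix). A hedged sketch that counts interior extrema on a sampled profile; the two polynomials below are toy profiles with the right qualitative shapes, not pseudopotentials of this model.

```python
import numpy as np

def count_interior_maxima(V, phi_amp, n=2001):
    """Strictly negative interior local maxima of V between phi_amp
    and 0, detected on a uniform sample."""
    phi = np.linspace(phi_amp, 0.0, n)
    v = V(phi)
    d = np.diff(v)
    count = 0
    for i in range(1, n - 1):
        if d[i - 1] > 0.0 and d[i] < 0.0 and v[i] < 0.0:
            count += 1
    return count

def classify_structure(V, phi_amp):
    return "supersoliton" if count_interior_maxima(V, phi_amp) >= 1 else "soliton"

# Toy negative-potential profiles on [-2, 0] (illustrative only):
V_sol   = lambda p: p**3 * (p + 2.0)                          # single well
V_super = lambda p: p**3 * (p + 2.0) * ((p + 1.0)**2 + 0.01)  # well-hump-well
```

Applied to the two toy profiles, the counter reports no interior maximum for the single well and exactly one for the well-hump-well shape.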
In other words, there must be a critical value $\beta_{e}^{(cr)}$ of $\beta_{e}$, such that the system supports negative potential supersolitons at the acoustic speed when $\beta_{e}$ lies within the interval $\beta_{e}^{(cr)} < \beta_{e} < \beta_{eb}$, whereas for $\beta_{e} < \beta_{e}^{(cr)}$, the system supports NPSWs after the formation of the double layer at the acoustic speed. Thus, we see that there exists a transition between the solitary structures at the acoustic speed, viz., soliton $\to$ double layer $\to$ supersoliton $\to$ soliton after the formation of double layer. Such a transition process has also been observed by Paul \textit{et al.} \cite{paul17pop} for the solitary structures in the case of supersonic waves, i.e., for $M>M_{c}$. To understand the mechanism of this transition process of solitary structures at the acoustic speed, we plot the origin (i.e., the point of inflexion), the saddle and other equilibrium points of the system (\ref{phase_portraits}) on the $\phi(=\phi_{1})$-axis for decreasing values of $\beta_{e}$ starting from $\beta_{e}=\beta_{eb}-0.000001$ in figure \ref{equilibrium_points_p=0}. This figure shows that for decreasing values of $\beta_{e}$, the distance between the non-zero saddle and the non-saddle fixed point nearest to it decreases and ultimately both of them disappear from the system. Finally, the system contains only the point of inflexion, i.e., the origin, and a non-zero equilibrium point. Consequently, the separatrix corresponding to the solitary structure appears to start and end at the origin enclosing the non-saddle fixed point and we have a NPSW after the formation of NPDL at the acoustic speed. Thus, we see that the mechanism of the transition of solitary structures at the acoustic speed is qualitatively the same as the transition process of solitary structures for supersonic waves as reported by Paul \textit{et al.} \cite{paul17pop}.
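The disappearance of the non-zero saddle together with its neighbouring non-saddle fixed point, as described above, is the signature of a saddle-node (fold) bifurcation of $V'(\phi)=0$ in the control parameter. A hedged sketch with a toy one-parameter family (a normal-form stand-in, not the actual $\beta_{e}$ dependence of $V'$ in this model): for $\beta>0$ the family below has three equilibria, and the outer pair annihilates as $\beta$ passes through zero.

```python
import numpy as np

def real_equilibria(beta):
    """Real roots of the toy family V'(phi; beta) = phi*((phi+1)^2 - beta),
    located by grid sign changes and refined by bisection."""
    g = lambda p: p * ((p + 1.0)**2 - beta)
    grid = np.linspace(-3.0, 1.0, 4096)   # grid spacing chosen to avoid exact roots
    s = np.sign(g(grid))
    roots = []
    for i in range(len(grid) - 1):
        if s[i] * s[i + 1] < 0:
            a, b = grid[i], grid[i + 1]
            for _ in range(60):           # bisection refinement
                c = 0.5 * (a + b)
                if g(a) * g(c) <= 0:
                    b = c
                else:
                    a = c
            roots.append(0.5 * (a + b))
    return roots

n_before = len(real_equilibria(0.25))    # three equilibria: -1.5, -0.5, 0
n_after  = len(real_equilibria(-0.25))   # the saddle-node pair has vanished
```

Tracking the root set of $V'(\phi;\beta_{e})$ in exactly this way, for the actual pseudopotential, reproduces the merging of the saddle and centre seen in figure \ref{equilibrium_points_p=0}.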
\section{\label{sec:Conclusions} Conclusions} In the present work, we have investigated the nature of existence of different DIA solitary structures at the acoustic speed in a collisionless unmagnetized dusty plasma consisting of negatively charged static dust grains, adiabatic warm ions, Cairns distributed nonthermal electrons and isothermal positrons with the help of existence domains and phase portraits. Although from the paper of Paul \& Bandyopadhyay \cite{paul2016} it has been observed that for the supersonic case the same system supports double layers of both polarities and positive potential supersolitons, at the acoustic speed the system supports PPSWs, NPSWs and NPDLs only. Again, in the present paper, if we consider $p=0$ then we observe that the system supports PPSWs, NPSWs, NPDLs, NPSWs after the formation of NPDL and negative potential supersolitons at the acoustic speed. These results agree with the results of Das \textit{et al.} \cite{das2011existence}, where they have considered a collisionless unmagnetized three component dusty plasma consisting of negatively charged static dust grains, adiabatic warm ions and Cairns distributed nonthermal electrons to investigate the DIA solitary structures at the acoustic speed. For the first time, we have introduced the phase portraits of the dynamical system corresponding to the DIA solitary structures at the acoustic speed. We found the following qualitative differences between the phase portraits of the solitary structures at $M=M_{c}$ and the phase portraits of the solitary structures for $M>M_{c}$ which have been discussed by Paul \textit{et al.} \cite{paul17pop}. (i) For $M>M_{c}$, the origin is always a saddle point and the separatrix corresponding to the solitary structures appears to pass through the origin, whereas for $M=M_{c}$, the origin is the point of inflexion and the separatrix corresponding to the solitary structures appears to start and end at the origin.
(ii) For $M>M_{c}$, the phase portraits of the dynamical system corresponding to DIA double layers have two saddles and two non-saddle fixed points, but in the case of double layers at the acoustic speed, the system has a point of inflexion at the origin, one non-zero saddle and one non-saddle fixed point. (iii) From the paper of Dubinov and Kolotkov \cite{dubinov12b} and Paul \textit{et al.} \cite{paul17pop}, for the case of supersolitons, we see that there exist at least two separatrices and the separatrix through the origin (saddle point) encloses the other one for $M>M_{c}$. In the case of sonic DIA waves, i.e., for $M=M_{c}$, we have the same definition of the supersolitons, i.e., for supersolitons at the acoustic speed, there are at least two separatrices and the separatrix that appears to start and end at the origin (point of inflexion) encloses the other one. With the help of the phase portraits, we have also explained the transition process of the solitary structures at the acoustic speed for $p=0$, viz., soliton $\to$ double layer $\to$ supersoliton $\to$ soliton for decreasing values of $\beta_{e}$; it is not possible to explain this transition process by considering the existence domains only or simply by drawing the curve $V(\phi)$ against $\phi$. This transition phenomenon at the acoustic speed occurs according to the mechanism described in figure \ref{equilibrium_points_p=0}. Again, the transition mechanism at the acoustic speed is the same as that of the supersonic solitary structures as reported by Paul \textit{et al.} \cite{paul17pop}. From this work, we can conclude that the formation of a double layer is also possible at the acoustic speed. Again, according to Alfv{\'e}n \cite{alfven1981}, a double layer consists of two oppositely charged parallel layers resulting in a potential drop in the layer and a vanishing electric field on each side of the layer.
Formation of double layers in a plasma system releases an amount of energy which accelerates the charged particles of the system. Above the ionosphere of the Earth, acceleration of electrons has been observed in a rather narrow region and the possible cause of such acceleration is the formation of several double layers in that region \cite{alfven1981}. This work is helpful for understanding the formation of double layers at the acoustic speed. \acknowledgments One of the authors (Ashesh Paul) is thankful to the Department of Science and Technology, Govt. of India, INSPIRE Fellowship Scheme for financial support. \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction} Tracer advection constitutes a large portion of the compute time for modern global climate models, due to the large number of chemical and hydrometeor species that must be accounted for. For physical consistency, the advection of tracers must be conservative while being as numerically accurate and computationally efficient as possible. In a previous article, a novel characteristic discontinuous Galkerin (CDG) advection scheme was presented which allows for conservative advection on unstructured meshes and arbitrarily long time steps while also scaling sub-linearly with the advection of additional tracers \cite{Lee16}. In idealized geometries, the CDG scheme was found to outperform a traditional flux corrected transport (FCT) scheme \cite{BB73, Zalesak79} for a moderate number of tracers, and was shown to converge at higher order, through the use of a modal basis expansion of the tracer in each element. Several conservative transport schemes have recently been presented based on the idea of integrating either the edges or the entire element backwards along Lagrangian characteristics and integrating over the resultant area in order to determine either the fluxes or the element values at the new time level. These schemes have the appealing properties that they are unconditionally stable with respect to time step, and so may be run with longer time steps than the underlying dynamics requires, and their performance scales sub-linearly with the number of tracers being advected, due to the fact that the computation of the Lagrangian characteristics may be reused for additional tracers. The Incremental Remap (IR) scheme, which has been implemented on both planar Cartesian \cite{DB00} and spherical geodesic \cite{LR05} grids, exploits this idea in flux form. The IR scheme uses the mean values of the tracer in the neighbouring elements in order to reconstruct the tracer gradients, which are in turn used to integrate the swept regions of the edges. 
It has previously \firstRev{been } shown \cite{Lee16} that the CDG scheme is approximately as accurate as the IR scheme at half the resolution, due to the compact nature of the trial functions compared to the IR gradients. In remap form, by which the entire element is integrated backwards along the characteristics via the Reynolds transport theorem in order to compute its new conservative value, this idea has been used to construct the Conservative Semi-Lagrangian Multi-tracer transport scheme (CSLAM) \cite{LNU10,ELT13}. In this formulation the intersection of the pre-image of the element and the Eulerian grid at the previous time level is integrated using line integrals via the Gauss-Green theorem in order to determine the weights of a quadratic polynomial representation of the tracer in each element. This scheme is currently in use within the High-Order Methods Modelling Environment (HOMME) atmospheric model \cite{TF10}. The methods discussed above use some form of reconstruction to determine the higher order structure of the tracer field. An alternative approach is to introduce a set of test functions which are integrated along velocity characteristics so as to satisfy the adjoint equation to the weak form of the problem. This is the approach used in the Eulerian-Lagrangian Localized Adjoint Method (ELLAM) \cite{CRHE90,RC02}. One downside of the ELLAM method, however, is that it requires the assembly and solution of a global system of equations, as either a finite volume or finite element problem. This issue is negated in a similar and recently developed semi-Lagrangian discontinuous Galerkin scheme for tracer transport in atmospheric flows on cubed spheres \cite{GNQ14}, which prognoses the trial function representation of the tracer, while integrating the quadrature points of the test functions forwards in time along velocity characteristics in order to satisfy the adjoint equation. 
The method is applied in one dimension, for which the quadrature points of the Lagrangian pre-image of the element are integrated forward in time, where they are used to evaluate the tracer via its trial function representation. This is required in order to preserve the values of the test functions along characteristics, also a feature of the CDG scheme. The CDG scheme is similar to the previous semi-Lagrangian discontinuous Galerkin scheme \cite{GNQ14} in that the higher order structure is prognosed via the solution of a system of linear equations for the coefficients of the trial functions in each element, with the fluxes determined via an integration of the swept region made by the vertices of the edges along characteristics. However, unlike the previous scheme it may be applied in two dimensions without the use of dimensional splitting, and so is suitable for fully unstructured grids. In this paper we present the implementation of the CDG scheme within the MPAS-Ocean model \cite{Ringler13}, a mimetic C-grid finite volume model, for the advection of both passive and active tracers. The scheme is implemented in both the horizontal, on planar and spherical unstructured grids, and in the vertical, which makes use of an arbitrary Lagrangian Eulerian (ALE) grid \cite{Petersen15}. Consistency between the dynamics and the transport scheme is ensured via a normalization of the fluxes by the volume fluxes derived from the continuity equation, and special care is taken to ensure that the vertical advection remains conservative in the context of the moving layers. The remainder of this paper is presented as follows: In section 2 the formulation of the CDG scheme for the advection equation is presented, with particular emphasis on the \firstRev{splitting of fluxes between the horizontal and vertical dimensions and the construction of the vertical scheme in the context of the ALE vertical coordinate. 
} \firstRev{Section 3 } presents the results of various idealized test cases and comparisons to the existing flux corrected transport (FCT) scheme in MPAS-Ocean. In \firstRev{section 4 } the conclusions are discussed. \firstRev{Finally the formulation of a local coordinate system tangent to the sphere in each element and the mapping between local and global coordinates are given in the appendix.} \section{Formulation} \subsection{Characteristic discontinuous Galerkin advection} For a detailed description of the CDG scheme, the reader is referred to the previous article \cite{Lee16}. Here we provide a brief overview of the formulation. The equation for the advection of a thickness-weighted tracer concentration is given as \begin{equation}\label{eqn1.1} \frac{\partial hq}{\partial t} + \nabla\cdot(\vec uhq) = 0 \end{equation} where $h$ is the layer thickness, $q$ the tracer concentration and $\vec u$ the transport velocity. \firstRev{Note that here the layer thickness is assumed to be constant with time. We will relax this assumption when discussing the implementation on the vertical ALE grid in section \ref{form_vert}. } We begin by discretizing the domain into a set of $k$ contiguous elements, \firstRev{for which $h_k$ and $q_k$ are the discrete forms of the layer thickness and tracer concentration respectively. } Multiplying \firstRev{$h_kq_k$ } by a set of $i$ test functions that vary in both space and time $\phi_{k,i}(\vec x,t)$, and expanding via the chain rule gives \begin{equation}\label{eqn1.2} \frac{\partial\phi_{k,i}h_kq_k}{\partial t} + \nabla\cdot(\phi_{k,i}\vec{u}h_{k}q_{k}) = \phi_{k,i}\Big(\frac{\partial h_kq_k}{\partial t} + \nabla\cdot(\vec{u}h_kq_k)\Big) + h_kq_k\Big(\frac{\partial\phi_{k,i}}{\partial t} + \vec{u}\cdot\nabla\phi_{k,i}\Big). 
\end{equation} \firstRev{Note that this formulation differs from the standard Galerkin formulation in that the tracer concentration and not the full equation has been multiplied by the test function } \cite{Lee16}. The first term on the right-hand side of \eqref{eqn1.2} is the discrete form of \eqref{eqn1.1} weighted by the test function and so vanishes, and the second term represents the material derivative of the test functions $D\phi_{k,i}/Dt$. Integrating by parts over the element area $\Omega_k$ and between time levels $n$ and $n+1$, and then applying Gauss' theorem, the weak form is given as \begin{multline}\label{eqn1.3} \int_{\Omega_k}(\phi_{k,i}h_kq_k)^{n+1} - (\phi_{k,i}h_kq_k)^n\mathrm{d}\vec x + \int_{t^n}^{t^{n+1}}\int_{\partial\Omega_k}\phi_{k,i}\vec{u}h_{k'}q_{k'}\cdot\mathrm{d}\vec{s}\mathrm{d}t = \\ \int_{t^n}^{t^{n+1}}\int_{\Omega_k}h_kq_k\frac{D\phi_{k,i}}{Dt}\mathrm{d}\vec x\mathrm{d}t, \end{multline} \firstRev{where $k'$ denotes a set of elements within a local neighbourhood of element $k$ from which the flux is to be determined. } The right-hand side may be taken as zero if the values of the test functions are constant along characteristics, a condition also enforced in the ELLAM \cite{CRHE90,RC02} and semi-Lagrangian discontinuous Galerkin \cite{GNQ14} schemes, such that \begin{equation}\label{eqn1.4} \frac{D\phi_{k,i}}{Dt} = 0. 
\end{equation} Equation (\ref{eqn1.4}) is satisfied via the introduction of a test function $\beta$ which varies with respect to a Lagrangian coordinate in space and time $\vec\Gamma$ as \begin{equation}\label{eqn1.5} \phi_{k,i}(\vec x,t) = \beta_{k,i}(\vec\Gamma(\vec\xi(s),s)) \end{equation} where $\vec\Gamma(\vec\xi(s),s)$ is constant with respect to the parametric variable $s$ along the characteristic trajectory \begin{equation}\label{eqn1.6} \frac{\mathrm{d}\vec\xi}{\mathrm{d}s} = \vec u(\vec\xi(s),s)\qquad\vec\xi(t) = \vec x \end{equation} and $t$ is the point on $s$ where the boundary condition is applied, such that \begin{equation}\label{eqn1.7} \frac{\mathrm{d}\vec\Gamma(\vec\xi(s),s)}{\mathrm{d}s} = 0\qquad \vec\Gamma(\vec x,t^{n+1}) = \vec\xi(t^{n+1}). \end{equation} Note that the boundary condition for \eqref{eqn1.7} follows from that for \eqref{eqn1.6} such that for any $s = t$, $\Gamma(\vec\xi(t),t) = \Gamma(\vec x,t)$, with $t^{n+1}$ being the specific time at which the boundary condition is applied. Integrating (\ref{eqn1.6}) with respect to $s$ between $t$ and $t^{n+1}$ and recalling the boundary conditions on $\vec\xi(t)$ and $\vec\xi(t^{n+1})$ gives \begin{equation}\label{eqn1.9} \vec{\Gamma}(\vec x,t) = \vec{x} + \int_{t}^{t^{n+1}}\vec{u}(\vec\xi(s),s)\mathrm{d}s \end{equation} such that for any $(\vec{x},t)$ \eqref{eqn1.9} preserves the constant value of $\beta$ along characteristics and hence the constant value of $\phi$ along those same characteristics. Equation \eqref{eqn1.9} implies that the test functions \emph{arrive} at their static Eulerian coordinates $\vec x$ at time level $n+1$ such that $\phi_{k,i}(\vec x,t^{n+1}) = \beta_{k,i}(\vec x)$, with $\vec x$ being the location of $\vec\Gamma(\vec x,t^{n+1})$ as given in \eqref{eqn1.9}. The values of the test functions at the same coordinate at the previous time level $n$ are then given as $\phi_{k,i}(\vec x,t^n) = \beta_{k,i}(\vec\Gamma(\vec x,t^n))$. 
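The trajectory integral in \eqref{eqn1.9} can be approximated with a few midpoint (RK2) sub-steps. The following sketch is illustrative only: the solid-body velocity field, the sub-step count and the function names are assumptions for this example, not the model's actual integrator.

```python
import numpy as np

def velocity(x):
    # Illustrative steady, non-divergent velocity field (an assumption
    # for this sketch, not the model's velocity): solid-body rotation.
    return np.array([-x[1], x[0]])

def trace_characteristic(x_arrival, dt, nsub=4):
    """Approximate the departure point: integrate d(xi)/ds = u(xi)
    backwards from the arrival point x at t^{n+1} over a step dt using
    midpoint (RK2) sub-steps. Passing a negative dt instead traces the
    point forwards in time."""
    xi = np.asarray(x_arrival, dtype=float)
    h = -dt / nsub  # negative step: integrate backwards in time
    for _ in range(nsub):
        mid = xi + 0.5 * h * velocity(xi)
        xi = xi + h * velocity(mid)
    return xi

# Departure point of a point arriving at (1, 0) after dt = 0.1:
x_dep = trace_characteristic([1.0, 0.0], dt=0.1)
```

Evaluating the static function $\beta$ at the traced position then yields the value of $\phi$ at the earlier time level.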
As a practical matter, this means that if we wish to evaluate a test function $\phi_{k,i}(\vec x,t^n)$ at a given coordinate $\vec x$ at a previous time level $n$ subject to \eqref{eqn1.4} then we may equivalently evaluate the static test function $\beta_{k,i}(\vec\Gamma(\vec x,t^n))$ at its previous location by integrating \emph{forwards} with the Eulerian velocity field. Unlike the test functions, $\phi_{k,i}(\vec x,t)$, there is no requirement that the trial functions be conserved along characteristics, and so they may remain static, defined at the arrival locations of the characteristics at time level $n+1$ as given in \eqref{eqn1.9}. Ensuring that the mass matrix by which the solution coefficients are multiplied remains static in this fashion motivated our choice of boundary conditions in \eqref{eqn1.6} and \eqref{eqn1.7}. \firstRev{We represent the discrete form of the tracer concentration via an expansion of yet to be defined trial functions as } \begin{equation} q_{k}(\vec x,t) = \sum_jc_{k,j}(t)\beta_{k,j}(\vec x). \end{equation} This gives rise to the following linear system for the solution of the trial function coefficients $c_{k,j}^{n+1}$ in each element $k$ at the new time level $n+1$ \begin{equation}\label{eqn1.10} \sum_j \int_{\Omega_k}h_k\beta_{k,i}\beta_{k,j}\mathrm{d}\vec x c_{k,j}^{n+1} = \int_{\Omega_k}(\phi_{k,i}h_kq_k)^n\mathrm{d}\vec x - \int_{t^n}^{t^{n+1}}\int_{\partial\Omega_k}\phi_{k,i}\vec{u}h_{k'}q_{k'}\cdot\mathrm{d}\vec{s}\mathrm{d}t. \end{equation} Like the standard discontinuous Galerkin formulation, only the boundary fluxes are required to determine the solution of the tracer coefficients at the new time level, so no global mass matrix is required. 
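Because the trial functions are discontinuous across element boundaries, the update \eqref{eqn1.10} amounts to one small dense solve per element. A minimal sketch; the basis size and the numerical values of the mass matrix and right-hand side are illustrative assumptions:

```python
import numpy as np

def update_element_coefficients(M_k, rhs_k):
    """Solve the element-local system M_k c^{n+1} = rhs_k, where M_k is
    the thickness-weighted mass matrix int_k h beta_i beta_j dx, and
    rhs_k collects the previous-time-level integral minus the boundary
    fluxes. Each element's solve is independent: no global mass matrix
    is assembled."""
    return np.linalg.solve(M_k, rhs_k)

# Illustrative 3-mode basis (one constant plus two linear modes):
M = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.5, 0.1],
              [0.0, 0.1, 0.5]])
rhs = np.array([1.0, 0.2, 0.0])
c_new = update_element_coefficients(M, rhs)
```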
However, unlike the standard discontinuous Galerkin formulation, where the fluxes are determined via some Eulerian process, such as some form of averaging, upwinding or a Riemann solver, under the CDG formulation the edge fluxes must be evaluated by taking the area made by the edge as it is swept backward in time to its static Eulerian location from the previous time level and integrating the tracer mass over this area. This may be expressed as \begin{multline}\label{eqn1.11} \sum_j \int_{\Omega_k}h_k\beta_{k,i}\beta_{k,j}\mathrm{d}\vec x c_{k,j}^{n+1} = \int_{\Omega_k}(\phi_{k,i}h_kq_k)^n\mathrm{d}\vec x - \\ \sum_e\sum_{k'}\int_{\Omega_{k,k',\Delta t}}(\phi_{k,i}h_{k'}q_{k'})^n\mathrm{d}\vec x \end{multline} where $\Omega_{k,k',\Delta t}$ is the intersection of the \emph{swept region} of the edge $e$ of element $k$ over time step $\Delta t$ and element $k'$. The solution of (\ref{eqn1.11}) subject to (\ref{eqn1.4}) represents the characteristic discontinuous Galerkin formulation for updating the tracer trial function coefficients $c_{k,j}^{n+1}$ at the new time level $n+1$. \begin{figure}[!hbtp] \centering \includegraphics[width=0.48\textwidth,height=0.40\textwidth]{remap_perspective.pdf} \caption{Schematic of the flux computation for the CDG scheme. Edge vertices are integrated back in time to their departure points in order to determine the swept region for an edge over a given time step. The volume of the tracer over the swept region is then integrated, with the quadrature points integrated forwards to arrival points where the test functions are evaluated in order to preserve the value of the test functions along characteristics.} \label{Fig1} \end{figure} \subsection{Vertical CDG advection in ALE coordinates}\label{form_vert} Before presenting the full 3D formulation of the CDG advection scheme we discuss the 1D vertical advection in some detail. 
This is in order to present the subtleties required to conservatively apply the vertical advection scheme in the context of the moving ALE vertical grid \cite{Petersen15}. The vertical advection of a tracer concentration $q(z,t)$ in a layer of varying thickness $h(z,t)$ is given as \begin{equation}\label{eqn2.1} \frac{\partial hq}{\partial t} + \frac{\partial(w-w_r)hq}{\partial z} = 0 \end{equation} where $w$ is the vertical velocity and $w_r$ the velocity of the layer interface such that $w-w_r$ is the effective transport velocity across the layer interface. \firstRev{As for the horizontal formulation in the preceding section, we begin by multiplying the discrete form of the tracer concentration $q_l$ by the $i$ test functions for level $l$, } $\phi_{l,i}(z,t)$ \firstRev{and expanding via the chain rule as } \begin{multline}\label{eqn2.2} \frac{\partial\phi_{l,i}h_lq_l}{\partial t} + \frac{\partial\phi_{l,i}(w-w_r)h_{l}q_{l}}{\partial z} = \phi_{l,i}\Big(\frac{\partial h_lq_l}{\partial t} + \frac{\partial (w-w_r)h_lq_l}{\partial z}\Big) \\ + h_lq_l\Big(\frac{\partial\phi_{l,i}}{\partial t} + (w-w_r)\frac{\partial\phi_{l,i}}{\partial z}\Big). \end{multline} \firstRev{Again, the first term on the right-hand side vanishes as it is the discrete form of \eqref{eqn2.1} weighted by the test function. } For the remaining term on the right-hand side to be zero, the test function must move with velocity $w-w_r$ such that \begin{equation}\label{eqn2.3} \frac{\partial\phi_{l,i}}{\partial t} + (w-w_r)\frac{\partial\phi_{l,i}}{\partial z} = 0\qquad \phi_{l,i}(z,t^{n+1}) = \beta_{l,i}(z). \end{equation} The horizontal formulation of the CDG scheme given in equation \eqref{eqn1.10} assumes a constant layer thickness $h$. For the rest of this article we relax this assumption in order to account for the varying layer thickness of the vertical ALE grid. 
Instead of solving for a thickness-weighted tracer concentration $hq(\vec x,t)$ subject to constant thickness, we must therefore integrate over the layer thickness in the vertical. Proceeding from this representation we assume a trial function expansion as \begin{equation}\label{eqn2.3.1} q_l(z,t) = \sum_ja_{l,j}(t)\beta_{l,j}(z), \end{equation} and integrating with respect to space and time gives \begin{multline}\label{eqn2.4} \sum_j\int_{h_l^{n+1}}\beta_{l,i}\beta_{l,j}\mathrm{d}za_{l,j}^{n+1} = \int_{h_l^n}(\phi_{l,i}q_l)^n\mathrm{d}z - \\ \int_{\partial h_{l}^n}\int_{t^n}^{t^{n+1}}(w-w_r)\phi_{l,i}q_{l'}|_{l-} - (w-w_r)\phi_{l,i}q_{l'}|_{l+}\mathrm{d}t\mathrm{d}z \end{multline} where $l\pm$ denote the bottom and top of the layer interface respectively (with the vertical coordinate decreasing with layer index). Note that the vertical domains differ between time levels $n$ and $n+1$ as $h_l(z,t)$ evolves. \firstRev{We will assume } that the departure regions for the top and bottom of layer \firstRev{$l$ } are determined at time level $n$, \firstRev{noting that this is a particular choice for the representation of the flux terms and is not specifically required of the algorithm. } We can express the right hand side boundary terms as a sum over intersections between the swept region of the bottom and top interfaces over a time step $\Delta t$ of $l$ and the intersecting levels $l'$ (at time level $n$), $h_{l,l',\Delta t}^n$ as \begin{equation}\label{eqn2.5} \sum_j\int_{h_l^{n+1}}\beta_{l,i}\beta_{l,j}\mathrm{d}za_{l,j}^{n+1} = \int_{h_l^n}(\phi_{l,i}q_l)^n\mathrm{d}z - \int_{h_{l,l',\Delta t}^n}(\phi_{l,i}q_{l'})^n|_{l-} - (\phi_{l,i}q_{l'})^n|_{l+}\mathrm{d}z. \end{equation} \firstRev{Equation \eqref{eqn2.5} is the vertical analogue of the horizontal CDG advection equation \eqref{eqn1.11}. 
} \secondRev{We note that the CDG scheme for vertical transport is fundamentally different from the widely used Lagrangian remap scheme \cite{Lin04}, in that the higher order moments are determined via a Galerkin projection of swept region fluxes onto these terms in the same fashion as the mean component, rather than being reconstructed from the mean values in the neighbouring layers. } \firstRev{In \cite{Lee16} the horizontal CDG scheme was shown to be locally conservative. The vertical scheme is also locally conservative, since the tracer \emph{mass} is the integral of the tracer concentration $q_l$ over the element volume, $h_l^n$ for the 1D vertical scheme. Assuming a set of coefficients $b_i$ for the test functions, such that $\sum_ib_i\beta_i(z) = 1$, \eqref{eqn2.5} gives \begin{multline}\label{eqn2.6} \sum_ib_i\sum_j\int_{h_l^{n+1}}\beta_{l,i}\beta_{l,j}\mathrm{d}za_{l,j}^{n+1} = \sum_ib_i\int_{h_l^n}(\phi_{l,i}q_l)^n\mathrm{d}z \\ - \sum_ib_i\int_{h_{l,l',\Delta t}^n}(\phi_{l,i}q_{l'})^n|_{l-} - (\phi_{l,i}q_{l'})^n|_{l+}\mathrm{d}z. \end{multline} Using the vertical analogue of \eqref{eqn1.5}, this simplifies to \begin{equation}\label{eqn2.7} \int_{h_l^{n+1}}q_l^{n+1}\mathrm{d}z - \int_{h_l^n}q_l^n\mathrm{d}z = -\int_{h_{l,l',\Delta t}^n}q_{l'}^n|_{l-} - q_{l'}^n|_{l+}\mathrm{d}z. \end{equation} The flux terms cancel with those from the neighbouring elements, since these are equal and opposite. Since $q_l^n$ is a \emph{concentration}, the tracer \emph{mass} is the integral of $q_l^n$ over element $l$, which is conserved between time levels. } \subsection{CDG advection in 3D} Having presented the formulation of the CDG advection scheme independently in the horizontal and the vertical, we now proceed to the formulation of the full 3D scheme. The 3D advection of the tracer concentration is given as \begin{equation}\label{eqn3.1} \frac{\partial hq}{\partial t} + \nabla\cdot(\vec uhq) + \frac{\partial(w-w_r)hq}{\partial z} = 0. 
\end{equation} We assume a modal Taylor series test function expansion in both the horizontal and vertical dimensions as \begin{multline}\label{eqn3.2.1} \beta_{k,l}(x,y,z) = \sum_i\beta_{k,l,i}(x,y,z) = 1 + \frac{1}{\Delta x}(x - \overline{x}) + \frac{1}{\Delta y}(y - \overline{y}) + \\ \frac{1}{2\Delta x^2}(x^2 - \overline{x^2}) + \frac{1}{\Delta x\Delta y}(xy - \overline{xy}) + \frac{1}{2\Delta y^2}(y^2 - \overline{y^2}) + ... + \\ \frac{1}{\Delta z}(z - \overline{z}) + \frac{1}{\Delta z^2}(z^2 - \overline{z^2}) + ... \end{multline} \firstRev{where $k$ and $l$ are the element indices in the horizontal and vertical dimensions respectively, and } $\Delta x$, $\Delta y$ and $\Delta z$ are length scales of the element in $x$, $y$ and $z$ respectively by which the terms are normalized in order to keep them $\mathcal{O}(1)$. The terms denoted by the overbars are the mean components defined as \begin{equation}\label{eqn1.9.3} \overline{x^my^n} = \int_{\Omega_k}x^my^n\mathrm{d}\vec x\mathrm{d}z\qquad \overline{z^m} = \int_{\Omega_k}z^m\mathrm{d}\vec x\mathrm{d}z \end{equation} which are removed from the higher order terms to ensure that they remain massless, such that a slope limiter may be applied to these without loss of conservation. We also assume a trial function expansion for $q$ in each element as \begin{equation}\label{eqn3.2.2} q_{k,l}(\vec x,z,t) = \sum_jc_{k,l,j}(t)\beta_{k,l,j}(\vec x,z). 
\end{equation} Recalling the horizontal and vertical formulations of the CDG scheme as given in \eqref{eqn1.11} and \eqref{eqn2.5} respectively gives \begin{multline}\label{eqn3.3} \sum_j\int_{h_{k,l}^{n+1}}\int_{\Omega_{k,l}}\beta_{k,l,i}\beta_{k,l,j}\mathrm{d}\vec x\mathrm{d}zc_{k,l,j}^{n+1} = \int_{h_{k,l}^n}\int_{\Omega_{k,l}}(\phi_{k,l,i}q_{k,l})^n\mathrm{d}\vec x\mathrm{d}z - \\ \sum_e\frac{V_e^{con}}{V_e^{cdg}} \sum_{k'}\int_{h_{k,k',l}^n}\int_{\Omega_{k,k',l,\Delta t}}(\phi_{k,l,i}q_{k',l})^n\mathrm{d}\vec x\mathrm{d}z - \\ \sum_{l'}\int_{h_{k,l,l',\Delta t}^n}\int_{\Omega_{k,l,l'}}(\phi_{k,l,i}q_{k,l'})^n|_{l-} - (\phi_{k,l,i}q_{k,l'})^n|_{l+}\mathrm{d}\vec x\mathrm{d}z, \end{multline} \firstRev{where $k'$ and $l'$ are the set of elements in the horizontal and vertical respectively which intersect with the swept region of element $k$, $l$ over time step $\Delta t$. } Note that the layer thicknesses are determined separately from the continuity equation at integer time steps. \secondRev{While the fluxes have been partitioned into their horizontal and vertical components in \eqref{eqn3.3}, these are still applied at the same time level. As such there is no time splitting of the horizontal and vertical advection operators, and both the horizontal and vertical flux terms project onto the three dimensional solution of the tracer concentration. } The horizontal fluxes have been normalized by the factor $V_e^{con}/V_e^{cdg}$. This represents the ratio of the volume fluxed across edge $e$ as determined from the (single moment, finite volume) continuity equation and that swept across the same edge using the CDG algorithm. The volume fluxed across the edge from the continuity equation is centered in space, given as \begin{equation} V_e^{con} = 0.5(h_{e-} + h_{e+})\Delta t\,\vec u\cdot\vec n\,d_e \end{equation} where $d_e$ is the width of the edge and $h_{e-}$, $h_{e+}$ are the thicknesses of the elements on either side of the edge. 
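The consistency normalization can be sketched as follows, with a piecewise-constant estimate of the swept volume standing in for the full swept-region integration; all numerical values and function names are illustrative assumptions:

```python
def volume_flux_continuity(h_minus, h_plus, u_n, d_e, dt):
    # Centered volume flux across edge e from the continuity equation:
    # V_e^con = 0.5 (h_{e-} + h_{e+}) dt (u . n) d_e
    return 0.5 * (h_minus + h_plus) * dt * u_n * d_e

def volume_flux_cdg(swept_areas, thicknesses):
    # Piecewise-constant estimate of the volume swept across the edge:
    # V_e^cdg = sum_{k'} area(Omega_{k,k',dt}) * h_{k'}^n
    return sum(a * h for a, h in zip(swept_areas, thicknesses))

# Illustrative values: a 2 km wide edge, ~10 m thick layers, 600 s step.
V_con = volume_flux_continuity(10.0, 10.4, u_n=0.3, d_e=2000.0, dt=600.0)
V_cdg = volume_flux_cdg([3.1e5, 0.5e5], [10.0, 10.4])

# Each CDG edge flux is multiplied by this factor so that the implicit
# CDG volume flux matches the continuity equation's explicit flux:
scale = V_con / V_cdg
```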
Unlike the continuity equation volume fluxes, the CDG swept region flux is upwinded, and may be given by a piecewise constant integration of the layer thickness over the swept regions as \begin{equation} V_e^{cdg} = \sum_{k'}\int_{\Omega_{k,k',\Delta t}}h_{k'}^n\mathrm{d}\vec x. \end{equation} This scaling of the edge fluxes ensures that the implicit volume fluxes of the CDG scheme are \emph{consistent} with respect to the explicit volume fluxes from the continuity equation. This procedure for enforcing consistency is much simpler than that required for remap schemes based on a multi-moment representation of the continuity equation \cite{Lauritzen16}. \secondRev{The relative simplicity of the consistency fix presented here is based on i. the fact that the layer thickness is represented via a single moment in each element in keeping with the finite volume formulation of the continuity equation and ii. the formulation of the CDG scheme in flux form rather than remap form, such that the edges and not the elements are traced back along characteristics. } It is worth noting that while the CDG scheme uses an upwinded flux, the volume flux used by the continuity equation is centered. This is necessary since the continuity equation must account for both the left and right gravity wave solutions of the linearized shallow water equations, whereas no such restriction is required for the CDG advection of passive tracers. While the vertical and horizontal fluxes are evaluated separately in order to avoid the need to evaluate swept region intersection in three dimensions, \eqref{eqn1.4} \firstRev{is still } satisfied in 3D as \begin{equation} \frac{\partial\phi}{\partial t} + \vec u\cdot\nabla\phi + (w-w_r)\frac{\partial\phi}{\partial z} = 0\qquad \phi(\vec x,z,t^{n+1}) = \beta(\vec x,z). 
\end{equation} \firstRev{This is to ensure that the effects of both the horizontal and vertical fluxes are accounted for in the advection of the test functions.} \section{Results} The CDG scheme has previously been verified via convergence studies for analytic solutions on planar quadrilateral and hexahedral grids \cite{Lee16}. Here we present comparisons to the existing FCT scheme in MPAS-Ocean \cite{Ringler13} for both passive advection on the sphere \cite{LSPT12} and a suite of idealized test cases with full ocean dynamics \cite{Ilicak12, Petersen15}. While the CDG scheme may be run to arbitrarily high order accuracy (provided that a quadrature rule \secondRev{and a mapping } exist to integrate the curvature of the pre-image of each edge to the desired order of accuracy), in each of the test cases presented here we use a linear basis in each dimension. \secondRev{The principal reason for this is that while higher-order mappings from the sphere to elements in the plane are relatively straightforward to construct for methods on quadrilateral tensor product elements \cite{GNQ14,Lauritzen16}, a method for doing this on Voronoi elements with an arbitrary number of sides is not known to the authors. } The second order coefficients are slope limited using either a 3D implementation of the vertex based Barth-Jespersen limiter \cite{Kuzmin10} or a simplified WENO method which uses the basis functions within each element to determine the smoothing coefficients \cite{ZS13,GNZ16}. \secondRev{As reported previously \cite{Lee16}, the CDG scheme, like other semi-Lagrangian methods, is unconditionally stable with respect to the time step. Provided that the halo size used for the parallel decomposition of the domain is sufficiently large and the characteristics for a given edge do not cross one another \cite{LR05}, then the scheme may be run with any CFL number. 
While the scheme has been run on the sphere with CFL numbers up to 2.5, in the results presented here we limit ourselves to time steps equal to those used by the dynamics $(\mathrm{CFL} < 1)$. This is because it is difficult to robustly preserve consistency with respect to the continuity equation with larger CFL numbers, due to variations in layer thickness. The scheme requires a single halo update at the end of each time step in order to ensure that the tracer fields on the boundaries are consistent for each processor; however, we note that this halo update must include all the moments and not just the mean components as is the case for the FCT scheme.} \subsection{Test case 1: passive advection} The first test case involves the passive advection of a tracer field \firstRev{with a Gaussian initial distribution } within a deformational shear flow on the sphere \cite{LSPT12}. \firstRev{Note that this configuration serves only to test the convergence of errors for the horizontal scheme on the sphere. } The $L_2$ errors are computed after 12 days when the tracer field has returned to its original position, for both the CDG scheme and the existing FCT scheme \cite{BB73,Zalesak79} within MPAS-Ocean. \begin{figure}[!hbtp] \centering \includegraphics[width=0.51\textwidth,height=0.38\textwidth,valign=c]{l2_errs_passive_advection_shear_flow_sphere.png} \includegraphics[width=0.48\textwidth,height=0.28\textwidth,valign=c]{qField_passiveSphere_0hours.png} \caption{Left: $L_2$ errors for the CDG and FCT transport schemes for passive advection for the deformational shear flow on the sphere test case. 
Right: Initial condition.} \label{Fig2} \end{figure} \begin{figure}[!hbtp] \centering \includegraphics[width=0.48\textwidth,height=0.28\textwidth]{qField_passiveSphere_6hours.png} \includegraphics[width=0.48\textwidth,height=0.28\textwidth]{qField_passiveSphere_12hours.png} \caption{Deformational shear flow advection on the sphere using CDG after 6 hours (left) and 12 hours (right).} \label{Fig3} \end{figure} As can be seen from fig. \ref{Fig2}, the CDG scheme for passive tracer transport on the sphere compares favorably to the existing FCT scheme within MPAS-Ocean. The second order CDG scheme displays an error convergence rate superior to the third order FCT scheme in both the unlimited and WENO limited case. For the subsequent active ocean test cases presented below we use the vertex based slope limiter \cite{Kuzmin10} as the WENO limiter is not able to preserve strict monotonicity. \subsection{Test case 2: lock exchange} For the second test case, an initial temperature distribution of $T=5^\circ$C on the left and $T=30^\circ$C on the right side of a box generates a pressure gradient that drives a flow of sinking cool fluid along the bottom to the right and rising warm fluid along the top to the left. The model uses a linear temperature ($T$) dependent equation of state in order to determine the density $\rho$ of the form $\rho = 1000.0 - 0.2(T - 5.0)$, from which the pressure is derived via hydrostatic balance. The initial temperature and passive tracer fields are given as $q(y) = 5.0 + 12.5(1.0 + 2.0^{-4}\tanh(y - y_0))$. \firstRev{The model has periodic boundary conditions in the $x$ dimension with just 16 hexagonal elements along the periodic channel such that the dynamics are weak in this dimension. The ALE grid is configured such that the vertical height of the elements within each column stretches uniformly with perturbations in sea surface height, which are barely perceptible in figures \ref{Fig4} and \ref{Fig5}. 
} Details of the specific geometry and model configuration can be found in \cite{Petersen15}. The resting potential energy (RPE) is determined in order to quantify the amount of spurious vertical mixing of the CDG scheme with respect to the existing FCT scheme for passive tracer transport. The RPE is computed by reordering all the elements of the domain, by descending order of density $\rho$, into a single one dimensional column and then integrating this reordered density $\rho^*$ down the column as $\mathrm{RPE} = \int_{\Omega}g\rho^*z\mathrm{d}V$ \cite{Ilicak12, Petersen15}. The RPE is computed both for the CDG and FCT schemes for passive tracer transport, as well as for the FCT derived temperature as a reference. While the passive FCT tracers are integrated using a first order forward Euler scheme, the active temperature is solved using an iterated shooting method \cite{Ringler13} in order to derive a second order scheme, which is necessary in order to ensure model stability. \begin{figure}[!hbtp] \centering \includegraphics[width=0.51\textwidth,height=0.38\textwidth,valign=c]{rpe_lock_exchange_021017.png} \includegraphics[width=0.48\textwidth,height=0.36\textwidth,valign=c]{temperature_le.png} \caption{Left: resting potential energy (RPE) with time for the lock exchange test case. Right: temperature field (active tracer) at 18 hours. Values are between $5^{\circ}$ C and $30^{\circ}$ C.} \label{Fig4} \end{figure} \begin{figure}[!hbtp] \centering \includegraphics[width=0.48\textwidth,height=0.36\textwidth]{tracer1_le.png} \includegraphics[width=0.48\textwidth,height=0.36\textwidth]{qField_le.png} \caption{Passive tracer after 18 hours using second order FCT (left) and CDG (right) advection. Values are between $3^{\circ}$ C and $30^{\circ}$ C.} \label{Fig5} \end{figure} As can be seen from fig. \ref{Fig4}, the amount of spurious vertical mixing as measured by the RPE is significantly higher for the CDG scheme than either the passive or active FCT tracers. 
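The RPE diagnostic used here can be sketched as below; the single-column reordering assumes a uniform cross-sectional area, and the element data are illustrative:

```python
def resting_potential_energy(rho, vol, area, g=9.81):
    """Reference RPE: reorder all elements into a single column of
    cross-section `area`, densest at the bottom, and integrate
    g * rho* * z over the column, with z the height of each layer's
    center above the bottom. `rho` and `vol` are per-element values."""
    pairs = sorted(zip(rho, vol), key=lambda p: -p[0])  # densest first
    rpe, z_bottom = 0.0, 0.0
    for rho_star, v in pairs:
        dz = v / area
        rpe += g * rho_star * (z_bottom + 0.5 * dz) * v
        z_bottom += dz
    return rpe

# Two equal-volume elements: the resting (sorted) state puts the dense
# one at the bottom, so the result is insensitive to input order.
rpe_a = resting_potential_energy([1000.0, 1002.0], [50.0, 50.0], area=10.0)
rpe_b = resting_potential_energy([1002.0, 1000.0], [50.0, 50.0], area=10.0)
```

Growth of this quantity in time indicates spurious vertical mixing, which is why the elevated RPE for CDG in fig. \ref{Fig4} is significant.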
This suggests that while the current limiting approach preserves monotonicity, it is excessively diffusive and new limiting strategies should be explored. This can also be seen by inspecting the tracer fields at the final time, as given in figs. \ref{Fig4} and \ref{Fig5}. \subsection{Test case 3: overflow} The third test case also involves a horizontal step in temperature (between $10^\circ$C and $20^\circ$C); however, in this case a vertical step in topography is also included, such that as the cool fluid is driven down and rightward it also sinks down the topographic slope. The model configuration is similar to that for the lock exchange test case, \firstRev{with periodic boundary conditions in the $x$ dimension and uniform stretching of the elements in the vertical in proportion to the perturbations in sea surface height due to the barotropic mode, } and can be found in \cite{Petersen15}. This test serves to demonstrate that the CDG scheme remains stable for large vertical motions along steep topography. \begin{figure}[!hbtp] \centering \includegraphics[width=0.48\textwidth,height=0.36\textwidth]{qField_of_0000.png} \includegraphics[width=0.48\textwidth,height=0.36\textwidth]{qField_of_0025.png} \caption{Tracer field $q$ at $t = 0$ hours (left), $t = 5$ hours (right) for the overflow test case \cite{Petersen15}. Values are between $10^{\circ}$ C and $20^{\circ}$ C.} \label{Fig6} \end{figure} \begin{figure}[!hbtp] \centering \includegraphics[width=0.48\textwidth,height=0.36\textwidth]{qField_of_0050.png} \includegraphics[width=0.48\textwidth,height=0.36\textwidth]{qField_of_0075.png} \caption{Tracer field $q$ at $t = 10$ hours (left), $t = 15$ hours (right) for the overflow test case \cite{Petersen15}. Values are between $10^{\circ}$ C and $20^{\circ}$ C.} \label{Fig7} \end{figure} Figures \ref{Fig6} and \ref{Fig7} show that the CDG scheme appropriately represents the flow of passive tracers along a steeply varying slope. 
The results are qualitatively similar to those previously published for the FCT scheme \cite{Petersen15}. However, the quality of the results here is somewhat misleading, since the overly diffusive solution demonstrated for the previous test case serves to stabilize the strong downward motion, whereas the less diffusive FCT scheme requires an explicit vertical viscosity in order to suppress numerical instabilities. \subsection{Test case 4: baroclinic channel} The baroclinic channel test case is initialized with vertically sloping isotherms in the meridional direction, as well as a sinusoidal temperature profile in the plane, which drive the formation of baroclinically unstable eddies \cite{Petersen15}. Unlike the previous test cases, the baroclinic channel configuration allows for significant motion in all dimensions, and so more fully supports the evolution of nonlinear momentum transport. Properly resolving the formation and transport of the resultant eddies presents a significant challenge for the CDG scheme due to the additional numerical diffusion introduced by the slope limiting.
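As a rough numeric check on the eddy length scale in this configuration, the first baroclinic deformation radius quoted below can be reproduced from the stated parameters. The top-to-bottom density difference is derived here from an assumed linear-equation-of-state thermal expansion coefficient of $0.2$ kg m$^{-3}$ per $^{\circ}$C, which is not taken from the model configuration:

```python
import math

# Rough estimate of the first baroclinic deformation radius
# L_d = N * H / (pi * |f|) for the baroclinic channel test case.
# The thermal-expansion coefficient alpha is an ASSUMED value for the
# linear equation of state; the remaining parameters are quoted in the text.

g = 9.81        # m/s^2
rho0 = 1000.0   # kg/m^3, reference density
H = 1000.0      # m, depth
f = -1.2e-4     # 1/s, Coriolis parameter
dT = 3.0        # degC, surface-to-bottom temperature difference
alpha = 0.2     # kg/m^3 per degC (assumed linear-EOS coefficient)

drho = alpha * dT                     # top-to-bottom density difference
N = math.sqrt(g * drho / (rho0 * H))  # buoyancy frequency, 1/s
L_d = N * H / (math.pi * abs(f))      # deformation radius, m

print(round(L_d / 1000.0, 1))  # prints 6.4
```

With these values the estimate lands at roughly $6.4$ km, consistent with the figure quoted in the text.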
\begin{figure}[!hbtp] \centering \includegraphics[width=0.25\textwidth,height=0.75\textwidth]{qField_day01_bc.png} \includegraphics[width=0.25\textwidth,height=0.75\textwidth]{qField_day06_bc.png} \includegraphics[width=0.25\textwidth,height=0.75\textwidth]{qField_day12_bc.png} \caption{Tracer field $q$ at $t = 0$ days (left), $t = 6$ days (center), and $t = 12$ days (right) for the baroclinic channel test case \cite{Petersen15} with channel length $400$ km and resolution $\Delta x = 2.5$ km.} \label{Fig8} \end{figure} Given a linear surface-to-bottom temperature profile with a difference of $3^\circ$C, a depth of $H=1000$ m, a reference density of $\rho_0 = 1000\ kg/m^3$ and a Coriolis parameter of $f = -1.2\times 10^{-4}s^{-1}$, and recalling the linear equation of state, the first baroclinic deformation radius is given as $L_d = \sqrt{-(g/\rho_0)\,\partial \rho/\partial z}\,H/(\pi |f|) \approx 6.4$ km. With a horizontal resolution of $\Delta x = 1$ km the resultant eddies are just within the range of resolved scales of the simulation. Therefore, the maintained presence of these eddies in the tracer field is highly sensitive to any spurious numerical diffusion due to excessive slope limiting. For this reason, the eddy field is observed to be significantly muted in the tracer field compared to the FCT advected temperature field \cite{Petersen15}, as shown in fig. \ref{Fig8}. \subsection{Test case 5: global ocean} The final test case involves the spin-up of a global ocean domain with a resolution of $120$ km. It is initialized with climatological temperature and salinity and is driven by a temporally constant surface wind profile.
In order to successfully implement the CDG scheme in this configuration, the departure and quadrature point integrations and the swept region intersections are performed on the sphere assuming great circle arcs for each of the edges, and the updated tracer coefficients for each element are solved for by projecting each cell into the plane as described in the appendix. \begin{figure}[!hbtp] \centering \includegraphics[width=0.80\textwidth,height=0.40\textwidth]{tracer1_10-08.png} \includegraphics[width=0.80\textwidth,height=0.40\textwidth]{qField_10-08.png} \caption{Passive tracer (FCT, $2^{\mathrm{nd}}$ order, top) and passive tracer (CDG, $2^{\mathrm{nd}}$ order, bottom) for global ocean after 9 months. Color bars range from $-1.8^{\circ}$ C to $30^{\circ}$ C.} \label{Fig9} \end{figure} As can be seen in fig. \ref{Fig9}, the CDG solution for the temperature field looks broadly similar to the FCT solution. However, the equatorial temperature is weaker and secondary features such as western boundary currents are less pronounced, as seen in the Gulf Stream and Kuroshio current. \section{Conclusion} We have presented an implementation of the characteristic discontinuous Galerkin (CDG) tracer transport scheme within the MPAS-Ocean model for both horizontal advection on an unstructured Voronoi grid and vertical advection on a temporally varying arbitrary Lagrangian-Eulerian (ALE) grid. The scheme has been used to model passive advection for a suite of idealized test cases, using the resting potential energy (RPE) as a measure of spurious vertical mixing. Consistency between the implicit volume flux of the CDG scheme and the explicit volume flux from the dynamics is enforced via a renormalization of the edge fluxes with respect to the volume fluxes from the continuity equation. Since the layer thickness is piecewise constant, this process is much simpler than the consistency fixers required for higher order representations of the thickness \cite{Lauritzen16}.
While the results compare favorably to the FCT scheme for passive advection with prescribed velocity on the surface of the sphere, the absence of a better limiting scheme and the need to preserve strict monotonicity lead to excessive \firstRev{diffusion}, which significantly degrades the CDG solution in 3D. \secondRev{A significant issue with the slope limiting approach used here is that the same limiting coefficient is used for moments in all dimensions for a given element, irrespective of which dimension has a non-monotone solution. } As future work, an improved limiter is required, perhaps using the recently developed anisotropic approach \cite{AKKR17}, \secondRev{so as to ensure that the limiting coefficients used to preserve monotone solutions in the vertical are not projected onto the horizontal moments and vice versa. } \section*{Acknowledgements} The authors are grateful to Drs Mark Taylor, Peter Bosler and Andrew Bradley at Sandia National Laboratory for many enlightening discussions concerning the formulation of the CDG algorithm. We would particularly like to thank Dr. Bradley for supplying the library for computing polygon intersections on the sphere. We also thank Dr. Bill Lipscombe for supplying the method for performing tangent projections from the sphere, and the two anonymous reviewers, whose helpful comments greatly improved the clarity of this article. We would also like to acknowledge the support of LANL Institutional Computing. This work was carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. This research was supported by the Office of Science (BER), U.S. Department of Energy. Los Alamos Report LA-UR-17-22608.
\section{Introduction} \label{sec:intro} Recall that a graph $\Gamma$ is called {\em $\frac{1}{2}$-arc-transitive} provided that its automorphism group ${\rm {Aut}}(\Gamma)$ acts transitively on its edge-set ${{\rm E}}(\Gamma)$ and on its vertex-set ${\rm V}(\Gamma)$ but intransitively on its arc-set ${{\rm A}}(\Gamma)$. More generally, if $G$ is a subgroup of ${\rm {Aut}}(\Gamma)$ such that $G$ acts transitively on ${{\rm E}}(\Gamma)$ and ${\rm V}(\Gamma)$ but intransitively on ${{\rm A}}(\Gamma)$, then $G$ is said to {\em act $\frac{1}{2}$-arc-transitively} on $\Gamma$ and we say that $\Gamma$ is {\em $(G,\frac{1}{2})$-arc-transitive}. To shorten notation, we shall say that a $\frac{1}{2}$-arc-transitive graph is a \emph{HAT} and that a graph admitting a $\frac{1}{2}$-arc-transitive group of automorphisms is a \emph{GHAT}. Clearly, any HAT is also a GHAT. Conversely, a GHAT is either a HAT or arc-transitive. The history of GHATs goes back to Tutte who, in his 1966 paper \cite[7.35, p.59]{tutte}, proved that every GHAT is of even valence and asked whether HATs exist at all. The first examples of HATs were discovered a few years later by Bouwer \cite{Bou}. After a short break, interest in GHATs picked up again in the 90s, largely due to a series of influential papers of Maru\v{s}i\v{c} concerning the GHATs of valence $4$ (see \cite{AlsMarNow,Mar98,MarPra,MarXu}, to list a few). For a nice survey of the topic, we refer the reader to \cite{MarSurvey}, and for an overview of some more recent results, see \cite{KutMarSpaWanXu,MarSpa}. To shorten notation further, we shall say that a connected GHAT (HAT, respectively) of valence $4$ is a $4$-GHAT ($4$-HAT, respectively). The main result of this paper is a compilation of a complete list of all $4$-GHATs with at most $1000$ vertices. 
This result was obtained indirectly using an intimate relation between $4$-GHATs and connected arc-transitive asymmetric digraphs of in- and out-valence $2$ (we shall call such digraphs $2$-ATDs for short) -- see Section~\ref{sec:ATvsGHAT} for details on this relationship. These results can be succinctly summarised as follows: \begin{theorem} There are precisely 26457 pairwise non-isomorphic $2$-ATDs on at most $1000$ vertices, and precisely 11941 $4$-GHATs on at most $1000$ vertices, of which 8695 are arc-transitive and 3246 are $\frac{1}{2}$-arc-transitive. \end{theorem} The actual lists of (di)graphs, together with a spreadsheet (in a ``comma separated values'' format) with some graph theoretical invariants, are available at \cite{online}. The rest of this section is devoted to some interesting facts gleaned from these lists. All the relevant definitions that are omitted here can be found in Section~\ref{sec:not}. In Section~\ref{sec:comp}, we explain how the lists were computed and present the theoretical background which assures that the computations were exhaustive. In Section~\ref{sec:doc}, information about the format of the files available on \cite{online} is given. We now proceed with a few comments on the census of $4$-HATs. By a {\em vertex-stabiliser} of a vertex-transitive graph or digraph $\Gamma$, we mean the stabiliser of a vertex in ${\rm {Aut}}(\Gamma)$. Even though it is known that a vertex-stabiliser of a $4$-HAT can be arbitrarily large (see \cite{DM05}), not many examples of $4$-HATs with vertex-stabilisers of order larger than $2$ were known, and all known examples had a very large number of vertices. Recently, Conder and \v{S}parl (see also \cite{ConPotSpa}) discovered a $4$-HAT on $256$ vertices with vertex-stabiliser of order $4$ and proved that this is the smallest such example. This fact is confirmed by our census; in fact, the following theorem can be deduced from the census.
\begin{theorem} \label{the:largeGv} Amongst the 3246 $4$-HATs on at most $1000$ vertices, there are seventeen with vertex-stabiliser of order $4$, three with vertex-stabiliser of order $8$, and none with larger vertex-stabilisers. The smallest $4$-HAT with vertex-stabiliser of order $4$ has order $256$ and the smallest two with vertex-stabilisers of order $8$ have $768$ vertices; the third $4$-HAT with vertex-stabiliser of order $8$ has $896$ vertices. \end{theorem} Another curiosity about $4$-HATs is that those with a non-abelian vertex-stabiliser tend to be very rare (at least amongst the ``small'' graphs). The first known $4$-HAT with a non-abelian vertex-stabiliser was discovered by Conder and Maru\v{s}i\v{c} (see~\cite{ConderMarusic}) and has $10752$ vertices. Further examples of $4$-HATs with non-abelian vertex-stabilisers were discovered recently (see \cite{ConPotSpa}), including one with a vertex-stabiliser of order $16$. However, the one on $10752$ vertices remains the smallest known example. Using our list, the following fact is easily checked. \begin{theorem} \label{the:HATnab} Every $4$-HAT with a non-abelian vertex-stabiliser has more than $1000$ vertices. \end{theorem} In fact, there are strong indications that the graph on $10752$ vertices discovered by Conder and Maru\v{s}i\v{c} is the smallest $4$-HAT with a non-abelian vertex-stabiliser. We will call a $4$-HAT with a non-solvable automorphism group a {\em non-solvable $4$-HAT}. The first known non-solvable $4$-HAT was constructed by Maru\v{s}i\v{c} and Xu \cite{MarXu}, and its order is $7!/2$. An infinite family of non-solvable $4$-HATs was constructed later by Malni\v{c} and Maru\v{s}i\v{c} \cite{MalMar99}. The smallest member of this family has an even larger order, namely $11!/2$. To the best of our knowledge, no smaller non-solvable $4$-HAT was known prior to the construction of our census.
Perhaps surprisingly, small examples of non-solvable $4$-HATs seem not to be too rare, as can be checked from our census. (The terms {\em radius}, {\em attachment number}, {\em alter-exponent}, and {\em alter-perimeter} are defined in Sections \ref{sec:alt} and \ref{sec:rad}.) \begin{theorem} There are thirty-two non-solvable $4$-HATs with at most $1000$ vertices. The smallest one, named HAT[480,44], has order $480$, girth $5$, radius $5$, attachment number $2$, alter-exponent $2$, and alter-perimeter $1$. It is non-Cayley and non-bipartite. \end{theorem} Let us now continue with a few comments on the census of $2$-ATDs. All the undefined notions mentioned in the theorems below are explained in Sections~\ref{sec:not}, \ref{sec:alt} and \ref{sec:rad}. It is not surprising that, apart from the generalised wreath digraphs (see Section~\ref{sec:wreath} for the definition), very few of the $2$-ATDs on at most $1000$ vertices are $2$-arc-transitive. In fact, the following can be deduced from the census. \begin{theorem} Out of the 26457 $2$-ATDs on at most $1000$ vertices, 961 are generalised wreath digraphs. Of the remaining 25496, only 1199 are $2$-arc-transitive (the smallest having order $18$), only 255 are $3$-arc-transitive (the smallest having order $42$), only 61 are $4$-arc-transitive (the smallest having order $90$), and only 5 are $5$-arc-transitive (the smallest two having order $640$); none of them is $6$-arc-transitive. \end{theorem} Note that the non-existence of a $6$-arc-transitive non-generalised-wreath $2$-ATD on at most $1000$ vertices follows from a more general result (see Corollary~\ref{cor:genlost}). Recall that there is no $4$-HAT on at most $1000$ vertices with a non-abelian vertex-stabiliser (Theorem~\ref{the:HATnab}). Consequently (see Section~\ref{sec:ATvsGHAT}), every $2$-ATD on at most $1000$ vertices with a non-abelian vertex-stabiliser has an arc-transitive underlying graph; and there are indeed such examples. 
In fact, the following holds (see Section~\ref{sec:def} for the definition of {\em self-opposite}). \begin{theorem} There are precisely forty-five $2$-ATDs on at most $1000$ vertices with a non-abelian vertex-stabiliser. They are all self-opposite, at least $3$-arc-transitive, have non-solvable automorphism groups, and radius $3$. The smallest of these digraphs has order $42$, and the smallest that is $4$-arc-transitive has order $90$. There are no $5$-arc-transitive $2$-ATDs with a non-abelian vertex-stabiliser and order at most $1000$. \end{theorem} If a $2$-ATD is self-opposite, then the isomorphism between the digraph and its opposite digraph is an automorphism of the underlying graph, making the underlying graph arc-transitive. Hence, self-opposite $2$-ATDs always yield arc-transitive $4$-GHATs. However, the converse is not always true: there are $2$-ATDs that are not self-opposite, but have an arc-transitive underlying graph. In this case, the index of the automorphism group of the $2$-ATD in the automorphism group of its underlying graph must be larger than $2$ (for otherwise the former would be normal in the latter and thus any automorphism of the underlying graph would either preserve the arc-set of the digraph, or map it to the arc-set of the opposite digraph). It is perhaps surprising that there are not many small examples of such behaviour. \begin{theorem} There are precisely fifty-two $2$-ATDs on at most $1000$ vertices that are not self-opposite but have an arc-transitive underlying graph. The smallest two have order $21$. None of these digraphs is $2$-arc-transitive. The index of the automorphism group of these digraphs in the automorphism group of the underlying graphs is always $8$. \end{theorem} \section{Notation and definitions} \label{sec:not} \subsection{Digraphs and graphs} \label{sec:def} A \emph{digraph} is an ordered pair $(V,A)$ where $V$ is a finite non-empty set and $A\subseteq V \times V$ is a binary relation on $V$. 
We say that $(V,A)$ is \emph{asymmetric} if $A$ is asymmetric, and we say that $(V,A)$ is a \emph{graph} if $A$ is irreflexive and symmetric. If $\Gamma=(V,A)$ is a digraph, then we shall refer to the set $V$ and the relation $A$ as the {\em vertex-set} and the {\em arc-set} of $\Gamma$, and denote them by ${\rm V}(\Gamma)$ and ${{\rm A}}(\Gamma)$, respectively. Members of $V$ and $A$ are called {\em vertices} and {\em arcs}, respectively. If $(u,v)$ is an arc of a digraph $\Gamma$, then $u$ is called the {\em tail}, and $v$ the {\em head} of $(u,v)$. If $\Gamma$ is a graph, then the unordered pair $\{u,v\}$ is called an {\em edge} of $\Gamma$ and the set of all edges of $\Gamma$ is denoted ${{\rm E}}(\Gamma)$. If $\Gamma$ is a digraph, then the {\em opposite digraph} ${\Gamma^{\rm{opp}}}$ has vertex-set ${\rm V}(\Gamma)$ and arc-set $\{(v,u) : (u,v) \in {{\rm A}}(\Gamma)\}$. The {\em underlying graph} of $\Gamma$ is the graph with vertex-set ${\rm V}(\Gamma)$ and with arc-set ${{\rm A}}(\Gamma) \cup {{\rm A}}({\Gamma^{\rm{opp}}})$. A digraph is called {\em connected} provided that its underlying graph is connected. Let $v$ be a vertex of a digraph $\Gamma$. Then the {\em out-neighbourhood} of $v$ in $\Gamma$, denoted by $\Gamma^+(v)$, is the set of all vertices $u$ of $\Gamma$ such that $(v,u) \in {{\rm A}}(\Gamma)$, and similarly, the {\em in-neighbourhood} $\Gamma^-(v)$ is defined as the set of all vertices $u$ of $\Gamma$ such that $(u,v) \in {{\rm A}}(\Gamma)$. Further, we let $\mathop{\rm val}^+(v) = |\Gamma^+(v)|$ and $\mathop{\rm val}^-(v) = |\Gamma^-(v)|$ be the {\em out-valence} and {\em in-valence} of $\Gamma$, respectively. If there exists an integer $r$ such that $\mathop{\rm val}^+(v) = \mathop{\rm val}^-(v) = r$ for every $v\in {\rm V}(\Gamma)$, then we say that $\Gamma$ is {\em regular} of {\em valence} $r$, or simply that $\Gamma$ is an {\em $r$-valent} digraph. 
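The notions just defined translate directly into operations on arc-sets. A minimal sketch, representing a digraph by its set of ordered pairs; the example digraph (a directed $3$-cycle) is illustrative:

```python
# Sketch of the basic digraph notions defined above: a digraph is a
# pair (V, A) with A a set of ordered pairs.  The example is illustrative.

def opposite(A):
    """Arc-set of the opposite digraph: every arc reversed."""
    return {(v, u) for (u, v) in A}

def underlying(A):
    """Arc-set of the underlying graph: A together with its reverses."""
    return A | opposite(A)

def is_asymmetric(A):
    """No arc occurs together with its reverse."""
    return all((v, u) not in A for (u, v) in A)

def out_neighbourhood(A, v):
    return {head for (tail, head) in A if tail == v}

def in_neighbourhood(A, v):
    return {tail for (tail, head) in A if head == v}

# A directed 3-cycle: asymmetric, and regular of valence 1.
A = {(0, 1), (1, 2), (2, 0)}
```

Here the directed $3$-cycle is asymmetric, its opposite digraph is the reversed cycle, and its underlying graph is the undirected $3$-cycle, in which every arc occurs together with its reverse.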
An $s$-arc of a digraph $\Gamma$ is an $(s+1)$-tuple $(v_0,v_1, \ldots, v_s)$ of vertices of $\Gamma$, such that $(v_{i-1},v_i)$ is an arc of $\Gamma$ for every $i\in \{1,\ldots,s\}$ and $v_{i-1}\not=v_{i+1}$ for every $i\in \{1,\ldots,s-1\}$. If $x=(v_0,v_1, \ldots, v_s)$ is an $s$-arc of $\Gamma$, then every $s$-arc of the form $(v_1, v_2, \ldots, v_s,w)$ is called a {\em successor} of $x$. An \emph{automorphism} of a digraph $\Gamma$ is a permutation of ${\rm V}(\Gamma)$ which preserves the arc-set ${{\rm A}}(\Gamma)$. Let $G$ be a subgroup of the full automorphism group ${\rm {Aut}}(\Gamma)$ of $\Gamma$. We say that $\Gamma$ is \emph{$G$-vertex-transitive} or \emph{$G$-arc-transitive} provided that $G$ acts transitively on ${\rm V}(\Gamma)$ or ${{\rm A}}(\Gamma)$, respectively. Similarly, we say that $\Gamma$ is \emph{$(G,s)$-arc-transitive} if $G$ acts transitively on the set of $s$-arcs of $\Gamma$. If $\Gamma$ is a graph, we say that it is \emph{$G$-edge-transitive} provided that $G$ acts transitively on ${{\rm E}}(\Gamma)$. When $G={\rm {Aut}}(\Gamma)$, the prefix $G$ in the above notations is usually omitted. If $\Gamma$ is a digraph and $v\in {\rm V}(\Gamma)$, then a {\em $v$-shunt} is an automorphism of $\Gamma$ which maps $v$ to an out-neighbour of $v$. \subsection{From $4$-GHATs to $2$-ATDs and back} \label{sec:ATvsGHAT} If $\Gamma$ is a connected $4$-valent $(G,\frac{1}{2})$-arc-transitive graph, then $G$ has two orbits on the arc-set of $\Gamma$, opposite to each other, each orbit having the property that each vertex of $\Gamma$ is the head of precisely two arcs, and also the tail of precisely two arcs of the orbit. By taking any of these two orbits as an arc-set of a digraph on the same vertex-set, one thus obtains a 2-ATD whose underlying graph is $\Gamma$, and admitting $G$ as an arc-transitive group of automorphisms. Conversely, the underlying graph of a $G$-arc-transitive $2$-ATD is a $(G,\frac{1}{2})$-arc-transitive $4$-GHAT. 
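The valence bookkeeping behind this correspondence can be illustrated numerically on the wreath digraph $\vec{{\rm W}}_3$ (defined in the next subsection): every vertex has in- and out-valence $2$, so the underlying graph is $4$-valent. This is a sketch only, not the census code:

```python
# Illustrative check of the 2-ATD <-> 4-GHAT correspondence on the
# wreath digraph W_3 (vertices Z_3 x Z_2, arcs ((i,a),(i+1,b))).
# Only the valence bookkeeping is checked here.

def wreath_digraph(n):
    return {((i, a), ((i + 1) % n, b))
            for i in range(n) for a in (0, 1) for b in (0, 1)}

def valences(A, v):
    out_val = sum(1 for (t, h) in A if t == v)
    in_val = sum(1 for (t, h) in A if h == v)
    return in_val, out_val

A = wreath_digraph(3)
V = {v for arc in A for v in arc}
U = A | {(h, t) for (t, h) in A}  # arcs of the underlying graph
```

Each of the $6$ vertices has in- and out-valence $2$ in $\vec{{\rm W}}_3$, hence valence $4$ in the underlying graph, exactly as the correspondence requires.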
In this sense the study of $4$-GHATs is equivalent to the study of $2$-ATDs. In Section~\ref{sec:comp}, we explain how a complete list of all $2$-ATDs on at most $1000$ vertices was obtained. The above discussion shows how this yields a complete list of all $4$-GHATs on at most $1000$ vertices. \subsection{Generalised wreath digraphs} \label{sec:wreath} Let $n$ be an integer with $n\ge 3$, let $V=\ZZ_n\times \ZZ_2$, and let $A=\{ ((i,a),(i+1,b)) : i \in \ZZ_n, a,b\in \ZZ_2\}$. The asymmetric digraph $(V,A)$ is called a {\em wreath digraph} and denoted by $\vec{{\rm W}}_n$. If $\Gamma$ is a digraph and $r$ is a positive integer, then the {\em $r$-th partial line digraph} of $\Gamma$, denoted ${\rm Pl}^r(\Gamma)$, is the digraph with vertex-set equal to the set of $r$-arcs of $\Gamma$ and with $(x,y)$ being an arc of ${\rm Pl}^r(\Gamma)$ whenever $y$ is a successor of $x$. If $r=0$, then we let ${\rm Pl}^r(\Gamma)=\Gamma$. Let $r$ be a positive integer. The $(r-1)$-th partial line digraph ${\rm Pl}^{r-1}(\vec{{\rm W}}_n)$ of the wreath digraph $\vec{{\rm W}}_n$ is denoted by $\vec{{\rm W}}(n,r)$ and called a {\em generalised wreath digraph}. Generalised wreath digraphs were first introduced in \cite{PraHATD}, where $\vec{{\rm W}}(n,r)$ was denoted $C_n(2,r)$. It was proved there that ${\rm {Aut}}(\vec{{\rm W}}(n,r)) \cong C_2\wr C_n$ and that ${\rm {Aut}}(\vec{{\rm W}}(n,r))$ acts transitively on the $(n-r)$-arcs but not on the $(n-r+1)$-arcs of $\vec{{\rm W}}(n,r)$~\cite[Theorem 2.8]{PraHATD}. In particular, $\vec{{\rm W}}(n,r)$ is arc-transitive if and only if $n\ge r+1$. Note that $|{\rm V}(\vec{{\rm W}}(n,r))| = n2^{r}$, and thus $|{\rm {Aut}}(\vec{{\rm W}}(n,r))_v| = n2^n/n2^{r} = 2^{n-r}$. The underlying graph of a generalised wreath digraph will be called a {\em generalised wreath graph}. \subsection{Coset digraphs} Let $G$ be a group generated by a core-free subgroup $H$ and an element $g$ with $g^{-1} \not\in HgH$. 
One can construct the {\em coset digraph}, denoted ${\rm {Cos}}(G,H,g)$, whose vertex-set is the set $G/H$ of right cosets of $H$ in $G$, and where $(Hx,Hy)$ is an arc if and only if $yx^{-1} \in HgH$. Note that the condition $g^{-1} \not\in HgH$ guarantees that the arc-set is an asymmetric relation. Moreover, since $G=\langle H,g\rangle$, the digraph ${\rm {Cos}}(G,H,g)$ is connected. The digraph ${\rm {Cos}}(G,H,g)$ is $G$-arc-transitive (with $G$ acting upon $G/H$ by right multiplication), and hence ${\rm {Cos}}(G,H,g)$ is a $G$-arc-transitive and $G$-vertex-transitive digraph with $g$ being a $v$-shunt for the vertex $v=H$. On the other hand, it is folklore that every such digraph arises as a coset digraph. \begin{lemma} \label{lem:coset} If $\Gamma$ is a connected $G$-arc-transitive and $G$-vertex-transitive digraph, $v$ is a vertex of $\Gamma$, and $g$ is a $v$-shunt contained in $G$, then $\Gamma \cong {\rm {Cos}}(G,G_v,g)$. \end{lemma} \section{Constructing the census} \label{sec:comp} If $\Gamma$ is a $G$-vertex-transitive digraph with $n$ vertices, then $|G| = n|G_v|$. If one wants to use the coset digraph construction to obtain all $2$-ATDs on $n$ vertices, one thus needs to consider all groups $G$ of order $n|G_v|$ that can act as arc-transitive groups of $2$-ATDs. In order for this approach to be practical, two issues must be resolved: First, one must get some control over $|G|$ and thus over $|G_v|$. (Recall that in $\vec{{\rm W}}(n,r)$, $|G_v|$ can grow exponentially with $|{\rm V}(\vec{{\rm W}}(n,r))|$, as $n\to \infty$ and $r$ is fixed). Second, one must obtain enough structural information about $G$ to be able to construct all possibilities. Fortunately, both of these issues were resolved successfully. The problem of bounding $|G_v|$ was resolved in a recent paper \cite{genlost} and details can be found in Section~\ref{sec:bound}.
The second problem was dealt with in \cite{MarNed3}, and later, in greater generality in \cite{PotVer} (both of these papers rely heavily on a group-theoretical result of Glauberman \cite{Glaub2}); the summary of relevant results is given in Section~\ref{sec:type}. \subsection{Bounding the order of the vertex-stabiliser} \label{sec:bound} The crucial result that made our compilation of a complete census of all small $2$-ATDs possible is Theorem~\ref{thm:genlost}, stated below, which shows that the generalised wreath digraphs (defined in Section~\ref{sec:wreath}) are very special in the sense of having large vertex-stabilisers. In fact, together with the correspondence described in Section~\ref{sec:ATvsGHAT}, \cite[Theorem~9.1]{genlost} has the following corollary: \begin{theorem} \label{thm:genlost} Let $\Gamma$ be a $G$-arc-transitive $2$-ATD on at most $m$ vertices and let $t$ be the largest integer such that $m> t 2^{t+2}$. Then one of the following occurs: \begin{enumerate} \item $\Gamma\cong \vec{{\rm W}}(n,r)$ for some $n\ge 3$ and $1\le r \le n-1$, \item $|G_v| \le \max\{16,2^t\}$, \item $(\Gamma,G)$ appears in the last line of \cite[Table~5]{genlost}. In particular, $|{\rm V}\Gamma|=8100$. \end{enumerate} \end{theorem} The following is an easy corollary: \begin{corollary} \label{cor:genlost} Let $\Gamma$ be a $G$-arc-transitive $2$-ATD on at most $1000$ vertices. Then either $|G_v| \le 32$ or $\Gamma\cong \vec{{\rm W}}(n,r)$ for some $n\ge 3$ and $1\le r \le n-1$. \end{corollary} \subsection{Structure of the vertex-stabiliser} \label{sec:type} \begin{definition}\label{defdef} Let $s$ and $\alpha$ be positive integers satisfying $\frac{2}{3}s\le \alpha \le s$, and let $c$ be a function assigning a value $c_{i,j}\in \{0,1\}$ to each pair of integers $i,j$ with $\alpha \le j \le s-1$ and $1\le i \le 2\alpha-2s+j+1$. 
Let $A_{s,\alpha}^c$ be the group generated by $\{x_0, x_1, \ldots, x_{s-1}, g\}$ and subject to the defining relations: \begin{itemize} \item $x_0^2 = x_1^2 = \cdots = x_{s-1}^2 = 1$; \item $x_i^g=x_{i+1}$ for $i\in\{0,1,\ldots,s-2\}$; \item if $j < \alpha$, then $[x_0,x_j] = 1$; \item if $j\ge \alpha$, then $[x_0,x_j] = x_{s-\alpha}^{c_{1,j}}\, x_{s-\alpha+1}^{c_{2,j}}\,\cdots\, x_{j-s+\alpha}^{c_{2\alpha-2s+j+1,j}}$. \end{itemize} \end{definition} Furthermore, let ${\mathcal{A}}_{s,\alpha}$ be the family of all groups $A_{s,\alpha}^c$ for some $c$. It was proved in \cite{MarNed3} (see also \cite{PotVer}) that every group $G$ acting arc-transitively on a $2$-ATD is isomorphic to a quotient of some $A_{s,\alpha}^c$. More precisely, the following can be deduced from \cite{MarNed3} or \cite{PotVer}. \begin{theorem} \label{thm:structure} Let $\Gamma$ be a $G$-arc-transitive $2$-ATD, let $v\in {\rm V}(\Gamma)$ and let $s$ be the largest integer such that $G$ acts transitively on the set of $s$-arcs of $\Gamma$. Then there exists an integer $\alpha$ satisfying $\frac{2}{3}s\le \alpha \le s$, a function $c$ as in Definition~\ref{defdef}, and an epimorphism $\wp\colon A_{s,\alpha}^c \to G$, which maps the group $\langle x_0, \ldots, x_{s-1}\rangle$ isomorphically onto $G_v$ and the generator $g$ to some $v$-shunt in $G$. In particular, $|G_v|=2^s$. \end{theorem} In this case, we will say that $(\Gamma,G)$ is of {\em type} $A_{s,\alpha}^c$, and call the group $A_{s,\alpha}^c$ the {\em universal group} of the pair $(\Gamma,G)$. For $s$, $\alpha$, and a function $c$ satisfying the conditions of Definition~\ref{defdef}, let $c'$ be the function defined by $c'_{i,j} = c_{2\alpha-2s+j+2\,-\,i,j}$. The relationship between $c$ and $c'$ can be visualised as follows: if one fixes the index $j$ and views the function $i\mapsto c_{i,j}$ as the sequence $[c_{1,j}, c_{2,j}, \ldots, c_{2\alpha-2s+j+1,j}]$, then the sequence for $c'$ is obtained by reversing the one for $c$. 
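A small sketch of this index reversal; the dictionary encoding of $c$ is an illustrative choice:

```python
# Sketch of the reversal c -> c' described above: for fixed j the values
# c_{1,j}, ..., c_{L,j} with L = 2*alpha - 2*s + j + 1 form a sequence,
# and c'_{i,j} = c_{L+1-i, j} reverses it.  The encoding is illustrative.

def reverse_type(c, s, alpha):
    """c maps pairs (i, j) to 0/1 for alpha <= j <= s-1; return c'."""
    cp = {}
    for j in range(alpha, s):
        L = 2 * alpha - 2 * s + j + 1  # number of valid indices i
        for i in range(1, L + 1):
            cp[(i, j)] = c[(L + 1 - i, j)]
    return cp

# For s = 5, alpha = 4 the only relevant j is 4, with L = 3:
c = {(1, 4): 1, (2, 4): 0, (3, 4): 0}
```

Reversing twice recovers $c$, mirroring the fact that passing to the reverse type is an involution.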
If ${\tilde{G}} = A_{s,\alpha}^c$ then we denote the {\em reverse type} $A_{s,\alpha}^{c'}$ by ${\tilde{G}}^{{\rm opp}}$. Observe that if $(\Gamma,G)$ is of type ${\tilde{G}}$, then $({\Gamma^{\rm{opp}}},G)$ is of type ${\tilde{G}}^{{\rm opp}}$. A class of groups, obtained from ${\mathcal{A}}_{s,\alpha}$ by taking only one group in each pair $\{{\tilde{G}},{\tilde{G}}^{{\rm opp}}\}$, ${\tilde{G}}\in {\mathcal{A}}_{s,\alpha}$, will be denoted ${\mathcal{A}}_{s,\alpha}^{{\rm red}}$. (Note that some groups ${\tilde{G}}$ might have the property that ${\tilde{G}} = {\tilde{G}}^{{\rm opp}}$.) In view of Corollary~\ref{cor:genlost}, we shall be mainly interested in the universal groups $A_{s,\alpha}^c$ with $s\le 5$ (as, excluding generalised wreath digraphs, these are the only types of $2$-ATDs of order at most $1000$). We list the relevant classes ${\mathcal{A}}_{s,\alpha}^{\rm{red}}$ for $s\le 5$ explicitly in Table~\ref{tab:1}. Groups in ${\mathcal{A}}_{s,\alpha}^{\rm{red}}$, for a fixed $s$, will be named $A_s^i$, where $i$ is a positive integer; groups with larger $\alpha$ are indexed with smaller $i$. Also, the generators $x_0, x_1, x_2, x_3$, and $x_4$ will be denoted $a$, $b$, $c$, $d$, and $e$, respectively.
\begin{table}[hhh] \begin{center} \begin{small} \begin{tabular}{|c|c|} \hline \phantom{$\overline{\overline{G_j^G}}$} name & $\tilde{G}$ \\ \hline\hline $A_1^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,g \mid a^2 \rangle$ \\ \end{tabular} \\ \hline $A_2^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,g \mid a^2,b^2,a^gb, [a,b] \rangle$ \\ \end{tabular} \\ \hline $A_3^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,g \mid a^2,b^2,c^2,a^gb,b^gc,[a,b],[a,c] \rangle$ \\ \end{tabular} \\ \hline $A_3^2$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,g \mid a^2,b^2,c^2,a^gb,b^gc,[a,b],[a,c]b \rangle$ \\ \end{tabular} \\ \hline $A_4^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,g \mid a^2,b^2,c^2,d^2,a^gb, b^gc, c^gd, [a,b],[a,c],[a,d] \rangle$ \\ \end{tabular} \\ \hline $A_4^2$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,g \mid a^2,b^2,c^2,d^2,a^gb, b^gc, c^gd, [a,b],[a,c],[a,d]b \rangle$ \\ \end{tabular} \\ \hline $A_4^3$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,g \mid a^2,b^2,c^2,d^2,a^gb, b^gc, c^gd, [a,b],[a,c],[a,d]bc \rangle$ \\ \end{tabular} \\ \hline $A_5^1$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e] \rangle$ \\ \end{tabular} \\ \hline $A_5^2$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]b \rangle$ \\ \end{tabular} \\ \hline $A_5^3$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]c \rangle$ \\ \end{tabular} \\ \hline $A_5^4$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid 
a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]bc \rangle$ \\ \end{tabular} \\ \hline $A_5^5$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]bd \rangle$ \\ \end{tabular} \\ \hline $A_5^6$ & \begin{tabular}{l} \phantom{$\overline{\overline{G_j^G}}$} $\langle a,b,c,d,e,g \mid a^2,b^2,c^2,d^2,e^2,d^2,a^gb,b^gc,c^gd,d^ge,[a,b],[a,c],[a,d],[a,e]bcd \rangle$ \\ \end{tabular} \\ \hline \end{tabular} \end{small} \caption{Universal groups of $2$-ATDs with $|\tilde{G}_v| \le 32$} \label{tab:1} \end{center} \end{table} \subsection{The algorithm and its implementation} We now have all the tools required to present a practical algorithm that takes an integer $m$ as input and returns a complete list of all $2$-ATDs on at most $m$ vertices (see Algorithm~1). It is based on the fact that every such digraph can be obtained as a coset digraph of some group $G$ (see Lemma~\ref{lem:coset}), and that $G$ is in fact an epimorphic image of some group $A_{s,\alpha}^c$ (see Theorem~\ref{thm:structure}) with $G_v$ and the shunt being the corresponding images of $\langle x_0, \ldots, x_{s-1}\rangle$ and $g$ in $A_{s,\alpha}^c$. Moreover, if $\Gamma$ is not a generalised wreath digraph or the exceptional digraph on $8100$ vertices mentioned in part 3 of Theorem~\ref{thm:genlost}, then the parameter $s$ satisfies $s2^{s+2}<m$, and the order of the epimorphic image $G$ is bounded by $2^s m$ (see Theorem~\ref{thm:genlost}). The algorithm thus basically boils down to the task of finding normal subgroups of bounded index in the finitely presented groups $A_{s,\alpha}^c$. 
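The bound on $s$ (and hence on the index $2^s m$ of the normal subgroups searched for) can be checked numerically. A minimal sketch of the computation of $t$ from Theorem~\ref{thm:genlost}:

```python
# Sketch of the stabiliser bound used by the algorithm: t is the largest
# integer with m > t * 2^(t+2); away from the generalised wreath digraphs
# and the one exceptional pair, |G_v| <= max(16, 2^t).

def stabiliser_bound(m):
    t = 1
    while m > (t + 1) * 2 ** (t + 3):  # while t+1 still satisfies the bound
        t += 1
    return t, max(16, 2 ** t)
```

For the census bound $m = 1000$ this gives $t = 5$, so $|G_v| \le 32$ and the normal subgroups searched for have index at most $2^5 \cdot 1000$, matching Corollary~\ref{cor:genlost}.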
\begin{algorithm} \begin{algorithmic} \label{alg:main} \REQUIRE positive integer $m$ \ENSURE ${\mathcal{D}} = \{ \Gamma : \Gamma$ is a 2-ATD, $|V(\Gamma)| \le m\}$ \STATE Let $t$ be the largest integer such that $m> t 2^{t+2}$; \STATE Let ${\mathcal{D}}$ be the list of all arc-transitive generalised wreath digraphs on at most $m$ vertices; \STATE If $m\ge 8100$, add to ${\mathcal{D}}$ the exceptional digraph $\Gamma$ on $8100$ vertices, mentioned in part 3 of Theorem~\ref{thm:genlost}; \FOR{$s\in\{1,\ldots,\max\{4,t\}\}$} \FOR{$\alpha \in \{ \lceil\frac{2}{3}s\rceil, \lceil\frac{2}{3}s\rceil+1, \ldots, s \}$} \FOR{${\tilde{G}} \in {\mathcal{A}}_{s,\alpha}^{{\rm red}}$} \STATE Let ${\mathcal{N}}$ be the set of all normal subgroups of ${\tilde{G}}$ of index at most $2^sm$; \FOR{$N \in {\mathcal{N}}$} \STATE Let $G:={\tilde{G}}/N$ and let $\wp \colon {\tilde{G}} \to G$ be the quotient projection; \STATE Let $H:=\wp(\langle x_0, \ldots, x_{s-1}\rangle)$; \IF{$H$ is core-free in $G$ \AND $|H| = 2^s$ \AND $\wp(g)^{-1} \not \in H\wp(g)H$} \STATE Let $C :=\cos(G,H,\wp)$; \FOR{$\Gamma\in \{C,C^{\rm{opp}}\}$} \IF{$\Gamma$ is not isomorphic to any of the digraphs in ${\mathcal{D}}$} \STATE add $\Gamma$ to the list ${\mathcal{D}}$; \ENDIF \ENDFOR \ENDIF \ENDFOR \ENDFOR \ENDFOR \ENDFOR \end{algorithmic}\caption{~$2$-ATDs on at most $m$ vertices.} \end{algorithm} Practical implementations of this algorithm have several limitations. First, the best known algorithm for finding normal subgroups of low index in a finitely presented group is an algorithm due to Firth and Holt~\cite{Firth}. The only publicly available implementation is the {\tt LowIndexNormalSubgroups} routine in {\sc Magma} \cite{Magma}, and the most recent version allows one to compute only the normal subgroups of index at most $5\cdot 10^5$; hence only automorphism groups of order at most $5\cdot 10^5$ can possibly be obtained in this way.
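The coset digraph construction $\cos(G,H,\cdot)$ at the heart of the algorithm is easy to illustrate on a toy example. The following Python sketch is our own (the choice of $G={\rm Sym}(3)$, the subgroup, the shunt, and all helper names are ours; the census itself was computed in {\sc Magma}); it builds the digraph with vertex set $G/H$ and arc set $\{(xH,xgH) : x \in G\}$.

```python
from itertools import permutations

# Toy sketch of the coset digraph Cos(G, H, g): vertices are the left cosets
# of H in G, with an arc from xH to xgH for every x in G.
# The example group is ours; the census computations themselves used Magma.

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

G = [tuple(p) for p in permutations(range(3))]  # G = Sym(3)
H = [(0, 1, 2), (1, 0, 2)]                      # H = <(0 1)>, core-free in G
g = (1, 2, 0)                                   # the 3-cycle (0 1 2) as "shunt"

def coset(x):
    """Left coset xH, as a frozenset usable as a vertex label."""
    return frozenset(compose(x, h) for h in H)

vertices = {coset(x) for x in G}
arcs = {(coset(x), coset(compose(x, g))) for x in G}

print(len(vertices), len(arcs))   # 3 vertices, 6 arcs
```

In this small example every vertex has out-valence $|HgH|/|H| = 2$, so the resulting digraph is 2-valent.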
More importantly, even when only normal subgroups of relatively small index need to be computed, some finitely presented groups are computationally difficult. For example, finding all normal subgroups of index at most $2048$ of the group $A_1^1\cong C_2*C_\infty$ seems to represent a considerable challenge for the {\tt LowIndexNormalSubgroups} routine in {\sc Magma}. In order to overcome this problem, we have used a recently computed catalogue of all $(2,*)$-groups of order at most $6000$ \cite{2star}, where by a {\em $(2,*)$-group} we mean any group generated by an involution $x$ and one other element $g$. Since $A_1^1$ is a $(2,*)$-group and every non-cyclic quotient of a $(2,*)$-group is also a $(2,*)$-group, this catalogue can be used to obtain all the quotients of $A_1^1$ of order up to $6000$. Consequently, all $2$-ATDs admitting an arc-regular group of automorphisms of order at most $3000$ can be obtained. Similarly, since $A_2^1$ is also a $(2,*)$-group, we can use this catalogue to obtain all the $2$-ATDs of order at most $1500$ admitting an arc-transitive group $G$ with $|G_v|=4$. Like $A_1^1$ and $A_2^1$, the groups with $\langle x_0, \ldots, x_{s-1}\rangle$ abelian (namely those with $\alpha=s$ and $c_{i,j}=0$ for all $i,j$) are also computationally very difficult. One can make the task easier by dividing it into cases, where the order of $g$ is fixed in each case. Since $g$ represents a shunt, it can be proved that its order cannot exceed the order of the digraph (see, for example, \cite[Lemma 13]{cubiccensus}). Cases can then be run in parallel on a multi-core machine. \section{The census and accompanying data} \label{sec:doc} Using Algorithm~1, we found that there are exactly 26457 $2$-ATDs of order up to $1000$. Following the recipe explained in Section~\ref{sec:ATvsGHAT}, we have also computed all the $4$-GHATs, which we split into two lists: $4$-HATs and arc-transitive $4$-GHATs.
The data about these graphs, together with {\sc Magma} code that generates them, is available on-line at \cite{online}. The package contains ten files. The file ``Census-ATD-1k-README.txt'' is a text file containing information similar to the information in this section. The remaining nine files come in groups of three, one group for each of the three lists ($2$-ATDs, arc-transitive $4$-GHATs, $4$-HATs). In each group, there are a $*$.mgm file, a $*$.txt file and a $*$.csv file. The $*$.mgm file contains {\sc Magma} code that generates the corresponding digraphs. After loading the file in {\sc Magma}, a double sequence is generated (named either ATD, GHAT, or HAT, depending on the file). The length of each double sequence is $1000$ and the $n$-th component of the sequence is the sequence of all the corresponding digraphs of order $n$, with the exception of the generalised wreath digraphs. Thus, ATD[32,2] will return the second of the four non-generalised-wreath $2$-ATDs on $32$ vertices (the ordering of the digraphs in the sequence ATD[32] is arbitrary). In order to include the generalised wreath digraphs into the corresponding sequence, one can call the procedure {\tt AddGWD($\sim$ATD,GWD)} in the case of the $2$-ATDs, or {\tt AddGWG($\sim$GHAT,GWG)} in the case of the $4$-GHATs (note that a generalised wreath graph is never $\frac{1}{2}$-arc-transitive). The $*$.txt file contains the list of neighbours of each digraph. This file is needed when the $*$.mgm file is loaded into {\sc Magma}, but, being an ASCII file, it can also be used by other computer systems to reconstruct the digraphs. For the details of the format, see the ``README'' file. Finally, the $*$.csv file is a ``comma separated values'' file representing a spreadsheet containing some precomputed graph invariants. We shall first introduce some of these invariants and then discuss each $*$.csv file separately. \subsection{Walks and cycles} Let $\Gamma$ be a digraph.
A {\em walk} of length $n$ in $\Gamma$ is an $(n+1)$-tuple $(v_0,v_1,\ldots,v_n)$ of vertices of $\Gamma$ such that, for any $i\in\{1,\ldots,n\}$, either $(v_{i-1},v_i)$ or $(v_i,v_{i-1})$ is an arc of $\Gamma$. The walk is {\em closed} if $v_0=v_n$ and {\em simple} if the vertices $v_i$ are pairwise distinct (with the possible exception of the first and the last vertex when the walk is closed). A closed simple walk in $\Gamma$ is called a {\em cyclet}. The {\em inverse} of a cyclet $(v_0, \ldots, v_{n-1},v_0)$ is the cyclet $(v_0, v_{n-1}, \ldots, v_1,v_0)$, and a cyclet $(v_0, \ldots, v_{n-1},v_0)$ is said to be a {\em shift} of a cyclet $(u_0,\ldots,u_{n-1},u_0)$ provided that there exists $k\in \ZZ_n$ such that $u_i = v_{i+k}$ for all $i\in \ZZ_n$. Two cyclets $W$ and $U$ are said to be {\em congruent} provided that $W$ is a shift of either $U$ or the inverse of $U$. The relations of ``being a shift of'' and ``being congruent to'' are clearly equivalence relations, and their equivalence classes are called {\em oriented cycles} and {\em cycles}, respectively. With a slight abuse of terminology, we shall sometimes identify an (oriented) cycle with any of its representatives. \subsection{Alter-equivalence, alter-exponent, alter-perimeter, and alter-sequence} \label{sec:alt} Let $\Gamma$ be an asymmetric digraph. The {\em signature} of a walk $W=(v_0,v_1,\ldots,v_n)$ is an $n$-tuple $(\epsilon_1, \epsilon_2, \ldots, \epsilon_n)$, where $\epsilon_i=1$ if $(v_{i-1},v_i)$ is an arc of $\Gamma$, and $\epsilon_i=-1$ otherwise. The signature of a walk $W$ will be denoted by $\sigma(W)$. The sum of all the integers in $\sigma(W)$ is called the {\em sum} of the walk $W$ and denoted by $s(W)$; similarly, the $k^{th}$ {\em partial sum $s_k(W)$} is the sum of the initial walk $(v_0,v_1,\ldots,v_k)$ of length $k$. By convention, we let $s_0(W)=0$. The {\em tolerance} of a walk $W$ of length $n$, denoted $T(W)$, is the set $\{s_k(W) : k\in \{0,1,\ldots, n\}\}$.
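These definitions translate directly into code. The following minimal sketch (the helper names are ours) computes the signature and tolerance of a walk in a digraph given by its arc set.

```python
# Minimal sketch of the walk invariants just defined (helper names are ours):
# signature, partial sums, and tolerance of a walk in a digraph given by its arcs.

def signature(arcs, walk):
    """sigma(W): +1 for a step along an arc, -1 for a step against one."""
    sig = []
    for u, v in zip(walk, walk[1:]):
        if (u, v) in arcs:
            sig.append(+1)
        elif (v, u) in arcs:
            sig.append(-1)
        else:
            raise ValueError(f"{u}-{v} is not an edge of the digraph")
    return sig

def tolerance(arcs, walk):
    """T(W): the set of partial sums s_0 = 0, s_1, ..., s_n."""
    s, sums = 0, [0]
    for e in signature(arcs, walk):
        s += e
        sums.append(s)
    return set(sums)

# On the directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0, the walk (0,1,2,1,0)
# has sum 0 and tolerance [0,2]:
arcs = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(signature(arcs, (0, 1, 2, 1, 0)))   # [1, 1, -1, -1]
print(tolerance(arcs, (0, 1, 2, 1, 0)))   # {0, 1, 2}
```

The walk $(0,1,0)$ in the same digraph has tolerance $\{0,1\}=[0,1]$, i.e., it is alternating in the sense of Section~\ref{sec:rad}.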
Observe that the tolerance of a walk is always an interval of integers containing $0$. Let $t$ be a positive integer or $\infty$. We say that two vertices $u$ and $v$ of $\Gamma$ are {\em alter-equivalent with tolerance $t$} if there is a walk from $u$ to $v$ with sum $0$ and tolerance contained in $[0,t]$; we shall then write $u \mathcal{A}_t v$. The equivalence class of $\mathcal{A}_t$ containing a vertex $v$ will be denoted by $\mathcal{A}_t(v)$. Since we assume that $\Gamma$ is a finite digraph, there exists an integer $e\geq 0$ such that $\mathcal{A}_e = \mathcal{A}_{e+1}$ (and then $\mathcal{A}_e = \mathcal{A}_\infty$). The smallest such integer is called the {\em alter-exponent} of $\Gamma$ and denoted by $\exp(\Gamma)$. The number of equivalence classes of $\mathcal{A}_{\infty}$ is called the {\em alter-perimeter} of $\Gamma$. The name originates from the fact that the quotient digraph of $\Gamma$ with respect to $\mathcal{A}_\infty$ is either a directed cycle or the complete graph $K_2$ or the graph $K_1$ with one vertex. If $e$ is the alter-exponent of a (vertex-transitive) digraph $\Gamma$, then the finite sequence $[|\mathcal{A}_1(v)|, |\mathcal{A}_2(v)|, \ldots, |\mathcal{A}_e(v)|]$ is called the {\em alter-sequence} of $\Gamma$. Several interesting properties of the alter-exponent can be proved (see \cite{bridge}). For example, if $\Gamma$ is connected and $G$-vertex-transitive, then $\exp(\Gamma)$ is the smallest positive integer $e$ such that the setwise stabiliser $G_{\mathcal{A}_e(v)}$ is normal in $G$. The group $G_{\mathcal{A}_e(v)}$ is the group generated by all vertex-stabilisers in $G$ and $G/G_{\mathcal{A}_e(v)}$ is a cyclic group. All notions defined in this section for digraphs generalise to half-arc-transitive graphs, where instead of the graph one of the two natural arc-transitive digraphs is considered.
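The classes $\mathcal{A}_t(v)$ can be computed by a breadth-first search over pairs (vertex, partial sum), keeping only walks whose partial sums stay in $[0,t]$. The sketch below is our own code (not taken from the census package) and uses our own toy example.

```python
from collections import deque

# Sketch (ours) of computing the alter-equivalence class A_t(v): u lies in
# A_t(v) iff some walk from v to u has sum 0 and all partial sums in [0, t].

def alter_class(arcs, v, t):
    out, inn = {}, {}
    for a, b in arcs:
        out.setdefault(a, []).append(b)
        inn.setdefault(b, []).append(a)
    seen = {(v, 0)}
    queue = deque([(v, 0)])
    while queue:
        u, s = queue.popleft()
        steps = [(w, s + 1) for w in out.get(u, [])] \
              + [(w, s - 1) for w in inn.get(u, [])]
        for state in steps:
            if 0 <= state[1] <= t and state not in seen:
                seen.add(state)
                queue.append(state)
    return {u for (u, s) in seen if s == 0}

# An "alternating" hexagon 0 -> 1 <- 2 -> 3 <- 4 -> 5 <- 0: with tolerance 1
# one can only alternate forward/backward steps, reaching the even vertices.
arcs = {(0, 1), (2, 1), (2, 3), (4, 3), (4, 5), (0, 5)}
print(sorted(alter_class(arcs, 0, 1)))   # [0, 2, 4]
```

In this example $\mathcal{A}_1=\mathcal{A}_\infty$ has the two classes $\{0,2,4\}$ and $\{1,3,5\}$, so the alter-perimeter is $2$.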
As was shown in \cite{bridge}, all the parameters defined here remain the same if instead of a digraph, its opposite digraph is considered. \subsection{Alternating cycles -- radius and attachment number} \label{sec:rad} A walk $W$ in an asymmetric digraph is called {\em alternating} if its tolerance is either $[0,1]$ or $[-1,0]$ (that is, if the signs in its signature alternate). Similarly, a cycle is called {\em alternating} provided that any (and thus every) one of its representatives is an alternating walk. This notion was introduced in \cite{Mar98} and used to classify the so-called {\em tightly attached} $4$-GHATs. The concept of alternating cycles was explored further in a number of papers on $4$-HATs (see, for example, \cite{MarPra,Spa08}). Let $\Gamma$ be a $2$-ATD, let ${\mathcal{C}}$ be the set of all alternating cycles of $\Gamma$, and let $G={\rm {Aut}}(\Gamma)$. The set ${\mathcal{C}}$ is clearly preserved by the action of $G$ upon the cycles of $\Gamma$. Moreover, since $\Gamma$ is arc-transitive, $G$ acts transitively on ${\mathcal{C}}$. In particular, all the alternating cycles of $\Gamma$ are of equal length. Half of the length of an alternating cycle is called the {\em radius} of $\Gamma$. Since $\Gamma$ is $2$-valent, every vertex of $\Gamma$ belongs to precisely two alternating cycles. It thus follows from vertex-transitivity of $\Gamma$ that any (unordered) pair of intersecting cycles can be mapped to any other such pair, implying that there exists a constant $a$ such that any two cycles meet either in $0$ or in $a$ vertices. The parameter $a$ is then called the {\em attachment number} of $\Gamma$. In general, the attachment number divides the length of the alternating cycle (twice the radius), and there are digraphs where $a$ equals this length; they were classified in \cite[Proposition 2.4]{Mar98}, where it was shown that their underlying graphs are always arc-transitive.
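These invariants are straightforward to compute once the alternating cycles have been traced out. The sketch below is our own code and example (it is not part of the census package): it walks each alternating cycle of a 2-valent digraph by alternately stepping backward at a head and forward at a tail, and reads off the radius and attachment number on the wreath digraph with vertex set $\ZZ_4\times\ZZ_2$ and arcs $(i,a)\to(i+1,b)$.

```python
from itertools import product

# Sketch (ours): trace the alternating cycles of a 2-valent digraph and read
# off its radius and attachment number. Example: the wreath digraph on
# Z_4 x Z_2 with arcs (i,a) -> (i+1,b).

def alternating_cycles(arcs):
    """Vertex sets of the alternating cycles of a 2-valent digraph."""
    out, inn = {}, {}
    for a, b in arcs:
        out.setdefault(a, []).append(b)
        inn.setdefault(b, []).append(a)
    cycles, used = set(), set()
    for start in arcs:
        verts, arc = [], start
        while arc not in used:          # each arc is walked forward only once
            used.add(arc)
            u, v = arc
            verts += [u, v]
            w = next(p for p in inn[v] if p != u)   # step backward at the head v
            x = next(q for q in out[w] if q != v)   # step forward at the tail w
            arc = (w, x)
        if verts:
            cycles.add(frozenset(verts))
    return cycles

arcs = {((i, a), ((i + 1) % 4, b)) for i in range(4) for a in (0, 1) for b in (0, 1)}
cycles = alternating_cycles(arcs)
radius = len(next(iter(cycles))) // 2
meets = {len(c & d) for c, d in product(cycles, repeat=2) if c != d and c & d}
print(len(cycles), radius, meets)   # 4 cycles, radius 2, attachment number 2
```

This digraph is antipodally attached: intersecting alternating cycles share exactly two vertices.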
A $2$-valent asymmetric digraph with attachment number $a$ is called {\em tightly attached} if $a$ equals the radius, is called {\em antipodally attached} if $a=2$, and is called {\em loosely attached} if $a=1$. Note that tightly attached $2$-ATDs are precisely those with alter-exponent 1. \subsection{Consistent cycles} Let $\Gamma$ be a graph and let $G\le {\rm {Aut}}(\Gamma)$. An (oriented) cycle $C$ in $\Gamma$ is called $G$-consistent provided that there exists $g\in G$ that preserves $C$ and acts upon it as a $1$-step rotation. A $G$-orbit of $G$-consistent oriented cycles is said to be {\em symmetric} if it contains the inverse of any (and thus each) of its members, and is {\em chiral} otherwise. Consistent oriented cycles were first introduced by Conway in a public lecture \cite{con} (see also \cite{biggs,mik,overlap}). Conway's original result states that in an arc-transitive graph of valence $d$, the automorphism group of the graph has exactly $d-1$ orbits on the set of consistent oriented cycles. In particular, if $\Gamma$ is $4$-valent and $G$-arc-transitive, then there are precisely three $G$-orbits of $G$-consistent oriented cycles. Since chiral orbits of $G$-consistent cycles come in pairs of mutually inverse oriented cycles, this implies that there must be at least one symmetric orbit, while the other two are either both chiral or both symmetric.
\subsection{Metacirculants} A {\em metacirculant} is a graph whose automorphism group contains a vertex-transitive metacyclic group $G$, generated by $\rho$ and $\sigma$, such that the cyclic group $\langle \rho \rangle$ is semiregular on the vertex-set of the graph, and is normal in $G$. Metacirculants were first defined by Alspach and Parsons \cite{AlsPar}, and metacirculants admitting $\frac{1}{2}$-arc-transitive groups of automorphisms were first investigated in \cite{sajna}. Recently, the interesting problem of classifying all $4$-HATs that are metacirculants was considered in \cite{MarSpa,Spa09,Spa10}. Such $4$-HATs fall into four (not necessarily disjoint) classes (called Class I, Class II, Class III, and Class IV), depending on the structure of the quotient by the orbits of the semiregular element $\rho$. For a precise definition of the {\em class} of a $4$-HAT metacirculant see, for example, \cite[Section 2]{Spa09}. Since a given $4$-HAT may admit several vertex-transitive metacyclic groups, a fixed graph can fall into several of these four classes. Several interesting facts about $4$-HAT metacirculants are known. For example, tightly attached $4$-HATs are precisely the $4$-HATs that are metacirculants of Class I. \subsection{The data on $2$-ATDs} \label{sec4.6} The ``Census-ATD-1k-data.csv'' file concerns $2$-ATDs. Each line of the file represents one of the digraphs in the census, and has 19 fields, described below. Since this file is in ``csv'' format, every occurrence of a comma in a field is substituted with a semicolon.
\begin{itemize} \item {\tt Name}: the name of the digraph (for example, ATD[32,2]); \item {\tt $|$V$|$}: the order of the digraph; \item {\tt SelfOpp}: contains ``yes" if the digraph is isomorphic to its opposite digraph and ``no" otherwise; \item {\tt Opp}: the name of the opposite digraph (the same as ``Name" if the digraph is self-opposite); \item {\tt IsUndAT}: ``yes" if the underlying graph is arc-transitive, ``no'' otherwise; \item {\tt UndGrph}: the name of the underlying graph, as given in the files ``Census-HAT-1k-data.csv'' and ``Census-GHAT-1k-data.csv'' -- if the underlying graph is generalised wreath, then this is indicated by, say, ``GWD(m,k)" where $m$ and $k$ are the defining parameters. \item {\tt s}: the largest integer $s$ such that the digraph is $s$-arc-transitive; \item {\tt GvAb}: ``Ab'' if the vertex-stabiliser in the automorphism group of the digraph is abelian, otherwise ``n-Ab''; \item {\tt $|$Tv:Gv$|$}: the index of the automorphism group $G$ of the digraph in the smallest arc-transitive group $T$ of the underlying graph that contains $G$ -- if there is no such group $T$, then $0$; \item {\tt $|$Av:Gv$|$}: the index of the automorphism group of the digraph in the automorphism group of the underlying graph; \item {\tt Solv}: this field contains ``solv" if the automorphism group of the digraph is solvable and ``n-solv" otherwise; \item {\tt Rad}: the {\em radius}, that is, half of the length of an alternating cycle; \item {\tt AtNo}: the {\em attachment number}, that is, the size of the intersection of two intersecting alternating cycles; \item {\tt AtTy}: the {\em attachment type}, that is: ``loose" if the attachment number is $1$, ``antipodal" if $2$, and ``tight" if equal to the radius, otherwise ``---"; \item {\tt $|$AltCyc$|$}: the number of alternating cycles; \item {\tt AltExp}: the alter-exponent; \item {\tt AltPer}: the alter-perimeter; \item {\tt AltSeq}: the alter-sequence; \item {\tt IsGWD}: ``yes" if the digraph is
generalised wreath, and ``no" otherwise. \end{itemize} \subsection{The data on arc-transitive $4$-GHATs} \label{sec4.7} The ``Census-GHAT-1k-data.csv'' file concerns arc-transitive $4$-GHATs. Each line of the file represents one of the graphs in the census, and has nine fields, described below. Note, however, that the file does not contain the generalised wreath graphs. \begin{itemize} \item {\tt Name:} the name of the graph (for example GHAT[9,1]); \item {\tt $|$V$|$}: the order of the graph; \item {\tt gir}: the girth (length of a shortest cycle) of the graph; \item {\tt bip}: this field contains ``b" if the graph is bipartite and ``nb" otherwise; \item {\tt CayTy}: this field contains ``Circ" if the graph is a circulant (that is, a Cayley graph on a cyclic group), ``AbCay" if the graph is a Cayley graph on an abelian group, but not a circulant, and ``Cay" if it is Cayley but not on an abelian group -- it contains ``n-Cay" otherwise; \item {\tt $|A_v|$}: the order of the vertex-stabiliser in the automorphism group of the graph; \item {\tt $|G_v|$}: a sequence of the orders of vertex-stabilisers of the maximal half-arc-transitive subgroups of the automorphism group -- up to conjugacy in the automorphism group; \item {\tt solv}: this field contains ``solv" if the automorphism group of the graph is solvable and ``n-solv" otherwise; \item {\tt $[|$ConCyc$|]$}: the sequence of the lengths of $A$-consistent oriented cycles of the graph (one cycle per $A$-orbit, where $A$ is the automorphism group of the graph) -- the symbols ``c'' and ``s" indicate whether the corresponding cycle is chiral or symmetric -- for example, $[4c;4c;10s]$ means there are two chiral orbits of $A$-consistent cycles, both containing cycles of length $4$, and one orbit of symmetric consistent cycles, containing cycles of length $10$. \end{itemize} \subsection{The data on $4$-HATs} The ``Census-HAT-1k-data.csv'' file concerns $4$-HATs.
Each line of the file represents one of the graphs in the census, and has 16 fields. The fields {\tt $|$V$|$}, {\tt gir}, {\tt bip}, and {\tt Solv} are as in Section~\ref{sec4.7}, and the fields {\tt Rad, AtNo, AtTy, AltExp, AltPer} and {\tt AltSeq} are as in Section~\ref{sec4.6}. The remaining fields are as follows: \begin{itemize} \item {\tt Name:} the name of the graph (for example HAT[27,1]); \item {\tt IsCay}: this field contains ``Cay" if the graph is Cayley and ``n-Cay" otherwise; \item {\tt $|G_v|$}: the order of the vertex-stabiliser in the automorphism group of the graph; \item {\tt CCa}: the length of a shortest consistent cycle; \item {\tt CCb}: the length of a longest consistent cycle; \item {\tt MetaCircTy}: ``$\{\}$" if the graph is not a metacirculant; otherwise the set of types of metacirculants realised by the graph. \end{itemize}
\section{\label{sec:level1}Introduction} Ghost imaging (GI) retrieves the object information via intensity correlation between two separated but related light fields, i.e., object arm and reference arm. To our knowledge, the concept of GI was theoretically proposed by Belinskii and Klyshko in 1994 \cite{Belinskii1994}, and the first GI experiment was implemented in 1995 with quantum entangled photon pairs \cite{Pittman1995}, which led GI to be originally regarded as a quantum phenomenon. Later, it was shown that classical thermal or pseudo-thermal light \cite{DaZhang2005,Ferri2005,Jun2005,Liu2014} could also be used to acquire ghost images, raising a controversy over whether entanglement is necessary for GI. One thing is certain: thermal or pseudo-thermal light has become a favorable source for GI, given its convenience in practical applications \cite{YuOC2016,Clemente2010,YuAO2019,Gong2015,YuSR2014}. If one uses a programmable spatial light modulator (SLM) or digital micromirror device (DMD) to replace the reference arm, then the pixelated array detector can be removed, which simplifies the imaging configuration. This technique is called computational GI \cite{Shapiro2008,Bromberg2009}. In recent years, GI has attracted more and more attention, and in order to improve its imaging quality and efficiency, many efforts have been made on its correlation reconstruction algorithms, such as background-removal GI \cite{Gatti2004}, high-order GI \cite{Chan2009}, differential GI (DGI) \cite{Ferri2010}, sequential-deviation GI \cite{LiOE2019}, super sub-Nyquist GI \cite{YuSensors2019}, etc. However, the imaging mechanism of these algorithms lacks a universal unified interpretation. Inspired by Yang's theoretical interpretation work \cite{CaoPRA2018}, which mainly focused on correspondence imaging and required a strong assumption, we build a new unified theoretical model where the speckle patterns can obey any identical distribution and the objects can be of gray-scale.
We assume that the intensities of any two pixels in one speckle pattern as well as the ones on a certain pixel of different speckle patterns are independently and identically distributed, all following the same distribution. That is, any two pixel values of reference patterns in both space and time dimensions are independently and identically distributed, and each pixel value can be regarded as a stochastic variable. Then, the single-pixel (bucket) signal (also a random variable) can be treated as a linear combination of all these pixels. With these assumptions, we deduce an independence discrimination separated formula, which is applied to prove that the results of multi-functional intensity correlation forms are linear transformations of the original gray values in terms of the mean; thus the object information can be extracted. Both simulations and experiments have been performed to verify the correctness of this theoretical model, and to demonstrate that the recovered values via intensity correlation in the pixel region of the same original gray value will obey a Gaussian distribution. \section{\label{sec:level2}Probability theory for ghost imaging} The total pixel number of the object image can be denoted by $M$, and the gray value of one pixel is expressed by $d$. Then, we denote the gray value of the $m$th pixel as $d_m$, with a range from 0 to 1, where 0 stands for being completely opaque and 1 stands for being completely transparent. Assume that each reference speckle pattern also has $M$ pixels, and accordingly, the light intensity of the $m$th pixel is denoted by $I_m$. Suppose that the intensities of any two pixels in one reference pattern are independently and identically distributed, and those on a certain pixel of any two different reference patterns are also independently and identically distributed. They all follow an identical probability distribution $\mathcal{I}$, with a mean $E(\mathcal{I})$ and a variance $D(\mathcal{I})$.
For the sake of generality, in this paper we will mainly discuss a functional form of $\mathcal{I}$ for reconstruction, i.e., $\mathcal{F}=f(\mathcal{I})$ \cite{Zhang2019}, instead of $\mathcal{I}$, where $f$ can be a power function, exponential function, logarithmic function, etc. Here, the function for the $m$th pixel is written as $F_m$, obeying a distribution $\mathcal{F}$. After each reference pattern interacts with the object, the light intensity at the $m$th pixel of the spatial light field can be written as $\gamma d_mI_m$, where the factor $\gamma$ is a constant. The bucket signal can be acquired via \begin{equation} S=\gamma\sum_{m}^Md_mI_m. \end{equation} In the following, we need to divide $S$ into two parts, $\widetilde{S}$ and $\gamma d_nI_n$, i.e., \begin{equation} S=\widetilde{S}+\gamma d_nI_n, \end{equation} where $\widetilde{S}=\gamma\sum_{m\ne n}^M d_mI_m$. Noticeably, $\widetilde{S}$ is independent of $I_n$. If $I_n$ is replaced with $F_n$, then $\widetilde{S}$ is also independent of $F_n$. The second-order correlation function for the $n$th pixel can be written as \begin{equation} G^{(2)}_{n}=\langle S\cdot I_n\rangle=\frac{1}{T}\sum_t^T S_tI_{tn}, \end{equation} where $T$ is the total number of reference patterns, the subscript $t$ stands for the $t$th measurement, and the subscript $tn$ denotes the $n$th pixel for the $t$th reference pattern. According to Liu's work \cite{Zhang2019}, $I_n$ in the correlation function can be replaced with some functions $F_n$. For generality, we use $F_n=f(I_n)$ to represent an arbitrary function except for a constant function. With this assumption, the function $F$ can also be treated as a random variable, obeying a distribution $\mathcal{F}$. Then, we have \begin{equation} G^{(2)}_{n}=\langle S\cdot F_n\rangle=\frac{1}{T}\sum_t^T S_tF_{tn}. \end{equation} Obviously, $G^{(2)}_{n}$ is also a stochastic variable.
Now, let us calculate its mean: \begin{align}\label{eq:EG2} E(G^{(2)}_{n})&=\frac{1}{T}\sum_t^T E(S_tF_{tn})\nonumber\\ &=\frac{1}{T}\sum_t^T E[(\widetilde{S}_{t}+\gamma d_nI_{tn})F_{tn}]\nonumber\\ &=\frac{1}{T}\sum_t^T [E(\widetilde{S}_{t})E(F_{tn})+E(\gamma d_nI_{tn}F_{tn})]. \end{align} For the first term of Eq.~(\ref{eq:EG2}), it can be written as \begin{align}\label{eq:firstterm} E(\widetilde{S}_{t})E(F_{tn})&=E(\gamma \sum_{m\ne n}^M d_mI_{tm})E(F_{tn})\nonumber\\ &=\gamma\sum_{m\ne n}^M d_mE(\mathcal{I})E(\mathcal{F})\nonumber\\ &=\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})-\gamma d_nE(\mathcal{I})E(\mathcal{F}). \end{align} For the second term of Eq.~(\ref{eq:EG2}), it can be given as \begin{align}\label{eq:BE} E(\gamma d_mI_{tm}F_{tn})&=\gamma d_mE(I_{tm}F_{tn})\nonumber\\ &=\begin{cases} \gamma d_nE(\mathcal{IF})& m=n\\ \gamma d_mE(\mathcal{I})E(\mathcal{F})& m\ne n, \end{cases} \end{align} which we call the independence discrimination separated formula. Substituting Eqs.~(\ref{eq:firstterm}) and (\ref{eq:BE}) into Eq.~(\ref{eq:EG2}) gives \begin{align}\label{eq:EG2final} E(G^{(2)}_{n})=&\frac{1}{T}\sum_t^T[\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})-\gamma d_nE(\mathcal{I})E(\mathcal{F})\nonumber\\ &+\gamma d_nE(\mathcal{IF})]\nonumber\\ =&\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})\nonumber\\ &+\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n\nonumber\\ =&C_2+C_1d_n, \end{align} where both $C_1$ and $C_2$ are constants: \begin{align} C_1&=\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})],\\ C_2&=\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F}). \end{align} Therefore, the mean of $G^{(2)}_{n}$ is a linear transformation of the original object's gray value $d_n$, while the transformation coefficients $C_1$ and $C_2$ are independent of this gray value $d_n$. This explains why the object image can be restored. 
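This linearity can be checked with a short Monte-Carlo simulation. In the toy sketch below (all parameter choices are ours), the pattern pixels are drawn i.i.d. from the uniform distribution on $[0,1]$ and the identity deformation $\mathcal{F}=\mathcal{I}$ is used, so that $E(\mathcal{I})=E(\mathcal{F})=1/2$, $E(\mathcal{IF})=E(\mathcal{I}^2)=1/3$, and hence $C_1=\gamma/12$ and $C_2=\gamma\sum_m d_m/4$.

```python
import random

# Toy numerical check (parameters ours) of E(G2_n) = C2 + C1*d_n for i.i.d.
# uniform speckle patterns and the identity deformation F = I.

random.seed(1)
M, T, gamma = 16, 50_000, 1.0
d = [random.random() for _ in range(M)]      # gray-scale object values in [0,1]

G2 = [0.0] * M
for _ in range(T):
    I = [random.random() for _ in range(M)]  # one reference pattern, I ~ U(0,1)
    S = gamma * sum(dm * Im for dm, Im in zip(d, I))   # bucket signal
    for n in range(M):
        G2[n] += S * I[n]
G2 = [v / T for v in G2]

# For I ~ U(0,1), F = I: E(I) = E(F) = 1/2 and E(IF) = E(I^2) = 1/3, hence
C1 = gamma * (1 / 3 - 1 / 4)                 # = gamma/12
C2 = gamma * sum(d) / 4                      # sum_m d_m * E(I) * E(F)
err = max(abs(v - (C2 + C1 * dn)) for v, dn in zip(G2, d))
print(err)   # statistical fluctuation, of order 1/sqrt(T)
```

The deviation from the straight line $C_2+C_1d_n$ shrinks as $1/\sqrt{T}$, consistent with the fluctuation analysis below.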
We can also see that a different functional form $\mathcal{F}$ of $\mathcal{I}$ only affects the coefficients $C_1$ and $C_2$, and the reconstruction transformation is still linear; thus the image can still be recovered. This also explains why power functions, exponential functions, logarithmic functions, etc. can be used to reconstruct the ghost images. Following the above calculation idea, we further deduce the mean of the background-removal correlation function $\Delta G^{(2)}$. For the $n$th pixel, $\Delta G^{(2)}_n$ can be written as \begin{align} \Delta G^{(2)}_n&=\langle S\cdot F_n\rangle-\langle S\rangle\langle F_n\rangle\nonumber\\ &=G^{(2)}_n-\frac{1}{T^2}\sum_t^T S_t\sum_t^T F_{tn}\nonumber\\ &=G^{(2)}_n-\frac{1}{T^2}(\sum_t^T S_tF_{tn}+\sum_t^T\sum_{t'\ne t}^T S_tF_{t'n})\nonumber\\ &=G^{(2)}_n-\frac{1}{T}G^{(2)}_n-\frac{1}{T^2}\sum_t^T\sum_{t'\ne t}^T S_tF_{t'n}. \end{align} Then, we calculate its mean: \begin{equation}\label{eq:DeltaG2n} E(\Delta G^{(2)}_n)=(1-\frac{1}{T})E(G^{(2)}_n)-\frac{1}{T^2}E(\sum_t^T\sum_{t'\ne t}^T S_tF_{t'n}). \end{equation} Since the first term has already been given by Eq.~(\ref{eq:EG2final}), we will directly calculate the second term: \begin{align}\label{eq:Deltasecondterm} E(\sum_t^T\sum_{t'\ne t}^T S_tF_{t'n})&=\sum_t^T\sum_{t'\ne t}^T E(S_tF_{t'n})\nonumber\\ &=\sum_t^T\sum_{t'\ne t}^T E(S_t)E(F_{t'n})\nonumber\\ &=\sum_t^T\sum_{t'\ne t}^T(\gamma\sum_m^M d_mE(I_{tm}))E(F_{t'n})\nonumber\\ &=T(T-1)\gamma\sum_m^M d_m E(\mathcal{I})E(\mathcal{F}).
\end{align} Substituting Eqs.~(\ref{eq:EG2final}) and (\ref{eq:Deltasecondterm}) into Eq.~(\ref{eq:DeltaG2n}), we have \begin{align}\label{eq:Deltafinal} E(\Delta G^{(2)}_n)=&\left(1-\frac{1}{T}\right)\{\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})\nonumber\\ &+\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n\}\nonumber\\ &-\left(1-\frac{1}{T}\right)\gamma\sum_m^M d_m E(\mathcal{I})E(\mathcal{F})\nonumber\\ =&\left(1-\frac{1}{T}\right)\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n\nonumber\\ \approx&\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n\nonumber\\ =&C_1d_n. \end{align} Obviously, the mean of $\Delta G^{(2)}_{n}$ is also a linear transformation of the original gray value $d_n$, while the transformation coefficient $C_1$ is also independent of this gray value $d_n$. Therefore, the object information can be retrieved by $\Delta G^{(2)}_{n}$. For different functions $\mathcal{F}=f(\mathcal{I})$, this linear transformation relationship still exists. In the above, we have already analyzed the mean of $\Delta G^{(2)}_{n}$, but this analysis is not enough. As we know, there exist certain noise fluctuations in the reconstructed result of $\Delta G^{(2)}_{n}$, so describing the reconstruction performance of $\Delta G^{(2)}_{n}$ only by the average value is one-sided. Next, we will discuss such fluctuations in detail. Recall that $\Delta G^{(2)}_n$ can be derived from the second-order correlation function $G^{(2)}_n$: \begin{align}\label{eq:Deltaequivalence} \Delta G^{(2)}_n=&\langle SF_n\rangle-\langle S\rangle\langle F_n\rangle\nonumber\\ =&\langle(S-\langle S\rangle)(F_n-\langle F_n\rangle)\rangle. \end{align} When the number of reference patterns is large enough, we have $\langle S\rangle\approx E(S)$ and $\langle F_n\rangle\approx E(F_n)$.
Thus, Eq.~(\ref{eq:Deltaequivalence}) can be approximated as \begin{align} \Delta G^{(2)}_n\approx&\langle(S-E(S))(F_n-E(F_n))\rangle\nonumber\\ =&\langle(S-E(S))(F_n-E(\mathcal{F}))\rangle. \end{align} According to the well-known ``central limit theorem" \cite{Laplace1812,Lyapunov1954} in probability theory, when the number of reference patterns $T$ is large enough, the result of $\Delta G^{(2)}_n$ for an original gray value $d_n$ will obey a Gaussian distribution with a mean $\mu_n=E[(S-E(S))(F_n-E(\mathcal{F}))]$ and a variance $\sigma_n^2=\frac{1}{T}D\{[S-E(S)][F_n-E(\mathcal{F})]\}$. This reveals an important feature of $\Delta G^{(2)}$: for a certain gray-scale value of the original object, the calculated $\Delta G^{(2)}_n$ value is not a fixed number, but follows a Gaussian distribution. That is, the reconstructed $\Delta G^{(2)}_n$ values for different object gray-scale values will fluctuate around different means, obeying different Gaussian distributions, and their differences can be resolved from the recovered image. The feature that ``one original gray-scale level corresponds to one Gaussian curve, which can be calculated from the reconstructed values in the corresponding pixel positions" is crucial for explaining why $\Delta G^{(2)}_n$ can reconstruct a ghost image. \section{\label{sec:level3}Numerical simulation} To verify that the values recovered by the deformed correlation function $\Delta G^{(2)}_n$ for each original gray value obey the predicted theoretical distribution, we use patterns that obey a given distribution $\mathcal{I}$ for measurement and apply its functional forms $\mathcal{F}=f(\mathcal{I})$ for reconstruction. Since we cannot go through all functions for verification, only some common functions such as $\mathcal{F}=\mathcal{I}$, $\mathcal{F}=\mathcal{I}^3$, $\mathcal{F}=\exp(\mathcal{I})$ and $\mathcal{F}=\ln(\mathcal{I})$ are used here.
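A stripped-down, stand-alone version of such a simulation can be written in a few lines. The sketch below (all parameter choices are ours) uses uniform patterns and $\mathcal{F}=\mathcal{I}^3$, for which $E(\mathcal{IF})=E(\mathcal{I}^4)=1/5$ and $E(\mathcal{I})E(\mathcal{F})=\frac{1}{2}\cdot\frac{1}{4}=1/8$, so the predicted mean is $\mu_n=C_1 d_n$ with $C_1=3/40$ (taking $\gamma=1$).

```python
import random

# Toy check (parameters ours) that Delta G2_n with the deformation F = I^3
# fluctuates around C1*d_n, where C1 = gamma*[E(I^4) - E(I)E(I^3)] = 3/40
# for I ~ U(0,1) and gamma = 1.

random.seed(7)
M, T = 16, 50_000
d = [m / (M - 1) for m in range(M)]          # a gray ramp: d_n = n/15

sum_S, sum_F, sum_SF = 0.0, [0.0] * M, [0.0] * M
for _ in range(T):
    I = [random.random() for _ in range(M)]
    F = [x ** 3 for x in I]                  # deformed reference values
    S = sum(dm * Im for dm, Im in zip(d, I)) # bucket signal
    sum_S += S
    for n in range(M):
        sum_F[n] += F[n]
        sum_SF[n] += S * F[n]

# Delta G2_n = <S F_n> - <S><F_n>
dG2 = [sum_SF[n] / T - (sum_S / T) * (sum_F[n] / T) for n in range(M)]
C1 = 1 / 5 - 1 / 8                           # = 3/40 = 0.075
err = max(abs(v - C1 * dn) for v, dn in zip(dG2, d))
print(err)   # small compared with C1 = 0.075
```

Repeating the whole reconstruction many times and histogramming $\Delta G^{(2)}_n$ for a fixed $d_n$ would likewise exhibit the Gaussian fluctuation around $\mu_n$ predicted above.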
After reconstructing the image, we calculate the practical probability density distributions and the Gaussian theoretical curves (obtained from theoretical mean and variance) of recovered pixel values falling in each pixel region of the same original gray value $d_n$, to see whether the reconstructed data is consistent with the theoretical Gaussian curve. In the previous section, we have pointed out that the mean and variance of theoretical Gaussian curve by using functional form $\Delta G^{(2)}_n$ to calculate the pixels of original gray value $d_n$ are $\mu_n=E[(S-E(S))(F_n-E(\mathcal{F}))]$ and $\sigma_n^2=\frac{1}{T}D\{[S-E(S)][F_n-E(\mathcal{F})]\}$. Now, we need to express both the mean and variance in terms of those of $\mathcal{I}$ and $\mathcal{F}$. For the mean $\mu_n=E[(S-E(S))(F_n-E(\mathcal{F}))]$, there is \begin{align} &E[(S-E(S))(F_n-E(\mathcal{F}))]\nonumber\\ =&E(SF_n)-E(S)E(\mathcal{F})\nonumber\\ =&E[(\widetilde{S}+\gamma d_nI_n)F_n]-E(\widetilde{S}+\gamma d_nI_n)E(\mathcal{F})\nonumber\\ =&E(\widetilde{S})E(\mathcal{F})+\gamma d_nE(\mathcal{IF})\nonumber\\ &-(E(\widetilde{S})E(\mathcal{F})+\gamma d_nE(\mathcal{I})E(\mathcal{F}))\nonumber\\ =&\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n. \end{align} For the variance $\sigma_n^2=\frac{1}{T}D\{[S-E(S)][F_n-E(\mathcal{F})]\}$, we have \begin{align} &D\{[S-E(S)][F_n-E(\mathcal{F})]\}\nonumber\\ =&D[SF_n-E(S)F_n-E(\mathcal{F})S]\nonumber\\ =&D(SF_n)+D[E(S)F_n]+D[E(\mathcal{F})S]\nonumber\\ &-2Cov[SF_n,E(S)F_n]-2Cov[SF_n,E(\mathcal{F})S]\nonumber\\ &+2Cov[E(S)F_n,E(\mathcal{F})S]\nonumber\\ =&D(SF_n)+E(S)^2D(\mathcal{F})+E(\mathcal{F})^2D(S)\nonumber\\ &-2E(S)[E(SF_n^2)-E(SF_n)E(\mathcal{F})]\nonumber\\ &-2E(\mathcal{F})[E(S^2F_n)-E(SF_n)E(S)]\nonumber\\ &+2E(S)E(\mathcal{F})[E(SF_n)-E(S)E(\mathcal{F})]\nonumber\\ =&E(S)[6E(SF_n)E(\mathcal{F})-2E(SF_n^2)]\nonumber\\ &+E(S)^2[D(\mathcal{F})-2E(\mathcal{F})^2]\nonumber\\ &+D(S)E(\mathcal{F})^2-2E(S^2F_n)E(\mathcal{F})+D(SF_n). 
\end{align} Obviously, there are still many terms in the variance formula that have not been expressed through the mean and variance of $\mathcal{I}$ and $\mathcal{F}$. To this end, we will analyze each term in the variance formula. The unknown terms $E(S)$, $E(SF_n)$ and $E(SF_n^2)$ in the first term $E(S)[6E(SF_n)E(\mathcal{F})-2E(SF_n^2)]$ of the variance formula can be calculated as follows \begin{align} E(S)=&E(\widetilde{S}+\gamma d_nI_n)=E(\widetilde{S})+\gamma d_nE(\mathcal{I}),\label{eq:ES}\\ E(SF_n)=&E[(\widetilde{S}+\gamma d_nI_n)F_n]\nonumber\\ =&E(\widetilde{S})E(\mathcal{F})+\gamma d_nE(\mathcal{IF}),\label{eq:SFn}\\ E(SF_n^2)=&E[(\widetilde{S}+\gamma d_nI_n)F_n^2]\nonumber\\ =&E(\widetilde{S})E(\mathcal{F}^2)+\gamma d_nE(\mathcal{I}\mathcal{F}^2), \end{align} where \begin{equation}\label{eq:EtildeS} E(\widetilde{S})=E(\gamma\sum_{m\ne n}^M d_mI_m)=\gamma\sum_{m\ne n}^M d_mE(\mathcal{I}). \end{equation} The second term $E(S)^2[D(\mathcal{F})-2E(\mathcal{F})^2]$ of the variance formula has only one unknown quantity, $E(S)$, which has already been provided by Eq.~(\ref{eq:ES}). The unknown term $D(S)$ in the third term $D(S)E(\mathcal{F})^2$ of the variance formula can be deduced as \begin{equation} D(S)=D(\widetilde{S}+\gamma d_nI_n)=D(\widetilde{S})+\gamma^2 d_n^2D(\mathcal{I}), \end{equation} where \begin{equation}\label{eq:DtildeS} D(\widetilde{S})=D(\gamma\sum_{m\ne n}^M d_mI_m)=\gamma^2\sum_{m\ne n}^M d_m^2D(\mathcal{I}). \end{equation} Then, we compute the unknown term $E(S^2F_n)$ in the fourth term $-2E(S^2F_n)E(\mathcal{F})$: \begin{align} E(S^2F_n)=&E[(\widetilde{S}+\gamma d_nI_n)^2F_n]\nonumber\\ =&E[(\widetilde{S}^2+2\widetilde{S}\gamma d_nI_n+\gamma^2 d_n^2I_n^2)F_n]\nonumber\\ =&E(\widetilde{S}^2F_n)+2\gamma d_nE(\widetilde{S}I_nF_n)+\gamma^2 d_n^2E(I_n^2F_n)\nonumber\\ =&E(\widetilde{S}^2)E(\mathcal{F})+2\gamma d_nE(\widetilde{S})E(\mathcal{IF})\nonumber\\ &+\gamma^2 d_n^2E(\mathcal{I}^2\mathcal{F}).
\end{align} where $E(\widetilde{S})$ has been provided by Eq.~(\ref{eq:EtildeS}), and $E(\widetilde{S}^2)$ can be calculated by using Eqs.~(\ref{eq:EtildeS}) and (\ref{eq:DtildeS}): \begin{align}\label{eq:EtildeS2} E(\widetilde{S}^2)=&E(\widetilde{S})^2+D(\widetilde{S})\nonumber\\ =&\left[\gamma\sum_{m\ne n}^M d_mE(\mathcal{I})\right]^2+\gamma^2\sum_{m\ne n}^M d_m^2D(\mathcal{I}). \end{align} Now, let us calculate the fifth term $D(SF_n)$: \begin{equation} D(SF_n)=E(S^2F_n^2)-E(SF_n)^2, \end{equation} where $E(SF_n)$ has been provided by Eq.~(\ref{eq:SFn}), and $E(S^2F_n^2)$ can be given by \begin{align}\label{eq:ES2Fn2} E(S^2F_n^2)=&E[(\widetilde{S}+\gamma d_nI_n)^2F_n^2]\nonumber\\ =&E[(\widetilde{S}^2+2\widetilde{S}\gamma d_nI_n+\gamma^2 d_n^2I_n^2)F_n^2]\nonumber\\ =&E(\widetilde{S}^2F_n^2)+2\gamma d_nE(\widetilde{S}I_nF_n^2)+\gamma^2 d_n^2E(I_n^2F_n^2)\nonumber\\ =&E(\widetilde{S}^2)E(\mathcal{F}^2)+2\gamma d_n E(\widetilde{S})E(\mathcal{I}\mathcal{F}^2)\nonumber\\ &+\gamma^2d_n^2E(\mathcal{I}^2\mathcal{F}^2). \end{align} In the above formula, $E(\widetilde{S})$ and $E(\widetilde{S}^2)$ have been given by Eqs.~(\ref{eq:EtildeS}) and (\ref{eq:EtildeS2}), respectively. So far, we have expressed each term in the variance formula in terms of the mean and variance of $\mathcal{I}$ and $\mathcal{F}$. Now, all theoretical calculations for the mean $\mu_n=E[(S-E(S))(F_n-E(\mathcal{F}))]$ and the variance $\sigma_n^2=\frac{1}{T}D\{[S-E(S)][F_n-E(\mathcal{F})]\}$ are completed, all described by the mean and variance of $\mathcal{I}$ and $\mathcal{F}$. Based on the mean and variance, we can easily acquire the Gaussian theoretical curves of the recovered pixel values for each original gray value $d_n$.
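As a numerical cross-check of this bookkeeping, the sketch below (our own illustration; the 16-pixel object, $\gamma=1$, $T=1000$, and the choice $\mathcal{F}=\mathcal{I}^3$ with $\mathcal{I}$ uniform on $[0.1,1]$ are arbitrary) evaluates $\mu_n$ and $\sigma_n^2$ from the moment expressions above and compares them with a direct Monte Carlo repetition of the $T$-frame experiment:

```python
import numpy as np

# Exact moments of I ~ Uniform(0.1, 1): E(I^k) = (1 - 0.1^{k+1}) / (0.9 (k + 1))
mom = lambda k: (1.0 - 0.1 ** (k + 1)) / (0.9 * (k + 1))

gamma, T = 1.0, 1000
d = np.linspace(0.0, 1.0, 16)       # a toy 16-pixel object (arbitrary)
n = 15                              # pixel under study, d_n = 1

EI, EF = mom(1), mom(3)             # moments of I and of F = I^3
DI, DF = mom(2) - EI**2, mom(6) - EF**2
EF2 = mom(6)                        # E(F^2)
EIF, EIF2 = mom(4), mom(7)          # E(IF), E(IF^2)
EI2F, EI2F2 = mom(5), mom(8)        # E(I^2 F), E(I^2 F^2)

dn = d[n]
ES_t = gamma * (d.sum() - dn) * EI                 # E(S~)
DS_t = gamma**2 * ((d**2).sum() - dn**2) * DI      # D(S~)
ES = ES_t + gamma * dn * EI                        # E(S)
DS = DS_t + gamma**2 * dn**2 * DI                  # D(S)
ESF = ES_t * EF + gamma * dn * EIF                 # E(S F_n)
ESF2 = ES_t * EF2 + gamma * dn * EIF2              # E(S F_n^2)
ES2_t = ES_t**2 + DS_t                             # E(S~^2)
ES2F = ES2_t * EF + 2 * gamma * dn * ES_t * EIF + gamma**2 * dn**2 * EI2F
ES2F2 = ES2_t * EF2 + 2 * gamma * dn * ES_t * EIF2 + gamma**2 * dn**2 * EI2F2
DSF = ES2F2 - ESF**2                               # D(S F_n)

mu_n = gamma * (EIF - EI * EF) * dn
var_n = (ES * (6 * ESF * EF - 2 * ESF2) + ES**2 * (DF - 2 * EF**2)
         + DS * EF**2 - 2 * ES2F * EF + DSF) / T

# Monte Carlo check: repeat the whole T-frame experiment many times
rng = np.random.default_rng(1)
vals = np.empty(3000)
for r in range(vals.size):
    I = rng.uniform(0.1, 1.0, size=(T, d.size))
    S = gamma * (I @ d)                            # bucket signals
    F = I[:, n] ** 3
    vals[r] = (S * F).mean() - S.mean() * F.mean() # Delta G^(2) at pixel n
```

The empirical mean and variance of `vals` agree with `mu_n` and `var_n` to within statistical error, confirming that the moment expressions close the calculation.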
The object image used for the numerical simulation is an airplane image of $200\times200$ pixels created by us, containing four gray-scale values 0, 0.4, 0.7 and 1.0, which separately account for 81.975\%, 5.265\%, 8.055\%, and 4.705\% of the whole pixels, as shown in Fig.~\ref{fig:simulation}(a). We generated 100000 reference patterns which obeyed an identical uniform distribution, with the values ranging from 0.1 to 1. Accordingly, 100000 simulated measurements could be acquired. Here, we used $\mathcal{F}=\mathcal{I}$, $\mathcal{F}=\mathcal{I}^3$, $\mathcal{F}=\exp(\mathcal{I})$ and $\mathcal{F}=\ln(\mathcal{I})$ in $\Delta G^{(2)}_n$, and the corresponding reconstructed results are given in Figs.~\ref{fig:simulation}(b)--\ref{fig:simulation}(e). We counted the occurrence probability of the reconstructed pixel values in the regions where the pixels of the same original gray value were located, and compared them with their Gaussian theoretical curves. The Gaussian theoretical curves and calculated probability statistics for different functions are presented in Figs.~\ref{fig:PDF}(a)--\ref{fig:PDF}(d), where the ordinate represents the occurrence probability of these reconstructed values. It can be clearly seen that the probability distributions of the recovered pixel values are highly consistent with their Gaussian theoretical counterparts. \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-1} \caption{\label{fig:simulation}Simulation results. (a) is the original airplane image with four different gray-scale values.
(b)--(e) are the reconstructed images using different functions $\mathcal{F}=\mathcal{I}$, $\mathcal{F}=\mathcal{I}^3$, $\mathcal{F}=\exp(\mathcal{I})$ and $\mathcal{F}=\ln(\mathcal{I})$.} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-2} \caption{\label{fig:PDF}Probability versus the recovered pixel values falling in the pixel region where the original gray value $d=$0, 0.4, 0.7 and 1.0, compared with the Gaussian theoretical curves. (a)--(d) are the cases of using $\mathcal{F}=\mathcal{I}$, $\mathcal{F}=\mathcal{I}^3$, $\mathcal{F}=\exp(\mathcal{I})$ and $\mathcal{F}=\ln(\mathcal{I})$ in $\Delta G^{(2)}_n$, respectively. The abbreviations ``Sim.'' and ``The.'' separately stand for the simulation data and the theoretical curve.} \end{figure*} \section{\label{sec:level4}Experiment and results} In the experiment, we applied a computational GI scheme, in which a DMD was used as an SLM, as shown in Fig.~\ref{fig:setup}. The thermal light from a halogen lamp passed through an aperture diaphragm and was collimated by a beam expander. Then, the light beam illuminated the DMD, with the light intensity being modulated by the preset patterns of the latter. The DMD consisted of 1024 $\times$ 768 micromirrors, each of which was of size $13.68\times13.68\ \mu\textrm{m}^2$ and could be switched to either $+12^\circ$ or $-12^\circ$ according to the pixel value 1 or 0 of the modulation patterns. Therefore, we let the illumination light be incident on the DMD working plane at an angle of 24$^\circ$ from its normal, so that the light corresponding to the bright pixel value 1 was emitted along the normal direction and was projected onto a black-and-white film printed with the letter ``A'' (serving as the object). The transmitted light then converged onto a bucket (single-pixel) detector through a collecting lens.
Here, we used uniformly distributed 0-1 modulation patterns (obeying an identical distribution $\mathcal{I}$) for the measurements; each pattern occupied the central $160\times160$ pixels of the DMD. Thus, 0 and 1 occur on the modulation patterns with the same probability, 0.5. The functions chosen for the calculations are $\mathcal{F}=\mathcal{I}$, $\mathcal{F}=\mathcal{I}^3$, and $\mathcal{F}=\exp(\mathcal{I})$. The function $\mathcal{F}=\ln(\mathcal{I})$ was not used here because $\ln(0)$ is undefined while the reference pattern pixels can take the value 0. \begin{figure}[htbp] \centering \includegraphics[width=0.95\linewidth]{figure-3} \caption{\label{fig:setup}Experimental apparatus for computational GI with a DMD. The computational illumination of thermal light generated by the light intensity modulation of the DMD was projected onto an object. The total transmitted light intensity was collected by a bucket detector.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-4} \caption{\label{fig:expresults}Experimental results. (a) is a binarized image taken by a camera. (b)--(d) are the recovered images using $\mathcal{F}=\mathcal{I}$, $\mathcal{F}=\mathcal{I}^3$, and $\mathcal{F}=\exp(\mathcal{I})$, respectively.} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=0.9\linewidth]{figure-5} \caption{\label{fig:expPDF}Probability as a function of the recovered pixel values that fall in the pixel region where the original gray value equals 0 or 1, compared with their Gaussian theoretical curves. The calculations use (a) $\mathcal{F}=\mathcal{I}$, (b) $\mathcal{F}=\mathcal{I}^3$ and (c) $\mathcal{F}=\exp(\mathcal{I})$ in the function $\Delta G^{(2)}_n$. The abbreviation ``Exp.'' stands for the experimental data and ``The.''
is short for the theoretical curve.} \end{figure*} The experimental results of $\Delta G^{(2)}_n$ using 11940 measurements are given in Figs.~\ref{fig:expresults}(b)--\ref{fig:expresults}(d). Since the measurement noise was inevitable, its influence should be considered. Given this, the bucket value can be written as $S'=S+e=\widetilde{S}+\gamma d_nI_n+e=(\widetilde{S}+e)+\gamma d_nI_n$, where $e$ denotes the noise. Here, we only need to replace $\widetilde{S}$ in Eqs.~(\ref{eq:ES})--(\ref{eq:ES2Fn2}) with $\widetilde{S}+e$ to acquire the noisy formulas. We then have the mean $E(\widetilde{S}+e)=E(\widetilde{S})+E(e)=\gamma\sum_{m\ne n}^M d_mE(\mathcal{I})+E(e)$ and the variance $D(\widetilde{S}+e)=D(\widetilde{S})+D(e)=\gamma^2\sum_{m\ne n}^M d_m^2D(\mathcal{I})+D(e)$. By estimation, the average value of the measurement noise was $E(e)=2.0985\times10^{6}$, and its variance was $D(e)=1.2260\times10^{10}$. Then, we calculated the probability of the reconstructed pixel values located in the region in which the original gray-scale value is 0 or 1, and made a comparison with the Gaussian theoretical curves, as shown in Fig.~\ref{fig:expPDF}. From the charts, it can be seen clearly that the experimental statistical data agree well with the Gaussian theoretical curves. \section{\label{sec:level5}Extension: to explain other correlation functions} By using the same idea, it can also be proven that the mean results of other traditional intensity correlation functions of GI are linear transformations of the original object's gray values. We take $g^{(2)}_n=\frac{\langle S\cdot F_n\rangle}{\langle S\rangle\langle F_n\rangle}$ and $\textrm{DGI}_n=\langle S\cdot F_n\rangle-\frac{\langle S\rangle}{\langle S_R\rangle}\langle S_R\cdot F_n\rangle$ ($S_R$ is defined as $S_R=\sum_m^M I_m$) \cite{Ferri2010} as examples.
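Before turning to these functions, we note that the experimental situation just described, 0-1 patterns plus additive bucket noise, is easy to emulate numerically. The following Python sketch (our own; the binary object and the noise parameters are toy values, not the measured $E(e)$ and $D(e)$) checks that noise independent of the patterns leaves the mean of $\Delta G^{(2)}_n$ unchanged at $C_1 d_n$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = rng.integers(0, 2, size=256).astype(float)          # toy binary object (stand-in for the "A")
T, gamma = 20000, 1.0

I = rng.integers(0, 2, size=(T, d.size)).astype(float)  # 0-1 patterns, P(0) = P(1) = 0.5
e = rng.normal(5.0, 2.0, size=T)                        # additive bucket noise, toy E(e) and D(e)
S = gamma * (I @ d) + e                                 # noisy bucket values S' = S + e
F = np.exp(I)                                           # functional form F = exp(I)
dG2 = (S[:, None] * F).mean(0) - S.mean() * F.mean(0)

# e is independent of the patterns, so E(Delta G2_n) = C1 d_n still holds, with
# C1 = gamma [E(IF) - E(I)E(F)]; for Bernoulli(0.5): E(IF) = e/2, E(I)E(F) = (1 + e)/4
C1 = gamma * (0.5 * np.e - 0.5 * (1.0 + np.e) / 2.0)
```

The noise only widens the Gaussian distributions (through the extra $D(e)$ term in the variance); the pixel values with $d_n=1$ still cluster around $C_1$ and those with $d_n=0$ around zero.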
For $g^{(2)}_n$, there is \begin{align} g^{(2)}_n=&\frac{\langle S\cdot F_n\rangle}{\langle S\rangle\langle F_n\rangle}\nonumber\\ \approx&\frac{\langle S\cdot F_n\rangle}{E(S)E(F_n)}\nonumber\\ =&\frac{G^{(2)}_n}{\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})}. \end{align} Using the result for $E(G^{(2)}_n)$, we can obtain the mean of $g^{(2)}_n$: \begin{align} &E(g^{(2)}_n)\nonumber\\ =&\frac{\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})+\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n}{\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})}\nonumber\\ =&1+\frac{E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})}{\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})}d_n\nonumber\\ =&1+C_3d_n, \end{align} where $C_3=\frac{E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})}{\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})}$. For $\textrm{DGI}_n$, we have \begin{align} \textrm{DGI}_n=&\langle S\cdot F_n\rangle-\frac{\langle S\rangle}{\langle S_R\rangle}\langle S_R\cdot F_n\rangle\nonumber\\ \approx&\langle S\cdot F_n\rangle-\frac{E(S)}{E(S_R)}\langle S_R\cdot F_n\rangle\nonumber\\ =&G^{(2)}_n-\frac{\gamma\sum_m^M d_mE(\mathcal{I})}{ME(\mathcal{I})}\langle S_R\cdot F_n\rangle, \end{align} where only the mean of the term $\langle S_R\cdot F_n\rangle$ is unknown. We can deduce this mean as \begin{align} E(\langle S_R\cdot F_n\rangle)=&E(\sum_m^M I_mF_n)\nonumber\\ =&E[(\sum_{m\ne n}^M I_m+I_n)F_n]\nonumber\\ =&(M-1)E(\mathcal{I})E(\mathcal{F})+E(\mathcal{IF})\nonumber\\ =&ME(\mathcal{I})E(\mathcal{F})+E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F}).
\end{align} Thus, the mean of $\textrm{DGI}_n$ can be written as \begin{align} &E(\textrm{DGI}_n)\nonumber\\ =&E(G^{(2)}_n)-\frac{\gamma\sum_m^M d_mE(\mathcal{I})}{ME(\mathcal{I})}E(\langle S_R\cdot F_n\rangle)\nonumber\\ =&\gamma\sum_m^M d_mE(\mathcal{I})E(\mathcal{F})+\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]d_n\nonumber\\ &-\frac{\gamma\sum_m^M d_m}{M}[ME(\mathcal{I})E(\mathcal{F})+E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]\nonumber\\ =&\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]\left(d_n-\frac{\sum_m^M d_m}{M}\right)\nonumber\\ =&C_1(d_n-C_4), \end{align} where $C_1=\gamma[E(\mathcal{IF})-E(\mathcal{I})E(\mathcal{F})]$ and $C_4=\frac{\sum_m^M d_m}{M}$. Obviously, no matter which kind of function $\mathcal{F}=f(\mathcal{I})$ is used, the means of both $g^{(2)}_n$ and $\textrm{DGI}_n$ are linear transformations of the original gray values. Therefore, the target gray information is retained, and the object image can be recovered. \section{\label{sec:level6}Conclusion} In summary, we have proposed and demonstrated a unified theoretical model, which assumes reference patterns of an arbitrary identical distribution $\mathcal{I}$ and gray-scale objects, to reveal the imaging mechanism of correlation functions. Taking $\Delta G^{(2)}$ as an example, no matter which kind of functional form $\mathcal{F}=f(\mathcal{I})$ is used for the calculation, the recovered pixel values located in the pixel region of the same original gray value follow a Gaussian distribution. The means of these Gaussian distributions for different pixel regions have a linear relationship with their original gray values, which explains why intensity correlation can retrieve the object information. Each Gaussian distribution has a variance, which characterizes the fluctuation of the reconstructed pixel values falling in the same pixel region, and explains why the visibility of GI is generally not very high, with the recovered images usually accompanied by considerable noise.
As a proof and extension of the concept, two other classical correlation functions, $g^{(2)}_n$ and $\textrm{DGI}_n$, were discussed to further verify the universality of this theory. According to our strict theoretical proofs, the essential reason why a classical correlation function can reconstruct the object image is that the reconstruction mean in a specified pixel region (corresponding to the same original gray value) has a linear relationship with this original gray value. Thus, this work provides a statistical perspective on GI theory. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China (Grant No. 61801022), the Natural Science Foundation of Beijing Municipality (Grant No. 4184098), the National Key Research and Development Program of China (Grant No. 2016YFE0131500), the Civil Space Project of China (Grant No. D040301), and the International Science and Technology Cooperation Special Project of Beijing Institute of Technology (Grant No. GZ2018185101). \end{acknowledgments} \nocite{*}
\section{Introduction} Supermassive black holes of about a billion solar masses have been observed at $z > 6$ \citep{2003AJ....125.1649F,2006AJ....131.1203F,2011Natur.474..616M}. How such massive objects are assembled within a billion years after the Big Bang remains one of the unresolved mysteries in the Universe. Various pathways have been suggested to explain the formation of supermassive black holes in the early universe \citep{1984ARA&A..22..471R,2008arXiv0803.2862D,2009ApJ...702L...5B,2009MNRAS.396..343R,2010A&ARv..18..279V,2012arXiv1203.6075H,2012ApJ...750...66J,2013ApJ...771..116J,2013arXiv1309.1067S} such as merging and accretion of PopIII remnants \citep{2001ApJ...552..459H,2004ApJ...613...36H,2009ApJ...696.1798T,2012ApJ...756L..19W,2013ApJ...772L...3L}, collapse of a dense stellar cluster \citep{2004Natur.428..724P,2008ApJ...686..801O, 2009ApJ...694..302D} and the monolithic collapse of a massive primordial gas cloud \citep{2002ApJ...569..558O,2003ApJ...596...34B,2006ApJ...652..902S,2006MNRAS.370..289B,2006MNRAS.371.1813L,2008MNRAS.391.1961D,2008arXiv0803.2862D,2010MNRAS.402.1249S,2010MNRAS.tmp.1427J,2010ApJ...712L..69S,2011MNRAS.411.1659L,2013arXiv1304.1369C,2013MNRAS.433.1607L,2013ApJ...774...64W,2013arXiv1309.1097L}. The necessary conditions for the direct collapse model are that the gas must be of a primordial composition and the formation of molecular hydrogen remains inhibited. The latter can be achieved in the presence of a strong Lyman Werner flux produced by the stellar populations in the first galaxies \citep{2001ApJ...546..635O,2007MNRAS.374.1557J,2008MNRAS.391.1961D,2010MNRAS.402.1249S,2011MNRAS.410..919J,2010ApJ...712L..69S,2011MNRAS.418..838W,2011A&A...532A..66L,2012MNRAS.425.2854A,2013MNRAS.430..588L}, also see \cite{2012MNRAS.422.2539I,2013A&A...553L...9V}. The potential sites for the direct collapse are the massive primordial halos of $\rm 10^{7}-10^{8}~M_{\odot}$ where the above mentioned conditions can be fulfilled. 
Numerical simulations performed to study the collapse of a protogalactic halo in the presence of a strong Lyman Werner flux show that massive objects can be formed \citep{2003ApJ...596...34B,2008ApJ...682..745W,2009MNRAS.393..858R,2011MNRAS.411.1659L,2013MNRAS.433.1607L}. Furthermore, theoretical models propose that supermassive stars formed as a result of direct collapse are the potential embryos of supermassive black holes \citep{2008MNRAS.387.1649B,2010MNRAS.402..673B,2011MNRAS.414.2751B,2012ApJ...756...93H,2012MNRAS.421.2713B,2013A&A...558A..59S,2013ApJ...768..195W,2013arXiv1308.4457H}. Previous numerical simulations mainly focused on the hydrodynamics of the problem, while the role of magnetic fields during the formation of seed black holes via direct collapse remained largely unexplored. Magnetic fields are expected to influence the formation of black holes by exerting extra magnetic pressure and providing additional means for the transport of angular momentum by magnetic torques. Magnetic pressure may enhance the Jeans mass ($ M_{J,B} \propto B^{3}/ \rho^{2}$) and consequently help in suppressing fragmentation, which is a key requirement for the direct collapse model. The role of magnetic torques is expected to become significant in the central accretion disk, implying the presence of strong rotation measures and enhanced accretion rates. In fact, the detection of strong rotation measures in quasars at $z = 5.3$ indicates the relevance of magnetic fields in the early universe \citep{2012arXiv1209.1438H}. It is further known from observations of nearby active galactic nuclei that magnetic fields play a vital role in the transport of angular momentum \citep{1999Natur.397..324B,Beck05}. The standard model of cosmology does not provide any constraints on the initial magnetic field strength.
They could be generated via electro-weak or quantum chromodynamics phase transitions \citep{1996PhRvD..53..662B,1989ApJ...344L..49Q} or alternatively, during structure formation via mechanisms such as the Biermann battery effect, the Weibel instability \citep{1950ZNatA...5...65B,1959PhRvL...2...83W,2003ApJ...599L..57S} or thermal fluctuations in the plasma \citep{2012PhRvL.109z1101S}. In addition to the gravitational compression under the constraint of flux freezing, astrophysical dynamos can efficiently amplify the magnetic field, particularly the small scale dynamo which operates by converting the turbulent energy into the magnetic energy \citep{1968JETP...26.1031K,2005PhR...417....1B,Schobera,2013NJPh...15b3017S}. Numerous studies confirm that the small scale dynamo gets excited during structure formation provided that turbulent energy is well resolved \citep{2010A&A...522A.115S,2011ApJ...731...62F,2010ApJ...721L.134S,Schobera,2012ApJ...745..154T,Schoberb,2013NJPh...15a3055B,2013MNRAS.432..668L}. The amplification of magnetic fields by the small scale dynamo was further confirmed by \cite{Federrath11} for higher Mach numbers and by \cite{2012ApJ...760L..28P} for different thermodynamical conditions. A recent study by \cite{2013MNRAS.tmp.2194M} shows that the magnetic field may have a significant impact on the formation of Pop III stars as it strongly influences the fragmentation properties of a gas cloud. In the context of black hole formation via direct collapse, we have shown in our previous study \citep{2013MNRAS.432..668L} that for a Jeans resolution of 64 cells, the small scale dynamo gets excited and exponentially amplifies the magnetic field. It is thus expected that magnetic fields can influence the formation of seed black holes. A recent study by \cite{2014ApJ...782..108S} further shows that the radiation source can aid the generation of magnetic fields. 
In this study, we explore for the first time the impact of magnetic fields on the fragmentation properties of atomic cooling halos, the potential birthplaces of supermassive black holes. To accomplish this goal, we perform high resolution cosmological magnetohydrodynamical simulations for four distinct halos and employ a fixed resolution of 64 cells per Jeans length during the entire course of the simulations. To investigate the impact of saturated magnetic fields on fragmentation, the initial seeds of higher magnetic field strength are selected based on the results of our previous study \citep{2013MNRAS.432..668L}. We employ a constant background Lyman Werner flux of strength $\rm 10^3$ in units of $\rm J_{21}$ and follow the collapse for a few free fall times by evolving the simulations beyond the formation of the first peak. This study enables us to assess the role of magnetic fields in the assembly of supermassive black holes via direct collapse. The organization of this article is as follows. In the second section, we describe the numerical methods and the simulation setup employed in this work. The main results from this study are presented in the third section. We discuss our conclusions and summarize the main findings in the fourth section. \section{Computational methods} The simulations presented here are performed with the publicly available cosmological magnetohydrodynamics code ENZO \citep{2004astro.ph..3044O,2013arXiv1307.2265T}. It is a massively parallel code and very well suited for simulations following the collapse from cosmological scales down to AU scales. The equations of magnetohydrodynamics (MHD) are solved employing the Harten-Lax-van Leer (HLL) Riemann solver with a piece-wise linear reconstruction. The Dedner scheme \citep{2008ApJS..176..467W,2010NewA...15..581W} is employed for divergence cleaning. We start our simulations at $ z=$ 100 with cosmological initial conditions which are generated using the inits package.
Our computational volume has a comoving size of 1~$\rm Mpc/h$ and periodic boundary conditions are employed both for magneto-hydrodynamics and gravity. We initially run uniform grid simulations with $\rm 128^3$ cells to select the most massive halos forming in our computational domain for various random seeds. The simulations are then restarted with two additional nested refinement levels, each with a resolution of $\rm 128^3$ cells, centered on the most massive halo. To follow the dark matter dynamics, 5767168 particles are initialized, which provides a particle mass resolution of $\rm \sim 600~M_{\odot}$. During the course of the simulations, 27 additional dynamic refinement levels are employed, which yield an effective resolution of 0.25 AU. Apart from the fixed Jeans resolution of 64 cells, our refinement criteria are based on the gas over-density and the particle mass resolution. Cells exceeding four times the mean baryonic density are marked for refinement. Similarly, grid cells are flagged for refinement if the dark matter density exceeds 0.0625 times $ \rho_{DM}r^{l (1+ \alpha)}$, where $r=$ 2 is the refinement factor, $l$ is the refinement level and $\alpha =-0.3$ makes the refinement super-Lagrangian. Although the gravity of the baryons dominates in the core of the simulated halos, the smoothing of dark matter particles becomes essential in order to avoid spurious heating of the baryons. We smooth the particles on scales of 0.68 pc, which corresponds to refinement level 14. Our approach is similar to the simulations performed to explore gravitational collapse in previous studies \citep{2008ApJ...682..745W,2012ApJ...745..154T,2013MNRAS.433.1607L,2013ApJ...772L...3L}. The simulations are evolved adiabatically above densities of $\rm 10^{-11}~g/cm^{3}$ after reaching the maximum refinement level, in order to follow the collapse for several dynamical times. Such an approach makes the structures stable on the smallest scales while the collapse proceeds on larger scales.
We consider these adiabatic cores as proxies for supermassive protostars, which are expected to form at higher densities where cooling is suppressed by the continuum opacity \citep{2001ApJ...546..635O,2008ApJ...686..801O}. In total, we perform eight simulations for four distinct halos each with a weak and a strong initial seed field. The strength of the initial seed fields and the properties of the halos are listed in table \ref{table1}. The simulations are compared at a peak density of $\rm 7 \times 10^{-10}~g/cm^3$. Similar to our previous studies \citep{2013MNRAS.432..668L,2013MNRAS.430..588L}, we employ a strong Lyman Werner flux of strength $\rm 10^3$ in units of $\rm J_{21}=~erg~cm^{-2}~s^{-1}~Hz^{-1}~sr^{-1}$ for stellar spectra of $\rm 10^5$~K and ignore the effect of self-shielding. To model the thermal evolution of the gas, the rate equations of $\rm H$,~$\rm H^{+}$,~$\rm He$,~$\rm He^{+}$,~$\rm He^{++}$,~$\rm e^{-}$,~$\rm H^{-}$,~$\rm H_{2}$,~$\rm H_{2}^{+}$ are self consistently solved with cosmological simulations. 
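To give a feeling for how the fixed Jeans resolution translates into refinement depth, the following Python sketch (our own illustration with representative numbers, not code from the ENZO setup) computes the Jeans length for atomic-cooling gas and the refinement level needed to keep it resolved by 64 cells; the gas temperature, density, and top-grid cell size are assumed values:

```python
import numpy as np

G, k_B, m_H = 6.674e-8, 1.381e-16, 1.673e-24   # cgs units

def jeans_length(T_gas, rho, mu=1.22):
    """Jeans length lambda_J = c_s * sqrt(pi / (G rho)) for an ideal monatomic gas."""
    c_s = np.sqrt(5.0 / 3.0 * k_B * T_gas / (mu * m_H))   # adiabatic sound speed
    return c_s * np.sqrt(np.pi / (G * rho))

def required_level(T_gas, rho, dx_top, n_cells=64, r=2):
    """Smallest refinement level l such that dx_top / r**l resolves
    the Jeans length with at least n_cells cells."""
    dx_needed = jeans_length(T_gas, rho) / n_cells
    return max(0, int(np.ceil(np.log(dx_top / dx_needed) / np.log(r))))
```

For atomic-cooling gas at $T \approx 8000$~K and the adiabatic-core density of $10^{-11}\,\rm g\,cm^{-3}$, the Jeans length is of order $10^{15}$~cm, so resolving it with 64 cells starting from a kpc-scale top-grid cell requires of order 25--27 levels of factor-two refinement, consistent in magnitude with the setup described above.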
\begin{table*} \begin{center} \caption{Properties of the simulated halos are listed here.} \begin{tabular}{cccccc} \hline \hline Model & Initial Mass & spin parameter & Collapse redshift & Initial Magnetic field strength, & Fragmentation \\ & $\rm M_{\odot} $ & $\lambda$ & z & [$\rm Gauss $] & (For unsaturated cases) \\ \hline \\ A & $\rm 4.3 \times 10^{6}$ & 0.0309765 &11.3 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & No\\ B & $\rm 1.0 \times 10^{7}$ & 0.0338661 &12.8 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & Yes\\ C & $\rm 2.3 \times 10^{7}$ & 0.021782 &15.9 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & Yes\\ D & $\rm 1.9 \times 10^{7}$ & 0.0084786 &13.7 & $\rm 3 \times 10^{-20}$, $\rm 3 \times 10^{-11}$ & No\\ \hline \end{tabular} \label{table1} \end{center} \end{table*} \begin{figure*} \hspace{-10.0cm} \centering \begin{tabular}{c} \begin{minipage}{6cm} \includegraphics[scale=0.2]{MHDBmag-comp2.ps} \end{minipage} \end{tabular} \caption{This figure shows the density-weighted magnetic field strength for four halos at the end of our simulations. The top panels show the non-saturated cases while bottom panels depict the saturated cases. Both panels from left to right show the halos A to D as listed in table \ref{table1}.} \label{figh1} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{TimeBmag.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{TimeVRad.ps} \end{minipage} \end{tabular} \caption{Time evolution of the magnetic field strength and radial velocity is shown for halo A, the unsaturated case. Each line color represents different time evolution in units of years as mentioned in the legend. 
} \label{fig2} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{TimeBmagG.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{TimeVRadG.ps} \end{minipage} \end{tabular} \caption{Same as figure \ref{fig2}. Here we show the time evolution of these quantities for halo C, the unsaturated case.} \label{fig3} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{GRcompresRadius1.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{GRShearBRadius1.ps} \end{minipage} \end{tabular} \caption{The absolute values of the growth rates of magnetic field amplification of all halos are plotted against the radius in this figure. The left panel shows the magnetic growth rate due to the compression while right panel shows the growth rate due to the shear. The dotted lines present unsaturated cases while the solid lines stand for the saturated cases. For the definition of growth rate see the text and references therein. Each color represents a halo. } \label{fig5} \end{figure*} \begin{figure*} \centering \begin{tabular}{c} \begin{minipage}{6cm} \hspace{-1cm} \includegraphics[scale=0.4]{AVGUnGrowthRate.ps} \end{minipage} \end{tabular} \caption{Spherically averaged positive growth rate of the magnetic field for a representative case (unsaturated case, halo A) is plotted against the radius for the earlier and later times. The green line shows the magnetic growth rate at the beginning of accretion shock while red line shows the growth rate close to the saturation stage. 
For the definition of growth rate see the text and references therein.} \label{fign} \end{figure*} \begin{figure*} \centering \begin{tabular}{c} \begin{minipage}{6cm} \hspace{-1cm} \includegraphics[scale=0.4]{CompShearAmp.ps} \end{minipage} \end{tabular} \caption{The absolute values of the growth rates of magnetic field amplification for a representative halo are plotted against the radius in this figure. The green color shows the magnetic growth rate due to the compression while the blue color shows the growth rate due to the shear. The dotted lines represent unsaturated cases while the solid lines stand for the saturated cases. For the definition of growth rate see the text and references therein.} \label{fign1} \end{figure*} \begin{figure*} \hspace{-15.0cm} \centering \begin{tabular}{c} \begin{minipage}{3cm} \includegraphics[scale=1.5]{GRate.ps} \end{minipage} \end{tabular} \caption{This figure shows growth rates (absolute values) of magnetic field amplification by the shear (left) and compression (right) for unsaturated cases. The slices of growth rate are shown here centered at the peak density.} \label{figh6} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{SatEBEt1.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{SatEBEKinet1.ps} \end{minipage} \end{tabular} \caption{ The ratio of magnetic to turbulent energy (left panel) and magnetic to kinetic energy (right panel) is shown in the figure. B1, B3, B5, B7 represent the non-saturated cases while B2, B4, B6, B8 stand for the saturated field cases as listed in table \ref{table1}. } \label{fig7} \end{figure*} \begin{figure*} \centering \begin{tabular}{c} \begin{minipage}{6cm} \hspace{-1cm} \includegraphics[scale=0.4]{NewSatEBEt1.ps} \end{minipage} \end{tabular} \caption{The ratio of magnetic to turbulent energy is shown in the figure for the central region.
Dashed and solid lines represent saturated and non-saturated cases as indicated in the legend.} \label{fign2} \end{figure*} \begin{figure*} \vspace{-1.0cm} \hspace{-9.0cm} \centering \begin{tabular}{c} \begin{minipage}{8cm} \includegraphics[scale=0.8]{Halo_profile1.ps} \end{minipage} \end{tabular} \caption{ Radially binned, spherically averaged profiles are shown for the halos A, B, C and D. The solid lines represent saturated cases while the dashed lines stand for non-saturated cases. The top left and right panels show the density and enclosed mass radial profiles. The accretion rates and magnetic field strength radial profiles are depicted in the bottom left and right panels.} \label{figh2} \end{figure*} \begin{figure*} \hspace{-13.0cm} \centering \begin{tabular}{c} \begin{minipage}{6cm} \includegraphics[scale=0.22]{MHDDensity-comp2.ps} \end{minipage} \end{tabular} \caption{The state of the simulations is represented by the density-weighted mean along the axis of projection at the central peak density of $\rm 7 \times 10^{-10}~g/cm^{3}$. Non-saturated cases (top panel) and saturated cases (bottom panel) are shown for halos A to D (starting from the left).} \label{figh3} \end{figure*} \begin{figure*} \hspace{-13.0cm} \centering \begin{tabular}{c} \begin{minipage}{6cm} \includegraphics[scale=0.22]{MHDDensityG-comp.ps} \end{minipage} \end{tabular} \caption{The time evolution of the density-weighted mean along the axis of projection for the halo C.
The time in years after the formation of the first peak is shown in each case for the central 770 AU.} \label{figh31} \end{figure*} \begin{figure*} \centering \begin{tabular}{c c} \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{MagSupport.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{SatMagSupport.ps} \end{minipage} \\ \begin{minipage}{4cm} \hspace{-4cm} \includegraphics[scale=0.4]{MagSupportD.ps} \end{minipage} & \begin{minipage}{4cm} \includegraphics[scale=0.4]{SatMagSupportD.ps} \end{minipage} \end{tabular} \caption{The figure shows the contribution of the support terms for both saturated (right) and non-saturated (left) cases. The upper and lower panels represent two different halos. The solid lines represent the positive support of the quantities, while the dashed lines represent the negative support. For the definitions of the support terms see the text. The local support of thermal, turbulent and magnetic fields is shown in this figure.} \label{fig9} \end{figure*} \section{Main Results} In all, we have performed 8 cosmological magnetohydrodynamics simulations for four distinct halos, each with an initial magnetic field strength of $\rm 3 \times 10^{-20}$ G (hereafter called non-saturated cases) and $\rm 3 \times 10^{-11}$ G (hereafter called saturated cases). As shown by \cite{2013MNRAS.432..668L}, the latter implies an approximate equipartition between magnetic and turbulent energy at densities of $\rm 10^{-12}~g/cm^{3}$. The lower value is characteristic of magnetic field generation via the Biermann battery \citep{1950ZNatA...5...65B} or through thermal fluctuations \citep{2012PhRvL.109z1101S}, while the higher value may occur if magnetic fields are generated during the QCD or electroweak phase transition \citep{2004PhRvD..70l3003B}. The initial masses and collapse redshifts of the halos are listed in table \ref{table1}. The density perturbations decouple from the Hubble flow and start to collapse via gravitational instability.
The gas falls into the dark matter potential and gets shock heated. This process continues until the gas temperature exceeds $10^4$~K, where cooling due to Lyman alpha radiation becomes effective and brings the temperature down to 8000 K. Further cooling to lower temperatures remains suppressed due to the photo-dissociation of $\rm H_{2}$ molecules by the strong Lyman-Werner flux. Consequently, an isothermal collapse occurs. The density profile follows an $R^{-2}$ behavior as expected from an isothermal collapse. The radial infall velocity is about 10 $\rm km/s$. Overall, the collapse dynamics is similar to our previous studies during the initial phases \citep{2013MNRAS.432..668L,2013MNRAS.433.1607L}. In the following, we explore the amplification of magnetic fields during the collapse and their impact on fragmentation. \subsection{Amplification of magnetic fields} The simulations were started with initial seed magnetic fields as mentioned in table \ref{table1}. For both sets of runs, the magnetic field is mainly amplified by gravitational compression below densities of $\rm 10^{-12}~g/cm^{3}$. In this regime, the magnetic field strength scales as $B \propto \rho^{2/3}$, and the strength of the magnetic field at the end of our simulations remains much weaker in the non-saturated runs compared to the saturated cases, as shown in figure \ref{figh1}. It is found that the strength of the magnetic fields becomes almost equal at densities of $\rm 7 \times 10^{-10}~g/cm^3$ for both the saturated and non-saturated cases during the transition to the adiabatic evolution after reaching the maximum refinement level. This is evident from figure \ref{figh1}. As we will show in the following, this rapid amplification in the non-saturated runs is due to the occurrence of strong accretion shocks. At densities above $\rm 10^{-11}~g/cm^3$, the evolution becomes adiabatic, and a stable core is formed, which reaches a state of hydrostatic equilibrium.
This core is considered as a proxy for a supermassive protostar. The infall of the gas onto the central core results in the formation of accretion shocks. To investigate the rapid amplification of non-saturated magnetic fields in accretion shocks, we show the time evolution of the magnetic field strength and the radial velocity profiles for two representative cases in figures \ref{fig2} and \ref{fig3}. It can be noted from the figures that the amplification of magnetic fields is closely related to the radial infall velocity. The radial velocity increases from 10 $\rm km/s$ to 30 $\rm km/s$ (or even higher) within a time scale of about 1 year. Similarly, the field strength is amplified by a few orders of magnitude during the same time. The profile of the radial infall velocity is smooth in the beginning and becomes sharper as accretion shocks are formed. The sharp jump in the radial velocity profile around 100 AU is a typical signature of the accretion shocks. It may further be noted that the increase in density corresponding to different times in figures \ref{fig2} and \ref{fig3} is only about an order of magnitude. Under pure gravitational compression, $B$ should increase as $\rho^{2/3}$, which cannot explain such a large increase in the magnetic field strength. Moreover, amplification by gravitational compression would be more homogeneous within the Jeans volume. Thus, the amplification is due to the accretion shocks. As demonstrated by figure \ref{fig2}, the initial amplification occurs at the shock front enclosing the core, and the field subsequently grows inside the core until the end of the simulation is reached. Apart from turbulent diffusion, amplification by shear can contribute to the growth of the magnetic field in the core. Additional contributions may also come from the advection of the magnetic field into the core and diamagnetic pumping from a gradient in the turbulent intensity.
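The compression-only estimate invoked above can be checked with a two-line calculation (a sketch; the one-order-of-magnitude density increase is the value read off figures \ref{fig2} and \ref{fig3}, while the quoted "observed" amplification factor is purely illustrative):

```python
# Under pure gravitational compression of a roughly isotropic region,
# flux freezing gives B ~ rho^(2/3).  A density increase of about one
# order of magnitude therefore amplifies B only by a small factor,
# far short of the few orders of magnitude seen at the accretion shock.
density_increase = 10.0                      # ~1 dex, as in figures 2 and 3
b_amplification = density_increase ** (2.0 / 3.0)
print(f"B amplification from compression alone: {b_amplification:.2f}")

observed_amplification = 1.0e3               # "a few orders of magnitude" (illustrative)
print(f"factor the shocks must supply: {observed_amplification / b_amplification:.0f}")
```
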
In order to understand the different contributions to the magnetic field amplification, we have computed the growth rates of amplification both by turbulent shear and compression for all halos. The rate of change of the magnetic pressure in a fluid element moving with the flow can be computed from the source terms of the induction equation \citep{2013MNRAS.431.3196S}: \begin{equation} {D \over Dt} \left( {B^{2} \over 8 \pi} \right) = {1 \over 4 \pi}\left(B_{i}B_{j}S_{ij}^{*} - {2 \over 3}B^{2}d \right), \label{Bpres} \end{equation} where ${D \over Dt} = {\partial \over \partial t} + v \cdot \nabla$ is the Lagrangian derivative, $d=\nabla \cdot v$ is the velocity divergence and $S_{ij}^{*}=S_{ij}-{1 \over 3} d\,\delta_{ij}$ is the trace-free rate of strain tensor. Dividing both sides of equation (\ref{Bpres}) by $B^2/8\pi$, the first term on the right-hand side represents the growth rate of the magnetic energy by shear, while the second term is the growth rate by compression (both due to gravity and shocks). The absolute values of the growth rates by shear and compression are shown in figure \ref{fig5}. They increase towards smaller radii, peak around 100 AU and decline toward the center. Such a trend is observed for all halos, both for the saturated and the non-saturated cases. To further elucidate the differences in the amplification rates for the saturated and unsaturated cases, we have overplotted the amplification rates by shear and compression for a representative case in figure \ref{fign1}. The plot shows that the growth rate is higher for the unsaturated case. It is also noted that the growth rate is higher for shear than for compression. In figure \ref{fign}, we show the spherically averaged positive growth rate for a representative case (i.e., halo A, unsaturated run) at the start of the accretion shock and close to the saturation state. The strongest amplification occurs at the accretion shock on scales of a few hundred AU.
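Dividing equation (\ref{Bpres}) by $B^2/8\pi$ yields the shear and compression growth rates plotted in the figures. A minimal numerical sketch of this decomposition (not the actual analysis pipeline; \texttt{grad\_v} is a hypothetical $3\times3$ array of velocity derivatives $\partial v_j/\partial x_i$ in a single cell):

```python
import numpy as np

def growth_rates(B, grad_v):
    """Shear and compressive growth rates of B^2/8pi, i.e. the two
    source terms of the induction equation divided by B^2/8pi."""
    d = np.trace(grad_v)                      # velocity divergence
    S = 0.5 * (grad_v + grad_v.T)             # rate-of-strain tensor
    S_star = S - (d / 3.0) * np.eye(3)        # trace-free part
    b = B / np.linalg.norm(B)                 # unit field vector
    gamma_shear = 2.0 * b @ S_star @ b        # shear growth rate
    gamma_comp = -(4.0 / 3.0) * d             # compressive growth rate
    return gamma_shear, gamma_comp

# Pure isotropic contraction, v = -a*x: the shear term vanishes and the
# compressive rate is 4a.
a = 2.0
gs, gc = growth_rates(np.array([0.0, 0.0, 1.0]), -a * np.eye(3))
print(gs, gc)
```

For pure isotropic contraction this reproduces the $B \propto \rho^{2/3}$ scaling: $\rho$ then grows at the rate $3a$, so $B^2 \propto \rho^{4/3}$ grows at $4a$, entirely through the compressive term.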
When saturation occurs in the central region, the growth rate declines in the core and the peak in the growth rate shifts towards larger radii. This is a clear indication of magnetic field saturation on small scales, while amplification still proceeds on scales larger than 100 AU. The inverse of the growth rate gives the amplification time scale, which locally decreases to less than 0.1 year. The amplification time scale for compression is about half of the shear amplification time scale. We note here that even for compressively driven turbulence part of the energy (about $\rm 1/3-1/2$) lies in the solenoidal modes \citep{2010A&A...512A..81F}, thus naturally providing a two-to-one ratio of compressive and shear modes. The local variations in the growth rates are shown in figure \ref{figh6} for a representative case. The very short amplification time scale shows that the magnetic field can be amplified very rapidly in the presence of strong accretion shocks. In the center of the core, however, the amplification by compression and shear is weak. This suggests that the growth of the magnetic field inside the core (see figures \ref{fig2} and \ref{fig3}) is mainly caused by advection and turbulent diffusion. To further assess the amplification and saturation of magnetic fields, we have computed the ratio of the magnetic to turbulent energy, as shown in figure \ref{fig7}. It increases with density both for the saturated and the non-saturated cases. At the strong accretion shock the magnetic field amplification time scale becomes very short and rapid amplification happens for both saturated and non-saturated cases until the magnetic energy becomes comparable to the turbulent energy. To further clarify the differences between the saturated and unsaturated cases, we have plotted the ratio of magnetic to turbulent energy for the central region.
This figure shows that the magnetic energy is in equipartition with the turbulent energy and the magnetic field gets saturated, as evident from the change in the slope of $\rm E_{B}/E_{turb}$. The ratio of the magnetic to the total kinetic energy is depicted in the right panel of figure \ref{fig7}. It initially increases with density, gets enhanced rapidly by accretion shocks, reaches a peak value of $\rm 10^{-1}$ and then declines, which is an indication of magnetic field saturation. It is further noted that saturation occurs around densities of $\rm 10^{-10}~g/cm^{3}$, which is deduced from the change in the slope of the magnetic to kinetic energy ratio. In the saturated cases, the amplification is many orders of magnitude lower compared to their counterparts, which reach the same field strength from a much smaller initial value. This is expected, as the initial seed field is already in approximate equipartition with the turbulent energy, implying amplification predominantly by gravitational compression. \subsection{Implications for the formation of seed black holes} The central properties of the halos at their collapse redshifts are shown in figure \ref{figh2}. The density profile shows an $R^{-2}$ behavior as expected from an isothermal collapse and becomes flat in the central adiabatic core. This trend is observed for all cases. The small bumps in the density profiles for the non-saturated cases are due to the formation of additional clumps. The maximum value of the density is $\rm 7 \times 10^{-8}~g/cm^{3}$. The mass profile increases with $R^{2}$ in the center, becomes flat around 100 AU and then increases linearly with radius. The mass profiles are very similar for the saturated and non-saturated cases. The mass accretion rates are about 1 $\rm M_{\odot}/yr$ at larger radii and drop down to $\rm 10^{-3}~M_{\odot}/yr$ in the central adiabatic core.
The profile of the magnetic field strength shows that, irrespective of the initial seed field, the magnetic field reaches the saturation value in the presence of strong accretion shocks. Overall, the halo properties are in good agreement with previous studies \citep{2013MNRAS.433.1607L,2013arXiv1309.1097L}. The state of the simulations is shown by the density-weighted mean along the projection axis for four distinct halos in figure \ref{figh3}. It is found that massive clumps of a few hundred solar masses are formed in every halo, both for the saturated and the non-saturated runs. In addition to this, fragmentation is observed in two halos for the non-saturated cases. The masses of these clumps are a few tens of solar masses (20 $\rm M_{\odot}$ and 30 $\rm M_{\odot}$) and they are gravitationally bound. The suppression of fragmentation in the saturated cases is attributed to the additional magnetic pressure on larger scales. The time evolution of the density structure for halo C is shown in figure \ref{figh31}. The initially turbulent cloud collapses to form a massive clump within a few years. It keeps accreting gas from its surroundings and the formation of an additional clump can be seen after 10 years of evolution. We have investigated the impact of magnetic fields on the fragmentation properties of these halos and computed the local support terms for magnetic fields. The local support is derived from the source terms of the differential equation for the rate of compression of the gas \citep{2013MNRAS.431.3196S}: \begin{equation} -{D d \over Dt}= 4 \pi G \rho_{0}\delta -\Lambda . \end{equation} Here, $\delta$ is the overdensity relative to the mean density $\rho_0$ and $\Lambda$ is the local support against gravitational compression. $\Lambda$ receives contributions from the thermal pressure, resolved turbulence, and the magnetic fields.
The support by magnetic fields is \citep{2013MNRAS.431.3196S}: \begin{dmath} \Lambda_{\rm mag} = {1 \over 4 \pi \rho} \left[-{ \partial^{2} \over \partial x_{i} \partial x_{j}} \left( {1 \over 2} B^{2} \right) + {\partial B_{i} \over \partial x_{j}} {\partial B_{j} \over \partial x_{i}} \right] + \\ {1 \over 4 \pi \rho^{2}} {\partial \rho \over \partial x_{i}} \left[ { \partial \over \partial x_{i}} \left( {1 \over 2} B^{2} \right) + B_{j}{\partial B_{i} \over \partial x_{j}} \right] \end{dmath} For the definition of the thermal and turbulent support terms see \cite{2013MNRAS.431.3196S}, while a first application is presented by \cite{2013MNRAS.433.1607L}. Like the other support terms, the magnetic support has positive and negative components. The positive components provide support against gravity, while the negative components aid gravitational compression. The contributions of the local support terms against gravity are shown for two representative cases in figure \ref{fig9}. It is important to note that the contribution of the positive support by the magnetic field dominates over the turbulent and thermal pressure support in the vicinity of the accretion shocks at radii around 100 AU. The support by turbulence is dominated by the negative contribution from compression by accretion shocks. Negative support is a characteristic of compressible turbulence, particularly in the presence of shocks. For a detailed discussion of negative turbulent support see \cite{2013MNRAS.431.3196S}. Even stronger support comes from the thermal pressure, while the magnetic support is sub-dominant near the center. For the saturated field cases, the large positive support from magnetic fields helps in the suppression of fragmentation on radial scales ranging from less than 100 to about 1000 AU, which encompasses the fragmentation scale in figure \ref{figh3}. Particularly at radii outside the accretion shock, this is a result of the initially larger magnetic field.
For the unsaturated case, the amplification of the magnetic field produces support comparable to the saturated case only for a narrower range of scales around 100 AU. As numerical simulations tend to underestimate the physical amplification rate, a final conclusion on the role of initial field strength and dynamo support is, however, not possible at this stage. \section{Discussion} In total, we have performed 8 cosmological MHD simulations to investigate the role of magnetic fields during the formation of supermassive black holes. The simulations were carried out for four distinct halos with initial seed magnetic fields of $\rm 3 \times 10^{-20}$ G and $\rm 3 \times 10^{-11}$ G. The main motivation for the selection of the stronger magnetic field strength was to explore the impact of saturated magnetic fields on the fragmentation properties of so-called atomic cooling halos. To achieve this goal, we evolved the simulations adiabatically beyond the formation of the first peak for a few free-fall times until they reached the same peak density of $\rm 7 \times 10^{-10}~g/cm^{3}$. Our results show that irrespective of the initial seed field strength, the magnetic field gets amplified very rapidly in the presence of strong accretion shocks. This is indicated by the short time scale for compressive amplification compared to the free-fall time. The amplification is mainly caused by the shock fronts and the magnetic field is subsequently transported into the core by turbulent diffusion and similar processes until the magnetic energy grows to equipartition with kinetic energy. We therefore report a new mode of magnetic field amplification by the accretion shocks in atomic cooling halos as well as a possible contribution from the compressive turbulent modes driven by the accretion process. 
We further note that, while the adiabatic cores in our simulations were introduced to follow the collapse beyond the first peak, very similar cores are expected to form during the formation of protostars at higher densities \citep{2001ApJ...546..635O,2008ApJ...686..801O}. It is thus desirable to extend the calculations pursued here to the formation of protostars. We also emphasize that the turbulent amplification of magnetic fields depends strongly on the Reynolds number of the flow \citep{1968JETP...26.1031K,1998MNRAS.294..718S}. Since we cannot resolve all length scales down to the physical dissipation length scale, the actual amplification is probably even stronger. Such rapidly amplified magnetic fields may suppress fragmentation on even larger scales than shown in these simulations. Currently, however, fully resolved simulations are infeasible. A possible solution to this problem might be the application of subgrid-scale models for MHD turbulence. Our results indicate that magnetic fields are relevant for the formation of seed black holes, as they help in the suppression of fragmentation via additional magnetic pressure. The masses of the clumps at the end of our simulations are a few hundred solar masses and large accretion rates of about $\rm 1~M_{\odot}/yr$ are observed. Given such high accretion rates, these objects are expected to reach $\rm 10^5~M_{\odot}$ within a short time. The amount of fragmentation is significantly less compared to the hydrodynamical simulations performed in our previous study \citep{2013MNRAS.433.1607L}. The peak density reached in the MHD simulations is about 13 times lower than in the hydrodynamical case. Further differences may arise from the use of different Riemann solvers. We evolved these simulations only for a few free-fall times after the formation of the first peak. Further evolution of such high-resolution simulations becomes extremely costly due to the Courant constraints on the computation of the timestep.
However, we expect that the presence of the magnetic fields will be favorable for the formation of massive seed black holes as it suppresses the fragmentation. We have also shown in recent studies that subgrid scale turbulence helps in the formation of stable accretion disks and assembling massive objects of $\rm 10^5~M_{\odot}$ in 20,000 years via rapid accretion \citep{2013MNRAS.433.1607L,2013arXiv1309.1097L}. The presence of subgrid scale MHD turbulence may further help in the formation of accretion disks in magnetized halos. As our previous results indicated that accretion stalls when $\rm \sim 10^{5}~M_{\odot}$ are reached because of an increase in the rotational support, we speculate that magnetic fields may enhance angular momentum transport and increase the final mass scale. This requires cosmological MHD simulations following the accretion for even longer times. \section*{Acknowledgments} The simulations described in this work were performed using the Enzo code, developed by the Laboratory for Computational Astrophysics at the University of California in San Diego (http://lca.ucsd.edu). We acknowledge research funding by Deutsche Forschungsgemeinschaft (DFG) under grant SFB $\rm 963/1$ (projects A12, A15) and computing time from HLRN under project nip00029. DRGS thanks the DFG for funding via the Schwerpunktprogram SPP 1573 ``Physics of the Interstellar Medium'' (grant SCHL $\rm 1964/1-1$). The simulation results are analyzed using the visualization toolkit for astrophysical data YT \citep{2011ApJS..192....9T}.
\section{Introduction} \label{Intro} Jets are present in astrophysical sources with various spatial scales, from Young Stellar Objects (YSOs) to active galactic nuclei (AGN). These collimated outflows are generally considered to be the result of bipolar ejection of plasma, associated with accretion onto a central object \citep{Blandford_Payne}. Variability in the ejection speed can produce internal shocks, clearly seen in YSO jets in the form of bright optical knots \citep[e.g.,][]{raga1998,masciadri2002} called Herbig-Haro (HH) objects. Jets that arise from active galaxies can have relativistic speeds, and are well-known synchrotron radiation emitters \citep[see for example][]{tregillis2001,laing2006,gomez2008}. In contrast, YSO jets are non-relativistic and typically thermal radio sources. However, a few stellar sources, such as Serpens \citep{rodriguez89}, HH 80-81 \citep{marti95}, Cepheus-A \citep{garay96}, W3(OH) \citep{1999ApJ...513..775W}, and IRAS 16547-4247 \citep{Garay_IRAS}, present radio emission with a negative spectral index, interpreted as non-thermal (synchrotron) radiation. Notably, polarized radio emission was detected in the jet of HH 80-81 \citep{carrasco2010}. Therefore, an interesting question to answer is how jets with velocities of several hundred km s$^{-1}$, moving into a dense medium, are able to produce shocks where particles can be accelerated up to relativistic energies and produce synchrotron radio emission. Synchrotron maps have been computed from MHD numerical simulations by several authors in different contexts, such as pulsar wind nebulae, e.g. \citet{2006A&A...453..621D}, \citet{2008A&A...485..337V}; supernova remnants, e.g. \citet{orlando2007}; and accretion disks, e.g. \citet{2005ApJ...621..785G}. \citet{2010ApJ...725..750B} and \citet{2011ApJ...737...42P} have performed MHD numerical simulations of relativistic jets.
In particular, \citet{2011ApJ...737...42P} have performed MHD numerical simulations of relativistic AGN jets in order to study the synchrotron emission at the jet acceleration region by computing synthetic emission maps of the spectral index, polarization degree and Rotation Measure (RM). In the case of non-relativistic (YSO) jets, given the large densities of such jets at the launching region, the base of the jet is a thermal emitter. However, as the jet propagates, the density decreases and non-thermal signatures can appear. We present a polarization study in order to shed light on the non-thermal emission in protostellar jets. We model the synchrotron emission by using axisymmetric, magnetohydrodynamic (MHD) simulations, and we compute the resulting polarization map. The paper is organized as follows: in Section 2, we describe the model and the numerical setup; in Section 3 we show the results (synthetic radio, polarization, and X-ray emission maps); and in Section 4 we present our conclusions. \section{Numerical calculations} \begin{figure*}[] \centering \includegraphics[width=8cm]{croquis} \caption{Coordinate system employed for simulating synchrotron emission. The $rz$-plane is the 2D plane of our axisymmetric simulation. The plane of the sky or image plane is the $x^{\prime}z^{\prime}$-plane and the $y^{\prime}$-axis is the LoS.} \label{fig:croquis} \end{figure*} \subsection{Initial setup} Our study is based on 2.5D axisymmetric, MHD simulations carried out with the adaptive mesh refinement, Eulerian code \emph{Mezcal} \citep{decolle2006,decolle2008,decolle12}. We consider a 2D axisymmetric adaptive grid, with a size of $0.2$ and $0.5$~pc along the $r-$ and $z-$directions, respectively, and a maximum spatial resolution of $1.56 \times 10^{-4}$~pc, corresponding to 1280 $\times$ 3200 cells (at the maximum resolution) along the $r-$ and $z-$directions, and 6 levels of refinement.
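As a quick consistency check, the quoted cell counts follow directly from the domain size and the maximum resolution (illustrative arithmetic only):

```python
# Domain of 0.2 pc x 0.5 pc at a maximum resolution of ~1.56e-4 pc
# corresponds to 1280 x 3200 cells at the finest refinement level.
r_size, z_size = 0.2, 0.5      # pc
dx = 0.2 / 1280                # = 1.5625e-4 pc, quoted as 1.56e-4 pc
print(round(r_size / dx), round(z_size / dx))   # -> 1280 3200
```
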
The environment in which the jet propagates is homogeneous, with a uniform density $n_{\rm env}=3000$~cm$^{-3}$, temperature $T_{\rm env}=100$~K, and magnetic field $B_0$. At every timestep the jet is imposed in the computational domain by rewriting the values of the physical parameters inside a region of the computational domain with $r<R_{\rm jet}=0.03$~pc and $z<0.003$~pc, with density $n_{\rm jet}=300$~cm$^{-3}$ \citep{araudo2012} and velocity $v_{\rm jet}$ (along the $z$-axis). The longitudinal magnetic field (imposed on the whole computational domain) is $B_z=B_0$, and the toroidal component is given by \citep{1989ApJ...344...89L} \begin{eqnarray} \centering B_{\phi}(r)= \left\{\begin{array}{ll} B_{\rm m} \left(\frac{r}{R_{\rm m}}\right) & \, \, \, 0 \leq r < R_{\rm m}; \\ B_{\rm m} \left(\frac{R_{\rm m}}{r}\right) & \, \, \, R_{\rm m} \leq r < R_{\rm jet}; \\ 0 & \, \, \, r \geq R_{\rm jet}, \end{array} \right. \label{btor} \end{eqnarray} where $R_{\rm m}=0.02$~pc, and $B_{\rm m}$ is given in Table \ref{tab:table1}. In models M4 and M5, $B_z$ and $B_{\rm m}$ are chosen so that $B$ ($=\sqrt{B_z^2+B_{\rm m}^2}$) is of the order of 0.1 mG \citep{carrasco2010,2007MNRAS.382..699C}. The jet pressure profile is constructed to ensure total pressure equilibrium at $t=0$: \begin{eqnarray} \centering p(r)= \left\{\begin{array}{ll} \frac{B_{\rm m}^2}{8\pi}\left(\beta_{\rm m} - \frac{r^2}{R_{\rm m}^2} \right) & \, \, \, 0 \leq r < R_{\rm m}; \\ \frac{B_{\rm m}^2}{8\pi}\left(\beta_{\rm m} - \frac{R_{\rm m}^2}{r^2} \right) & \, \, \, R_{\rm m} \leq r < R_{\rm jet}; \\ p_{\rm env} & \, \, \, r \geq R_{\rm jet}, \end{array} \right. \end{eqnarray} where $\beta_{\rm m}=p_{\rm env}/(B_{\rm m}^2/8\pi)$ and $p_{\rm env}=n_{\rm env} k_B T_{\rm env}$. We consider five different initial configurations.
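The pressure profile above is built precisely so that the total (thermal plus magnetic) pressure is independent of radius at $t=0$. A minimal sketch verifying this property, with arbitrary illustrative values in place of the model parameters of Table \ref{tab:table1}:

```python
import math

# Illustrative (not model) parameters, in arbitrary units.
R_m, R_jet = 0.02, 0.03
p_env, B_m, B_z = 1.0, 0.5, 0.3
beta_m = p_env / (B_m**2 / (8 * math.pi))

def B_phi(r):
    # Toroidal field profile, equation (btor).
    if r < R_m:
        return B_m * r / R_m
    if r < R_jet:
        return B_m * R_m / r
    return 0.0

def p(r):
    # Thermal pressure chosen for total pressure equilibrium.
    if r < R_m:
        return (B_m**2 / (8 * math.pi)) * (beta_m - (r / R_m)**2)
    if r < R_jet:
        return (B_m**2 / (8 * math.pi)) * (beta_m - (R_m / r)**2)
    return p_env

def p_total(r):
    return p(r) + (B_z**2 + B_phi(r)**2) / (8 * math.pi)

# Total pressure equals p_env + B_z^2/8pi at every radius.
for r in [0.0, 0.01, 0.02, 0.025, 0.05]:
    print(r, p_total(r))
```
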
Model M1 represents a continuous jet with constant injection velocity $v_{\rm jet} = v_0$, whereas models M2--M5 have a time-dependent injection velocity of the form \begin{equation} v_{\rm jet}= v_0 (1+ \Delta v \, \cos(\omega t)) , \label{vjvar} \end{equation} where $v_0 = 1000$~km$\,$s$^{-1}$ is the mean velocity of the flow, and $\omega = 2\pi/\tau$, $\tau=50$~yr and $\Delta v$ are the angular frequency, period, and amplitude of the variability, respectively. The values of $B_z$, $\Delta v$ and the maximum jet velocity $v_{\rm max}=v_0(1+\Delta v)$ for the different models are given in Table~\ref{tab:table1}. With these values, $v_{\rm jet}$ is in the range of $\sim 600-1400$~km~s$^{-1}$, as observed in HH80-81 \citep{marti95,marti98}. \begin{table*}[] \begin{center} \caption{Initial setup} \label{tab:table1} \begin{tabular}{@{}cccccc} \tableline \tableline Model& $B_{z}$[mG] & $B_{\rm m}$[mG]& $n_{\rm env}/n_{\rm jet}$ &$\Delta v$ & $v_{\rm max}$ [km~s$^{-1}$]\\ \tableline M1& $0$ & $0.1$ & $10$ & $0$ & 1000 \\ M2& $0$ & $0.1$ & $10$ & $0.2$ & 1200 \\ M3& $0$ & $0.1$ & $10$ & $0.4$ & 1400 \\ M4& $0.1/\sqrt{2}$ & $0.1/\sqrt{2}$ & $10$ & $0.4$ & 1400 \\ M5& $0.1/\sqrt{10}$ & $0.3/\sqrt{10}$ & $10$ & $0.4$ & 1400 \\ M6& $0.1/\sqrt{2}$ & $0.1/\sqrt{2}$ & $0.1$ & $0.3$ & 390 \\ \tableline \tableline \end{tabular} \end{center} \end{table*} We have also explored the case of a dense and slow jet (model M6). This model has the same parameters as model M4, except that $v_0=300$~km~s$^{-1}$, $\Delta v=0.3$, and the density of the jet and the surrounding medium are 1000 and 100 cm$^{-3}$, respectively (see Table~\ref{tab:table1}). \subsection{Synthetic emission maps} \begin{figure*}[] \centering \includegraphics[width=\textwidth]{fig1} \caption{Number density stratification maps, in units of $10^3$ cm$^{-3}$, displayed in linear color scale (see colorbar at the top). The black arrows depict the velocity field, with a scale shown at the bottom of the leftmost panel.
The integration time is $t=1500$ yr in all models.} \label{fig:1} \end{figure*} \begin{figure*}[] \centering \includegraphics[width=\textwidth]{fig2} \caption{Maps of the magnetic field intensity (see colorbar at the top for the scale, in units of mG) obtained for all models at an integration time of $1500$ yr.} \label{fig:2} \end{figure*} \subsubsection{Non-thermal radio emission and Stokes parameters} In this section, we present a brief description of the strategy used to compute the non-thermal synchrotron emission and the Stokes parameters. For details of the method we refer the reader to \citet{ghisellini2013}. See also \citet{rybicki86} for an in-depth description of synchrotron emission. Synchrotron emission in YSO jets is produced by relativistic electrons accelerated in internal and termination shocks (see Section 3.1). In the present study, we assume that there is a population of relativistic electrons with a power-law energy distribution: \begin{equation} n_e = K\ \gamma_e^{-p}, \label{ne} \end{equation} for $\gamma_{\rm min} \le \gamma_e \le \gamma_{\rm max}$ (where $\gamma_e$ is the Lorentz factor), and $n_e = 0$ otherwise. We fix $p = 2.1$ in our calculations and determine $K$ and $\gamma_{\rm min}$ assuming that the number density of non-thermal electrons is a fraction $\chi_{n}$ ($< 1$) of $n_{\rm g}$, the electron density of the gas (assuming as well that the plasma is composed of equal numbers of electrons and protons, i.e.
fully ionized hydrogen, in post-shocked regions): \begin{equation} \chi_{n}\ n_{\rm g}=\int^{\gamma_{\rm max}}_{\gamma_{\rm min}} K \gamma_e^{-p} d\gamma_e \sim K \frac{\gamma_{\rm min}^{-p+1}}{p-1}, \label{chine} \end{equation} and that the energy density is a fraction $\chi_{\epsilon}$ ($< 1$) of the gas kinetic energy density $\epsilon = m_p n_{\rm g} v_{\rm g}^2/2$: \begin{equation} \chi_{\epsilon} \epsilon = \int^{\gamma_{\rm max}}_{\gamma_{\rm min}} K \gamma^{-p}_e (\gamma_e-1) m_e c^2 d\gamma_e \sim m_e c^2 K \frac{\gamma_{\rm min}^{-p+2}}{p-2}, \label{chienergy} \end{equation} where $m_e$ is the rest mass of the electron and $c$ is the speed of light. From equations (\ref{chine}) and (\ref{chienergy}), we obtain \begin{equation} \gamma_{\rm min}=\frac{p-2}{p-1}\frac{\chi_{\epsilon} \epsilon}{\chi_n n_{\rm g} m_e c^2} \label{gamma0} \end{equation} and \begin{equation} K = (p-1) \,\chi_n \,n_{\rm g} \,\gamma^{p-1}_{\rm min}. \label{facK} \end{equation} \noindent We are interested in the study of the synchrotron radiation at frequencies $\nu \sim 1$~GHz. This emission is optically thin at frequencies larger than the self-absorption frequency \citep[equation 4.56 of][]{ghisellini2013} \begin{equation} \nu_{\rm sa}=\nu_{\mathrm L} \bigg[\frac{\pi^{3/2} e R K}{4 B} f_{\alpha}(p)\bigg]^{2/(p+4)}, \label{nut} \end{equation} where $\nu_{\mathrm L}=e B / (2 \pi m_e c)$ is the Larmor frequency, $e$ is the charge of the electron, and \citep[equation 4.52 of][]{ghisellini2013}: \begin{equation} f_{\alpha}(p)\simeq 3^{\frac{p+1}{2}}\bigg(\frac{1.8}{p^{0.7}}+\frac{p^2}{40}\bigg) . \label{falpha} \end{equation} By setting $R=10^{17}$~cm as the size of the emitting region, we obtain $\nu_{\rm sa}\simeq 3~\textrm{MHz}\ll 1~\textrm{GHz}$ for typical values of density and velocity in our simulations.
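The closure relations (\ref{chine})--(\ref{facK}) and the self-absorption estimate (\ref{nut}) can be evaluated in a few lines. In this sketch the fractions $\chi_n$, $\chi_\epsilon$ and the gas values are illustrative assumptions, not simulation data, and the rest-mass energy $m_e c^2$ is written explicitly so that $\gamma_{\rm min}$ is dimensionless:

```python
import math

# CGS constants
e = 4.803e-10        # statcoulomb
m_e = 9.109e-28      # g
m_p = 1.673e-24      # g
c = 2.998e10         # cm/s

p_idx = 2.1
chi_n, chi_eps = 1.0e-5, 1.0e-1       # illustrative (assumed) fractions
n_g = 1.0e3                            # cm^-3, illustrative post-shock density
v_g = 1.0e8                            # cm/s (~1000 km/s)
B = 1.0e-4                             # G (~0.1 mG)
R = 1.0e17                             # cm, size of the emitting region

eps = 0.5 * m_p * n_g * v_g**2         # gas kinetic energy density

# Eliminating K between the two closure integrals gives the
# (p-2)/(p-1) prefactor for gamma_min; K then follows from (chine).
gamma_min = (p_idx - 2) / (p_idx - 1) * (chi_eps * eps) / (chi_n * n_g * m_e * c**2)
K = (p_idx - 1) * chi_n * n_g * gamma_min**(p_idx - 1)

# Self-absorption frequency, equations (nut) and (falpha).
nu_L = e * B / (2 * math.pi * m_e * c)
f_alpha = 3**((p_idx + 1) / 2) * (1.8 / p_idx**0.7 + p_idx**2 / 40)
nu_sa = nu_L * (math.pi**1.5 * e * R * K / (4 * B) * f_alpha)**(2 / (p_idx + 4))

print(gamma_min, K, nu_sa / 1e6)
```

For these assumed values one obtains $\gamma_{\rm min}\sim 10$ and $\nu_{\rm sa}$ of a few MHz, consistent with the order-of-magnitude estimate quoted in the text.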
In the optically thin case, the synchrotron specific intensity, for the isotropic case, can be written as \citep[see equation 4.45 of][]{ghisellini2013}: \begin{equation} j_s(\nu) = \frac{3}{16}\frac{\sigma_T c K u_B}{\sqrt{\pi}\nu_L} f_j(p) \bigg(\frac{\nu}{\nu_L}\bigg)^{(1-p)/2} \label{jsnu} \end{equation} where $\sigma_T=6.65\times 10^{-25}\,\mathrm{cm^{2}}$ is the Thomson cross section, $u_B=B^2/(8\pi)$ is the magnetic energy density, and \begin{equation} f_j(p) \simeq 3^{p/2}(2.25/p^{2.2}+0.105) \label{fac_fj} \end{equation} \citep[see equation 4.46 of][]{ghisellini2013}. Using equations (\ref{gamma0}) and (\ref{facK}), equation (\ref{jsnu}) gives: \begin{equation} j_s(\nu)= K_s \chi^{1-2\alpha}_n \chi^{2\alpha}_{\epsilon} n_{\rm g} v_{\rm g}^{4\alpha} B_{\perp}^{\alpha +1} \nu^{-\alpha} \;, \label{jsnuf} \end{equation} where $B_{\perp}$ is the component of the magnetic field perpendicular to the line of sight (LoS) and $\alpha=(p-1)/2=0.55$ is the spectral index. The factor $K_s$ in equation (\ref{jsnuf}) is: \begin{equation} K_s=K_1 K_2 m_e^{1-3\alpha} c^{2-5\alpha} (\mu m_H)^{2\alpha} f_j(p) \label{fac_fs} \end{equation} where $\mu$ is the molecular weight and $m_H$ is the proton rest mass. The factors $K_1$ and $K_2$ are \begin{equation} K_1 = (p-1)\left(\frac{p-2}{p-1}\right)^{(p-1)} \label{fac_k1} \end{equation} and \begin{equation} K_2=\frac{3}{2^{7-3\alpha}}\sigma_T (\pi e)^{(2\alpha-1)/2} . \label{fac_k2} \end{equation} Synthetic synchrotron intensity maps are obtained from our 2D simulation in the following way. For each cell of our 2D axisymmetric grid we compute the synchrotron emissivity. The 2D plane of simulation ($rz$-plane) is tilted with respect to the plane of the sky (around the $x^{\prime }$-axis) by an angle $\varphi$, and it is revolved around the symmetry axis ($z$), sampling a large number of angles in the $\theta$ direction in order to obtain a ``3D distribution'' of the synchrotron emissivity.
Then, the emissivity $j_s(\nu)$ is integrated along the LoS ($I(\nu)=\int_{LoS} j_s(\nu){\rm d}y^{\prime}$), which was chosen to be $y^{\prime}$ (see Figure \ref{fig:croquis}). In this study, a dependence on the angle between the shock normal and the post-shocked magnetic field is not considered (see for instance \citealt{orlando2007,petruk2009,schneiter2015} for a discussion of the acceleration mechanisms in supernova remnants). From the synchrotron emission we have also carried out a polarization study by means of maps of the Stokes parameters $Q_B$ and $U_B$, which can be computed as: \begin{equation} Q_B(x^{\prime},z^{\prime},\nu)=\int_{\textrm{LoS}} f_0 j_s(\nu) \cos\left[ 2\phi(y^{\prime})\right] {\rm d}y^{\prime}, \label{factorQ} \end{equation} \begin{equation} U_B(x^{\prime},z^{\prime},\nu)=\int_{\textrm{LoS}} f_0 j_s(\nu) \sin\left[ 2\phi(y^{\prime})\right] {\rm d}y^{\prime}, \label{factorU} \end{equation} \citep[see e.g.][]{clarke1989,jun1996b}, where $(x^{\prime},z^{\prime})$ are the coordinates in the plane of the sky (see Figure \ref{fig:croquis}), ${\rm d}y^{\prime}$ is measured along the LoS, $\phi(y^{\prime})$ is the position angle of the local magnetic field on the plane of the sky, and \begin{equation} f_0=\frac{\alpha +1}{\alpha + 5/3} \end{equation} is the degree of linear polarization. The intensity of the linearly polarized emission is given by \begin{equation} I_P(x^{\prime},z^{\prime},\nu)= \sqrt{Q^2_B(x^{\prime},z^{\prime},\nu)+U^2_B(x^{\prime},z^{\prime},\nu)} \label{ipol} \end{equation} and the map of the position angle of the magnetic field (which gives the orientation of the magnetic field in the plane of the sky) is computed as \begin{equation} \chi_B(x^{\prime},z^{\prime})=\frac{1}{2}\tan^{-1}(U_B(x^{\prime},z^{\prime},\nu)/Q_B(x^{\prime},z^{\prime},\nu)). \label{chipol} \end{equation} \subsubsection{Thermal X-ray emission} We calculated the thermal emission by integrating the free-free emissivity $j_{\nu}(n_{\rm g},T)$ along the LoS.
In the low density regime, $j_{\nu}(n_{\rm g},T)=n^2_{\rm g}\ \Lambda(T)$, where $n_{\rm g}$ is the electron density and $T$ is the temperature. As with the non-thermal radio emission, we assume that the post-shocked medium is fully ionized. The function $\Lambda(T)$ was constructed with the {\sc chianti} atomic database \citep{dere1998} considering the energy range [0.15-8] keV and assuming solar metallicity. \section{Results} The numerical simulations with the initial conditions summarized in Table~1 were carried out until reaching an integration time of $1500$~yr. \subsection{Shocks in protostellar jets} \begin{figure*}[] \centering \includegraphics[width=16cm]{fig3} \caption{Comparison of synthetic maps of the intensity of the linearly polarized radio emission obtained for all models at $\nu=5$~GHz. } \label{fig:3} \end{figure*} Figure~\ref{fig:1} displays the number density stratification and the velocity field. As the jet interacts with the surrounding medium, it forms a double shock structure, where the environment gas is accelerated by a forward shock, and the jet plasma is decelerated by a reverse shock. This structure, as well as the contact discontinuity separating the shocked interstellar material from the shocked jet material, is clearly visible at the head of the jet shown in Figure~\ref{fig:1}. In all cases, a slow bow shock travels against the surrounding environment with velocities $\sim [200-260]$ km~s$^{-1}$. Internal shocks are present only in the models where $\Delta v \neq 0$. Several jumps in axial velocity are present which mark the position of the internal shocks and have values in the range $[400-500]$~km~s$^{-1}$. With these values and the Rankine-Hugoniot jump conditions, we can estimate internal shock velocities as large as 1000~km~s$^{-1}$.
These velocities, together with the fact that the internal shocks move in a low density medium with densities of the order of $200$ cm$^{-3}$, imply that they are adiabatic\footnote{The cooling distance \citep[see e.g. equation (6) of][]{raga2002} is larger than the jet radius, implying an adiabatic nature for the internal shocks.}. Note that in models M3--M5 ($\Delta v = 0.4$) the bow shock is significantly slower (200~km~s$^{-1}$) than in models M1 and M2 (260~km~s$^{-1}$). Figure~\ref{fig:2} displays the magnitude of the magnetic field $B$. In models M2 and M3 the jet variability is made evident by the presence of a thin Mach disk in several internal working surfaces. In contrast, the maps corresponding to models M4 and M5 have a more complex morphology with less defined working surfaces, and a larger cocoon structure due to the magnetic field along the symmetry axis. \subsection{Radio polarization} \begin{figure*}[] \centering \includegraphics[width=16cm]{fig4} \caption{Same as Figure \ref{fig:3} but displaying maps of the degree of polarization.} \label{fig:4} \end{figure*} Figure~\ref{fig:3} shows synthetic maps of the intensity of the linearly polarized radio emission $I_P$, at 5~GHz, for all models. The spatial resolution of these maps is $10^{-3}$~pc. We have considered that the jet axis is tilted 15$^\circ$ with respect to the plane of the sky. In model M1 ($\Delta v=0$) the jet develops a single radio knot associated with material within the Mach disk (see Figure~\ref{fig:3}, left panel). The extended feature observed at $z' \sim 6 \times 10^{17}$ cm, associated with an internal shock, is an artifact produced by the reflective boundary condition imposed along the symmetry axis.
In contrast, the other models ($\Delta v\neq 0$) display knotty radio structures produced by the internal shocks\footnote{We are considering a jet with a fixed axis, and thus the working surfaces move along the axis of symmetry of the jet and never exit the cocoon carved by the main bow shock. Thus, their interaction is with the previously ejected jet material and not with the external medium.}. These knots decrease in brightness with distance from the jet source. Model M4 ($B_z=B_{\rm m}$) also shows radio emission in the region behind the main bow shock. In model M5, which has $B_z=B_{\rm m}/3$ (see Table \ref{tab:table1}), the radio emission from this region is lower than in model M4. In Figure~\ref{fig:4} the degree of polarization of the synchrotron radiation ($I_P(x^{\prime},z^{\prime},\nu)/ I(x^{\prime},z^{\prime},\nu)$) shows an important result. Models M1-M3 exhibit a high degree of polarization of the synchrotron emission while model M4 displays strong variations toward the symmetry axis. This behavior is also observed in model M5, although to a lesser degree. These results can be understood considering that, in a helical magnetic field, emission from regions whose linear polarization directions are orthogonal to each other cancels out when the emission is integrated along a LoS nearly perpendicular to the symmetry axis of the jet. A decrease of the degree of polarization has been observed in the jet associated with HH 80-81 \citep{carrasco2010}. Figure~\ref{fig:5} displays maps of the distribution of the position angle of the magnetic field $\chi_B(x^{\prime},z^{\prime})$. As in observational studies, these $\chi_B(x^{\prime},z^{\prime})$ maps were constructed from the synthetic maps of the Stokes parameters $Q_B$ and $U_B$, using equations (\ref{factorQ}), (\ref{factorU}), and (\ref{chipol}).
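The depolarization mechanism invoked above can be made concrete with a toy discretization of equations (\ref{factorQ})--(\ref{chipol}). In the sketch below (plain Python; a uniform emissivity along the LoS is an assumption made purely for illustration), a constant field orientation yields the maximal fractional polarization $f_0$, whereas a position angle rotating by $\pi$ along the LoS, as for a helical field seen nearly side-on, cancels almost completely.

```python
import math

alpha = 0.55
f0 = (alpha + 1.0)/(alpha + 5.0/3.0)   # degree of linear polarization

def stokes(phis, js):
    """Discrete LoS sums mimicking eqs. (factorQ), (factorU), (ipol), (chipol)."""
    I = sum(js)
    Q = sum(f0*j*math.cos(2.0*phi) for j, phi in zip(js, phis))
    U = sum(f0*j*math.sin(2.0*phi) for j, phi in zip(js, phis))
    I_P = math.hypot(Q, U)
    # atan2 resolves the quadrant ambiguity of tan^-1(U/Q) in eq. (chipol)
    chi_B = 0.5*math.atan2(U, Q)
    return I, I_P, chi_B

n = 1000
js = [1.0]*n                                              # toy uniform emissivity
I1, Ip1, chi1 = stokes([0.3]*n, js)                       # constant orientation
I2, Ip2, _ = stokes([math.pi*i/n for i in range(n)], js)  # rotating orientation
print(Ip1/I1, Ip2/I2, chi1)
```

The first ratio equals $f_0\simeq 0.70$ for $\alpha=0.55$, while the second drops to essentially zero, mirroring the loss of polarization toward the symmetry axis in models M4 and M5.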
These maps show that for models M4 and M5, which display variations in the degree of polarization, the orientation of the magnetic field in the plane of the sky has a component parallel to the symmetry axis (giving additional support to the helical nature of the magnetic field), while in models M1-M3 the magnetic field is mostly perpendicular to it. A comparison between synchrotron and thermal X-ray emission for models M3 and M4 is shown in Figure \ref{fig:6}. For both models, the X-ray emission maps also display a knotty structure, although less defined than their radio counterparts. Most of the thermal X-ray emission comes from the environment material swept up by the main bow shock, as shown by \citet{2004A&A...424L...1B,2007A&A...462..645B,2010A&A...517A..68B} in hydrodynamic simulations. As mentioned above for the map of the linearly polarized intensity, the total synchrotron emission maps display bright knots close to the central source. These bright knots emit a radio flux, at 5 GHz, of the order of $1.5 \chi^{1-2\alpha}_n \chi^{2\alpha}_{\epsilon} \times 10^{-18}{\mathrm{erg\ s^{-1} sr^{-1}cm^{-2} Hz^{-1}}}$, which is (aside from setting the exact values of the fractions $\chi_n$ and $\chi_{\epsilon}$) in reasonable agreement with the flux reported by \citet{carrasco2010} for the knots in HH 80-81 (1 mJy/beam or $10^{-18}{\mathrm{erg\ s^{-1} sr^{-1}cm^{-2} Hz^{-1}}}$). Figure~\ref{fig:7} shows that the synchrotron emission for the case of a dense and slow jet (model M6) is 30 times lower than the emission for a lighter and faster jet (model M4). Furthermore, it is important to compare the magnitudes obtained for the non-thermal and thermal emission mechanisms at radio wavelengths. The thermal radio-continuum emission is optically thin for the parameters chosen in our simulations\footnote{The opacity $\tau_{\rm th}$ is $\ll 1$, considering equation (3) of \citet{velazquez2007}.}.
Therefore, one can compute the ratio of non-thermal to thermal emissivities (using equation (5) of \citet{velazquez2007} and equation (\ref{jsnu}) of this paper) in the bright radio knots of models M4 and M6 (located at $z' \simeq 1.8\times 10^{17}$~cm and $z' \simeq 0.5\times 10^{17}$~cm, respectively). For model M6 this ratio is $j_\mathrm{s}(\nu)/j_\mathrm{th}(\nu)=0.03$ at a frequency $\nu=5~\mathrm{GHz}$, while for model M4 the ratio is 1.4. Thus, a dense and slow jet does produce synchrotron emission, albeit at a level that is negligible compared to the thermal radio-continuum. \section{Discussion and Conclusions} \begin{figure*}[] \centering \includegraphics[width=16cm]{fig6} \caption{Same as Figure \ref{fig:3} but showing the position angle of the magnetic field $B$ (which is measured with respect to the $z'-$~axis, as indicated by the tick marks in the colorbar). } \label{fig:5} \end{figure*} \begin{figure*}[] \centering \includegraphics[width=6.5cm]{fig5} \caption{Comparison of synthetic synchrotron emission maps at $\nu=5$~GHz (left panels) with thermal X-ray emission maps (right panels) for models M3 and M4 (upper and bottom panels, respectively). The synchrotron emission is given in units of $\chi^{1-2\alpha}_n \chi^{2\alpha}_{\epsilon} {\mathrm{[erg\ cm^{-2}\ s^{-1}\ sr^{-1}\ Hz^{-1}]}}$, while the thermal X-ray emission is in units of ${\mathrm{erg\ cm^{-2}\ s^{-1}\ sr^{-1}}}$. } \label{fig:6} \end{figure*} \begin{figure*}[] \centering \includegraphics[width=6.5cm]{fig7} \caption{ Comparison of synthetic synchrotron emission maps, at 5~GHz, of model M4 (left panel) with model M6 (right panel). The synchrotron emission is given in units of $\chi^{1-2\alpha}_n \chi^{2\alpha}_{\epsilon} {\mathrm{[erg\ cm^{-2}\ s^{-1}\ sr^{-1}\ Hz^{-1}]}}$. } \label{fig:7} \end{figure*} \citet{carrasco2010} have shown the existence of polarized radio emission associated with the HH 80-81 protostellar jet. However, this issue had not been studied by means of HD or MHD simulations.
We present the results obtained from 2.5D MHD simulations of low- and high-density YSO jets. We have considered cases with both a constant and a time-dependent jet ejection velocity. Furthermore, cases in which the magnetic field is toroidal or helical were analyzed. Assuming a population of relativistic electrons which are accelerated in the jet shocks, we have used standard prescriptions to estimate their synchrotron emission. Our results indicate that while the thermal X-ray emission is dominated by the shocked environment material at the head of the jet \citep[in agreement with][]{raga2002,2004A&A...424L...1B,2007A&A...462..645B,2010A&A...517A..68B}, the non-thermal radio emission turns out to be dominated by the jet material inside the internal shocks. Also, radio maps reveal that the variability in jet velocity is important in generating bright knots of synchrotron emission, produced when slow jet material is caught up by faster jet material. Our models show that a jet with a toroidal magnetic field emits synchrotron radiation with a high degree of polarization. In contrast, models with a helical magnetic field exhibit a decrease in the degree of polarization, in good agreement with observational results \citep{carrasco2010}. Finally, our results indicate that non-negligible synchrotron emission can be obtained in low-density and high-velocity protostellar jets. \acknowledgments We thank the anonymous referee for her/his very useful comments, which helped us to improve the previous version of this manuscript. MC, PFV, FdC, and AE acknowledge financial support from CONACyT grants 167611 and 167625, CONICET-CONACyT grant CAR 190489, and DGAPA-PAPIIT (UNAM) grants IG 100214, IA 103115, IA 109715, IA 103315. A.T.A. acknowledges support from the UK Science and Technology Facilities Council under grant number ST/K00106X/1. C.C-G. acknowledges support by DGAPA-PAPIIT (UNAM) grant number IA 101214. LFR acknowledges support from CONACyT and DGAPA-PAPIIT (UNAM) grants.
We also thank Enrique Palacios for maintaining the Linux Server on which the simulations were carried out. PFV dedicates this work to the memory of Jes\'us Francisco Garc\'\i a Cos\'\i o, Mar\'\i a Guadalupe Gudelia Rold\'an, and Mar\'\i a Norma Brito.
\section{Introduction}\label{intro} Statistical mechanics is by now a rather mature branch of physics. For pure systems like a ferromagnet, it allows one to calculate such precise details as the behavior of the specific heat on approaching the Curie-point. We know that it diverges as a function of the distance in temperature to the Curie-temperature, we know that this divergence has the form of a power-law, we can calculate the exponent, and we can do this with at least 3 digits of accuracy. Best of all, these findings are in excellent agreement with the most precise experiments. This is a true success story of statistical mechanics. On the other hand, in nature no system is really pure, i.e.\ without at least some disorder (``dirt''). As experiments (and theory) seem to suggest, a little bit of disorder does not change the behavior much. Otherwise experiments on the specific heat of Helium would not confirm theoretical predictions so extraordinarily well. But what happens for strong disorder? By this we mean that disorder completely dominates over entropy. Then already the question: ``What is the ground-state?'' is no longer simple. This goes hand in hand with the appearance of so-called metastable states: states which in energy are very close to the ground-state, but which in configuration-space may be far apart. Any relaxational dynamics will take an enormous time to find the correct ground-state, and may fail altogether, as can be seen in computer-simulations as well as in experiments. This means that our way of thinking, taught in the treatment of pure systems, has to be adapted to account for disorder. We will see that in contrast to pure systems, whose universal large-scale properties can be described by very few parameters, disordered systems demand the knowledge of the whole disorder-distribution function (in contrast to its first few moments). We show how universality nevertheless emerges.
Experimental realizations of strongly disordered systems are glasses, or more specifically spin-glasses, vortex-glasses, electron-glasses and structural glasses (not treated here). Furthermore, random-field magnets and, last but not least, elastic systems in disorder. What is our current understanding of disordered systems? It is here that the success story of statistical mechanics, with which we started, comes to an end: Despite 30 years of research, we do not know much: There are a few exact solutions, there are phenomenological methods (like the droplet-model), and there is the mean-field approximation, involving a method called replica-symmetry breaking (RSB). This method is correct for infinitely connected systems, e.g.\ the SK-model (Sherrington Kirkpatrick model), or for systems with infinitely many components. However, it is unclear to which extent it applies to real physical systems, in which each degree of freedom is directly coupled only to a finite number of other degrees of freedom. Another interesting class of systems are elastic manifolds in a random medium, which have the advantage of being approachable by other (analytic) methods, while still retaining all the rich physics of strongly disordered systems. Here, we review recent advances. This review is an extended version of \cite{Wiese2002,Wiese2003a}. For lectures on the internet see \cite{LeDoussalWindsor2004,LeDoussalKITP2006,WieseKITP2006}. \section{Physical realizations, model and observables}\label{model} \begin{figure}[t] \centerline{\fig{0.25\textwidth}{domainwallrot}~~~\fig{0.7\textwidth}{ising}} \Caption{An Ising magnet at low temperatures forms a domain wall described by a function $u (x)$ (right).
An experiment on a thin Cobalt film (left) \protect\cite{LemerleFerreChappertMathetGiamarchiLeDoussal1998}; with kind permission of the authors.} \label{exp:Magnet} \end{figure} \begin{figure}[b] \centerline{\parbox{0.5\textwidth}{\fig{0.5\textwidth}{manip}} \parbox{0.445\textwidth}{\begin{minipage}{0.445\textwidth} \Fig{CL4X}\\ \Fig{SpatioTemp} \end{minipage}}} \Caption{A contact line for the wetting of a disordered substrate by Glycerine \protect\cite{MoulinetGuthmannRolley2002}. Experimental setup (left). The disorder consists of randomly deposited islands of Chromium, appearing as bright spots (top right). Temporal evolution of the retreating contact-line (bottom right). Note the different scales parallel and perpendicular to the contact-line. Pictures courtesy of S.~Moulinet, with kind permission.} \label{exp:contact-line} \end{figure} \begin{figure}[t]\label{f:vortex-lattic} \centerline{\parbox{0.47\textwidth}{\fig{0.47\textwidth}{vortex}}}\smallskip \Caption{Cartoon of an elastic lattice (e.g.\ vortex lattice) deformed by disorder. This is described by a vector $\vec u (x)$.} \end{figure} Before developing the theory to treat elastic systems in a disordered environment, let us give some physical realizations. The simplest one is an Ising magnet. Imposing boundary conditions with all spins up at the upper and all spins down at the lower boundary (see figure 1), at low temperatures, a domain wall separates a region with spin up from a region with spin down. In a pure system at temperature $T=0$, this domain wall is completely flat. Disorder can deform the domain wall, making it eventually rough again. Two types of disorder are common: random bond (which on a coarse-grained level also represents missing spins) and random field (coupling of the spins to an external random magnetic field). Figure 1 shows how the domain wall is described by a displacement field $u (x)$.
Another example is the contact line of water (or liquid Helium), wetting a rough substrate, see figure \ref{exp:contact-line}. (The elasticity is long range). A realization with a 2-parameter displacement field $\vec{u} (\vec x) $ is the deformation of a vortex lattice: the position of each vortex is deformed from $\vec x$ to $\vec x+ \vec u (\vec x)$. A 3-dimensional example is given by charge density waves. All these models have in common that they are described by a displacement field \begin{equation}\label{u} x\in \mathbb{R}^d \ \longrightarrow\ \vec u (x) \in \mathbb{R}^N \ . \end{equation} For simplicity, we set $N=1$ in the following. After some initial coarse-graining, the energy ${\cal H}={\cal H}_{\mathrm{el}}+{\cal H}_{\mathrm{DO}}$ consists of two parts: the elastic energy \begin{equation} {\cal H}_{\mathrm{el}}[u] = \int {\mathrm{d}} ^d x \, \frac{1}{2} \left( \nabla u (x)\right)^2 \end{equation} and the disorder \begin{equation}\label{HDO} {\cal H}_{\mathrm{DO}}[u] = \int {\mathrm{d}} ^{d} x \, V (x,u (x))\ . \end{equation} In order to proceed, we need to specify the correlations of disorder. Suppose that fluctuations $u$ in the transversal direction scale as \begin{equation}\label{roughness} \overline{\left[u (x)-u (y) \right]^{2}} \sim |x-y|^{2\zeta } \end{equation} with a roughness-exponent $\zeta <1$. Starting from a disorder correlator \begin{equation} \overline{V (x,u)V (x',u')} = f (x-x') R (u-u') \end{equation} and performing one step in the RG-procedure, one has to rescale more in the $x$-direction than in the $u$-direction. This will eventually reduce $f (x-x')$ to a $\delta $-distribution, whereas the structure of $R (u-u')$ remains visible. We therefore choose as our starting model \begin{equation}\label{DOcorrelR} \overline{V (x,u)V (x',u')} := \delta ^{d } (x-x') R (u-u') \ . \end{equation} There are a couple of useful observables. We already mentioned the roughness-exponent $\zeta $. The second is the renormalized (effective) disorder.
It will turn out that we actually have to keep the whole disorder distribution function $R (u)$, in contrast to keeping a few moments. Other observables are higher correlation functions or the free energy. \section{Treatment of disorder}\label{treat disorder} Having defined our model, we can now turn to the treatment of disorder. The problem is to average not the partition-function, but the free energy over disorder: $\overline{{\cal F}}=- k_{\mathrm{B}}T \, \overline{\ln Z} $. This can be achieved by the beautiful {\em replica-trick}. The idea is to write \begin{equation} \ln {\cal Z} = \lim_{n\to 0} \frac{1}{n}\left( {\mathrm{e}}^{n \ln {\cal Z}}-1 \right) = \lim_{n\to 0} \frac{1}{n}\left({\cal Z}^{n}-1 \right) \end{equation} and to interpret ${\cal Z}^{n}$ as the partition-function of an $n$ times replicated system. Averaging ${\mathrm{e}} ^{-\sum _{a=1}^{n}{\cal H}_{a}}$ over disorder then leads to the {\em replica-Hamiltonian} \begin{equation}\label{H} {\cal H}[u] = \frac{1}{T} \sum _{a=1}^{n}\int {\mathrm{d}} ^{d }x\, \frac{1}{2} \left(\nabla u_{a} (x) \right)^{2} -\frac{1}{2 T^{2}} \sum _{a,b=1}^{n} \int {\mathrm{d}} ^{d }x\, R (u_{a} (x)-u_{b} (x))\ . \end{equation} Let us stress that one could equivalently pursue a dynamic (see section \ref{s:dynamics}) or a supersymmetric formulation (section \ref{a5}). We therefore should not, and in fact do not, encounter problems associated with the use of the replica-trick, as long as we work with a perturbative expansion in $R$. \section{Flory estimates}\label{a1} Four types of disorder have to be distinguished, resulting in different universality classes: \begin{itemize} \item [ (i)] Random-Bond disorder (RB): short-range correlated potential-potential correlations, i.e.\ short-range correlated $R (u)$. \item [ (ii)] Random-Field disorder (RF): short-range correlated force-force correlator $\Delta (u):= -R'' (u)$.
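The $n\to 0$ limit in the replica identity can be illustrated on a toy disorder average. In the sketch below (plain Python; the log-normal toy partition function ${\cal Z}={\mathrm{e}}^{g}$ with Gaussian $g$ is an assumption made only to have an exact benchmark), the replica estimate $(\overline{{\cal Z}^{n}}-1)/n$ approaches the quenched average $\overline{\ln {\cal Z}}$ as $n\to 0$, while at $n=1$ it is visibly biased away from it, towards the annealed average $\ln\overline{{\cal Z}}$.

```python
import math, random

random.seed(0)
# Toy disorder: Z = exp(g), g ~ N(0, sigma^2).  Then the quenched average
# E[ln Z] = 0 exactly, while E[Z^n] = exp(n^2 sigma^2 / 2) differs from 1.
sigma = 0.5
samples = [random.gauss(0.0, sigma) for _ in range(200000)]

def replica_estimate(n):
    """(E[Z^n] - 1)/n over the sample; tends to E[ln Z] as n -> 0."""
    return (sum(math.exp(n*g) for g in samples)/len(samples) - 1.0)/n

quenched = sum(samples)/len(samples)   # direct Monte-Carlo estimate of E[ln Z]
print(replica_estimate(1.0), replica_estimate(0.01), quenched)
```

At $n=0.01$ the replica estimate already agrees with the quenched average to within the sampling error, whereas at $n=1$ the bias of order $\sigma^{2}/2$ is clearly visible.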
As the name says, this disorder is relevant for Random-field systems, where the disorder potential is the sum over all magnetic fields in say the spin-up phase. \item [(iii)] Generic long-range correlated disorder: $R (u)\sim |u|^{-\gamma }$. \item [(iv)] Random-Periodic disorder (RP): Relevant when the disorder couples to a phase, as e.g.\ in charge-density waves. $R (u)=R (u+1)$, supposing that $u$ is periodic with period 1. \end{itemize} To get an idea how large the roughness $\zeta$ becomes in these situations, one compares the contributions of elastic energy and disorder, and demands that they scale in the same way. This estimate was first used by Flory for self-avoiding polymers, and is therefore called the Flory estimate. Despite the fact that Flory estimates are conceptually crude, they often yield a rather good approximation. For RB this gives for an $N$-component field $u$: $\int_{x} (\nabla u)^{2} \sim \int_{x} \sqrt{\overline{VV}}$, or $ L^{d-2} u^2 \sim L^{d} \sqrt{L^{-d}u^{-N}} $, i.e.\ $u \sim L ^{\zeta }$ with \begin{equation}\label{a2} \zeta_{\mathrm{Flory}}^{\mathrm{RB}} = \frac{4-d}{4+N}\ . \end{equation} For RF it is $R''$ that is short-ranged, and we obtain \begin{equation}\label{a3} \zeta_{\mathrm{Flory}}^{\mathrm{RF}} = \frac{4-d}{2+N}\ . \end{equation} For LR, \begin{equation}\label{a4} \zeta_{\mathrm{Flory}}^{\mathrm{LR}} = \frac{4-d}{4+\gamma }\ . \end{equation} For RP, the amplitude of $u$ is fixed, and thus $\zeta_{\mathrm{RP}}=0$. \section{Dimensional reduction}\label{dimred} There is a beautiful and rather mind-boggling theorem relating disordered systems to pure systems (i.e.\ without disorder), which applies to a large class of systems, e.g.\ random field systems and elastic manifolds in disorder.
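These four estimates are easily tabulated. A minimal helper (Python; the comparison value quoted in the comment is the standard exact roughness of the $d=1$, $N=1$ random-bond directed polymer, not derived here):

```python
from fractions import Fraction

def zeta_flory(d, N=1, kind="RB", gamma=None):
    """Flory estimates of eqs. (a2)-(a4); the random-periodic class has zeta = 0."""
    if kind == "RB":
        return Fraction(4 - d, 4 + N)
    if kind == "RF":
        return Fraction(4 - d, 2 + N)
    if kind == "LR":
        return Fraction(4 - d, 4 + gamma)
    if kind == "RP":
        return Fraction(0)
    raise ValueError(kind)

# Random-bond directed polymer (d=1, N=1): Flory gives 3/5,
# close to (but below) the exact value 2/3.
print(zeta_flory(1, 1, "RB"), zeta_flory(3, 1, "RF"), zeta_flory(3, 1, "RP"))
```

Note that for the RF class with $N=1$ the Flory value $(4-d)/3$ happens to coincide with the result of the functional RG discussed below.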
It is called dimensional reduction and reads as follows\cite{EfetovLarkin1977}: \noindent {\underline{Theorem:}} {\em A $d$-dimensional disordered system at zero temperature is equivalent to all orders in perturbation theory to a pure system in $d-2$ dimensions at finite temperature. } Moreover the temperature is (up to a constant) nothing but the width of the disorder distribution. A simple example is the 3-dimensional random-field Ising model at zero temperature; according to the theorem it should be equivalent to the pure 1-dimensional Ising-model at finite temperature. But it has been shown rigorously that the former has an ordered phase, whereas we have all solved the latter and we know that there is no such phase at finite temperature. So what went wrong? Let us stress that there are no missing diagrams or any such thing, but that the problem is more fundamental: As we will see later, the proof makes assumptions, which are not satisfied. Nevertheless, the above theorem remains important since it has a devastating consequence for all perturbative calculations in the disorder: However clever a procedure we invent, as long as we do a perturbative expansion, expanding the disorder in its moments, all our efforts are futile: dimensional reduction tells us that we get a trivial and unphysical result. Before we try to understand why this is so and how to overcome it, let us give one more example. Dimensional reduction allows one to calculate the roughness-exponent $\zeta $ defined in equation (\ref{roughness}). We know (this can be inferred from power-counting) that the width $u$ of a $d$-dimensional manifold at finite temperature in the absence of disorder scales as $u\sim x^{(2-d)/2}$. Making the dimensional shift implied by dimensional reduction leads to \begin{equation}\label{zetaDR} \overline{\left[ u (x)-u (0) \right]^{2}} \sim x^{4-d} \equiv x^{2\zeta } \quad \mbox{i.e.}\quad \zeta =\frac{4-d}{2}\ .
\end{equation} \section{The Larkin-length, and the role of temperature}\label{Larkin} To understand the failure of dimensional reduction, let us turn to an interesting argument given by Larkin \cite{Larkin1970}. He considers a piece of an elastic manifold of size $L$. If the disorder has correlation length $r$, and characteristic potential energy $\bar f$, this piece will typically see a potential energy of strength \begin{equation} E_{\mathrm{DO}} = \bar f \left(\frac{L}{r} \right)^{\!\frac{d}{2}}\ . \end{equation} On the other hand, there is an elastic energy, which scales like \begin{equation} E_{\mathrm{el}} = c\, L^{d-2}\ . \end{equation} These energies are balanced at the {\em Larkin-length} $L=L_{c}$ with \begin{equation} L_{c} = \left(\frac{c^{2}}{\bar f^{2}}r^{d} \right)^{\frac{1}{4-d}} \ . \end{equation} More important than this value is the observation that in all physically interesting dimensions $d<4$, and at scales $L>L_{c}$, the membrane is pinned by disorder; whereas on small scales the elastic energy dominates. Since the disorder has a lot of minima which are far apart in configurational space but close in energy (metastability), the manifold can be in either of these minima, and the ground-state is no longer unique. However, exactly this is assumed e.g.\ in the proof of dimensional reduction, as is most easily seen in its supersymmetric formulation, see \cite{ParisiSourlas1979} and section \ref{a5}. Another important question is the role of temperature. In (\ref{roughness}), we had supposed that $u$ scales with the system size, $u\sim L^{\zeta}$. From the first term in (\ref{H}) we conclude that \begin{equation}\label{a8} T\sim L^{\theta}\ ,\qquad \theta =d-2+2 \zeta \end{equation} Temperature is irrelevant when $\theta >0$, which is the case for $d>2$, and, as long as $\zeta >0$, even below. The RG fixed point we are looking for will thus always be at zero temperature.
From the second term in (\ref{H}) we conclude that disorder scales as \begin{equation}\label{R-scaling} R\sim L^{d-4+4\zeta}\ . \end{equation} This is another way to see that $d=4$ is the upper critical dimension. \section{The functional renormalization group (FRG)}\label{FRG} Let us now discuss a way out of the dilemma: Larkin's argument (section \ref{Larkin}) or Eq.~(\ref{R-scaling}) suggests that $4$ is the upper critical dimension. So we would like to make an $\epsilon =4-d$ expansion. On the other hand, dimensional reduction tells us that the roughness is $\zeta =\frac{4-d}{2}$ (see (\ref{zetaDR})). Even though this is systematically wrong below four dimensions, it tells us correctly that at the critical dimension $d=4$, where disorder is marginally relevant, the field $u$ is dimensionless. This means that having identified any relevant or marginal perturbation (as the disorder), we find immediately another such perturbation by adding more powers of the field. We can thus not restrict ourselves to keeping solely the first moments of the disorder, but have to keep the whole disorder-distribution function $R (u)$. Thus we need a {\em functional renormalization group} treatment (FRG). Functional renormalization is an old idea going back to the seventies, and can e.g.\ be found in \cite{WegnerHoughton1973}. For disordered systems, it was first proposed in 1986 by D.\ Fisher \cite{DSFisher1986}. Performing an infinitesimal renormalization, i.e.\ integrating over a momentum shell \`a la Wilson, leads to the flow $\partial _{\ell} R (u)$, with ($\epsilon =4-d$) \begin{equation}\label{1loopRG} \partial _{\ell} R (u) = \left(\epsilon -4 \zeta \right) R (u) + \zeta u R' (u) + \frac{1}{2} R'' (u)^{2}-R'' (u)R'' (0)\ . \end{equation} The first two terms come from the rescaling of $R$ in Eq.\ (\ref{R-scaling}) and of $u$, respectively. The last two terms are the result of the 1-loop calculations, which are derived in appendix \ref{app:deriveRG}.
More important than the form of this equation is its actual solution, sketched in figure \ref{fig:cusp}. \begin{figure}[t] \centerline{\fig{13.4cm}{cuspform}} \Caption{Change of $-R'' (u)$ under renormalization and formation of the cusp.} \label{fig:cusp} \end{figure} After some finite renormalization, the second derivative of the disorder $R'' (u)$ acquires a cusp at $u=0$; the length at which this happens is the Larkin-length. How does this overcome dimensional reduction? To understand this, it is instructive to study the flow of the second and fourth moment. Taking derivatives of (\ref{1loopRG}) w.r.t.\ $u$ and setting $u$ to 0, we obtain \begin{eqnarray} \partial_{\ell} R'' (0) &=& \left(\epsilon -2 \zeta \right) R'' (0) + R''' (0)^{2} \ \longrightarrow \ \left(\epsilon -2 \zeta \right) R'' (0)\label{R2of0}\\ \partial_{\ell} R'''' (0) &=& \epsilon R'''' (0) + 3 R'''' (0)^{2} +4 R''' (0)R''''' (0) \ \longrightarrow\ \epsilon R'''' (0) + 3 R'''' (0)^{2}\label{R4of0} \ . \end{eqnarray} Since $R (u)$ is an even function, and moreover the microscopic disorder is smooth (after some initial averaging, if necessary), $R''' (0)$ and $R''''' (0)$ are 0, which we have already indicated in Eqs.\ (\ref{R2of0}) and (\ref{R4of0}). The above equations for $R'' (0)$ and $R'''' (0)$ are in fact closed. Equation (\ref{R2of0}) tells us that the flow of $R'' (0)$ is trivial and that $\zeta =\epsilon /2\equiv \frac{4-d}{2}$. This is exactly the result predicted by dimensional reduction. The appearance of the cusp can be inferred from equation (\ref{R4of0}). Its solution is \begin{equation} R'''' (0)\hskip0.1ex\raisebox{-1ex}[0ex][0.8ex]{\rule{0.1ex}{2.75ex}\hskip0.2ex} _{\ell}= \frac{c\,{\mathrm{e}}^ {\epsilon \ell }}{1-3\, c \left({\mathrm{e}}^ {\epsilon \ell} -1 \right)/ \epsilon }\ , \qquad c:= R'''' (0)\hskip0.1ex\raisebox{-1ex}[0ex][0.8ex]{\rule{0.1ex}{2.75ex}\hskip0.2ex} _{\ell=0}\ . \end{equation} Thus after a finite renormalization $R'''' (0)$ becomes infinite: the cusp appears.
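The quoted solution of the closed flow (\ref{R4of0}) can be checked numerically; the following minimal Python sketch (with illustrative values $\epsilon=1$, $c=0.1$) verifies the flow equation by a finite difference and locates the blow-up scale $\ell_{c}=\frac{1}{\epsilon}\ln\!\left(1+\frac{\epsilon}{3c}\right)$ at which the denominator vanishes:

```python
import math

eps, c = 1.0, 0.1            # illustrative values; c = R''''(0) at ell = 0

def f(ell):
    """Claimed solution of d f/d ell = eps f + 3 f^2 with f(0) = c."""
    E = math.exp(eps * ell)
    return c * E / (1 - 3 * c * (E - 1) / eps)

# verify the flow equation by a central finite difference at a few scales
for ell in (0.0, 0.5, 1.0):
    h = 1e-6
    lhs = (f(ell + h) - f(ell - h)) / (2 * h)
    rhs = eps * f(ell) + 3 * f(ell) ** 2
    assert abs(lhs - rhs) < 1e-4 * (1 + abs(rhs))

# the denominator vanishes at a finite scale: R''''(0) diverges, the cusp appears
ell_c = math.log(1 + eps / (3 * c)) / eps
assert f(0.99 * ell_c) > 100 * c
```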
By analyzing the solution of the flow-equation (\ref{1loopRG}), one also finds that beyond the Larkin-length $R'' (0)$ is no longer given by (\ref{R2of0}) with $R''' (0)^{2}=0$. The correct interpretation of (\ref{R2of0}), which remains valid after the cusp-formation, is (for details see below) \begin{equation} \partial_{\ell} R'' (0) = \left(\epsilon -2 \zeta \right) R'' (0) +R''' (0^{+})^{2} \label{R2of0after}\ . \end{equation} Renormalization of the whole function thus overcomes dimensional reduction. The appearance of the cusp also explains why dimensional reduction breaks down. The simplest way to see this is by redoing the proof for elastic manifolds in disorder, which in the absence of disorder is a simple Gaussian theory. Terms contributing to the 2-point function involve $R'' (0)$, $TR'''' (0)$ and higher derivatives of $R (u)$ at $u=0$, which come with ever higher powers of $T$. To obtain the limit of $T\to 0$, one sets $T=0$, and only $R'' (0)$ remains. This is the dimensional-reduction result. However, we just saw that $R'''' (0)$ becomes infinite. Thus $R'''' (0) T$ may also contribute, and the proof fails. \section{Measuring the cusp}\label{measurecusp} Until now the function $R(u)$, a quantity central to the FRG, was loosely described as an effective disorder correlator, which evolves under coarse-graining towards a non-analytic shape. It turns out that it can be given a precise definition as an {\em observable} \cite{LeDoussal2006b}. Hence it can {\em directly} be computed in numerical simulations, as we will discuss below, and in principle be measured in experiments. The cusp therefore is not a theoretical artefact, but a real property of the system, related to singularities or shocks which arise in the landscape of pinning forces. Moreover, these singularities are unavoidable for a glass with multiple metastable states.
Consider our interface in a random potential, and add an external quadratic potential well, centered around $w$: \begin{eqnarray} {\cal H}_{\mathrm{tot}}^{w}[u] = \frac{m^2}{2} (u(x)-w)^2 + {\cal H}_{\mathrm{el}}[u] + {\cal H}_{\mathrm{DO}}[u]\ . \end{eqnarray} In each sample (i.e.\ disorder configuration), and for a given $w$, one finds the minimum-energy configuration. This ground-state energy is \begin{eqnarray} \hat V(w) := \min_{u(x)} {\cal H}_{\mathrm{tot}}^{w}[u]\ . \end{eqnarray} It varies with $w$ as well as from sample to sample. Its second cumulant \begin{eqnarray} \overline{ \hat V(w) \hat V(w') }^c = L^d R(w-w') \label{defR} \end{eqnarray} defines a function $R(w)$ which is proven \cite{LeDoussal2006b} to be the same function as computed in the field theory, defined from the zero-momentum action \cite{LeDoussalWieseChauve2003}. Physically, the role of the well is to forbid the interface to wander off to infinity. The limit of small $m$ is then taken to reach the universal limit. The factor of volume $L^d$ is necessary, since the width $\overline{u^2}$ of the interface in the well cannot grow much more than $m^{-\zeta}$. This means that the interface is made of roughly $L/L_m$ pieces of internal size $L_m \approx 1/m$ pinned independently: (\ref{defR}) hence expresses the central limit theorem, and $R(w)$ measures the second cumulant of the disorder seen by any one of the independent pieces. \begin{figure}[t]\setlength{\unitlength}{1.4mm} \fboxsep0mm \centerline{\mbox{\fig{10cm}{compareRFRBchaos}}} \Caption{Filled symbols show numerical results for $Y(z)$, a normalized form of the interface displacement correlator $-R''(u)$ [Eq.\ (\ref{defDe})], for $D=2+1$ random field (RF) and $D=3+1$ random bond (RB) disorders. These suggest a linear cusp. The inset plots the numerical derivative $Y'(z)$, with intercept $Y'(0)\approx -0.807$ from a quadratic fit (dashed line).
Open symbols plot the cross-correlator ratio $Y_s(z)=\Delta_{12}(z)/\Delta_{11}(0)$ between two related copies of RF disorder. It does not exhibit a cusp. The points are for confining wells with width given by $M^2=0.02$. Comparisons to 1-loop FRG predictions (curves) are made with no adjustable parameters. Reprinted from \cite{MiddletonLeDoussalWiese2006}.} \label{f:Alan1} \end{figure} The nice thing about (\ref{defR}) is that it can be measured. One varies $w$ and computes (numerically) the new ground-state energy, finally averaging over many realizations. This has been performed recently in \cite{MiddletonLeDoussalWiese2006} using a powerful exact-minimization algorithm, which finds the ground state in a time polynomial in the system size. In fact, what was measured there are the fluctuations of the center of mass of the interface $u(w)=L^{-d} \int {\mathrm{d}}^d x\, u_0(x;w)$: \begin{eqnarray} \overline{[w-u(w)] [w'-u(w')] }^c = m^{-4} L^{-d} \Delta(w-w') \label{defDe} \end{eqnarray} \begin{figure}[b] \centerline{\rotatebox{90}{\qquad\qquad ~~ $w-u_{w}$} \includegraphics[width=8cm,viewport=125 255 450 440,clip]{./figures/shocks}} \centerline{\qquad $w$} \Caption{Discontinuous positions, ``shocks'', in $w-u_{w}$ as a function of $w$. Reprinted from \cite{MiddletonLeDoussalWiese2006}.} \label{f:Alan-Shocks} \end{figure}% which directly measures the correlator of the pinning force $\Delta(u)=-R''(u)$. To see why it measures the total force, write the equilibrium condition for the center of mass $m^2 [w-u(w)] + L^{-d} \int {\mathrm{d}}^d x\, F(x,u)=0$ (the elastic term vanishes if we use periodic boundary conditions). The result is represented in figure \ref{f:Alan1}. It is most convenient to plot the function $Y=\Delta(u)/\Delta(0)$ and normalize the $u$-axis to eliminate all non-universal scales. The plot in figure \ref{f:Alan1} is thus free of any parameter. It has several remarkable features. First, it clearly shows that a linear cusp exists in any dimension.
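The measurement protocol behind figure \ref{f:Alan1}, as well as the shocks discussed next, can be illustrated in a minimal $d=0$ toy model: a single degree of freedom $u$ in an uncorrelated random potential plus the quadratic well. The Python sketch below (all parameter values are illustrative) finds the ground state $u(w)$ for each well position $w$, checks that it advances through discontinuous jumps, and extracts the force-force correlator $\Delta(w-w')$ of Eq.~(\ref{defDe}):

```python
import numpy as np

rng = np.random.default_rng(0)
M, m2 = 400, 0.05            # grid points for u; curvature m^2 of the well
nsamp = 1500                 # number of disorder realizations
u = np.arange(M, dtype=float)
ws = np.arange(150, 250)     # well positions w, kept away from the edges

F = np.empty((nsamp, len(ws)))   # pinning force m^2 (w - u(w)) per sample
max_jump = 0.0
for s in range(nsamp):
    V = rng.normal(size=M)       # bare random potential (uncorrelated)
    E = 0.5 * m2 * (u[None, :] - ws[:, None]) ** 2 + V[None, :]
    uw = u[np.argmin(E, axis=1)]            # ground state u(w)
    jumps = np.diff(uw)
    assert np.all(jumps >= 0)               # u(w) is monotone in w ...
    max_jump = max(max_jump, jumps.max())
    F[s] = m2 * (ws - uw)

assert max_jump > 2   # ... but advances through shocks, skipping positions

# connected force-force correlator Delta(delta), cf. Eq. (defDe) with L^d = 1
Fc = F - F.mean(axis=0)
Delta = np.array([(Fc[:, : len(ws) - d] * Fc[:, d:]).mean() for d in range(30)])
assert Delta[0] > 0                  # finite variance of the pinning force
assert Delta[0] > abs(Delta[29])     # correlations decay with separation
```

The estimated $\Delta$ drops steeply away from zero separation, the numerical signature of the linear cusp; the monotonicity of $u(w)$ follows from a simple exchange argument for the minimizers.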
Next, it is very close to the 1-loop prediction. Even more remarkably, as detailed in \cite{MiddletonLeDoussalWiese2006}, the statistics is good enough to reliably compare the deviations to the 2-loop predictions obtained in section \ref{2loop}. What is the physics of the cusp in $\Delta(u)$? One easily sees in a zero-dimensional model, i.e.\ a particle on a line, $d=0$, that as $w$ increases, the position of the minimum $u(w)$ increases smoothly, except at a discrete set of positions $w=w_s$, where the system switches abruptly between two distant minima. Formally, one can show that the landscape of the force $-\hat V'(w)$ evolves, as the mass is lowered, according to a Burgers equation, known to develop finite-time singularities called ``shocks''. For details on this mapping see \cite{BalentsBouchaudMezard1996,LeDoussal2006b}. For an interface these shocks also exist, as can be seen in figure \ref{f:Alan-Shocks}. Note that when we vary the position $w$ of the center of the well, this is not a real motion; it just means finding the new ground state for each $w$. Literally ``moving'' $w$ is another very interesting possibility, which will be discussed in section \ref{s:dynamics} devoted to depinning \cite{LeDoussalWiese2006a,RossoLeDoussalWiese2006a}. \section{Rounding the cusp}\label{s:Rounding the cusp} As we have seen, a cusp non-analyticity necessarily arises at zero temperature, due to the switching between many metastable states. Interestingly, this cusp can be rounded by several effects: by non-zero temperature $T>0$, by chaos, or by a non-zero driving velocity (in the dynamics discussed below). It is easy to include the effect of temperature in the FRG equation to one loop \cite{ChauveGiamarchiLeDoussal2000}: \begin{equation}\label{tempRG} \partial _{\ell} R (u) = \left(\epsilon -4 \zeta \right) R (u) + \zeta u R' (u) + \frac{1}{2} R'' (u)^{2}-R'' (u)R'' (0) + \tilde T_\ell R''(u) \ .
\end{equation} $\tilde T_\ell = T {\mathrm{e}}^{- \theta \ell}$ is the dimensionless temperature. It eventually flows to zero, since temperature is an irrelevant variable, as discussed above. Although irrelevant, it has a profound effect. Clearly, the temperature in (\ref{tempRG}) acts as a diffusive term smoothing the cusp. In fact, at non-zero temperature there never is a cusp, and $R(u)$ remains analytic. The convergence to the fixed point is non-uniform. For fixed $u$, $R(u)$ converges to the zero-temperature fixed point, except near $u=0$, or more precisely in a boundary layer of size $u \sim \tilde T_\ell$, which shrinks to zero in the large-scale limit. Non-trivial consequences are: The curvature blows up as $R''''(0) \sim {\mathrm{e}}^{\theta \ell}/T \sim L^\theta/T$. One can show that this is related to the existence of thermal excitations (``droplets'') in the statics \cite{BalentsLeDoussal2004} and of ``barriers'' in the dynamics, which grow as $L^\theta$ \cite{BalentsLeDoussal2003}. Another case where rounding occurs is ``disorder chaos''. Disorder chaos is the possibility that a system has a completely different ground state at large scales upon a very slight change in the microscopic disorder (for instance, changing slightly the magnetic field in a superconductor). Not all types of disorder exhibit chaos. Its presence in spin glasses is still debated. Recently it was investigated for elastic manifolds, using FRG \cite{LeDoussal2006a}. One studies a model with two copies, $i=1,2$, each seeing slightly different disorder energies $V_{i} (x,u (x))$ in Eq.~(\ref{HDO}). The latter are mutually correlated Gaussian random potentials with a correlation matrix \begin{eqnarray}\label{ViVj} \overline{V_i(x,u) V_j(x',u')} = \delta^d(x-x') R_{ij}(u-u')\ . \end{eqnarray} At zero temperature, the FRG equations for $R_{11} (u) =R_{22} (u)$ are the same as in (\ref{1loopRG}).
The one for the cross-correlator $R_{12}(u)$ satisfies the same equation as (\ref{tempRG}) above, with $\tilde T_\ell$ replaced by $\hat T:=R''_{12}(0)-R''_{11}(0)$. The latter is a kind of fictitious temperature, whose flow must be determined self-consistently from the two FRG equations. As in the case of a real temperature, it results in a rounding of the cusp. The physics behind this is apparent from figure \ref{f:Alan-Shocks}, which shows the set of shocks in two correlated samples. Since the shocks are slightly and randomly displaced from each other, the cusp is rounded. Chaos is obtained when $\hat T$ grows with scale, and occurs on scales larger than the so-called overlap length. The mutual correlations $C_{ij}(x-x') = \overline{\left< [u^i(x) - u^i(x')] [u^j(x) - u^j(x')]\right>}$ behave as $C_{ij}(x) = x^{2 \zeta} f(\delta x^\alpha)$, where $\delta$ quantifies the difference between the two disorders at the microscopic level. $C_{ij} (x)$ decays at large distance as $C_{ij}(x) \sim x^{2 \zeta - \mu}$ \cite{LeDoussal2006a}. \section{Beyond 1 loop}\label{beyond1loop} Functional renormalization has successfully been applied to a number of problems at 1-loop order. From a field theory, we however demand more, namely that it\medskip $\bullet$ allow for systematic corrections beyond 1-loop order\smallskip $\bullet$ be renormalizable\smallskip $\bullet$ and thus allow one to make universal predictions.\medskip \noindent However, this has been a puzzle since 1986, and it has even been suggested that the theory is not renormalizable due to the appearance of terms of order $\epsilon ^{\frac{3}{2}}$ \cite{BalentsDSFisher1993}. Why is the next order so complicated? The reason is that it involves terms proportional to $R''' (0)$. A look at figure \ref{fig:cusp} explains the puzzle. Shall we use the symmetry of $R (u)$ to conclude that $R''' (0)$ is 0?
Or shall we take the left-hand or right-hand derivative, related by \begin{equation} R''' (0^{+}) := \lim_{{u>0}\atop {u\to 0}} R ''' (u) = - \lim_{{u<0}\atop {u\to 0}} R ''' (u) =:- R''' (0^{-})\ ? \end{equation} In the following, we will present our solution of this puzzle, obtained at 2-loop order, at large $N$, and in the driven dynamics. \section{Results at 2-loop order}\label{2loop} For the flow-equation at 2-loop order, the result is \cite{LeDoussalWieseChauve2003,ChauveLeDoussalWiese2000a,Scheidl2loopPrivate,DincerDiplom,ChauveLeDoussal2001} \begin{eqnarray}\label{2loopRG} \partial _{\ell} R (u) &=& \left(\epsilon -4 \zeta \right) R (u) + \zeta u R' (u) + \frac{1}{2} R'' (u)^{2}-R'' (u)R'' (0) \nonumber \\ && + \frac{1}{2}\left(R'' (u)-R'' (0) \right)R''' (u)^{2}-\frac{1}{2}R''' (0^{+})^{2 } R'' (u) \ . \end{eqnarray} The first line is the result at 1-loop order, already given in (\ref{1loopRG}). The second line is new. The most interesting term is the last one, which involves $R''' (0^{+})^{2}$ and which we therefore call {\em anomalous}. The hard task is to fix its prefactor $(-\frac{1}{2})$. We have found five different prescriptions to calculate it: the sloop-algorithm, recursive construction, reparametrization invariance, renormalizability, and potentiality \cite{ChauveLeDoussalWiese2000a,LeDoussalWieseChauve2003}. For lack of space, we restrict our discussion to the last two. At 2-loop order the following diagram appears \begin{equation}\label{rebi} \diagram{subdiv}\ \longrightarrow\ \frac{1}{2}\left(R'' (u)-R'' (0) \right)R''' (u)^{2} -\frac{1}{2} R'' (u)R''' (0^{+})^{2} \end{equation} leading to the anomalous term. The integral (not written here) contains a sub-divergence, which is indicated by the box. Renormalizability demands that its leading divergence (which is of order $1/\epsilon ^{2}$) be canceled by a 1-loop counter-term. The latter is unique, thus fixing the prefactor of the anomalous term.
(The idea is to take the 1-loop correction $\delta R$ in Eq.~(\ref{80}) and replace one of the $R''$ in it by $\delta R''$ itself, which the reader can check leads to the terms given in (\ref{rebi}) plus terms which only involve even derivatives.) Another very physical demand is that the problem remain potential, i.e.\ that forces still derive from a potential. The force-force correlation function being $-R'' (u)$, this means that the flow of $R' (0)$ has to be strictly 0. (The simplest way to see this is to study a periodic potential.) From (\ref{2loopRG}) one can check that this does not remain true if one changes the prefactor of the last term in (\ref{2loopRG}); this fixes it. Let us give some results for cases of physical interest. First of all, in the case of a periodic potential, which is relevant for charge-density waves, the fixed-point function can be calculated analytically as (we choose period 1; the following is for $u\in \left[0,1 \right]$) \begin{equation} R^{*} (u) = - \left(\frac{\epsilon }{72}+\frac{\epsilon ^{2}}{108}+O (\epsilon ^{3}) \right) u^{2} (1-u)^{2} +\mbox{const.} \end{equation} This leads to a universal amplitude. In the case of random-field disorder (short-ranged force-force correlation function) $\zeta =\frac{\epsilon }{3}$, equivalent to the Flory estimate (\ref{a3}). For random-bond disorder (short-ranged potential-potential correlation function) we have to solve (\ref{2loopRG}) numerically, with the result \begin{equation} \zeta = 0.208 298 04 \epsilon +0.006858 \epsilon ^{2} + O(\epsilon^{3})\ . \end{equation} This compares well with numerical simulations, see figure \ref{fig:numstat}. It is also surprisingly close to, but distinctly different from, the Flory estimate (\ref{a2}), $\zeta=\epsilon/5$.
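The quoted fixed point can be checked directly at 1-loop order. With $\zeta=0$ for the periodic class, inserting $R^{*}(u)=-\frac{\epsilon}{72}u^{2}(1-u)^{2}$ into the right-hand side of (\ref{1loopRG}) must leave only a $u$-independent constant, which merely shifts the free energy. A small numerical sketch:

```python
import numpy as np

eps = 0.5                         # any value of eps works at 1-loop order
a = eps / 72                      # 1-loop amplitude of the periodic fixed point
u = np.linspace(0.0, 1.0, 101)

R   = -a * u**2 * (1 - u)**2
R2  = -a * (2 - 12 * u + 12 * u**2)     # R''(u), computed by hand
R20 = -2.0 * a                          # R''(0)

# right-hand side of the 1-loop flow equation (1loopRG) with zeta = 0:
beta = eps * R + 0.5 * R2**2 - R2 * R20

# at the fixed point the flow reduces to a pure constant (here -2 a^2):
assert np.allclose(beta, beta[0], atol=1e-15)
assert abs(beta[0] + 2 * a**2) < 1e-15
```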
\begin{figure}\centerline{\small \begin{tabular}{|c|c|c|c|c|} \hline $\zeta$ & one loop & two loop & estimate & simulation and exact\\ \hline \hline $d=3$ & 0.208 & 0.215 & $0.215\pm 0.01$ & $0.22\pm 0.01$ \cite{Middleton1995} \\ \hline $d=2$ &0.417 &0.444 &$0.42\pm 0.02$ & $0.41\pm 0.01$ \cite{Middleton1995} \\ \hline $d=1$ & 0.625 & 0.687 & $0.67\pm 0.02$ & $2/3$ \\ \hline \end{tabular}}\medskip \Caption{Results for $\zeta $ in the random-bond case.}\label{fig:numstat} \end{figure} \section{Finite $N$}\label{s:finiteN} Up to now, we have studied the functional RG for one component, $N=1$. The general case of finite $N$ is more difficult to handle, since derivatives of the renormalized disorder now depend on the direction in which the derivative is taken. Define the amplitude $u:=|\vec u|$ and the direction $\hat u:= \vec u/|\vec u|$ of the field. Taking derivatives of the latter variable leads to terms proportional to $1/u$, which diverge in the limit $u\to 0$. This poses additional problems in the calculation, and it is a priori not clear that the theory at $N\neq1$ exists, supposing it does for $N=1$. At 1-loop order everything is well-defined \cite{BalentsDSFisher1993}.
We have found a consistent RG-equation at 2-loop order \cite{LeDoussalWiese2005a}: \begin{figure}[b] \centerline{{\unitlength1mm \begin{picture} (90,55) \put(0,0){\fig{85mm}{Ncomp}} \put(5,52){$\zeta $} \put(86,3){$N$} \put(70,13){1-loop} \put(30,7){2-loop} \end{picture}} } \Caption{Results for the roughness $\zeta$ at 1- and 2-loop order, as a function of the number of components $N$.} \label{f:Ncomp} \end{figure} \begin{eqnarray}\label{2loopFPENcomp} \partial_{\ell } R(u) &=& (\epsilon - 4 \zeta) R(u) + \zeta u R'(u) +\frac{1}{2} R''(u)^2 - R''(0) R''(u) +\frac{N-1}{2} \frac{R'(u)}{u} \left(\frac{R'(u)}{u} - 2 R''(0)\right) \nonumber \\ &&+\frac{1}{2} \left( R''(u) - R''(0) \right) \,{R''' (u)}^2 +\frac{N{-}1}{2} \frac{{\left( R'(u) {-} uR''(u) \right) }^2\, ( 2 R'(u) {+} u(R''(u) {-}3 R''(0) ) )}{u^5} \nonumber \\ && -R'''(0^{+})^{2} \left[\frac{N+3}{8}R''(u)+\frac{N-1}{4}\frac{R'(u)}{u} \right] \ . \end{eqnarray} The first line is the 1-loop equation, given in \cite{BalentsDSFisher1993}. The second and third lines represent the 2-loop contributions, with the new anomalous terms proportional to $R''' (0^{+})^{2}$ (third line). The fixed-point equation (\ref{2loopFPENcomp}) can be integrated numerically, order by order in $\epsilon$. The result, specialized to directed polymers, i.e.\ $\epsilon =3$, is plotted in figure \ref{f:Ncomp}. We see that the 2-loop corrections are rather big at large $N$, so some doubt about the applicability of the expansion down to $\epsilon=3$ is warranted. However, both 1- and 2-loop results reproduce well the two known points on the curve: $\zeta =2/3$ for $N=1$ and $\zeta =0$ for $N=\infty$. The latter result will be derived in section \ref{largeN}.
Via the equivalence \cite{KPZ} of the directed-polymer problem in $N$ dimensions treated here with the KPZ-equation of non-linear surface growth in $N$ dimensions, which relates the roughness exponent $\zeta$ of the directed polymer to the dynamic exponent $z_{\mathrm{KPZ}}$ of the KPZ-equation via $\zeta =\frac{1}{z_{{\mathrm{KPZ}}}}$, we know that $\zeta (N=1)=2/3$. The line $\zeta =1/2$ drawn in figure \ref{f:Ncomp} plays a special role: In the presence of thermal fluctuations, we expect the roughness exponent of the directed polymer to be bounded by $\zeta \ge 1/2$. In the KPZ-equation, this corresponds to a dynamic exponent $z_{\mathrm{KPZ}}=2$, which via the exact scaling relation $z_{\mathrm{KPZ}}+\zeta_{\mathrm{KPZ}}=2$ is an upper bound in the strong-coupling phase. The above data thus strongly suggest that there exists an upper critical dimension in the KPZ-problem, with $d_{\mathrm{uc}}\approx 2.4$. Even though the latter value might be an underestimate, it is hard to imagine what could go wrong {\em qualitatively} with this scenario. The strongest objections will probably arise from numerical simulations, such as \cite{MarinariPagnaniParisi2000}. However, the latter use a discrete RSOS model, and the exponents are measured for interfaces which in large dimensions have the thickness of the discretization size, suggesting that the data are far from the asymptotic regime. We thus strongly encourage better numerical simulations of a continuous model, in order to settle this issue. \section{Large $N$}\label{largeN} In the last sections, we have discussed renormalization in a loop expansion, i.e.\ an expansion in $\epsilon$. In order to independently check its consistency, it is good to have a non-perturbative approach. This is achieved by the large-$N$ limit, which can be solved analytically and to which we turn now.
We start from \begin{eqnarray}\label{HlargeN} {\cal H}[\vec u,\vec j ] &=& \frac{1}{2T} \sum _{a=1}^{n}\int_{x} \vec u_{a} (x)\left(-\nabla^{2}{+}m^{2} \right) \vec u_{a} (x) - \sum _{a=1}^{n}\int_{x} \vec{j}_{a} (x)\vec{u}_{a} (x) \nonumber \\ && -\frac{1}{2 T^{2}} \sum _{a,b=1}^{n} \int_x B \left((\vec u_{a} (x)-\vec u_{b} (x))^{2} \right)\ , \end{eqnarray} where, in contrast to (\ref{H}), we use an $N$-component field $\vec{u}$. For $N=1$, we identify $B (u^{2} )=R (u)$. We have also added a mass $m$ to regularize the theory in the infra-red, and a source $\vec{j}$ to calculate the effective action $\Gamma (\vec u)$ via a Legendre transform. For large $N$ the saddle-point equation reads \cite{LeDoussalWiese2001} \begin{equation}\label{saddlepointequation} \tilde B' (u_{ab}^{2}) = B' \left(u_{ab}^{2}+2 T I_{1} + 4 I_{2} [\tilde B' (u_{ab}^{2})-\tilde B' (0)] \right)\ . \end{equation} This equation gives the derivative of the effective (renormalized) disorder $\tilde B$ as a function of the (constant) background field $u_{ab}^{2}= (u_{a}-u_{b})^{2}$ in terms of: the derivative of the microscopic (bare) disorder $B$, the temperature $T$, and the integrals $I_{n}:= \int_{k}\frac{1}{\left(k^{2}+m^{2} \right)^{n}}$. The saddle-point equation can again be turned into a closed functional renormalization group equation for $\tilde B$ by taking the derivative w.r.t.\ $m$: \begin{equation}\hspace{-0.9 cm} \partial _{\ell}\tilde B (x)\equiv -\frac{m \partial }{\partial m}\tilde B (x) =\left(\epsilon -4\zeta \right)\! \tilde B (x) + 2 \zeta x \tilde B' (x)+\frac{1}{2}\tilde B' (x)^{2}-\tilde B' (x) \tilde B' (0)+ \frac{\epsilon\, T \tilde B' (x)}{\epsilon +\tilde B'' (0)}\ . \end{equation} This is a complicated non-linear partial differential equation. It is therefore surprising that one can find an analytic solution. (The trick is to write down the flow-equation for the inverse function of $\tilde B' (x)$, which is linear.)
Let us only give the results of this analytic solution. First, for long-range correlated disorder of the form $\tilde B' (x)\sim x^{-\gamma }$, the exponent $\zeta $ can be calculated analytically as $\zeta =\frac{\epsilon }{2 (1+\gamma )}$. It agrees with the replica treatment in \cite{MezardParisi1991}, the 1-loop treatment in \cite{BalentsDSFisher1993}, and the Flory estimate (\ref{a4}). For short-range correlated disorder, $\zeta =0$. Second, the solution demonstrates that below the Larkin-length $\tilde B (x)$ is analytic, and thus dimensional reduction holds. Beyond the Larkin-length, $\tilde B'' (0)=\infty $, a cusp appears and dimensional reduction is incorrect. This shows again that the cusp is not an artifact of the perturbative expansion, but an important property even of the exact solution of the problem (here in the limit of large $N$). \section{Relation to Replica Symmetry Breaking (RSB)}\label{s:RSB} There is another treatment of the limit of large $N$, given by M\'ezard and Parisi \cite{MezardParisi1991}. They start from (\ref{HlargeN}) but {\em without}\/ a source-term $j$. In the limit of large $N$, a Gaussian variational ansatz of the form \begin{eqnarray}\label{HlargeNMP} {\cal H}_{\mathrm g}[\vec u] &=& \frac{1}{2T} \sum _{a=1}^{n}\int_{x} \vec u_{a} (x)\left(-\nabla^{2}{+}m^{2} \right) \vec u_{a} (x) -\frac{1}{2 T^{2}} \sum _{a,b=1}^{n} \sigma_{ab} \, \vec u_{a} (x)\vec u_{b} (x) \end{eqnarray} becomes exact. The art is to make an appropriate ansatz for $\sigma_{ab}$. The simplest possibility, $\sigma _{ab}=\sigma $ for all $a\neq b$, reproduces the dimensional-reduction result, which breaks down at the Larkin-length. Beyond that scale, a replica-symmetry-broken (RSB) ansatz for $\sigma _{ab}$ suggests itself.
To this aim, one can break $\sigma _{ab}$ into four blocks of equal size, choose one (variationally optimized) value for both off-diagonal blocks, and then iterate the procedure on the diagonal blocks, resulting in \begin{equation}\label{RSB} \sigma_{ab} = \left(\,\parbox{.25\textwidth}{\fig{.25\textwidth}{RSBmatrice}}\,\right)\ . \end{equation}\begin{figure}[b] \centerline{\fig{8cm}{MPfunction}} \Caption{The function $\left[\sigma \right] (u)+m^{2}$ as given in \protect\cite{MezardParisi1991}.} \vspace{-0.1cm}\label{fig:MP-function} \end{figure}% One finds that the more often one iterates, the better the result becomes. In fact, one has to repeat this procedure infinitely many times. This seems like a hopeless endeavor, but Parisi has shown that the infinitely-often replica-symmetry-broken matrix can be parameterized by a function $[\sigma] (z)$ with $z\in \left[0,1 \right]$. In the SK-model, $z$ has the interpretation of an overlap between replicas. While there is no such simple interpretation for the model (\ref{HlargeNMP}), we retain that $z=0$ describes distant states, whereas $z=1$ describes nearby states. The solution of the large-$N$ saddle-point equations leads to the curve depicted in figure \ref{fig:MP-function}. Knowing it, the 2-point function is given by \begin{equation}\label{RSBformula} \left< u_{k}u_{-k} \right>=\frac{1}{k^{2}+m^{2}}\left(1+\int_{0}^{1} \frac{{\mathrm{d}} z}{z^{2}} \frac{\left[\sigma \right] (z)}{k^{2}+\left[\sigma \right] (z)+m^{2}} \right)\ . \end{equation} The important question is: What is the relation between the two approaches, which both claim to calculate the same 2-point function? Comparing the analytical solutions, we find that the 2-point function given by the FRG is the same as that of RSB, if in the latter expression we only take into account the contribution from the most distant states, i.e.\ those for $z$ between 0 and $z_{m}$ (see figure \ref{fig:MP-function}).
To understand why this is so, we have to remember that the two calculations were done under quite different assumptions: In contrast to the RSB-calculation, the FRG-approach calculated the partition function in the presence of an external field $j$, which was then used, via a Legendre transformation, to obtain the effective action. Even if the field $j$ is finally tuned to 0, the system will remember its preparation, as is the case for a magnet: Preparing the system in the presence of a magnetic field will result in a magnetization which aligns with this field. The magnetization will remain even when the field is finally turned off. The same phenomenon happens here: By explicitly breaking the replica-symmetry through an applied field, all replicas will settle in distant states, and the close states from the Parisi-function $\left[\sigma \right] (z)+m^{2}$ (which describes {\em spontaneous} RSB) will not contribute. However, we found that the full RSB-result can be reconstructed by remarking that the part of the curve between $z_{m}$ and $z_{c}$ is independent of the infrared cutoff $m$, and then integrating over $m$ \cite{LeDoussalWiese2001} ($m_{c}$ is the mass corresponding to $z_{c}$): \begin{equation}\label{RSB=intFRG} \left< u_{k}u_{-k} \right>\Big|^{\mathrm{RSB}}_{k=0} =\frac{\tilde R'_{m}(0)}{m^{4}} +\int_{m}^{m_{c}} \frac{{\mathrm{d}} \tilde R'_{\mu}(0)}{\mu^{4}} + \frac{1}{m_{c}^{2}}-\frac{1}{m^{2}}\ . \end{equation} We also note that a similar effective action has been proposed in \cite{BalentsBouchaudMezard1996}. While it agrees qualitatively, it does not reproduce the correct FRG 2-point function, as it should. \section{Corrections at order $1/N$}\label{sec:1overN} In a graphical notation, we find \cite{LeDoussalWiese2004a} \begin{eqnarray} \delta B^{(1)}&=& \!\!\diagram{1oN1}\!\!+\!\!\!\diagram{1oN2}\!\!+\!\!\diagram{1oN3}\!\!+\!\!\!\diagram{1oN4}\!\!+\!\!\diagram{1oN5}\!\!
\nonumber \\ && +T\Big( \!\!\diagram{1oNT1a} \!\!+ \!\!\diagram{1oNT1b} \!\!+ \!\!\diagram{1oNT1cN} \!\!+ \!\!\diagram{1oNT1dN}\!\!+ \!\!\diagram{1oNT1b0} \!\!+ \!\!\diagram{1oNT1dN0}\!\! \Big)\nonumber \\ && + T^{2}\Big( \!\!\diagram{1oNT2a} \!\!+ \!\!\diagram{1oNT2bN} \!\!+ \!\!\diagram{1oNT2cN} + {\cal A}^{T^{2}}\Big)\\ \diagram{Bsummed}&=&B'' (\chi _{ab})\left(1-4A_{d} I_{2} (p)B'' (\chi _{ab}) \right)^{-1}\ ,\quad \diagram{B}=B(\chi_{ab})\ , \end{eqnarray} where the explicit expressions are given in \cite{LeDoussalWiese2004a}. By varying the IR-regulator, one can derive a $\beta$-function at order $1/N$, see \cite{LeDoussalWiese2004a}. At $T=0$, it is UV-convergent, and should allow one to find a fixed point. We have been able to do this at order $\epsilon$, showing consistency with the 1-loop result, see section \ref{s:finiteN}. Other dimensions are more complicated. A $\beta$-function can also be defined at finite $T$. However, since temperature is an irrelevant variable, it makes the theory non-renormalizable, i.e.\ in order to define the theory, one must keep an explicit infrared cutoff. These problems have not yet been settled. \section{Depinning transition}\label{s:dynamics} \begin{figure}[b] \centerline{\fig{0.4\textwidth}{velforchar}} \Caption{Velocity of a pinned interface as a function of the applied force. Zero force: equilibrium. $f=f_{c}$: depinning.} \label{f:vel-force} \end{figure} Another important class of phenomena for elastic manifolds in disorder is the so-called ``depinning transition'': Applying a constant force to the elastic manifold, e.g.\ a constant magnetic field to the ferromagnet mentioned in the introduction, the latter will only move if a certain critical threshold force $f_{c}$ is exceeded, see figure \ref{f:vel-force}. (This is fortunate, since otherwise the magnetic domain walls in the hard-disk drive on which this article is stored would move, erasing all information and depriving the reader of this text.)
At $f=f_{c}$, the so-called depinning transition, the manifold has a roughness exponent $\zeta$ (see Eq.~(\ref{roughness})) distinctly different from that at equilibrium ($f=0$). For $f>f_{c}$, the manifold moves, and close to the transition, new observables and corresponding exponents appear: \begin{itemize}\itemsep0mm \item the velocity exponent $\beta$, defined through the velocity-force characteristics of figure \ref{f:vel-force}, $$ v\sim |f-f_{c}|^{\beta}\ ; $$ \item the dynamic exponent $z$ relating correlation functions in the spatial and temporal directions, $$ t\sim x^{\,z}\ ; $$ \item a correlation length $\xi$ set by the distance to $f_{c}$, $$ \xi \sim |f-f_{c}|^{-\nu }\ ; $$ \item furthermore, the new exponents are not all independent, but satisfy the exponent relations \cite{NattermanStepanowTangLeschhorn1992} \begin{equation}\label{exp-relatons} \beta =\nu (z- \zeta ) \qquad \qquad \nu =\frac{1}{2-\zeta }\ . \end{equation} \end{itemize} The equation describing the motion of the interface is \begin{equation}\label{eq-motion} \partial_{t} u (x,t) = (\nabla^{2}+m^{2}) u (x,t) + F (x,u (x,t)) \ , \qquad F (x,u)=-\partial_{u} V (x,u)\ . \end{equation} This model has been treated at 1-loop order by Nattermann et al.~\cite{NattermanStepanowTangLeschhorn1992} and by Narayan and Fisher \cite{NarayanDSFisher1993a}. The 1-loop flow-equations are identical to those of the statics. This is surprising, since physically, the phenomena at equilibrium and at depinning are quite different. There is even the claim in \cite{NarayanDSFisher1993a} that the roughness exponent in the random-field universality class is exactly $\zeta =\epsilon /3$, as it is in the equilibrium random-field class. After a long debate among numerical physicists, the issue is today resolved: The roughness is significantly larger, and reads e.g.\ for the driven polymer $\zeta =1.25$, instead of $\zeta=1$ as predicted in \cite{NarayanDSFisher1993a}. Clearly, a 2-loop analysis \cite{LeDoussalWieseChauve2002} is necessary to resolve these issues.
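At 1-loop order, the relations (\ref{exp-relatons}) tie all exponents to $\zeta$ and $z$. The sketch below uses $\zeta=\epsilon/3$ together with the 1-loop dynamic exponent $z=2-\frac{2}{9}\epsilon$ (a literature value, quoted here without derivation) and, expanding everything to first order in $\epsilon$, reproduces the $\epsilon$-columns of the short-range-elasticity table in figure \ref{dyn-data}:

```python
def one_loop(eps):
    """1-loop depinning exponents, expanded to O(eps)."""
    zeta = eps / 3            # random-field roughness at depinning
    # nu = 1/(2 - zeta) and beta = nu*(z - zeta) with z = 2 - 2*eps/9
    # (assumed literature input), both expanded to first order in eps:
    nu = 0.5 + eps / 12
    beta = 1.0 - eps / 9
    return zeta, beta, nu

# epsilon-column of the short-range table, {d: (zeta, beta, nu)}
table = {3: (0.33, 0.89, 0.58), 2: (0.67, 0.78, 0.67), 1: (1.00, 0.67, 0.75)}
for d, ref in table.items():
    for got, want in zip(one_loop(4 - d), ref):
        assert abs(got - want) < 0.01
```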
Such a treatment starts from the dynamic action \begin{equation}\label{dyn-action} {\cal S} = \int_{x,t} \tilde u (x,t) (\partial_{t}-\nabla^{2}+m^{2}) u (x,t) +\int_{x,t,t'} \tilde u (x,t)\Delta (u (x,t)-u (x,t'))\tilde u (x,t')\ , \end{equation} where the ``response field'' $\tilde u (x,t)$ enforces the equation of motion (\ref{eq-motion}) and \begin{equation}\label{Delta} \overline{F (x,u) F (x',u')} = \Delta (u-u')\delta^{d} (x-x') \equiv -R'' (u-u') \delta^{d}(x-x') \end{equation} is the force-force correlator, leading to the second term in (\ref{dyn-action}). As in the statics, one encounters terms proportional to $\Delta' (0^{+})\equiv -R''' (0^{+})$. Here the sign problem can be solved uniquely by observing that the membrane only jumps ahead, \begin{equation}\label{jump-ahead} t>t'\qquad \Rightarrow \qquad u (x,t)\ge u (x,t')\ . \end{equation} In practice this means that when evaluating diagrams containing $\Delta (u (x,t)-u (x,t'))$, one splits them into two pieces, one with $t<t'$ and one with $t>t'$. Both pieces are well defined, even in the limit of $t\to t'$. The only trade-off of this method is that diagrams can become complicated and difficult to evaluate; however, they are always well defined.
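The exponent relations (\ref{exp-relatons}) can be checked against the 1-loop ($\epsilon$) column of the table in figure \ref{dyn-data}: inserting the 1-loop random-field values $\zeta=\epsilon/3$ and $z=2-\frac{2}{9}\epsilon$ (quoted in (\ref{zdyn})) and truncating consistently at order $\epsilon$ reproduces the quoted numbers. A sketch using sympy:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)

zeta = eps/3                    # 1-loop RF roughness at depinning
z = 2 - sp.Rational(2, 9)*eps   # 1-loop dynamic exponent

# scaling relations beta = nu*(z - zeta), nu = 1/(2 - zeta),
# both truncated consistently at O(epsilon)
nu = sp.series(1/(2 - zeta), eps, 0, 2).removeO()
beta = sp.expand(sp.series(nu*(z - zeta), eps, 0, 2).removeO())

# epsilon = 4 - d reproduces the 1-loop column: zeta, beta, nu
for d in (3, 2, 1):
    e = 4 - d
    print(d, [round(float(x.subs(eps, e)), 2) for x in (zeta, beta, nu)])
```

The truncated series are $\nu = \frac12 + \frac{\epsilon}{12}$ and $\beta = 1 - \frac{\epsilon}{9}$, giving e.g.\ $(\zeta,\beta,\nu)=(0.33,0.89,0.58)$ in $d=3$, as in the table.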
\begin{figure}[b] \scalebox{1.0}{ \begin{tabular}{|c|c|c|c|c|r|} \hline & $d$ & $\epsilon$ & $\epsilon^2$ & estimate & simulation~~~\\ \hline \hline & $3$ & 0.33 & 0.38 & 0.38$\pm$0.02 & 0.34$\pm$0.01 \\ \hline $\zeta$ & $2$ & 0.67 & 0.86 & 0.82$\pm$0.1 & 0.75$\pm$0.02 \\ \hline & $1$ & 1.00 & 1.43 & 1.2$\pm$0.2 & 1.25$\pm$0.01 \\ \hline \hline & $3$ & 0.89 & 0.85 & 0.84$\pm$0.01 & 0.84$\pm$0.02 \\ \hline $\beta$ & $2$ & 0.78 & 0.62 & 0.53$\pm$0.15 & 0.64$\pm$0.02 \\ \hline & $1$ & 0.67 & 0.31 & 0.2$\pm$0.2 & 0.25 \dots 0.4 \\ \hline \hline & $3$ & 0.58 & 0.61 & 0.62$\pm$0.01 & \\ \hline $\nu$ & $2$ & 0.67 & 0.77 & 0.85$\pm$0.1 & 0.77$\pm$0.04 \\ \hline & $1$ & 0.75 & 0.98 & 1.25$\pm$0.3 & 1$\pm$0.05 \\ \hline \end{tabular}}\hfill {% \begin{tabular}{|c|c|c|c|c|c|} \hline & $\epsilon $ & $\epsilon ^{2}$ & estimate & simulation \\ \hline \hline $\zeta $& 0.33 & 0.47 & 0.47$\pm$0.1 & 0.39$\pm$0.002 \\ \hline $\beta $ & 0.78 & 0.59 & 0.6$\pm $0.2 &0.68$\pm$0.06 \\ \hline $z$ &0.78 &0.66 &0.7$\pm $0.1 & 0.74$\pm$0.03 \\ \hline $\nu $ & 1.33 & 1.58 & 2$\pm$0.4 &1.52$\pm $0.02 \\ \hline \end{tabular}} \Caption{The critical exponents at the depinning transition, for short-range elasticity (left) and for long-range elasticity (right).} \label{dyn-data} \end{figure} Physically, this means that we approach the depinning transition from above. This is reflected in (\ref{jump-ahead}) by the fact that $u (x,t)$ may remain constant; and indeed correlation functions at the depinning transition are completely independent of time \cite{LeDoussalWieseChauve2002}. On the other hand, a theory for the approach to the depinning transition from below ($f<f_{c}$) has so far remained elusive. At the depinning transition, the 2-loop functional RG reads \cite{ChauveLeDoussalWiese2000a,LeDoussalWieseChauve2002} \begin{eqnarray}\label{two-loop-FRG-dyn} \partial_{\ell} R (u) \!&=&\!
(\epsilon -4 \zeta)R (u)+\zeta u R' (u) +\frac{1}{2}R'' (u)^{2} {-}R'' (u) R'' (0) \nonumber \\ &&+\frac{1}{2}\,\left[R'' (u)-R'' (0) \right] R''' (u)^{2} ~{\mbox{\bf +}}~ \frac{1}{2}\, R''' (0^{+})^{2} R'' (u) \end{eqnarray} First of all, note that it is a priori not clear that the functional RG equation, which is a flow equation for $\Delta (u)=-R'' (u)$, can be integrated to a functional RG equation for $R (u)$. We have chosen this representation here in order to make the difference from the statics evident: The only change is in the last sign on the second line of (\ref{two-loop-FRG-dyn}). This has important consequences for the physics: First, the roughness exponent $\zeta$ for the random-field universality class changes from $\zeta =\frac{\epsilon}{3}$ to \begin{equation}\label{zetaRFdyn} \zeta =\frac{\epsilon}{3} (1 +0.14331 \epsilon +\ldots ) \end{equation} Second, the random-bond universality class is unstable and always renormalizes to the random-field universality class, as is physically expected: Since the membrane only jumps ahead, it always experiences a new disorder configuration, and there is no way to know whether this disorder can be derived from a potential or not. Generalizing the arguments of section \ref{measurecusp}, it has recently been confirmed numerically that both RB and RF disorder flow to the RF fixed point \cite{LeDoussalWiese2006a,RossoLeDoussalWiese2006a}, and that this fixed point is very close to the solution of (\ref{two-loop-FRG-dyn}), see figure \ref{f:DeltaRosso}.\begin{figure}[t]\setlength{\unitlength}{1.4mm} \fboxsep0mm \psfrag{random-field-disorder-num}[][]{\small RF $m=0.071$, $L=512$} \psfrag{random-bond-disorder-num}[][]{\small RB $m=0.071$, $L=512$} \psfrag{y}[][]{\small $Y (z)$} \psfrag{x}[][]{\small $z$} \centerline{\fig{9cm}{Delta}} \Caption{Universal scaling form $Y (z)$ for $\Delta (u)$ for RB and RF disorder.
Reprinted from \cite{RossoLeDoussalWiese2006a}.} \label{f:DeltaRosso} \end{figure} This non-potentiality is most strikingly observed in the random periodic universality class, which is the relevant one for charge density waves. The fixed point for a periodic disorder of period one reads (remember $\Delta (u)=-R'' (u)$) \begin{equation}\label{rand-per-fp} \Delta^{*} (u) =\frac{\epsilon}{36}+\frac{\epsilon^{2}}{108} -\left(\frac{\epsilon}{6}+\frac{\epsilon^{2}}{9} \right) u (1-u) \end{equation} Integrating over a period, we find (suppressing in $F (x,u)$ the dependence on the coordinate $x$ for simplicity of notation) \begin{equation}\label{period} \int_{0}^{1}{\mathrm{d}} u \, \Delta^{*} (u) \equiv \int_{0}^{1}{\mathrm{d}} u\ \overline{F (u'+u) F (u')}= -\frac{\epsilon^{2}}{108}\ . \end{equation} In an equilibrium situation, this correlator would vanish, since potentiality requires $\int_0^{1}{\mathrm{d}} u\, F (u)\equiv 0$. Here, there are non-trivial contributions at 2-loop order (order $\epsilon^{2}$), violating this condition and rendering the problem non-potential. This same mechanism is also responsible for the violation of the conjecture $\zeta =\frac{\epsilon}{3}$, which could be proven under the assumption that the problem remains potential under renormalization. Let us stress that the breaking of potentiality under renormalization is quite a novel observation here. The other critical exponents mentioned above can also be calculated. The dynamical exponent $z$ (for RF disorder) reads \cite{ChauveLeDoussalWiese2000a,LeDoussalWieseChauve2002} \begin{equation}\label{zdyn} z=2-\frac{2}{9}\epsilon -0.04321\epsilon^{2} + \ldots \end{equation} All other exponents are related via the relations (\ref{exp-relatons}). That the method works well even quantitatively can be inferred from figure \ref{dyn-data}. \section{Supersymmetry}\label{a5} The use of $n$ replicas in the limit $n\to 0$ to describe disordered systems is often criticized for a lack of rigor.
It is argued that instead one should use a supersymmetric formulation. Such a formulation is indeed possible, both for the statics, as discussed in \cite{Wiese2004}, and for the dynamics, which we discuss below. Following \cite{ParisiSourlas1979}, one groups the field $u (x)$, a bosonic auxiliary field $\tilde u (x)$ and two Grassmann fields $\psi (x)$ and $\bar \psi (x)$ into a superfield $U (x,\bar \Theta , \Theta )$: \begin{equation}\label{superfielddef} U (x,\bar \Theta ,\Theta) = u (x)+ \bar \Theta \psi (x)+\bar \psi (x) \Theta + \Theta \bar \Theta \tilde u (x) \ . \end{equation} The action of the supersymmetric theory is \begin{equation}\label{17.2} {\cal S}_{\mathrm{Susy}}= \int {\mathrm{d}} \Theta {\mathrm{d}} \bar \Theta\int_{x} U (x,\bar \Theta ,\Theta) (\Delta_{s}) U (x,\bar \Theta ,\Theta)\ , \qquad \Delta_{s} := \nabla^{2}-\Delta (0) \frac{\partial}{\partial \bar \Theta}\frac{\partial}{\partial \Theta} \end{equation} It is invariant under the action of the supergenerators \begin{equation}\label{17.3} Q := x \frac{\partial}{\partial \Theta}-\frac{2}{\Delta (0)} \bar \Theta \nabla \ , \qquad \bar Q:=x \frac{\partial}{\partial \bar \Theta}+\frac{2}{\Delta (0)} \Theta \nabla\ . \end{equation} What do the fields mean? Upon integrating over $\bar \Theta$ and $\Theta$ before averaging over disorder, one obtains two terms: a term $\sim \int_{x} \tilde u (x) \frac{\delta {\cal H}}{\delta u (x)}$, i.e.\ the bosonic auxiliary field $\tilde u (x)$ enforces $\frac{\delta {\cal H}}{\delta u (x)}=0$, and a second term, bilinear in $\bar \psi$ and $\psi$, $\sim \int_{x}\bar \psi \frac{\delta^{2}{\cal H}}{\delta u^{2}}\psi$, which ensures that the partition function is one. Equation (\ref{17.2}) is nothing but the dimensional reduction result (\ref{zetaDR}) in supersymmetric disguise. What went wrong? Missing is the renormalization of $R (u)$ itself, which in the FRG approach leads to a flow of $\Delta (0)\equiv -R'' (0)$.
In order to capture this, one has to look at the supersymmetric action of at least two copies: \begin{equation}\label{17.4} {\cal S}[U_{a}]= \sum_{a}\int_{\Theta, \bar \Theta }\int_{x} U_{a} (x,\bar \Theta ,\Theta) (\Delta_{s}) U_{a} (x,\bar \Theta ,\Theta) -\frac{1}{2} \sum_{a\neq b} \int_{x}\int_{\bar \Theta ,\Theta} \int_{ \bar \Theta', \Theta'} R (U_{a} (x,\bar \Theta ,\Theta)-U_{b} (x,\bar \Theta ',\Theta' )) \ . \end{equation} Formally, we have again introduced $n$ replicas, but we do not take the limit $n\to 0$; so criticism of the latter limit cannot be applied here. After some slightly cumbersome calculations one reproduces the functional RG $\beta$-function at 1-loop order (\ref{1loopRG}). (Higher orders are also possible, and the SUSY method is actually helpful there \cite{Wiese2004}.) At the Larkin length, where the functional RG produces a cusp, the flow of $\Delta (0)$ becomes non-trivial, as given in (\ref{R2of0after}). Then the parameter $\Delta (0)$ in the supersymmetry generators (\ref{17.3}) is no longer a constant, and supersymmetry breaks down. This is, as was discussed in section \ref{s:RSB}, also the onset of replica-symmetry breaking in the Gaussian variational ansatz, valid at large $N$. Another way to introduce a supersymmetric formulation proceeds via the supersymmetric representation of a stochastic equation of motion \cite{Zinn}; a method e.g.\ used in \cite{Kurchan1992} to study spin glasses. The action then changes to \begin{equation}\label{17.5} {\cal S}[U]= \int_{xt}\int_{\Theta, \bar \Theta } U (x,\bar \Theta ,\Theta,t) (\Delta_{d}) U(x,\bar \Theta ,\Theta,t) -\frac{1}{2} \int_{xtt'}\int_{\bar \Theta ,\Theta} \int_{ \bar \Theta', \Theta'} R (U (x,\bar \Theta ,\Theta,t)-U (x,\bar \Theta ',\Theta' ,t')) \ .
\end{equation} with \begin{equation}\label{a9} \Delta_{d} = \nabla^{2} + \bar D D \ , \qquad \bar D = \frac{\partial}{\partial \Theta}\ ,\qquad D= \frac{\partial}{\partial \bar \Theta} -\Theta \frac{\partial}{\partial t}\ . \end{equation} It is invariant under the action of the super-generators $Q:= \frac{\partial}{\partial \bar \Theta} $ and $\bar Q := \frac{\partial}{\partial \Theta} +\bar \Theta \frac{\partial}{\partial t}$, since $\left\{Q,D \right\}=\left\{Q,\bar D \right\}=\left\{\bar Q,D \right\} =\left\{\bar Q,\bar D \right\}=0$. Different replicas now become different times, but the second cumulant is still bilocal in $\Theta$. However, the procedure is not much different from a pure Langevin equation of motion, as in (\ref{eq-motion}); in the latter equation It\^o discretization is already sufficient to ensure that the partition function is 1. The main advantage is the possibility to change the discretization procedure from It\^o over mid-point to Stratonovich without having to add additional terms. In this case, supersymmetry breaking means that the system falls out of equilibrium, i.e.\ the fluctuation-dissipation theorem (which is a consequence of one of the supersymmetry generators \cite{Zinn}) breaks down \cite{Kurchan1992}. \section{Random Field Magnets}\label{a6} Another domain of application of the functional RG is spin models in a random field. The model usually studied is \begin{eqnarray} {\cal H} = \int {\mathrm{d}}^d x\, \frac{1}{2} (\nabla \vec S)^2 + \vec h(x) \cdot \vec S(x) \ , \label{rf} \end{eqnarray} where $\vec S(x)$ is a unit vector with $N$ components, i.e.\ $\vec S(x)^2=1$. This is the so-called $O(N)$ sigma model, to which a random field has been added; the latter can be taken Gaussian, $\overline{h_i(x) h_j(x')} = \sigma \delta_{ij} \delta^d(x-x')$. In the absence of disorder the model has a ferromagnetic phase for $T<T_{\mathrm{f}}$ and a paramagnetic phase above $T_{\mathrm{f}}$.
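On a hypercubic lattice of unit spacing, the $\delta^d(x-x')$ in the correlator becomes a Kronecker delta, so the components $h_i(x)$ are simply i.i.d.\ Gaussians of variance $\sigma$. A minimal sketch of sampling this disorder (lattice size, $N$ and $\sigma$ are arbitrary illustrative choices):

```python
import numpy as np

# Sample h_i(x) with mean zero and
#   \overline{h_i(x) h_j(x')} = sigma * delta_ij * delta^d(x - x'),
# which on a unit-spacing lattice means i.i.d. normals of variance sigma.
rng = np.random.default_rng(0)
sigma, N, L, d = 0.5, 3, 64, 2
h = rng.normal(0.0, np.sqrt(sigma), size=(N,) + (L,)*d)

# empirical checks of the disorder average
print(h.mean())  # close to 0
print(h.var())   # close to sigma
```

Each disorder configuration `h` would then enter the Hamiltonian (\ref{rf}) through the Zeeman-like term $\vec h(x)\cdot\vec S(x)$.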
The lower critical dimension is $d=2$ for any $N \geq 2$, meaning that below $d=2$ no ordered phase exists. In $d=2$, only a paramagnetic phase exists for $N>2$; for $N=2$, the XY model, quasi long-range order exists at low temperature, with $\overline{\vec S(x) \vec S(x')}$ decaying as a power law of $|x-x'|$. Here we study the model directly at $T=0$. The dimensional reduction theorem in section \ref{dimred}, which also holds for random-field magnets, would indicate that the effect of a quenched random field in dimension $d$ is similar to that of a temperature $T \sim \sigma$ for a pure model in dimension $d-2$. Hence one would expect a transition from a ferromagnetic to a disordered phase at $\sigma_c$ as the disorder increases in any dimension $d>4$, and no order at all for $d<4$ and $N \geq 2$. This, however, is again incorrect, as can be seen using FRG. It was noticed by Fisher \cite{Fisher1985b} that an infinite number of relevant operators is generated. These operators, which correspond to an infinite set of random anisotropies, are irrelevant by naive power counting near $d=6$ \cite{Feldman2000,Feldman2000b}. $d=6$ is the naive upper critical dimension (corresponding to $d=4$ for the pure $O(N)$ model) as indicated by dimensional reduction; so many earlier studies concentrated on $d$ around $6$. Because of these operators, however, the theory is hard to control there. It has been shown \cite{Fisher1985b,Feldman2000,Feldman2000b} that it can instead be controlled using 1-loop FRG near $d=4$, which is the naive lower critical dimension. Recently this was extended to two loops \cite{LeDoussalWiese2005b}. The 1-loop FRG studies directly the model with all the operators which are marginal in $d=4$, with an action most easily expressed directly in replicated form: \begin{eqnarray} \label{action} {\cal S} = \int {\mathrm{d}}^d x &\Big[&\!
\frac{1}{2 T} \sum_a [ (\nabla \vec S_a)^2 ] - \frac{1}{2 T^2} \sum_{a b} \hat R(\vec S_a \vec S_b) \Big]\ . \end{eqnarray} The function $\hat R(z)$ parameterizes the disorder. Since the vectors are of unit norm, $z=\cos \phi$ lies in the interval $[-1,1]$. One can also use the parametrization in terms of the variable $\phi$, the angle between the two replicas, and define $R(\phi)=\hat R(z=\cos \phi)$. The original model (\ref{rf}) corresponds to $\hat R(z) \sim \sigma z$. It does not remain of this form under RG; in fact, again a cusp will develop near $z=1$. The FRG flow equation has been calculated up to two loops, i.e.\ $R^2$ (one loop) \cite{Fisher1985b,Feldman2000,Feldman2000b} and $R^3$ (two loops) \cite{LeDoussalWiese2005b}\footnote{These results were confirmed in \cite{TarjusTissier2005} (for the normal terms not proportional to $R''' (0^{+})$) and \cite{TarjusTissier2006} (with one proposition for the anomalous terms).}: \begin{eqnarray} \partial_{\ell} R (\phi ) &=& \epsilon R (\phi )+ \frac{1}{2} R'' (\phi)^2-R''(0)R''(\phi) + (N{-}2)\left[\frac 1 2 \frac{R'(\phi)^2}{\sin^2 \phi }- \cot \phi R'(\phi)R''(0)\right] \nonumber \\ && + \frac{1}{2} (R''(\phi)-R''(0) ) R'''(\phi)^2 + (N{-}2) \bigg[ \frac{\cot \phi}{\sin^4 \phi} R'(\phi)^3 - \frac{5+ \cos 2 \phi}{4 \sin^4 \phi} R'(\phi)^2 R''(\phi) \nonumber \\&& + \frac{1}{2 \sin^2 \phi} R''(\phi)^3 - \frac{1}{4 \sin^4 \phi} R''(0) \Big( 2 (2 + \cos 2 \phi) R'(\phi)^2 - 6 \sin 2 \phi R'(\phi) R''(\phi) \nonumber \\ && +(5+ \cos 2 \phi) \sin^2 \phi R''(\phi)^2 \Big) \bigg] \nonumber \\ && - \frac{N{+}2}{8} R'''(0^+)^2 R'' (\phi ) - \frac{N{-}2}{4} \cot \phi R'''(0^+)^2 R' (\phi ) \nonumber \\ && - 2 (N{-}2) \Big[R'' (0) - R'' (0)^{2} + \gamma_{a} R''' (0^+)^{2} \Big] R (\phi ) \qquad \label{beta} \end{eqnarray} The constant $\gamma_a$ is discussed in \cite{LeDoussalWiese2005b}; the last term, proportional to $R (\phi )$, takes into account the renormalization of temperature, a specific feature absent in
the manifold problem. The full analysis of this equation is quite involved. The 1-loop part already shows interesting features. For $N=2$, the fixed point was studied in \cite{GiamarchiLeDoussal1995}, and corresponds to the Bragg-glass phase of the XY model with quasi-long-range order, obtained in a $d=4-\epsilon$ expansion below $d=4$. Hence for $N=2$ the lower critical dimension is $d_{\mathrm{lc}} < 4$, conjectured to be $d_{\mathrm{lc}} < 3$ in \cite{GiamarchiLeDoussal1995}. On the other hand, Feldman \cite{Feldman2000,Feldman2000b} found that for $N=3, 4,\dots$ there is a fixed point in $d=4+\epsilon$, i.e.\ for $d>4$. This fixed point has exactly one unstable direction, hence it was conjectured to correspond to the ferromagnetic-to-disordered transition. The situation at one loop is thus rather strange: For $N=2$, only a stable FP exists, which describes a {\em unique} phase, while for $N=3$ only an unstable FP exists, describing the transition between two phases. The question is: Where does the disordered phase go as $N$ decreases? These results cannot be reconciled within one loop and require the full 2-loop analysis of the above FRG equation. \begin{figure}[t] \Fig{phases} \Caption{Phase diagram of the RF non-linear sigma model. D $=$ disordered, F $=$ ferromagnetic, QLRO $=$ quasi long-range order. Reprinted from \cite{LeDoussalWiese2005b}.} \label{f:phases} \end{figure} The complete analysis \cite{LeDoussalWiese2005b} shows that there is a critical value of $N$, $N_c=2.8347408$, below which the lower critical dimension $d_{\mathrm{lc}}$ of the quasi-ordered phase plunges below $d=4$. Hence there are then two fixed points below $d=4$. For $N>N_c$ a ferromagnetic phase exists, with lower critical dimension $d_{\mathrm{lc}}=4$. For $N<N_c$ one finds the expansion \begin{equation}\label{expansiondc} d_{\mathrm{lc}}^{\mathrm{RF}} = 4-\epsilon_{c}\approx 4 - 0.1268 (N-N_{c})^{2}+ O ( (N-N_{c})^{3})\ .
\end{equation} One can also compute the exponents of the correlation function \begin{eqnarray}\label{a10} \overline{S_q S_{-q} } \sim q^{-4 + \bar \eta}\ , \end{eqnarray} and once the fixed point is known, $\bar \eta$ is given by $\bar \eta=\epsilon- (N-1) R''(0) + {\textstyle \frac{3 N-2}{8}} R'''(0^+)^2$. There is a similar exponent $\eta$ for the connected thermal correlations. Another fixed point, describing magnets with random anisotropies (i.e.\ disorder coupling linearly to $S_i(x) S_j(x)$), is studied in \cite{Feldman2000,Feldman2000b,LeDoussalWiese2005b, KuehnelLeDoussalWieseUnbublished}. In this context, the existence of a quasi-ordered phase for the random-field XY model in $d=3$ (the scalar version of the Bragg glass) has been questioned \cite{TarjusTissier2005}. Corrections in (\ref{expansiondc}) seem to be small and at first sight exclude the quasi-ordered phase in $d=3$. This should however be taken with a (large) grain of salt \cite{LeDoussalIHP2006}. Indeed, the above model does not even contain topological defects (i.e.\ vortices), as it was derived directly in the continuum. In the absence of topological defects it is believed that the lower critical dimension is $d_{\mathrm{lc}}=2$ (with logarithmic corrections there). Hence the above series should converge to that value for $N=2$, indicating sizable higher-order corrections to (\ref{expansiondc}). Another analysis \cite{TarjusTissier2004}, based on a FRG for the soft-spin model, which may be able to capture vortices, indicates $d_{\mathrm{lc}}(N=2)> 3$. Unfortunately, it uses a truncation of the FRG which cannot be controlled perturbatively, and as a result does not match the 2-loop result. It would be interesting to construct a better approximation which accurately predicts the dimension at which the soft- and hard-spin models start to differ in their lower critical dimension, presumably when vortices become unbound due to disorder.
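The size of the required higher-order corrections can be read off by evaluating the truncated expansion (\ref{expansiondc}) at $N=2$: the result lies well above $d=3$, so if the series is to approach $d_{\mathrm{lc}}=2$ in the defect-free model, the omitted terms must be substantial. A quick numerical check:

```python
N_c = 2.8347408  # critical value of N from the 2-loop analysis

def d_lc(N):
    """Lower critical dimension from (expansiondc), truncated at second order."""
    return 4 - 0.1268*(N - N_c)**2

print(d_lc(2.0))  # about 3.91, far above both d = 3 and d = 2
```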
\section{More universal distributions}\label{s:distribution} \begin{figure}[t] \centerline{\fig{0.5\textwidth}{Dhm}} \Caption{Scaling function $\Phi(z)$ for the ($1+1$)--dimensional harmonic model, compared to the Gaussian approximation for $\zeta=1.25$. Data from \protect\cite{RossoKrauthLeDoussalVannimenusWiese2003}.} \label{f:Dhm} \end{figure} As we have already seen, exponents are not the only interesting quantities: In experiments and simulations, often whole distributions can be measured, as e.g.\ the universal width distribution of an interface, which we have computed at depinning \cite{RossoKrauthLeDoussalVannimenusWiese2003,LeDoussalWiese2003a}. Let $\left< u \right>$ be the average position of an interface for a {\em given} disorder configuration; then the spatially averaged width \begin{eqnarray}\label{w2} w^{2}:= \frac{1}{L^{d}}\int_{x}\left(u (x)-\left< u \right> \right)^{2} \end{eqnarray} is a random variable, and we can try to calculate and measure its distribution $P (w^{2})$. The rescaled function $\Phi (z)$, defined by \begin{equation}\label{Phi} P (w^{2})={1}/{\overline{w^{2}}}\,\Phi \left({w^{2}}/{\overline{w^{2}}} \right)\ , \end{equation} is universal, i.e.~independent of microscopic details and the size of the system. Supposing all correlations to be Gaussian, $\Phi (z)$ can be calculated analytically. It depends on two parameters, the roughness exponent $\zeta$ and the dimension $d$. Numerical simulations displayed in figure \ref{f:Dhm} show spectacular agreement between analytical and numerical results. As expected, the Gaussian approximation is not exact, but to see deviations in a simulation, about $10^{5}$ samples have to be used. Analytically, corrections can be calculated: They are of order $R''' (0^{+})^{4}$ and small. Physically, the distribution is narrower than a Gaussian. There are more observables for which distributions have been calculated within the FRG, or measured in simulations.
Let us mention fluctuations of the elastic energy \cite{FedorenkoStepanow2003}, and of the depinning force \cite{FedorenkoLeDoussalWiese2006,BolechRosso2004}. \section{Anisotropic depinning, directed percolation, branching and all that}\label{s:anisotopic} \begin{figure}[b] \centerline{\fig{2cm}{KPZgeneratormom}} \Caption{{The diagram generating the irreversible nonlinear KPZ term with one disorder vertex and one $c_4$ vertex (the bars denote spatial derivatives).}} \label{fig1.a} \end{figure} We have discussed in section \ref{s:dynamics} isotropic depinning, which, as the name suggests, is a situation where the system is invariant under a tilt. This isotropy can be broken through an additional anharmonic elasticity \begin{equation}\label{Eanharm} E_{\mathrm{elastic}}= \int_{x} \frac{1}{2}\left[\nabla u (x)\right]^{2}+c_{4} \left[\nabla u (x)\right]^{4}\ , \end{equation} leading to a drastically different universality class, the so-called anisotropic depinning universality class, as found recently in numerical simulations \cite{RossoKrauth2001b}. It has been observed in simulations \cite{AmaralBarabasiStanley1994,TangKardarDhar1995} that the drift velocity of an interface increases under a tilt, which can be described by a tilt-dependent term in the equation of motion, of the form \begin{equation}\label{lf28} \partial_t u (x,t)= \nabla^2 u (x,t) + \lambda \left[ \nabla u (x,t)\right]^2+ F(x,u (x,t) ) + f\ . \end{equation} However, it was unclear for a long time how this new term (proportional to $\lambda$), usually referred to as a KPZ term, is generated, especially in the limit of {\em vanishing} drift velocity. In \cite{LeDoussalWiese2002a} we have shown that this is possible in a non-analytic theory, due to the diagram given in figure \ref{fig1.a}.
For anisotropic depinning, numerical simulations based on cellular automaton models, which are believed to be in the same universality class \cite{TangLeschhorn1992,BuldyrevBarabasiCasertaHavlinStanleyVicsek1992}, indicate a roughness exponent $\zeta \approx 0.63$ in $d=1$ and $\zeta \approx 0.48$ in $d=2$. On a phenomenological level it has been argued \cite{TangLeschhorn1992,BuldyrevBarabasiCasertaHavlinStanleyVicsek1992,GlotzerGyureSciortinoConiglioStanley1994} that configurations at depinning can be mapped onto directed percolation in $d=1+1$ dimensions, which indeed yields a roughness exponent $\zeta_{\mathrm{DP}}= \nu_\perp/\nu_{\|} = 0.630 \pm 0.001$, and it would be intriguing to understand this from a systematic field theory. This theory was developed in \cite{LeDoussalWiese2002a}, and we review the main results here. A strong simplification is obtained by going to the Cole-Hopf transformed fields \begin{equation}\label{lf29} Z ( {x,t}) := {\mathrm{e}}^{ \lambda u(x,t)} \qquad \Leftrightarrow \qquad u(x,t) = \frac{\ln ( Z(x,t))}{ \lambda } \ . \end{equation} After multiplying with $ \lambda Z(x,t)$ (and dropping the term proportional to $f$), the equation of motion becomes \begin{equation}\label{lf30} \partial_t Z(x,t) = \nabla^2 Z(x,t) +{{\lambda} } F\left(x,\frac{\ln (Z(x,t))}{{\lambda} } \right) Z(x,t)\ , \end{equation} and the dynamical action (after averaging over disorder) reads \begin{equation}\label{cole} {\cal S} = \int_{xt}\tilde {Z}(x,t)\left( \partial_{t}-\nabla^{2} \right) Z(x,t) -\frac{ \lambda^{2} }{2 } \int_{xtt'} \tilde {Z}(x,t) {Z}(x,t) \, \Delta\!
\left( \frac{\ln Z(x,t)-\ln Z(x,t')}{ \lambda }\right)\tilde {Z}(x,t')Z(x,t')\ . \end{equation} This leads to the FRG flow equation at 1-loop order \begin{eqnarray} \partial _{\ell} \Delta (u) &=& (\epsilon -2\zeta ) \Delta (u) + \zeta u \Delta' (u) -\Delta'' (u)\left( \Delta (u)-\Delta (0) \right) - \Delta' (u)^2\nonumber \\ &&+2 \lambda \Delta (u) \Delta' (0^{+}) +2 \lambda ^{2}\left(\Delta (u)^{2} +\Delta (u)\Delta (0)\right) \label{beta-2} \end{eqnarray} The first line is indeed equivalent to (\ref{1loopRG}), using $\Delta (u)=-R'' (u)$. The second line is new and contains the terms induced by the KPZ term, i.e.\ the term proportional to $\lambda$ in (\ref{lf28}). Equation (\ref{beta-2}) possesses the following remarkable property: {\em A three-parameter subspace of exponential functions forms an exactly invariant subspace.} Even more strikingly, this is true {\it to all orders} in perturbation theory \cite{LeDoussalWiese2002a}! The subspace in question is ($0\le u\le 1/\lambda $) \begin{equation}\label{lf80} \Delta(u) = \frac{\epsilon }{ \lambda^2} \left(a + b\, {\mathrm{e}}^{- \lambda u} + c\, {\mathrm{e}}^{\lambda u}\right) \end{equation} \begin{figure*}[!t] \centerline{\fig{.5\textwidth}{flowlambda=2S+}} \Caption{Fixed point structure for $\lambda=2$, which is a typical value. The ratio $c/b$ is not renormalized, see (\ref{lf43})-(\ref{lf44}), such that $c/b$ is a parameter, fixed by the boundary conditions, especially $\lambda $.
The fixed points are Gaussian {\tt G}, Random Periodic {\tt RP} (the generalization of the RP fixed point for $\lambda =0$), Self-Avoiding Polymers {\tt SAP}, and Unphysical {\tt U}.} \label{lambda-flow} \end{figure*}% The FRG flow (\ref{beta-2}) closes in this subspace, leading to the simpler 3-dimensional flow: \begin{eqnarray} \partial_{\ell} a &=& a + 4 a^2 + 4 a c + 4 b c \label{lf42}\\ \partial_{\ell} b &=& b (1 + 6 a + b + 5 c ) \label{lf43}\\ \partial_{\ell} c &=& c (1 + 6 a + b + 5 c )\label{lf44} \end{eqnarray} This flow has several fixed points, shown in figure \ref{lambda-flow}. They describe different physical situations. The only globally attractive fixed point is {\tt SAP}, describing self-avoiding polymers. This fixed point is not attainable from the physically relevant initial conditions, which lie (as does the fixed point {\tt RP}) between the two separatrices shown in figure \ref{lambda-flow}. All other fixed points are cross-over fixed points. \begin{figure}[b] \centerline{\fig{0.5\textwidth}{branching}} \Caption{The three vertices proportional to $a$, $b$ and $c$ in equation (\ref{c2}).} \label{f:branch} \end{figure} In the Cole-Hopf representation, it is easy to see why the exponential manifold is preserved to all orders. Let us insert (\ref{lf80}) into (\ref{cole}). The complicated functional disorder takes a very simple polynomial form \cite{LeDoussalWiese2002a}: \begin{equation}\label{c2} {\cal S}=\int_{xt}\tilde {Z}(x,t)\left( \partial_{t}-\nabla^{2} \right) Z(x,t)- \int_{x}\int_{t<t'} \tilde {Z}(x,t)\tilde {Z}(x,t') \left(a Z(x,t)Z(x,t')+bZ(x,t)^{2}+cZ(x,t')^{2} \right)\ . \end{equation} The vertices are plotted in figure \ref{f:branch}. It is intriguing to interpret them as a particle interaction ($a$) and as different branching processes ($b$ and $c$): $Z$ destroys a particle and $\tilde Z$ creates one.
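The closure of the 1-loop flow (\ref{beta-2}) on the exponential subspace (\ref{lf80}) can be verified symbolically. The sketch below sets $\zeta=0$ and absorbs one factor of $\epsilon$ into the RG scale; writing $x={\mathrm{e}}^{-\lambda u}$, the right-hand side of (\ref{beta-2}) must again be of the form $a'+b'x+c'/x$, with $(a',b',c')$ given by (\ref{lf42})-(\ref{lf44}):

```python
import sympy as sp

a, b, c, lam, eps, x = sp.symbols('a b c lambda epsilon x', positive=True)

# ansatz (lf80) with x = exp(-lambda*u), so that d/du = -lambda*x*d/dx
D = eps/lam**2*(a + b*x + c/x)
Dp = -lam*x*sp.diff(D, x)               # Delta'(u)
Dpp = -lam*x*sp.diff(Dp, x)             # Delta''(u)
D0, Dp0 = D.subs(x, 1), Dp.subs(x, 1)   # values at u = 0 (ansatz is smooth)

# right-hand side of (beta-2) with zeta = 0
rhs = eps*D - Dpp*(D - D0) - Dp**2 + 2*lam*D*Dp0 + 2*lam**2*(D**2 + D*D0)

# collect powers of x (one epsilon is absorbed into the RG scale)
poly = sp.Poly(sp.expand(rhs*x**2*lam**2/eps**2), x)

# no e^{-2*lam*u} or e^{+2*lam*u} terms survive ...
assert poly.coeff_monomial(x**4) == 0 and poly.coeff_monomial(1) == 0
# ... and the remaining coefficients reproduce (lf42)-(lf44)
assert sp.expand(poly.coeff_monomial(x**2) - (a + 4*a**2 + 4*a*c + 4*b*c)) == 0
assert sp.expand(poly.coeff_monomial(x**3) - b*(1 + 6*a + b + 5*c)) == 0
assert sp.expand(poly.coeff_monomial(x) - c*(1 + 6*a + b + 5*c)) == 0
print("flow closes on the exponential subspace")
```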
Vertex $b$ can e.g.\ be interpreted as two particles coming together and annihilating one of them, except that the annihilated particle is created again in the future. However, if the annihilation process is strong enough, the reappearance of particles may not play a role, such that the interpretation as particle annihilation, or equivalently directed percolation, is indeed justified. One caveat is in order, namely that the fixed points described above are all cross-over fixed points, and nothing can a priori be said about the strong-coupling regime. However, this is the regime seen in numerical simulations, for which the equivalence to directed percolation has been conjectured. Thus, albeit intriguing, the above theory is only the starting point for a more complete understanding of anisotropic depinning. Probably one needs another level of FRG, in the same way as standard FRG is able to treat directed polymers, or equivalently the KPZ equation, in the absence of disorder. \section{Problems not treated in these notes\dots and perspectives}\label{perspectives} Problems not treated in these notes are too numerous to list. Let us just mention some: \cite{FedorenkoStepanow2002} considers depinning at the upper critical dimension. In \cite{LeDoussalWiese2006a}, the crossover from short-ranged to long-ranged correlated disorder is treated. Our techniques can be applied to the statics at 3-loop order \cite{LeDoussalWiesePREPb}. But many questions remain open. Some have already been raised in these notes; another is whether the FRG can also be applied to other systems, e.g.\ spin glasses or window glasses. Can the FRG be used as a tool to go beyond mean-field or mode-coupling theories? Another open issue is the applicability of the FRG beyond the elastic limit, i.e.\ to systems with overhangs and topological defects, to non-linear elasticity \cite{LeDoussalWieseRaphaelGolestanian2004}, or to more general fractal curves than (directed) interfaces.
For random periodic disorder in $d=2$, temperature is marginal, and a freezing transition can be discussed (see e.g.\ \cite{CarpentierLeDoussal1998, SchehrLeDoussal2006}). It would be interesting to connect this to methods of conformal field theory and stochastic L\"owner evolution. We have to leave these problems for future research and as a challenge for the reader to plunge deeper into the mysteries of functional renormalization.
\section{Introduction} Codes over rings were introduced in the early 1970s. Among them, cyclic codes form an important class of linear codes because of their rich algebraic structure and practical use. Cyclic codes over finite fields are well studied \cite{MacWilms}, and they have been extended to various finite rings \cite{Dinh H. and Lo ´pez-permouth S.}. The search for new codes with good parameters encourages researchers to introduce various families of linear codes. In $1973$, Delsarte \cite{Delsarte} defined additive codes in terms of association schemes as subgroups of the underlying abelian group. Under the binary Hamming scheme, the underlying group of order $2^k$ is isomorphic to $\Z_2^\alpha \times \Z_4^\beta$, where $\alpha$ and $\beta$ are non-negative integers. The subgroups of the underlying group are called $\Z_2\Z_4$-additive codes. Borges \cite{Borges} has studied $\Z_2\Z_4$-additive codes by deriving their generator matrices and parity check matrices. In \cite{Taher}, $\Z_2\Z_4$-cyclic codes of block length $(r,\ t)$ for odd $t$ have been defined as $\Z_4$-submodules of $\Z_2^r \times \Z_4^t$, and a minimal spanning set for these codes has been determined. Extending this work, Borges et al. \cite{Borges2} gave duals of $\Z_2\Z_4$-cyclic codes of block length $(r,\ t)$ for odd $t$. Recently, Abualrub et al. \cite{Aydogudu2} have studied a new class of codes over the structure $\Z_2\Z_2[u]$, where $\Z_2[u]=\Z_2+u\Z_2$, $u^2=0$. They have defined $\Z_2\Z_2[u]$-additive codes as $\Z_2[u]$-submodules of $\Z_2^s \times \Z_2[u]^t$, and obtained their generator and parity check matrices. They have also defined the type of these codes and have shown that some optimal binary codes are Gray images of $\Z_2\Z_2[u]$-additive codes. Extending the concepts given in \cite{Borges2}, Aydogdu and Siap have recently studied \cite{Aydogdu I. and Siap I.} the algebraic structure of $\Z_{p^r}\Z_{p^s}$-additive codes.
They have determined the generator and parity check matrices for these codes. Borges et al. \cite{z2double} have derived the structure of $\Z_2$-double cyclic codes. They have determined generating polynomials for these codes and derived the relationship between the codes and their duals. Similarly, the structure of double cyclic codes over the rings $\Z_4$ and $\F_2+u\F_2+u^2\F_2,\ u^3=0$, has been studied in \cite{Gao,Ting Yao Minjia Shi}. In \cite{Ting Yao Minjia Shi}, Gao et al. have obtained some optimal or suboptimal non-linear binary codes. A double cyclic code is in fact a generalized quasi-cyclic (GQC) code of index two. Kulhan and Siap introduced GQC codes over finite fields \cite{Siap I. and Kulhan}, and the study has been extended to various finite rings by many authors \cite{Bhaintwal M. Wasan S.,Cao Y. (2011a),Cao Y. (2011b),Esmaeili M. and Yari S.,Gao J. Fu F-W. Shen L. and Ren W.}. Most of these studies have focused on $1$-generator GQC codes, determining their duals and obtaining a good number of optimal codes. In this paper, we extend the concepts given in \cite{z2double} and \cite{Ting Yao Minjia Shi} and study the algebraic structure of $\Z_2$-triple cyclic codes. We give a minimal spanning set for these codes. Further, we present the structure of the duals of these codes via their generators. The paper is organized as follows. In Section II, we introduce some basic notation and definitions for $\Z_2$-triple cyclic codes and derive the generators for these codes. In Section III, we determine a minimal spanning set for $\Z_2$-triple cyclic codes. In Section IV, we give the relationship between the generators of $\Z_2$-triple cyclic codes and those of their duals. \section{$\Z_2$-triple cyclic codes} Let $r,\ s$ and $t$ be three positive integers and $n=r+s+t$. Let $\C$ be a binary linear code of length $n$. The $n$ coordinates of each codeword of $\C$ can be partitioned into three sets of size $r,\ s$ and $t$. 
Therefore $\C$ can be viewed as a $\Z_2$-submodule of $\RR$. \begin{definition}For any three positive integers $r,s$ and $t$, a $\Z_2$-triple cyclic code $\C$ of block length $(r,s,t)$ is a binary linear code of length $n=r+s+t$ such that \\ $\sigma(c)=(c_{1,r-1},c_{1,0},\cdots,c_{1,r-2}\ \mid \ c_{2,s-1},c_{2,0},\cdots,c_{2,s-2}\ \mid \ c_{3,t-1},c_{3,0},\cdots,c_{3,t-2} )\in\C$, whenever $c=(c_{1,0},\\ c_{1,1},\cdots, c_{1,r-1}\ \mid \ c_{2,0},c_{2,1},\cdots,c_{2,s-1}\ \mid \ c_{3,0},c_{3,1},\cdots,c_{3,t-1} )\in\C$. \end{definition} Let $\C$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$. Let $\C_r$ be the canonical projection of $\C$ on the first $r$ coordinates, $\C_s$ the projection of $\C$ on the next $s$ coordinates, and $\C_t$ the projection of $\C$ on the last $t$ coordinates. It is easy to see that these projections $\C_r$, $\C_s$ and $\C_t$ are binary cyclic codes of length $r$, $s$ and $t$, respectively. $\C$ is called separable if $\C=\C_r \times \C_s \times \C_t$. The dual $\C^\bot$ of a $\Z_2$-triple cyclic code $\C$ of block length $(r,s,t)$ is defined as $$\C^\bot = \{v^\prime\in\RR \ |\ v\cdot v^\prime=0 \ \mbox{for all}\ v\in\C\},$$ \noindent where $v\cdot v^\prime$ is the usual inner product over $\Z_2$. Let $m=\mathrm{lcm}(r,s,t)$. The following result shows that the dual of a $\Z_2$-triple cyclic code of block length $(r,s,t)$ is also a $\Z_2$-triple cyclic code of the same block length. \begin{theorem}\label{cyclic nature of dual} If $\C$ is a $\Z_2$-triple cyclic code of block length $(r,s,t)$, then $\C^\bot$ is also a $\Z_2$-triple cyclic code of block length $(r,s,t)$. \end{theorem} \begin{proof} Let $u\in\C^\bot$ and let $v\in\C$. Since $\C$ is invariant under $\sigma$, $\sigma^{m-1}(v)\in\C$. 
Therefore \begin{align*} 0=u\cdot\sigma^{m-1}(v)&=(u_{1,0}v_{1,1}+\cdots +u_{1,r-2}v_{1,r-1}+u_{1,r-1}v_{1,0})+ (u_{2,0}v_{2,1}+\cdots + u_{2,s-1}v_{2,0})+\\ & \qquad \qquad \qquad \qquad \qquad \qquad (u_{3,0}v_{3,1}+\cdots +u_{3,t-1}v_{3,0})\\ &=(u_{1,r-1}v_{1,0}+u_{1,0}v_{1,1}+\cdots +u_{1,r-2}v_{1,r-1})+ (u_{2,s-1}v_{2,0}+\cdots + u_{2,s-2}v_{2,s-1})+\\ & \qquad \qquad \qquad \qquad \qquad \qquad (u_{3,t-1}v_{3,0}+\cdots +u_{3,t-2}v_{3,t-1})\\ &=\sigma(u) \cdot v. \end{align*} \noindent As $v$ is an arbitrary element of $\C$, we get $\sigma(u)\in\C^\bot$, and the result follows. \end{proof} Now we determine the generators for a $\Z_2$-triple cyclic code $\C$ of block length $(r,s,t)$. For this, we first consider the algebraic structure of $\C$ in $\frac{\Z_2[x]}{\langle x^r-1 \rangle} \times \frac{\Z_2[x]}{\langle x^s-1 \rangle} \times \frac{\Z_2[x]}{\langle x^t-1\rangle}$. Let $\R_{r,s,t}[x] = \frac{\Z_2[x]}{\langle x^r-1 \rangle} \times \frac{\Z_2[x]}{\langle x^s-1 \rangle} \times \frac{\Z_2[x]}{\langle x^t-1\rangle}$, $\smash{\Z_{2,r}[x] = \frac{\Z_2[x]} {\hull{x^r-1}}}$, $\smash{\Z_{2,s}[x] = \frac{\Z_2[x]} {\hull{x^s-1}}}$ and $\smash{\Z_{2,t}[x] = \frac{\Z_2[x]} {\hull{x^t-1}}}$. By identifying each $c=(c_1 \mid c_2 \mid c_3)\in \RR$ with a triplet of polynomials $(c_1(x) \mid c_2(x) \mid c_3(x) )\in \R_{r,s,t}[x]$, where $c_1(x) = \sum_{j=0}^{r-1}c_{1,j} x^j$, $c_2(x) = \sum_{j=0}^{s-1}c_{2,j} x^j$ and $c_3(x) = \sum_{j=0}^{t-1}c_{3,j} x^j$, we get a $\Z_2$-vector space isomorphism between $\RR$ and $\R_{r,s,t}[x]$. Also, for any $f(x) \in \Z_2[x]$ and $ c=(c_1(x) \mid c_2(x) \mid c_3(x) ) \in \R_{r,s,t}[x]$, we define the product $f(x) \ast(c_1(x) \mid c_2(x) \mid c_3(x) ) = (f(x)c_1(x) \mid f(x)c_2(x) \mid f(x)c_3(x))\in \R_{r,s,t}[x]$, where $f(x)c_i(x)$ is computed in the corresponding residue ring. Clearly this product is well defined. Therefore $\R_{r,s,t}[x]$ is a $\Z_2[x]$-module with respect to this product. 
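This identification can be checked computationally; the following is a minimal sketch in plain Python (the helper names are ours, not from the paper): multiplying each component by $x$ modulo $x^r-1$, $x^s-1$ and $x^t-1$ amounts to a simultaneous right cyclic shift of the three blocks, i.e.\ to the map $\sigma$.

```python
def sigma(c, r, s, t):
    """Right cyclic shift of each of the three blocks of a vector in
    Z_2^r x Z_2^s x Z_2^t, as in the definition of a Z_2-triple cyclic code."""
    c1, c2, c3 = c[:r], c[r:r + s], c[r + s:]
    rot = lambda block: block[-1:] + block[:-1]
    return rot(c1) + rot(c2) + rot(c3)

def mul_by_x(coeffs):
    """Multiply sum_j c_j x^j by x modulo x^n - 1 over Z_2: the coefficient
    list rotates one place to the right."""
    return coeffs[-1:] + coeffs[:-1]

# Example with (r, s, t) = (3, 3, 3): block-wise multiplication by x
# agrees with the shift sigma.
c = [1, 0, 1, 0, 1, 1, 1, 1, 0]
blocks = [c[:3], c[3:6], c[6:]]
assert sigma(c, 3, 3, 3) == sum((mul_by_x(b) for b in blocks), [])
```

The assertion illustrates the remark below: the $\Z_2[x]$-module action of $x$ on $\R_{r,s,t}[x]$ corresponds to $\sigma$ on $\RR$.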
We note that, in polynomial representation, $x(c_1(x) \mid c_2(x) \mid c_3(x) ) = \left(xc_1(x) \mid xc_2(x) \mid xc_3(x)\right)$ represents $\sigma(c)$ for the corresponding element $c=(c_1 \mid c_2 \mid c_3 )\in \RR$. The codes in the present setting are in fact extensions of both binary cyclic codes and the $\Z_2$-double cyclic codes defined in \cite{z2double}. We denote $f \ast g $ simply by $f g $. The following result follows immediately from the previous discussion. \begin{theorem} Let $\C$ be a binary linear code of length $r+s+t$. Then $\C$ is a $\Z_2$-triple cyclic code in $\RR$ if and only if $\C$ is a $\Z_2[x]$-submodule of $\R_{r,s,t}[x]$.\end{theorem} Since the modules $\Z_2^t$ and $\Z_2^r \times \Z_2^s $ can be obtained by projecting $\RR$ on the last $t$ coordinates and on the first $r+s$ coordinates, respectively, we make use of the cyclic structures of both binary codes and $\Z_2$-double cyclic codes as given in \cite{z2double} to find the generator polynomials for a $\Z_2$-triple cyclic code $\C$ of block length $(r, s, t)$ in $\R_{r,s,t}[x]$. The following theorem gives the generators for a $\Z_2$-double cyclic code of block length $(r,s)$, which is useful for the rest of our study. \begin{theorem}\cite[Theorem 3.1]{z2double}\label{z2double} The $\Z_2[x]$-module $\R_{r,s}=\frac{\Z_2[x]}{\langle x^r-1 \rangle} \times \frac{\Z_2[x]}{\langle x^s-1 \rangle}$ is a Noetherian module, and every submodule $\C$ of $\R_{r,s}$ can be written as $$\C=\hull{(b(x) \mid 0),(l(x) \mid a(x))},$$ where $b(x) ,\ l(x) \in \Z_2[x]/\hull{x^r-1}$ with $b(x) \mid (x^r-1)$ and $a(x) \in \Z_2[x]/\hull{x^s-1}$ with $a(x) \mid (x^s-1)$. \end{theorem} \begin{theorem}\cite[Proposition 3.2]{z2double}\label{spannig double} Let $\C$ be a $\Z_2$-double cyclic code of block length $(r,s)$, such that $\C=\hull{(b(x) \mid 0 ),(l(x) \mid a(x) )}$, where $b(x) |x^r-1,\ a(x) |x^s-1$ over $\Z_2$. 
If $\de(b(x))=t_1$ and $\de(a(x))=t_2$, then a minimal spanning set for $\C$ is $S\p = S_1\p\cup S_2\p $, where \begin{align*} S_1\p &= \bigcup_{i=0}^{r-t_1-1}x^i \ast (b(x) \mid 0) & S_2\p &= \bigcup_{i=0}^{s-t_2-1}x^i\ast (l(x) \mid a(x) ). \end{align*} \end{theorem} In the following theorem, we determine the generator polynomials for $\Z_2$-triple cyclic codes of block length $(r,s,t)$. \begin{theorem}\label{z2 triple cy.co.}Let $\C$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$. Then $\C=\hull{(b(x) \ \mid \ 0\ \mid\ 0),(l(x) \ \mid\ a(x) \ \mid \ 0),(g_1(x)\ \mid \ g_2(x)\ \mid \ g_3(x))}$, where $b(x),l(x),g_1(x)\in \Z_{2,r}[x]$ with $b(x) |x^r-1$ and $a(x),g_2(x)\in \Z_{2,s}[x]$ with $a(x)| x^s-1$ and $g_3(x)\in \Z_{2,t}[x]$ with $g_3(x) | x^t-1$. \end{theorem} \begin{proof}Consider the canonical projection $ \pi_t: \C \to \Z_{2,t}[x]$ such that $(c_1 \mid\ c_2 \mid c_3 ) \mapsto c_3 $. It is easy to see that $\pi_t$ is a $\Z_2[x]$-module homomorphism with kernel $ker_{\C}(\pi_t)=\{(c_1 \mid c_2 \mid 0 )\in \C \}$, and therefore the set $K=\{(c_1 \mid c_2 ) \ : \ (c_1 \mid c_2 \mid 0 )\in \C\}$ is a $\Z_2$-double cyclic code of block length $(r,s)$ in $\R_{r,s}[x]$. From Theorem \ref{z2double}, there exist $b ,l \in \Z_{2,r}[x]$ and $a\in \Z_{2,s}[x]$ such that $K=\hull{(b \mid 0),(l \mid a )}$, with $b |x^r-1$ and $a |x^s-1$. This implies that $ker_{\C}(\pi_t)=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0)}$. On the other hand, the image of $\C$ under $\pi_t$ is an ideal of $\Z_{2,t}[x]$, and as $\Z_{2,t}[x]$ is a principal ideal ring, there exists $g_3 \in \Z_{2,t}[x]$ such that $\pi_t(\C)=\hull{g_3 }$. Therefore we have $\frac{\C}{ker_{\C}(\pi_t)}\cong \pi_t(\C)$, and hence $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ for some $g_1 \in \Z_{2,r}[x]$ and $g_2 \in \Z_{2,s}[x]$. Hence the theorem. \end{proof} Let $(c_1 \mid 0 \mid 0)\in \C$. Then $(c_1 \mid 0)\in K=\{(c_1 \mid c_2 ) \ : \ (c_1 \mid c_2 \mid 0 )\in \C\}$. 
Also, from Theorem \ref{z2 triple cy.co.}, we have $K=\hull{(b \mid 0),(l \mid a )}$. Therefore $c_1\in\hull{b}$. Hence $(c_1 \mid 0 \mid 0)\in \C$ implies that $c_1\in \hull{b}$. The following results are useful for understanding the structure of a $\Z_2$-triple cyclic code and for determining a minimal spanning set for it. The minimal spanning set of a $\Z_2$-triple cyclic code can be used to determine its cardinality and its generator matrix. In the rest of the paper we consider the $\Z_2$-triple cyclic code as defined in Theorem \ref{z2 triple cy.co.}. \begin{lemma}\label{l <b and g_1<b}Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2\ \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$. Then we may assume that $\de(l )< \de(b )$ and $\de(g_1 )<\de(b )$. \end{lemma} \begin{proof} Assume $\de(l ) \geq \de(b )$. By the division algorithm, there exist polynomials $q $ and $\rho $ in $\Z_2[x]$ such that $l = b q +\rho$, where $\rho=0$ or $\de(\rho) < \de(b)$. \noindent Then \begin{align*} \hull{(b \mid 0 \mid 0),(l \mid a \mid 0)} &= \hull{(b \mid 0 \mid 0),(b q +\rho \mid a \mid 0)} \\ &= \hull{(b \mid 0 \mid 0),(\rho \mid a \mid 0)}. \end{align*} Hence, we may assume that $\de(l)<\de(b )$. Similarly, we can show that we may assume $\de(g_1 )<\de(b )$. \end{proof} \begin{lemma}\label{b|h_2l} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$. Then \begin{enumerate} \item $b | \frac{x^s-1}{a} l$;\vspace{2mm} \item $a | \frac{x^t-1}{g_3} g_2$ and, if $l=0$, then $b | \frac{x^t-1}{g_3}g_1 $; \vspace{2mm} \item $b\ \mathrm{divides}\ \frac{x^t-1}{g_3a}lg_2+\frac{x^t-1}{g_3}g_1$. \end{enumerate} \end{lemma} \begin{proof} We have $\frac{x^s-1}{a } \left(l \ \mid\ a \ \mid \ 0 \right)= \left(\frac{x^s-1}{a }l\ \mid\ 0 \ \mid \ 0 \right)\in \C$. This implies that $\frac{x^s-1}{a }l \in \hull{b}$ and hence $b | \frac{x^s-1}{a}l $. Similarly, we can prove the second result. 
For result (3), as $a | \frac{x^t-1}{g_3} g_2$, we have $ \frac{x^t-1}{g_3} g_2=k a $ for some $k \in \Z_2[x]$. Also, since $(kl \mid ka \mid 0)$ and $(\frac{x^t-1}{g_3}g_1 \mid \frac{x^t-1}{g_3}g_2 \mid 0)$ belong to $\C$, we get $(kl \mid ka \mid 0)+(\frac{x^t-1}{g_3}g_1 \mid \frac{x^t-1}{g_3}g_2 \mid 0)=(kl+\frac{x^t-1}{g_3}g_1 \mid 0 \mid 0)\in \C$. The result follows as $kl+\frac{x^t-1}{g_3}g_1\in \hull{b}$. \end{proof} In the following theorem we determine a minimal spanning set for a $\Z_2$-triple cyclic code. \begin{theorem}\label{spanning sets} Let $\C=\hull{(b \mid 0 \mid\ 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ such that $b,l,g_1\in \Z_{2,r}[x]$, $a,g_2\in \Z_{2,s}[x]$, $g_3\in \Z_{2,t}[x]$ with $b |x^r-1,\ a | x^s-1$ and $g_3 | x^t-1$. Let $h_1 =\frac{x^r - 1}{b}$, $h_2 =\frac{x^s - 1}{a}$ and $h_3 =\frac{x^t - 1}{g_3}$. If $\de(b)=t_1,\ \de(a)=t_2$ and $\de(g_3)=t_3$, then a minimal spanning set for $\C$ is $S = S_1\cup S_2 \cup S_3$, where \begin{align*} S_1 &= \bigcup_{i=0}^{r-t_1-1}x^i \ast (b \mid 0 \mid 0) & S_2 &= \bigcup_{i=0}^{s-t_2-1}x^i\ast (l \mid a \mid 0) & S_3 &= \bigcup_{i=0}^{t-t_3-1}x^i\ast (g_1 \mid g_2 \mid g_3). \end{align*} \noindent Moreover, $\mid \C \mid = 2^{r+s+t-\de(b)-\de(a)-\de(g_3)}$. \end{theorem} \begin{proof} Let $c$ be a codeword in $\C$. Then there exist $d_1 ,\ d_2 $, $d_3 \in \Z_2[x]$ such that \begin{align} c &= d_1\ast (b \mid 0 \mid 0)\ +\ d_2\ast (l \mid a \mid 0)\ +\ d_3 \ast (g_1 \mid g_2 \mid g_3) \\ \nonumber &= (d_1 b \mid 0 \mid 0)\ +\ (d_2l \mid d_2a \mid 0)\ +\ (d_3g_1 \mid d_3g_2 \mid d_3g_3). \end{align} We first show that $d_1 \ast(b \mid 0 \mid 0)\in span(S_1)$. If $\de(d_1)< r-t_1$, then obviously $d_1 \ast(b \mid 0 \mid 0)\in$ $span(S_1)$. Now let $\de(d_1)\geq r-t_1$. By the division algorithm, there exist $Q_1, R_1 \in \Z_2[x]$ such that $d_1 = Q_1 \frac{x^r-1}{b} + R_1 \ \mbox{with}\ R_1=0 \ \mbox{or} \ \de(R_1 )<r-t_1$. 
Then \begin{align*} (d_1 b \mid 0 \mid 0) &= \left(\left(Q_1 \frac{x^r-1}{b}+R_1 \right)b \mid 0 \mid 0\right)\\ &= (Q_1(x^r-1)+R_1b \mid 0 \mid 0 )\\ &= Q_1 (x^r-1 \mid 0 \mid 0) + R_1(b \mid 0 \mid 0)\\ &= R_1 (b \mid 0 \mid 0) \in span(S_1). \end{align*} Next we show that $d_2 \ast(l \mid a \mid 0) \in\ span(S_1\cup S_2)$. By the division algorithm we have $d_2 = Q_2 h_2 +R_2 $ with $R_2=0$ or $\de(R_2 )<s-t_2$, where $Q_2,R_2 \in\Z_2[x]$. Therefore \begin{align}\label{eq 2} d_2 \ast(l \mid a \mid 0)&= (Q_2 h_2 +R_2 )(l \mid a \mid 0) \nonumber \\ &= Q_2 (l h_2 \mid 0 \mid 0) + R_2 (l \mid a \mid 0). \end{align} \noindent Since $R_2=0$ or $\de(R_2)\leq s-t_2-1$, we have $R_2 (l \mid a \mid 0)\in span(S_2) $. Also, from Lemma \ref{b|h_2l}, we have $b \mid h_2 l $, which implies that $Q_2 (l h_2 \mid 0 \mid 0)\in span(S_1) $. Therefore from (\ref{eq 2}) we get $d_2 \ast(l \mid a \mid 0)\in span(S_1\cup S_2)$. \indent Finally we show that $ d_3 \ast(g_1 \mid g_2 \mid g_3)$ belongs to $span(S_1\cup S_2\cup S_3)$. Again by the division algorithm, we have $d_3 = Q_3 h_3 +R_3 $ with $R_3=0 \ \mbox{or}\ \de(R_3 )<t-t_3$, where $Q_3 ,\ R_3 \in\Z_2[x]$. Then \begin{align}\label{s4,s5} d_3 \ast(g_1 \mid g_2 \mid g_3) &= \left(Q_3 h_3 +R_3 \right)(g_1 \mid g_2 \mid g_3) \nonumber \\ &= Q_3h_3(g_1 \mid g_2 \mid 0) + R_3 (g_1 \mid g_2 \mid g_3).\end{align} \noindent It is easy to see that $ Q_3h_3(g_1 \mid g_2 \mid 0)\in \C$ and hence $ Q_3h_3(g_1 \mid g_2)\in K=\{(c_1\mid c_2):(c_1\mid c_2\mid 0)\in \mathrm{ker}_{\C}(\pi_t)\}$. From Theorem \ref{spannig double}, we have $Q_3h_3(g_1 \mid g_2)\in span(S_1\p\cup S_2\p)$. This implies that $Q_3h_3(g_1 \mid g_2 \mid 0)\in span(S_1 \cup S_2) $. Also, $R_3 (g_1 \mid g_2 \mid g_3)\in span(S_3)$, as $\de(R_3)<t-t_3$. Therefore, from equation (\ref{s4,s5}), $ d_3 \ast(g_1 \mid g_2 \mid g_3)\in span(S_1\cup S_2\cup S_3)$. Hence $\C \subseteq span(S_1\cup S_2\cup S_3)$. The cardinality formula follows as $S$ is linearly independent and contains $r+s+t-t_1-t_2-t_3$ elements. 
\end{proof} The following example illustrates this. \begin{example} Let $r=s=t=7$. Consider the factorization $x^7-1=(x+1)(x^3+x^2+1)(x^3+x+1)$ of $x^7-1$ over $\Z_2$. Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$, where $b=(x+1)(x^3+x^2+1)$, $l=(x+1)^2$, $a=(x+1)(x^3+x+1)$, $g_1=x+1$, $g_2=x^2+x$ and $g_3=(x+1)(x^3+x^2+1)$. Then $\C$ satisfies all the conditions given in Lemma \ref{l <b and g_1<b} and Lemma \ref{b|h_2l}. Therefore, $\C$ is a $\Z_2$-triple cyclic code of block length $(7,7,7)$. Also, $S=S_1\cup S_2\cup S_3$ forms a generating set for $\C$, where $S_1=\cup_{i=0}^2 x^i(x^4+x^2+x+1 \mid 0 \mid 0)$, $S_2=\cup_{i=0}^2 x^i(x^2+1 \mid x^4+x^3+x^2+1 \mid 0)$ and $S_3=\cup_{i=0}^2 x^i(x+1 \mid x+x^2 \mid x^4+x^2+x+1)$. The cardinality of $\C$ is $2^9$. Further, $\C$ is generated by the generator matrix $G$, where \begin{equation*}G=\left( \begin{array}{ccc} 1 \ 1 \ 1 \ 0 \ 1 \ 0 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \\ 0 \ 1 \ 1 \ 1 \ 0 \ 1 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \\ 0 \ 0 \ 1 \ 1 \ 1 \ 0 \ 1& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \\ 1 \ 0 \ 1 \ 0 \ 0 \ 0 \ 0& 1 \ 0 \ 1 \ 1 \ 1 \ 0 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \\ 0 \ 1 \ 0 \ 1 \ 0 \ 0 \ 0& 0 \ 1 \ 0 \ 1 \ 1 \ 1 \ 0& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \\ 0 \ 0 \ 1 \ 0 \ 1 \ 0 \ 0& 0 \ 0 \ 1 \ 0 \ 1 \ 1 \ 1& 0 \ 0 \ 0 \ 0 \ 0 \ 0 \ 0 \\ 1 \ 1 \ 0 \ 0 \ 0 \ 0 \ 0& 0 \ 1 \ 1 \ 0 \ 0 \ 0 \ 0& 1 \ 1 \ 1 \ 0 \ 1 \ 0 \ 0 \\ 0 \ 1 \ 1 \ 0 \ 0 \ 0 \ 0& 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 0& 0 \ 1 \ 1 \ 1 \ 0 \ 1 \ 0 \\ 0 \ 0 \ 1 \ 1 \ 0 \ 0 \ 0& 0 \ 0 \ 0 \ 1 \ 1 \ 0 \ 0& 0 \ 0 \ 1 \ 1 \ 1 \ 0 \ 1 \\ \end{array} \right).\end{equation*} \noindent Further, the minimum Hamming distance of $\C$ is $4$; therefore, $\C$ is a $[21, 9, 4]$ binary linear code, with Hamming weight distribution $[ <0, 1>, <4, 7>, <6, 21>, <8, 98>, <10, 154>, <12, 175>, <14, 49>, <16, 7> ]$. 
\end{example} \section{Duals of $\Z_2$-triple cyclic codes} In this section, we determine the duals of $\Z_2$-triple cyclic codes of block length $(r,s,t)$. In Theorem \ref{cyclic nature of dual}, it is shown that the dual $\C^\bot$ of a $\Z_2$-triple cyclic code $\C$ is also a $\Z_2$-triple cyclic code. Therefore, we may let $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ with $\hat{b} |x^r-1,\ \hat{a} |x^s-1$ and $\hat{g}_3 |x^t-1$ over $\Z_2$. Further, let $m=\mathrm{lcm}(r,s,t)$ and denote the polynomial $\sum_{i=0}^{m-1}x^i$ by $\theta_{m}(x) $. Then, by \cite[Proposition 4.2]{z2double}, we have the following result. \begin{proposition}Let $r,s,t\in\mathbb{N}$ and $m=\mathrm{lcm}(r,s,t)$. Then, $x^m-1=\theta_{\frac{m}{r}}(x^r)(x^r-1)=\theta_{\frac{m}{s}}(x^s)(x^s-1)=\theta_{\frac{m}{t}}(x^t)(x^t-1)$. \end{proposition} For any polynomials $f ,g \in \Z_2[x]$, we denote the g.c.d. of $f $ and $g $ by $(f ,g )$, and we extend this notation to three or more polynomials. For any polynomial $f $, the reciprocal of $f $ is defined as $f^\ast =x^{\mathrm{deg}(f)}f(\frac{1}{x})$. The following result is useful for our study. \begin{theorem}\label{properities of reciprocals} Let $f$ and $g$ be two binary polynomials such that $\mathrm{deg}(f)\geq \mathrm{deg}(g)$. Then \begin{enumerate} \item $\mathrm{deg}(f)\geq \mathrm{deg}(f^\ast)$, and equality holds if $x \nmid f$; \item $(fg)^\ast=f^\ast g^\ast$; \item $(f+g)^\ast=f^\ast+x^{\mathrm{deg}(f)-\mathrm{deg}(g)} g^\ast$; \item $g \mid f\Rightarrow g^\ast \mid f^\ast$; and \item $(f^\ast,g^\ast)=(f,g)^\ast$. \end{enumerate} \end{theorem} \begin{proof} The proofs of (1), (2) and (3) are straightforward. For (4), let $g \mid f$, so that $f=k g$ for some $k\in \Z_2[x]$. Then $f^\ast=k^\ast g^\ast$. Therefore $g^\ast \mid f^\ast$. \noindent For (5), from the definition of the g.c.d., there exist $m_1,m_2\in \Z_2[x]$ such that $(f,g)=m_1 f+m_2 g$. 
Assuming $\mathrm{deg}(m_1 f)\geq \mathrm{deg}(m_2 g) $, we get $$(f,g)^\ast=m_1^\ast f^\ast+x^{\mathrm{deg}(m_1 f)-\mathrm{deg}(m_2 g) }m_2^\ast g^\ast.$$ \noindent Again, as $(f^\ast,g^\ast) \mid f^\ast$ and $(f^\ast,g^\ast) \mid g^\ast$, we get $(f^\ast,g^\ast) \mid (f,g)^\ast$. On the other hand, $(f,g) \mid f$ implies that $(f,g)^\ast \mid f^\ast$. Similarly $(f,g)^\ast \mid g^\ast$. Hence $$(f,g)^\ast \mid (f^\ast, g^\ast).$$ \noindent The result follows. \end{proof} \begin{remark}If $x\nmid f$ or $x\nmid g$, then it is easy to prove that $\mathrm{deg}((f^\ast,g^\ast))=\mathrm{deg}((f,g)^\ast)=\mathrm{deg}((f,g))$. \end{remark} Now we define a mapping $\psi : \R_{r,s,t}[x] \times \R_{r,s,t}[x] \to \frac{\Z_2[x]}{\hull{x^m-1}}$ such that \begin{equation}{\psi(u ,v )= u_{1} \theta_{\frac{m}{r}}(x^r)x^{m-\de(v_{1})-1}v_{1}^\ast +u_{2} \theta_{\frac{m}{s}}(x^s) x^{m-\de(v_{2})-1}v_{2}^\ast +u_{3} \theta_{\frac{m}{t}}(x^t)x^{m-\de(v_{3})-1}v_{3}^\ast ,}\end{equation} \noindent where $u =(u_{1} \mid u_{2} \mid u_{3} ),\ v =(v_{1} \mid v_{2} \mid v_{3} )\in \R_{r,s,t}[x]$. The map $\psi$ is a bilinear map of $\Z_2[x]$-modules, and it generalizes the corresponding map defined in \cite{z2double} for $\Z_2$-double cyclic codes. \begin{lemma} Let $u=(u_{1} \mid u_{2} \mid u_{3}),\ v=(v_{1} \mid v_{2} \mid v_{3})$ be elements in $\RR$ with associated polynomials $u(x) =(u_{1}(x) \mid u_{2}(x) \mid u_{3}(x) )$ and $v(x) =(v_{1}(x) \mid v_{2}(x) \mid v_{3}(x) )$ in $\R_{r,s,t}[x]$. Then $u$ is orthogonal to $v$ and all its cyclic shifts if and only if $\psi(u ,v )=0$. \end{lemma} \begin{proof} Let $u=(u_{1,0},u_{1,1},\cdots,u_{1,r-1} \mid u_{2,0},u_{2,1},\cdots,u_{2,s-1} \mid u_{3,0},u_{3,1},\cdots,u_{3,t-1} )$ and $v=(v_{1,0},v_{1,1}, \cdots, \\ v_{1,r-1} \mid v_{2,0},v_{2,1},\cdots,v_{2,s-1} \mid v_{3,0},v_{3,1},\cdots,v_{3,t-1} )$ be two elements in $\RR$. Let $\sigma^{(i)}(v)$ denote the $i$-th shift of $v$, with subscripts taken modulo $r$, $s$ and $t$ in the respective blocks. 
Then $\sigma^{(i)}(v)=(v_{1,0+i},v_{1,1+i},\cdots, v_{1,i-1}\ \mid \ v_{2,0+i} , v_{2,1+i} ,\cdots, v_{2,i-1}\ \mid \ v_{3,0+i} , v_{3,1+i} ,\cdots, v_{3, i-1})$ for $1\leq i\leq m-1$, and $\sigma^{(0)}(v)=v$. Under the polynomial representation we have \begin{align*} \psi(u ,v ) &= \sum_{p=0}^{r-1}\left(\theta_{\frac{m}{r}}(x^r)\sum_{j=0}^{r-1}u_{1,j}v_{1,p+j}x^{m-1-p}\right) + \sum_{q=0}^{s-1}\left(\theta_{\frac{m}{s}}(x^s)\sum_{k=0}^{s-1}u_{2,k}v_{2,q+k}x^{m-1-q}\right)+\\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad\sum_{l=0}^{t-1}\left(\theta_{\frac{m}{t}}(x^t)\sum_{w=0}^{t-1}u_{3,w}v_{3,l+w}x^{m-1-l}\right). \end{align*} Rearranging the terms in the summation we get $$\psi(u,v )= \sum_{i=0}^{m-1}S_ix^{m-1-i}\ \mathrm{mod}(x^m-1),$$ \noindent where $S_i=\sum_{j=0}^{r-1}u_{1,j}v_{1,i+j}+\sum_{k=0}^{s-1}u_{2,k}v_{2,i+k}+\sum_{l=0}^{t-1}u_{3,l}v_{3,i+l}$. On the other hand, we have $u \cdot \sigma^{(i)}(v)=\sum_{j=0}^{r-1}u_{1,j}v_{1,i+j}+\sum_{k=0}^{s-1}u_{2,k}v_{2,i+k}+\sum_{l=0}^{t-1}u_{3,l}v_{3,i+l}=S_i$. Thus, $\psi(u,v )=0$ if and only if $S_i=0$ for all $0\leq i\leq m-1$. Hence the result. \end{proof} \begin{lemma}\label{b^*a=0} Let $u =(u_{1} \mid u_{2} \mid u_{3} )$ and $v=(v_{1} \mid v_{2} \mid v_{3} )$ in $\R_{r,s,t}[x]$ be such that $\psi(u ,v)=0$. Then \begin{enumerate} \item If $u_2 =0$ or $v_2=0$, and $u_3 =0$ or $v_3=0$, then $u_1 v_1^\ast =0\ (\mathrm{mod}\ x^r-1)$. \item If $u_1 =0$ or $v_1=0$, and $u_3 =0$ or $v_3=0$, then $u_2 v_2^\ast =0\ (\mathrm{mod}\ x^s-1)$. \item If $u_1 =0$ or $v_1=0$, and $u_2 =0$ or $v_2=0$, then $u_3 v_3^\ast =0\ (\mathrm{mod}\ x^t-1)$. \end{enumerate} \end{lemma} \begin{proof} Let $u_2 =0$ or $v_2=0$, and $u_3 =0$ or $v_3=0$. Then from the definition of $\psi$, we have $\psi(u, v )=u_{1} \theta_{\frac{m}{r}}(x^r)x^{m-\de(v_{1})-1}v_{1}^\ast =0\ (\mathrm{mod}\ x^m-1)$. This implies that $u_{1} \theta_{\frac{m}{r}}(x^r) x^{m-\de(v_{1})-1}v_{1}^\ast = (x^m-1)g$ for some $g \in\Z_2[x]$. 
Taking $f =x^{\de(v_{1})+1}g$, we get $u_{1} \theta_{\frac{m}{r}}(x^r)x^{m}v_{1}^\ast =f (x^m-1)$ and therefore $u_{1} (x^m-1)x^{m}v_{1}^\ast =(x^m-1)(x^r-1)f$. Since $x$ and $x^r-1$ are relatively prime, we have $u_1 v_1^\ast =0\ (\mathrm{mod}\ x^r-1)$. The other results can be proved similarly. \end{proof} The following result gives the cardinalities of the projections $\C_r$, $\C_s$ and $\C_t$ and their duals, which can further be used to obtain the duals of $\Z_2$-triple cyclic codes. \begin{theorem}\label{dimensios from the matrix} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid \ 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$. Then, \begin{align*} \mid\C_r\mid &= 2^{r-\de(b)+k_1} & \mid(\C_r)^\bot \mid &= 2^{\de(b,l,g_1)} & \mid(\C^\bot)_r\mid &= 2^{\de(b)}, \\ \mid\C_s\mid &= 2^{s-\de(a,g_2)} & \mid(\C_s)^\bot \mid &= 2^{\de(a,g_2)} & \mid(\C^\bot)_s\mid &= 2^{\de(a)+k_1}\ \mathrm{and}\\ \mid\C_t\mid &= 2^{t-\de(g_3)} & \mid(\C_t)^\bot \mid &= 2^{\de(g_3)} & \mid(\C^\bot)_t\mid &= 2^{\de(g_3)+k_2},\\ \end{align*} \noindent where $k_1=\de(b)-\de(b,l,g_1)$ and $k_2=\de(a)+\de(b)-\de(b,l,g_1)-\de(a,g_2)$. \end{theorem} \begin{proof} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot$ be its dual. Then from Theorem \ref{spanning sets}, $\C$ is spanned by $S=S_1\cup S_2 \cup S_3$, and therefore $\C$ is generated by the matrix whose rows are the elements of the set $S$. Let $C_1$, $C_2$ and $C_3$ be the subcodes of $\C$ generated by $S_1$, $S_2$ and $S_3$ respectively, and let $\textbf{G}_1$, $\textbf{G}_2$ and $\textbf{G}_3$ be their generator matrices, where $\textbf{G}_1=(I_{r-\de(b)}\hspace{3mm} A \mid 0 \mid 0),\textbf{G}_2=(B \mid C\hspace{3mm} I_{s-\de(a)} \mid 0),\textbf{G}_3=(D \mid E \mid F\hspace{3mm} I_{t-\de(g_3)})$. 
Then, the matrix $\textbf{G}=\left( \begin{array}{c} \textbf{G}_1 \\ \textbf{G}_2\\ \textbf{G}_3 \\ \end{array} \right)$ forms a generator matrix for $\C$.\\ Now we obtain an equivalent form of the matrix $\textbf{G}$ by adjusting its rows, so that we can use this equivalent form to find the cardinalities of the respective projections $\C_r$, $\C_s$ and $\C_t$. It is easy to see that $\C_r$ is generated by $(b,l,g_1)$, and this implies that the dimension of $\C_r$ is $r-\de(b,l,g_1)$, which is greater than or equal to $r-\de(b)$. Therefore, $B$ must have a submatrix, say $B_{k_1}$, of full rank $k_1=\de(b)-\de(b,l,g_1)$. Adding the corresponding rows of $\textbf{G}_2$ that contain $B_{k_1}$ to $\textbf{G}_1$, the matrix $\textbf{G}_2$ is reduced to the form $\textbf{G}_2^\prime=(B^\prime \mid C^\prime \hspace{3mm} I_{s-\de(a)-k_1} \mid 0)$. Similarly, as $\C_s$ is generated by $(a,g_2)$, the dimension of $\C_s$ is $s-\de(a,g_2)$. Therefore, the matrix $E$ must have a submatrix, say $E_{k_2}$, of full rank $k_2=\de(a)+\de(b)-\de(b,l,g_1)-\de(a,g_2)$. Again, adding the rows that contain the matrix $E_{k_2}$ to $\textbf{G}_2^\prime$, we get the remaining part of the generator matrix $\textbf{G}$ of $\C$ as $(D^\prime \mid E^\prime \mid F \hspace{3mm} I_{t-\de(g_3)-k_2})$. Therefore, the generator matrix $\textbf{G}$ of $\C$ is permutationally equivalent to the matrix $\textbf{G}^\prime$, where\\ $$\textbf{G}^\prime = \left( \begin{array}{ccc|ccccc|ccc} I_{r-\de(b)} & A_1 & A_2 & 0 & & & & & & & \\ 0 & B_{k_1} & B_1 & C_1 & I_{k_1} & & & & & & \\ & 0 & & C_2 & R_1 & I_{s-\de(a)-{k_1}} & 0 & & & & \\ & & & C_3 & R_2 & 0 & E_{k_2}& E_2 & D_1 & I_{k_2} & 0 \\ & & & & & & 0 & E_3 & D_2 & D_3 & I_{t-\de(g_3)-{k_2}} \end{array} \right).$$ \noindent The cardinalities of $\C_r$, $\C_s$ and $\C_t$ and their duals follow from $\textbf{G}^\prime$. 
The cardinalities of $(\C^\bot)_r$, $(\C^\bot)_s$ and $(\C^\bot)_t$ can be obtained by projecting the parity check matrix of $\C$ on the first $r$ coordinates, the next $s$ coordinates and the last $t$ coordinates, respectively. Hence the theorem. \end{proof} \begin{theorem}\label{for hat b} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0), (g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ be the dual of $\C$. Then $$\hat{b} =\frac{x^r-1}{(b,l,g_1)^\ast}.$$ \end{theorem} \begin{proof} First we determine the degree of $\hat{b} $. From the definition of $\C^\bot$, it is easy to show that $(\C_r)^\bot=\hull{\hat{b} }$. This implies that $\mid(\C_r)^\bot\mid=2^{r-\de(\hat{b} )}$. From Theorem \ref{dimensios from the matrix}, we have $\mid(\C_r)^\bot\mid=2^{\de(b,l,g_1)}$. Therefore \begin{equation}\label{for deg of hat b} \de(\hat{b} )=r-\de(b,l,g_1). \end{equation} Now, as $(\hat{b} \mid 0 \mid 0)\in \C^\bot$, from the definition of $\psi$, we have $$\psi\left((\hat{b} \mid 0 \mid 0),(b \mid 0 \mid 0)\right)=\psi\left((\hat{b} \mid 0 \mid 0),(l \mid a \mid 0)\right)=\psi\left((\hat{b} \mid 0 \mid 0),(g_1 \mid g_2 \mid g_3)\right)=0\ (\mathrm{mod}\ x^m-1).$$ This implies that $\hat{b}\ b^\ast=\hat{b}\ l^\ast=\hat{b}\ g_1^\ast=0\ (\mathrm{mod}\ x^r-1)$. Therefore, $\hat{b}\ \mathrm{gcd}(b^\ast,l^\ast,g_1^\ast)=0\ (\mathrm{mod}\ x^r-1)$. Since $(b^\ast,l^\ast,g_1^\ast)=(b,l,g_1)^\ast$, we have $\hat{b}\ (b,l,g_1)^\ast=0\ (\mathrm{mod}\ x^r-1)$. Then there exists $\lambda \in \Z_2[x]$ such that \begin{equation}\label{for hat b 1} \hat{b}\ (b,l,g_1)^\ast=\lambda (x^r-1). \end{equation} \noindent From equations (\ref{for deg of hat b}) and (\ref{for hat b 1}), we get $\lambda=1$ and hence $\hat{b}\ (b,l,g_1)^\ast =(x^r-1)$. 
\end{proof} \begin{lemma}\label{codeword for G_3} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$. Then $\left(0 \mid 0 \mid \frac{abg_3}{(b,l,g_1)(a,g_2)}\right)\in\C$.\end{lemma} \begin{proof} Since $(b \mid 0 \mid 0),(l \mid a \mid 0)\in \C$, we have $\frac{l}{(b,l,g_1)}(b \mid 0 \mid 0)+\frac{b}{(b,l,g_1)}(l \mid a \mid 0)=\left(0 \mid \frac{ab}{(b,l,g_1)} \mid 0\right)\in \C$. Similarly, $(b \mid 0 \mid 0),(g_1 \mid g_2 \mid g_3)\in \C$ implies that $\left( 0 \mid \frac{bg_2}{(b,l,g_1)} \mid \frac{bg_3}{(b,l,g_1)} \right)\in \C$. The gcd of $ \frac{ab}{(b,l,g_1)}$ and $\frac{bg_2}{(b,l,g_1)}$ is $\frac{b(a,g_2)}{(b,l,g_1)}$. Therefore, as $\left(0 \mid \frac{ab}{(b,l,g_1)} \mid 0\right),\left(0 \mid \frac{bg_2}{(b,l,g_1)} \mid \frac{bg_3}{(b,l,g_1)} \right) \in \C$, we have $\left(0\mid 0 \mid \frac{abg_3}{(a,g_2)(b,l,g_1)} \right)\in \C$. \end{proof} \begin{theorem} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ be the dual of $\C$. Then $$\hat{g}_3= \frac{(x^t-1)(b,l,g_1)^\ast(a,g_2)^\ast}{a^\ast b^\ast g_3^\ast}.$$ \end{theorem} \begin{proof} Again, we first determine the degree of $\hat{g}_3 $. From the definition of $\C^\bot$, it is easy to show that $(\C^\bot)_t=\hull{\hat{g}_3 }$, and this implies that $\mid(\C^\bot)_t\mid=2^{t-\de(\hat{g}_3)}$. Also from Theorem \ref{dimensios from the matrix}, we have $\mid(\C^\bot)_t\mid=2^{\de(g_3)+k_2}$. Therefore \begin{equation}\label{for deg of hat G_3} \de(\hat{g}_3)=t-\de(g_3)-\de(a)-\de(b)+\de(b,l,g_1)+\de(a,g_2). 
\end{equation} Now from Lemma \ref{codeword for G_3}, we have $\left(0 \mid 0 \mid \frac{abg_3}{(b,l,g_1)(a,g_2)}\right)\in\C$, and as $(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)\in \C^\bot$, we get $\psi \left((\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3),\left(0 \mid 0 \mid \frac{abg_3}{(b,l,g_1)(a,g_2)}\right) \right)=0\ (\mathrm{mod}\ x^m-1)$. This implies that $\hat{g}_3\left(\frac{abg_3}{(b,l,g_1)(a,g_2)}\right)^{\ast}=0\ (\mathrm{mod}\ x^t-1 )$, and hence $\hat{g}_3\frac{a^\ast b^\ast g_3^\ast}{(b,l,g_1)^\ast(a,g_2)^\ast}=\lambda^\prime (x^t-1)$ for some $\lambda^\prime \in \Z_2[x]$. From the degree of $\hat{g}_3$ given in equation (\ref{for deg of hat G_3}), we get $\lambda^\prime=1$. The result follows. \end{proof} \begin{theorem}\label{lambda for G1 and G2} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ be the dual of $\C$. Then, for some $\lambda_1,\ \lambda_2 \in\Z_2[x]$, we have \begin{equation*}\hat{g}_1 \ b^\ast =\lambda_1 (x^r-1)\qquad \mathrm{and} \qquad \hat{g}_2 \ a^\ast b^\ast =\lambda_2 (x^s-1)(b,l,g_1)^\ast.\end{equation*} \end{theorem} \begin{proof} From the definitions of $\C$ and $\C^\bot$, we have $\psi((\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3),(b \mid 0 \mid 0))=0$ $(\mathrm{mod}\ x^m-1)$. This implies that $\hat{g}_1 \ b^\ast =\lambda_1 (x^r-1)$ for some $\lambda_1 \in \Z_2[x]$. On the other hand, it is easy to show that $\left(0 \mid \frac{ab}{(b,l,g_1)} \mid 0\right)\in \C$. Therefore, $\psi\left((\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3),\left(0 \mid \frac{ab}{(b,l,g_1)} \mid 0\right)\right) =0\ (\mathrm{mod}\ x^m-1)$. This implies that $\hat{g}_2 \ a^\ast b^\ast =\lambda_2 (x^s-1)(b,l,g_1)^\ast$ for some $\lambda_2 \in \Z_2[x]$. 
\end{proof} In the following theorem, we obtain the explicit forms of $\lambda_1$ and $\lambda_2$ given in Theorem \ref{lambda for G1 and G2}. \begin{theorem}\label{evaluation of lambda for G1 and G2} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2\mid \hat{g}_3)}$ be the dual code of $\C$. Then, $\hat{g}_1 b^\ast =\lambda_1 (x^r-1)$ and $\hat{g}_2 \ a^\ast b^\ast =\lambda_2 (x^s-1)(b,l,g_1)^\ast$, where \begin{align*} \lambda_1&=\left(\frac{l^\ast}{(b,l,g_1)^\ast}\right)^{-1} \left(\frac{g_2^\ast}{(a,g_2)^\ast} \right)^{-1}g_2^\ast~ x^{2m+\de(l)-\de(a)+\de(g_2)-\de(g_3)} ~~~~ \left(\mathrm{mod}\ \ \left( \frac{(b,g_1)^\ast}{(b,l,g_1)^\ast},\ \frac{a^\ast}{(a,g_2)^\ast} \right)\right) \end{align*} \noindent {and} \begin{align*} \lambda_2&= \left(\frac{g_2^\ast}{(a,g_2)^\ast} \right)^{-1}~ x^{2m+\de(g_2)-\de(g_3)} ~~~~ \left(\mathrm{mod}\ \ \frac{a^\ast}{(a,g_2)^\ast} \right). \end{align*} \end{theorem} \begin{proof} Since $(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)\in \C$ and $(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)\in \C^\bot$, we have $\psi((\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3),(l \mid a \mid 0))=\psi((\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3),(g_1 \mid g_2 \mid g_3))=0\ (\mathrm{mod}\ x^m-1)$.
This implies that \begin{equation} \label{G1l+G2a} \hat{g}_1\theta_{\frac{m}{r}}(x^r)x^{m-1-\de(l)}l^\ast + \hat{g}_2\theta_{\frac{m}{s}}(x^s)x^{m-1-\de(a)}a^\ast = 0\ (\mathrm{mod}\ x^m-1)\end{equation} \noindent and \begin{multline}\label{g1g1+g2G2+g3g3} \hat{g}_1\theta_{\frac{m}{r}}(x^r)x^{m-1-\de(g_1)}g_1^\ast + \hat{g}_2\theta_{\frac{m}{s}}(x^s)x^{m-1-\de(g_2)}g_2^\ast \\ + \hat{g}_3\theta_{\frac{m}{t}}(x^t)x^{m-1-\de(g_3)}g_3^\ast = 0\ (\mathrm{mod}\ x^m-1). \end{multline} \noindent Substituting $\hat{g}_1$ and $\hat{g}_2$ from Theorem \ref{lambda for G1 and G2} in equations (\ref{G1l+G2a}) and (\ref{g1g1+g2G2+g3g3}), and rearranging the terms, we get \begin{equation} \label{G1l+G2a 2} (x^m-1)\frac{l^\ast}{b^\ast}~x^{m-1-\de(l)}\lambda_1+ (x^m-1)\frac{(b,l,g_1)^\ast a^\ast}{a^\ast b^\ast} ~x^{m-1-\de(a)}\lambda_2= 0\ (\mathrm{mod}\ x^m-1) \end{equation} \noindent and \begin{multline}\label{g1g1+g2G2+g3g3 2} (x^m-1)\frac{g_1^\ast}{b^\ast}~x^{m-1-\de(g_1)}\lambda_1 + (x^m-1)\frac{(b,l,g_1)^\ast g_2^\ast}{a^\ast b^\ast} ~x^{m-1-\de(g_2)}\lambda_2 \\ + (x^m-1)\frac{(a,g_2)^\ast (b,l,g_1)^\ast g_3^\ast}{a^\ast b^\ast g_3^\ast} ~x^{m-1-\de(g_3)} = 0\ (\mathrm{mod}\ x^m-1). \end{multline} \noindent From equations (\ref{G1l+G2a 2}) and (\ref{g1g1+g2G2+g3g3 2}), we get \begin{multline} \label{simlification 1 for lambda2} (x^m-1)\frac{(b,l,g_1)^\ast g_1^\ast}{ b^\ast} ~x^{2m-2-\de(a)-\de(g_1)}\lambda_2 + (x^m-1)\frac{(b,l,g_1)^\ast g_2^\ast l^\ast}{a^\ast b^\ast} ~x^{2m-2-\de(l)-\de(g_2)}\lambda_2 \\ + (x^m-1)\frac{(a,g_2)^\ast (b,l,g_1)^\ast l^\ast}{a^\ast b^\ast} ~x^{2m-2-\de(l)-\de(g_3)} = 0\ (\mathrm{mod}\ x^m-1).
\end{multline} \noindent Equation (\ref{simlification 1 for lambda2}) can be rewritten as \begin{equation} \label{simlification 2 for lambda2}\begin{split} (x^m-1)\frac{(a,g_2)^\ast (b,l,g_1)^\ast l^\ast}{a^\ast b^\ast}\left[\frac{g_1^\ast a^\ast}{(a,g_2)^\ast l^\ast} ~x^{2m-2-\de(a)-\de(g_1)} \lambda_2 + \frac{g_2^\ast}{(a,g_2)^\ast} ~x^{2m-2-\de(l)-\de(g_2)}\lambda_2 \right. \\ \left. + ~x^{2m-2-\de(l)-\de(g_3)}\right] = 0\ (\mathrm{mod}\ x^m-1).\end{split} \end{equation} \noindent This implies that \begin{equation*} \label{simlification 3 for lambda2} \left[\frac{g_1^\ast a^\ast}{(a,g_2)^\ast l^\ast} ~x^{2m-2-\de(a)-\de(g_1)} \lambda_2 + \frac{g_2^\ast}{(a,g_2)^\ast} ~x^{2m-2-\de(l)-\de(g_2)}\lambda_2 + ~x^{2m-2-\de(l)-\de(g_3)}\right] = 0\ (\mathrm{mod}\ x^m-1). \end{equation*} \noindent Since $\frac{a^\ast}{(a,g_2)^\ast}$ divides $x^m-1$, we get \begin{equation} \label{simlification 4 for lambda233} \frac{g_2^\ast}{(a,g_2)^\ast} ~x^{2m-2-\de(l)-\de(g_2)}\lambda_2 + ~x^{2m-2-\de(l)-\de(g_3)} = 0\ \mathrm{mod}\left(\frac{a^\ast}{(a,g_2)^\ast} \right). \end{equation} \noindent Since $\frac{a^\ast}{(a,g_2)^\ast}$ and $\frac{g_2^\ast}{(a,g_2)^\ast}$ are relatively prime, we get from equation (\ref{simlification 4 for lambda233}) \begin{equation*} \label{simlification 5 for lambda2} \lambda_2= \left(\frac{g_2^\ast}{(a,g_2)^\ast}\right)^{-1} ~x^{2m+\de(g_2)-\de(g_3)} \ \left(\mathrm{mod}\ \frac{a^\ast}{(a,g_2)^\ast}\right). \end{equation*} \noindent Substituting $\lambda_2$ in equation (\ref{G1l+G2a 2}) and using arguments similar to those used in finding $\lambda_2$, we get \begin{equation*} \label{simlification 3 for lambda1} \lambda_1= \left(\frac{l^\ast}{(b,l,g_1)^\ast}\right)^{-1} \left(\frac{g_2^\ast}{(a,g_2)^\ast}\right)^{-1} x^{2m-\de(g_3)-\de(a)+\de(g_2)+\de(l)}\ \ \ \ \left(\mathrm{mod}\ \left(\frac{(b,g_1)^\ast}{(b,l,g_1)^\ast}, \frac{a^\ast}{(a,g_2)^\ast}\right)\right). \end{equation*} \noindent Hence the result.
\end{proof} \begin{theorem} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ be the dual code of $\C$. Then, $$\hat{a} \ b^\ast(a,g_2)^\ast =(x^s-1)(b,l,g_1)^\ast.$$ \end{theorem} \begin{proof} First we determine the degree of $\hat{a}$. We note that $(\C^\bot)_s=\hull{(\hat{a},\hat{g}_2)}$. Also, from Theorem \ref{dimensios from the matrix}, we have $\mid (\C^\bot)_s \mid=2^{\de(a)+\de(b)-\de(b,l,g_1)}$. Therefore, we get $s-\de(\hat{a},\hat{g}_2)=\de(a)+\de(b)-\de(b,l,g_1)$. Similarly, we have \begin{equation}\label{degree hat a 1} s-\de(a,g_2)=\de(\hat{a})+\de(\hat{b})-\de(\hat{b},\hat{l},\hat{g}_1). \end{equation} \noindent Again, as $(\C^\bot)_r=\hull{(\hat{b},\hat{l},\hat{g}_1)}$, from Theorem \ref{dimensios from the matrix}, we have \begin{equation}\label{degree hat a 2} \de(b)=r-\de(\hat{b},\hat{l},\hat{g}_1). \end{equation} \noindent Therefore, from equations (\ref{degree hat a 1}), (\ref{degree hat a 2}) and from Theorem \ref{for hat b}, we get \begin{equation}\label{degree hat a 3} \de(\hat{a})=s-\de(a,g_2)-\de(b)+\de(b,l,g_1). \end{equation} Now since $\left(0 \mid \frac{a b}{(b,l,g_1)} \mid 0\right)$ and $\left(0 \mid \frac{b g_2}{(b,l,g_1)} \mid \frac{b g_3}{(b,l,g_1)}\right)$ are in $\C$, and $(\hat{l} \mid \hat{a} \mid 0)\in \C^\bot$, we have \\ $\psi\left((\hat{l} \mid \hat{a} \mid 0),\ \left(0 \mid \frac{a b}{(b,l,g_1)} \mid 0\right) \right)=\psi\left((\hat{l} \mid \hat{a} \mid 0),\ \left(0 \mid \frac{b g_2}{(b,l,g_1)} \mid \frac{b g_3}{(b,l,g_1)}\right)\right)=0$. This implies that $\hat{a}\frac{a^\ast b^\ast}{(b,l,g_1)^\ast}=\gamma_1(x^s-1)$ and $\hat{a}\frac{g_2^\ast b^\ast}{(b,l,g_1)^\ast}=\gamma_2(x^s-1)$ for some $\gamma_1,\gamma_2 \in \Z_2[x]$.
This implies that \begin{equation}\label{degree hat a 4} \hat{a}\frac{(a,g_2)^\ast b^\ast}{(b,l,g_1)^\ast}=\gamma(x^s-1), \end{equation} \noindent where $\gamma=(\gamma_1,\gamma_2)$. From equations (\ref{degree hat a 3}) and (\ref{degree hat a 4}), we get $\gamma =1$. The result follows. \end{proof} \begin{theorem} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ be the dual code of $\C$. Then $$\hat{l}\ b^\ast =\beta(x^r-1),$$ \noindent where $\beta =\left(\frac{l^\ast}{(b,l,g_1)^\ast}\right)^{-1}\frac{a^\ast}{(a,g_2)^\ast}\ x^{m+\de(l)-\de(a)} \ \left(\mathrm{mod}\ \frac{(b,g_1)^\ast}{(b,l,g_1)^\ast}\right)$. \end{theorem} \begin{proof} Since $(b \mid 0 \mid 0)\in \C$ and $(\hat{l} \mid \hat{a} \mid 0)\in \C^\bot$, we have $\hat{l} b^\ast = 0 \ (\mathrm{mod}\ x^r-1)$. Therefore $\hat{l} \ b^\ast =\beta(x^r-1)$ for some $\beta \in \Z_2[x]$. Again, as $(\hat{l} \mid \hat{a} \mid 0)\in \C^\bot$ and $(l \mid a \mid 0)\in \C$, we have $\psi\left((\hat{l} \mid \hat{a} \mid 0),\ (l \mid a \mid 0)\right)=0\ (\mathrm{mod}\ x^m-1)$.
This implies that \begin{equation} \label{for hat l 1} \hat{l}\theta_{\frac{m}{r}}(x^r)x^{m-1-\de(l)}l^\ast + \hat{a}\theta_{\frac{m}{s}}(x^s)x^{m-1-\de(a)}a^\ast = 0\ (\mathrm{mod}\ x^m-1).\end{equation} \noindent Substituting $\hat{l}$ and $\hat{a}$ in equation (\ref{for hat l 1}), we get \begin{equation} \label{simplification 2 for hat l} (x^m-1)\frac{l^\ast}{b^\ast}x^{m-1-\de(l)}\beta+ (x^m-1) \frac{(b,l,g_1)^\ast}{b^\ast (a,g_2)^\ast }a^\ast x^{m-1-\de(a)} = 0\ (\mathrm{mod}\ x^m-1).\end{equation} \noindent Rearranging the terms in equation (\ref{simplification 2 for hat l}), we get \begin{equation} \label{simplification 3 for hat l} (x^m-1)\frac{(b,l,g_1)^\ast}{b^\ast} \left[\frac{l^\ast}{(b,l,g_1)^\ast}x^{m-1-\de(l)}\beta+\frac{a^\ast}{(a,g_2)^\ast} x^{m-1-\de(a)}\right] = 0\ (\mathrm{mod}\ x^m-1).\end{equation} \noindent With similar arguments as in Theorem \ref{evaluation of lambda for G1 and G2}, we get \begin{equation} \label{simplification 4 for hat l} \beta =\left(\frac{l^\ast}{(b,l,g_1)^\ast}\right)^{-1}\frac{a^\ast}{(a,g_2)^\ast}\ x^{m+\de(l)-\de(a)} \ \left(\mathrm{mod}\ \frac{(b,g_1)^\ast}{(b,l,g_1)^\ast}\right).\end{equation} \noindent Hence the result. \end{proof} Summarising the previous results we have the following theorem. \begin{theorem}\label{Summarising} Let $\C=\hull{(b \ \mid \ 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$ be the dual code of $\C$. 
Then \begin{enumerate} \item $\hat{b} =\frac{x^r-1}{(b,l,g_1)^\ast}$;\vspace{3mm} \item $\hat{g}_3= \frac{(x^t-1)(b,l,g_1)^\ast(a,g_2)^\ast}{a^\ast b^\ast g_3^\ast }$;\vspace{3mm} \item $\hat{a} =\frac{(x^s-1)(b,l,g_1)^\ast}{(a,g_2)^\ast b^\ast}$;\vspace{3mm} \item $\hat{g}_1 \ b^\ast =\lambda_1 (x^r-1)$, $\hat{g}_2 \ a^\ast b^\ast =\lambda_2 (x^s-1)(b,l,g_1)^\ast$, where \begin{align*} \lambda_1&=\left(\frac{l^\ast}{(b,l,g_1)^\ast}\right)^{-1} \left(\frac{g_2^\ast}{(a,g_2)^\ast} \right)^{-1}g_2^\ast~ x^{2m+\de(l)-\de(a)+\de(g_2)-\de(g_3)} ~~~~ \left(\mathrm{mod}\ \ \left( \frac{(b,g_1)^\ast}{(b,l,g_1)^\ast},\ \frac{a^\ast}{(a,g_2)^\ast} \right)\right) \end{align*} \noindent {and} \begin{align*} \lambda_2&= \left(\frac{g_2^\ast}{(a,g_2)^\ast} \right)^{-1}~ x^{2m+\de(g_2)-\de(g_3)} ~~~~ \left(\mathrm{mod}\ \ \frac{a^\ast}{(a,g_2)^\ast} \right). \end{align*} \item $\hat{l} \ b^\ast =\beta(x^r-1),$ \noindent where $\beta =\left(\frac{l^\ast}{(b,l,g_1)^\ast}\right)^{-1}\frac{a^\ast}{(a,g_2)^\ast}\ x^{m+\de(l)-\de(a)} \ \left(\mathrm{mod}\ \frac{(b,g_1)^\ast}{(b,l,g_1)^\ast}\right)$. \end{enumerate} \end{theorem} \begin{example} Let $r=10,s=12$ and $t=15$. Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$, where $b=x^6+x^5+x+1$, $l=x^5+1$, $a=x^6+1$, $g_1=x^5+1$, $g_2=x^5+x^4+x^2+x$ and $g_3=x^{12}+x^9+x^6+x^5+x^4+x^2+x+1$. $\C$ satisfies all the conditions given in Lemma \ref{l <b and g_1<b} and Lemma \ref{b|h_2l}. Therefore, $\C$ is a $\Z_2$-triple cyclic code of block length $(10,12,15)$. Also, $S=S_1\cup S_2\cup S_3$ forms a generating set for $\C$, where $S_1=\cup_{i=0}^3 x^i(x^6+x^5+x+1 \mid 0 \mid 0)$, $S_2=\cup_{i=0}^5 x^i(x^5+1 \mid x^6+1 \mid 0)$ and $S_3=\cup_{i=0}^2 x^i(x^5+1 \mid x^5+x^4+x^2+x \mid x^{12}+x^9+x^6+x^5+x^4+x^2+x+1)$. The cardinality of $\C$ is $2^{13}$.
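The numbers in this example lend themselves to a quick machine cross-check. A minimal sketch in Python (using \texttt{sympy} for arithmetic in $\Z_2[x]$; the helpers \texttt{P} and \texttt{recip} are ad hoc, and only the dimension count and the dual generators $\hat{b}$ and $\hat{a}$ from the theorem above are verified):

```python
# Cross-check of the example over Z_2[x] (sympy polynomials with modulus=2).
from sympy import symbols, Poly

x = symbols('x')

def P(e):
    # polynomial over GF(2)
    return Poly(e, x, modulus=2)

def recip(p):
    # reciprocal polynomial p^*(x) = x^deg(p) * p(1/x)
    return Poly(list(reversed(p.all_coeffs())), x, modulus=2)

r, s, t = 10, 12, 15
b  = P(x**6 + x**5 + x + 1)
l  = P(x**5 + 1)
a  = P(x**6 + 1)
g1 = P(x**5 + 1)
g2 = P(x**5 + x**4 + x**2 + x)
g3 = P(x**12 + x**9 + x**6 + x**5 + x**4 + x**2 + x + 1)

blg = b.gcd(l).gcd(g1)   # (b, l, g_1) = x^5 + 1
ag  = a.gcd(g2)          # (a, g_2)

# |C| = 2^k with k = (r - deg b) + (s - deg a) + (t - deg g_3)
k = (r - b.degree()) + (s - a.degree()) + (t - g3.degree())
print(k)  # 13

# hat{b} = (x^r - 1)/(b, l, g_1)^*
hat_b, rem_b = P(x**r + 1).div(recip(blg))
print(rem_b.is_zero, hat_b == P((x + 1)*(x**4 + x**3 + x**2 + x + 1)))

# hat{a} = (x^s - 1)(b, l, g_1)^* / ((a, g_2)^* b^*)
hat_a, rem_a = (P(x**s + 1) * recip(blg)).div(recip(ag) * recip(b))
print(rem_a.is_zero, hat_a == P((x + 1)*(x**2 + x + 1)**3))
```

The vanishing remainders confirm that the divisions prescribed by the theorem are exact for these generators.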
$\C$ is generated by the matrix \begin{equation*}G=\left( \begin{array}{ccc} 1 1 0 0 0 1 1 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 1 1 0 0 0 1 1 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 0 1 1 0 0 0 1 1 0 & 0 0 0 0 0 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 0 0 1 1 0 0 0 1 1 & 0 0 0 0 0 0 0 0 0 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 1 0 0 0 0 1 0 0 0 0 & 1 0 0 0 0 0 1 0 0 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 1 0 0 0 0 1 0 0 0 & 0 1 0 0 0 0 0 1 0 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 0 1 0 0 0 0 1 0 0 & 0 0 1 0 0 0 0 0 1 0 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 0 0 1 0 0 0 0 1 0 & 0 0 0 1 0 0 0 0 0 1 0 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 0 0 0 0 1 0 0 0 0 1 & 0 0 0 0 1 0 0 0 0 0 1 0 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 1 0 0 0 0 1 0 0 0 0 & 0 0 0 0 0 1 0 0 0 0 0 1 & 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 \\ 1 0 0 0 0 1 0 0 0 0 & 0 1 1 0 1 1 0 0 0 0 0 0 & 1 1 1 0 1 1 1 0 0 1 0 0 1 0 0 \\ 0 1 0 0 0 0 1 0 0 0 & 0 0 1 1 0 1 1 0 0 0 0 0 & 0 1 1 1 0 1 1 1 0 0 1 0 0 1 0 \\ 0 0 1 0 0 0 0 1 0 0 & 0 0 0 1 1 0 1 1 0 0 0 0 & 0 0 1 1 1 0 1 1 1 0 0 1 0 0 1 \\ \end{array} \right).\end{equation*} \noindent Further, the minimum Hamming distance of $\C$ is $4$ and therefore $\C$ is a $[37, 13, 4]$ binary linear code. From Theorem \ref{Summarising}, we have the dual code of $\C$ as $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0),(\hat{g}_1 \mid \hat{g}_2 \mid \hat{g}_3)}$, where $\hat{b}=(x+1)(x^4+x^3+x^2+x+1)$, $\hat{a}=(x+1)(x^2+x+1)^3$, $\hat{g}_3=1$, $\hat{g}_1=\hat{l}=0$ and $\hat{g}_2=(x+1)^2(x^2+x+1)^2$. \end{example} Let $t=0$. Then, taking $g_1=g_2=g_3=0$, we have $(b,l,g_1)=(b,l)$ and $(b,g_1)=b$, and hence from Theorem \ref{Summarising} we see that $\Z_2$-double cyclic codes are a special case of the family of codes we are considering. Thus we have the following result.
\begin{corollary} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0)}$ be a $\Z_2$-double cyclic code of block length $(r,s)$ and $\C^\bot=\hull{(\hat{b} \mid 0 \mid 0),(\hat{l} \mid \hat{a} \mid 0)}$ be the dual code of $\C$. Then \begin{enumerate} \item $\hat{b} =\frac{x^r-1}{(b,l)^\ast}$;\vspace{3mm} \item $\hat{a} \ a^\ast b^\ast =(x^s-1)(b,l)^\ast$;\vspace{3mm} \item $\hat{l} \ b^\ast =\beta(x^r-1),$ \noindent where $\beta=\left(\frac{l^\ast}{(b,l)^\ast} \right)^{-1}\ x^{m-\de(a)+\de(l)}\ \left(\mathrm{mod}\ \frac{b^\ast}{(b,l)^\ast} \right)$. \end{enumerate} \end{corollary} Let $\C=\hull{(b \mid 0 \mid 0),(l \mid a \mid 0),(g_1 \mid g_2 \mid g_3)}$ be a $\Z_2$-triple cyclic code of block length $(r,s,t)$ as in Theorem \ref{z2double}. If $b | l$, $b|g_1$ and $a|g_2$, then $\C =\hull{(b \mid 0 \mid 0),(0 \mid a \mid 0),(0 \mid 0 \mid g_3)}$. We note that $\C_r=\hull{b}$, $\C_s=\hull{a}$ and $\C_t=\hull{g_3}$ and $\C=\C_r \times \C_s \times \C_t$. Hence $\C$ is separable. The generating matrix of $\C$ is permutationally equivalent to the matrix\\ $$\textbf{G} = \left( \begin{array}{cc|cc|cc} I_{r-\de(b)} & A & 0 & 0 & 0 & 0 \\ 0 & 0 & I_{s-\de(a)} & B & 0 & 0 \\ 0 & 0 & 0 & 0 & C & I_{t-\de(g_3)} \end{array} \right).$$ The following theorem shows that the dual of a separable $\Z_2$-triple cyclic code is also separable. \begin{theorem}Let $\C =\hull{(b \mid 0 \mid 0),(0 \mid a \mid 0),(0 \mid 0 \mid g_3)}$ be a separable $\Z_2$-triple cyclic code of block length $(r,s,t)$. Then \begin{enumerate} \item $\C^\bot$ is also a separable $\Z_2$-triple cyclic code of block length $(r,s,t)$;\vspace{2mm} \item $\C^\bot=\left\langle\left(\frac{x^r-1}{b^\ast} \mid 0 \mid 0 \right),\left(0 \mid \frac{x^s-1}{a^\ast} \mid 0 \right),\left(0 \mid 0 \mid \frac{x^t-1}{g_3^\ast} \right) \right\rangle$, and \vspace{2mm} \item $d_{min}(\C)= \mathrm{min}\{d_{min}(\C_r),d_{min}(\C_s),d_{min}(\C_t)\}$.
\end{enumerate} \end{theorem} \begin{proof} As $l=g_1=g_2=0$, the proof follows from Theorem \ref{Summarising}. \end{proof} \begin{remark}If $\C$ is a non-separable $\Z_2$-triple cyclic code of block length $(r,s,t)$, then $d_{min}(\C)\geq \mathrm{min}\{d_{min}(\C_r),d_{min}(\C_s),d_{min}(\C_t)\}$. \end{remark} \begin{example} Let $r=6,s=4$ and $t=5$. Let $\C=\hull{(b \mid 0 \mid 0),(0 \mid a \mid 0),(0 \mid 0 \mid g_3)}$, where $b=x^5+x^4+x^3+x^2+x+1$, $a=x^3+x^2+x+1$ and $g_3=x^4+x^3+x^2+x+1$. Then $\C$ is a separable $\Z_2$-triple cyclic code of block length $(6,4,5)$. Also, $S=\{(x^5+x^4+x^3+x^2+x+1 \mid 0 \mid 0),(0 \mid x^3+x^2+x+1 \mid 0),(0 \mid 0 \mid x^4+x^3+x^2+x+1 )\}$ forms a generating set for $\C$. The cardinality of $\C$ is $2^3$. Further, the minimum Hamming distance of $\C$ is $4$ and therefore $\C$ is a $[15, 3, 4]$ binary linear code with the Hamming weight distribution given by $[ <0,\ 1>, <4,\ 1>, <5,\ 1>, <6,\ 1>, <9,\ 1>, <10,\ 1>, <11,\ 1>, <15,\ 1> ]$. The dual of $\C$ is also a separable $\Z_2$-triple cyclic code of block length $(6,4,5)$ such that $\C^\bot=\hull{(x+1 \mid 0 \mid 0),(0 \mid x+1 \mid 0),(0 \mid 0 \mid x+1)}$ with minimum Hamming distance $2$, and therefore it is a $[15,12,2]$ binary code.\end{example} \section{Conclusion} In this paper we have considered $\Z_2$-triple cyclic codes of block length $(r,s,t)$. We have studied the structure of these codes and determined the form of their generators. We have determined the size of $\Z_2$-triple cyclic codes by giving a minimal spanning set. We have also studied the relationship between the generators of $\Z_2$-triple cyclic codes and their duals, and determined the generators of the dual of a $\Z_2$-triple cyclic code.
\section{Introduction} Lattice techniques have proved remarkably useful in the quantization of usual gauge theories. This raised the hope that they may also prove useful in the quantization of gravity. A major difference however is that most theories of gravity of interest are invariant under diffeomorphisms and the introduction of a discrete structure breaks diffeomorphism invariance. One of the appealing features of lattice gauge theories is therefore lost in this case, one breaks the symmetry of the theory of interest. The situation gets further compounded in the case of canonical general relativity, since there one also breaks four dimensional covariance into a $3+1$ dimensional split. Spatial diffeomorphisms get implemented via a constraint that has a natural geometrical action and the usual algebra of diffeomorphisms is implemented via the constraint algebra. But the remaining space-time diffeomorphism gets implemented through the complicated Hamiltonian constraint, that has a challenging algebra with spatial diffeomorphisms. In particular the algebra of constraints has structure functions. If we call $C(\vec{N})$ the diffeomorphism constraint smeared by a test vector field (shift) $\vec{N}$ and $H(N)$ the Hamiltonian constraint smeared by a scalar lapse $N$, the constraint algebra is, \begin{eqnarray} \left\{C(\vec{N}),C(\vec{M})\right\}=C([\vec{N},\vec{M}])\\ \left\{C(\vec{N}),H(M)\right\}=H({\cal L}_{\vec{N}}M)\\ \left\{H({N}),H(M)\right\}=C(\vec{K}(q)), \end{eqnarray} where the vector $K^a=q^{ab}(N\partial_a M-M\partial_a N)$ and $q^{ab}$ is the spatial metric. The last Poisson bracket therefore involves structure functions depending on the canonical variables on the right hand side. The algebra of constraints poses important complications in the context of loop quantum gravity when one wishes to implement it as an operator algebra at a quantum level (see \cite{ThiemannGiesel} for a lengthier discussion). 
In particular, if one chooses spin network states with the usual Ashtekar-Lewandowski \cite{AsLe} measure, they form a non-separable Hilbert space. In it, diffeomorphisms are not implemented in a weakly continuous fashion, i.e. finite diffeomorphisms can be represented but infinitesimal ones cannot. This implies that in loop quantum gravity one treats the spatial and temporal diffeomorphisms very asymmetrically. Whereas invariance under spatial diffeomorphisms is implemented via a group averaging procedure \cite{groupaveraging}, invariance under the remaining space-time diffeomorphisms is to be implemented by solving a quantum operatorial equation corresponding to the Hamiltonian constraint. Since the Poisson bracket of two Hamiltonian constraints involves the infinitesimal generator of diffeomorphisms, which is not well defined as a quantum operator, one cannot expect to implement the Poisson algebra at an operatorial level in the quantum theory, at least in the kinematical Hilbert space. A symmetric treatment of the diffeomorphism and Hamiltonian constraints requires developing a technique that allows one to implement the generators of spatial diffeomorphisms as operators in the loop representation. One could attempt to treat the diffeomorphism and Hamiltonian constraints on the same footing, for instance by lattice regularizing them. Unfortunately, such discretized versions of the constraints are not first class. If one treats them properly with the Dirac procedure, the resulting theory is vastly different in symmetries and even in the number of degrees of freedom from what one expects to have in the continuum theory. Therefore there is little chance that one could define a continuum theory as a suitable limit of the constructed lattice theories. These problems have led to the consideration of extensions of the Dirac procedure that could better accommodate this particular problem with the constraint algebra.
One such approach is the ``master constraint'' programme of Thiemann and collaborators \cite{master}. Another approach that we have been studying in the last few years are the ``uniform discretizations'' \cite{uniform}. Both approaches have some elements in common. Uniform discretizations are discrete versions of a constrained theory in which the discretized form of the constraints are quantities whose values are under control throughout the system's evolution. Notice that this would not be the case, for instance, if one simply takes a constrained theory and discretizes it. Initial data on which the discrete version of the constraints vanishes will evolve into data with non-vanishing values of the discrete constraints, without any control on the final value. This situation is well known, for instance, in numerical relativity. Uniform discretizations are designed in such a way that the discrete constraints are kept under control upon evolution and that one can take appropriate limits in the initial data such that one can satisfy the constraints to an arbitrary (and controlled) degree of accuracy. This therefore guarantees the existence of a well defined continuum limit at the level of the classical theory. It has been shown \cite{discreteexamples} that the uniform discretization technique is classically equivalent to the Dirac procedure when the constraints are first class. For second class constraints, like the ones that arise when one discretizes continuum systems with first class constraints the uniform discretization technique is radically different from the Dirac procedure, yielding a dynamical evolution that recovers in the continuum limit the continuum theory one started with. Although the existence of a continuum limit is generically guaranteed at a classical level, it is not obvious that it is at the quantum level. 
It is known \cite{discreteexamples} that there are models in which the continuum limit cannot be achieved and one is left with a non-zero minimum value of the expectation value of the sum squared of the constraints. It is therefore of interest to show that in examples of growing complexity and of increasing similarity to general relativity one can indeed define a continuum quantum theory with the desired symmetries by applying the uniform discretization procedure. The purpose of this paper is to discuss one such model. We will consider the quantization via uniform discretizations of a $1+1$ dimensional model with diffeomorphism symmetry and we will show that the symmetry is recovered at the quantum level correctly. This raises the hopes of having a theory where all the constraints are treated on an equal footing. The organization of this paper is as follows. In section II we discuss the model we will consider. In section III we discretize the model. In section IV we review the uniform discretization procedure and how it departs from the Dirac traditional approach. Section VI discusses the quantization using uniform discretizations and how one recovers the correct continuum limit. We conclude with a discussion. \section{The model} We would like to construct a model by considering spherically symmetric gravity and ignoring the Hamiltonian constraint. This is analogous to building a ``Husain--Kuchar'' \cite{husainkuchar} version of spherically symmetric gravity. It is known that these models correspond to degenerate space-times when translated in terms of the metric variables. We refer the reader to our previous work on spherically symmetric gravity \cite{spherical} for the setup of the model in terms of Ashtekar's new variables. Just as a recap, the model has two canonical pairs $K_x, E^x$ and $K_\varphi,E^\varphi$. 
The relation to the more traditional metric canonical variables is, \begin{eqnarray} g_{xx}&=& \frac{(E^\varphi)^2}{|E^x|},\qquad g_{\theta\theta} = |E^x|,\\ K_{xx}&=&-{\rm sign}(E^x) \frac{(E^\varphi)^2}{\sqrt{|E^x|}}K_x,\qquad K_{\theta\theta} = -\sqrt{|E^x|} {K_\varphi} \end{eqnarray} and we have set the Immirzi parameter to one for simplicity, since it does not play a role in this analysis. The Lagrangian for spherically symmetric gravity ignoring the Hamiltonian constraint is, \begin{equation} L = \int dx E^x \dot{K}_x+E^\varphi \dot{K}_\varphi +N \left((E^x)'K_x - E^\varphi (K_\varphi)'\right) \end{equation} with $N$ a Lagrange multiplier (the radial component of the shift vector). The equations of motion are \begin{eqnarray} \dot{K}_x-\left(NK_x\right)' &=&0,\\ \dot{E}^x-N\left(E^x\right)' &=&0,\\ \dot{K}_\varphi-NK'_\varphi &=&0,\\ \dot{E}^\varphi-\left(NE^\varphi\right)' &=&0. \end{eqnarray} The theory has one constraint, which is the remaining diffeomorphism constraint in the radial $(x)$ direction, $\phi= -\left(E^x\right)'K_x+E^\varphi K'_\varphi$, which we will write smeared as $\phi(N)=\int dx N \phi$. The constraint generates diffeomorphisms of the fields, with $K_\varphi$ and $E^x$ behaving as scalars and $K_x$ and $E^\varphi$ as densities of weight one, \begin{eqnarray} \delta K_\varphi &=& \left\{K_\varphi,\phi(N)\right\}=N K'_\varphi,\\ \delta K_x &=& \left\{K_x,\phi(N)\right\}=\left(N K_x\right)',\\ \delta E^\varphi &=& \left\{E^\varphi,\phi(N)\right\}=\left(N E^\varphi\right)',\\ \delta E^x &=& \left\{E^x,\phi(N)\right\}=N \left(E^x\right)'. \end{eqnarray} The constraint has the usual algebra of diffeomorphisms, \begin{equation} \left\{\phi(N),\phi(M)\right\}=\phi\left(N M'-M N'\right). \end{equation} Observables are integrals of densities of weight one constructed with the fields, for example, $O=\int dx f(E^x,K_\varphi)K_x$ with $f$ a function.
One then has \begin{equation} \left\{O, \phi(N)\right\}=\int dx \left[\frac{\partial f}{\partial E^x} N \left(E^x\right)' +\frac{\partial f}{\partial K_\varphi} N K'_\varphi+ \left(NK_x\right)' f\right] = \int dx \partial_x \left(f NK_x\right)=0, \end{equation} if one considers a compact spatial manifold, $S^{1}$, which we will do throughout this paper. (This may not make a lot of sense if one is thinking of the model as a reduction of $3+1$ spherical symmetry, but we are just avoiding including boundary terms, which are straightforward to treat in the spherical case, see \cite{spherical}, in order to simplify the discussion of diffeomorphism invariance). \section{Discretization} We now proceed to discretize the model. The spatial direction $x$ is discretized into points $x_i$ such that $x_{i+1}-x_i=\epsilon_i$ and the distances are smaller than a bound $d(\epsilon_i)<d_\epsilon$ when measured in some fiducial metric. To simplify notation, from now on we will assume the points are equally spaced and drop the suffix $i$ on $\epsilon$, but the analysis can be straightforwardly extended to the case with variable $\epsilon_i$. The variables of the model become $K_{x,i}=K_x(x_i)$, $K_{\varphi,i}=K_\varphi(x_i)$, $E^x_i=\epsilon E^x(x_i)$ and $E^\varphi_i=\epsilon E^\varphi(x_i)$. The constraint is, \begin{equation} \phi_i =E^\varphi_i\left(K_{\varphi,i+1}-K_{\varphi,i}\right) -K_{x,i} \left(E^x_{i+1}-E^x_i\right). \end{equation} The constraint algebra is not first class, i.e., \begin{eqnarray} \left\{\phi_i,\phi_j\right\}&=&-E^\varphi_{i-1}\left(K_{\varphi,i+1}-K_{\varphi,i} \right) \delta_{i,j+1}+E^\varphi_{j-1}\left(K_{\varphi,j+1}-K_{\varphi,j} \right)\delta_{j,i+1}\nonumber\\ &&+K_{x,i-1}\left(E^{x}_{i+1}-E^x_i \right) \delta_{i,j+1}-K_{x,j-1}\left(E^x_{j+1}-E^x_j \right)\delta_{j,i+1}, \end{eqnarray} which does not reproduce the constraint.
What one has is a ``classical anomaly'' of the form $\left(E^\varphi_{i+1}-E^\varphi_{i}\right) \left(K_{\varphi,i}-K_{\varphi,i-1}\right) -\left(E^x_{i+1}-E^x_i\right)\left(K_{x,i}-K_{x,i-1}\right)$. These terms would tend to zero if one takes the separation $\epsilon$ to zero and the variables behave continuously in such a limit. So if one were to simply quantize the discrete model, one would run into trouble since one would be quantizing a classical theory with second class constraints. We will expand more on the problems one faces in the next section. In this paper we would like to show that in spite of this problem of the classical theory, which implies that the discrete theory loses diffeomorphism invariance, if one follows the uniform discretization approach to quantization the diffeomorphism invariance is recovered in the limit $\epsilon\to 0$ both at the classical and quantum level. In the uniform discretization approach one constructs a ``master constraint'' ${\mathbb H}$ by considering the sum of the discretized constraints squared. One then promotes the resulting quantity to a quantum operator and seeks the eigenstates of $\hat{\mathbb H}$ with minimum eigenvalue. In the full theory the quantity ${\mathbb H}$ would be constructed from the diffeomorphism constraints $\phi_a$ as, \begin{equation} {\mathbb H} =\frac{1}{2} \int dx \phi_a \phi_b \frac{g^{ab}}{\sqrt{g}}, \end{equation} which motivates in our example to choose, \begin{equation} {\mathbb H} = \frac{1}{2}\int dx \phi \phi \frac{\sqrt{E^x}}{\left(E^\varphi\right)^3}, \end{equation} or, in the discretized theory, \begin{equation} {\mathbb H}^\epsilon = \frac{1}{2}\sum_{i=0}^N \phi_i \phi_i \frac{\sqrt{E^x_i}}{\left(E^\varphi_i\right)^3} \epsilon^{-1/2}. \end{equation} To understand better how to promote these quantities to quantum operators, it is best to start with the constraint itself.
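Before doing so, it is worth probing the size of the classical anomaly numerically. A minimal sketch in Python (the smooth periodic profiles are an arbitrary choice for illustration, sampled at spacing $\epsilon=1/N$ with the scalings $E^x_i=\epsilon E^x(x_i)$, $E^\varphi_i=\epsilon E^\varphi(x_i)$ of the discretization above):

```python
# Check that the "classical anomaly"
#   A_i = (Ephi_{i+1}-Ephi_i)(Kphi_i-Kphi_{i-1}) - (Ex_{i+1}-Ex_i)(Kx_i-Kx_{i-1})
# vanishes faster than the discrete constraint phi_i as the lattice is refined.
import numpy as np

def lattice_data(N):
    eps = 1.0 / N
    xs = np.arange(N) * eps                  # periodic lattice on [0, 1)
    Kx   = np.sin(2*np.pi*xs)                # smooth periodic profiles
    Kphi = np.cos(2*np.pi*xs)                # (arbitrary, for illustration)
    Ex   = eps * (2.0 + np.cos(4*np.pi*xs))  # E^x_i   = eps * E^x(x_i)
    Ephi = eps * (2.0 + np.sin(2*np.pi*xs))  # E^phi_i = eps * E^phi(x_i)
    return Kx, Kphi, Ex, Ephi

def anomaly_and_constraint(N):
    Kx, Kphi, Ex, Ephi = lattice_data(N)
    # phi_i = Ephi_i (Kphi_{i+1}-Kphi_i) - Kx_i (Ex_{i+1}-Ex_i)
    phi = Ephi * (np.roll(Kphi, -1) - Kphi) - Kx * (np.roll(Ex, -1) - Ex)
    A = (np.roll(Ephi, -1) - Ephi) * (Kphi - np.roll(Kphi, 1)) \
        - (np.roll(Ex, -1) - Ex) * (Kx - np.roll(Kx, 1))
    return np.abs(A).max(), np.abs(phi).max()

a100, p100 = anomaly_and_constraint(100)
a200, p200 = anomaly_and_constraint(200)
print(a100 / p100, a200 / p200)  # relative anomaly shrinks as N grows
```

For smooth data every finite difference carries a power of $\epsilon$, so the anomaly scales as $\epsilon^3$ while $\phi_i$ scales as $\epsilon^2$; the ratio is of order $\epsilon$ and roughly halves each time $N$ is doubled.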
Let us go back for a second to the continuum notation, and write, \begin{equation} \phi^\epsilon(N)= \sum_{j=0}^N \epsilon N(x_j) \left\{ -\frac{\left[E^x(x_{j+1})-E^x(x_j)\right]}{\epsilon} K_x(x_j)+ \frac{1}{2}\left[E^\varphi(x_{j})+E^\varphi(x_{j+1})\right] \frac{\left(K_\varphi(x_{j+1})-K_\varphi(x_j)\right)}{\epsilon}\right\} , \end{equation} which would reproduce the constraint $\phi(N)=\lim_{\epsilon\to0} \phi^\epsilon(N)$ though we see that the explicit dependence on $\epsilon$ drops out. We have chosen to regularize $E^\varphi$ at the midpoint in order to simplify the action of the resulting quantum operator as we will see later. When one is to promote these quantities to quantum operators, one needs to remember that although the $E$ variables promote readily to quantum operators in the loop representation, the $K$'s need to be written in exponentiated form. To this aim, we write, classically, \begin{equation} \phi^\epsilon(N)= \sum_{j=0}^N \frac{N(x_j)}{2i\epsilon}\left\{ \exp\left(-2i\epsilon{\left[E^x(x_{j+1})-E^x(x_j)\right]} K_x(x_j)+ i\epsilon{\left[E^\varphi(x_{j})+E^\varphi(x_{j+1})\right]} \left(K_\varphi(x_{j+1})-K_\varphi(x_j)\right)\right)-1\right\}, \end{equation} which again would reproduce the constraint in the continuum limit. Let us rewrite it in terms of the discrete variables, \begin{equation} \phi^\epsilon(N)= \sum_{j=0}^N \frac{N_j}{2i\epsilon}\left\{ \exp\left[i\left({-2\left[E^x_{j+1}-E^x_j\right]} K_{x,j}+ {\left[E^\varphi_{j}+E^\varphi_{j+1}\right]} \left(K_{\varphi,j+1}-K_{\varphi,j}\right)\right)\right]-1\right\}. 
\end{equation} For later use, it is convenient to rewrite $\phi_j^\epsilon = (D_j-1)/(2i\epsilon)$, and then one has that, \begin{equation} {\mathbb H}^\epsilon = \sum_{j=0}^N \left(D_j-1\right)\left(D_j-1\right)^* \epsilon^{-1/2} \frac{\sqrt{E^x_j}} {\left(E^\varphi_j\right)^3}.\label{27} \end{equation} We dropped the label $\epsilon$ on $D$, since $D$ depends on $\epsilon$ only implicitly, through $E^x$; we also dropped an irrelevant global factor of $1/8$ to simplify future expressions. \section{Uniform discretizations} Before quantizing, we will study the classical theory using uniform discretizations, and we will verify that one gets in the continuum limit a theory with diffeomorphism constraints that are first class. The continuum theory can be treated with the Dirac technique and has first class constraints that generate diffeomorphisms on the dynamical variables. However, the discrete theory, when treated with the Dirac technique, has second class constraints and does not have the gauge invariances of the continuum theory. The number of degrees of freedom changes, and the continuum limit generically does not recover the theory one started with. As mentioned before, it has been shown \cite{discreteexamples} that the uniform discretization technique is equivalent to the Dirac procedure when the constraints are first class. For second class constraints, like the ones that appear when one discretizes continuum systems with first class constraints, the uniform discretization technique is radically different from the Dirac procedure, yielding a dynamical evolution that recovers in the continuum limit the continuum theory one started with. Let us review how this works. We start with a classical canonical system with $N$ configuration variables, parameterized by a continuous parameter $\alpha$ such that $\alpha\to 0$ is the ``continuum limit''. We will assume the theory in the continuum has $M$ constraints $\phi_j = \lim_{\alpha\to 0} \phi_j^\alpha$.
In the discrete theory we will assume the constraints generically fail to be first class, \begin{equation} \left\{\phi^\alpha_j,\phi^\alpha_k\right\} = \sum_{m=1}^M C^\alpha_{jk}{}^m \phi^\alpha_m+ A^\alpha_{jk}, \end{equation} where the failure is quantified by $A^\alpha_{jk}$. We assume that in the continuum limit one has $\lim_{\alpha\to 0} A^\alpha_{jk}=0$ and that the quantities $C^\alpha_{jk}{}^m$ become in the limit the structure functions of the (first class) constraint algebra of the continuum theory, $C_{jk}{}^m=\lim_{\alpha\to 0} C^\alpha_{jk}{}^m$, so that, \begin{equation} \left\{\phi_j,\phi_k\right\} =\sum_{m=1}^M C_{jk}{}^m \phi_m. \end{equation} If one were to insist on treating the above discrete theory using the Dirac procedure, that is, taking the constraints $\phi^\alpha_j=0$ and a total Hamiltonian $H_{T}=\sum_{j=1}^M C_j \phi^\alpha_j$ with $C_j$ functions of the canonical variables, one immediately finds restrictions on the $C_j$'s of the form $\sum_{j=1}^M C_j A^\alpha_{jk}=0$ in order to preserve the constraints upon evolution. Only in the continuum $\alpha\to 0$ limit are the $C_j$ free functions, and one has in the theory $2N-2M$ observables. Notice that away from the continuum limit the number of observables is generically larger and could even reach $2N$ if the matrix $A^\alpha_{jk}$ is invertible. Therefore one cannot view the theory in the $\alpha\to 0$ limit as a limit of the theories for finite values of $\alpha$, since they do not even have the same number of observables and have a completely different evolution. The uniform discretizations, on the other hand, lead to discrete theories that have the same number of observables and an evolution resembling that of the continuum theory. One can then claim that the discrete theories approximate the continuum theory and that the latter arises as their continuum limit.
The treatment of the system in question would start with the construction of the ``master constraint'' \begin{equation} {\mathbb H}^\alpha=\frac{1}{2} \sum_{j=1}^M \left(\phi^\alpha_j\right)^2 \end{equation} and defining a discrete time evolution through ${\mathbb H}^\alpha$. In particular, this implies a discrete time evolution from instant $n$ to $n+1$ for the constraints of the form, \begin{eqnarray} \phi^\alpha_j(n+1) &=& \phi^\alpha_j(n)+ \label{31} \left\{ \phi^\alpha_j(n),{\mathbb H}^\alpha\right\}+ \frac{1}{2} \left\{\left\{ \phi^\alpha_j(n),{\mathbb H}^\alpha\right\}, {\mathbb H}^\alpha\right\}+ \ldots\\ &=&\phi^\alpha_j(n)+ \sum_{i,k=1}^M C^\alpha_{ji}{}^k \phi^\alpha_k(n) \phi^\alpha_i(n)+ \sum_{i=1}^M A^\alpha_{ji} \phi^\alpha_i(n)+\ldots\label{evolution} \end{eqnarray} This evolution implies that ${\mathbb H}^\alpha$ is a constant of the motion, which for convenience we denote via a parameter $\delta$ such that ${\mathbb H}^\alpha=\delta^2/2$. The preservation upon evolution of ${\mathbb H}^\alpha$ implies that the constraints remain bounded, $|\phi^\alpha_j|\leq \delta$. If one now divides by $\delta$ and defines the quantities $\lambda^\alpha_i\equiv \phi^\alpha_i/\delta$ one can rewrite (\ref{evolution}) as, \begin{equation} \frac{\phi^\alpha_j(n+1)-\phi^\alpha_j(n)}{\delta}= \sum_{i,k=1}^M C^\alpha_{ji}{}^k \phi^\alpha_k(n) \lambda^\alpha_i(n)+ \sum_{i=1}^M A^\alpha_{ji} \lambda^\alpha_i(n)+\ldots \end{equation} Notice that the $\lambda^\alpha_i$ remain finite when one takes the limits $\delta\to 0$ and $\alpha\to 0$. If one now considers the limit of small $\delta$'s, one notes that the first term on the right is of order $\delta$, the second one goes to zero with $\alpha\to 0$, at least as fast as $\alpha$, and the rest of the terms are of higher orders in $\delta,\alpha$.
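The mechanism can be illustrated with a simple two-constraint toy model of our own (an illustrative assumption, not the gravitational model above): $\phi_1=p_1$ and $\phi_2=p_2+\alpha q_1$, for which $\{\phi_1,\phi_2\}=-\alpha$ plays the role of $A^\alpha_{12}$. The flow generated by ${\mathbb H}^\alpha=(\phi_1^2+\phi_2^2)/2$ gives $\dot\phi_1=-\alpha\phi_2$ and $\dot\phi_2=\alpha\phi_1$, so the exact time-one map is a rotation of the constraint vector by the angle $\alpha$:

```python
import math

def evolve(phi1, phi2, alpha, steps):
    """Exact time-1 flow of HH = (phi1^2 + phi2^2)/2 for the toy pair
    phi1 = p1, phi2 = p2 + alpha*q1: a rotation by angle alpha per step."""
    for _ in range(steps):
        phi1, phi2 = (math.cos(alpha) * phi1 - math.sin(alpha) * phi2,
                      math.sin(alpha) * phi1 + math.cos(alpha) * phi2)
    return phi1, phi2

phi1, phi2 = 3e-3, 4e-3
delta = math.hypot(phi1, phi2)        # HH = delta^2 / 2 is conserved
f1, f2 = evolve(phi1, phi2, alpha=0.1, steps=1000)
# |phi_i| stays bounded by delta at all times; each individual constraint
# is conserved only in the continuum limit alpha -> 0
```

${\mathbb H}^\alpha=\delta^2/2$ is exactly conserved and the constraints stay bounded by $\delta$, while each individual constraint is conserved only in the limit $\alpha\to 0$, mirroring the discussion above.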
If one introduces a continuum variable $\tau$ such that $\tau=n\delta+\tau_0$, with $\phi^\alpha_j(\tau)\equiv\phi^\alpha_j(n)$ and $\phi^\alpha_j(\tau+\delta)\equiv\phi^\alpha_j(n+1)$, one can take the limits $\alpha\to 0$ and $\delta\to 0$. Irrespective of the order of the limits, the evolution equations (\ref{evolution}) for the constraints become those of the continuum theory, i.e., \begin{equation} \dot{\phi}_j \equiv \lim_{\alpha,\delta\to 0} \frac{\phi^\alpha_j(\tau+\delta)-\phi^\alpha_j(\tau)}{\delta}= \sum_{i,k=1}^M C_{ji}{}^k \phi_k \lambda_i \end{equation} with the $\lambda_i$ becoming the (freely specifiable) Lagrange multipliers of the continuum theory. At this point the reader may be puzzled, since the $\lambda$'s are defined as limits of those of the discrete theory and therefore do not appear to be free. However, one has to recall that the $\lambda$'s in the discrete theory are determined by the values of the constraints evaluated on the initial data, and these can be chosen arbitrarily by modifying the initial data.
If one considers the limit $\delta\to 0$ for a finite value of $\alpha$ (``continuous in time, discrete in space'') and considers the evolution of a function of phase space $O$, one has that, \begin{equation} \dot{O}=\left\{O,{\mathbb H}^\alpha\right\}=\sum_{i=1}^M\left\{O,\phi^\alpha_i\right\}\lambda^\alpha_i+\sum_{i,j=1}^M \left\{O,\phi^\alpha_i\right\}A^\alpha_{ij} \lambda^\alpha_j +\sum_{i,j,k=1}^M\left\{O,\phi^\alpha_i\right\} A^\alpha_{ij}A^\alpha_{jk} \lambda^\alpha_k+\ldots \end{equation} The necessary and sufficient condition for $O$ to be a constant of the motion (that is, $\dot{O}=0$) is that \begin{equation} \left\{O,\phi^\alpha_i\right\}=\sum_{j=1}^M C_{ij} \phi^\alpha_j+B^\alpha_i, \end{equation} with $B^\alpha_i$ a vector, perhaps vanishing, that is annihilated by the matrix, \begin{equation} \Lambda^\alpha_{ij} = \delta_{ij} + A^\alpha_{ij}+ \sum_{k=1}^MA^\alpha_{ik}A^\alpha_{kj}+\ldots+\sum_{k_1,\ldots,k_s=1}^M A^\alpha_{ik_1}A^\alpha_{k_1k_2}\cdots A^\alpha_{k_sj}+\ldots \end{equation} Up to now we have assumed the $\lambda^\alpha_i$ arbitrary, not necessarily satisfying $\sum_{j=1}^M A^\alpha_{ij} \lambda^\alpha_j =0$. It is clear that $\lim_{\alpha\to 0} \Lambda^\alpha_{ij}=\delta_{ij}$ and therefore $\lim_{\alpha\to 0} B^\alpha_i = 0$, which implies that conserved quantities in the discrete theory yield in the limit $\alpha\to 0$ the observables of the continuum theory. Since the $\lambda_i$'s are free, the theory with continuous time is not the one that would result naively from applying the Dirac procedure, since in the latter the Lagrange multipliers are restricted by $\sum_{j=1}^M A^\alpha_{ij} \lambda^\alpha_j=0$ and therefore that theory admits more observables than the $2N-2M$ of the continuum theory. That is, if one takes the ``continuum in time'' limit first, the discrete theory has a dynamics that differs from the usual one unless $\sum_{i=1}^M A^\alpha_{ji} \phi^\alpha_i(n)=0$, and one is really treating two different theories. At this point it would be good to clarify the notation a bit.
The above discussion has been for a mechanical system with $M$ configuration degrees of freedom. When one discretizes a field theory with $M$ configuration degrees of freedom on a lattice with $N$ points one ends up with a mechanical system that has $M\times N$ degrees of freedom. An example of such a system would be the diffeomorphism constraints of general relativity in $3+1$ dimensions when discretized on a uniform lattice of spacing $\alpha$ \cite{rentelnsmolin}. Of course, it is not clear at this point if such a system could be completely treated with our technique up to the last consequences; we just mention it here as an example of the type of system one would like to treat. The above discussion extends immediately to systems of this kind, only the bookkeeping has to be improved a bit. If we consider a parameter $\alpha(N)=1/N$, such that the continuum limit is achieved as $N\to\infty$, the classical continuum constraints can be thought of as limits \begin{equation} \phi_j(x)= \lim_{N\to\infty} \phi^{\alpha(N)}_{j,i(x,N)} \end{equation} where $i(x,N)$ is such that the point $x$ in the continuum lies between $i(x,N)$ and $i(x,N)+1$ on the lattice for every $N$. We are assuming a one dimensional lattice; similar bookkeepings can be set up in higher dimensional cases. Just like we did in the mechanical system we can define \begin{equation} \left\{ \phi^{\alpha(N)}_{j,i},\phi^{\alpha(N)}_{k,i\pm 1}\right\} =\sum_{l,m=1}^MC^{\alpha(N)}_{j,i,k,i\pm1}{}^{lm}\phi^{\alpha(N)}_{l,m} +A^{\alpha(N)}_{j,i,k,i\pm1}, \end{equation} (where we have assumed that for sites other than $i\pm 1$ on the lattice the Poisson bracket vanishes; the generalization to other cases is immediate) and one has that \begin{equation} \lim_{N\to \infty} A^{\alpha(N)}_{j,i,k,i\pm 1}=0.
\end{equation} If one takes the spatial limit $\alpha\to 0$ first, one has a theory with discrete time and continuous space, with first class constraints, and we know that in that case the uniform discretization procedure agrees with the Dirac quantization. If one has more than one spatial dimension to discretize, the situation becomes more complicated, since the continuum limit can be achieved with lattices of different topologies and connectivity. Once one has chosen a given topology and connectivity for the lattice, the continuum limit will only produce spin networks of connectivities compatible with such lattices. For instance, if one takes a ``square'' lattice in terms of connectivity in two spatial dimensions, one would produce at most spin networks in the continuum with four-valent vertices. If one takes a lattice that resembles a honeycomb with triangular plaquettes one would produce six-valent vertices, etc. It is clear that this point deserves further study insofar as how to achieve the continuum limit in theories with more than one spatial dimension. In addition to this, following the uniform discretization approach one does not need to modify the discrete constraint algebra, since it satisfies $\lim_{N\to\infty} \left\{\phi_i,\phi_j\right\}\sim 0$ and all the observables of the continuum theory arise by taking the continuum limit of the constants of the motion of the discrete theory. The encouraging fact that we recover the continuum theory in the limit classically is what raises hopes that a similar technique will also work at the quantum level. \section{Quantization} To proceed to quantize the model, we need to consider the master constraint given in equation (\ref{27}), \begin{equation} {\mathbb H}^\epsilon = \sum_{j=0}^N \left(D_j-1\right)\left(D_j-1\right)^* \epsilon^{-1/2} \frac{\sqrt{E^x_j}} {\left(E^\varphi_j\right)^3}, \end{equation} and quantize it.
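Before quantizing, one can also verify numerically that the classical building block $(D_j-1)/(2i\epsilon)$ of (\ref{27}) reproduces the linear discretization of the constraint as $\epsilon\to 0$. The sketch below smears with $N(x)=1$ and uses smooth field profiles that are illustrative choices of our own:

```python
import cmath
import math

def phi_forms(eps):
    """Discretized diffeomorphism constraint smeared with N(x)=1,
    both in the linear form and in the exponentiated form
    (D_j - 1)/(2 i eps); the smooth field profiles are illustrative."""
    n = int(round(1.0 / eps))
    x = [j * eps for j in range(n + 2)]
    Ex   = [math.sin(2.0 * math.pi * xi) for xi in x]
    Ephi = [2.0 + math.cos(2.0 * math.pi * xi) for xi in x]
    Kx   = [math.cos(2.0 * math.pi * xi) for xi in x]
    Kphi = [math.sin(4.0 * math.pi * xi) for xi in x]
    linear, expo = 0.0, 0.0 + 0.0j
    for j in range(n):
        c = (-(Ex[j + 1] - Ex[j]) * Kx[j]
             + 0.5 * (Ephi[j] + Ephi[j + 1]) * (Kphi[j + 1] - Kphi[j]))
        linear += c                                    # eps * N_j * {...}
        expo += (cmath.exp(2j * eps * c) - 1.0) / (2j * eps)
    return linear, expo

lin1, exp1 = phi_forms(1e-2)
lin2, exp2 = phi_forms(1e-3)
# |expo - linear| shrinks like eps^2: both forms define the same
# continuum constraint
```

The difference between the two forms scales as $\epsilon^2$, so both define the same continuum constraint.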
The quantization of this expression will require an appropriate ordering of the exponential that appears in $D_j$, putting the $K$'s to the left of the $E$'s, as in usual normal ordering. One would then have, \begin{equation} \hat{D}_j = :\exp i\left({-2\left[\hat{E}^x_{j+1}-\hat{E}^x_j\right]} \hat{K}_{x,j}+ {\left[\hat{E}^\varphi_{j}+\hat{E}^\varphi_{j+1}\right]} \left(\hat{K}_{\varphi,j+1}-\hat{K}_{\varphi,j}\right)\right): \label{D} \end{equation} Notice that $\hat{D}_j$ is not self-adjoint and, due to the factor ordering, neither is $\hat{\phi}_j$, but we will see that one can construct an ${\mathbb H}$ that is self-adjoint. To write the explicit action, let us recall the nature of the basis of spin network states in one dimension (see \cite{spherical} for details). One has a lattice of points $j=0,\ldots,N$. On such a lattice one has a graph $g$ consisting of a collection of links $e$ connecting the vertices $v$. It is natural to associate the variable $K_x$ with links in the graph and the variable $K_\varphi$ with vertices of the graph. For bookkeeping purposes we will associate each link with the lattice site to its left. One then constructs the ``point holonomies'' for both variables as, \begin{equation} T_{g,\vec{k},\vec{\mu}}(K_x,K_\varphi) = \langle K_x,K_\varphi \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1new}}\right\rangle= \exp\left(i\sum_{j} {k_j} K_{x,j}\epsilon \right) \exp\left(i\sum_j \mu_{j} K_{\varphi,j}\right) \end{equation} The summations go through all the points in the lattice and we allow the possibility of using ``empty'' links to define the graph, i.e.\ links where $k_j=0$. The vertices of the graph therefore correspond to lattice sites where one of the following two conditions is met: either $\mu_i\neq0$ or $k_{i-1}\neq k_i$.
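This bookkeeping is easy to state in code; the short sketch below (assuming a periodic lattice, so that the link to the left of site $0$ is the last one) lists the sites that qualify as vertices:

```python
def vertices(k, mu):
    """Sites that are vertices of the graph: mu_i != 0 or k_{i-1} != k_i.
    k[i] is the link bookkept at the site to its left; the lattice is taken
    periodic, so k[-1] wraps around (an assumption of this sketch)."""
    return [i for i in range(len(mu)) if mu[i] != 0 or k[i - 1] != k[i]]

# k jumps between sites 1 and 2 and (cyclically) between the last site and 0;
# site 3 carries a nonzero mu
print(vertices([2, 2, 3, 3], [0, 0, 0, 1]))   # -> [0, 2, 3]
```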
In terms of this basis it is straightforward to write the action of the operator defined in (\ref{D}), \begin{eqnarray} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle &=& \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f2}} \right\rangle \\ &=& \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f3}} \right\rangle. \end{eqnarray} The above expression is easy to obtain, since the $\hat{E}^\varphi_j$ may be substituted by the corresponding eigenvalues $\mu_j$ and $\hat{E}^x_j$ produces $(k_{j-1}+k_j)/(2\epsilon)$. The exponential of $\lambda K_{\varphi,j}$ adds $\lambda$ to $\mu_j$, whereas the exponential of $\epsilon n K_{x,i}$ adds $n$ to $k_i$. An interesting particular case is that of an isolated $\mu$-populated vertex, \begin{equation} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle = \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f5}} \right\rangle. \end{equation} So we see that the operator $\hat{D}$ moves the line to a new vertex. This clean action is in part due to the ``midpoint'' regularization we chose for $E^\varphi$. This will in the end be important to recover diffeomorphism invariance in the continuum. Something we will have to study later is the possibility of ``coalescing'' two vertices, as in the case, \begin{equation} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f6}} \right\rangle = \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f7}} \right\rangle,\label{34} \end{equation} or the case in which a new vertex is created, \begin{equation} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left.
\raisebox{-5mm}{\includegraphics[height=1.5cm]{f8}} \right\rangle = \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f9}} \right\rangle. \end{equation} Computing the adjoint of $\hat{D}$ is easy, since it is a one-to-one operator. We start by noting that, \begin{equation} \left\langle \raisebox{-5mm}{\includegraphics[height=1.5cm]{f3}}\right. \left.\vphantom{\frac{1}{1}}\right\vert \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle=1, \end{equation} and the insertion of any other bra on the left gives zero. Therefore \begin{equation} \hat{D}^\dagger_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle= \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f10}} \right\rangle, \end{equation} with particular cases that ``translate'' a $\mu$ insertion, \begin{equation} \hat{D}^\dagger_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f5}} \right\rangle= \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle, \end{equation} or create a vertex, \begin{equation} \hat{D}^\dagger_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle= \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f11}} \right\rangle. \end{equation} In addition, there is a third particular case of interest in which a vertex is annihilated; it happens if $\mu_2= -2\mu_1$ and $k=(k_1+k_2)/2$. We now need to turn our attention to the other terms in the construction of $\hat{\mathbb H}$ in order to have a complete quantum version of (\ref{27}).
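On the labels $(\vec k,\vec\mu)$, the action of $\hat D_i$ and of its adjoint can be summarized in a short sketch. The update rules below are read off from the eigenvalue substitutions described above and are consistent with the particular cases just listed; a periodic lattice is an assumption of the sketch.

```python
def D(i, k, mu):
    """Action of D_i on the labels (k, mu) of a basis state:
    mu_i -> -mu_{i+1},  mu_{i+1} -> mu_i + 2*mu_{i+1},
    k_i  -> k_{i-1} + k_i - k_{i+1}   (periodic lattice assumed)."""
    k, mu = list(k), list(mu)
    ip = (i + 1) % len(k)
    mu[i], mu[ip] = -mu[ip], mu[i] + 2 * mu[ip]
    k[i] = k[i - 1] + k[i] - k[ip]
    return k, mu

def D_dagger(i, k, mu):
    """Inverse map: D is one-to-one, so the adjoint permutes basis states."""
    k, mu = list(k), list(mu)
    ip = (i + 1) % len(k)
    mu[i], mu[ip] = mu[ip] + 2 * mu[i], -mu[i]
    k[i] = k[i] - k[i - 1] + k[ip]
    return k, mu

# An isolated mu insertion at site 1 is moved to site 2, and D^dagger moves it back:
print(D(1, [1, 1, 1, 1], [0, 2, 0, 0]))   # -> ([1, 1, 1, 1], [0, 0, 2, 0])
```

In particular, for $\mu_2=-2\mu_1$ the adjoint map sends the pair of insertions $(\mu_1,\mu_2)$ to $(0,-\mu_1)$, annihilating a vertex as mentioned above.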
The discretization we propose is \begin{equation}\label{discretizacion} {\mathbb H} = \sum_{j=0}^N \left(O_{j+1}D_j-O_j\right)^\dagger \left(O_{j+1}D_j-O_j\right) \end{equation} where $O_j =\sqrt[4]{\epsilon E^x_j}/(E^\varphi_j)^{3/2}$, and we have chosen to localize $O_j$ and $D_j$ at different points. Intuitively, this reflects the fact that $\hat{D}$ ``shifts'' links in the spin nets to the next neighbor whereas $\hat{O}$ just acts as a prefactor, as we will discuss in the next paragraph. Therefore if one wishes to find cancellations between both terms in (\ref{discretizacion}) one needs to delocalize the action of both $\hat{O}$'s. The quantization of $O_j$ has been studied in the literature before \cite{aspasi}. Since these operators only act multiplicatively, it is better to revert to a simpler notation for the states, $\vert\vec{\mu},\vec{k}\rangle$. The action of the operator is, \begin{equation} \frac{\sqrt[4]{\hat{E}^x_j}}{\left(E^\varphi_j\right)^{3/2}\epsilon^{1/4}} \vert\vec{\mu},\vec{k}\rangle = \left(\frac{4}{3\rho } \right)^6 \sqrt[4]{\frac{k_{j-1}+k_{j+1}}{2}} \left[\vert \mu_j+\frac{\rho}{2}\vert^{3/4} - \vert \mu_j-\frac{\rho}{2}\vert^{3/4} \right]^6 \vert\vec{\mu},\vec{k}\rangle, \end{equation} where $\rho$ is the minimum allowable value of $\mu$, as is customary in loop quantum cosmology. Since this operator has a simple action through a prefactor, we will call such a prefactor $f(\vec{\mu},\vec{k},j)$. One therefore has, for example, \begin{equation} \hat{O}_{i+1} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle=f(\vec{\mu},\vec{k},i+1) \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f5}} \right\rangle, \end{equation} or, \begin{equation} \hat{O}_{i+1} \hat{D}_i \left\vert\vphantom{\frac{1}{1}}\right.\left.
\raisebox{-5mm}{\includegraphics[height=1.5cm]{f1}} \right\rangle=f(\vec{\mu},\vec{k},i+1) \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f3}} \right\rangle, \end{equation} where the $\vec{\mu},\vec{k}$ that appear in the prefactor are the ones of the state to the right of the prefactor. It is worth noticing that if $\mu_2=0$ the map is from a diagram with one insertion to another with one insertion; if $\mu_1=0$ it goes from one insertion to two; and if both $\mu_1$ and $\mu_2$ are non-vanishing it maps two insertions to two insertions. It is not possible to go from a state with two consecutive insertions into one with only one insertion, since if $2\mu_2+\mu_1 =0$ then $f=0$. This is a key property one seeks in the regularization. If the regularization were able to fuse two insertions it would be problematic, as we will discuss later on. This allows us to evaluate the action of the quadratic Hamiltonian ${\mathbb H}$ explicitly on a set of states that capture in the discrete theory the flavor of diffeomorphism invariance. For instance, consider a normalized state obtained by superposing all possible states with a given insertion \begin{equation} \left\vert \psi_1\right\rangle =\frac{1}{\sqrt{N}}\sum_{i=0}^N \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle. \end{equation} Such a state would be the analogue in the discrete theory of a ``group averaged'' state. If we now consider the action of $\hat{O}_{i+1} \hat{D}_i-\hat{O}_i$ on such a state we get, \begin{equation} \left\langle \psi_1\vphantom{\frac{1}{1}}\right\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f4}} \right\rangle=0 \end{equation} since both terms in the difference produce the same prefactor when acting on the state on the right.
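This cancellation can be checked numerically with the prefactor $f$ written explicitly (taking $\rho=1$ and a homogeneous $\vec k$ as illustrative choices; the shifted state below is the one produced by $\hat D_1$, with the isolated insertion moved one site to the right):

```python
def f(mu, k, j, rho=1.0):
    """Multiplicative prefactor of O_j on a state with labels (mu, k);
    rho is the minimum allowed mu (rho = 1 is an illustrative choice),
    and the lattice is taken periodic."""
    ex = (k[(j - 1) % len(k)] + k[(j + 1) % len(k)]) / 2.0
    return ((4.0 / (3.0 * rho)) ** 6 * ex ** 0.25
            * (abs(mu[j] + rho / 2.0) ** 0.75
               - abs(mu[j] - rho / 2.0) ** 0.75) ** 6)

k  = [2, 2, 2, 2, 2]               # homogeneous links
mu = [0, 3, 0, 0, 0]               # isolated insertion at site 1
mu_shifted = [0, 0, 3, 0, 0]       # D_1 moves the insertion to site 2
amp_shift = f(mu_shifted, k, 2)    # prefactor of O_2 D_1 acting on the state
amp_stay  = f(mu, k, 1)            # prefactor of O_1 acting on the state
```

The two prefactors coincide, so the difference annihilates $\langle\psi_1\vert$; moreover $f$ vanishes on a site with $\mu=0$, which is the property that prevents two insertions from fusing into one.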
If one were to consider on the right a state with multiple insertions, the result would also be zero, since the operators do not convert two consecutive insertions at $i,i+1$ into one and the inner product would vanish. As a consequence, we therefore have that, \begin{equation} \left\langle \psi_1\right\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i =0. \end{equation} Let us now consider states with two insertions, again ``group averaged'' in the sense that we sum over all possible locations of the two insertions respecting a relative order within the lattice (in this case this is irrelevant due to cyclicity in a compact manifold), \begin{equation} \left\vert \psi_2\right\rangle =\frac{1}{\sqrt{N(N-1)}}\sum_{i=0}^N\sum_{\scriptstyle\begin{array}{c}\scriptstyle j \neq i\\ \scriptstyle j=0\end{array}}^N \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f12}} \right\rangle. \end{equation} If one considers a state $\vert \nu \rangle$ with three or more insertions of $\mu$ one has that \begin{equation} \left\langle \psi_2\right\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \left\vert \nu \right\rangle =0, \end{equation} since in the first term $\hat{D}_i$ could produce a two insertion diagram, but then the action of $\hat{O}$ at site $i+1$ would vanish, and the term on the right does not produce a two insertion diagram, as seen in (\ref{34}). If one considers a state $\vert\nu\rangle$ with two non-consecutive vertices, the operator also vanishes, for the same reasons as before. Finally, if $\vert\nu\rangle$ has two consecutive insertions then we will have a non-trivial contribution. We will see, however, that such a contribution vanishes in the continuum limit.
To see this we evaluate, \begin{eqnarray} \left\langle \psi_2\left\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \right\vert\vphantom{\frac{1}{1}} \raisebox{-5mm}{\includegraphics[height=1.5cm]{f14}} \right\rangle &=& f(\vec{\nu},\vec{m},i+1) \left\langle \psi_2 \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f15}}\right. \right\rangle\nonumber\\ && -f(\vec{\nu},\vec{m},i) \left\langle \psi_2 \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f14}}\right. \right\rangle\nonumber\\ &=&\left[ f(\vec{\nu},\vec{m},i+1) \delta_{\mu_2,2\nu_{i+1}+\nu_i} \delta_{\mu_1,-\nu_{i+1}} \delta_{k,m_{i-1}}\delta_{{k'},m_i+m_{i-1}-m_{i+1}} \delta_{k,m_{i+1}}\right.\nonumber\\ &&\left.- f(\vec{\nu},\vec{m},i) \delta_{\mu_1,\nu_i} \delta_{\mu_2,\nu_{i+1}} \delta_{k,m_{i-1}} \delta_{{k'},m_{i}} \delta_{k,m_{i+1}}\right]\frac{1}{\sqrt{N(N-1)}} \end{eqnarray} If $\vert \nu\rangle$ has one $\mu$ insertion then there is another contribution, \begin{eqnarray} \left\langle \psi_2\left\vert \hat{O}_{i+1} \hat{D}_i-\hat{O}_i \right\vert\vphantom{\frac{1}{1}} \raisebox{-5mm}{\includegraphics[height=1.5cm]{f16}} \right\rangle &=& f(\vec{\nu},\vec{m},i+1) \left\langle \psi_2 \left\vert\vphantom{\frac{1}{1}}\right.\left. \raisebox{-5mm}{\includegraphics[height=1.5cm]{f17}}\right. \right\rangle\nonumber\\ &=&\frac{1}{\sqrt{N(N-1)}}\left[ \delta_{k,m_i} \delta_{{k'},2m_i-m_{i+1}} \delta_{\mu_1,-\nu_{i+1}} \delta_{k,m_{i+1}} f(\vec{\nu},\vec{m},i+1)\right] \end{eqnarray} We are now in a position to evaluate the expectation value of $\hat{\mathbb H}$. To do that we compute, \begin{equation} \langle \psi_2\vert\hat{\mathbb H} \vert \psi_2\rangle= \sum_{j=0}^N \langle \psi_2\vert \left(\hat{O}_{j+1} \hat{D}_j-\hat{O}_j\right) \left(\hat{O}_{j+1} \hat{D}_j-\hat{O}_j\right)^\dagger \vert \psi_2\rangle, \end{equation} and we insert a complete basis of states between the two parentheses.
Then we can apply all the results we have just worked out. The final result is that only three finite contributions appear for every $j$, and therefore \begin{equation} \langle \psi_2\vert\hat{\mathbb H}\vert \psi_2\rangle=O\left(\frac{1}{N}\right), \end{equation} so that in the limit $N\to \infty$ the spectrum of $\hat{\mathbb H}$ contains zero; therefore no anomalies appear and the constraints are enforced exactly. Analogously, one can show that for spin networks with $m$ vertices $\langle\psi_m\vert{\mathbb H}\vert\psi_m\rangle=O(1/N)$, and therefore the states that minimize $\langle\hat{\mathbb H}\rangle$ include in the limit $N\to\infty$ the diffeomorphism invariant states obtained via the group averaging procedure. To see this more clearly we note that the state with $m$ vertices we are considering is of the form, \begin{eqnarray} \left\vert \psi_m\right\rangle = \frac{1}{\sqrt{NC^N_m}}\sum_{i_{v_1}<\ldots<i_{v_j}<\ldots<i_{v_m}<i_{v_1}} \left\vert\vphantom{\frac{1}{1}} \raisebox{-5mm}{\includegraphics[height=1.5cm]{f18}} \right\rangle \end{eqnarray} where the sum is over all the spin nets with the only condition that the cyclic order of the vertices is preserved, that is, $v_1$ is always between $v_m$ and $v_2$, etc. The quantities $C^N_m$ are the combinatorial numbers of $N$ elements taken in groups of $m$; they enter for normalization purposes. This sum is the discrete version of the sum over the group that is performed in the continuum group averaging procedure. The sum preserves the cyclic order, placing the vertices in all the positions compatible with such order. We have shown that the expectation value of $\hat{\mathbb H}$ vanishes in the continuum limit. Since $\hat{\mathbb H}$ is a positive semi-definite operator, this also implies that $\hat{\mathbb H}\vert \psi_m\rangle =0$ in the limit, which is the condition one seeks in the uniform discretization approach.
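The counting behind this estimate can be made explicit with a crude bound (the values used for the maximal prefactor and for the number of contributing terms per site are illustrative stand-ins for the finite, $N$-independent quantities of the text):

```python
def bound(N, fmax=1.0, contribs=3):
    """Upper bound on <psi_2| HH |psi_2>: each of the N sites contributes
    at most `contribs` matrix elements of magnitude fmax/sqrt(N(N-1)),
    summed in modulus squared.  fmax and contribs are illustrative."""
    return N * contribs * fmax ** 2 / (N * (N - 1))

# bound(N) = contribs * fmax^2 / (N - 1), which vanishes as N -> infinity
```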
This can be explicitly checked by computing, for instance for a state $\langle \psi_2\vert$, \begin{equation} \sum_s \langle \psi_2 \vert \hat{\mathbb H}\vert s\rangle\langle s\vert= \frac{1}{\sqrt{N(N-1)}} \sum_{i=1}^N f_i \langle s_i\vert \end{equation} where the sum over $s$ means a sum over a basis of spin networks $\vert s\rangle$ and the $\langle s_i\vert $ are spin network states that have vertices at consecutive sites $i$ and $i+1$. Given that the $f_i$'s are finite coefficients independent of $N$, one immediately sees that the right hand side has zero norm when $N\to \infty$. There is a rather important difference with the continuum case, however. The states constructed here as limits of discrete states are normalizable with the kinematical inner product, and therefore the calculation suggests that in a problem with a Hamiltonian constraint in addition to diffeomorphism constraints one could treat all constraints in the discrete theory on an equal footing. \section{Discussion} We have seen in a $1+1$ dimensional model with diffeomorphism invariance that one can discretize it, thereby breaking the invariance, and treat it using the ``uniform discretizations'' approach, yielding a diffeomorphism invariant theory in the continuum limit. We have argued that this would have been close to impossible if one had naively discretized the constraints and quantized the resulting theory. An important point to realize is that the kinematical Hilbert space has been changed, by considering spin networks on ``lattices'' with a countable number of points. There exist infinitely many possible such lattices, built by considering different spacings between the points.
However, in $1+1$ dimensions the choice of lattice does not influence the diffeomorphism invariant quantum theory, whose observables can be written in terms of the canonical variables and invariant combinations of their derivatives, which can be entirely framed in terms of $\vec{k}$ and $\vec{\mu}$ without reference to details of the lattice. For instance, the total volume of a slice evaluated on a diffeomorphism invariant spin network $\vert\psi_1\rangle$ is given by \begin{equation} \hat{V}\vert \psi_1 \rangle= 4 \pi \ell_{\rm Planck}^3 \sum_v \vert \mu_v\vert \sqrt{\frac{k_{e^+(v)}+k_{e^-(v)}}{2}} \vert \psi_1 \rangle \end{equation} where the sum is over all vertices of the continuum spin network and $k_{e^\pm(v)}$ are the values of $k$ on the links emanating to the right and left of vertex $v$. More generally, consider an observable $O_{\rm Diff}$, that is, an operator invariant under diffeomorphisms. Let us study, in the space of lattices with a countable number of points, its expectation value on diffeomorphism invariant states, $\langle \psi_{m,\vec{k},\vec{\mu}} \vert \hat{O}_{\rm Diff}\vert \psi_{m,\vec{k},\vec{\mu}}\rangle$, with $\vert \psi_{m,\vec{k},\vec{\mu}}\rangle$ the cyclic state we considered in the previous section. In the continuum, the vectors of the Hilbert space of diffeomorphism invariant states $\vert \{s\}\rangle$, where $\{s\}$ is the knot class of a spin network $s$, belong to the dual of the space of kinematic spin network states $\vert s\rangle$. The expectation value of the observable in the continuum is $\langle \{s\}\vert \hat{O}_{\rm Diff} \vert \{s\}\rangle$, and the two expectation values, in the continuum and in the discrete theory, coincide. The reason for this is that the action of $\hat{O}_{\rm Diff}$ on one of the terms in $\vert \psi_m \rangle$ coincides with $\hat{O}_{\rm Diff} \vert s\rangle$ except when $s$ has vertices that occupy consecutive positions on the lattice.
In this case, depending on the specific form of $\hat{O}_{\rm Diff}$, the results could differ. Due to the normalization factor, however, such exceptional contributions are suppressed by a factor $1/N$ in the $N\to \infty$ limit, so we have that in the continuum limit the expectation values in the continuum and the discrete theory always agree. An issue of importance in loop quantum gravity is the problem of ambiguities in the definition of the quantum theory. Apart from the usual factor ordering ambiguities, in a discrete theory one adds the ambiguities of the discretization process. In this example we have made several careful choices in this process to ensure that the operator $\hat{\mathbb H}$ has a non-trivial kernel in the continuum limit. This requirement proved quite onerous to satisfy in practice, and it took quite a bit of effort to do so. Though we in no way claim that the results are unique, this hints at the fact that requiring that $\hat{\mathbb H}$ have a non-trivial kernel in the continuum significantly reduces the level of ambiguities in the definition of a quantum discrete theory. We have not been able to find another regularization satisfying the requirement and leading to a different non-trivial kernel. Another point to note is that the quantum diffeomorphism constraints $\phi^\epsilon(M)=\sum_{j=0}^N \frac{M_j}{2i\epsilon}\left(D_j-1\right)$, with $M_j$ stemming from discretizing a smooth shift function, do not reproduce the continuum algebra of constraints when they act on generic spin networks on the lattice that belong to the kinematical Hilbert space. The algebra almost works, but there appear anomalous contributions for spin networks with vertices in two consecutive sites of the lattice.
In spite of this, the constraints can be imposed at the quantum level through the condition $\langle \psi \vert H=0$ and imply, as we showed, that the solutions correspond to a discrete version of the sum over the group that is performed in the group averaging procedure. The difference is that these states are normalizable with the inner product of the kinematical space itself. In this construction the Hilbert space ${\cal H}_{\rm Diff}$ is a subspace of ${\cal H}_{\rm Kin}$, unlike the situation in the ordinary group averaging procedure. This property opens interesting possibilities: if it were also to hold in more elaborate models, for instance those involving a Hamiltonian constraint, it would be very important, since it would provide immediate access to a physical inner product. All of the above suggests that in models more realistic than the one we studied, for instance with a Hamiltonian constraint (and hence structure functions in the constraint algebra), one will also be able to define the diffeomorphism and Hamiltonian constraints as quantum operators and impose them as constraints (or, equivalently, to impose the ``master constraint'' ${\mathbb H}$). They would act on the kinematic Hilbert space of the discrete theory, and one would hope that a suitable continuum limit can be defined. We would therefore have a way of defining a continuum quantum theory via discretization and taking the continuum limit even in systems where the discretization changes the nature of the constraints from first to second class. In $1+1$ dimensions the procedure appears quite promising. It should be noted that this is a rich arena of physical phenomena, including Gowdy cosmologies, the Choptuik phenomenon and several models of black hole formation. The prospect of treating these problems in detail in the quantum theory in the near future is quite attractive.
In higher dimensions the viability of the approach will require further study, in particular since the chosen discretization scheme could significantly constrain the types of spin networks that one can construct in the continuum theory. Summarizing, we have presented the first example of a model with infinitely many degrees of freedom in which the uniform discretization procedure can be carried through to its final consequences, providing a continuum theory with diffeomorphism invariance in which the master constraint has a non-trivial kernel. It also leads to an explicit construction of the physical Hilbert space that differs from the usual one, allowing the introduction of the kinematical inner product as the physical one. \section{Acknowledgements} This work was supported in part by grant NSF-PHY0650715 and by funds of the Horace C. Hearne Jr. Institute for Theoretical Physics, FQXi, PEDECIBA and PDT \#63/076 (Uruguay) and CCT-LSU.
\section{Introduction} \label{sect:intro} \vspace{-0.06cm} Many models of new physics predict additional particles beyond the standard model. Depending on the specific model and its underlying parameters, some of the new states might be nearly mass-degenerate, for example two of the neutral Higgs bosons or some sfermions of the MSSM. In such cases, with a small mass gap between two intermediate particles and simultaneously a sizeable mixing among them, the interference term between contributions involving either of the nearly degenerate particles may become large. On the other hand, an extended particle spectrum can lead to long cascade decays of unstable particles, and many particles in the final state, in conjunction with the need for precise loop corrections, pose a technical challenge. The well-known narrow-width approximation (NWA) is therefore useful for calculating the production and decay of an intermediate particle; it can be iterated until a complicated process is decomposed into sufficiently short sub-processes. Some Monte-Carlo generators make use of this procedure, and experimental searches for new particles are interpreted with respect to the prediction of a production cross section times branching ratio(s). Yet, the NWA in its standard version (sNWA) does not take interference terms into account. Instead of neglecting the interference or performing the full calculation, which is, especially at higher order, not always possible, we formulate a generalised NWA (gNWA) that also factorises the interference term on-shell and thereby enables the separate computation of loop corrections to the production and decay parts including the interference term\,\cite{Fuchs:2014ola}. In particular, we apply the gNWA to interfering Higgs bosons within the decay of the neutralino $\tilde{\chi}_4^0$ into $\tilde{\chi}_1^0$ and a $\tau$-pair. In a scenario with real MSSM parameters, only the two $\mathcal{CP}$-even Higgs bosons $h,H$ mix.
But the gNWA can also be used for the $\mathcal{CP}$-violating interference between $H$ and $A$ in case of complex parameters and in the context of other new physics models. \section{Generalised narrow-width approximation}\label{sect:gNWA} \subsection{Standard NWA} The narrow-width approximation factorises a more complicated process into the on-shell production of an unstable particle and its subsequent decay. In Fig.\,\ref{fig:prod_decay}, the intermediate particle $d$ with mass $M$, total width $\Gamma$ and momentum $q^{2}$ is described by the Breit-Wigner propagator \begin{equation} \Delta^{\textrm{\scriptsize{BW}}}\left(q^{2}\right):=\frac{1}{q^2 - M^2 + iM\Gamma} \label{eq:BWdef}~~. \end{equation} If its width is narrow, $\Gamma\ll M$, if the production and decay processes are kinematically open and if non-factorisable and interference contributions are negligible, the cross section of the generic example process $ab\rightarrow cef$ via the resonant state $d$ can be approximated by \begin{equation} \sigma_{ab \rightarrow cef} \simeq \sigma_{ab \rightarrow cd} \times \textrm{BR}_{d\rightarrow ef}. \label{eq:NWAbasic} \end{equation} \begin{figure}[ht!] \centering \includegraphics[width=0.8\columnwidth]{figures/abcef} \caption[Splitting a process into production and decay]{The resonant process $ab \rightarrow cef$ is split into the production $ab \rightarrow cd$ and decay $d\rightarrow ef$ with particle $d$ on-shell.} \label{fig:prod_decay} \end{figure} \subsection{Generalised NWA at tree level} However, in the presence of several resonances, interference effects can be relevant if the contributing intermediate particles 1 and 2 are nearly degenerate, i.e., if their mass splitting is below one of the total widths, $|M_1-M_2|\leq \Gamma_1,\Gamma_2$. 
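As a rough numerical illustration of this overlap criterion, the following sketch (with purely hypothetical masses and widths, chosen such that $|M_1-M_2|$ is below both widths) evaluates the product of Breit-Wigner propagators whose real part drives the interference term; the integration range and its granularity are likewise arbitrary choices:

```python
import numpy as np

def breit_wigner(q2, M, Gamma):
    """Breit-Wigner propagator Delta(q^2) = 1 / (q^2 - M^2 + i*M*Gamma)."""
    return 1.0 / (q2 - M**2 + 1j * M * Gamma)

# hypothetical nearly degenerate resonances with |M1 - M2| below the widths
M1, G1 = 125.0, 1.0   # GeV
M2, G2 = 125.5, 1.2   # GeV

q2, dq2 = np.linspace((M1 - 10.0)**2, (M2 + 10.0)**2, 20001, retstep=True)
product = breit_wigner(q2, M1, G1) * np.conj(breit_wigner(q2, M2, G2))

# crude estimate of the overlap integral I; its real part enters the
# interference term and is sizeable only for overlapping resonances
I_re = (product.real * dq2).sum() / (2.0 * np.pi)
print(I_re)
```

For well-separated resonances the same sum is strongly suppressed, which is the quantitative content of the degeneracy condition above.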
Denoting the production and decay matrix elements by $\mathcal{P}(q^{2})$ and $\mathcal{D}(q^{2})$, respectively, the full interference term in Eq.\,(\ref{eq:intexact}) can be approximated by on-shell matrix elements in the master formula in Eq.\,(\ref{eq:masterformula}), \begin{widetext} \begin{eqnarray} \sigma_{\rm{int}} &=& \int \frac{d\Phi_P(q^{2}) dq^{2} d\Phi_D(q^{2})}{2\pi F}2\textrm{Re}\left\lbrace \Delta^{\textrm{\scriptsize{BW}}}_1(q^{2})\Delta^{*\textrm{\scriptsize{BW}}}_2(q^{2}) \mathcal{P}_{1}(q^{2})\mathcal{P}^*_{2}(q^{2})\mathcal{D}_{1}(q^{2})\mathcal{D}^*_{2}(q^{2})\right\rbrace\label{eq:intexact}\\ &\simeq& \frac{2}{F}\textrm{Re}\left\lbrace\int \frac{dq^{2}}{2\pi}\Delta^{\textrm{\scriptsize{BW}}}_1(q^{2})\Delta^{*\textrm{\scriptsize{BW}}}_2(q^{2}) \left[\int d\Phi_P(q^{2})\mathcal{P}_{1}(M_1^{2})\mathcal{P}^*_{2}(M_2^{2})\right]\left[\int d\Phi_D(q^{2}) \mathcal{D}_{1}(M_1^{2})\mathcal{D}^*_{2}(M_2^{2}) \right]\right\rbrace\label{eq:masterformula}, \end{eqnarray} \end{widetext} where $\Phi_{P/D}$ are the production/decay phase spaces and $F$ the flux factor. Moreover, as a technical simplification beyond the on-shell approximation for the matrix elements, the phase spaces in Eq.\,(\ref{eq:masterformula}) can also be evaluated on-shell and thus be taken out of the $q^{2}$-integral. Under the additional assumption of equal masses $M_1=M_2$, one can express the interference term through weight factors $R$, either as the weighted sum in Eq.\,(\ref{eq:intR}) or in terms of only one of the resonant particles in Eq.\,(\ref{eq:intRtilde}), \begin{eqnarray} \sigma_{\rm{int}} &\simeq& \sigma_{P_i}\textrm{BR}_i\cdot R_i + \sigma_{P_j}\textrm{BR}_j\cdot R_j\label{eq:intR}\\ &\simeq& \sigma_{P_i}\, \textrm{BR}_i\cdot \tilde{R}_i, \hspace{1.7cm} i,j\in\{1,2\}\label{eq:intRtilde},\\ R_i &:=& 2M_i \Gamma_i w_i\cdot 2\textrm{Re}\left\lbrace x_i I \right\rbrace \label{eq:R},\\ \tilde{R}_i &:=& 2M_i \Gamma_i \cdot \textrm{Re}\left\lbrace x_i I \right\rbrace \label{eq:Rtilde}.
\end{eqnarray} The $R$-factors are based on scaling factors $x$ as the ratio of couplings $C_{P/D}$ at the production/decay vertex\,\cite{Fowler:2010eba,ElinaMSc,Barducci:2013zaa}, the relative weight $w_i$ of each resonance $i$ and the overlap integral $I$ of the Breit-Wigner propagators within the kinematic boundaries $q^{2}_{\rm{min}}, q^{2}_{\rm{max}}$, \begin{eqnarray} x_i &:=& \frac{C_{P_i}C_{P_j}^{*}C_{D_i}C_{D_j}^{*}}{|C_{P_i}|^{2}|C_{D_i}|^{2}}\label{eq:xi},\\ w_i &:=& \frac{\sigma_{P_i}\,\textrm{BR}_i}{\sigma_{P_1}\,\textrm{BR}_1+\sigma_{P_2}\,\textrm{BR}_2}\label{eq:wi},\\ I&:=&\int\limits_{q^{2}_{\rm{min}}}^{q^{2}_{\rm{max}}} \frac{dq^{2}}{2\pi}\Delta^{\textrm{\scriptsize{BW}}}_1(q^2)\,\Delta^{\textrm{\scriptsize{*BW}}}_2(q^2)\label{eq:defI}. \end{eqnarray} Hence, Eqs.\,(\ref{eq:masterformula},\ref{eq:intR},\ref{eq:intRtilde}) add a new term to the standard NWA, resulting in the generalised NWA which includes the possibility of several resonances as well as their interference based on on-shell matrix elements or weight factors at the tree level. \subsection{Generalised NWA at higher order} In addition, the gNWA can be extended to incorporate higher-order corrections as long as non-factorisable contributions between the initial and final state such as box diagrams are negligible. At 1-loop order with virtual corrections to the production and virtual and real corrections to the decay, the product of matrix elements is expanded as \begin{eqnarray} \mathcal{P}_1\mathcal{P}_2^{*} &\longmapsto& \mathcal{P}_1^{1}\mathcal{P}_2^{0*}+\mathcal{P}_1^{0}\mathcal{P}_2^{1*},\\ \mathcal{D}_1\mathcal{D}_2^{*} &\longmapsto& \mathcal{D}_1^{1}\mathcal{D}_2^{0*}+\mathcal{D}_1^{0}\mathcal{D}_2^{1*}+\delta_{\textrm{\scriptsize{SB}}}\mathcal{D}_1^{0}\mathcal{D}_2^{0*}, \end{eqnarray} where the superscripts $0/1$ denote the tree/1-loop level on-shell matrix elements and $\delta_{\scriptsize{\textrm{SB}}}$ the factor of soft bremsstrahlung (see e.g. 
Ref.\,\cite{Denner:1991kt,FormCalc}). The result stays UV- and IR-finite if the diagrams containing virtual photons and $\delta_{\scriptsize{\textrm{SB}}}$ are evaluated at the same mass \cite{Fuchs:2014ola,Denner:1997ia,Grunewald:2000ju,Denner:2000bj}. The $R$-factor approximation can be formulated at 1-loop order such that higher-order cross sections and branching ratios, but only tree level couplings, are used\,\cite{Fuchs:2014ola}. On top of 1-loop diagrams, corrections of higher order can also be included in the gNWA, \begin{equation} \hspace{-0.5cm} \sigma^{\textrm{\scriptsize{best}}} =\sigma_{\textrm{\scriptsize{full}}}^{0}+\sum_{i}\left( \sigma_{P_i}^{\textrm{\scriptsize{best}}}\textrm{BR}_i^{\textrm{\scriptsize{best}}}-\sigma_{P_i}^{0}\textrm{BR}_i^{0}\right)+ \sigma^{\textrm{\scriptsize{int}}1+}\label{eq:Mbest}, \end{equation} where the tree level cross section $\sigma^{0}_{\textrm{\scriptsize{full}}}$ avoids an uncertainty from factorisation at lowest order. The \textit{best} production cross section $\sigma_{P_i}^{\textrm{\scriptsize{best}}}$ and branching ratios $\textrm{BR}_i^{\textrm{\scriptsize{best}}}$ denote the sum of the tree level, strict 1-loop and all available higher-order contributions to the respective quantity. The products of tree level production cross sections and branching ratios are subtracted because their unfactorised counterparts are already contained in the full tree level term $\sigma_{\rm{full}}^{0}$. The corrections to the interference term at the 1-loop level and beyond are denoted by $\sigma^{\textrm{\scriptsize{int}}1+}$. \section{Mixing effects in the Higgs sector} For an application of the gNWA to interference effects in the Higgs sector, we briefly review, based on Refs.\,\cite{Frank:2006yh,Williams:2011bu}, the mixing of MSSM Higgs bosons.
Self-energy contributions to the mass matrix \begin{equation} \textbf{M}_{ij}(p^{2}) = \delta_{ij}\,m_i^{2} - \hat{\Sigma}_{ij}(p^{2})\label{eq:Mij}, \end{equation} where $m_i$ denotes the tree level mass and $i,j=h,H,A$, are related to mixing between the neutral Higgs boson propagators $\Delta_{ij}(p^{2})$. In the case of mixed external Higgs bosons, finite normalisation factors are introduced for correct on-shell properties and the proper normalisation of the S-matrix in the $\overline{\rm{DR}}$ or other renormalisation schemes without on-shell renormalisation conditions: \begin{equation} \hat{Z}_{i} = \frac{1}{\frac{\partial}{\partial p^{2}}\frac{i}{\Delta_{ii}}}\bigg\vert_{p^{2}=M^{2}_{c_{h_a}}}, \hspace*{0.5cm} \hat{Z}_{ij} = \frac{\Delta_{ij}(p^{2})}{\Delta_{ii}(p^{2})}\bigg|_{p^{2}=M^{2}_{c_{h_a}}}\label{eq:Zij}, \end{equation} where $h_a,~a=1,2,3,$ are the mass eigenstates with loop-corrected masses $M_{h_a}$ and total widths $\Gamma_{h_a}$. Close to the complex pole $M_{c_{h_a}}^{2} = M_{h_a}^{2} - i M_{h_a}\Gamma_{h_a}\label{eq:HComplexPole}$, the full momentum dependence of the internal propagators can be approximated by a combination of Breit-Wigner propagators and $\hat{\textbf{Z}}$-factors $\hat{\textbf{Z}}_{ij}=\sqrt{\hat{Z}_i}\hat{Z}_{ij}$, evaluated at the complex pole\,\cite{Fowler:2010eba,HiggsMix:InPrep}. Thus, we determine the interference contribution in terms of $\hat{\textbf{Z}}$-factors and Breit-Wigner propagators. \section{Application of the gNWA at tree level} \subsection{Example process: $\tilde{\chi}_4^{0}\rightarrow \tilde{\chi}_1^{0} \tau^{+}\tau^{-}$} In order to investigate the possible impact of interference terms, we study the example process $\tilde{\chi}_4^0\rightarrow\tilde{\chi}_1^0\tau^{+}\tau^{-}$ via a resonant $h$ or $H$.
We confront the 3-body decay (Fig.\,\ref{fig:chi04_1to3}) with the predictions of the sNWA and the gNWA based on 2-body production and decay parts (Fig.\,\ref{fig:chi04_1to2}) in a modified $M_h^{\rm{max}}$ scenario defined in Tab.\,\ref{tab:scenario}, where $h$ and $H$ are nearly degenerate. Their mass difference $\Delta M=M_H-M_h$, shown in red in Fig.\,\ref{fig:DMGamma}, is below one of the total widths $\Gamma_{h/H}$. The ratio $\Delta M/(\Gamma_h+\Gamma_H)$ in Fig.\,\ref{fig:dmg} gives a good indication of the parameter region of most significant interference. If it is minimal, the Breit-Wigner propagators overlap most strongly, causing a large overlap integral $I$. \begin{table}[ht!] \begin{center} \begin{tabular}{|c c c c|} \hline $M_1$& $M_2$& $M_3$& $M_{\textrm{\scriptsize{SUSY}}}$ \\ 100\,GeV& 200\,GeV & 800\,GeV & 1\,TeV \\ \hline\hline $X_t$ & $\mu$&$t_{\beta}$& $M_{H^{\pm}}$\\ 2.5\,TeV & 200\,GeV& 50& (153\,GeV) \\\hline \end{tabular} \caption[Parameters in the numerical analysis]{The modified $M_h^{\rm{max}}$ scenario used for the numerical analysis. Brackets indicate variation around this central value.} \label{tab:scenario} \end{center} \end{table} \begin{figure}[ht!] \centering \subfigure[3-body decay]{\includegraphics[width=0.9\columnwidth]{figures/2neutralino4.pdf}\label{fig:chi04_1to3}}\\ \subfigure[2-body decays]{\includegraphics[width=0.9\columnwidth]{figures/neutralino4.pdf}\label{fig:chi04_1to2}} \caption[3-body decay of $\tilde{\chi}_4^0$ split into 2-body decays]{Example process $\tilde{\chi}_4^0 \rightarrow \tilde{\chi}_1^0 \tau^+ \tau^-$ with $h$ or $H$ as intermediate particles in the two interfering diagrams. The process is either considered as \textbf{(a)} a 3-body decay or \textbf{(b)} decomposed in two 2-body decays.} \label{fig:chi04decay} \end{figure} \begin{figure}[ht!]
\begin{center} \subfigure[]{\includegraphics[width=0.65\columnwidth]{figures/DeltaM_Gamma}\label{fig:GM}\label{fig:DMGamma}} \subfigure[]{\includegraphics[width=0.67\columnwidth]{figures/DMoverGamma}\label{fig:dmg}} \caption{Higgs masses and widths from \texttt{FeynHiggs} including dominant 2-loop corrections in the modified $M_{h}^{\rm{max}}$ scenario. \textbf{(a):} Mass difference $\Delta M\equiv M_H-M_h$ (red) compared to total widths $\Gamma_h$ (blue, dotted) and $\Gamma_H$ (green, dashed). \textbf{(b):} Mass difference $\Delta M$ divided by total width of $h$ (blue, dotted), $H$ (green, dashed) and by the sum of both widths (orange).} \label{fig:GammaDeltaM} \end{center} \vspace*{-0.4cm} \end{figure} \subsection{Numerical analysis of the $h-H$-interference} As a function of the input Higgs mass $M_{H^{+}}$, Fig.\,\ref{fig:sgNWA_tree} shows the decay width of $\tilde{\chi}_4^{0}\rightarrow \tilde{\chi}_1^{0} \tau^{+}\tau^{-}$ computed with \texttt{FeynArts}\,\cite{Kublbeck:1990xc, Denner:1992vza, Kublbeck:1992mt, Hahn:2000kx, Hahn:2001rv} and \texttt{FormCalc}\,\cite{Hahn:1998yk, Hahn:1999wr, Hahn:2000jm, Hahn:2006qw, Hahn:2006zy} at the tree level, but improved by 2-loop Higgs masses, widths and $\hat{\textbf{Z}}$-factors from \texttt{FeynHiggs}\,\cite{Heinemeyer:1998np, Heinemeyer:1998yj, Degrassi:2002fi, Heinemeyer:2007aq}. The sNWA prediction (grey, dotted) overestimates the 3-body decay width (black) by up to a factor of 5 whereas the gNWA based on on-shell matrix elements (red, dashed, denoted by $\mathcal{M}$) or approximated by interference weight factors (blue, dash-dotted, denoted by $R$) reproduces the unfactorised result within a few percent uncertainty. The huge discrepancy between the sNWA and the 3-body decay width originates from the large, destructive interference term, owing to a small mass splitting $\Delta M$ and substantial mixing effects. \begin{figure}[ht!] 
\begin{center} \includegraphics[width=\columnwidth]{figures/LO_sgNWA_MR_final.pdf} \caption{The 1$\rightarrow$3 decay width of $\tilde{\chi}_4^0 \rightarrow \tilde{\chi}_1^0 \tau^{+}\tau^{-}$ at tree level with contributions from $h, H$ including their interference (black) confronted with the NWA: sNWA without the interference term (grey, dotted), gNWA including the interference term based on on-shell matrix elements denoted by $\mathcal{M}$ (red, dashed) and on the R-factor approximation denoted by R (blue, dash-dotted).} \label{fig:sgNWA_tree} \end{center} \vspace*{-0.5cm} \end{figure} \section{Application of the gNWA at higher order} \subsection{3-body neutralino decay at the 1-loop level} In order to test the gNWA at the 1-loop level, the 3-body decay width is also needed at this order as a reference. Vertex corrections at both vertices, Higgs self-energies as well as box diagrams and real photon emission contribute. The loop integrals are calculated with \texttt{LoopTools}\,\cite{Hahn:1998yk,Hahn:2010zi}. For illustration, some example diagrams are depicted in Fig.\,\ref{fig:Loop13}. The neutralino-neutralino-Higgs vertex is renormalised in an on-shell scheme, see e.g. Refs.\,\cite{Lahanas:1993ib,Pierce:1994ew,Eberl:2001eu,Fritzsche:2002bi,Fowler:2009ay,Bharucha:2012nx,Bharucha:2012re}. Selecting the most bino-, higgsino- and wino-like states on-shell, in this scenario $\tilde\chi^{0}_{1,3,4}$, defines a stable renormalisation scheme\,\cite{Chatterjee:2011wc}. Consequently, the masses of the remaining electroweakinos $\tilde{\chi}^{0}_2$ and $\tilde{\chi}^{\pm}_{1,2}$ receive loop corrections. The Higgs sector is renormalised in a hybrid on-shell and $\overline{\rm{DR}}$ scheme. While Higgs-Higgs mixing is already contained in the $\hat{\textbf{Z}}$-factors, the self-energies of Higgs and Goldstone/Z-bosons are calculated explicitly.
Soft bremsstrahlung (SB) is proportional to the tree level width $\Gamma^{0}$, $\ \Gamma_{\textrm{\scriptsize{SB}}}=\delta_{\textrm{\scriptsize{SB}}}\,\Gamma^{0}$. IR-divergences arising from virtual photons in the final state vertex and from soft bremsstrahlung off the charged leptons in the final state cancel each other. The relative loop contribution $\Gamma_{\textrm{\tiny{full}}}^{1}/\Gamma_{\textrm{\tiny{full}}}^{0}-1$ is displayed in black in Fig.\,\ref{fig:gNWAMR1_prec_loopsize}, amounting to up to $11\%$. \begin{figure}[ht!] \begin{center} \includegraphics[height=1.5cm]{figures/Vert1a_1to3} \includegraphics[height=1.5cm]{figures/Vert2e_1to3} \includegraphics[height=1.5cm]{figures/SE_a} \includegraphics[height=1.5cm]{figures/Box_e} \caption{Example diagrams of the 3-body decay at 1-loop order: production and decay vertex, Goldstone-Higgs mixing, box.} \label{fig:Loop13} \end{center} \vspace*{-0.5cm} \end{figure} \subsection{Validation of the gNWA including loop corrections} For the gNWA prediction, the 2-body decay widths $\Gamma(\tilde{\chi}_4^0\rightarrow\tilde{\chi}_1^0 h/H),~\Gamma(h/H\rightarrow\tau^{+}\tau^{-})$ as well as the production and decay on-shell matrix elements are calculated at the 1-loop level. As before, Higgs masses, total widths and $\hat{\textbf{Z}}$-factors are obtained from \texttt{FeynHiggs} at the leading 2-loop order. Fig.\,\ref{fig:gNWAMR1_prec_loopsize} shows the relative deviation $\Gamma_{\textrm{\scriptsize{gNWA}}}^{1}/\Gamma_{\textrm{\scriptsize{full}}}-1$ between the gNWA prediction with 1-loop corrections and the 1-loop 3-body decay width. While the $R$-factor approximation deviates from the full result by up to $4\%$, the method of on-shell matrix elements agrees with $\Gamma_{\textrm{\scriptsize{full}}}$ within a precision of better than $1\%$, which is of the order of the estimated remaining uncertainty of the full 1-loop result. 
Hence, the gNWA uncertainty of the $\mathcal{M}$-version stays mostly below the relative loop contribution, except where the full loop correction is small or even vanishing. The $R$-factor simplification can be regarded as a technically easier estimate of the interference term with loop corrections, whereas the $\mathcal{M}$-method provides a precise approximation of combined interference and higher-order effects. \begin{figure}[ht!] \vspace*{-0.2cm} \begin{center} \includegraphics[width=\columnwidth]{figures/NLO_gNWA3_precision_LoopSize} \caption{Precision of the gNWA at the 1-loop level using the matrix element method denoted by $\mathcal{M}$ (red, dashed) and using the $R$-factor approximation denoted by $\tilde{R}$ (blue, dash-dotted) compared to the relative size of the loop contribution in the full calculation (black). The $\pm1\%$ region is indicated in grey.} \label{fig:gNWAMR1_prec_loopsize} \end{center} \vspace*{-0.5cm} \end{figure}\\ The 3-body decays mediated by a resonant $\mathcal{CP}$-odd Higgs boson $A$, a neutral Goldstone boson $G$, a $Z$-boson or a non-resonant $\tilde\tau$ have been omitted so far (but $H-G$ and $H-Z$ mixing has been included) since they do not interfere with the $\mathcal{CP}$-even Higgs bosons $h$ and $H$. For a realistic prediction of the example process, the lowest order contributions from $A,\,G,\,Z$ and $\tilde\tau$ are taken into account, increasing the width $\Gamma(\tilde{\chi}_4^0\rightarrow\tilde{\chi}_1^0\tau^{+}\tau^{-})$ by an approximately constant shift of $4.15\cdot10^{-4}\,$GeV. Besides, loop contributions beyond the 1-loop level can be included in both versions of the gNWA according to Eq.\,(\ref{eq:Mbest}), such as branching ratios from \texttt{FeynHiggs} at the leading 2-loop level and products of 1-loop partial results. Fig.\,\ref{fig:gNWAbest} presents the relative impact of these higher-order corrections with respect to the 1-loop expansion of cross sections times branching fractions or matrix elements.
In this example process and scenario, the impact amounts to up to $1.2\%$ for the $\mathcal{M}$-method and up to $0.4\%$ for the $R$-approximation. \begin{figure}[ht!] \begin{center} \includegraphics[width=8.2cm]{figures/Best_gNWA_rel_Extra_AGZstau}\label{fig:gNWAbestrel} \caption{The relative effect of the most precise branching ratios and product of 1-loop terms on the prediction of the gNWA with on-shell matrix elements (red, denoted by $\mathcal{M}$) and the R-factor approximation (blue, denoted by $\tilde{R}$) with respect to the 1-loop expansion.} \label{fig:gNWAbest} \end{center} \vspace*{-0.5cm} \end{figure} \section{Conclusion} Interference effects can drastically modify the cross section or partial decay width of a process if nearby resonances of particles with considerable mixing overlap within their total widths. Such degeneracies are possible in many BSM scenarios. We introduced a generalisation of the NWA to include the interference term in an on-shell approximation that maintains the convenient factorisation into a production cross section and branching ratio as in the standard NWA. For the example process of $\tilde{\chi}_4^0\rightarrow\tilde{\chi}_1^0\tau^{+}\tau^{-}$ via $h/H$, the gNWA is validated against the 3-body decay width at lowest order and with 1-loop vertex, self-energy and box corrections and soft bremsstrahlung. In the analysed scenario of similar masses $M_h,\,M_H$ and large $h-H$ mixing, a significant destructive interference causes a huge discrepancy between the standard NWA and the full result. In contrast, the gNWA based on on-shell matrix elements in the interference term reproduces the full result within an accuracy of better than $1\%$. The factorisation of a more complicated process into a production and decay part achieved with the gNWA can be exploited for incorporating further higher-order contributions, leading to the most accurate prediction within this framework. 
\section{Acknowledgements} I thank Georg Weiglein and Silja Thewes for the collaboration on this project and Alison Fowler for the first steps of the gNWA in her PhD thesis. Many thanks also go to Aoife Bharucha for useful discussions. I am thankful for the funding from the Studienstiftung des deutschen Volkes. \bibliographystyle{elsarticle-num}
\section{Introduction} It is now widely accepted that a quark-gluon plasma (QGP) is produced in heavy ion collisions at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC). Among the many signals for the QGP is the suppressed production of quarkonia as a result of color screening and dissociation in the produced QGP~\cite{Matsui:1986dk}. Although regeneration of quarkonia from the QGP is possible~\cite{Thews:2000rj, Grandchamp:2001pf, Yan:2006ve}, its contribution is not particularly important, as the number of heavy quarks is small, especially at the RHIC energy, and they are mainly produced in the primordial scattering of the colliding nucleons rather than from the QGP, owing to their large masses. However, with increasing collision energy, thermal production of charm quarks becomes possible at the LHC, although the effect is not large~\cite{Levai:1994dx, Levai:1997bi,Zhang:2007dm,Uphoff:2010sh}. With the Future Circular Collider (FCC) that is being discussed for Pb+Pb collisions at $\sqrt{s_{\rm NN}}=39$\ TeV, it becomes of interest to know whether a collision energy an order of magnitude higher than currently available at the LHC would lead to substantial thermal production of charm quarks in these collisions. In this short report, we calculate the thermal charm yield in heavy ion collisions at the FCC energy based on a boost invariant expanding QGP and a kinetic equation for charm quark production, using the charm production cross section from pQCD at next-to-leading order~\cite{Nason:1987xz, Zhang:2007dm}.
\par The kinetic equation for charm quark production in a QGP can be written as~\cite{Zhang:2007dm} \begin{eqnarray} \partial_{\mu}(\rho_c u^{\mu}) &=& R\left[1-\left(\frac{\rho_c}{\rho_c^{\rm eq}}\right)^2\right], \label{eq_1} \end{eqnarray} where $\rho_c$, $u^{\mu}$\ and $R$\ are, respectively, the local number density of charm quarks, the local 4-velocity, and the thermal production rate of charm quarks, and \begin{eqnarray} \rho_c^{\rm eq}&=& N_{\rm deg}\int \frac{d{\bf p}}{(2\pi)^3}\frac{1}{e^{E/T}+1}\\ &\approx& N_{\rm deg}\int \frac{d{\bf p}}{(2\pi)^3}e^{-\sqrt{ {\bf p}^2+m_c^2}/T} \end{eqnarray} is the equilibrium density of charm quarks, with the charm quark mass $m_c=1.3$\ GeV and the number of degrees of freedom $N_{\rm deg}=6$. Because the charm quark production rate $R$\ is negligible at low temperature~\cite{Nason:1987xz, Zhang:2007dm}, as observed in heavy ion collisions at RHIC~\cite{Adamczyk:2014uip}, thermal production is important only at the early stage of the produced QGP, when the longitudinal expansion is dominant. Therefore, we assume boost invariance and neglect the transverse expansion of the QGP by taking $u^{\mu}=(\cosh y, 0, 0, \sinh y)$\ with $y=\textrm{arctanh}(z/t)$\ being the rapidity. Eq.~(\ref{eq_1}) can then be simplified to \begin{eqnarray} \partial_{\tau}\sigma_c &=& \tau R\left[1-\left(\frac{\sigma_c}{\sigma_c^{\rm eq}}\right)^2\right], \label{eq_sigma_c} \end{eqnarray} where we have introduced the area density $\sigma_c\equiv dN_c/(dyd{\bf x}_T)=\tau \rho_c$ of charm quarks, which would be constant if there were no thermal production or annihilation. Here $\tau=\sqrt{t^2-z^2}$\ and ${\bf x}_T$ are the proper time and transverse coordinates, respectively. In the above, both the thermal production rate $R$\ and the equilibrium area density $\sigma_c^{\rm eq}$\ depend on the local temperature $T$ of the QGP.
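For orientation, the Boltzmann-approximated equilibrium density above can be evaluated numerically; the sketch below works in natural units (GeV) and notes, as an aside, the standard closed form $\rho_c^{\rm eq}\approx \frac{N_{\rm deg}}{2\pi^2}\, m_c^2\, T\, K_2(m_c/T)$ (this identity and the integration cutoff are assumptions made for the sketch):

```python
import numpy as np

def rho_eq(T, m=1.3, n_deg=6):
    """Boltzmann-approximated equilibrium charm density in GeV^3:
    rho = n_deg/(2 pi^2) * int_0^inf dp p^2 exp(-sqrt(p^2 + m^2)/T).
    Analytically this equals n_deg/(2 pi^2) * m^2 * T * K_2(m/T)."""
    p, dp = np.linspace(0.0, 30.0, 60001, retstep=True)  # integrand negligible beyond 30 GeV
    integrand = p**2 * np.exp(-np.sqrt(p**2 + m**2) / T)
    return n_deg / (2.0 * np.pi**2) * integrand.sum() * dp

# steep temperature dependence around the charm-mass scale
for T in (0.2, 0.4, 0.9):
    print(T, rho_eq(T))
```

The strong Boltzmann suppression at low $T$ is what makes thermal charm production sensitive mainly to the hottest, earliest stage of the evolution.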
\par For the thermal charm quark production rate, we take it from Ref.~\cite{Zhang:2007dm}, based on the charm quark production cross sections from the next-to-leading-order QCD calculation given in Refs.~\cite{Nason:1987xz,Nason:1989zy,Beenakker:1988bq,Beenakker:1990maa} and using thermal masses $m_q=gT/\sqrt{6}$ and $m_g=gT/\sqrt{2}$ for quarks and gluons, respectively. Specifically, it includes charm quark production from the leading-order processes $q+\bar q\to c+\bar c$ and $g+g\to c+\bar c$ as well as the next-to-leading-order processes $q+\bar q\to c+\bar c+g$ and $g+g\to c+\bar c+g$ and the interferences of the leading-order processes with their virtual corrections due to vertex corrections and self-energy insertions. The processes $g+q\to c+\bar c+q$ and $g+\bar q\to c+\bar c+\bar q$ are, however, neglected due to their smaller cross sections. To facilitate our calculations, we parameterize the thermal charm quark production rate shown in Fig.~3 of Ref.~\cite{Zhang:2007dm} as \begin{eqnarray} \log_{10}R=\sum_{n=0}^5 a_nT^n \end{eqnarray} with $a_0=-18.2327$, $a_1=110.9367$, $a_2=-319.9090$, $a_3=506.6754$, $a_4=-413.4846$, and $a_5=136.0222$, where $T$\ is in GeV and $R$\ is in $c/$fm$^4$. \par Since only the longitudinal expansion is considered, the time evolution of the QGP can be approximately described by entropy conservation \begin{eqnarray}\label{entropy} s({\bf x}_T, \tau)&=& \frac{\tau_r}{\tau}s({\bf x}_T,\tau_r), \end{eqnarray} where $s$\ is the local entropy density, which is assumed to be known at some given time $\tau_r$. Because of the large collision energy at the FCC, we further assume that the entropy density is proportional to the number of binary collisions $n_{\rm coll}({\bf x}_T)=\sigma_{pp}^{\rm in}T_A({\bf x}_T)T_B({\bf x}_T)$ between the two colliding nuclei, where $\sigma_{pp}^{\rm in}$\ is the proton-proton inelastic cross section and $T_A$ ($T_B$) is the thickness function of nucleus A (B).
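The polynomial fit for the rate is straightforward to evaluate; this sketch encodes the quoted coefficients and illustrates how steeply the rate falls as the QGP cools (the sample temperatures are arbitrary illustrative choices):

```python
# coefficients of log10(R / (c/fm^4)) as a polynomial in T/GeV, from the fit above
A = [-18.2327, 110.9367, -319.9090, 506.6754, -413.4846, 136.0222]

def thermal_rate(T):
    """Parameterized thermal charm production rate R(T) in c/fm^4, with T in GeV."""
    log10_R = sum(a * T**n for n, a in enumerate(A))
    return 10.0**log10_R

# the rate drops by orders of magnitude between the initial and final temperatures
for T in (0.9, 0.6, 0.4):
    print(T, thermal_rate(T))
```

This rapid fall-off is why, as discussed below, the production term $\tau R$ matters only during the first few fm/$c$ of the evolution.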
Therefore, Eq.~(\ref{entropy}) can be rewritten as \begin{eqnarray} s({\bf x}_T, \tau)&=& \frac{\tau_r T_A({\bf x}_T)T_{B}({\bf x}_T)}{\tau T_A({\bf 0})T_B({\bf 0})} s({\bf 0}, \tau_r) \end{eqnarray} in terms of the entropy density in the center of the QGP at time $\tau_r$. Through the equation of state of the produced hot dense matter, $s({\bf x}_T,\tau_r)$ is related to the energy density $\epsilon({\bf x}_T,\tau_r)$ and can be determined from the transverse energy $dE_T/dy$ via \begin{eqnarray} \int d{\bf x}_T\epsilon(s({\bf x}_T,\tau_r))=\frac{1}{\tau_r}\frac{dE_T}{dy}. \end{eqnarray} According to Ref.~\cite{Chatrchyan:2012mb}, the energy dependence of the transverse energy measured in heavy ion collisions can be parametrized as \begin{eqnarray} \frac{dE_T}{d\eta}&=& A\left(\frac{\sqrt{s_{\rm NN}}}{\sqrt{s_{\rm NN}^0}}\right)^{0.4}\frac{N_{\rm part}}{2}, \end{eqnarray} with $A=0.46$\ GeV and $\sqrt{s_{\rm NN}^0}=1$\ GeV. The number of participants, $N_{\rm part}$, in the above equation can be obtained from the $p+p$\ inelastic cross section using the parametrization~\cite{Zsigmond:2012vc} \begin{eqnarray} \sigma_{pp}^{\rm in}(\sqrt{s})&=& \sigma_0\ln\frac{\sqrt{s}}{\sqrt{s_0}}, \end{eqnarray} with $\sigma_0=8.2$\ mb and $\sqrt{s_0}= 1.436$\ GeV. With an inelastic cross section $\sigma_{pp}^{\rm in}=84$\ mb at 39 TeV, we have $N_{\rm part}=408.5$ for central Pb+Pb collisions at the same energy, resulting in a transverse energy $dE_T/dy=6447$\ GeV if we take $\tau_r=5$ fm.
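Both parameterizations can be checked with a few lines of code; the sketch below reproduces the quoted values of about 84 mb and 6447 GeV, taking $N_{\rm part}=408.5$ as an input rather than recomputing the Glauber geometry:

```python
import math

def sigma_pp_inelastic(sqrt_s, sigma0=8.2, sqrt_s0=1.436):
    """p+p inelastic cross section sigma0 * ln(sqrt(s)/sqrt(s0)) in mb, sqrt(s) in GeV."""
    return sigma0 * math.log(sqrt_s / sqrt_s0)

def dET_deta(sqrt_s, n_part, A=0.46):
    """Transverse energy dE_T/deta = A * (sqrt(s)/GeV)^0.4 * N_part/2, in GeV."""
    return A * sqrt_s**0.4 * n_part / 2.0

print(sigma_pp_inelastic(39000.0))   # close to the 84 mb quoted in the text
print(dET_deta(39000.0, 408.5))      # close to the 6447 GeV quoted in the text
```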
\par \begin{figure}[!hbt] \centering \includegraphics[width=0.45\textwidth]{T_xt0} \caption{Time evolution of local temperature $T$\ at ${\bf x}_T=0$.} \label{fg_T} \end{figure} Taking the equation of state as an ideal gas of quarks and gluons with masses $m_u=m_d=m_g=0$\ and $m_s=150$\ MeV for the QGP and a resonance gas of hadrons with masses below 2 GeV as well as including a bag constant, which leads to a first-order phase transition at $T_c=165$\ MeV, we have determined the energy density and temperature of the produced medium. In Fig.~\ref{fg_T}, we show the time evolution of the local temperature at ${\bf x}_T=0$. It is seen that the temperature is initially about 935 MeV but drops fast at the beginning due to the strong longitudinal expansion and becomes less than $400$\ MeV after $\tau=2.6$\ fm$/c$. As shown in Fig.~\ref{fg_tauR_xt0}, the production rate $\tau R$ also decreases fast with time and is only important during the early stage of the expanding QGP. The ratio $\rho_c/\rho_c^{\rm eq}$\ at ${\bf x}_T=0$ is found to increase with time but never exceed 0.42 at $\tau <2.6$\ fm, indicating that charm quark annihilation is far less important than charm production in the QGP. \begin{figure}[!hbt] \centering \includegraphics[width=0.45\textwidth]{tauR_xt0} \caption{Time evolution of $\tau R$\ at ${\bf x}_T=0$.} \label{fg_tauR_xt0} \end{figure} \par The initial charm quark density $\sigma_c(\tau_0)$\ is estimated with the Glauber model, i.e., \begin{eqnarray} \sigma_c({\bf x}_T, \tau_0) &=& T_A({\bf x}_T)T_B({\bf x}_T)d\sigma_{pp}^{c\bar{c}}/dy, \end{eqnarray} where $\sigma_{pp}^{c\bar{c}}$\ is the charm quark production cross section in $p+p$\ collisions.
From the charm quark production cross section measured in $p+p$\ collisions at $\sqrt{s_{\rm NN}}=2.76$\ TeV, i.e., $d\sigma_{pp}^{c\bar{c}}/dy=0.62$\ mb~\cite{Averbeck:2011ga}, we extrapolate to $\sqrt{s_{\rm NN}}=39$\ TeV by running PYTHIA~\cite{Sjostrand:2006za, Sjostrand:2007gs} at both energies and obtain the cross section $d\sigma_{pp}^{c\bar{c}}/dy=1.57$\ mb at $\sqrt{s}=39$\ TeV, which leads to the initial charm quark number $dN_c/dy=48$\ from primordial collisions. \begin{figure}[!hbt] \centering \includegraphics[width=0.45\textwidth]{Nc} \caption{Time evolution of the number of charm quarks with different initial time $\tau_0$.} \label{fg_Nc} \end{figure} \par The final yield of charm quarks obtained from solving Eq.~(\ref{eq_sigma_c}) depends on the time $\tau_0$\ at which the production of charm quarks starts. It should neither be much smaller than the charm quark formation time $1/m_c\sim 0.2$\ fm$/c$\ nor be much larger than the formation time of the QGP. In Fig.~\ref{fg_Nc}, results from using $\tau_0=0.2$, $0.4$, and $0.6$\ fm/$c$ are shown. It is seen that this leads to a relative enhancement that varies from $21\%$\ to $45\%$ and is thus not negligible. The corresponding enhancement from a similar calculation at $\sqrt{s_{\rm NN}}=5.5$\ TeV ranges from $6\%$\ to $16\%$. \par In summary, we have studied charm quark production from the QGP produced in heavy ion collisions at 39 TeV at the future FCC. Using the charm production cross section in quark-antiquark and gluon-gluon scattering calculated at next-to-leading order in QCD and assuming that the produced QGP expands boost invariantly, we have found that charm production from the QGP is not negligible. Depending on the formation time of the QGP, its contribution can reach nearly 50\% for $\tau_0=0.2$ fm/$c$.
Such an enhancement of charm quark production relative to that from initial hard scattering is expected to have a significant effect on charmonium production in heavy ion collisions at such an energy~\cite{Zhou:2016wbo}. Work is in progress to study the effect of thermal charm production on the nuclear modification factors of both charm quarks and charmonia. \section*{Acknowledgements} We thank Andrea Dainese for suggesting this study and helpful discussions. This work was supported by the US Department of Energy under Contract No. DE-SC0015266, the Welch Foundation under Grant No. A-1358, and the NSFC under Grant No. 11547043. \bibliographystyle{elsarticle-num}
\section{Motivation} The minimal gravitational sector of the Standard-Model Extension (mgSME) is described by the action for conventional physics plus the Lorentz violating term \begin{equation} S_{\rm mgSME} = \frac{1}{2\kappa}\int d^4 x \sqrt{-g}k^{\mu\nu\rho\sigma} R_{\mu\nu\rho\sigma}, \end{equation} where $\kappa$ is the coupling constant of general relativity (GR), $g$ is the determinant of the metric $g_{\mu\nu}$, $k^{\mu\nu\rho\sigma}$ are the Lorentz-violation (LV) coefficients, and $R_{\mu\nu\rho\sigma}$ is the Riemann tensor. Note that, since this work concerns curved spacetimes, LV has to arise spontaneously,\cite{Kostelecky2004} and thus $k^{\mu\nu\rho\sigma}$ are dynamical. The $k^{\mu\nu\rho\sigma}$ can be separated into irreducible pieces as \begin{equation} k^{\mu\nu\rho\sigma} R_{\mu\nu\rho\sigma}=-uR + s^{\mu\nu} R^T_{\mu\nu} +t^{\mu\nu\rho\sigma}W_{\mu\nu\rho\sigma}, \end{equation} where $R$, $R^{T}_{\mu\nu}$, and $W_{\mu\nu\rho\sigma}$ stand, respectively, for the curvature scalar, the traceless Ricci tensor, and the Weyl tensor. Note that $s^{\mu\nu}$ and $t^{\mu\nu\rho\sigma}$ share the index symmetries of $R^T_{\mu\nu}$ and $W_{\mu\nu\rho\sigma}$, respectively. Remarkably, to date, the effects of the $t^{\mu\nu\rho\sigma}$ coefficient are still unknown; this is known as the $t$ puzzle.\cite{BaileyKostelecky,RuiQuentinAlan} This contribution is devoted to describing several analyses where a fundamental explanation for this puzzle is sought. \section{Field redefinitions} It is well known\cite{field redef bibliog} that field redefinitions can be used to move some LV coefficients to other sectors of the Standard-Model Extension (SME). It is thus natural to expect that a field redefinition could explain the $t$ puzzle. In GR, the metric is the dynamical field. Therefore, it is tempting to study what LV coefficients arise, in the GR action $S_{\rm EH}$, with a redefinition $g_{\mu\nu}\rightarrow \tilde{g}_{\mu\nu}$. 
With a particular metric redefinition, it is possible to show\cite{tpuzzle} that, to first order in the LV coefficients, \begin{equation} S_{\rm EH} \rightarrow S_{\rm EH} + \frac{1}{2\kappa} \int d^4x \sqrt{-\tilde{g}} \left[-uR(\tilde{g}) + s^{\mu\nu}R^T_{\mu\nu}(\tilde{g})\right], \end{equation} where a total divergence, which has no physical effects, has been ignored. This result implies that the $u$ and $s^{\mu\nu}$ coefficients can be moved to other SME sectors, which is consistent with previous results in the linearized-metric approximation.\cite{KosteleckyTasson} In addition, it proves that $t^{\mu\nu\rho\sigma}$ cannot be removed with a metric redefinition. In GR there are two equivalent dynamical formalisms: the standard and the Palatini. In the former approach, the metric is the only dynamical field and the (torsionless) connection is determined by requiring that the covariant derivative of the metric vanishes. In the latter approach, the metric and the connection are assumed to be dynamically independent fields and the equation of motion for the connection yields the condition that the metric covariant derivative vanishes.\cite{Palatini} If the metric and the connection can be treated as independent fields, it is possible to perform more general field redefinitions. Moreover, it has been shown that, to first order in the LV coefficients, the mgSME yields the same physical predictions in both approaches.\cite{tpuzzle} However, these independent redefinitions are not particularly revealing: the metric redefinition leads to the $u$ and $s^{\mu\nu}$ terms (with no divergence), while the connection redefinition produces new terms that are definitely not of the form of $t^{\mu\nu\rho\sigma}W_{\mu\nu\rho\sigma}$.\cite{tpuzzle} Therefore, the $t^{\mu\nu\rho\sigma}$ term cannot be removed with field redefinitions of the gravitational fields.
\section{Lanczos-like tensor} An analytic tensor with the index symmetries of the Weyl tensor can be written in terms of the covariant derivative ($D_\mu$) of a `Lanczos potential' $H^{\mu\nu\rho}$.\cite{Bampi} This potential is such that $H^{\mu\nu\rho} =- H^{\nu\mu\rho}$, and $H^{[\mu\nu\rho]}$, $g_{\nu\rho}H^{\mu\nu\rho}$, and $D_\rho H^{\mu\nu\rho}$ vanish. After replacing $t^{\mu\nu\rho\sigma}$ by its Lanczos potential, the mgSME action takes the form \begin{equation}\label{mgSME action Lanczos} S_{\rm mgSME} = \frac{1}{2\kappa}\int d^4 x \sqrt{-g} \left[ -u R + s^{\mu\nu} R^T_{\mu\nu} +4 H^{\mu\nu\rho}D_\mu R_{\nu\rho}\right]. \end{equation} Observe that the $t^{\mu\nu\rho\sigma}$ term has been converted into a dimension-$5$ operator, and this type of operator is known to generate, in the nonrelativistic weak-gravity approximation, unphysical self accelerations.\cite{RuiQuentinAlan} This may be the reason behind the $t$ puzzle, but it is not a fundamental explanation. It is also tempting to integrate the last term in Eq.~\refeq{mgSME action Lanczos} by parts, obtaining an effective $s^{\mu\nu}$ coefficient: $s_{\rm eff}^{\mu\nu} = s^{\mu\nu}- 4 D_\rho H^{\rho \mu\nu}$. However, $s_{\rm eff}^{\mu\nu}$ depends on the metric (through the covariant derivative), and thus it cannot be considered as an LV coefficient. Still, it should be stressed that, in the linearized gravity approximation and neglecting terms proportional to the LV coefficients and the metric perturbation, this procedure accounts for the absence of physical effects associated with $t^{\mu\nu\rho\sigma}$. \section{Other ideas} It is well known that, if the spacetime under consideration has boundaries, $S_{\rm EH}$ needs to be corrected with the so-called York--Gibbons--Hawking boundary term to lead to Einstein's equations.\cite{HG term} This could be relevant for the $t$ puzzle since, typically, the phenomenological studies in the SME involve conformally flat spacetimes, which have boundaries. 
Therefore, it should be verified whether a boundary term can be constructed for all the coefficients in the mgSME. It turns out that such a boundary term exists for all coefficients in the mgSME, including $t^{\mu\nu\rho\sigma}$.\cite{tpuzzle} However, such a term cannot be constructed in the nonminimal sector, which needs to be carefully handled in the presence of spacetime boundaries. The last idea is related to the Cauchy problem\cite{Wald} for the action of the LV coefficients. Generically, it is hard to study this problem. Therefore, it is useful to focus on a simpler case. In a particular example of the so-called bumblebee models\cite{Bumblebee} where the vector field has a Maxwell kinetic term and a potential that drives the spontaneous Lorentz violation, it has been shown that there exists a Hamiltonian density that generates a constraint-compatible evolution, but that the evolution is not uniquely determined by the required initial data.\cite{BonderEscobar} In the future, the question that needs to be analyzed is whether the feasible actions for $t^{\mu\nu\rho\sigma}$ have well-posed Cauchy problems.
It thus seems promising to look for the effects of $t^{\mu\nu\rho\sigma}$ in different phenomenological schemes. \section*{Acknowledgments} This work was done with financial support from UNAM-DGAPA-PAPIIT Project No. IA101116.
\subsection{Supplemental Material} The purpose of this Supplemental Material is to guide the reader through the derivation of the correction of the polytropic exponent, $\gamma$. This parameter accounts for the deviation from an ideal gas ($\gamma = 1$) induced by multiple scattering of photons \cite{sesko_2, dalibard}. The atoms behave as if they possess an effective electrical charge $q=\sqrt{\epsilon_0 Q}$, with $Q = (\sigma_R - \sigma_L)\sigma_L I_0 / c$ \cite{Q}, where $I_0$ is the total intensity of the beams and $c$ is the speed of light. Here, $\sigma_R$ and $\sigma_L$ represent the emission and absorption cross sections, respectively \cite{walker_1990}. The induced collective interaction has been previously explored \cite{tito_2008, livro}. \par Let us begin by determining the electrostatic potential of the system, which is known to satisfy the Poisson equation \begin{equation}\label{poisson1} \nabla^2 \phi\left(r\right) = - \frac{1}{\epsilon_0} q n \left(r\right). \end{equation} We approximate the density distribution in the cloud by a \textit{water-bag} profile (recall the multiple scattering regime discussed in the main text), corresponding to a constant density $n_0$ spread over a radial extent of radius $R$, i.e. $n\left(r\right) = n_0 \theta \left(R - r \right)$, with $n_0 = 3m\omega_0^2 / Q$ and $R = \left( \frac{3N}{4 \pi n_0} \right)^{1/3}$. This approximation allows us to keep the analysis tractable and derive an analytical correction for $\gamma$. We shall then compute the solutions of the Poisson equation in two different regions. Outside the cloud, $r > R$, Eq.~(\ref{poisson1}) reads, in spherical coordinates, \begin{equation}\label{poisson2} \left( \frac{\partial^2}{\partial r^2} + \frac{2}{r} \frac{\partial}{\partial r} \right) \phi\left( r \right) = 0 \end{equation} which admits solutions of the form $\phi\left( r \right) = \frac{A}{r} + B$.
Assuming that the potential vanishes at infinity, $\phi\left( r \rightarrow \infty \right) = 0$, results in $B=0$. Gauss' theorem allows us to write $A = \frac{q_{\text{T}}}{4 \pi \epsilon_0}$, with $q_{\text{T}}$ the total charge of the system, $q_{\text{T}} = \frac{4}{3} \pi q n_0 R^3$. We then have, for $r> R$, $\phi\left( r \right) = \frac{q n_0 R^3}{3 \epsilon_0 r}$. Let us now turn to the region inside the cloud, $r \leq R$, where the corresponding Poisson equation reads \begin{equation}\label{poisson3} \left( \frac{\partial^2}{\partial r^2} + \frac{2}{r} \frac{\partial}{\partial r} \right) \phi\left( r \right) = -\frac{q n_0}{\epsilon_0}. \end{equation} In this case the solutions are of the form $ \phi\left( r \right)= A' r^2 + B'$. Substituting in Eq.~(\ref{poisson3}) results in $A' = - \frac{qn_0}{6 \epsilon_0}$. The integration constant $B'$ is determined by the continuity of the potential $\phi(r)$ at the boundary between the two regions, i.e. $\phi(R^{-}) = \phi(R^{+})$. Finally, we can write the electrostatic potential inside the cloud as \begin{equation}\label{phi1} \phi\left(r\right) = \frac{qn_0}{6 \epsilon_0} \left( R^2 - r^2 \right) + \frac{q n_0 R^2}{3 \epsilon_0}. \end{equation} The next step is the evaluation of the effective electrostatic energy, determined by \begin{equation}\label{int1} U_{\text{C}} = \frac{1}{2} \int_V \phi \left( r \right) q n\left(r\right) dV. \end{equation} Introducing again the \textit{water-bag} density profile yields the result $U_{\text{C}} = \frac{4}{15} \pi Q n_0^2 R^5$ or, equivalently, $U_{\text{C}} = \frac{1}{5}\left( \frac{3}{4\pi} \right)^{2/3} \frac{QN^2}{V^{1/3}}$ in terms of the volume and the number of particles in the system, which will be useful in the next steps.
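The closed form for $U_{\text{C}}$ can be checked by numerically integrating Eq.~(\ref{int1}). The following Python sketch (ours, not part of the derivation) works in rescaled units $Q=n_0=R=1$, for which the exact value is $4\pi/15$:

```python
import math

R = 1.0                 # cloud radius (rescaled units Q = n0 = R = 1)
N_STEPS = 100_000
dr = R / N_STEPS

def q_phi(r):
    """q*phi(r) inside the cloud in rescaled units: Q*n0*[(R^2-r^2)/6 + R^2/3],
    following Eq. (phi1) with q^2/eps0 = Q."""
    return (R ** 2 - r ** 2) / 6.0 + R ** 2 / 3.0

# U_C = (1/2) * integral of phi * q * n over the water-bag cloud, Eq. (int1),
# evaluated with the midpoint rule.
U = 0.5 * sum(q_phi((i + 0.5) * dr) * 4.0 * math.pi * ((i + 0.5) * dr) ** 2 * dr
              for i in range(N_STEPS))

assert abs(U - 4.0 * math.pi / 15.0) < 1e-6   # exact value is 4*pi/15
```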
We now wish to evaluate the pressure in the cloud, which encompasses the contributions from the ideal-gas part and the effective electrostatic interaction, $P = P_0 + P_{\text{C}}$, with $P_0 = k_B T n = k_B T N / V$ and $P_{\text{C}}$ determined by \begin{equation} P_{\text{C}} = - \left(\frac{\partial U_{\text{C}}}{ \partial V}\right)_N \text{,} \end{equation} since the electrostatic energy does not depend on the temperature. We then have $P_{\text{C}} = \frac{1}{15} \left( \frac{3}{4 \pi} \right)^{2/3} Q N^2 V^{-4/3}$ or, equivalently, $P_{\text{C}} = \frac{1}{15}Qn_0^2R^2$. The total pressure in the system is given by \begin{equation}\label{pressure} P = \frac{k_BT N}{V} + \frac{1}{15} \left( \frac{3}{4 \pi} \right)^{2/3} \frac{Q N^2}{V^{4/3}}. \end{equation} We now wish to establish an equivalence between this equation of state and a polytropic-like one, of the form $P=C_{\gamma} n^{\gamma}$, as in the Lane-Emden derivation. With that in mind, we can write \begin{equation} C_\gamma n_0^\gamma = k_B T n_0 + \frac{Q n_0^2 R^2}{15} \end{equation} or, equivalently, dividing by $k_B T n_0$, \begin{equation}\label{eqf} \frac{C_\gamma}{k_B T} n_0^\epsilon = 1 + \frac{Qn_0R^2}{15 k_B T} \end{equation} where we defined $\epsilon = \gamma -1 $. Note that we can rewrite this last expression in terms of the parameters of the model introduced earlier, namely the effective plasma frequency $\Omega = \frac{Qn_0}{3 m \omega_0^2}$ and the scaling factor $a_\gamma^2 = \frac{C_\gamma}{3 m \omega_0^2} n_0^\epsilon$. Simple mathematical manipulation of Eq.~(\ref{eqf}) finally yields \begin{equation} \gamma = 1 + \frac{2\xi/3}{\xi + 1} \end{equation} with $\xi$ a dimensionless universal parameter defined as $\xi = \frac{1}{15}\left( \frac{3N}{4 \pi n_0} \right)^{2/3} \frac{\Omega}{a_\gamma^2}$, where we use the total number of atoms $N=\frac{4}{3} \pi n_0 R^3$.
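As a sanity check on this expression (our own sketch, not part of the derivation), $\gamma$ interpolates between the ideal-gas value $\gamma=1$ at $\xi=0$ and $\gamma=5/3$ in the strongly interacting limit $\xi\to\infty$:

```python
def gamma(xi):
    """Corrected polytropic exponent, gamma = 1 + (2*xi/3)/(xi + 1)."""
    return 1.0 + (2.0 * xi / 3.0) / (xi + 1.0)

assert gamma(0.0) == 1.0                      # ideal gas: no multiple scattering
assert abs(gamma(1e12) - 5.0 / 3.0) < 1e-9    # strong multiple-scattering limit
assert abs(gamma(1.0) - 4.0 / 3.0) < 1e-12    # intermediate example value
```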
\section{Introduction} \thispagestyle{empty} Traub $[28]$ defines analytic computational complexity to be the optimality theory of analytic or continuous processes. Apart from some work by Schultz $[24]$ on differential equations, most recent results have concerned iterative methods for the solution of nonlinear equations or systems of equations. See, for example, Brent $[$1,3,6$]$, Brent, Winograd and Wolfe $[7]$, Kung $[14,15]$, Kung and Traub $[$16--17$]$, Paterson $[21]$, Rissanen $[22]$, Traub $[$28--32$]$ and Wozniakowski $[35, 36]$.\\ The authors just cited make the (usually implicit) assumption that arithmetic is performed with a fixed precision throughout a given computation. This is probably true for most computations programmed in Fortran or Algol 60. Suppose, though, that we are concerned with an iterative process for approximating an irrational number $\zeta$ (for example, $\sqrt{2}$, $\pi$ or $e$) to arbitrary accuracy. The iterative process should (theoretically) generate a sequence ($x_i$) of real numbers, such that $\displaystyle\zeta = \lim_{i \rightarrow \infty} x_i$, provided no rounding errors occur. On a computing machine each $x_i$ has to be approximated by a finite-precision machine-representable number $\widetilde{x}_i$, and $\displaystyle\zeta = \lim_{i \rightarrow \infty} \widetilde{x}_i$ can only hold if the precision increases indefinitely as $i \rightarrow \infty$. In practice, only a finite number of members of the sequence $(\widetilde{x}_i)$ will ever be generated, but if an accurate approximation to $\zeta$ is required it may be possible to save a large amount of computational work by using variable precision throughout the computation. This is likely to become easier to program as new languages (and possibly hardware), which allow the precision of floating-point numbers to be varied dynamically, are developed.\\ In Section 7 we discuss the effect of using variable precision when solving nonlinear equations. 
Before doing so, we consider the complexity of the basic multiple-precision arithmetic operations. We assume that a standard floating-point number representation is used, with a binary fraction of $n$ bits. (Similar results apply for any fixed base, for example, 10.) We are interested in the case where $n$ is much greater than the wordlength of the machine, so the fraction occupies several words. For simplicity, we assume that the exponent field has a fixed length and that numbers remain in the allowable range, so problems of exponent overflow and underflow may be neglected. Note that our assumptions rule out exotic number representations (for example, logarithmic $[4]$ or modular $[33, 34]$ representations) in which it is possible to perform some (but probably not all) of the basic operations faster than with the standard representation. To rule out ``table-lookup'' methods, we assume that a random-access memory of bounded size and a bounded number of sequential tape units are available. (Formally, our results apply to multitape Turing machines.)\\ In Sections 2 to 6 we ignore ``constant'' factors, that is factors which are bounded as $n \rightarrow \infty$. Although the constant factors are of practical importance, they depend on the computer and implementation as well as on details of the analysis. Certain machine-independent constants are studied in Sections 7 and 8.\\ If $B$ is a multiple-precision operation, with operands and result represented as above (that is, ``precision $n$'' numbers), then $t_n(B)$ denotes the worst-case time required to perform $B$, obtaining the result with a relative error at most $2^{-n}c$, where $c$ is independent of $n$. We assume that the computation is performed on a serial machine whose single-precision instructions have certain constant execution times. 
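As a concrete toy illustration of this number model (ours, not from the paper; real multiple-precision packages differ in rounding and normalization details), a precision-$n$ value can be represented as an $n$-bit fraction together with a binary exponent:

```python
def normalize(f, e, n):
    """Bring the integer fraction f into [2^(n-1), 2^n), adjusting exponent e.
    The pair (f, e) represents the value f * 2^(e - n)."""
    while f >= 1 << n:
        f = (f + 1) >> 1          # drop the low bit, rounding half up
        e += 1
    while 0 < f < 1 << (n - 1):
        f <<= 1
        e -= 1
    return f, e

def mul(x, y, n):
    """Precision-n product of two precision-n values; relative error O(2^-n)."""
    (fx, ex), (fy, ey) = x, y
    return normalize(fx * fy, ex + ey - n, n)

# With n = 4: (12, 4) represents 12 and (10, 4) represents 10; their product
# 120 happens to be exactly representable as 15 * 2^(7-4).
assert mul((12, 4), (10, 4), 4) == (15, 7)
```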
The following definition follows that in Hopcroft $[11]$.\\ \begin{definition}\rm $B$ is {\it linearly reducible} to $C$ (written $B {\;\scriptstyle\stackrel{<}{=}\;} C$), if there is a positive constant $K$ such that $$t_n(B) \leq Kt_n(C) \eqno{(1.1)}$$\\[-4ex] for all sufficiently large $n$. $B$ is {\it linearly equivalent} to $C$ (written $B \equiv C$) if $B {\;\scriptstyle\stackrel{<}{=}\;} C$ and $C {\;\scriptstyle\stackrel{<}{=}\;} B$. \end{definition} In Section 2 we consider the complexity of multiple-precision addition and some linearly equivalent operations. Then, in Section 3, we show that multiple-precision division, computation of squares or square roots, and a few other operations are linearly equivalent to multiplication. Most of these results are well known $[8, 9]$.\\ Sections 4 and 5 are concerned with the ``operations'' of evaluating exponentials, logarithms, and the standard trigonometric and hyperbolic functions (sin, artan, cosh, and so on). It turns out that most of (and probably all) these operations are linearly equivalent so long as certain restrictions are imposed.\\ Section 6 deals with the relationship between the four equivalence classes established in Sections 2 to 5, and several upper bounds on the complexity of operations in these classes are given. The best known constants relating operations which are linearly equivalent to multiplication are given in Section 7.\\ Finally, in Section 8, we compare the efficiencies of various methods for solving nonlinear equations using variable-length multiple-precision arithmetic. The relative efficiencies are different from those for the corresponding fixed-precision methods, and some of the conclusions may be rather surprising. The results of Sections 4 to 8 are mainly new.\\ In the analysis below, $c_1, c_2, \ldots\;$ denote certain positive constants which do not need to be specified further. 
The notation $f \sim g$ means that $\displaystyle\lim_{n \rightarrow \infty} f(n)/g(n) = 1$, and $f = O(g)$ means that $|f(n)| \leq Kg(n)$ for some constant $K$ and all sufficiently large $n$. Finally, the abbreviation ``{\it mp}'' stands for ``variable-length multiple-precision''. \section{Addition and linearly equivalent operations} Let $A$ denote the operation of multiple-precision addition. Any reasonable implementation of floating-point addition, using at least one guard digit to avoid the possible occurrence of large relative errors, gives $$ t_n(A) \leq c_1n \;. \eqno{(2.1)}$$\\[-4ex] Conversely, from the assumptions stated in Section 1, it is clear that $$ t_n(A) \geq c_2n \;. \eqno{(2.2)} $$\\[-4ex] Hence, the complexity of multiple-precision addition is easily established. (For the operations discussed in Sections 3 and 5 the results are less trivial, in fact the conjectured lower bounds corresponding to (2.2) have not been proved rigorously.)\\ It is easy to see that bounds like (2.1) and (2.2) hold for multiple-precision subtraction, and multiplication or division of a multiple-precision number by a single-precision number (or even by any rational number with bounded numerator and denominator). Hence, all these operations are linearly equivalent to addition. \section{Multiplication and linearly equivalent operations} Let $D, I, M, R$ and $S$ denote the multiple-precision operations of division, taking reciprocals, multiplication, extraction of square roots and forming squares, respectively. In this section, we show that all these operations are linearly equivalent. The proofs are straightforward, but the result is surprising, as it seems intuitively obvious that taking a square root is inherently ``more difficult'' than forming a square, and similarly for division versus multiplication. 
(Some bounds on the relative difficulty of these operations are given in Section 7.)\\ \begin{lemma} $$ M {\;\scriptstyle\stackrel{>}{=}\;} S {\;\scriptstyle\stackrel{>}{=}\;} A \;. \eqno{(3.1)}$$ \end{lemma} {\bf Proof.} Clearly $$ t_n(M) \geq t_n(S) \geq c_3n \;, \eqno{(3.2)}$$\\[-4ex] so the result follows from (2.1).\\ Sharp upper bounds on $t_n(M)$ are not needed in this section, so we defer them until Section 6. Lemmas 3.2 and 3.3, although weak, are sufficient for our present purposes. \\ \begin{lemma} For all positive n, $$ t_{2n}(M) \leq c_4t_n(M) \;. \eqno{(3.3)}$$ \end{lemma} {\bf Proof.} First assume that $n$ is divisible by 3, and consider operations on the $n$-bit fractions only. If we can multiply $n$-bit numbers with relative error $2^{-n}c_0$ then we can multiply $n/3$-bit numbers exactly (assuming $2^{n/3} > 2c_0$). Thus, a $2n$-bit fraction $x$ may be split up into $$ x = \lambda a + \lambda^2b + \ldots + \lambda^6f \;, \eqno{(3.4)}$$\\[-4ex] where $\lambda = 2^{-n/3}$ and $a, b, \ldots, f$ are integers in $\left[0,2^{n/3}\right)$, and the product of two such $2n$-bit fractions may be formed exactly with 36 exact multiplications of $n/3$-bit numbers and some additions. Thus $$ t_{2n}(M) \leq 36t_n(M) + c_5t_{2n}(A) \;, \eqno{(3.5)} $$\\[-4ex] and the result follows from Lemma 3.1. Trivial modifications to the above proof suffice, if $n$ is not divisible by 3.\\ \begin{lemma} For some constant $c_6<1$, $$ t_n(M) \leq c_6t_{8n}(M) \eqno{(3.6)}$$ for all sufficiently large n.\\ \end{lemma} {\bf Proof.} If $a, b, c$ and $d$ are integers in $\left[0, 2^n\right)$, the identity $$ (a+\lambda b)(c+\lambda d)= ac + \lambda(bc+ad) + \lambda^2 bd\;, \eqno{(3.7)}$$\\[-4ex] with $\lambda = 2^{3n}$, may be used to obtain the products $ac$ and $bd$ from one $8n$-bit product. Thus $$ 2t_n(M) \leq t_{8n}(M) + c_7\;. \eqno{(3.8)}$$\\[-4ex] The result (with $c_6 = 3/4$) follows if $n$ is sufficiently large that $t_{8n}(M) \geq 2c_7$. 
(We have assumed that the time required for one $n$-bit multiplication is half the time required for two independent $n$-bit multiplications, but much weaker assumptions would be sufficient.) \\ The following lemma will be used to estimate the work required for multiple-precision divisions and square roots.\\ \begin{lemma} Given $\alpha \in (0,1)$, there is a constant $c_8$ such that, for any integers $n_0, \ldots, n_p$ satisfying $$ 1 \leq n_j \leq \alpha^jn \eqno{(3.9)}$$\\[-4ex] for $j=0, 1, \ldots, p$, we have $$ \sum^{p}_{j=0} t_{n_j}(M) \leq c_8 t_n(M)\;. \eqno{(3.10)}$$\\[-4ex] \end{lemma} {\bf Proof.} Let $k$ be large enough that $$\alpha^k \leq 1/8\;. \eqno{(3.11)}$$\\[-4ex] {From} (3.9) and (3.11), $$ t_{n_{jk}}(M) \leq c^{j}_{6} t_n(M) \eqno{(3.12)}$$\\[-4ex] for $j=0, 1, \ldots, \lfloor p/k\rfloor$, provided $n_{jk}$ is sufficiently large for Lemma 3.3 to be applicable. Thus, $$ \sum^{p}_{j=0} t_{n_j}(M) \leq kt_n(M) \left(1+ c_6+ c^2_6+\ldots\right)+c_7\;, \eqno{(3.13)}$$\\[-4ex] where the term $c_7$ allows for those $t_{n_j}(M)$ for which Lemma 3.3 is not applicable. If $$ c_8 = k/\left(1-c_6\right) + c_7 \;,\eqno{(3.14)}$$\\[-4ex] the result follows from (3.13).\\ The following lemma shows that multiple-precision multiplication is linearly equivalent to squaring. This result is essentially due to Floyd $[9]$.\\ \begin{lemma} $$ M\equiv S\;. \eqno{(3.15)}$$\\[-4ex] \end{lemma} {\bf Proof.} Since squaring is a special case of multiplication, $$ M {\;\scriptstyle\stackrel{>}{=}\;} S\;. \eqno{(3.16)}$$\\[-4ex] Conversely, we may use the identity $$ 4\lambda ab = (a+\lambda b)^2 - (a-\lambda b)^2\;, \eqno{(3.17)}$$\\[-4ex] where $\lambda$ is a power of 2 chosen so that $$\textstyle \frac{1}{2} \leq |\lambda b/a| \leq 2 \eqno{(3.18)}$$\\[-4ex] (unless $a=0$ or $b=0)$. This scaling is necessary to avoid excessive cancellation in (3.17). (A~detailed discussion of a similar situation is given in Brent $[5]$.)
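As an aside, identity (3.17) together with the scaling (3.18) is easy to exercise numerically; in the following Python sketch the helper name is ours:

```python
import math

def mul_via_squares(a, b):
    """a*b via 4*lam*a*b = (a + lam*b)^2 - (a - lam*b)^2, identity (3.17),
    with lam a power of two chosen so that 1/2 <= |lam*b/a| <= 2, cf. (3.18)."""
    if a == 0 or b == 0:
        return 0.0
    k = round(math.log2(abs(a / b)))   # lam = 2^k balances the two terms
    lam = 2.0 ** k
    return ((a + lam * b) ** 2 - (a - lam * b) ** 2) / (4.0 * lam)

assert mul_via_squares(6.0, 7.0) == 42.0
assert abs(mul_via_squares(3.5, -8.0) + 28.0) < 1e-12
```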
From (3.17), $$ t_n(M) \leq 2t_n(S) + 3t_n(A) + c_9, \eqno{(3.19)}$$\\[-4ex] so $M {\;\scriptstyle\stackrel{<}{=}\;} S$ follows from Lemma 3.1.\\ The next two lemmas show that multiple-precision multiplication is linearly equivalent to taking reciprocals and to division. The idea of the proof of Lemma 3.6 is to use a Newton iteration involving only multiplications and additions to approximate $1/a$. Computational work is saved by starting with low precision and approximately doubling the precision at each iteration. The basic idea is well-known and has even been implemented in hardware.\\ The possibility of saving work by increasing the precision at each iteration is examined more closely in Sections 7 and 8.\\ \begin{lemma} $$I {\;\scriptstyle\stackrel{<}{=}\;} D {\;\scriptstyle\stackrel{<}{=}\;} M\;. \eqno{(3.20)}$$\\[-4ex] \end{lemma} {\bf Proof.} Consider the iteration $$ x_{j+1} = x_j\left(2-ax_j\right) \eqno{(3.21)}$$\\[-4ex] obtained by applying Newton's method to the equation $x^{-1}-a=0$. If $$ x_j=\left(1-\varepsilon_j\right)a^{-1}\;, \eqno{(3.22)}$$\\[-4ex] then substitution in (3.21) shows that $$ \varepsilon_{j+1}=\varepsilon^2_j\;, \eqno{(3.23)}$$\\[-4ex] so the order of convergence is two. A single-precision computation is sufficient to give an initial approximation such that $|\varepsilon_0| \leq \frac{1}{2}$, and it follows from (3.23) that $$|\varepsilon_j| \leq 2^{-2^j} \eqno{(3.24)}$$\\[-4ex] for all $j \geq 0$.\\ In deriving (3.24) we have assumed that (3.21) is satisfied exactly, but a result like (3.24) holds so long as the right hand side of (3.21) is evaluated using a precision of at least $2^{j+1}$ bits. Thus, an $n$-bit approximation to $a^{-1}$ can be obtained by performing $\lceil \log_2n\rceil$ iterations of (3.21) with precision at least $2, 2^2, 2^3, \ldots, 2^{\lceil\log_2n\rceil -1}$, $n$ at each iteration. From Lemma 3.4 (with $\alpha = \frac{1}{2}$), this gives $$ t_n(I) \leq c_{10}t_n(M)\;.
\eqno{(3.25)}$$\\[-4ex] Since $b/a = b(1/a)$, it follows that $$ t_n(D) \leq c_{11}t_n(M)\;, \eqno{(3.26)}$$\\[-4ex] so $D{\;\scriptstyle\stackrel{<}{=}\;} M$. Since $I {\;\scriptstyle\stackrel{<}{=}\;} D$ is trivial, the proof is complete.\\ {From} $ab=a/(1/b)$ it is clear that $M{\;\scriptstyle\stackrel{<}{=}\;} D$. The proof that $M {\;\scriptstyle\stackrel{<}{=}\;} I$ is not quite so obvious, and uses the equivalences of multiplication and squaring (Lemma 3.5).\\ \begin{lemma} $$ M {\;\scriptstyle\stackrel{<}{=}\;} I\;. \eqno{(3.27)}$$\\[-4ex] \end{lemma} {\bf Proof.} We may apply the identity $$ a^2(1-\lambda a)^{-1} = \lambda^{-2}\left[(1-\lambda a)^{-1} - (1+\lambda a)\right] \eqno{(3.28)}$$\\[-4ex] to obtain an approximation to $a^2$, using only the operation of taking reciprocals, addition (or subtraction) and multiplication by powers of two. If $a \neq 0$, choose $\lambda$ to be a power of two such that $$ 2^{-n/3-1} < |\lambda a| < 2^{1-n/3}\;, \eqno{(3.29)}$$\\[-4ex] and evaluate the right hand side of (3.28), using precision $n$. This gives an approximation to $a^2$ with precision $\lceil n/3 \rceil$, so $$ S_{\lceil n/3\rceil} {\;\scriptstyle\stackrel{<}{=}\;} I_n\;, \eqno{(3.30)}$$\\[-4ex] where the subscripts denote the precision. Thus, the result follows from Lemmas 3.2 and 3.5.\\ To conclude this section we consider the complexity of multiple-precision square roots. Results like Lemmas 3.8 and 3.9 actually hold if $x^{\frac{1}{2}}$ is replaced by $x^p$ for any fixed rational $p \neq 0$ or $1$ (we have already shown this for $p=-1$).\\ \begin{lemma} $$M{\;\scriptstyle\stackrel{<}{=}\;} R\;. \eqno{(3.31)}$$\\[-4ex] \end{lemma} {\bf Proof.} The proof is similar to that of Lemma 3.7, using the approximation\\ $2\lambda^{-2}\left[1+\lambda a - (1+2\lambda a)^{\frac{1}{2}}\right]$ to $a^2$.\\ \begin{lemma} $$ R {\;\scriptstyle\stackrel{<}{=}\;} M\;. 
\eqno{(3.32)}$$\\[-4ex] \end{lemma} {\bf Proof.} The proof is similar to that of Lemma 3.6, using Newton's iteration $$ x_{j+1} = {\textstyle\frac{1}{2}}\left(x_j+a/x_j\right)\;, \eqno{(3.33)}$$\\[-4ex] with precision increasing at each iteration, to approximate $\sqrt{a}$. Alternatively, it is possible to avoid multiple-precision division by using the iteration \pagebreak $$x_{j+1} = x_j\left(3-ax^2_j\right)/2 \eqno{(3.34)} $$\\[-4ex] to approximate $a^{-\frac{1}{2}}$, and then use $ \sqrt{a} = a \cdot a^{-\frac{1}{2}}$ to evaluate $\sqrt{a}$.\\ The results of Lemmas 3.5 to 3.9 may be summarized in the following:\\ \begin{theorem} $$ D \equiv I \equiv M \equiv R \equiv S\;. \eqno{(3.35)}$$\\[-4ex] \end{theorem} \section{Some regularity conditions} Before discussing the complexity of multiple-precision evaluation of exponentials, trigonometric functions, etc., we need some definitions. Throughout this section, let $\phi(x)$ be a real-valued function which is positive and monotonic increasing for all sufficiently large positive $x$.\\ \begin{definition} \rm $\phi \in \Phi_1$ iff, for all $\alpha \in (0,1)$, for some positive $K$, for all sufficiently large $x$ and all $x_0, \ldots, x_J$ satisfying $$ 1 \leq x_j \leq \alpha^jx \eqno{(4.1)}$$\\[-4ex] for $j=0, \ldots, J$, we have $${\sum^J_{j=0}} \phi(x_j) \leq K\phi(x)\;. \eqno{(4.2)}$$\\[-2ex] $\phi \in \Phi_2$ iff, for some $\alpha, \beta \in (0,1)$ and all sufficiently large $x$, $$\phi(\alpha x) \leq \beta\phi(x)\;. \eqno{(4.3)}$$\\[-4ex] $\phi \in \Phi_3$ iff, for some positive $K_1, K_2$ and $p$, there is a monotonic increasing function $\psi$ such that $$K_1x^p\psi(x) \leq \phi(x) \leq K_2x^p\psi(x) \eqno{(4.4)}$$\\[-4ex] for all sufficiently large $x$. \end{definition} Note the similarity between the definition of $\Phi_1$ and the statement of Lemma 3.4. In Section 5, we need to assume that the time $\phi(n)$ required to perform certain operations with precision $n$ satisfies (4.2).
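As a purely numerical illustration of these classes (an aside, not part of the paper's argument), the following Python sketch checks condition (4.3) for the sample cost function $\phi(x) = x(\log x)^2$, a function of the same general shape as fast-multiplication time bounds. The constants $\alpha = \frac{1}{2}$ and $\beta = 0.6$ are assumed candidates, not derived.

```python
import math

# Illustrative cost function (an assumption, not from the paper):
# phi(x) = x * (log x)^2, the shape of fast-multiplication bounds.
def phi(x):
    return x * math.log(x) ** 2

# Check condition (4.3): phi(alpha*x) <= beta*phi(x) for large x,
# with candidate constants alpha, beta in (0,1).
alpha, beta = 0.5, 0.6
for x in [10.0 ** k for k in range(2, 10)]:
    assert phi(alpha * x) <= beta * phi(x)

# A Phi_2 bound then controls sums of the form (4.2) geometrically,
# with K = 1/(1-beta); small arguments are clamped for illustration.
x = 1.0e6
total = sum(phi(max(alpha ** j * x, 2.0)) for j in range(25))
assert total <= phi(x) / (1 - beta)
```

This is exactly the mechanism by which Lemma 4.2 ($\Phi_2 \subseteq \Phi_1$) is proved: condition (4.3) makes the sum in (4.2) geometrically convergent.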
The following lemmas make this assumption highly plausible. Lemma 4.1 shows that ``for all $\alpha$'' in the definition of $\Phi_1$ may be replaced by ``for some $\alpha$''.\\ \begin{lemma} If, for some $\alpha \in (0,1)$ and some positive $K$, for all sufficiently large $x$ and all $x_0, \ldots, x_J$ satisfying {\rm{(4.1)}}, we have {\rm{(4.2)}}, then $\phi \in \Phi_1$. \end{lemma} {\bf Proof.} Take any $\alpha_1$ and $\alpha_2$ in (0, 1), and suppose that (4.1) with $\alpha$ replaced by $\alpha_2$ implies (4.2) with $K$ replaced by $K_2$. Let $m$ be a positive integer such that $\alpha^m_1 \leq \alpha_2$. If (4.1) holds with $\alpha$ replaced by $\alpha_1$ for a sequence $(x_0, x_1, \ldots, x_J)$, then (4.1) also holds with $\alpha$ replaced by $\alpha_2$ for each of the $m$ subsequences $$(x_0, x_m, \ldots), (x_1, x_{m+1}, \ldots), \ldots, (x_{m-1}, x_{2m-1}, \ldots)\;,$$\\[-4ex] so (4.2) holds with $K$ replaced by $K_1 = mK_2$.\\ Lemmas 4.2 and 4.3 show that $\phi \in \Phi_2$ or $\phi \in \Phi_3$ is a sufficient condition for $\phi \in \Phi_1$. The proof of Lemma 4.2 is similar to that of Lemma 3.4 (using Lemma 4.1), so is omitted.\\ \begin{lemma} $$ \Phi_2 \subseteq \Phi_1\;. \eqno{(4.5)}$$\\[-4ex] \end{lemma} \vspace*{-2mm} \begin{lemma} $$ \Phi_2 = \Phi_3\;. \eqno{(4.6)}$$\\[-4ex] \end{lemma} {\bf Proof.} First suppose that $\phi \in \Phi_3$, so (4.4) holds for some function $\psi$ and some positive $K_1, K_2$ and $p$. Choose $\alpha \in (0,1)$ such that $\beta = \alpha^p K_2/K_1 < 1$. For all sufficiently large $x$, we have $$ \phi(\alpha x) \leq K_2\alpha^px^p\psi(\alpha x) \leq K_2\alpha^px^p\psi(x) \leq \left(K_2\alpha^p/K_1\right)\phi(x) \leq \beta\phi(x) \eqno{(4.7)}$$\\[-4ex] (using (4.4) and the monotonicity of $\psi)$, so $\phi \in \Phi_2$.\\ Conversely, suppose that $\phi \in \Phi_2$, so (4.3) holds for all sufficiently large $x$ (say $x \geq x_0>0$) and some $\alpha, \beta \in (0,1)$. 
Choose $p$ small enough that $\beta \leq \alpha^p$, so $$\phi(\alpha x) \leq \alpha^p\phi(x) \eqno{(4.8)}$$\\[-4ex] for $x \geq x_0$. Since $\phi(x)$ is positive for sufficiently large $x$, we may assume that $\phi(x_0) > 0$. Let $K_1 = \alpha^p, K_2 = 1$, and $$ \psi(x) = \sup_{x_0\leq y\leq x} \phi(y)/y^p \eqno{(4.9)}$$\\[-2ex] for $x \geq x_0$. Thus, $\psi(x)$ is monotonic increasing and $$ \psi(x) \geq \phi(x)/x^p \eqno{(4.10)}$$\\[-4ex] so $$ \phi(x) \leq K_2x^p\psi(x) \eqno{(4.11)}$$\\[-4ex] for $x \geq x_0$.\\ By repeated application of (4.8) we have, for $k \geq 0$, $$\phi(x)/x^p \geq \phi(\alpha^kx)/(\alpha^kx)^p \eqno{(4.12)}$$\\[-4ex] provided $\alpha^kx \geq x_0$. Thus, from (4.9), $$ \psi(x) = \sup_{\alpha x\leq y\leq x} \phi(y)/y^p \eqno{(4.13)}$$\\[-4ex] $$ \;\;\; \leq \phi(x)/(\alpha x)^p \eqno{(4.14)}$$\\[-4ex] for $x\geq x_0/\alpha$. Thus, $$ \phi(x) \geq K_1x^p\psi(x) \eqno{(4.15)}$$\\[-4ex] and, in view of (4.11), $\phi \in \Phi_3$. \section{Linear equivalence of various elementary functions} In this section, we consider the multiple-precision ``operations'' of evaluating certain elementary functions (log, exp, sin, artan, etc.). First we prove three theorems which apply under fairly general conditions. Theorem 5.1 is a generalization of Lemmas 3.7 and 3.8, and gives a simple condition under which the evaluation of $f(x)$ is at least as difficult as a multiplication (in the sense of Definition 1.1).\\ {\bf NOTATION.} If $f$ is a real-valued function defined on some finite interval $[a, b]$, the operation of evaluating $f(x)$ to (relative) precision $n$ for $x\in[a,b]$ is denoted by $E^{(n)}_{[a,b]}(f)$. If there is no risk of confusion, we write simply $E_{[a,b]}(f)$ or $E(f)$. We sometimes write $t_n(f)$ for $t_n\left(E(f)\right)$. $LC^{(m)}[a,b]$ is the class of functions with Lipschitz continuous $m$-th derivatives on $[a,b]$.
We always assume that $b>a$.\\ \begin{theorem} If $f\in LC^{(2)}[a,b]$ and there is a point $x_0\in (a,b)$ such that $f''(x_0) \neq 0$, then $$ E(f) \geq M\;. \eqno{(5.1)}$$\\[-4ex] \end{theorem} {\bf Proof.} For all sufficiently small $h$, we have (from $[$2, Lemma 3.2$]$) $$ f(x_0+h) + f(x_0-h) - 2f(x_0) = h^2f''(x_0) + R(x)\;, \eqno{(5.2)}$$\\[-4ex] where $$|R(x)| \leq c_{12}|h|^3\;. \eqno{(5.3)}$$\\[-4ex] Let $c = f''(x_0) \neq 0$. Three evaluations of $f$ and some additions may be used to approximate $ch^2$, using (5.2). If $h$ is of order $2^{-n/3}$, the resulting approximation to $ch^2$ has relative error of order $2^{-n/3}$. Proceeding as in the proof of Lemma 3.5, we see that six evaluations of $f$ and some additions may be used to approximate $cxy$ to precision $n/3$, for any $x$ and $y$. Applying this result, with $x$ replaced by the stored constant $c^{-2}$, and $y$ replaced by the computed $cxy$, shows that 12 evaluations of $f$ give $c\left(c^{-2}\right)(cxy) = xy$ to precision $n/3$. The result now follows from Lemma 3.2. \\ {\bf REMARK.} If $f''(x)$ is not constant on $[a,b]$, the point $x_0$ may be chosen so that $f''(x_0)$ is rational, so (5.2) may be used to approximate $h^2$, and the result follows more easily (as in the proof of Lemma 3.7).\\ Theorem 5.2 gives conditions under which the multiple-precision evaluation of the inverse function $g=f^{(-1)}$ of a function $f$ is linearly reducible to the evaluation of $f$. (The inverse function satisfies $g\left(f(x)\right)=x$.)
The condition $0\not\in [a,b]$ could be dropped, if we only required the computation of $g$ with an absolute (rather than relative) error of order $2^{-n}$.\\ \begin{theorem} If $0 \not\in [a,b]$, $f \in LC^{(1)}[a,b]$, $f^{\prime}(x) \neq 0$ on $[a,b]$, $E(f) {\;\scriptstyle\stackrel{>}{=}\;} M$, and $$ t_n(f) \in \Phi_1\;, \eqno{(5.4)} $$\\[-4ex] then $$ E(g) {\;\scriptstyle\stackrel{<}{=}\;} E(f)\;, \eqno{(5.5)}$$\\[-4ex] where $g = f^{(-1)}$ and $\Phi_1$ is as in Definition 4.1. \end{theorem} {\bf Proof.} Since $f'(x)$ is continuous and nonzero on $[a, b]$, there is no loss of generality in assuming that $$ f'(x) \geq c_{13} > 0 \eqno{(5.6)} $$\\[-4ex] on $[a, b]$. Thus, $g(y)$ exists on $[c, d] = [f(a), f(b)]$. Also, since $0 \not\in [a, b]$, we have $$|g(y)| \geq c_{14} > 0 \eqno{(5.7)}$$\\[-4ex] on $[c, d]$.\\ To estimate $g(y)$ we may solve $\psi(x)=0$ by a discrete version of Newton's method, where $$ \psi(x) = f(x) - y \;. \eqno{(5.8)}$$\\[-4ex] Consider the iteration $$ x_{j+1} = x_j - \psi(x_j)/\mu_j\;, \eqno{(5.9)}$$\\[-4ex] where $$ \mu_j = \left(\psi(x_j+h_j)-\psi(x_j)\right)/h_j\;, \eqno{(5.10)}$$\\[-4ex] and the computation of $\mu_j$ and $x_{j+1}$ is performed with precision $n_j \leq n$, giving computed values $\widehat{\mu}_j$ and $\widehat{x}_{j+1}$ respectively. If $h_j$ is of order $2^{{-n_j}/2}$, then $$|\widehat{\mu}_j - \psi'(\widehat{x}{_j})| \leq\; 2^{-n_j/2}c_{15}\;, \eqno{(5.11)}$$\\[-4ex] and it is easy to show that $$|\widehat{x}_{j+1}-g(y)| \;\leq\; |\widehat{x}_j-g(y)|^2 \; c_{16} \; + \; 2^{-n_j /2}\; |\widehat{x}_j - g(y)| \; c_{17} \; + \; 2^{-n_{j}} \; c_{18}\;. \eqno{(5.12)}$$\\[-4ex] Since a sufficiently good starting approximation $x_0$ may be found using single-precision (or at most bounded-precision) computation, (5.12) ensures that $$|\widehat{x}{_{j+1}}-g(y)| \;\leq\; |\widehat{x}{_j}-g(y)|^2\; c_{19}\;, \eqno{(5.13)}$$\\[-4ex] provided $$|\widehat{x}_j -g(y)| \;\geq\; 2^{-n_j/2}\;.
\eqno{(5.14)}$$\\[-4ex] Hence, we may approximately double the precision at each iteration, and (5.13) guarantees convergence of order two. A final iteration with $h_j = 2^{-n/2}$ will be sufficient to give $$|\widehat{x}_{j+1}-g(y)| \;\leq\; 2^{-n}c_{20}\;. \eqno{(5.15)}$$\\[-4ex] Since $E(f) \geq M$, the result follows from (5.4), (5.7), (5.15), and Lemma 3.6.\\ \begin{theorem} If $0 \not\in [a, b]$, $f \in LC^{(1)}[a, b]$, $f(x)f'(x) \neq 0$ on $[a, b]$, $g=f^{(-1)}$, $E(f) {\;\scriptstyle\stackrel{>}{=}\;} M$, $E(g) {\;\scriptstyle\stackrel{>}{=}\;} M$, $t_n(f) \in \Phi_1$, and $t_n(g) \in \Phi_1$, then $$ E(f) \equiv E(g)\;. \eqno{(5.16)}$$\\[-4ex] \end{theorem} {\bf Proof.} Since $t_n(f) \in \Phi_1$, Theorem 5.2 applied to $f$ gives $E(g) {\;\scriptstyle\stackrel{<}{=}\;} E(f)$. Similarly, applying Theorem 5.2 to $f^{(-1)}$ gives $E(f) {\;\scriptstyle\stackrel{<}{=}\;} E(g)$, so the result follows.\\ We are now ready to deduce the linear equivalence of $mp$ evaluation of various elementary functions $f_i$, assuming that $t_n(f_i) \in \Phi_1$. In view of Lemmas 4.2 and 4.3, this assumption is very plausible.\\ \begin{corollary} If $0<a<b$, $c<d$, $1 \not\in [a, b]$, $t_n\left(E_{[a,b]}(\log)\right) \in \Phi_1$, and $t_n\left(E_{[c,d]}(\exp)\right) \in \Phi_1$, then $$ E_{[a,b]}(\log) \equiv E_{[c,d]}(\exp)\;. \eqno{(5.17)}$$\\[-4ex] \end{corollary} {\bf Proof.} From Theorem 5.1, $E_{[a,b]}(\log){\;\scriptstyle\stackrel{>}{=}\;} M$ and $E _{[c,d]}(\exp){\;\scriptstyle\stackrel{>}{=}\;} M$. Also, the identities $$ \exp(-x) = 1/\exp(x) \eqno{(5.18)}$$\\[-6ex] and $$ \exp(\lambda x) = \left(\exp(x)\right)^{\lambda} \eqno{(5.19)}$$\\[-4ex] (for suitable rational $\lambda$) may be used to show that $E_{[c,d]}(\exp) \equiv E_{[c',d']}(\exp)$ for any $c' < d'$. 
Hence, the result follows from Theorem 5.3.\\ {\bf REMARK.} If $1 \in [a, b]$, then Theorem 5.2 shows that $$ E^{(n)}_{[c,d]}(\exp) {\;\scriptstyle\stackrel{<}{=}\;} E^{(n)}_{[a,b]}(\log)\;, \eqno{(5.20)}$$\\[-4ex] and a proof like that of Theorem 5.2 shows that $$ E^{(n)}_{[a,b]}(\log) {\;\scriptstyle\stackrel{<}{=}\;} E^{(2n)}_{[c,d]}(\exp)\;, \eqno{(5.21)}$$\\[-4ex] so the conclusion of Corollary 5.1 follows, if $$E^{(2n)}_{[c,d]}(\exp) \equiv E^{(n)}_{[c,d]}(\exp)\;. \eqno{(5.22)}$$\\[-4ex] Although (5.22) is plausible, no proof of it is known. (The corresponding result for multiplication is given in Lemma 3.2.)\\ \begin{corollary} \begin{eqnarray*} E(\sinh) &\equiv& E(\cosh) \equiv E(\tanh) \equiv E(\rm arsinh)\\ &\equiv& E(\rm arcosh) \equiv E(\rm artanh) \equiv E(\exp) \equiv E(\log) \end{eqnarray*}\\[-45pt]$$ \eqno{(5.23)} $$ on any nontrivial closed intervals on which the respective functions are bounded and nonzero, assuming $t_n(\sinh) \in \Phi_1$ etc. \end{corollary} \vspace*{3mm} \begin{corollary} $$E(\sin) \equiv E(\cos) \equiv E(\tan) \equiv E(\rm arsin) \equiv E(\rm arcos) \equiv E(\rm artan) \eqno{(5.24)}$$ on any nontrivial closed intervals on which the respective functions are bounded and nonzero, assuming $t_n(\sin) \in \Phi_1$ etc. \end{corollary} \vspace*{3mm} {\bf REMARKS.} The proofs of Corollaries 5.2 and 5.3 are similar to that of Corollary 5.1 (using well-known identities), so are omitted. Since $\exp(ix) = \cos(x) + i \sin(x)$, it is plausible that $E(\exp) \equiv E(\sin)$, but we have not proved this. (It is just conceivable that the evaluation of $\exp(x)$ for complex $x$ is not linearly reducible to the evaluation of $\exp(x)$ for real $x$.) \section{Upper and lower bounds} In this section we give some upper and lower bounds on $t_n(A)$, $t_n(M)$, $t_n(\exp)$ and $t_n(\sin)$.
Since the multiplicative constants are not specified, the bounds apply equally well to the operations which are linearly equivalent to addition, multiplication, etc.~(see Sections 2 to 5). The lower bounds are trivial: $t_n{\exp\choose\sin} \geq c_{21}t_n(M) \geq c_{22}t_n(A) \geq c_{23}n$ (from (2.2), Lemma 3.1 and Theorem 5.1). The upper bounds are more interesting. \\ {\bf UPPER BOUNDS ON $t_{n}(M)$}\\ The obvious algorithm for multiplication of multiple-precision numbers gives $$ t_n(M) \leq c_{24}n^2\;, \eqno{(6.1)}$$\\[-4ex] but this is not the best possible upper bound. Karatsuba and Ofman $[12]$ showed that $$ t_n(M) \leq c_{25}n^{1.58\ldots}\;, \eqno{(6.2)}$$\\[-4ex] where $1.58 \ldots \;=\; \log_{2}3$. The idea of the proof is that, to compute $$ (a+\lambda b)(c+\lambda d)= ac+ \lambda(ad+bc) + \lambda^2bd\;, \eqno{(6.3)}$$\\[-4ex] where $\lambda$ is a suitable power of two, we compute the three products $m_1=ac$, $m_2=bd$, and $m_3=(a+b)(c+d)$, and use the identity $$ ad+bc=m_3 - (m_1+m_2)\;. \eqno{(6.4)}$$\\[-4ex] Thus, $2n$-bit integers can be multiplied with three multiplications of (at most) $(n+1)$-bit integers, some multiplications by powers of two, and six additions of (at most) $4n$-bit integers. This observation leads to a recurrence relation from which (6.2) follows.\\ More complicated identities like (6.4) may be used to reduce the exponent in (6.2). Recently Sch\"{o}nhage and Strassen $[23]$ showed that the exponent can be taken arbitrarily close to unity. Their method gives the best known upper bound $$ t_n(M) \leq c_{26}n\; \log(n)\log\log(n)\;, \eqno{(6.5)}$$\\[-4ex] and uses an algorithm related to the fast Fourier transform to compute certain convolutions. For a description of this and earlier methods see Knuth $[13$ (revised)$]$. Knuth conjectures that (6.5) is optimal, though the term $\log \log(n)$ is rather dubious. (It may be omitted if a machine with random-access memory of size $O(n^p)$ for some fixed positive $p$ is assumed.)
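The splitting scheme (6.3)--(6.4) translates directly into a recursive multiplication routine. The following Python sketch works over the built-in integers; the $2^{32}$ base-case cutoff and the split at half the larger bit length are illustrative choices, not taken from the text.

```python
def karatsuba(a, b):
    """Multiply nonnegative integers a, b via (6.3)-(6.4): three
    half-size products m1, m2, m3 replace the four of the obvious
    method.  The 2**32 cutoff is an assumed, illustrative base case."""
    if a < 2 ** 32 or b < 2 ** 32:
        return a * b                      # base case: machine multiply
    n = max(a.bit_length(), b.bit_length()) // 2
    mask = (1 << n) - 1
    a1, a0 = a >> n, a & mask             # a = a0 + a1 * 2^n
    b1, b0 = b >> n, b & mask             # b = b0 + b1 * 2^n
    m1 = karatsuba(a0, b0)                # "ac" in (6.3)
    m2 = karatsuba(a1, b1)                # "bd" in (6.3)
    m3 = karatsuba(a0 + a1, b0 + b1)      # "(a+b)(c+d)"
    # ad + bc = m3 - (m1 + m2), identity (6.4)
    return m1 + ((m3 - m1 - m2) << n) + (m2 << (2 * n))
```

Each call replaces one full-size product by three products of roughly half the length, which is the recurrence that yields the exponent $\log_2 3$ in (6.2).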
From results of Morgenstern $[19]$ and Cook and Aanderaa $[8]$, it is extremely probable that $$ \lim_{n\rightarrow\infty} t_n(M)/n = \infty\;, \eqno{(6.6)}$$\\[-4ex] which implies that $M \not\equiv A$, but more work remains to be done to establish this rigorously.\\ {\bf UPPER BOUNDS ON} $t_n(\exp)$ {\bf AND} $t_n(\sin)$\\ To evaluate $\exp(x)$ to precision $n$ from the power series $$ \exp(\pm x) = \sum^{\infty}_{j=0} (\pm x)^j /j!\;, \eqno{(6.7)}$$\\[-4ex] it is sufficient to take $c_{27}n/\log(n)$ terms, so $$ t_n(\exp) \leq c_{28}t_n(M)n/\log(n)\;. \eqno{(6.8)}$$\\[-4ex] Theorem 6.1 shows that the bound (6.8) may be reduced by a factor of order $\sqrt{n}/\log(n)$.\\ \begin{theorem} $$ t_n(\exp) \leq c_{29} \sqrt{n} \; t_n(M) \eqno{(6.9)}$$\\[-4ex] and $$ t_n(\sin) \leq c_{30} \sqrt{n} \; t_n(M)\;. \eqno{(6.10)}$$\\[-4ex] \end{theorem} {\bf Proof.} To establish (6.9), we use the identity $$ \exp(x) = \left(\exp(x/\lambda)\right)^\lambda \eqno{(6.11)}$$\\[-4ex] with $\lambda = 2^q$, where $q=\lfloor n^{\frac{1}{2}}\rfloor$. If $[a, b]$ is the domain of $x$, and $c = \max(|a|, |b|)$, then $$|(x/\lambda)^r/r!| \;\leq\; 2^{-qr}\;, \eqno{(6.12)}$$\\[-4ex] if $r$ is large enough that $$ c^r \leq r!\;. \eqno{(6.13)}$$\\[-4ex] Hence, it is sufficient to take $r = \lceil n/q \rceil$ terms in the power series for $\exp(x/\lambda)$ to give an absolute error of order $2^{-n}$ in the approximation to $\exp(x/\lambda)$. Since $\exp(x/\lambda)$ is close to unity, the relative error will also be of the order $2^{-n}$ for large $n$. From (6.11), $q$ squarings may be used to compute $\exp(x)$ once $\exp(x/\lambda)$ is known.\\ The method just described gives $\exp(x)$ to precision $n - n^{\frac{1}{2}}$, for the relative error in $\exp(x/\lambda)$ is amplified by the factor $\lambda$. This may be avoided by taking $r = \lceil n/q \rceil +1$, and either working with precision $n + n^{\frac{1}{2}}$, or evaluating $$ \exp(|x/\lambda|) - 1 \simeq \sum^{r}_{j=1} |x/\lambda|^j/j! 
\eqno{(6.14)}$$\\[-4ex] and then using the identity $$ (1+\varepsilon)^2 - 1 = 2\varepsilon + \varepsilon^2 \eqno{(6.15)}$$\\[-4ex] to evaluate $\exp(|x|) - 1$ without appreciable loss of significant figures. Thus, (6.9) follows (using Lemma 3.2 if necessary).\\ The proof of (6.10) is similar, using the identity $$ \sin(x) = \pm2 \sin(x/2) \sqrt{1-\sin^2(x/2)}\;, \eqno{(6.16)}$$\\[-4ex] $q$ times to reduce the computation of $\sin(x)$ to that of $\sin(x/\lambda)$ (recall Lemma 3.9).\\ {\bf REMARKS}. If $x$ is a rational number with small numerator and denominator, the time required to sum $r$ terms in the power series for $\exp(x/\lambda)$ is $O(rn)$, and the time required for $q$ squarings is $O\left(qt_n(M)\right)$. Thus, choosing $r = \lfloor \sqrt{t_n(M)} \rfloor$ and $q = \lceil n/r \rceil$ gives total time $O \left( n \sqrt{t_n(M)}\right)$. It is also possible to evaluate $\exp(x)$ in this time for general $x$, by using a form of preconditioning to reduce the number of multiplications required to evaluate the power series for $\exp(x/\lambda)$.~\\~\\ {\bf A NUMERICAL EXAMPLE}\\ The following example illustrates the ideas of Theorem 6.1. Suppose we wish to calculate $e$ to 30 decimal places. The obvious method is to use the approximation $$ e \simeq \sum^{28}_{j=0} 1/j! \eqno{(6.17)}$$\\[-4ex] (since $29! \simeq 8.8 \times 10^{30}$). On the other hand $$ e \simeq \left(\sum^{10}_{j=0} \frac{1}{j! \; 256^j}\right)^{256} \eqno{(6.18)}$$\\[-2ex] also gives the desired accuracy (since $11! \; 256^{10} \simeq 4.8 \times 10^{31}$). Thus, the computation of 18 inverse factorials may be saved at the expense of 8 squarings.\\ Similarly, the computation of $e$ to $10^6$ decimal places by the obvious method requires the sum of about 205,030 inverse factorials, but the approximation $$ e \simeq \left( \sum^{1819}_{j=0} \frac{1}{j! 
\; 2^{1820j}}\right)^{2^{1820}}\;, \eqno{(6.19)}$$\\[-2ex] requiring only 1820 terms and 1820 squarings, is sufficiently accurate.\\ {\bf BASE CONVERSION}\\ Sch\"{o}nhage has shown that conversion from binary to decimal or vice versa may be done in time $O\left(n\left(\log(n)\right)^2\log\left(\log(n)\right)\right)$ (see Knuth $[13, {\rm ex.}\; 4.4.14\; ({\rm revised})]$). We describe his method here, as a similar idea is used below to improve Theorem 6.1.\\ Let $\beta > 1$ be a fixed base (e.g.~$\beta = 10$), and suppose we know the base $\beta$ representation of an integer $x$, i.e.~we know the digits $d_0, \ldots, d_{t-1}$, where $0 \leq d_i < \beta$ and $\displaystyle x = \sum^{t-1}_{0} d_i \beta^i$. Suppose that $n$-bit binary numbers can be multiplied exactly in time $M(n)$, where $$ 2M(n) \leq M(2n) \eqno{(6.20)}$$\\[-4ex] for all sufficiently large $n$. (This is certainly true if the Sch\"{o}nhage-Strassen method $[13, 23]$ is used.) We describe how the binary representation of $x$ may be found in time $O\left(M(n)\log(n)\right)$, where $n$ is sufficiently large for $x$ to be representable as an $n$-bit number (i.e.~$2^n \geq \beta^t$).\\ Without changing the result, we may suppose $t = 2^k$ for some positive integer $k$. Let the time for conversion to binary and computation of $\beta^{2^k}$ be $C(k)$. Thus, we can compute $\beta^{t/2}$ and convert the numbers $\displaystyle x_1 = \sum^{t/2-1}_{0}d_i\beta^i$ and $\displaystyle x_2 = \sum^{t-1}_{t/2} d_i \beta^{i-t/2}$ to binary in time $2C(k-1)$, and then $x=x_1+\beta^{t/2}x_2$ and $\beta^t=\left(\beta^{t/2}\right)^2$ may be computed in time $2M(n/2)+O(n)$. 
Thus $$ C(k) \leq 2C(k-1)+2M(n/2)+O(n)\;, \eqno{(6.21)}$$\\[-4ex] so $$ C(k) \leq 2M(n/2)+ 4M(n/4) + 8M(n/8) + \ldots + O(n\log(n))$$ $$ \hspace*{-54mm} \leq O(M(n)\log(n)) \eqno{(6.22)}$$\\[-4ex] (using (6.20)).\\ The proof that conversion from base 2 to base $\beta$ may be done in time (6.22) is similar, and once we can convert integers it is easy to convert floating-point numbers.\\ {\bf COMPUTATION OF} $e$ {\bf AND} $\pi$\\ We may regard $e-2 = 1/2! + 1/3! + \ldots\;$ as given by a mixed-base fraction $0.111 \ldots$, where the base is $2, 3, \ldots$ Hence, it is possible to evaluate $e$ to precision $n$, using a slight modification of the above base-conversion method, in time $O(M(n)\log(n))$.\\ Similarly, $\rm artan(1/j)$ may be computed to precision $n$ in time $O(M(n)\log^2(n))$, for any small integer $j \geq 2$, and then $\pi$ may be computed from well-known identities such as Machin's $$ \pi = 16 \; \rm artan(1/5) - 4 \; \rm artan(1/239)\;. \eqno{(6.23)}$$\\[-4ex] The methods just described are asymptotically faster than the $O(n^2)$ methods customarily used in multiple-precision calculations of $e$ and $\pi$ (see, for example, Shanks and Wrench $[25, 26]$). It would be interesting to know how large $n$ has to be before the asymptotically faster methods are actually faster. A proof that even faster methods are impossible would be of great interest, for it would imply the transcendence of $e$ and $\pi$.\\ {\bf IMPROVED UPPER BOUNDS ON} $t_n(\exp)$ {\bf AND} $t_n(\sin)$\\ The following lemma uses an idea similar to that described above for base conversion and computation of $e$.\\ \begin{lemma} If $p$ and $q$ are positive integers such that $p^2 \leq q \leq 2^n$, then $\exp(p/q)$ may be computed to precision $n$ in time $O(M(n)\log(n))$.
\end{lemma} {\bf Proof.} The approximation $$ \exp(p/q) \simeq \sum^{k}_{j=0} \frac{(p/q)^j}{j!} \eqno{(6.24)}$$\\[-2ex] is sufficiently accurate if $k$ is chosen so that $$\frac{(p/q)^{k+1}}{(k+1)!} \leq 2^{-n} \leq \frac{(p/q)^k}{k!}\;. \eqno{(6.25)}$$\\[-2ex] Since $p^2 \leq q$, (6.25) gives $k!q^{k/2} \leq 2^n$, so certainly $$ k!q^k \leq 2^{2n}\;. \eqno{(6.26)} $$\\[-4ex] Hence, a method like that described above for the computation of $e$ may be used, and (6.26) ensures that the integers in intermediate computations do not grow too fast.\\ {From} Lemma 6.1 it is easy to deduce Theorem 6.2, which is an improvement of Theorem 6.1 for large $n$. The methods used in the proof of Theorem 6.1 and the following remarks are, however, faster than that of Theorem 6.2 for small and moderate values of $n$.\\ \begin{theorem} If $M(n)$ satisfies {\rm (6.20)} then $$ t_n(\exp) \leq c_{32}M(n)\log^2(n) \eqno{(6.27)}$$\\[-4ex] and $$ t_n(\sin) \leq c_{33}M(n)\log^2(n)\;. \eqno{(6.28)}$$\\[-4ex] \end{theorem} {\bf Proof.} Without affecting the result (6.27) we may assume that $n = 2^k$ for some positive integer $k$. (This assumption simplifies the proof, but it is not essential.) Given an $n$-bit fraction $x \in [0, 1)$, we write $$ x = \sum^{k}_{i=0} p_i/q_i\;, \eqno{(6.29)}$$\\[-2ex] where $q_i = 2^{2^i}$ and $0 \leq p_i < 2^{2^{i-1}}$ for $i = 0, 1, \ldots, k$. By Lemma 6.1, $\exp(p_i/q_i)$ can be computed, to sufficient precision, in time $O(M(n)\log(n))$, so $$\exp(x) = \prod^{k}_{i=0} \exp(p_i/q_i) \eqno{(6.30)}$$\\[-2ex] can be computed in time $O(M(n)(\log(n))^2)$. This establishes (6.27), and the proof of (6.28) is similar.\\ \begin{corollary} $$ t_n(\exp) \leq c_{34}n(\log(n))^3\log\log(n) \eqno{(6.31)}$$\\[-4ex] and $$ t_n(\sin) \leq c_{35}n(\log(n))^3\log\log(n)\;.
\eqno{(6.32)}$$\\[-4ex] \end{corollary} {\bf Proof.} This is immediate from the bound (6.5) and Theorem 6.2.\\ \begin{corollary} $$ t_n(E_{[a,b]}(f)) \leq c_{36}n(\log(n))^3\log\log(n)\;, $$\\[-4ex] where \begin{eqnarray*} f(x) &=& \log(x), \exp(x), \sin(x), \cos(x), \tan(x), \sinh(x),\\ & & \cosh(x), \tanh(x), \rm arsin(x), \rm artan(x), \rm arsinh(x), \end{eqnarray*} etc, and $[a, b]$ is any finite interval on which $f(x)$ is bounded. \end{corollary} {\bf Proof.} This follows from (6.5), Corollaries 5.1 (and the note following), 5.2, 6.1, and Lemma~3.2. \section{Best constants for operations equivalent to multiplication} In this section, we consider in more detail the relationship between the $mp$ operations $D, I, M, R$, and $S$ defined in Section 3. It is convenient to consider also the operation $Q$ of forming inverse square roots (i.e., $y \leftarrow x^{-\frac{1}{2}}$). From Theorem 3.1, if we can perform any one of these operations (say $Y$) to precision $n$ in time $t_n(Y)$, then the time required to perform any of the other operations to precision $n$ is at most a constant multiple of $t_n(Y)$.\\ \begin{definition} \rm $C_{XY}$ is the minimal constant such that, for all positive $\varepsilon$ and all sufficiently large $n$, the operation $X$ can be performed (to precision $n$) in time $(C_{XY}+\varepsilon)t_n(Y)$ if $Y$ can be performed in time $t_n(Y)$, where $X, Y=D, I, M, Q, R$ or $S$. \end{definition} The following inequalities are immediate consequences of Definition 7.1: $$ C_{XY}C_{YZ} \geq C_{XZ} \eqno{(7.1)}$$\\[-4ex] \pagebreak and $$ C_{XY}C_{YX} \geq C_{XX} = 1\;. 
\eqno{(7.2)}$$\\[-4ex] {\bf ASSUMPTIONS}\\ To enable us to give moderate upper bounds on the constants $C_{XY}$, it is necessary to make the following plausible assumption (compare (4.3), (6.20)) throughout this section: for all positive $\alpha$ and $\varepsilon$, and all sufficiently large $n$, $$ t_{\alpha n}(Y) \leq (\alpha+\varepsilon)t_n(Y) \eqno{(7.3)}$$\\[-4ex] for $Y=D, I, M, Q, R$ and $S$. We also assume (6.6).\\ Table 7.1 gives the best known upper bounds on the constants $C_{XY}$. Space does not permit a detailed proof of all these upper bounds, but the main ideas of the proof are sketched below.\\ \begin{center} TABLE 7.1 $\;\;\;\;\;\;$ Upper bounds on $C_{XY}$ \vspace*{8mm} \begin{tabular}{|r|cccccc|}\hline &&&&&&\\[-2ex] & $X=D$ & \hphantom{0}$I$ & \hphantom{0}$M$ & \hphantom{0}$Q$ & \hphantom{0}$R$ & \hphantom{0}$S$ \\[1ex]\hline &&&&&&\\[-2ex] $Y=D$ & \W1.0&\W1.0&\W2.0&\W3.0&\W2.0&\W2.0\\ $I$ & \W7.0&\W1.0&\W6.0&15.0 &14.0 &\W3.0\\ $M$ & \W4.0&\W3.0&\W1.0&\W4.5&\W5.5&\W1.0\\ $Q$ & 10.0 &\W4.0&\W6.0&\W1.0&\W5.0&\W3.0\\ $R$ & \W7.5&\W6.0&\W6.0&\W3.0&\W1.0&\W3.0\\ $S$ & \W7.5&\W5.5&\W2.0&\W7.0&\W9.0&\W1.0\\[1ex]\hline \end{tabular} \end{center} \vspace*{3mm} \underline{$C_{IM} \leq 3$}\\ Use the Newton iteration $$x_{i+1} = x_i - x_i(a x_i - 1) \eqno{(7.4)}$$\\[-4ex] to approximate $1/a$ using multiplications. At the last iteration it is necessary to compute $a x_i$ to precision $n$, but $x_i(a x_i - 1)$ only to (relative) precision $n/2$. Since the order of convergence is 2, the assumptions (7.3) (with $\alpha = \frac{1}{2}$) and (6.6) give $$\textstyle C_{IM} \leq (1+\frac{1}{2})(1+\frac{1}{2}+\frac{1}{4}+ \ldots) = 3\;. \eqno{(7.5)}$$\\[-4ex] \underline{$C_{QM} \leq 4.5$}\\ Use the third-order iteration $$\textstyle x_{i+1}= x_i - \frac{1}{2}x_i \left(\varepsilon_i - \frac{3}{4}\varepsilon^2_i\right) \eqno{(7.6)}$$\\[-4ex] where $$ \varepsilon_i = a x^2_i -1 \eqno{(7.7)}$$\\[-4ex] to approximate $a^{-\frac{1}{2}}$.
At the last iteration it is necessary to compute $a x^2_i$ to precision $n$, $\varepsilon^2_i$ to precision $n/3$, and $x_i\left(\varepsilon_i - \frac{3}{4}\varepsilon^2_i\right)$ to precision $2n/3$. Thus $$ \textstyle C_{QM} \leq (2+\frac{1}{3}+\frac{2}{3})(1+\frac{1}{3}+\frac{1}{9} + \ldots ) = \frac{9}{2}\;. \eqno{(7.8)}$$\\[-4ex] Note that this bound is sharper than the bound $C_{QM}\leq 5$ which may be obtained from the second-order iteration $$ \textstyle x_{i+1}= x_i - \frac{1}{2}x_i\varepsilon_i\;. \eqno{(7.9)}$$\\[-4ex] \underline{$C_{RD} \leq 2$}\\ Use Newton's iteration $$\textstyle x_{i+1}= \frac{1}{2}(x_i + a/x_i) \eqno{(7.10)} $$\\[-4ex] to approximate $\sqrt{a}$.\\ \underline{$C_{MS} \leq 2$} \\ This follows from (3.19) and our assumptions.\\ \underline{$C_{IS} \leq 5.5$}\\ Use the third-order iteration $$ x_{i+1} = x_i - x_i \left( \varepsilon_i - \varepsilon^2_i\right) \eqno{(7.11)}$$\\[-4ex] where $$ \varepsilon_i = ax_i -1 \eqno{(7.12)}$$\\[-4ex] to approximate $1/a$.\\ \underline{$C_{QS} \leq 7$}\\ Use the third-order iteration (7.6).\\ \underline{$C_{SI} \leq 3$}\\ {From} the proof of Lemma 3.7, $$ t_{n/3}(S) \leq t_n (I) + O(n)\;. \eqno{(7.13)}$$\\[-4ex] The result follows from the assumption (7.3) with $\alpha=3$. (This is the first time we have used (7.3) with $\alpha>1$. The assumption is plausible in view of the Sch\"{o}nhage-Strassen bound (6.5).) Upper bounds on $C_{SQ}$ and $C_{SR}$ follow similarly.\\ \underline{$C_{MI} \leq 6$}\\ This follows from (7.1) and our bounds on $C_{MS}$ and $C_{SI}$. Similarly for the bounds on $C_{MQ}$, $C_{MR}$ and $C_{RI}$.\\ \underline{$C_{QR} \leq 3$}\\ Use the identity $$ a^{-\frac{1}{2}} = \frac{1}{\lambda} \left( \sqrt{a+\lambda} - \sqrt{a - \lambda} \right) + O\left(\lambda^2/a^{5/2} \right)\;, \eqno{(7.14)}$$\\[-4ex] where $\lambda$ is a power of 2 such that $$ 2^{-n/3-1} \leq \lambda/a \leq 2^{1-n/3}\;. 
\eqno{(7.15)}$$\\[-4ex] \pagebreak Thus $$ t_{2n/3}(Q) \leq 2t_n(R) +O(n)\;, \eqno{(7.16)}$$\\[-4ex] and the result follows from (7.3).\\ \underline{$C_{DR} \leq 7.5$}\\ Use the identity $$ b/a = \frac{1}{\lambda} \left( \sqrt{a^2 + \lambda b} - \sqrt{a^2 - \lambda b} \right) + O\left( \lambda^2 b^3/a^5 \right)\;, \eqno{(7.17)}$$\\[-4ex] where $\lambda$ is a power of 2 such that (for $b \neq 0$) $$ 2^{-n/3-1} \leq \lambda b/a^2 \leq 2^{1-n/3}\;. \eqno{(7.18)}$$\\[-4ex] Thus $$ t_{2n/3}(D) \leq t_n(S) + 2t_n(R) + O(n)\;, \eqno{(7.19)}$$\\[-4ex] and the result follows.\\ \underline{$C_{IR}\leq 6$} $$ a^{-1} = (a^2)^{-\frac{1}{2}}\;, \eqno{(7.20)}$$\\[-4ex] so $$ C_{IR} \leq C_{SR} + C_{QR} \leq 6\;. \eqno{(7.21)}$$\\[-4ex] The bound on $C_{IQ}$ also follows from (7.20), and then the bound on $C_{RQ}$ follows from\\ $a^{\frac{1}{2}} = \left(a^{-1}\right)^{-\frac{1}{2}} $. \section{Comparison of some $mp$ methods for nonlinear equations} In this section, we briefly consider methods for finding multiple-precision solutions of non-linear equations of the form $$ f(x) = 0, \eqno{(8.1)}$$\\[-4ex] where $f(x)$ can be evaluated for any $x$ in some domain. Additional results are given in $[38]$.\\ There are many well-known results on the efficiency of various methods for solving (8.1), e.g., Hindmarsh $[10]$, Ostrowski $[20]$, Traub $[27]$ and the references given in Section 1, but the results are only valid if arithmetic operations (in particular the evaluation of $f(x), f'(x)$ etc.) require certain constant times. The examples given below demonstrate that different considerations are relevant when multiple-precision arithmetic of varying precision is used.\\ For simplicity, we restrict attention to methods for finding a simple zero $\zeta$ of $f$ by evaluating $f$ at various points. 
We assume that $f$ has sufficiently many continuous derivatives in a neighbourhood of $\zeta$, but the methods considered do not require the evaluation of these derivatives.\\ Since $f(x)$ is necessarily small near $\zeta$, it is not reasonable to assume that $f(x)$ can be evaluated to within a small {\it relative\/} error near $\zeta$. In this section, an evaluation of $f$ ``with precision $n$'' means with an {\it absolute\/} error of order $2^{-n}$. We suppose that such an evaluation requires time $w(n) = t_n(E(f))$, where $$ w(cn) \sim c^{\alpha}w(n) \eqno{(8.2)}$$\\[-4ex] for some constant $\alpha > 1$ and all positive $c$. Since $\alpha > 1$, the bound (6.5) and condition (8.2) give $$ \lim_{n \rightarrow \infty} t_n(M)/w(n) = 0\;, \eqno{(8.3)}$$\\[-4ex] so we may ignore the time required for a fixed number of multiplications and divisions per iteration, and merely consider the time required for function evaluations. Our results also apply if $\alpha = 1$, so long as (8.3) holds. (For example, the evaluation of $\exp(x)$ by the method of Corollary 6.1 requires time $w(n) \sim c_{37}n(\log(n))^3\log\log(n)$, which satisfies (8.2) with $\alpha = 1$, and also satisfies (8.3).)\\ \begin{definition} \rm If an $mp$ zero-finding method requires time $t(n) \sim C(\alpha)w(n)$ to approximate $\zeta \neq 0$ with precision $n$, where $w(n)$ and $\zeta$ are as above, then $C(\alpha)$ is the {\it asymptotic constant\/} of the method. (Not to be confused with the asymptotic error constant as usually defined for fixed-precision methods $[2]$.) \end{definition} Given several $mp$ methods with various asymptotic constants, it is clear that the method with minimal asymptotic constant is the fastest (for sufficiently large $n$). 
The method which is fastest may depend on $\alpha$, as the following examples show.\\ {\bf DISCRETE NEWTON $mp$ METHODS}\\ Consider iterative methods of the form $$ x_{i+1} = x_i - f(x_i)/g_i\;, \eqno{(8.4)}$$\\[-4ex] where $g_i$ is a finite-difference approximation to $f'(x_i)$. If $\varepsilon_i = |x_i - \zeta|$ is sufficiently small, $f(x_i)$ is evaluated with absolute error $O\left(\varepsilon^2_i\right)$, and $$ g_i = f'(x_i) + O(\varepsilon_i)\;, \eqno{(8.5)}$$\\[-4ex] then $$|x_{i+1}-\zeta| = O\left(\varepsilon^2_i\right)\;, \eqno{(8.6)}$$\\[-4ex] so the method has order (at least) 2.\\ The simplest method of estimating $f'(x_i)$ to sufficient accuracy is to use the one-sided difference $$ g_i= \frac{f(x_i+h_i)-f(x_i)}{h_i}\;, \eqno{(8.7)}$$\\[-4ex] where $h_i$ is of order $\varepsilon_i$, and the evaluation of $f(x_i+h_i)$ and $f(x_i)$ are performed with an absolute error $O\left(\varepsilon^2_i\right)$. Thus, to obtain $\zeta$ to precision $n$ by this method $(N_1)$, we need two evaluations of $f$ to precision $n$ (at the last iteration), preceded by two evaluations to precision $n/2$, etc. (The same idea is used above, in the proof of Theorem 5.2.) The time required is $$ t(n) \sim 2w(n) + 2w(n/2) + 2w(n/4) + \ldots \;.\eqno{(8.8)}$$\\[-4ex] Thus, from (8.2) and Definition 8.1, the asymptotic constant is $$ C_{N_1}(\alpha) = 2(1+2^{-\alpha}+2^{-2\alpha} + \ldots) = 2/(1-2^{-\alpha}) \;. \eqno{(8.9)}$$\\[-4ex] Since $$ 2 < C_{N_1}(\alpha) \leq 4 \; , \eqno{(8.10)}$$\\[-4ex] the time required to solve (8.1) to precision $n$ is only a small multiple of the time required to evaluate $f$ to the same precision. The same applies for the methods described below.\\ Using (8.7) is not necessarily the best way to estimate $f'(x_i)$. Let $p$ be a fixed positive integer, and consider estimating $f'(x_i)$ by evaluating $f$ at the points \[ x_i - \lfloor p/2 \rfloor h_i, x_i -(\lfloor p/2\rfloor -1)h_i,\ldots, x_i + \lceil p/2\rceil h_i\;. 
\] (The points need not be equally spaced so long as their minimum and maximum separations are of order $h_i$.) Let $g_i$ be the derivative (at $x_i$) of the Lagrange interpolating polynomial agreeing with $f$ at these points. Since $g_i$ estimates $f'(x_i)$ with truncation error $O\left(h^p_i\right)$, we need $h_i$ of order $\varepsilon^{1/p}_i$. Then, to ensure that (8.5) holds, the function evaluations at the above points must be made with absolute error $O\left(\varepsilon^{1+1/p}_i\right)$. Thus, to obtain $\zeta$ to precision $n$ by this method $(N_p)$ we need one evaluation of $f$ to precision $n$ and $p$ evaluations to precision $n(1+1/p)/2$, preceded by one evaluation to precision $n/2$ and $p$ to precision $n(1+1/p)/4$, etc. The asymptotic constant is $$C_N(p,\alpha)=\left(1+p \left(\frac{p+1}{2p}\right)^{\alpha}\right)\Big/ (1-2^{-\alpha}) \;. \eqno{(8.11)}$$\\[-4ex] Let $$ C_N(\alpha) = \min_{p=1,2,\ldots}C_N(p, \alpha) \;, \eqno{(8.12)}$$\\[-4ex] so the ``optimal $mp$ discrete Newton method'' has asymptotic constant $C_N(\alpha)$. From (8.11), the $p$ which minimizes $C_N(p, \alpha)$ also minimizes $p^{1/\alpha}(1+1/p)$, so the minimum for $\alpha >1$ occurs at $p=\lfloor\alpha -1\rfloor$ or $\lceil\alpha - 1\rceil$. In fact, $p=1$ is optimal if $$ 1 \leq \alpha < \log(2)/\log(4/3)=2.4094 \ldots \;, \eqno{(8.13)}$$\\[-4ex] and $p \geq 2$ is optimal if $$ \frac{\log(1-p^{-1})}{\log(1-p^{-2})} < \alpha < \frac{\log(1+p^{-1})}{\log(1+1/(p(p+2)))} \;. \eqno{(8.14)}$$\\[-4ex] The result that method $N_2$ is more efficient than method $N_1$ if $\alpha > 2.4094 \ldots$ is interesting, for $N_2$ requires one more function evaluation per iteration than $N_1$, and has the same order of convergence. The reason is that not all the function evaluations need to be as accurate for method $N_2$ as for method $N_1$.
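The crossover (8.13) between $N_1$ and $N_2$ is easy to confirm numerically from (8.11)--(8.12); a small sketch in Python (the cutoff for the minimization over $p$ is an arbitrary choice):

```python
import math

# C_N(p, alpha) as in (8.11); minimizing over p gives C_N(alpha) of (8.12).
def C_N(p, alpha):
    return (1.0 + p * ((p + 1.0) / (2.0 * p)) ** alpha) / (1.0 - 2.0 ** (-alpha))

def best_p(alpha, pmax=50):
    # the p attaining the minimum in (8.12), searched over 1 <= p <= pmax
    return min(range(1, pmax + 1), key=lambda p: C_N(p, alpha))

crossover = math.log(2.0) / math.log(4.0 / 3.0)   # 2.4094..., cf. (8.13)
assert abs(C_N(1, 1.0) - 4.0) < 1e-12             # C_N(1) = 4, cf. (8.16)
assert best_p(crossover - 0.01) == 1              # N_1 optimal just below
assert best_p(crossover + 0.01) == 2              # N_2 optimal just above
```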
We give below several more examples where methods with lower order and/or more function evaluations per iteration are more efficient than methods with higher order and/or fewer function evaluations per iteration.\\ For future reference, we note that $$1<C_N(\alpha)\leq 4 \;, \eqno{(8.15)}$$\\[-4ex] $$C_N(1)=4 \;, \eqno{(8.16)}$$\\[-4ex] and $$ C_N(\alpha) - 1 \sim e\alpha2^{-\alpha} \eqno{(8.17)}$$\\[-4ex] as $\alpha \rightarrow \infty$.\\ {\bf A CLASS OF $mp$ SECANT METHODS}\\ It is well-known that the secant method is more efficient than the discrete Newton method for solving nonlinear equations with fixed-precision arithmetic $[2,\; 20]$. For $mp$ methods the comparison depends on the exponent $\alpha$ in (8.2).\\ \pagebreak[4] Let $k$ be a fixed positive integer and $p_k$ the positive real root of $$ x^{k+1} = 1 + x^k \;. \eqno{(8.18)}$$\\[-4ex] The iterative method $S_k$ is defined by $$ x_{i+1} = x_i - f(x_i) \left(\frac{x_i-x_{i-k}}{f(x_i)-f(x_{i-k})}\right) \;, \eqno{(8.19)}$$\\[-4ex] where the function evaluations are performed to sufficient accuracy to ensure that the order of convergence is at least $p_k$. Thus, $S_1$ is the usual secant method with order $p_1 = \frac{1+\sqrt{5}} {2} = 1.618 \ldots; S_2, S_3$ etc.~are methods with lower orders $p_2= 1.4655 \ldots, p_3= 1.3802 \ldots$, etc. With fixed-precision arithmetic, $S_1$ is always preferable to $S_2, S_3$ etc., but this is not always true if $mp$ arithmetic is used.\\ Suppose $i$ and $k$ are fixed, $\delta > 0$ is small, and write $\varepsilon = |x_{i-k}-\zeta|$ and $p=p_k-\delta$. Since the order of convergence is at least $p$, we have $$|x_i-\zeta| = O\left(\varepsilon^{p^k}\right) \;, \eqno{(8.20)}$$\\[-4ex] $$|x_{i+1}-\zeta| = O\left(\varepsilon^{p^{k+1}}\right) \;, \eqno{(8.21)}$$\\[-4ex] $$|x_i - x_{i-k}| = O(\varepsilon) \;, \eqno{(8.22)}$$\\[-4ex] and $$|f(x_i)| = O\left(\varepsilon^{p^k}\right) \;.
\eqno{(8.23)}$$\\[-4ex] For the approximate evaluation of the right side of (8.19) to give order $p$, the absolute error in the evaluation of $f(x_i)$ must be $O\left(\varepsilon^{p^{k+1}}\right)$, and the relative error in the evaluation of $(f(x_i)-f(x_{i-k}))/(x_i-x_{i-k})$ must be $O\left(\varepsilon^{p^{k+1}-p^k}\right)$, so the absolute error in the evaluation of $f(x_{i-k})$ must be $O\left(\varepsilon^{p^{k+1}-p^k+1}\right)$. From (8.18), for $\delta$ sufficiently small, $$ p^{k+1}-p^k+1>p \;, \eqno{(8.24)}$$\\[-4ex] so the evaluation of $\zeta$ to precision $n$ by method $S_k$ requires evaluations of $f$ to precision $n, n/p, n/p^2,$ $ \ldots, n/p^{k-1}, 2n/p^{k+1}, 2n/p^{k+2}$, etc. Thus, the asymptotic constant is $$ C_S(k, \alpha)= 1+p^{-\alpha}+ \ldots + p^{(1-k)\alpha} + (2p^{-(k+1)})^{\alpha}(1+p^{-\alpha}+ \ldots) $$ $$\hspace*{-30mm} = \frac{1-p^{-k\alpha}+(2p^{-(k+1)})^\alpha} {1-p^{-\alpha}} \;, \eqno{(8.25)}$$\\[-4ex] where (after letting $\delta \rightarrow 0$) $p=p_k$ satisfies (8.18).\\ We naturally choose $k$ to minimize $C_S(k, \alpha)$, giving the ``optimal $mp$ secant method'' with asymptotic constant $$ C_S(\alpha) = \min_{k=1,2,\ldots} C_S(k,\alpha) \;. \eqno{(8.26)}$$\\[-4ex] The following lemmas show that the optimal secant method is $S_1$ if $\alpha < 4.5243 \ldots$, and $S_2$ if $\alpha > 4.5243 \ldots$\\ % \begin{lemma} $$ C_S(k, 1) = 3 + p^k_k - p_k \;. \eqno{(8.27)}$$\\[-4ex] \end{lemma} {\bf Proof.} Easy from (8.18) and (8.25).\\ \begin{lemma} $$ C_S(k, \alpha) - 1 \sim \left\{ {(3-\sqrt{5})^{\alpha}\; {\it if}\; k =1\;,}\atop {\hspace*{-.5mm}p^{-\alpha}_k \hspace*{10.5mm}{\it if}\; k \geq 2\;,} \right. \eqno{(8.28)}$$\\[-4ex] as $\alpha \rightarrow \infty$. \end{lemma} {\bf Proof.} From (8.25), $$ C_S(k, \alpha) - 1 \sim p^{-\alpha}_k - p^{-k\alpha}_k + \left(2p^{-(k+1)}_k\right)^\alpha \eqno{(8.29)}$$\\[-4ex] as $\alpha \rightarrow \infty$.
If $k \geq 2$ then, from (8.18), $$ p^{k}_k = p^{-1}_k + p^{k-1}_k \geq p^{-1}_k + p_k > 2 \;, \eqno{(8.30)}$$\\[-4ex] so $$ p^{-1}_k > 2p^{-(k+1)}_k \;. \eqno{(8.31)}$$\\[-4ex] Thus, the result for $k \geq 2$ follows from (8.29). The result for $k=1$ also follows from (8.29), for $2p^{-2}_1 = 3 - \sqrt{5}$.\\ \begin{lemma} $$ C_S(\alpha) = \left\{ {C_S(1, \alpha) \;\;{\it if}\;\; 1 \leq \alpha \leq \alpha_0 \;,}\atop {\hspace*{-7mm}C_S(2, \alpha) \;\;{\it if}\;\; \alpha \geq \alpha_0 \;,} \right. \eqno{(8.32)}$$\\[-4ex] where $\alpha_0 = 4.5243 \ldots$ is the root of $$ C_S(1,\alpha_0) = C_S(2,\alpha_0) \;. \eqno{(8.33)}$$\\[-4ex] \end{lemma} {\bf Proof.} The details of the proof are omitted, but we note that the result follows from Lemmas 8.1 and 8.2 for (respectively) % small and large values of $\alpha$.\\ {From} (8.25), $C_S(k, \alpha)$ is a monotonic decreasing function of $\alpha$, so the same is true of $C_S(\alpha)$. Thus, from Lemmas 8.1, 8.2 and 8.3, $$ 1 < C_S(\alpha) \leq 3 \;, \eqno{(8.34)}$$\\[-4ex] $$ \hspace*{4mm} C_S(1) = 3 \;, \eqno{(8.35)}$$\\[-4ex] and $$ C_S(\alpha) - 1 \sim p^{-\alpha}_2 = (0.6823 \ldots)^\alpha \eqno{(8.36)}$$\\[-4ex] as $\alpha \rightarrow \infty$. Comparing these results with (8.15) to (8.17), we see that the optimal $mp$ secant method is more efficient than the optimal $mp$ discrete Newton method for small $\alpha$, but less efficient for large $\alpha$. (The changeover occurs at $\alpha = 8.7143 \ldots)$\\ {\bf AN $mp$ METHOD USING INVERSE QUADRATIC INTERPOLATION}\\ For fixed-precision arithmetic the method of inverse quadratic interpolation $[2]$ is slightly more efficient than the secant method, for it has order $p_Q = 1.8392 \ldots > 1.6180 \ldots$, and requires the same number (one) of function evaluations per iteration.
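The changeover value $\alpha = 8.7143\ldots$ just quoted can be bracketed numerically from (8.11)--(8.12) and (8.25)--(8.26); a small sketch (the search cutoffs and the bisection count are arbitrary choices):

```python
# p_k is the positive root of x^(k+1) = 1 + x^k, equation (8.18).
def p_k(k):
    lo, hi = 1.0, 2.0
    for _ in range(200):                  # bisection: f(1) < 0 < f(2)
        mid = 0.5 * (lo + hi)
        if mid ** (k + 1) - mid ** k - 1.0 > 0.0:
            hi = mid
        else:
            lo = mid
    return lo

def C_S(alpha, kmax=20):
    # (8.25) minimized over k as in (8.26)
    best = float('inf')
    for k in range(1, kmax + 1):
        p = p_k(k)
        c = (1.0 - p ** (-k * alpha)
             + (2.0 * p ** (-(k + 1))) ** alpha) / (1.0 - p ** (-alpha))
        best = min(best, c)
    return best

def C_N(alpha, pmax=50):
    # (8.11) minimized over p as in (8.12)
    return min((1.0 + p * ((p + 1.0) / (2.0 * p)) ** alpha)
               / (1.0 - 2.0 ** (-alpha)) for p in range(1, pmax + 1))

assert abs(C_S(1.0) - 3.0) < 1e-9   # C_S(1) = 3, cf. (8.35)
assert C_S(8.0) < C_N(8.0)          # the secant method still wins at alpha = 8
assert C_N(9.0) < C_S(9.0)          # discrete Newton wins at alpha = 9
```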
For $mp$ arithmetic, it turns out that inverse quadratic interpolation $(Q)$ is always more efficient than the secant method $S_1$, but it is less efficient than the secant method $S_2$ if $\alpha > 5.0571 \ldots$\\ \pagebreak Since the analysis is similar to that for method $S_1$ above, the details are omitted. The order $p_Q$ is the positive real root of $$ x^3 = 1 + x + x^2 \;. \eqno{(8.37)}$$\\[-4ex] For brevity, we write $\sigma = 1/p_Q = 0.5436 \ldots$\\ To evaluate $\zeta$ to precision $n$ by method $Q$ requires evaluations of $f$ to precision $n$, $(1-\sigma +\sigma^2)n$, and $\sigma^j(1-\sigma - \sigma^2 + 2\sigma^3)n$ for $j = 0, 1, 2, \ldots$ Hence, the asymptotic constant is $$ C_Q(\alpha) = 1 + (1-\sigma + \sigma^2)^\alpha + (1-\sigma - \sigma^2 + 2\sigma^3)^\alpha/(1-\sigma^\alpha)$$ $$\hspace*{-11mm} = 1+ (1-\sigma + \sigma^2)^\alpha + (3\sigma^3)^\alpha/ (1-\sigma^\alpha) \eqno{(8.38)}$$\\[-4ex] from (8.37). Corresponding to the results (8.15) to (8.17) and (8.34) to (8.36), we have that $C_Q(\alpha)$ is monotonic decreasing, $$\textstyle 1 < C_Q(\alpha) \leq C_Q(1) = \frac{1}{2}(7-2\sigma - \sigma^2) = 2.8085 \ldots \;, \eqno{(8.39)}$$\\[-4ex] and $$ C_Q(\alpha) - 1 \sim (1-\sigma+ \sigma^2)^\alpha = (0.7519 \ldots)^\alpha \eqno{(8.40)}$$\\[-4ex] as $\alpha \rightarrow \infty$. Method $Q$ is more efficient than the optimal $mp$ secant method if $\alpha < 5.0571 \ldots$, and more efficient than the optimal $mp$ discrete Newton method if $\alpha < 7.1349 \ldots$ We do not know any $mp$ method which is more efficient than method $Q$ for $\alpha$ close to 1.\\ {\bf OTHER $mp$ METHODS USING INVERSE INTERPOLATION}\\ Since inverse quadratic interpolation is more efficient than linear interpolation (at least for $\alpha$ close to 1), it is natural to ask if inverse cubic or higher degree interpolation is even more efficient. Suppose $\frac{1}{2} \le \mu < 1$, and consider an inverse interpolation method $I_\mu$ with order $1/\mu$.
In particular, consider the method $I_\mu$ which uses inverse interpolation at $x_i, x_{i-1}, \ldots, x_{i-k}$ to generate $x_{i+1}$, where $k$ is sufficiently large, and the function evaluations at $x_i, \ldots, x_{i-k}$ are sufficiently accurate to ensure that the order is at least $1/\mu$ and, in general, no more than $1/\mu$. (The limiting case $I_{1/2}$ is the method which uses inverse interpolation through all previous points $x_0, x_1, \ldots, x_i$ to generate $x_{i+1}$.)\\ By an analysis similar to those above, it may be shown that the asymptotic constant of method $I_\mu$ is $$ C_I(\mu, \alpha) = \sum^{\infty}_{j=0} \left(s_j(\mu)\right)^\alpha \;, \eqno{(8.41)}$$\\[-4ex] where $s_0(\mu) =1$ and $$ s_j(\mu) = \max\left[ \mu s_{j-1}(\mu), 1+j\mu^{j+1}-\mu(1-\mu^j)/ (1-\mu)\right] \eqno{(8.42)}$$\\[-4ex] for $j = 1, 2, \ldots$ Space does not allow a proof of (8.41), but related results are given in $[20,$ Appendix H$]$. We note the easily verified special cases $$ C_I\left(\frac{\sqrt{5}-1}{2},\; \alpha\right) = C_S(1, \alpha) \eqno{(8.43)}$$\\[-4ex] and $$C_I(\sigma, \alpha) = C_Q(\alpha) \;. \eqno{(8.44)}$$\\[-4ex] \pagebreak The method with maximal order (see $[7]$) is $I_{1/2}$, with asymptotic constant $${\textstyle C_I(\frac{1}{2}, \alpha)} = \sum^{\infty}_{j=2} (j2^{1-j})^\alpha \;. \eqno{(8.45)}$$\\[-4ex] The ``optimal $mp$ inverse interpolatory method'' is the method $I_\mu$ with $\mu(\alpha)$ chosen to minimize $C_I(\mu, \alpha)$, so its asymptotic constant is $$ C_I(\alpha)= \min_{\frac{1}{2} \leq \mu \leq 1} C_I(\mu, \alpha) \;.
\eqno{(8.46)}$$\\[-2ex] The following lemma shows that the optimal choice is $\mu = \sigma$, corresponding to the inverse quadratic method $Q$ discussed above, if $\alpha \leq 4.6056 \ldots$\\ \begin{lemma} If $C_I(\alpha) = C_I(\mu(\alpha), \alpha)$ then $$ \mu(\alpha) = \sigma = 0.5436 \ldots \;{\it if}\; 1 \leq \alpha \leq 4.6056 \ldots \;, \eqno{(8.47)}$$\\[-4ex] $\mu(\alpha)$ is a monotonic decreasing function of $\alpha$, and $$\textstyle \lim_{\alpha \rightarrow \infty} \mu(\alpha) = \frac{1}{2} \;. \eqno{(8.48)}$$\\[-4ex] \end{lemma} {From} (8.45), $$\textstyle C_I(\frac{1}{2}, \alpha) - 1 \sim \left(\frac{3}{4}\right)^\alpha \eqno{(8.49)}$$\\[-4ex] as $\alpha \rightarrow \infty$, so Lemma 8.4 shows that the optimal inverse interpolatory method is more efficient than methods $S_1$ and $Q$ (as expected), but less efficient than method $S_2$ or the optimal discrete Newton method, for large $\alpha$. In fact $C_I(\alpha) < C_S(\alpha)$ for $1 \leq \alpha < 5.0608 \ldots$\\ {\bf A LOWER BOUND FOR $C(\alpha)$}\\ The following theorem shows that $C(\alpha) \geq 1$ for all useful $mp$ methods. The results above (e.g.~(8.17)) show that the constant ``1'' here is best possible, as methods with $C(\alpha) \rightarrow 1$ as $\alpha \rightarrow \infty$ are possible. The minimal value of $C(\alpha)$ for any finite $\alpha$ is an open question.\\ \begin{theorem} If an $mp$ method is well-defined and converges to a zero of the functions $f_1(x) = F(x) - y$ and $f_2(y) = F^{(-1)}(y) - x$, where $x$ and $y$ are restricted to nonempty domains $D_x$ and $D_y$, and $F$ is some invertible mapping of $D_x$ onto $D_y$ such that $t_n(E(F))$ satisfies {\rm{(8.2)}}, then the asymptotic constant of the method satisfies $C(\alpha) \geq 1$. \end{theorem} \vspace*{3mm} {\bf Proof.} If $C(\alpha)<1$ then, by solving $f_1(x) =0$, we can evaluate $F^{(-1)}(y)$ (for $y$ in $D_y$) in time less than $t_n(E(F))$, for all sufficiently large $n$.
Applying the same argument to $f_2(y)$, we can evaluate $F=(F^{(-1)})^{(-1)}$ in time less than $t_n(E(F^{(-1)}))$. Hence, for large $n$ we have $$ t_n(E(F)) < t_n (E(F^{(-1)})) < t_n(E(F)) \;, \eqno{(8.50)}$$\\[-4ex] a contradiction. Hence, $C(\alpha) \geq 1$.\\ \begin{conjecture} For all $mp$ methods (using only function evaluations) which are well-defined and convergent for some reasonable class of functions with simple zeros, $$ C(\alpha) \geq 1/(1-2^{-\alpha}) \;. \eqno{(8.51)}$$\\[-4ex] \end{conjecture} \pagebreak {\bf SUMMARY OF $mp$ ZERO-FINDING METHODS}\\ Of the methods described in this section, the most efficient are: \begin{enumerate} \item optimal inverse interpolation, if $1 \leq \alpha \leq 5.0608 \ldots$ (equivalent to inverse quadratic interpolation, if $1\leq \alpha \leq 4.6056 \ldots)\;$; \item optimal secant method (method $S_2$), if $5.0608 \ldots < \alpha \leq 8.7143 \ldots\;$; \item optimal discrete Newton, if $8.7143 \ldots < \alpha$. \end{enumerate} For practical purposes, the inverse quadratic interpolation method is to be recommended, for it is easy to program, and its asymptotic constant $C_Q(\alpha)$ is always within 3.2\% of the least constant for the methods above. Numerical values of the asymptotic constants, for various values of $\alpha$, are given to $4D$ in Table 8.1. The smallest constant for each $\alpha$ is italicized.
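As a consistency check, the $\alpha = 1$ row of Table 8.1 can be recomputed from (8.9), Lemma 8.1, (8.39) and (8.45), with $p_1, p_2$ from (8.18) and $1/\sigma$ from (8.37); a small sketch (the bisection and truncation parameters are arbitrary choices):

```python
# Recomputing the alpha = 1 row of Table 8.1 to 4D.
def root(f, lo=1.0, hi=2.0, iters=200):
    for _ in range(iters):               # bisection, assumes f(lo) < 0 < f(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0.0 else (mid, hi)
    return lo

p1 = root(lambda x: x ** 2 - x - 1.0)                 # (8.18), k = 1: golden ratio
p2 = root(lambda x: x ** 3 - x ** 2 - 1.0)            # (8.18), k = 2
sigma = 1.0 / root(lambda x: x ** 3 - x ** 2 - x - 1.0)   # (8.37)

row = {
    'C_N':        2.0 / (1.0 - 0.5),                      # (8.9), alpha = 1
    'C_S(1,.)':   3.0 + p1 - p1,                          # Lemma 8.1, k = 1
    'C_S(2,.)':   3.0 + p2 ** 2 - p2,                     # Lemma 8.1, k = 2
    'C_Q':        0.5 * (7.0 - 2.0 * sigma - sigma ** 2),  # (8.39)
    'C_I(1/2,.)': sum(j * 2.0 ** (1 - j) for j in range(2, 60)),  # (8.45)
}
expected = {'C_N': 4.0, 'C_S(1,.)': 3.0, 'C_S(2,.)': 3.6823,
            'C_Q': 2.8085, 'C_I(1/2,.)': 3.0}
assert all(round(row[k], 4) == expected[k] for k in expected)
```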
\vspace*{3mm} \begin{center} TABLE 8.1 $\;\;\;\;\;\;$ Asymptotic constants for various $mp$ methods \vspace*{8mm} \begin{tabular}{rcccccc}\hline &&&&&&\\[-2ex] $\alpha\hphantom{0}$ & $C_N(\alpha)$ & $C_S(1,\alpha)$ & $C_S(2,\alpha)$ & $C_Q(\alpha)$ & $C_I(\alpha)$ & $C_I(\frac{1}{2}, \alpha)$ \\[1ex]\hline &&&&&&\\[-2ex] 1.0&4.0000&3.0000&3.6823&{\it 2.8085}&{\it 2.8085}&3.0000\\ 1.1&3.7489&2.8093&3.4256&{\it 2.6484}&{\it 2.6484}&2.8193\\ 1.5&3.0938&2.2987&2.7241&{\it 2.2108}&{\it 2.2108}&2.3219\\ 2.0&2.6667&1.9443&2.2209&{\it 1.8954}&{\it 1.8954}&1.9630\\ 3.0&2.1071&1.5836&1.6935&{\it 1.5586}&{\it 1.5586}&1.5856\\ 4.0&1.6988&1.3988&1.4248&{\it 1.3789}&{\it 1.3789}&1.3898\\ 5.0&1.4260&1.2860&1.2694&1.2677&{\it 1.2676}&1.2718\\ 6.0&1.2529&1.2105&{\it 1.1741}&1.1936&1.1930&1.1946\\ 7.0&1.1469&1.1573&{\it 1.1137}&1.1420&1.1410&1.1416\\ 8.0&1.0838&1.1185&{\it 1.0748}&1.1051&1.1039&1.1041\\ 9.0&{\it 1.0471}&1.0898&1.0495&1.0782&1.0770&1.0771\\ 10.0&{\it 1.0262}&1.0682&1.0328&1.0584&1.0573&1.0573\\ 15.0&{\it 1.0012}&1.0176&1.0043&1.0139&1.0134&1.0134\\ 20.0&{\it 1.0001}&1.0046&1.0006&1.0033&1.0032&1.0032\\[1ex]\hline \end{tabular} \end{center} \vspace{20mm} {\bf NOTE ADDED IN PROOF.} Theorem 6.2 and its corollaries may be improved by a factor $\log(n)$, as described in $[37]$ and $[38]$. \vspace*{\fill} \pagebreak \section*{References} \smallskip \begin{tabular}{lp{414pt}} $[1]$& Brent, R.P. The computational complexity of iterative methods for systems of nonlinear equations. In {\it Complexity of Computer Computations} (edited by R.E.~Miller and J.W.~Thatcher). Plenum Press, New York, 1972, 61--71.\\ $[2]$& Brent, R.P. {\it Algorithms for Minimization without Derivatives.} Prentice-Hall, Englewood Cliffs, New Jersey, 1973.\\ $[3]$& Brent, R.P. Some efficient algorithms for solving systems of nonlinear equations. {\it SIAM J.~Numer.~Anal.}~{\bf 10}, 327--344, 1973.\\ $[4]$& Brent, R.P. On the precision attainable with various floating-point number systems.
{\it IEEE Trans.~Comp.}~{\bf C-22}, 601--607, 1973.\\ $[5]$& Brent, R.P. Error analysis of algorithms for matrix multiplication and triangular decomposition using Winograd's identity. {\it Numer.~Math.}~{\bf 16}, 145--156, 1970. \\ $[6]$& Brent, R.P. {\it Numerical solution of nonlinear equations.} Computer Sci.\ Dept., Stanford University, March 1975, 189~{\it{pp}}.\\ $[7]$& Brent, R.P., Winograd, S.~and Wolfe, P. Optimal iterative processes for rootfinding. {\it Numer.~Math.}~{\bf 20}, 327--341, 1973.\\ $[8]$& Cook, S.A.~and Aanderaa, S.O. On the minimum complexity of functions. {\it Trans.\ Amer.\ Math.\ Soc.}~{\bf 142}, 291--314, 1969.\\ $[9]$& Floyd, R.W. Unpublished notes.\\ $[10]$& Hindmarsh, A.C. Optimality in a class of rootfinding algorithms. {\it SIAM J.~Numer.~Anal.} {\bf 9}, 205--214, 1972.\\ $[11]$& Hopcroft, J.E. Complexity of computer computations. In {\it Information Processing 74}. North-Holland, Amsterdam, 1974, 620--626.\\ $[12]$& Karatsuba, A.~and Ofman, Y. Multiplication of multidigit numbers on automata (Russian). {\it Dokl.~Akad.~Nauk SSSR} {\bf 145}, 293--294, 1962.\\ $[13]$& Knuth, D.E. {\it The Art of Computer Programming\/}, Vol.~II, {\it Seminumerical Algorithms\/}. Addison Wesley, Reading, Massachusetts, 1969. Errata and addenda: Report CS 194, Computer Sci.~Department, Stanford University, 1970.\\ $[14]$& Kung, H.T. The computational complexity of algebraic numbers. {\it SIAM J.~Numer.~Anal.\ }(to appear).\\ % $[15]$& Kung, H.T. A bound on the multiplicative efficiency of iteration. {\it J.~Computer \& System Sciences} {\bf 7}, 334--342, 1973.\\ $[16]$& Kung, H.T.~and Traub, J.F. Optimal order of one-point and multipoint iteration. {\it J.~ACM\/} {\bf 21}, 643--651, 1974.\\ $[17]$& Kung, H.T.~and Traub, J.F. Computational complexity of one-point and multipoint iteration. In {\it Complexity of Real Computations\/} (edited by R.~Karp). Amer.~Math.~Soc., % Providence, Rhode Island, 1974, 149--160.\\ % $[18]$& Kung, H.T.~and Traub, J.F.
Optimal order and efficiency for iterations with two evaluations. Tech.~Report, Department of Computer Science, Carnegie-Mellon University, 1973.\\ $[19]$& Morgenstern, J. The linear complexity of computation. {\it J.~ACM\/} {\bf 20}, 305--306, 1973.\\ $[20]$& Ostrowski, A.M. {\it Solution of Equations in Euclidean and Banach Spaces}. Academic Press, New York, 1973.\\ $[21]$& Paterson, M.S. Efficient iterations for algebraic numbers. In {\it Complexity of Computer Computations\/} (edited by R.E.~Miller and J.W.~Thatcher). Plenum Press, New York, 1972, 41--52.\\ $[22]$& Rissanen, J. On optimum root-finding algorithms. {\it J.~Math.~Anal.~Applics.} {\bf 36}, 220--225, 1971.\\ \end{tabular} \begin{tabular}{lp{414pt}} $[23]$& Sch\"{o}nhage, A.~and Strassen, V. Schnelle Multiplikation grosser Zahlen. {\it Computing\/} {\bf 7}, 281--292, 1971.\\ $[24]$& Schultz, M.H. The computational complexity of elliptic partial differential equations. In {\it Complexity of Computer Computations\/} (edited by R.E.~Miller and J.W.~Thatcher). Plenum Press, New York, 1972, 73--83.\\ $[25]$& Shanks, D.~and Wrench, J.W. Calculation of $\pi$ to 100,000 decimals. {\it Math.~Comp.\ }{\bf{16}}, 76--99, 1962.\\ $[26]$& Shanks, D.~and Wrench, J.W. Calculation of $e$ to 100,000 decimals. {\it Math.~Comp.\ }{\bf{23}}, 679--680, 1969.\\ $[27]$& Traub, J.F. {\it Iterative Methods for the Solution of Equations}. Prentice-Hall, Englewood Cliffs, New Jersey, 1964.\\ $[28]$& Traub, J.F. Computational complexity of iterative processes. {\it SIAM J.~Computing} {\bf 1}, 167--179, 1972.\\ $[29]$& Traub, J.F. Numerical mathematics and computer science. {\it Comm.~ACM\/} {\bf 15}, 537--541, 1972.\\ $[30]$& Traub, J.F. Optimal iterative processes: theorems and conjectures. In {\it Information Processing 71\/}. North-Holland, Amsterdam, 1972, 1273--1277.\\ $[31]$& Traub, J.F. Theory of optimal algorithms. In {\it Software for Numerical Mathematics\/} (edited by D.J.~Evans).
Academic Press, 1974.\\ $[32]$& Traub, J.F. An introduction to some current research in numerical computational complexity. Tech.~Report, Department of Computer Science, Carnegie-Mellon University, 1973.\\ $[33]$& Winograd, S. On the time required to perform addition. {\it J.~ACM\/} {\bf 12}, 277--285, 1965.\\ $[34]$& Winograd, S. On the time required to perform multiplication. {\it J.~ACM\/} {\bf 14}, 793--802, 1967.\\ $[35]$& Wozniakowski, H. Generalized information and maximal order of iteration for operator equations. {\it SIAM J.~Numer.~Anal\/}. {\bf 12}, 121--135, 1975.\\ $[36]$& Wozniakowski, H. Maximal stationary iterative methods for the solution of operator equations. {\it SIAM J.~Numer.~Anal\/}. {\bf 11}, 934--949, 1974.\\ $[37]$& Brent, R.P. Fast multiple-precision evaluation of elementary functions. {\it J.~ACM\/} {\bf 23}, 242--251, 1976.\\ $[38]$& Brent, R.P. Multiple-precision zero-finding methods and the complexity of elementary function evaluation. In {\it Analytic Computational Complexity\/} (edited by J.F.~Traub). Academic Press, New York, 1975, 59--73. \end{tabular} \parskip 2mm \pagebreak[4] \section*{Postscript (September 1999)} \subsection*{Historical Notes} This paper was retyped (with minor corrections) in \LaTeX\ during August 1999. It is available electronically from \url{http://wwwmaths.anu.edu.au/~brent/pub/pub032.html} \medskip The related paper Brent~[38] is available electronically from\\ \url{http://wwwmaths.anu.edu.au/~brent/pub/pub028.html} \medskip The paper Kung~[14] appeared in {\em SIAM J.~Numer.\ Anal.\ }{\bf{12}} (1975), 89--96. \subsection*{Sharper Results} Some of the constants given in Table 7.1 can be improved, e.g. $C_{DM}$, $C_{RM}$, $C_{DS}$, $C_{RS}$. One source of improvement is given in a report by Karp and Markstein\footnote{ Alan H.\ Karp and Peter Markstein, {\em High Precision Division and Square Root}, HP Labs Report 93-93-42 (R.1), June 1993, Revised October 1994. 
Available electronically from\\ {\tt http://{{\linebreak[0]}}www.hpl.hp.com/{{\linebreak[0]}}techreports/% {{\linebreak[0]}}93/{{\linebreak[0]}}HPL-93-42.html} }. \medskip For example, consider $C_{DM}$. We want to compute an $n$-bit approximation to $b/a$. If $x_i \to 1/a$ as in (7.4) and we define $y_i = bx_i$, then $y_i \to b/a$. Also, if $x_i$ satisfies the recurrence~(7.4), then $y_i$ satisfies $$ y_{i+1} = y_i - x_i(ay_i - b)\;. \eqno(7.4') $$ Note that (7.4') is self-correcting because of the computation of the residual $ay_i - b$. Suppose $x_i$ has (relative) precision $n/2$. If we approximate $y_i = bx_i$ using an $\frac{n}{2}$-bit multiplication, compute the residual $ay_i - b$ using an $n$-bit multiplication, then its product with $x_i$ using an $\frac{n}{2}$-bit multiplication, we can apply~(7.4') to obtain $y_{i+1}$ with relative precision~$n$. Assuming $x_i$ is obtained in time $\sim 3M(n/2) \sim\frac{3}{2}M(n)$ (see (7.5)), the time to obtain $y_{i+1}$ is $\sim\frac{7}{2}M(n)$, i.e. $C_{DM} \le 3.5$, which is sharper than the bound $C_{DM} \le 4.0$ given in Table~7.1. \medskip Similarly, we can obtain $C_{RM} \le 4.25$, which is sharper than the bound $C_{RM} \le 5.5$ given in Table~7.1. If $x_i \to a^{-1/2}$ and $y_i = ax_i \to \sqrt{a}$, we compute a precision $n/2$ approximation $x_i$ in time $\sim\frac{9}{2}M(n/2)$ as in Section~7, then apply a final second-order iteration for $$ y_{i+1} = y_i - x_i(y_i^2 - a)/2 \eqno(7.9') $$ (derived by multiplying~(7.9) by $a$ and using~(7.7)) to obtain a precision $n$ approximation $y_{i+1}$ to $\sqrt{a}$. \medskip As a corollary, the time required for an arithmetic-geometric mean iteration~[37,38] is reduced from ${\sim}6.5M(n)$ to ${\sim}5.25M(n)$. \subsection*{The Definition of $n$-bit Multiplication} Our $t_n(M)$ (see Sections 1--3) is essentially the time required to compute the most significant $n$ bits in the product of two $n$-bit numbers. In Brent~[38], % $t_n(M)$ is written as $M(n)$. 
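The precision-doubling behaviour of the residual-correcting step (7.4') discussed above is easy to observe directly; a small sketch using Python's `decimal` module (the digit counts are purely illustrative and not tied to the paper's model of computation):

```python
# One step of y <- y - x*(a*y - b), as in (7.4'), starting from x ~ 1/a
# and y ~ b/a accurate to about 23 digits; the residual a*y - b makes
# the step self-correcting, roughly doubling the precision of y.
from decimal import Decimal, getcontext

getcontext().prec = 50
a, b = Decimal(3), Decimal(1)
x = Decimal('0.33333333333333333333333')   # ~23-digit approximation to 1/a
y = b * x                                  # ~23-digit approximation to b/a
y = y - x * (a * y - b)                    # the step (7.4')
assert abs(y - b / a) < Decimal('1e-40')   # now ~46 correct digits
```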
A related but subtly different function is $M^{*}(n)$, defined as the time required to compute the full $2n$-bit product of $n$-bit numbers\footnote{In Brent~[37] % we (confusingly) used the notation $M(n)$ for $M^{*}(n)$.}. Paul Zimmermann\footnote{Personal communication, 1999.} observed that smaller constants can sometimes be obtained in row $Y = M$ of Table 7.1 if we use $M^{*}(n)$ instead of $M(n)$. (We denote these constants by $C_{XM^{*}}$ to avoid confusion with the $C_{XM}$ of Table~7.1.) For example, $C_{DM^{*}} < 3.5$ and $C_{RM^{*}} < 4.25$. % \medskip It is an open question whether $$M(n) \sim M^{*}(n) \;\;{\rm as}\;\; n \to \infty\;;$$ with the best available multiplication algorithms (those based on the FFT) this is true\footnote{Similar remarks apply if we consider computing the product of two polynomials of degree $n-1$, and ask either for the first $n$ terms in the product or the complete product. Although the first computation is faster (by a factor of about two) if the classical order $n^2$ algorithms are used, it is not significantly faster if FFT-based algorithms are used.}. \medskip \subsection*{Final Comments} Daniel Bernstein\footnote{Personal communication, 1999.} observed that the time required to compute $n$-bit square roots can be reduced further if the model of computation is relaxed so that redundant FFTs can be eliminated. Similar remarks apply to division, exponentiation etc (and to operations on power series). \medskip In conclusion, 25 years after the paper was written (in 1974), improvements can still be found, and the last word is yet to be written! \end{document}
\section{Introduction} Localization operators have a long-standing tradition among physicists, mathematicians and engineers. A special form of such operators called ``Anti-Wick operators'' had been used as a quantization procedure by Berezin~\cite{Berezin71,Shubin91} in 1971. The terminology ``Time-frequency localization operators'' or simply ``localization operators'' is due to Daubechies, who wrote the popular papers \cite{DB1,DB2}, which appeared in 1988. From then onwards, so many authors have written contributions on this topic that it is not possible to cite them all. In this note we shall focus on the time-frequency properties of such operators and exhibit the results known so far. Much has been done in terms of necessary and sufficient conditions for boundedness of such operators on suitable normed spaces, as well as their belonging to the Schatten-von Neumann Class $ S_p(L^2(\rd))$, $1<p\leq \infty$. Here we focus on the quasi-Banach setting $0<p<1$ and present outcomes in this framework, while also reviewing the known results for the Banach case $p\geq 1$. First, we introduce the main features of this study. The protagonists of time-frequency\, analysis are the operators of translation and modulation defined by \begin{equation} \label{eqi1} T_xf(t)=f(t-x)\quad{\rm and}\quad M_{\omega}f(t)= e^{2\pi i \omega t}f(t), \, \quad f\in L^2(\rd). \end{equation} For a fixed non-zero $g \in \mathcal{S} (\bR^d )$ (the Schwartz class), the short-time Fourier transform, in short STFT, of $f \in \mathcal{S} ' (\bR^d ) $ (the space of tempered distributions), with respect to the window $g$, is given by \begin{equation} \label{eqi2} V_gf(x,\omega)=\langle f,M_\omega T_x g\rangle =\int_{\mathbb{R}^d} f(t)\, {\overline {g(t-x)}} \, e^{-2\pi i\omega t}\,dt\, .
\end{equation} By means of the STFT, the time-frequency\ localization operator $A_a^{\varphi_1,\varphi_2} $ with symbol $a$, analysis window function $\varphi _1$, and synthesis window function $\varphi _2$ can be formally defined as \begin{equation} \label{eqi4} A_a^{\varphi_1,\varphi_2} f(t)=\int_{\mathbb{R}^{2d}}a (x,\omega ) V_{\varphi_1}f (x,\omega ) M_\omega T_x \varphi _2 (t) \, dx d\omega. \end{equation} In particular, if $a \in \mathcal{S} '({\bR^{2d}} )$ and $\varphi _1, \varphi _2 \in \mathcal{S} (\bR^d )$, then \eqref{eqi4} is a well-defined continuous operator from $\mathcal{S} (\bR^d )$ to $\mathcal{S} ' (\bR^d )$. If $\varphi _1(t) = \varphi _2 (t) = e^{-\pi t^2}$, then $A_a = A_a^{\varphi_1,\varphi_2} $ is the classical Anti-Wick\ operator and the mapping $a \mapsto A_a^{\varphi_1,\varphi_2} $ is understood as a quantization rule, cf. \cite{Berezin71,Shubin91} and the recent contribution \cite{deGossonATFA19}. In a weak sense, the definition of $A_a^{\varphi_1,\varphi_2} $ in \eqref{eqi4} can be rephrased as \begin{equation}\label{anti-Wickg} \langle A^{\f_1,\f_2}_a f,g\rangle=\langle aV_{\varphi_1}f, V_{\varphi_2}g\rangle=\langle a,\overline{V_{\varphi_1}f}\, V_{\varphi_2}g\rangle,\quad f,g\in\mathcal{S}(\mathbb{R}^d)\, . \end{equation} The definition in \eqref{eqi4} has suggested the study of localization operators as a multilinear mapping \begin{equation}\label{map} (a, \varphi _1, \varphi _2) \mapsto A_a^{\varphi_1,\varphi_2} . \end{equation} In \cite{CG02,CG05,Wignersharp2018,Nenad2015,Nenad2016,Wong02} the boundedness of the map in \eqref{map} has been widely studied, depending on the function spaces of both symbol $a$ and windows $\varphi_1,\varphi_2$. The sharpest Schatten-class results are obtained by choosing modulation spaces as spaces for both symbol and windows, as observed in \cite{CG05} and \cite{Wignersharp2018}; in those contributions the focus is limited to the Banach framework.
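As an illustration, the definition \eqref{eqi4} and the weak identity \eqref{anti-Wickg} can be modelled on a finite cyclic group $\mathbb{Z}_N$ with discrete analogues of the operators in \eqref{eqi1}; the normalizations below are choices made for this discrete sketch, not the continuous conventions of the paper:

```python
import numpy as np

# Discrete model on Z_N:  A_a f = sum_{x,w} a(x,w) <f, M_w T_x phi1> M_w T_x phi2,
# and the weak identity   <A_a f, g> = <a V_{phi1} f, V_{phi2} g>.
rng = np.random.default_rng(0)
N = 16
t = np.arange(N)

def mod_trans(phi, x, w):
    # M_w T_x phi on Z_N (translation is cyclic)
    return np.exp(2j * np.pi * w * t / N) * np.roll(phi, x)

def stft(f, g):
    # V_g f(x, w) = <f, M_w T_x g>, inner product linear in the first slot
    return np.array([[np.vdot(mod_trans(g, x, w), f) for w in range(N)]
                     for x in range(N)])

phi1 = np.exp(-((t - N / 2) ** 2) / 8.0)    # Gaussian-like windows
phi2 = np.exp(-((t - N / 2) ** 2) / 12.0)
a = rng.standard_normal((N, N))             # a real symbol on the grid
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

Vf = stft(f, phi1)
Af = sum(a[x, w] * Vf[x, w] * mod_trans(phi2, x, w)
         for x in range(N) for w in range(N))
lhs = np.vdot(g, Af)                        # <A_a f, g>
rhs = np.sum(a * Vf * np.conj(stft(g, phi2)))
assert abs(lhs - rhs) < 1e-6
```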
Sharp compactness results for localization operators are contained in \cite{FG2006}. Finally, smoothness and decay of eigenfunctions for localization operators are studied in \cite{BCN19}, see also \cite{Abreu2012,Abreu2016,Abreu2017}.\par Modulation spaces are (quasi-)Banach spaces that measure the concentration of functions and distributions on the time-frequency plane. Since the STFT is the means of extracting the time-frequency features of a function/distribution, the idea that leads to the definition of modulation spaces is the following: \emph{give a (quasi-)norm to the STFT}. These spaces will be introduced in Subsection $2.2$ below.\par Another way to introduce localization operators is as a form of Weyl transform. The latter can be defined by means of another popular time-frequency representation, the cross-Wigner distribution. Namely, given two functions $f_1,f_2\in \mathcal{S}(\mathbb{R}^d)$, the {\it cross-Wigner distribution} $W(f_1,f_2)$ is defined to be \begin{equation} \label{eq3232} W(f_1,f_2)(x,\omega)=\int_{\mathbb{R}^d} f_1(x+\frac{t}2)\overline{f_2(x-\frac{t}2)} e^{-2\pi i\omega t}\,dt. \end{equation} The quadratic expression $Wf = W(f,f)$ is called the Wigner distribution of $f$. Every continuous operator from $\mathcal{S} (\bR^d )$ to $\mathcal{S} ' (\bR^d )$ can be represented as a pseudodifferential operator in the Weyl form $L_\sigma$ and the connection with the cross-Wigner distribution is provided by \begin{equation}\label{equiv1} \langle L_\sigma f,g\rangle=\langle \sigma,W(g,f)\rangle,\quad\quad f,g\in\mathcal{S}(\mathbb{R}^d). \end{equation} Localization operators $A^{\f_1,\f_2}_a$ can be represented as Weyl operators as follows (cf. \cite{BCG2004,CG02,Nenad2016}) \begin{equation}\label{WA}A^{\f_1,\f_2}_a=L_{a\ast W(\varphi_2,\varphi_1)}, \end{equation} so that the Weyl symbol of the localization operator $A_a^{\varphi_1,\varphi_2}$ is given by \begin{equation}\label{eq2} \sigma = a\ast W(\varphi_2,\varphi_1)\, .
\end{equation} This representation of localization operators in the Weyl form, together with boundedness properties of Weyl operators and sharp continuity properties for the cross-Wigner distribution, yields Schatten-class results for localization operators. In particular, here we present new outcomes in the quasi-Banach setting, while reviewing the known results in the Banach framework, see Theorems \ref{class} and \ref{main} below. \par The paper is organized as follows. Section $2$ presents the basic definitions and properties of the Schatten-von Neumann classes $S_p(L^2(\rd))$, $0<p\leq\infty$, of the modulation spaces, and of the time-frequency analysis tools needed to infer our results. Section $3$ exhibits the sufficient conditions for localization operators to be in the Schatten-von Neumann classes $S_p$. To achieve this goal, sharp continuity properties for the cross-Wigner distribution are presented. This result is new in the framework of quasi-Banach modulation spaces and is the main ingredient to prove sufficient Schatten class conditions for localization operators. Section $4$ contains necessary Schatten class results for localization operators and ends by showing perspectives and open problems about this topic. \section{Preliminaries on Schatten Classes, Modulation Spaces and Frames} \subsection{Schatten-von Neumann Classes.} We restrict our attention to the Hilbert space $L^2(\rd)$. Let $T$ be a compact operator on $L^2(\rd)$. Then $T^\ast T\colon L^2(\rd)\to L^2(\rd)$ is compact, self-adjoint, and non-negative. Hence, we can define the absolute value of $T$ by $|T| = (T^\ast T)^{\frac12}$, acting on $L^2(\rd)$. Recall that $|T|$ is compact, self-adjoint, and non-negative, hence by the Spectral Theorem we can find an orthonormal basis $(\psi_n)_n$ for $L^2(\rd)$ consisting of eigenvectors of $|T|$. The corresponding eigenvalues $s_1(T) \geq s_2(T)\geq \dots \geq s_n(T) \geq \dots\geq 0$ are called the singular values of $T$.
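In finite dimensions the singular values just introduced are exactly those computed by the SVD, and one can verify the definition $|T|=(T^\ast T)^{1/2}$ numerically. The snippet below is a minimal sketch (the helper \texttt{schatten\_norm} is our own name); it also checks two standard facts recalled in this section: the $S_2$-norm is the Frobenius (Hilbert--Schmidt) norm, and the $S_p$ (quasi-)norms decrease as $p$ increases, reflecting the nesting of the classes.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

# singular values = square roots of the eigenvalues of the self-adjoint,
# non-negative operator T* T, i.e. the eigenvalues of |T| = (T* T)^{1/2}
eig = np.linalg.eigvalsh(T.conj().T @ T)
s_from_eig = np.sort(np.sqrt(np.clip(eig, 0.0, None)))[::-1]
s = np.linalg.svd(T, compute_uv=False)  # returned in decreasing order
assert np.allclose(s, s_from_eig)

def schatten_norm(T, p):
    """l^p (quasi-)norm of the singular values; p = inf gives the operator norm."""
    s = np.linalg.svd(T, compute_uv=False)
    return np.max(s) if np.isinf(p) else float((s ** p).sum() ** (1.0 / p))

# S_2 carries the Hilbert-Schmidt (Frobenius) norm
assert np.isclose(schatten_norm(T, 2), np.linalg.norm(T, 'fro'))
# nesting of the classes: the (quasi-)norms decrease as p grows
assert schatten_norm(T, np.inf) <= schatten_norm(T, 2) <= schatten_norm(T, 1)
```

For $0<p<1$ the same formula returns the quasi-norm $\|T\|_{S_p}$, for which the triangle inequality fails but the computation is unchanged.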
If $0<p<\infty$ and the sequence of singular values is $\ell^p$-summable, then $T$ is said to belong to the Schatten-von Neumann class $S_p(L^2(\rd))$. If $1 \leq p < \infty$, a norm is associated to $S_p(L^2(\rd))$ by \begin{equation}\label{normSp} \|T\|_{S_p}:=\left(\sum_{n=1}^\infty s_n(T)^p\right)^\frac1p. \end{equation} If $1\leq p<\infty$ then $(S_p(L^2(\rd)), \|\cdot\|_{S_p})$ is a Banach space whereas, for $0<p<1$, $(S_p(L^2(\rd)), \|\cdot\|_{S_p})$ is a quasi-Banach space since the quantity $\|T\|_{S_p}$ defined in \eqref{normSp} is only a quasinorm. For completeness, we define $S_\infty(L^2(\rd))$ to be the space of bounded operators on $L^2(\rd)$. The Schatten-von Neumann classes are nested, with $S_p \subset S_q$ for $0<p\leq q\leq\infty$; for details on this topic we refer to \cite{Gohberg1969,Reed-Simon1975,Simon2005,Schatten70,Shubin91,zhu2007operator}. For $2\leq p<\infty$ and $T$ in $S_p(L^2(\rd))$, we can express its norm by \begin{equation}\label{char-p-big2} \|T\|_{S_p}^p=\sup \sum_n \|T \phi_n\|_{L^2}^p, \end{equation} the supremum being over all orthonormal bases $(\phi_n)_n$ of $L^2(\rd)$. Then, as a straightforward consequence (see \cite[Theorem 12]{ZhuSchatten2015}), we have \begin{equation}\label{Schatten-norm-bound} \left(\sum_{n} |\langle T \phi_n, \phi_n\rangle|^p\right)^{1/p}\leq \|T\|_{S_p}, \end{equation} for every orthonormal basis $(\phi_n)_n$, $2\leq p<\infty$. If $T\in S_2(L^2(\rd))$ then $T$ is called a \emph{Hilbert-Schmidt} operator. If $T\in S_1(L^2(\rd))$ then $T$ is said to be a \emph{trace class} operator and the space $S_1$ is named the trace class. \par \begin{remark}\label{counterexp} For $0<p<2$, the characterization in \eqref{char-p-big2} does not hold, in general. In fact, a simple example is shown by Bingyang, Khoi and Zhu in the paper \cite{ZhuSchatten2015}. Let us recall it for the sake of clarity in the case of the Hilbert space $H=L^2(\rd)$.
Fix an orthonormal basis $(\phi_n)_n$ and consider the function $h\in L^2(\rd)$ given by $$h=\sum_{n=1}^\infty \frac{\phi_n}{\sqrt{n}\log( n+1)}. $$ Define the rank-one operator on $L^2(\rd)$ by $$ Tf= \langle f,h\rangle h, \quad f\in L^2(\rd).$$ We have $$T \phi_n= \langle \phi_n ,h\rangle h= \frac{h}{\sqrt{n}\log( n+1)},\quad n\geq 1.$$ It follows that $$\sum_{n=1}^\infty \|T\phi_n\|_{L^2}^p=\|h\|_{L^2}^p\sum_{n=1}^\infty \frac{1}{[\sqrt{n}\log(n+1)]^p}=\infty$$ for any $0<p<2$. \end{remark} \subsection{Modulation Spaces} \subsubsection{Weight functions} In the sequel $v$ will always be a continuous, positive, submultiplicative weight function on $\bR^d$, i.e., $ v(z_1+z_2)\leq v(z_1)v(z_2)$, for all $ z_1,z_2\in\mathbb{R}^d$. We say that $m\in \mathcal{M}_v(\bR^d)$ if $m$ is a positive, continuous weight function on $\mathbb{R}^d$ which is {\it $v$-moderate}: $ m(z_1+z_2)\leq Cv(z_1)m(z_2)$ for all $z_1,z_2\in\mathbb{R}^d$. We will mainly work with polynomial weights of the type \begin{equation}\label{vs} v_s(z)=\langle z\rangle^s =(1+|z|^2)^{s/2},\quad s\in\field{R},\quad z\in\bR^d. \end{equation} Observe that, for $s<0$, $v_s$ is $v_{|s|}$-moderate.\par Given two weight functions $m_1,m_2$ on $\bR^d$, we write $$(m_1\otimes m_2)(x,\omega)=m_1(x)m_2(\omega),\quad x,\omega\in \bR^d.$$ \noindent {\bf Modulation Spaces.} We present the most general definition of such spaces, covering the quasi-Banach setting, first introduced by Y.V. Galperin and S. Samarah in \cite{Galperin2004}. \begin{definition}\label{def2.4} Fix a non-zero window $g\in\mathcal{S}(\bR^d)$, a weight $m\in\mathcal{M}_v({\bR^{2d}})$ and $0<p,q\leq \infty$.
The modulation space $M^{p,q}_m(\bR^d)$ consists of all tempered distributions $f\in\mathcal{S}'(\bR^d)$ such that the (quasi)norm \begin{equation} \|f\|_{M^{p,q}_m}=\|V_gf\|_{L^{p,q}_m}=\left(\int_{\rd}\left(\int_{\rd} |V_g f (x,\omega )|^p m(x,\omega )^p dx \right)^{\frac qp}d\omega\right)^\frac1q \end{equation} (obvious changes with $p=\infty$ or $q=\infty)$ is finite. \end{definition} The best-known modulation spaces are those $M^{p,q}_m(\bR^d)$, with $1\leq p,q\leq \infty$, introduced by H. Feichtinger in \cite{feichtinger-modulation}. In that paper their main properties were exhibited; in particular we recall that they are Banach spaces, whose norm does not depend on the window $g$: different window functions in $\mathcal{S}(\bR^d)$ yield equivalent norms. Moreover, the window class $\mathcal{S}(\bR^d)$ can be extended to the modulation space $M^{1,1}_v(\bR^d)$ (the so-called Feichtinger algebra). For shortness, we write $M^p_m(\bR^d)$ in place of $M^{p,p}_m(\bR^d)$ and $M^{p,q}(\bR^d)$ if $m\equiv 1$. The modulation spaces $M^{p,q}_m(\bR^d)$, $0<p,q<1$, were introduced almost twenty years later by Y.V. Galperin and S. Samarah in \cite{Galperin2004}. In this framework, it appears that the largest natural class of windows universally admissible for all spaces $M^{p,q}_m(\bR^d)$, $0<p,q\leq \infty$ (with weight $m$ having at most polynomial growth) is the Schwartz class $\mathcal{S}(\bR^d)$. Many properties related to the quasi-Banach setting are still unexplored. The focus of this paper is on the quasi-Banach setting, which allows us to infer new results for localization operators.\par In the sequel we shall use inclusion relations for modulation spaces (cf. \cite[Theorem 3.4]{Galperin2004} and \cite[Theorem 12.2.2]{grochenig}): \begin{theorem}\label{inclusionG} Let $m\in\mathcal{M}_v({\bR^{2d}})$. If $0<p_1\leq p_2\leq \infty$ and $0<q_1\leq q_2\leq \infty$ then $M^{p_1,q_1}_m(\bR^d)\subseteq M^{p_2,q_2}_m(\bR^d)$.
\end{theorem} \begin{remark} In our framework it is important to notice the following inclusion relation for $s>0$: \begin{equation}\label{compact} M^{\infty}_{v_s\otimes 1} ({\bR^{2d}}) \subset M^{p,\infty}({\bR^{2d}})\quad \mbox{if}\, \,p>2d/s. \end{equation} This follows from the recent contribution \cite[Theorem 1.5]{Guo2019}. \end{remark} Let us recall convolution relations for modulation spaces. They are contained in the contributions \cite{CG02} and \cite{toft2004} for the Banach framework. The most general case is exhibited in \cite{BCN19}. \begin{proposition}\label{mconvmp} Let $\nu (\omega )>0$ be an arbitrary weight function on $\mathbb{R}^d$, $0< p,q,r,t,u,\gamma\leq\infty$, with \begin{equation}\label{Holderindices} \frac 1u+\frac 1t=\frac 1\gamma, \end{equation} and \begin{equation}\label{Youngindicesrbig1} \frac1p+\frac1q=1+\frac1r,\quad \,\, \text{ for } \, 1\leq r\leq \infty \end{equation} whereas \begin{equation}\label{Youngindicesrless1} p=q=r,\quad \,\, \text{ for } \, 0<r<1. \end{equation} For $m\in\mathcal{M}_v({\bR^{2d}})$, $m_1(x) = m(x,0) $ and $m_2(\omega ) = m(0,\omega )$ are the restrictions to $\mathbb{R}^d\times\{0\}$ and $\{0\}\times\mathbb{R}^d$, and likewise for $v$. Then \begin{equation}\label{mconvm} M^{p,u}_{m_1\otimes \nu}(\mathbb{R}^d)\ast M^{q,t}_{v_1\otimes v_2\nu^{-1}}(\mathbb{R}^d)\subseteq M^{r,\gamma}_m(\mathbb{R}^d) \end{equation} with norm inequality $$\| f\ast h \|_{M^{r,\gamma}_m}\lesssim \|f\|_{M^{p,u}_{m_1\otimes \nu}}\|h\|_{ M^{q,t}_{v_1\otimes v_2\nu^{-1}}}.$$ \end{proposition} \subsection{Frame Theory.} A sequence of functions $\{b_j\,:\,j\in\mathcal{J}\} $ in $L^2(\rd)$ is a \emph{frame} for the Hilbert space $L^2(\rd)$ if there exist positive constants $0<A\leq B<\infty$, such that \begin{equation}\label{frame} A\|f\|_{L^2}^2\leq\sum_{j\in\mathcal{J}}|\langle f, b_j\rangle|^2\leq B \|f\|_{L^2}^2,\quad \forall f\in L^2(\rd).
\end{equation} The constants $A$ and $B$ are called \emph{lower} and \emph{upper} frame bounds, respectively. It is straightforward from \eqref{frame} (or see, e.g., \cite[p. 398]{HeWeiss96}) to check that the elements of a frame satisfy \begin{equation}\label{unif} \|b_j\|_{L^2}\leq \sqrt{B},\quad \forall j \in\mathcal{J}. \end{equation} Using \eqref{unif}, in \cite{CG05} we extended the inequality in \eqref{Schatten-norm-bound} from orthonormal bases to frames. \begin{lemma}\label{ppp} Let $(b_n)_{n}$ be a frame for $L^2(\rd)$, as defined in \eqref{frame}, with upper bound $B$. If $T\in S_p(L^2(\rd))$, for $1\leq p\leq\infty$, then \begin{equation}\label{adj} \left(\sum_{n=1}^\infty |\langle T b_n,b_n\rangle|^p\right)^{1/p}\leq B\|T\|_{S_p}. \end{equation} \end{lemma} Observe that an orthonormal basis is a special instance of a frame with upper bound $B=1$; hence Lemma \ref{ppp} provides an alternative proof of the inequality in \eqref{Schatten-norm-bound}, for every $1\leq p\leq\infty$. In the case $0<p<1$, Lemma \ref{ppp} is false in general. This is a straightforward consequence of the following result \cite[Proposition 22]{ZhuSchatten2015}: \begin{proposition} Suppose $0 < p < 1$ and let $(\phi_n)_n$ be any orthonormal basis for $L^2(\rd)$. Then there exists a positive operator $S \in S_p(L^2(\rd))$ such that $(\langle S\phi_n,\phi_n\rangle )_n\notin \ell^p$. \end{proposition} Since an orthonormal basis is a frame with frame bounds $A=B=1$, it follows that the majorization \eqref{adj} fails for $(\phi_n)_n$ and, consequently, Lemma \ref{ppp} is false for $0<p<1$. For $p\geq 1$, a useful consequence of Lemma \ref{ppp} is as follows (cf. \cite[Corollary 2]{CG05}): \begin{corollary}\label{adj4} Let $(b_n)_n$ be a frame with upper bound $B$. Let $L\in S_\infty(L^2(\rd))$ and $T\in S_p(L^2(\rd))$, with $1\leq p\leq\infty$. Then we have \begin{equation}\label{adj6} \left(\sum_{n=1} ^\infty|\langle T b_n,L b_n\rangle|^p\right)^{1/p}\leq B\|T\|_{S_p}\|L\|_{S_\infty}.
\end{equation} \end{corollary} In \cite[Proposition 10]{Maurice2015}, see also \cite{seip92,seip-wallsten}, it is proved that, if $\alpha\beta<1$ and \begin{equation}\label{fi} \varphi:=2^{d/4}e^{-\pi x^2},\end{equation} then the set of Gaussian time-frequency shifts $(M_{\beta n}T_{\alpha k}\varphi)_{n,k\in \field{Z}^d}$ is a frame for $L^2(\bR^d)$ (called a Gabor frame). In the sequel we shall also use the Gabor frames on $L^2(\rdd)$ given by $$ (M_{\beta n}T_{\alpha k}\Phi)_{k,n \in{\bZ^{2d}}},$$ where $\Phi$ is the $2d$-dimensional Gaussian function below \begin{equation}\label{Phi} \Phi(x,\omega ):=2^{-d}e^{-\pi (x^2+\omega^2)},\quad (x,\omega )\in{\bR^{2d}}. \end{equation} It is easy to compute (or see, e.g., \cite[Lemma 1.5.2]{grochenig}) that \begin{equation}\label{Gauss0}V_\varphi \varphi(x,\omega )=2^{-d/2} e^{-\pi i x \omega}e^{-\frac\pi 2(x^2+\omega^2)}.\end{equation} \begin{definition}\label{ellp} For $0<p,q\leq \infty$, $m\in \mathcal{M}_v({\bZ^{2d}})$, the space $\ell^{p,q}_m({\bZ^{2d}})$ consists of all sequences $c=(c_{k,n})_{k,n\in\bZ^d}$ for which the (quasi-)norm $$\|c\|_{\ell^{p,q}_m}=\left(\sum_{n\in\bZ^d}\left(\sum_{k\in\bZ^d}|c_{k,n}|^p m(k,n)^p\right)^{\frac qp}\right)^{\frac 1q} $$ (with obvious modification for $p=\infty$ or $q=\infty$) is finite. \end{definition} For $p=q$, $\ell^{p,q}_m({\bZ^{2d}})=\ell^p_m({\bZ^{2d}})$, the standard spaces of sequences. Namely, in dimension $d$, for $0<p\leq \infty$, $m$ a weight function on $\bZ^d$, a sequence $c=(c_k)_{k\in\bZ^d}$ is in $\ell^p_m(\bZ^d)$ if $$\|c\|_{\ell^{p}_m}=\left(\sum_{k\in\bZ^d}|c_{k}|^p m(k)^p\right)^{\frac 1p}<\infty. $$ Equivalent discrete modulation space norms are produced by means of Gabor frames. The key result is the following characterization of the $M^{p,q}_m$-norm of localization symbols (see \cite[Chapter 12]{grochenig} for $1\leq p,q\leq\infty$, and \cite[Theorem 3.7]{Galperin2004} for $0<p,q<1$).
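Before stating it, we note that the mixed (quasi-)norms of Definition \ref{ellp} are easy to evaluate on truncated sequences; the following sketch is our own sanity check (array rows indexed by $k$, columns by $n$), verifying that $\ell^{p,p}_m=\ell^p_m$ and that the norms decrease as the exponents increase, in accordance with the inclusion relations.

```python
import numpy as np

def mixed_norm(c, p, q, m=None):
    """||c||_{l^{p,q}_m}: inner l^p sum over k (rows), outer l^q sum over n (columns)."""
    if m is None:
        m = np.ones_like(c, dtype=float)
    inner = (np.abs(c * m) ** p).sum(axis=0) ** (1.0 / p)   # l^p in k, for each n
    return float((inner ** q).sum() ** (1.0 / q))            # l^q in n

rng = np.random.default_rng(1)
c = rng.standard_normal((8, 8))

# for p = q the mixed norm reduces to the plain l^p norm
assert np.isclose(mixed_norm(c, 1.5, 1.5), (np.abs(c) ** 1.5).sum() ** (1 / 1.5))
# the (quasi-)norms decrease as the exponents increase (inclusion of the spaces)
assert mixed_norm(c, 2, 2) <= mixed_norm(c, 1, 2) <= mixed_norm(c, 1, 1)
```

For $0<p<1$ or $0<q<1$ the same formula defines only a quasi-norm (the triangle inequality fails), which is precisely the quasi-Banach situation considered above.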
\begin{theorem}\label{framesmod} Assume $m\in\mathcal{M}_v({\bR^{2d}})$, $0<p,q\leq\infty$. Consider the Gabor frame $(M_{\beta n}T_{\alpha k}\Phi)_{k,n \in{\bZ^{2d}}}$ with Gaussian window $\Phi$ in \eqref{Phi}. Then, for every $a \in M_m^{p,q}({\bR^{2d}})$, \begin{equation}\label{idea} \|a\|_{M^{p,q}_m({\bR^{2d}})}\asymp \|(\langle a ,M_{\beta n}T_{\alpha k}\Phi\rangle_{n,k\in{\bZ^{2d}}})_{n,k\in{\bZ^{2d}}}\|_{\ell_m^{p,q}(\field{Z}^{4d})}. \end{equation} \end{theorem} \subsection{Time-frequency Tools} In the sequel we shall need to compute the STFT of the cross-Wigner distribution, recalled below from \cite[Lemma 14.5.1]{grochenig}: \begin{lemma}\label{STFTSTFT} Fix a nonzero $g \in \mathcal{S} (\mathbb{R}^d ) $ and let $\Phi=W (g , g ) \in\mathcal{S}(\mathbb{R}^{2d})$. Then the STFT of $W(f _1, f _2) $ with respect to the window $\Phi $ is given by \begin{equation} \label{eql4} { {V}}_\Phi (W(f_1,f_2)) (z, \zeta ) =e^{-2\pi i z_2\zeta_2}\overline{V_{g }f_2(z_1+\frac{\zeta_2}2,z_2-\frac{\zeta_1}2)}\,V_{g }f_1(z_1-\frac{\zeta_2}2,z_2+\frac{\zeta_1}2)\, . \end{equation} \end{lemma} The following properties of the STFT (cf. \cite[Lemma 1]{CG05}) will be used to prove necessary Schatten class conditions for localization operators. \begin{lemma} If $z=(z_1,z_2)\in\mathbb{R}^{2d}$, $\zeta=(\zeta_1,\zeta_2)\in\mathbb{R}^{2d}$, then \begin{align}\label{eqr4} T_{(z_1,z_2)}({\overline{V_{\varphi_1}f}}\cdot V_{\varphi_2}g ) (x,\omega ) &= {\overline{V_{\varphi_1}(M_{z_2}T_{z_1}f)}(x,\omega )} \, V_{\varphi_2}(M_{z_2}T_{z_1}g)(x,\omega ), \,\\ \label{eqr5} M_{(\zeta_1,\zeta_2)}\big({\overline{V_{\varphi_1}f}}\,V_{\varphi_2}g \big) (x,\omega ) &= \overline{V_{\varphi_1}f(x,\omega ) } \, V_{(M_{\zeta_1}T_{-\zeta_2}\varphi_2)}( M_{\zeta_1}T_{-\zeta_2}g)(x,\omega ), \, \\ \label{bo} M_\zeta T_z({\overline{V_{\varphi_1}f}}V_{\varphi_2}g) &= {\overline{V_{\varphi_1}(M_{z_2}T_{z_1}f)}}V_{(M_{\zeta_1}T_{-\zeta_2}\varphi_2)}(M_{\zeta_1}T_{-\zeta_2}M_{z_2}T_{z_1}g).
\end{align} \end{lemma} \section{Sufficient Conditions for Schatten Class $S_p$, $0<p\leq \infty$} In this Section we present sufficient conditions for Schatten class properties of localization operators. The Banach case $p\geq 1$ was studied in \cite{CG02,CG05}. The main result (cf. Theorem \ref{class} below) will take care of the full range $0<p\leq\infty$. First, we need to recall similar properties for Weyl operators, obtained in several papers; we refer the interested reader to \cite{CG02,grochenig,GH99,Sjo94,toft2004}. \begin{theorem}\label{Charly1} For $0<p\leq\infty$, we have:\\ (i) If $0< p \leq 2$ and $\sigma \in M^{p} (\mathbb{R}^{2d} )$, then $L_\sigma \in S_p$ and $\|L_\sigma \|_{S_p} \lesssim \|\sigma \|_{M^{p}}$.\\ (ii) If $2\leq p \leq \infty$ and $\sigma \in M^{p,p'} (\mathbb{R}^{2d} )$, then $L_\sigma \in S_p$ and $\|L_\sigma \|_{S_p} \lesssim \|\sigma \|_{M^{p,p'}}$. \end{theorem} \begin{proof} The proof for $p\geq 1$ can be found in \cite[Theorem 3.1]{CG02}, see also references therein. The case $0<p<1$ is contained in \cite[Theorem 3.4]{ToftquasiBanach2017}. \end{proof} We now focus on the cross-Wigner distribution, which enjoys the following continuity properties. \begin{theorem} \label{T1} Assume $p_i,q_i,p,q\in (0,\infty]$, $i=1,2$, $s\in \field{R}$, such that \begin{equation}\label{WIR} p_i,q_i\leq q, \ \quad i=1,2 \end{equation} and that \begin{equation}\label{Wigindexsharp} \frac1{p_1}+\frac1{p_2}\geq \frac1{p}+\frac1{q},\quad \frac1{q_1}+\frac1{q_2} \geq \frac1{p}+\frac1{q}. \end{equation} Then, if $f_1\in M^{p_1,q_1}_{v_{|s|}}(\mathbb{R}^d)$ and $f_2\in M^{p_2,q_2}_{v_s}(\mathbb{R}^d)$ we have $W(f_1,f_2)\in M^{p,q}_{1\otimes v_s}(\mathbb{R}^{2d})$, and \begin{equation}\label{wigest} \| W(f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}}\lesssim \|f_1\|_{M^{p_1,q_1}_{v_{|s|}}}\| f_2\|_{M^{p_2,q_2}_{v_s}}.
\end{equation} \par Vice versa, assume that there exists a constant $C>0$ such that \begin{equation}\label{Wigestsharp} \|W(f_1,f_2)\|_{M^{p,q}}\leq C \|f_1\|_{M^{p_1,q_1}} \|f_2\|_{M^{p_2,q_2}},\quad \forall f_1,f_2\in\mathcal{S}({\bR^{2d}}). \end{equation} Then \eqref{WIR} and \eqref{Wigindexsharp} must hold. \end{theorem} \begin{proof} \emph{Sufficient Conditions.} The result for the indices $p_i,q_i,p,q\in [1,\infty]$ is proved in \cite[Theorem 3.1]{Wignersharp2018}. The general case follows easily from that one, since the main tool is provided by the inclusion relations for modulation spaces in Theorem \ref{inclusionG}. We detail its steps for the sake of clarity.\par First, consider the case $0<p,q<\infty$. Let $g\in \mathcal{S} (\bR^d ) $ and set $\Phi=W(g,g)\in\mathcal{S}(\mathbb{R}^{2d})$. If $\zeta = (\zeta_1,\zeta_2)\in \mathbb{R}^{2d}$, we write $\tilde{\zeta } = (\zeta _2,-\zeta _1)$. Then, from Lemma \ref{STFTSTFT}, \begin{equation}\label{e0} |{{V}}_\Phi (W(f_1,f_2))(z,\zeta)| =| V_g f_2(z +\tfrac{\tilde{\zeta }}{2})| \, |V_g f_1(z - \tfrac{\tilde{\zeta }}{2})| \,. \end{equation} Hence, \begin{equation*} \|W( f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}} \asymp \left(\int_{\rdd}\!\left(\int_{\rdd}\! | V_g f_2(z +\tfrac{\tilde{\zeta }}{2})|^p \, |V_g f_1(z - \tfrac{\tilde{\zeta }}{2})|^p \, dz \right)^\frac{q}{p} \, \langle \zeta \rangle ^{sq} \, d\zeta \right)^{1/q}. \end{equation*} Making the change of variables $z \mapsto z-\tilde{\zeta } /2$, the integral over $z$ becomes the convolution $(|V_g f_2|^p\ast |(V_g{f_1})^*|^p)(\tilde{\zeta })$, and observing that $(1\otimes v_s) (z,\zeta ) = \langle \zeta \rangle ^s = v_s (\zeta )= v_s (\tilde{\zeta })$, we obtain \begin{eqnarray*} \|W(f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}} &\asymp& \left(\iint_{\mathbb{R}^{2d}}\!(|V_g f_2|^p\ast |(V_g {f_1})^*|^p)^\frac{q}{p}(\tilde{\zeta}) v_s(\tilde{\zeta})^{q} \, d\zeta \right)^{1/q}\\ &=& \| \, |V_g f_2|^p\ast |(V_g{ f_1})^*|^p \, \|^{\frac 1 p}_{L^\frac{q}{p}_{v _{ps}}}.
\end{eqnarray*} Hence \begin{equation}\label{e1} \|W(f_1,f_2)\|^p_{M^{p,q}_{1\otimes v_s}}\asymp \| \, |V_g f_2|^p\ast |(V_g{ f_1})^*|^p \, \|_{L^\frac{q}{p}_{v _{ps}}}. \end{equation} \emph{Case $0<p\leq q<\infty$}.\par \emph{Step 1.} Consider first the case $p\leq p_i,q_i$, $i=1,2$, satisfying the condition \begin{equation}\label{Wigindex} \frac1{p_1}+\frac1{p_2}=\frac1{q_1}+\frac1{q_2}=\frac1{p}+\frac1{q}, \end{equation} (and hence $p_i,q_i\leq q$, $i=1,2$). Since $q/p\geq 1$, we can apply Young's Inequality for mixed-normed spaces \cite{Galperin2014} and majorize \eqref{e1} as follows \begin{align*} \|W(f_1,f_2)\|^p_{M^{p,q}_{1\otimes v_s}}&\lesssim \| \, |V_g f_2|^p\|_{L^{r_2,s_2}_{v _{ps}}}\|\, |(V_g{ f_1})^*|^p \|_{L^{r_1,s_1}_{v _{p|s|}}} \, \\ &= \| |V_g{ f_1}|^p \|_{L^{r_1,s_1}_{v _{p|s|}}} \| \, |V_g f_2|^p\|_{L^{r_2,s_2}_{v _{ps}}}\, \\ &= \| V_g{ f_1} \|^p_{L^{pr_1,ps_1}_{v _{|s|}}} \| V_g f_2\|^p_{L^{p r_2,ps_2}_{v _{s}}}\, , \end{align*} for every $1\leq r_1,r_2,s_1,s_2\leq\infty$ such that \begin{equation}\label{e2} \frac 1{r_1}+\frac{1}{r_2}=\frac 1{s_1}+\frac{1}{s_2}=1+ \frac{p}{q}. \end{equation} Choosing $r_i=p_i/p\geq 1$, $s_i=q_i/p\geq 1$, $i=1,2$, the index relation \eqref{e2} becomes \eqref{Wigindex} and we obtain $$ \|W(f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}}\lesssim \| V_g{ f_1} \|_{L^{p_1,q_1}_{v _{|s|}}} \| V_g f_2\|_{L^{p_2,q_2}_{v _{s}}}\asymp \|f_1\|_{M^{p_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{p_2,q_2}_{v_s}}. $$ Now, still assume $p\leq p_i,q_i$, $i=1,2$, but $$\frac 1{p_1}+\frac 1{p_2}\geq \frac{1}{p}+\frac 1q,\quad \frac 1{q_1}+\frac 1{q_2}= \frac{1}{p}+\frac 1q, $$ (hence $p_i,q_i\leq q$, $i=1,2$).
We set $u_1=t p_1$, and look for $t\geq 1$ (hence $u_1\geq p_1$) such that $$\frac 1 {u_1}+\frac 1{p_2}= \frac{1}{p}+\frac 1q, $$ which gives $$0<\frac 1t=\frac{p_1}{p}+\frac{p_1}{q}-\frac{p_1}{p_2}\leq1 $$ because $p_1(1/p+1/q)-p_1/p_2\leq p_1(1/p_1+1/p_2)-p_1/p_2=1$, whereas the lower bound in the previous estimate follows from $1/(tp_1)=1/p+1/q-1/p_2>0$, since $p\leq p_2$. Hence the previous part of the proof gives \begin{equation*} \|W(f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}}\lesssim \|f_1\|_{M^{u_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{p_2,q_2}_{v_s}}\lesssim \|f_1\|_{M^{p_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{p_2,q_2}_{v_s}}, \end{equation*} where the last inequality follows by the inclusion relations for modulation spaces $M^{p_1,q_1}_{v_{|s|}}(\bR^d)\subseteq M^{u_1,q_1}_{v_{|s|}}(\bR^d)$ for $p_1\leq u_1$. The general case $$\frac 1{p_1}+\frac 1{p_2}\geq \frac{1}{p}+\frac 1q,\quad \frac 1{q_1}+\frac 1{q_2}\geq \frac{1}{p}+\frac 1q, $$ is similar.\par \noindent \emph{Step 2.} Assume now that $0<p_i,q_i\leq q$, $i=1,2$, satisfy relation \eqref{Wigindexsharp}. If at least one of the indices $p_1, p_2$ is less than $p$, say $p_1\leq p$, whereas $p\leq q_1,q_2$, then we proceed as follows. We choose $u_1=p$, $u_2=q$, and deduce by the results in Step 1 (with $p_1=u_1$ and $p_2=u_2$) that $$ \|W(f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}}\lesssim \|f_1\|_{M^{u_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{u_2,q_2}_{v_s}}\lesssim\|f_1\|_{M^{p_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{p_2,q_2}_{v_s}} $$ where the last inequality follows by the inclusion relations for modulation spaces, since $p_1\leq u_1=p$ and $p_2\leq u_2=q$.\par Similarly we argue when at least one of the indices $q_1, q_2$ is less than $p$ and $p\leq p_1,p_2$, or when at least one of the indices $q_1, q_2$ is less than $p$ and at least one of the indices $p_1, p_2$ is less than $p$. The remaining case $p\leq p_i,q_i\leq q$ is treated in Step 1. \noindent\emph{Case $0<p<q=\infty$.} The arguments are similar to the case $0<p\leq q<\infty$.
\noindent\emph{Case $p=q=\infty$.} We use \eqref{e0} and the submultiplicative property of the weight $v_s$, \begin{align*} \|W(f_1,f_2)\|_{M^{\infty}_{1\otimes v_s}}&=\sup_{z,\zeta\in{\bR^{2d}}}| V_g f_2(z +\tfrac{\tilde{\zeta }}{2})| \, |V_g f_1(z - \tfrac{\tilde{\zeta }}{2})| v_s(\zeta)\\ &=\sup_{z,\zeta\in{\bR^{2d}}}| V_g f_2(z)| \, |(V_g f_1)^*(z - \tilde{\zeta })| v_s(\zeta )\\ &=\sup_{z,\zeta\in{\bR^{2d}}}| V_g f_2(z)| \, |(V_g f_1)^*(z - \tilde{\zeta })| v_s(\tilde{\zeta })\\ &\leq \sup_{z\in{\bR^{2d}}}(\|V_g f_1 v_{|s|}\|_{\infty} \,|V_g f_2(z) v_s(z)|)= \|V_g f_1 v_{|s|}\|_{\infty}\|V_g f_2 v_s\|_{\infty}\\ &\asymp\|f_1\|_{M^\infty_{v_{|s|}}}\|f_2\|_{M^\infty_{v_s}}\leq \|f_1\|_{M^{p_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{p_2,q_2}_{v_s}}, \end{align*} for every $0< p_i,q_i\leq \infty$, $i=1,2$. \noindent\emph{Case $p>q$.} Using the inclusion relations for modulation spaces, we majorize $$\|W(f_1,f_2)\|_{M^{p,q}_{1\otimes v_s}}\lesssim \|W(f_1,f_2)\|_{M^{q,q}_{1\otimes v_s}}\lesssim \|f_1\|_{M^{p_1,q_1}_{v_{|s|}}}\|f_2\|_{M^{p_2,q_2}_{v_s}}$$ for every $0< p_i,q_i\leq q$, $i=1,2$. Here we have applied the case $p\leq q$ with $p=q$. Notice that in this case condition \eqref{Wigindexsharp} is trivially satisfied, since from $p_i,q_i\leq q$ we infer $1/p_1+1/p_2\geq 1/q+1/q$, $1/q_1+1/q_2\geq 1/q+1/q$. This ends the proof of the sufficient conditions.\par \emph{Necessary Conditions.} The proof works exactly the same as that of \cite[Theorem 3.5]{Wignersharp2018}. In fact, the main point is the use of the $M^{r,s}$-norm of the rescaled Gaussian $\varphi_\lambda(x)=\varphi(\sqrt{\lambda} x)$, with $\varphi(x)=e^{-\pi x^2}$, for which one computes (see also \cite[Lemma 3.2]{cordero2} and \cite[Lemma 1.8]{toft2004}): $$ \|\varphi_\lambda\|_{M^{r,s}}\asymp \lambda^{-\frac d {2r}}(\lambda+1)^{-\frac d2(1-\frac1s-\frac1r)},$$ for every $0<r,s\leq\infty$. \end{proof} Based on the tools developed above, we establish the following Schatten class results for localization operators.
\begin{theorem}\label{class} For $s\geq 0$, we have the following statements.\\ (i) If $0< p <1$, then the mapping $(a,\varphi _1, \varphi _2) \mapsto A_a^{\varphi_1,\varphi_2} $ is bounded from $M^{p,\infty }_{1\otimes v_{-s}}({\bR^{2d}} ) \times M^p_{v_s} (\bR^d )\times M^p_{v_s} (\bR^d )$ into $S_p$: $$\|A^{\f_1,\f_2}_a\|_{S_p}\lesssim \|a\|_{M^{p,\infty}_{1\otimes v_{-s}}}\|\varphi_1\|_{M^p_{v_s}}\|\varphi_2\|_{M^p_{v_s}}\, .$$ (ii) If $1\leq p \leq 2$, then the mapping $(a,\varphi _1, \varphi _2) \mapsto A_a^{\varphi_1,\varphi_2} $ is bounded from $M^{p,\infty }_{1\otimes v_{-s}}({\bR^{2d}} ) \times M^1_{v_s} (\bR^d )\times M^p_{v_s} (\bR^d )$ into $S_p$: $$\|A^{\f_1,\f_2}_a\|_{S_p}\lesssim \|a\|_{M^{p,\infty}_{1\otimes v_{-s}}}\|\varphi_1\|_{M^1_{v_s}}\|\varphi_2\|_{M^p_{v_s}}\, .$$ (iii) If $2 \leq p \leq \infty$, then the mapping $(a,\varphi _1, \varphi _2) \mapsto A_a^{\varphi_1,\varphi_2} $ is bounded from $M^{p,\infty }_{1\otimes v_{-s}} \times M^1_{v_s}\times M^{p'}_{v_s}$ into $S_p$: $$\|A^{\f_1,\f_2}_a\|_{S_p}\lesssim \|a\|_{M^{p,\infty}_{1\otimes v_{-s}}}\|\varphi_1\|_{M^1_{v_s}}\|\varphi_2\|_{M^{p'}_{v_s}}\, .$$ \end{theorem} \begin{proof} ({\it i}) If $\varphi_1 \in M^{p}_{v_s}(\mathbb{R}^d)$ and $\varphi_2 \in M^{p}_{v_s}(\mathbb{R}^d)$, then $W(\varphi_2,\varphi_1)\in M^{p}_{1\otimes v_{s}}(\mathbb{R}^{2d})$ by \eqref{wigest}. Since $a\in M^{p,\infty } _{1\otimes v_{-s}}$, the convolution relation $M^{p,\infty} _{1\otimes v_{-s}}({\bR^{2d}}) \ast M^{p}_{1\otimes v_{s}}({\bR^{2d}}) \subseteq M^{p}({\bR^{2d}})$ of Proposition \ref{mconvmp} implies that the Weyl symbol $\sigma=a\ast W(\varphi_2,\varphi_1)$ is in $M^{p}({\bR^{2d}})$. The result now follows from Theorem \ref{Charly1} ({\it i}). The items ({\it ii}) and ({\it iii}) are proved similarly, see \cite[Theorem 3.1]{CG02}.
\end{proof} \begin{corollary} Any localization operator $A_a^{\varphi_1,\varphi_2}$ with symbol $a$ in $ M^{\infty}_{v_s\otimes 1} ({\bR^{2d}})$, $s>0$, and windows $\varphi_1,\varphi_2$ in $\mathcal{S}(\bR^d)$ is a compact operator belonging to the Schatten class $S_p(L^2(\rd))$, with $p>2d/s$. \end{corollary} \begin{proof} It immediately follows from the inclusion relations for modulation spaces in \eqref{compact} and the sufficient conditions in Theorem \ref{class}. \end{proof} \section{Necessary Conditions} The necessary conditions for Schatten class localization operators in the Banach case $p\geq 1$ are contained in the work \cite[Theorem 1 (b)]{CG05}; see also \cite{FG2006}, where the results in \cite[Theorem 1 (b)]{CG05} are recaptured by using different techniques. Before stating the necessary conditions, observe that using the inclusion relations for modulation spaces in Theorem \ref{inclusionG}, one can rephrase the unweighted sufficient conditions in Theorem \ref{class} as follows. \begin{theorem}\label{classLargercond} If $1 \leq p \leq \infty$, then the mapping $(a,\varphi _1, \varphi _2) \mapsto A_a^{\varphi_1,\varphi_2} $ is bounded from $M^{p,\infty }({\bR^{2d}} ) \times M^1 (\bR^d )\times M^1(\bR^d )$ into $S_p(L^2(\rd))$, i.e., $$\|A^{\f_1,\f_2}_a\|_{S_p}\leq C \|a\|_{M^{p,\infty}}\|\varphi_1\|_{M^1}\|\varphi_2\|_{M^1}\,$$ for a suitable constant $C>0$. \end{theorem} \begin{proof} The inequality immediately follows from Theorem \ref{class} and the estimate $\|\varphi_2\|_{M^p}\lesssim \|\varphi_2\|_{M^1}$, for any $p\geq 1$, given by the inclusion relation $M^1(\bR^d)\subset M^p(\bR^d)$. \end{proof} The converse of the sufficient conditions above is shown hereafter. \begin{theorem} \label{main} Consider $1\leq p\leq \infty$.
If $A^{\f_1,\f_2}_a\in S_p(L^2(\rd))$ for every pair of windows $\varphi_1,\varphi_2\in\mathcal{S}(\bR^d)$ with norm estimate \begin{equation}\label{mt} \|A^{\f_1,\f_2}_a\|_{S_{p}}\leq C\, \|\varphi_1\|_{ M^1}\, \|\varphi_2\|_{ M^1}, \end{equation} where the constant $C>0$ depends only on the symbol $a$, then $a\in M^{p,\infty}({\bR^{2d}})$. \end{theorem} In what follows we detail the main steps of the proof, in order to underline the tools employed. The key role is played by Corollary \ref{adj4}, together with the characterization of the $M^{p,\infty}({\bR^{2d}})$-norm of the symbol $a$ via Gabor frames.\par \vspace{0.3truecm} \noindent \emph{ Sketch of the proof of Theorem \ref{main}.}\\ Consider $0<\alpha,\beta<1$, the Gaussian $\Phi(x,\omega )= 2^{-d}e^{-\pi(x^2+\omega^2)}\in\mathcal{S}({\bR^{2d}})$ in \eqref{Phi} and the Gabor frame $(M_{\beta n}T_{\alpha k}\Phi)_{n,k\in{\bZ^{2d}}}$. We compute the $M^{p,\infty}({\bR^{2d}})$-norm of the symbol $a$ in $A^{\f_1,\f_2}_a$ by using the norm characterization in \eqref{idea} \begin{equation}\label{idea2} \|a\|_{M^{p,\infty}({\bR^{2d}})}\asymp \|\langle a ,M_{\beta n}T_{\alpha k}\Phi\rangle_{n,k\in{\bZ^{2d}}}\|_{\ell^{p,\infty}(\field{Z}^{4d})}. \end{equation} Using \eqref{Gauss0} we can write \begin{equation}\label{Gauss?}\Phi(x,\omega )=2^{-d}e^{-\pi (x^2+\omega^2)}=V_\varphi \varphi(x,\omega )\overline{V_\varphi \varphi(x,\omega )}.\end{equation} Now, let $k=(k_1,k_2), n=(n_1,n_2)\in{\bZ^{2d}}$; by \eqref{Gauss?} and Formula \eqref{bo}, the time-frequency shifts of $\Phi$ can be expressed as the point-wise product of two STFTs: \begin{eqnarray*}M_{\beta n}T_{\alpha k}\Phi(x,\omega )&=&M_{ (\beta n_1,\beta n_2)}T_{ (\alpha k_1,\alpha k_2)}(V_\varphi \varphi\overline{V_\varphi \varphi})(x,\omega )\\ &=&V_{(M_{\beta n_1}T_{-\beta n_2} \varphi)}(M_{\beta n_1} T_{-\beta n_2} M_{\alpha k_2}T_{\alpha k_1} \varphi)\cdot\overline{V_\varphi (M_{\alpha k_2}T_{\alpha k_1} \varphi)}.
\end{eqnarray*} Using the weak definition of localization operator given in \eqref{anti-Wickg}, we can write \begin{equation}\label{???} \langle a, M_{\beta n}T_{\alpha k}\Phi\rangle=\langle A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}(M_{\alpha k_2}T_{\alpha k_1} \varphi), M_{\beta n_1} T_{-\beta n_2} M_{\alpha k_2}T_{\alpha k_1} \varphi\rangle. \end{equation} The $M^{p,\infty}$-norm of the symbol $a$ can be recast as \begin{eqnarray*} \|a\|_{M^{p,\infty}}\!\!&\asymp&\|\langle a,M_{\beta n}T_{\alpha k}\Phi\rangle_{n,k\in{\bZ^{2d}}}\|_{\ell^{p,\infty}(\field{Z}^{4d})}\\ &=&\sup_{n\in{\bZ^{2d}}}\left(\sum_{k \in{\bZ^{2d}}}|\langle a,M_{\beta n}T_{\alpha k}\Phi\rangle|^p\right)^{1/p}\\ &=&\!\!\!\sup_{(n_1,n_2)\in{\bZ^{2d}}}\!\left(\sum_{(k_1,k_2) \in{\bZ^{2d}}}\!\!|\langle A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}(M_{\alpha k_2}T_{\alpha k_1} \varphi), M_{\beta n_1} T_{-\beta n_2} M_{\alpha k_2}T_{\alpha k_1} \varphi\rangle|^p\right)^{1/p}. \end{eqnarray*} We apply the assumption \eqref{mt} to the localization operators $A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}$; in fact, for every choice of $\beta,n_1,n_2$, the functions $M_{\beta n_1}T_{-\beta n_2} \varphi$ are in the Schwartz class $\mathcal{S}(\bR^d)$, so that the localization operators satisfy the uniform estimate \begin{equation}\label{e3} \| A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}\|_{S_p}\le C \|\varphi\|_{M^1} \|M_{\beta n_1}T_{-\beta n_2} \varphi\|_{M^1}= C\|\varphi\|_{M^1}^2,\end{equation} since the time-frequency shifts are isometries on $M^1(\bR^d)$.
Finally, applying Corollary \ref{adj4} with the Gabor frame $(M_{\alpha k_2} T_{\alpha k_1} \varphi )_{k_1,k_2\in\bZ^d}$ and the operators $T=A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}\in S_p$ and $L=M_{\beta n_1} T_{-\beta n_2}\in S_\infty$, we can majorize the norm $\|a\|_{M^{p,\infty}}$ as \begin{align*} &\|a\|_{M^{p,\infty}}\\ &\,\,\asymp\!\!\sup_{(n_1,n_2)\in{\bZ^{2d}}}\!\!\|\langle A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}(M_{\alpha k_2}T_{\alpha k_1} \varphi), M_{\beta n_1} T_{-\beta n_2} M_{\alpha k_2}T_{\alpha k_1} \varphi\rangle_{(k_1,k_2) \in{\bZ^{2d}}}\|_{\ell^p({\bZ^{2d}})}\\ &\,\,\lesssim \sup_{(n_1,n_2)\in{\bZ^{2d}}}\| A_a^{\varphi,(M_{\beta n_1}T_{-\beta n_2} \varphi)}\|_{S_p}\\ &\,\,\lesssim \sup_{(n_1,n_2)\in{\bZ^{2d}}} \|\varphi\|^2_{M^1}= \|\varphi\|^2_{M^1}<\infty, \end{align*} where in the last inequality we used \eqref{e3}. \endproof\\ \subsection{Conclusion and Perspectives} As it becomes clear from the previous proof, we cannot expect to prove necessary conditions for small $p$, that is $0<p<1$, using techniques similar to those for the case $p\geq 1$. The main obstruction is that Corollary \ref{adj4} does not hold for $0<p<1$. Observe that the discrete modulation norm via Gabor frames in \eqref{idea2} remains valid also for $0<p<1$. In view of the sufficient conditions in Theorem \ref{class}, we conjecture that a necessary condition of the type expressed below should hold true. \begin{theorem}\label{conjectureSmallp} For $0<p<1$, if $A^{\f_1,\f_2}_a$ is in $S_p(L^2(\rd))$ for every pair of windows $\varphi_1,\varphi_2\in\mathcal{S}(\bR^d)$ and there exists a $C>0$ such that $$\|A^{\f_1,\f_2}_a\|_{S_p}\leq C \|\varphi_1\|_{M^p}\|\varphi_2\|_{M^p},\quad \varphi_1,\varphi_2\in\mathcal{S}(\bR^d),$$ then $a\in M^{p,\infty}({\bR^{2d}})$. \end{theorem} \section*{Acknowledgments} The author wishes to thank Prof. Fabio Nicola for his suggestions and comments.
\section{Introduction} The congruence subgroups of braid groups arise from a congruence condition on the integral reduced Burau representation. They are finite-index normal subgroups of the braid group and give insight into the braid groups. This generalizes both how congruence subgroups are used in the study of integral linear groups (like $\SL_n(\Z)$) and how pure braid groups are used in the study of braid groups. The \emph{integral (reduced) Burau representation} $\rho\colon B_n\rightarrow \GL_{n-1}(\mathbb{Z})$ is the Burau representation specialized at $t=-1$, \[\rho\colon B_n\xrightarrow{\text{Burau}}{}\GL_{n-1}(\mathbb{Z}[t^{\pm1}])\xrightarrow{t=-1}{} \GL_{n-1}(\mathbb{Z}).\] The \emph{level $\ell$ congruence subgroup} of the braid group on $n$ strands $B_n$, denoted $B_n[\ell]$, is defined to be the preimage of the level $\ell$ congruence subgroup of the general linear group, or more explicitly, the kernel of the following composition: \[B_n[\ell]:= \ker\bigg(B_n \stackrel{\rho}{\longrightarrow} \GL_{n-1}(\Z)\xrightarrow{\!\!\!\!\!\!\mod\ell}{} \GL_{n-1}(\Z/\ell\Z) \bigg) = \{ b\in B_n \mid \rho(b) \equiv I_{n-1} \mod \ell\}.\] These subgroups have been studied by many authors, e.g. A'Campo \cite{ACampo}, Appel--Bloomquist--Gravel--Holden \cite{ABGH}, Arnol\cprime{}d \cite{A68}, Assion \cite{Assion}, Brendle \cite{brendle}, Brendle--Margalit \cite{BM18}, Kordek--Margalit \cite{KM19}, McReynolds \cite{McReynolds}, Nakamura \cite{Nakamura}, and Stylianakis \cite{S18}. Despite the extensive study, many questions about these groups remain open. For example, their group homology (even their abelianization) is generally unknown; the exceptions are $B_n[2]$, which Arnol\cprime{}d proved to be the pure braid group on $n$ strands \cite{A68}, and $B_n[4]$, whose first rational homology was computed by Kordek--Margalit \cite{KM19}. 
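To get a concrete feel for this definition, the following small Python sketch (purely illustrative, not part of the paper) tests membership in $B_3[\ell]$ using the integral Burau matrices of the two Artin generators of $B_3$, which appear explicitly in Section 2; all helper names are ours.

```python
# Illustrative sanity check (not part of the paper): membership in B_3[l]
# via the integral Burau representation rho: B_3 -> GL_2(Z).

def mat_mul(A, B):
    """Multiply two square integer matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Integral Burau images of the Artin generators of B_3 (Burau at t = -1).
RHO = {1: [[1, 1], [0, 1]], 2: [[1, 0], [-1, 1]]}
RHO_INV = {1: [[1, -1], [0, 1]], 2: [[1, 0], [1, 1]]}

def burau(word):
    """rho(b) for a braid word, e.g. [1, -2, 1] means sigma_1 sigma_2^{-1} sigma_1."""
    M = [[1, 0], [0, 1]]
    for g in word:
        M = mat_mul(M, RHO[g] if g > 0 else RHO_INV[-g])
    return M

def in_congruence_subgroup(word, l):
    """Is the braid in B_3[l], i.e. is rho(b) congruent to I mod l?"""
    M = burau(word)
    return all((M[i][j] - (1 if i == j else 0)) % l == 0
               for i in range(2) for j in range(2))

print(in_congruence_subgroup([1], 2))     # sigma_1 is not in B_3[2]
print(in_congruence_subgroup([1, 1], 2))  # sigma_1^2 is a pure braid, hence in B_3[2]
```

For instance, $\rho(\sigma_1^2)=\left(\begin{smallmatrix}1&2\\0&1\end{smallmatrix}\right)\equiv I \bmod 2$, consistent with Arnol\cprime{}d's identification $B_n[2]\cong PB_n$.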
Various open questions about the integral Burau representation can be found in Section 3 of Margalit's problem list \cite{margalitproblems} and Brendle's mini-course notes \cite{brendle}. One fundamental line of questioning is to understand the image of the integral Burau representation through relevant restrictions and quotients.\\ In this paper we provide answers to the following \emph{questions}: What is the image of \begin{enumerate} \item the integral Burau representation $\rho\colon B_n\rightarrow \GL_{n-1}(\mathbb{Z})$? \item the integral Burau representation reduced modulo $\ell$, $B_n\rightarrow \GL_{n-1}(\Z/\ell\Z)$? \item $B_n[\ell]$ under $\rho$, $B_n[\ell]\rightarrow \GL_{n-1}(\mathbb{Z})$? (Problem 3.4 in \cite{margalitproblems})\\ \end{enumerate} Question (3), or Problem 3.4 in \cite{margalitproblems}, has been solved for $B_n[2]$ by Brendle--Margalit \cite{BM18}. In this paper, we answer Question (3) completely; the result is stated below in \autoref{thm:prob3.4}. Question (1) is in fact a special case of Question (3) at level $\ell = 1$, which we answer in \autoref{cor:F}. We also answer Question (2) and describe the image of $B_n \to \GL_{n-1}(\Z/\ell\Z)$, or equivalently the quotients $B_n/B_n[\ell]$, in \autoref{thm:mainresult}. There has already been considerable progress on this problem, which we outline here. 
For all $n$, each odd $\ell$, and odd prime $p$, \begin{itemize} \item $B_n/B_n[2]\cong S_n$, the symmetric group on $n$ letters \hfill \cite{A68} \item $B_{2n+1}/B_{2n+1}[p]\cong \Sp_{2n}(\mathbb{Z}/p\mathbb{Z})$, the symplectic group \hfill \cite{ACampo} \item $B_n[\ell]/B_n[2\ell] \cong S_n$ \hfill \cite{ABGH} and \cite{S18} \item $B_n[2\ell]/B_n[4\ell] \cong (\Z/2\Z)^{{n \choose 2 }}$ \hfill\cite{ABGH} and \cite{BM18} \item $B_n[\ell]/B_n[4\ell] \cong Z_n$, \hfill\cite{ABGH} and \cite{KM19} \noindent where $Z_n$ is a non-split extension of $S_n$ by $(\Z/2\Z)^{{n \choose 2 }}.$ \end{itemize} \subsection*{Answering Question (2)} In our first main theorem, \autoref{thm:mainresult}, we unify these results and show how these quotients are related to the congruence subgroups of the symplectic groups. For this we introduce some notation. Let \[ \Gamma_{2g}[\ell] := \ker\bigg( \Sp_{2g}(\Z) \xrightarrow{\!\!\!\!\!\!\mod\ell}\Sp_{2g}(\Z/\ell\Z) \bigg)= \{ A \in \Sp_{2g}(\Z) \mid A \equiv I_{2g} \mod \ell\}\] denote the \emph{level $\ell$ congruence subgroup} of $ \Sp_{2g}(\Z)$. Denote the subgroups of $ \Sp_{2g}(\Z)$ and $\Sp_{2g}(\Z/\ell\Z)$ that fix the first standard basis vector $e_1$ by $[\Sp_{2g}(\Z)]_{e_1}$ and $[\Sp_{2g}(\Z/\ell\Z)]_{e_1}$, respectively. 
We further denote the \emph{level $\ell$ congruence subgroup} of $ [\Sp_{2g}(\Z)]_{e_1}$ by \[ \Gamma_{2g-1}[\ell] := \ker\bigg( [\Sp_{2g}(\Z)]_{e_1} \xrightarrow{\!\!\!\!\!\!\mod\ell} [\Sp_{2g}(\Z/\ell\Z)]_{e_1} \bigg)= \{ A \in [\Sp_{2g}(\Z)]_{e_1} \mid A \equiv I_{2g} \mod \ell\}.\] To unify the notation, we define \[ \Gamma_n := \begin{cases} \Sp_{2g}(\Z)&\text{for $n=2g$} \\ [\Sp_{2g}(\Z)]_{e_1} &\text{for $n=2g-1$.}\end{cases}\] We note that with this notation and to explicitly point out our indexing convention, \[\rho(B_n)\leq \Gamma_{n-1}.\] For those familiar with the notation in Brendle--Margalit, $[\Sp_{2g+2}(\Z)]_{e_1}$ is isomorphic to $[\Sp_{2g+2}(\Z)]_{\vec{y}_{g+1}}$ in \cite{BM18} and we describe our basis labelling conventions in \autoref{sec:backgroundsubgroups}. Our first main theorem, providing an answer to Question 2, is as follows. \begin{atheorem}\label{thm:mainresult}For an integer $\ell=2^{k}m$ with $m$ odd, \[B_{n}/B_{n}[\ell]\cong \begin{cases} \Sp_2(\Z/\ell\Z) &\text{ for $n=3$,}\\ \mathcal{Z}_{n,k} \times \Sp_{n-1}(\Z/m\Z) &\text{ for odd $n\geq 5$},\\ \mathcal{Z}_{n,k} \times [\Sp_n(\Z/m\Z)]_{e_1} &\text{ for even $n\geq 4$,} \end{cases} \] where $\mathcal{Z}_{n,{k}} \cong B_n/B_n[2^k]$ is the non-split extension of $S_n$ by $\Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]$ described in \autoref{thm:B}. \end{atheorem} For $n=3$, the integral Burau representation surjects onto $\SL_2(\Z)$ and notice that $\SL_2(\Z/\ell\Z)= \Sp_2(\Z/\ell\Z)$. For higher $n$, \autoref{thm:mainresult} follows from the following three lemmas. \begin{alemma}\label{thm:A} Let $n,m,\ell \in\N$ with $\gcd(m,\ell) = 1$. Then \[ B_n/B_n[m\ell] \cong B_n/B_n[m] \times B_n/B_n[\ell].\] \end{alemma} For the following statements, we need a certain inclusion of the symmetric group $S_n$ into $\Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell] \cong \Gamma_{n-1}/\Gamma_{n-1}[2]$. It can be described as follows. 
From Arnol\cprime{}d \cite{A68}, we know that \[B_n/B_n[2] \cong S_n.\] This is precisely the image of $B_n$ in \[ \Gamma_{n-1}/\Gamma_{n-1}[2] \cong \begin{cases} \Sp_{2g}(\Z/2\Z) &\text{if $n=2g+1$ and}\\ [\Sp_{2g}(\Z/2\Z)]_{e_1} &\text{if $n=2g$.}\\ \end{cases}\] \begin{alemma}\label{thm:B} Let $n \ge 4$. Then $B_n/B_n[2^k] $ is a non-split extension of $S_n$ by $\Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]$. More precisely, $B_n/B_n[2^k] $ is isomorphic to the preimage of $S_n\hookrightarrow \Gamma_{n-1}/\Gamma_{n-1}[2]$ along the quotient map $\Gamma_{n-1}/\Gamma_{n-1}[2^k] \to \Gamma_{n-1}/\Gamma_{n-1}[2]$. \end{alemma} \begin{alemma}\label{thm:C} Let $n\ge 3$ and $p$ an odd prime. Then \[ B_n/B_n[p^k] \cong \Gamma_{n-1}/\Gamma_{n-1}[p^k].\] \end{alemma} \subsection*{Answering Question (1) and (3)} Our second main theorem solves Margalit's Problem 3.4 from \cite{margalitproblems}. After distributing a draft of this paper, Charalampos Stylianakis \cite{Stylperscomm} made us aware that he independently solved this problem in not-yet-published work. \begin{atheorem}\label{thm:prob3.4} The image of $B_n[\ell]$ in $\GL_{n-1}(\Z)$ under the integral Burau representation is completely characterized as follows. \begin{enumerate} \item If $\ell$ is even or $n=3$, the image is \[\Gamma_{n-1}[\ell].\] \item If $n\ge 4$ and $\ell$ is odd, the image is the preimage of \[S_n \hookrightarrow \Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell] \cong \begin{cases} \Sp_{2g}(\Z/2\Z) &\text{if $n=2g+1$ and}\\ [\Sp_{2g}(\Z/2\Z)]_{e_1} &\text{if $n=2g$}\\ \end{cases}\] along the quotient map $\Gamma_{n-1}[\ell]\longrightarrow \Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell].$ \end{enumerate} \end{atheorem} Specializing the level to $\ell=1$ we arrive at an answer to Question (1), as follows. 
\begin{acor}\label{cor:F} The image of $B_n[1]=B_n$ under the integral Burau representation \begin{enumerate} \item for $n=3$ is $\SL_2(\Z) = \Sp_2(\Z)$, \item for $n\ge 4$ is the pre-image of $S_n$ along the quotient map $\Gamma_{n-1}\rightarrow \Gamma_{n-1}/\Gamma_{n-1}[2]$. \end{enumerate} \end{acor} \noindent \textbf{Acknowledgements.} We would like to thank Tara Brendle, Dan Margalit, Jeremy Miller, Oscar Randal-Williams, Ismael Sierra, and Charalampos Stylianakis for helpful conversations. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while the third author was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Braids program. The first author is supported in part by NSF Grant DMS-1745583. The second author was supported in part by a Simons collaboration grant and by the Danish National Research Foundation through the Copenhagen Centre for Geometry and Topology (DNRF151) and the European Research Council under the European Union's Seventh Framework Programme ERC Grant agreement ERC StG 716424 - CASe, PI Karim Adiprasito. \section{Background and Preliminaries}\label{sec:background} We will prove all technical results about the congruence subgroups of braid groups in this section. We start with a brief explanation of how to view the integral Burau representation as a symplectic representation. We explain some details of symplectic groups and their stabilizer subgroups. After that, we turn our attention to the congruence subgroups of the braid groups. \subsection{Background on the Integral Burau Representation.}\label{sec:backgroundsubgroups} In this section, we first give a brief introduction to viewing the integral Burau representation as the action of braid groups on the first homology of surfaces and how congruence subgroups fit into this framework. For a more detailed introduction, see Brendle \cite{brendle}. 
As stated in the introduction, the \emph{integral Burau representation} $\rho\colon B_n\rightarrow \GL_{n-1}(\mathbb{Z})$ is the Burau representation specialized at $t=-1$, \[\rho\colon B_n\xrightarrow{\text{Burau}}\GL_{n-1}(\mathbb{Z}[t^{\pm1}])\xrightarrow{t=-1} \GL_{n-1}(\mathbb{Z}).\] More precisely, the Artin generators $\sigma_i$ (a half twist on the $i$-th and the $(i+1)$-st strands) are explicitly sent to the matrices \begin{gather*} \rho(\sigma_1) = \left( \begin{array}{cc|c}1 & 1 & 0 \\ 0 & 1 & 0 \\ \hline 0 & 0 & I_{n-3} \end{array} \right), \rho(\sigma_{n-1}) = \left( \begin{array}{c|cc} I_{n-3} & 0 & 0 \\ \hline 0 & 1 & 0 \\ 0 &-1 & 1 \end{array} \right), \rho(\sigma_j)= \left( \begin{array}{c|ccc|c} I_{j-2} & 0 & 0 & 0 & 0 \\ \hline 0 & 1 & 0 & 0 & 0 \\ 0 & -1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ \hline 0 & 0 & 0 & 0 & I_{n-j-2} \end{array} \right ) , \end{gather*} for $1<j<n-1$. This representation can be realized by an action of the braid group on a surface. Let $\Sigma_g^b$ denote the connected orientable surface of genus $g$ with $b$ boundary components. We refer to the curves $\alpha_1,\beta_1, \dots, \alpha_g, \beta_g$ on $\Sigma_g^1$ as in \autoref{fig:generatingcurves}(a). Notably, their images $e_1,f_1,\dots, e_g,f_g$ in $H_1(\Sigma_g^1;\Z) \cong \Z^{2g}$ form a symplectic basis with respect to algebraic intersection number as \[ \langle e_i,f_j\rangle = - \langle f_j,e_i\rangle = \delta_{ij}, \quad \langle e_i,e_j\rangle = \langle f_i,f_j\rangle = 0.\] Additionally, every diffeomorphism of $\Sigma_g^1$ induces a symplectic map on $H_1(\Sigma_g^1;\Z)$. We refer the reader to Chapter 6 of Farb--Margalit for a more in-depth account \cite{FM}. 
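As a mechanical sanity check on the displayed matrices (an illustrative sketch of ours, not part of the paper), one can verify that they satisfy the braid relations, and that they preserve a tridiagonal alternating form with $\langle c_i,c_{i+1}\rangle = -\langle c_{i+1},c_i\rangle = -1$, a sign convention consistent with the basis $c_1,\dots,c_{2g}$ discussed in this section; for $n=2g+1$ this form is non-degenerate, so the image lands in a symplectic group. All function names below are ours.

```python
# Illustrative check (not part of the paper): the matrices rho(sigma_i)
# displayed above satisfy the braid relations, and for n = 2g+1 they
# preserve a tridiagonal alternating form, so the image is symplectic.

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def rho(i, n):
    """(n-1)x(n-1) integral Burau matrix of sigma_i in B_n, as displayed above."""
    M = identity(n - 1)
    if i >= 2:
        M[i - 1][i - 2] = -1   # the -1 just below the diagonal
    if i <= n - 2:
        M[i - 1][i] = 1        # the +1 just above the diagonal
    return M

n = 5  # B_5, matrices are 4x4
# Braid relations: adjacent generators braid, distant generators commute.
for i in range(1, n - 1):
    assert mat_mul(mat_mul(rho(i, n), rho(i + 1, n)), rho(i, n)) == \
           mat_mul(mat_mul(rho(i + 1, n), rho(i, n)), rho(i + 1, n))
for i in range(1, n):
    for j in range(i + 2, n):
        assert mat_mul(rho(i, n), rho(j, n)) == mat_mul(rho(j, n), rho(i, n))

# Alternating form with <c_i, c_{i+1}> = -1 = -<c_{i+1}, c_i>.
J = [[0] * (n - 1) for _ in range(n - 1)]
for i in range(n - 2):
    J[i][i + 1], J[i + 1][i] = -1, 1
for i in range(1, n):
    T = rho(i, n)
    assert mat_mul(mat_mul(transpose(T), J), T) == J  # T^t J T = J
```

Running the block raises no assertion errors, which is exactly the consistency one expects from the surface picture described next.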
\begin{figure} \centering \begin{picture}(450,100) \put(0,0){\includegraphics[scale=.38]{basissvg.pdf}} \put(-7,0){(a)} \put(225,0){\includegraphics[scale=.38]{genssvg.pdf}} \put(220,0){(b)} \put(33,22){$\alpha_1$} \put(95,22){$\alpha_2$} \put(145,25){$\alpha_3$} \put(43,55){$\beta_1$} \put(103,55){$\beta_2$} \put(158,55){$\beta_3$} \put(300,52){$\gamma_2$} \put(360,52){$\gamma_4$} \put(410,52){$\gamma_6$} \put(270,25){$\gamma_1$} \put(335,25){$\gamma_3$} \put(390,25){$\gamma_5$} \end{picture} \caption{(a) Curves in $\Sigma_3^1$ that induce a symplectic basis of $H_1(\Sigma_3^1;\Z)$. (b) Curves $\gamma_1, \dots, \gamma_6$ in $\Sigma_3^1$ that induce a basis of $H_1(\Sigma_3^1;\Z)$.} \label{fig:generatingcurves} \end{figure} Assume that $n=2g+1$; then $B_n$ embeds into the mapping class group of $\Sigma_g^1$ via the realization of $\Sigma_g^1$ as the double cover of the disk with $2g+1$ marked points. The generator $\sigma_i$ is sent to the Dehn twist $T_{\gamma_i}$ about the curve $\gamma_i$, where \[ \gamma_1 = \beta_1,\quad \gamma_2 = \alpha_1\cdot \alpha_2^{-1}, \quad \gamma_3 = \beta_2, \quad \gamma_4 =\alpha_2\cdot \alpha_3^{-1}, \quad \ldots,\quad \gamma_{2g-1} = \beta_g, \quad \gamma_{2g} = \alpha_g\] as shown in \autoref{fig:generatingcurves}(b). Recall that a Dehn twist $T_\gamma$ acts on $H_1(\Sigma_g^1;\Z)$ via the \emph{transvection} \[ x \longmapsto x + \langle x, c \rangle c,\] where $c$ is the image of $\gamma$ in $H_1(\Sigma_g^1;\Z)$. Let $c_i$ be the image of $\gamma_i$ in $H_1(\Sigma_g^1;\Z)$, i.e. \[ c_1 = f_1, \quad c_2 = e_1-e_2, \quad c_3 = f_2, \quad c_4 = e_2- e_3, \quad \ldots, \quad c_{2g-1} = f_g, \quad c_{2g} = e_g.\] Then $c_1, \dots, c_{2g}$ form a (non-symplectic) basis of $H_1(\Sigma_g^1;\Z)$ and it is easy to check that the action of $B_n$ on $H_1(\Sigma_g^1;\Z)$ with respect to this basis gives exactly the matrices that define the integral Burau representation. 
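The claimed match between the twist action and the Burau matrices can be checked mechanically. The following sketch (illustrative only, not part of the paper; all names are ours) takes $g=2$, builds the transvections $x \mapsto x + \langle x, c_i\rangle c_i$ along $c_1=f_1$, $c_2=e_1-e_2$, $c_3=f_2$, $c_4=e_2$ in the symplectic basis $(e_1,f_1,e_2,f_2)$, rewrites them in the basis $(c_1,\dots,c_4)$, and compares the result with the integral Burau matrices of $\sigma_1,\dots,\sigma_4$ in $B_5$.

```python
# Illustrative verification (not part of the paper), for g = 2 (n = 5).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transvection(c, J):
    """Matrix of x -> x + <x, c> c, where <x, y> = x^T J y."""
    n = len(c)
    Jc = [sum(J[a][b] * c[b] for b in range(n)) for a in range(n)]
    return [[(1 if r == s else 0) + c[r] * Jc[s] for s in range(n)]
            for r in range(n)]

# Symplectic form in the basis (e1, f1, e2, f2): <e_i, f_i> = 1.
J = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]
# c_1 = f_1, c_2 = e_1 - e_2, c_3 = f_2, c_4 = e_2 in (e1, f1, e2, f2)-coords.
cs = [[0, 1, 0, 0], [1, 0, -1, 0], [0, 0, 0, 1], [0, 0, 1, 0]]
# Change of basis: columns of P are the c_i; P_inv uses e1 = c2 + c4,
# f1 = c1, e2 = c4, f2 = c3.
P = [[0, 1, 0, 0], [1, 0, 0, 0], [0, -1, 0, 1], [0, 0, 1, 0]]
P_inv = [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [1, 0, 1, 0]]
assert mat_mul(P, P_inv) == [[int(i == j) for j in range(4)] for i in range(4)]

# Expected integral Burau matrices of sigma_1, ..., sigma_4 in B_5.
burau = {1: [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
         2: [[1, 0, 0, 0], [-1, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
         3: [[1, 0, 0, 0], [0, 1, 0, 0], [0, -1, 1, 1], [0, 0, 0, 1]],
         4: [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -1, 1]]}

for i in range(1, 5):
    T = transvection(cs[i - 1], J)
    Tt = [list(r) for r in zip(*T)]
    assert mat_mul(mat_mul(Tt, J), T) == J            # each twist is symplectic
    assert mat_mul(mat_mul(P_inv, T), P) == burau[i]  # and is Burau in c-coords
```

The block runs without assertion errors, confirming the basis-change claim in this special case.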
In particular, $B_n$ acts on $H_1(\Sigma_g^1;\Z)$ via symplectic maps with respect to the symplectic basis $e_1,f_1, \dots, e_g,f_g$. Thus we may consider the integral Burau representation as a symplectic representation \[\rho\colon B_n \longrightarrow\Sp_{2g}(\Z)\quad \text{for $n= 2g+1$.}\] Assume now that $n = 2g$, then consider $B_n$ as a subgroup of $B_{2g+1}$ by sending the generators $\sigma_i$ of $B_n$ to the generator $\sigma_{i+1}$ of $B_{2g+1}$. Using the above action, $B_n$ acts on $\Sigma_g^1$ via mapping classes, and this induces a symplectic action on $H_1(\Sigma_g^1;\Z)$ which fixes $e_1$. Let $V$ be the quotient of $H_1(\Sigma_g^1;\Z)$ modulo the subgroup spanned by $e_1$, then the action of $B_n$ on $V$ coincides with the integral Burau representation when considered with respect to the basis (the images of) $c_2, \dots, c_{2g}$ (in $V$). In particular, we can consider the integral Burau representation as a map \[\rho\colon B_n \longrightarrow[\Sp_{2g}(\Z)]_{e_1}\quad \text{for $n= 2g$,}\] where $[\Sp_{2g}(\Z)]_{e_1}$ denotes the subgroup of matrices of $\Sp_{2g}(\Z)$ that fix $e_1$. For those familiar with the notation used in \cite{BM18}, Brendle--Margalit describe this group as $[\Sp_{2g+2}(\Z)]_{\vec{y}_{g+1}}$ where $y_{g+1}$ is the homology class of the curve $\alpha_1$ in our notation. Furthermore, we can also describe the congruence subgroups of $B_n$ using this description: \[ B_n[\ell] =\begin{cases} \ker\bigg(B_n \to \Sp_{2g}(\Z) \to \Sp_{2g}(\Z/\ell\Z)\bigg) &\text{for $n=2g+1$}\\ \ker\bigg(B_n \to [\Sp_{2g}(\Z)]_{e_1} \to [\Sp_{2g}(\Z/\ell\Z)]_{e_1}\bigg) &\text{for $n=2g$}\end{cases}\] \subsection{Technical Results of Stabilizer Subgroups of Symplectic Groups}\label{sec:TechnicalSymplectic} The results about symplectic groups that we need are well understood and documented. We will summarize them in the following proposition. 
\begin{proposition}\label{prop:symplecticcong}\ \begin{enumerate} \item $\Sp_{2g}(\Z)/\Gamma_{2g}[\ell] \cong \Sp_{2g}(\Z/\ell\Z)$ \cite[Theorem VII.20]{newman} \item $\Gamma_{2g}[\ell]\cap \Gamma_{2g}[m] = \Gamma_{2g}[\lcm(\ell,m)]$ and $\Gamma_{2g}[\ell]\cdot \Gamma_{2g}[m] = \Gamma_{2g}[\gcd(\ell,m)]$ \cite[Theorem VII.22]{newman} \item $\Gamma_{2g}[\gcd(\ell,m)]/ \Gamma_{2g}[\ell] \cong \Gamma_{2g}[m]/\Gamma_{2g}[\lcm(\ell,m)]$ \cite[Theorem VII.23]{newman} \end{enumerate} \end{proposition} We will now prove analogous results for the stabilizer subgroup $\Gamma_{2g+2} = [\Sp_{2g+2}(\Z)]_{e_1}$ and its congruence subgroups $\Gamma_{2g+2}[\ell]= \ker\big( [\Sp_{2g+2}(\mathbb{Z})]_{e_1} \to [\Sp_{2g+2}(\mathbb{Z}/\ell\Z)]_{e_1}\big)$. These results are certainly not surprising to the experts, but we were unable to find them in the literature. \begin{lemma}\label{lem:sympstabimage} For $n$ odd, $\Gamma_n/\Gamma_{n}[\ell]\cong [\Sp_{n+1}(\Z/\ell\Z)]_{e_1}$. \end{lemma} \begin{proof} We need to show that $[\Sp_{n+1}(\mathbb{Z})]_{e_1} \to [\Sp_{n+1}(\mathbb{Z}/\ell\Z)]_{e_1}$ is surjective. Let $A\in [\Sp_{n+1}(\Z/\ell\Z)]_{e_1}$ and let $B$ be the lower right $(n-1)\times (n-1)$ submatrix with the following entries. \[ A=\begin{bmatrix} 1 & x & a_3& \cdots & a_{n+1}\\ 0 & 1 & 0 & \cdots & 0\\ \vline & \vline &\vline & &\vline \\ 0 & v_2 & v_3 &\cdots &v_{n+1} \\ \vline & \vline & \vline && \vline \\ \end{bmatrix} \qquad B=\begin{bmatrix} \vline & &\vline \\ v_3 &\cdots &v_{n+1} \\ \vline && \vline \\ \end{bmatrix}\] Clearly, $B \in \Sp_{n-1}(\Z/\ell\Z)$. Since $\Sp_{n-1}(\Z) \to \Sp_{n-1}(\Z/\ell\Z)$ is surjective, we can choose a $\tilde B\in \Sp_{n-1}(\Z)$ so that $\tilde B \equiv B \mod \ell$. Also choose $\tilde{x}\in\Z$ so that $\tilde{x}\equiv x\mod \ell$, and $\tilde{v}_2\in\Z^{n-1}$ so that $\tilde{v}_2\equiv v_2\mod \ell$. 
If $\tilde v_3, \dots, \tilde v_{n+1}$ are the columns of $\tilde B$, set $\tilde a_i = \langle \tilde v_2, \tilde v_i\rangle$ and set \[\tilde{A}=\begin{bmatrix} 1 & \tilde{x} & \tilde{a}_3& \cdots & \tilde{a}_{n+1}\\ 0 & 1 & 0 & \cdots & 0\\ \vline & \vline & & & \\ 0 & \tilde{v}_2 & &\tilde{B} & \\ \vline & \vline & && \\ \end{bmatrix}.\] This defines a matrix $\tilde A \in [\Sp_{n+1}(\Z)]_{e_1}$. To check that $\tilde A \equiv A \mod \ell$, it only remains to check that $\tilde a_i \equiv a_i \mod \ell$. Note that \[ \tilde a_i = \langle \tilde v_2, \tilde v_i\rangle \equiv \langle v_2, v_i\rangle = a_i \mod \ell.\qedhere\] \end{proof} \begin{lemma}\label{lem:sympstabint} For $n$ odd, $\Gamma_n[\lcm(\ell,m)]=\Gamma_n[\ell]\cap \Gamma_n[m]$. \end{lemma} \begin{proof} Observe that $h\in \Gamma_n[\lcm(\ell,m)]$ if and only if $h=I+X$ such that all entries of $X$ are divisible by $\lcm(\ell,m)$ if and only if $h=I+X$ such that all entries of $X$ are divisible by $\ell$ and $m$ if and only if $h\in \Gamma_n[\ell]\cap \Gamma_n[m]$. \end{proof} \begin{lemma}\label{Obv} Let $d=\gcd(m,\ell)$. If $x\equiv 0 \mod d$ then there is a $y$ with $y\equiv 0\mod m$ and $y\equiv x\mod \ell$. \end{lemma} \begin{proof} There exist $a$ and $b$ so that $am+b\ell=d$. Write $x=kd$ and set $y=x-kb\ell$. Then $y=k(d-b\ell)=kam\equiv 0\mod m$ and $y\equiv x\mod \ell$. \end{proof} \begin{lemma}\label{Lem:y=mModl} Let $n$ be odd and $d=\gcd(m,\ell)$. Then for every $A\in\Gamma_n[d]$, there exists $\tilde A\in \Gamma_n[m]$ so that $\tilde A\equiv A\mod\ell$. \end{lemma} \begin{proof} Let $A\in \Gamma_n[d]$ and let $B$ be the lower right $(n-1)\times (n-1)$ submatrix with the following entries. \[ A=\begin{bmatrix} 1 & x & a_3& \cdots & a_{n+1}\\ 0 & 1 & 0 & \cdots & 0\\ \vline & \vline &\vline & &\vline \\ 0 & v_2 & v_3 &\cdots &v_{n+1} \\ \vline & \vline & \vline && \vline \\ \end{bmatrix} \qquad B=\begin{bmatrix} \vline & &\vline \\ v_3 &\cdots &v_{n+1} \\ \vline && \vline \\ \end{bmatrix}\] Clearly, $B \in \Gamma_{n-1}[d]$. 
By Lemma 4 from Newman--Smart \cite{NS}, there exists $\tilde B\in \Gamma_{n-1}[m]$ with $\tilde B\equiv B\mod\ell$. By \autoref{Obv}, there exist $\tilde x \in\Z$ and $\tilde v_2 \in\Z^{n-1}$ so that $\tilde x\equiv x\mod\ell$, $\tilde x\equiv 0\mod m$, $\tilde v_2\equiv v_2\mod \ell$, and $\tilde v_2\equiv 0\mod m$. If $\tilde v_3, \dots, \tilde v_{n+1}$ are the columns of $\tilde B$, set $\tilde a_i = \langle \tilde v_2, \tilde v_i\rangle$ and set \[\tilde{A}=\begin{bmatrix} 1 & \tilde{x} & \tilde{a}_3& \cdots & \tilde{a}_{n+1}\\ 0 & 1 & 0 & \cdots & 0\\ \vline & \vline & & & \\ 0 & \tilde{v}_2 & &\tilde{B} & \\ \vline & \vline & && \\ \end{bmatrix}.\] This defines a matrix $\tilde A \in [\Sp_{n+1}(\Z)]_{e_1}$. To check that $\tilde A\in \Gamma_n[m]$, it only remains to check that $\tilde a_i \equiv 0 \mod m$. Note that \[ \tilde a_i = \langle \tilde v_2, \tilde v_i\rangle \equiv 0 \mod m.\] To check that $\tilde A \equiv A \mod \ell$, it only remains to check that $\tilde a_i \equiv a_i \mod \ell$. Note that \[ \tilde a_i = \langle \tilde v_2, \tilde v_i\rangle \equiv \langle v_2, v_i\rangle = a_i \mod \ell.\qedhere\] \end{proof} \begin{corollary}\label{cor:sympstabprod} For $n$ odd, $\Gamma_n[\gcd(\ell,m)] = \Gamma_n[\ell] \cdot \Gamma_n[m]$. \end{corollary} \begin{proof} Let $A \in \Gamma_n[\gcd(\ell,m)]$, and let $\tilde A \in \Gamma_n[m]$ be a matrix such that $\tilde A \equiv A \mod \ell$ by \autoref{Lem:y=mModl}. Therefore $A \cdot {\tilde A}^{-1} \equiv I \mod \ell$ or in other words $A \cdot {\tilde A}^{-1} \in \Gamma_n[\ell]$. So $A = (A \cdot {\tilde A}^{-1})\cdot \tilde A \in \Gamma_n[\ell] \cdot \Gamma_n[m]$. \end{proof} \begin{corollary}\label{cor:sympstabquot} For $n$ odd, $\Gamma_{n}[\gcd(\ell,m)]/ \Gamma_{n}[\ell] \cong \Gamma_{n}[m]/\Gamma_{n}[\lcm(\ell,m)]$. 
\end{corollary} \begin{proof} This follows immediately from \autoref{lem:sympstabint} and \autoref{cor:sympstabprod} and a general isomorphism theorem: \[ \Gamma_{n}[\gcd(\ell,m)]/ \Gamma_{n}[\ell] = (\Gamma_{n}[\ell)]\cdot \Gamma_{n}[m])/ \Gamma_{n}[\ell]\cong \Gamma_{n}[m]/(\Gamma_{n}[\ell]\cap \Gamma_{n}[m] )= \Gamma_{n}[m]/\Gamma_{n}[\lcm(\ell,m)]\qedhere\] \end{proof} \begin{corollary} \label{cor:Gammaprod} Let $n,m,\ell \in\N$ with $\gcd(m,\ell) = 1$. Then \[ \Gamma_n/\Gamma_n[m\ell] \cong \Gamma_n/\Gamma_n[m] \times \Gamma_n/\Gamma_n[\ell].\] \end{corollary} \begin{proof} For any relatively prime integers $m$ and $\ell$, we have \[\Gamma_n/\Gamma_n[m\ell]\stackrel{(1)}{\cong} (\Gamma_n[m]\cdot \Gamma_n[\ell])/(\Gamma_n[m]\cap \Gamma_n[\ell]) \stackrel{(2)}{\cong} \Gamma_n/\Gamma_n[m]\times \Gamma_n/\Gamma_n[\ell],\] where (1) follows from $\Gamma_n=\Gamma_n[m]\cdot \Gamma_n[\ell]$ by \autoref{prop:symplecticcong}(2) and \autoref{cor:sympstabprod} and $\Gamma_n[m\ell]=\Gamma_n[m]\cap \Gamma_n[\ell]$ by \autoref{prop:symplecticcong}(2) and \autoref{lem:sympstabint}, and (2) follows from the isomorphism theorem $HK/(H\cap K)\cong HK/H\times HK/K$. \end{proof} Finally, we introduce Wajnryb's ``braid-like'' presentation of $\Gamma_{2g}/\Gamma_{2g}[p] \cong \Sp_{2g}(\F_p)$ and $\Gamma_{2g-1}/\Gamma_{2g-1}[p] \cong [\Sp_{2g}(\F_p)]_{e_1}$ for odd primes $p$ from \cite{W}. Let $c_1,\dots,c_{n}$ be a basis of $\F_p^{n}$ and let there be an alternating bilinear form with \[ \langle c_i,c_j\rangle = \begin{cases}1 &j = i+1,\\-1 &j = i-1,\\0 &|i-j| \neq 1. \end{cases}\] If $n=2g$ is even, this bilinear form is non-degenerate and a symplectic basis is given by \[ e_i = c_1+ \dots + c_{2i-1},\quad f_i = c_{2i} \quad \text{for $1\le i \le g$.}\] The isometric automorphisms are given by the symplectic group $\Sp_{2g}(\F_p) \cong \Gamma_{2g}/\Gamma_{2g}[p]$. If $n=2g-1$ is odd, $\F_p^{2g-1}$ can be embedded into the symplectic vector space $\F_p^{2g}$. 
The subspace is spanned by $\{c_1,\dots, c_{2g-1}\}$ and equally by $\{e_1, f_1, \dots, e_{g-1}, f_{g-1}, e_g\}$, which is the symplectic complement of the line spanned by $e_g$. In other words, the isometric automorphisms of $\F_p^{2g-1}$ are given by the subgroup of $\Sp_{2g}(\F_p)$ that fix $e_g$. This group is isomorphic to $[\Sp_{2g}(\F_p)]_{e_1} \cong \Gamma_{2g-1}/\Gamma_{2g-1}[p]$. \begin{theorem}[{Wajnryb \cite[Theorem 1]{W}}]\label{Wpres} Let $p$ be an odd prime and $n\ge3$. $\Gamma_n/\Gamma_n[p]$ is generated by the transvections $t_i(x) = x-\langle x,c_i\rangle c_i$ where $c_1, \dots, c_n$ are the vectors given above. These generators are subject to the following relations: \begin{enumerate} \item $t_it_{i+1}t_i = t_{i+1}t_it_{i+1}$ for $i = 1, \dots, n-1$ \item $t_it_j = t_jt_i$ for $|i-j|>1$ \item $t_1^p$ \item $(t_1t_2)^6$ if $p> 3$ \item $t_1^{(p+1)/2}t_2^4t_1^{(p-1)/2}t_2^{-2}t_1^{-1}t_2^2$ if $p>3$ \item $(t_1t_2t_3)^4A^{-1} t_1^{-2} A$ with $A = t_2t_3^{-1}t_2^{(p-1)/2} t_4t_3^3t_4$, when $n\ge 4$ \end{enumerate} \end{theorem} The standard presentation of the braid groups gives the following immediate consequence, also observed by Stylianakis \cite{S18}. \begin{corollary}\label{cor:normalclosure} Let $p$ be an odd prime and $n\ge3$. $\Gamma_n/\Gamma_n[p]$ is isomorphic to the quotient of the braid group $B_{n+1}$ modulo the normal closure of \begin{enumerate} \item $g_1=\sigma_1^p$, \item $g_2=(\sigma_1\sigma_2)^6$ if $p>3$, \item $g_3=\sigma_1^{(p+1)/2}\sigma_2^4\sigma_1^{(p-1)/2}\sigma_2^{-2}\sigma_1^{-1}\sigma_2^2$ if $p>3$, and \item $g_4=(\sigma_1\sigma_2\sigma_3)^4A^{-1}\sigma_1^{-2}A$, where $A=\sigma_2\sigma_3^{-1}\sigma_2^{(p-1)/2}\sigma_4\sigma_3^3\sigma_4$ when $n\ge 4$. \end{enumerate} \end{corollary} \subsection{Structure of Level \boldmath$\ell$ Braid Groups} \begin{lemma}\label{lem:mlProduct} For any $n,m,\ell\in \mathbb{N}$, if $\gcd(m,\ell)=1$ then $B_n=B_n[m]\cdot B_n[\ell]$. 
\end{lemma} \begin{proof} As explained above, each generator $\sigma_i$ of $B_n$ acts as a transvection $t_i(x) = x + \langle x, c_i\rangle c_i$ for some $c_i$. Thus $\sigma_i^m$ acts via $t_i^m(x) = x + m \cdot \langle x,c_i\rangle c_i$ which is the identity modulo $m$, meaning $\sigma_i^m\in B_n[m]$. By Bezout's identity there exist $d,r\in \mathbb{Z}$ so that $md+\ell r=1$. Then $\sigma_i=\sigma_i^{md+\ell r}=(\sigma_i^m)^d\cdot (\sigma_i^\ell)^r\in B_n[m]\cdot B_n[\ell]$. Since $B_n[m]\cdot B_n[\ell]$ is a subgroup (being a product of normal subgroups), this shows that $B_n\subseteq B_n[m]\cdot B_n[\ell]$. \end{proof} \begin{lemma}\label{lem:mlInterset} Let $n,m,\ell\in \mathbb{N}$, then $B_n[\lcm(m,\ell)]=B_n[m]\cap B_n[\ell]$. \end{lemma} \begin{proof} We observe that $h\in B_n[\lcm(m,\ell)]$ if and only if $\rho(h)=I+X$ such that all entries of $X$ are divisible by $\lcm(m,\ell)$ if and only if $\rho(h)=I+X$ such that all entries of $X$ are divisible by $m$ and $\ell$ if and only if $h\in B_n[m]\cap B_n[\ell]$. \end{proof} Let $PB_n$ be the pure braid group on $n$ strands. Arnol\cprime{}d proved that $PB_n\cong B_n[2]$ in \cite{A68}, which motivates the next result. It follows from the work of A'Campo \cite{ACampo}, but we align our discussion with that of Brendle--Margalit \cite{BM18}. In particular, A'Campo uses point forgetting rather than stabilizer subgroups in the case of $n$ being even, and as such the analogous definition of $B_4[2]$ does not align with $PB_4$. \begin{lemma}[Brendle--Margalit \cite{BM18}, Theorem 3.3] \label{lem:BM} For $n\geq 3$, the restriction of $\rho$ to the pure braid group $\rho\colon PB_n\rightarrow \Gamma_{n-1}[2]$ is surjective. \end{lemma} \begin{proof} Brendle--Margalit \cite{BM18} prove this result for $n\geq 5$, however their proof can easily be seen to work also for $n=3,4$. For completeness we will include the details here. In Proposition 3.2 \cite{BM18}, they find generating sets for $\Gamma_{n-1}[2]$ for $n\ge 5$. 
In the proof of their Theorem 3.3, they then find preimages of each generator in $B_n[2]$. In the following paragraph, we will observe that the proof of their Proposition 3.2 also works for $n=3,4$. The preimages they find in the proof of their Theorem 3.3 then also work as preimages of the given generators. Let us first look at the case $n=3$. By Mumford \cite[Proposition A.3, p.207]{Mumford}, $\Gamma_2[2]$ is generated by (in the notation of Brendle--Margalit) \[ \tau^2_{\vec x_1}, \tau^2_{\vec y_1}, \tau^2_{\vec x_1+\vec y_1}\] or (in matrix form) \[ \begin{pmatrix} 1&-2\\0&1\end{pmatrix}, \begin{pmatrix} 1&0\\2&1\end{pmatrix}, \begin{pmatrix} 3&-2\\2&-1\end{pmatrix}.\] Note that \[ \begin{pmatrix} -1&0\\0&-1\end{pmatrix} = \begin{pmatrix} 3&-2\\2&-1\end{pmatrix}\begin{pmatrix} 1&0\\2&1\end{pmatrix}\begin{pmatrix} 1&-2\\0&1\end{pmatrix}\] which is the more usual third generator of $\Gamma_2[2]$. Brendle--Margalit then replace $\tau^2_{\vec x_1+\vec y_1}$ with \[ \tau^2_{\vec x_1-\vec y_1} = \tau^2_{\vec x_1}\tau^2_{\vec x_1+\vec y_1}\tau^{-2}_{\vec x_1}.\] We see that all generators from their Proposition 3.2 (that make sense for $n=3$) still form a generating set of $\Gamma_2[2]$. They proceed by reducing $\Gamma_{2g+1}[2]$ to $\Gamma_{2g}[2]$. This works exactly the same way by reducing $\Gamma_3[2]$ to $\Gamma_2[2]$ and we get the generators \[ \tau^2_{\vec x_1}, \tau^2_{\vec y_1}, \tau^2_{\vec x_1-\vec y_1}, \tau^2_{\vec y_2-\vec x_1}, \tau^2_{\vec y_2-\vec y_1}, \tau^2_{\vec y_2} \] for $\Gamma_3[2]$. Again, these are exactly the generators described in their Proposition 3.2. \end{proof} The following direct corollary is well known to experts, and is stated by Margalit \cite{margalitproblems} as a corollary of the work of A'Campo \cite{ACampo} in the case that $n$ is odd. \begin{corollary}\label{cor:imageelleven} For $n\geq 3$, the restriction of $\rho$ to $\rho\colon B_n[\ell]\rightarrow \Gamma_{n-1}[\ell]$ is surjective if $\ell$ is even. 
\end{corollary} \begin{proof} Let $x\in \Gamma_{n-1}[\ell] \le \Gamma_{n-1}[2]$. By \autoref{lem:BM} (Theorem 3.3 of Brendle--Margalit \cite{BM18}), there is a $y \in B_n[2]$ that maps to $x$. Because $x \equiv I_{n-1} \mod \ell$, the braid $y$ in fact lies in $B_n[\ell]$. \end{proof} The case $n=3$ of \autoref{lem:BM} can also be seen by the following observation that we will use later. \begin{lemma}\label{lem:n=3} $\rho\colon B_3 \to \Gamma_2$ is surjective. \end{lemma} \begin{proof} $\Gamma_2 = \Sp_2(\Z)= \SL_2(\Z)$. The image of $\rho$ is generated by the two matrices \[ \begin{pmatrix} 1&1\\0&1\end{pmatrix}\quad\text{and}\quad\begin{pmatrix} 1&0\\-1&1\end{pmatrix}\] as can be seen by the description of the integral Burau representation in \autoref{sec:backgroundsubgroups}. These two matrices generate $\SL_2(\Z)$: indeed, \[ \begin{pmatrix} 0&1\\-1&0\end{pmatrix} = \begin{pmatrix} 1&1\\0&1\end{pmatrix}\begin{pmatrix} 1&0\\-1&1\end{pmatrix}\begin{pmatrix} 1&1\\0&1\end{pmatrix},\] and the matrices $\begin{pmatrix} 0&1\\-1&0\end{pmatrix}$ and $\begin{pmatrix} 1&1\\0&1\end{pmatrix}$ are well known to generate $\SL_2(\Z)$. \end{proof} A'Campo proved in \cite{ACampo} that $B_{2g+1}/B_{2g+1}[p]\cong \Sp_{2g}(\mathbb{Z}/p\mathbb{Z})$ for odd primes $p$. We can use this result and Wajnryb's presentation of $[\Sp_{2g}(\Z/p\Z)]_{e_1}$ (see \autoref{Wpres}) to prove the following analogous result. \begin{proposition}\label{prop:oddnquot} For odd primes $p$, $B_{2g}/B_{2g}[p]\cong [\Sp_{2g}(\mathbb{Z}/p\mathbb{Z})]_{e_1}$. \end{proposition} \begin{proof} By \autoref{cor:normalclosure}, we know that $[\Sp_{2g}(\mathbb{Z}/p\mathbb{Z})]_{e_1}$ is isomorphic to $B_{2g}/ N$, where $N$ is the normal closure of a subset $S$ of $\{g_1,g_2,g_3,g_4\}$ (depending on whether $p>3$ and $2g \ge 4$). The composition \[ B_{2g}/B_{2g}[p] \longrightarrow [\Sp_{2g}(\mathbb{Z}/p\mathbb{Z})]_{e_1} \stackrel{\cong}{\longrightarrow } B_{2g}/N\] is induced by the identity map on $B_{2g}$. This implies that $B_{2g}[p] \le N$. 
One can check manually that the elements of $S$ are also in $B_{2g}[p]$, or one uses the fact that they are in $B_{2g+1}[p]$ by A'Campo's result \cite{ACampo} and that $B_{2g}[p] = B_{2g+1}[p] \cap B_{2g}$. \end{proof} \begin{lemma}\label{lem:W} For $n\ge 3$ and odd prime $p$, $B_n[p]=B_n[2p]\cdot B_n[p^k]$, which implies that $ B_n[p]/B_n[p^k]\cong B_n[2p]/B_n[2p^k].$ \end{lemma} \begin{proof} By A'Campo \cite{ACampo} and \autoref{prop:oddnquot} we know that $B_n/B_n[p]\cong \Gamma_{n-1}/\Gamma_{n-1}[p]$. \autoref{cor:normalclosure} shows that $B_n[p]$ is the normal closure of a subset of the four elements $g_1, g_2, g_3, g_4$. We argue that each of these generators is an element of $B_n[2p]\cdot B_n[p^k]$, and since $B_n[2p]\cdot B_n[p^k]$ is normal in $B_n$ as the product of two normal subgroups, then we can conclude that $B_n[p]= B_n[2p]\cdot B_n[p^k]$. \begin{enumerate} \item $g_1=\sigma_1^p$. Since $p\neq 2$, $\gcd(2p,p^k)=p$, so by Bezout's identity there exist integers $a$ and $b$ so that $p=2pa+p^kb$. Thus, \[\sigma_1^p=\sigma_1^{2pa+p^kb}=(\sigma_1^{2p})^a(\sigma_1^{p^k})^b\in B_n[2p]\cdot B_n[p^k]\] \item $g_2=(\sigma_1\sigma_2)^6$ is a pure braid. \item $g_3=\sigma_1^{(p+1)/2}\sigma_2^4\sigma_1^{(p-1)/2}\sigma_2^{-2}\sigma_1^{-1}\sigma_2^2$ is a pure braid as the underlying permutation is a transposition to the $(p-1)$-st power, which is trivial since $p-1$ is even. \item $g_4=(\sigma_1\sigma_2\sigma_3)^4A^{-1}\sigma_1^{-2}A$, where $A=\sigma_2\sigma_3^{-1}\sigma_2^{(p-1)/2}\sigma_4\sigma_3^3\sigma_4$. This generator is a pure braid as the underlying permutation is a $4$-cycle to the fourth power times a transposition squared. \end{enumerate} Since $g_2,g_3,g_4$ are pure braids (and in $B_n[p]$), $g_2,g_3,g_4\in B_n[2]\cap B_n[p]$ and by \autoref{lem:mlInterset} $B_n[2]\cap B_n[p]=B_n[2p]$. This shows that $B_n[p]=B_n[2p]\cdot B_n[p^k]$. The following sequence of isomorphisms uses this equality, a general isomorphism theorem, and \autoref{lem:mlInterset}. 
\[B_n[p]/B_n[p^k]= \big(B_n[2p]\cdot B_n[p^k]\big)/B_n[p^k]\cong B_n[2p] /\big(B_n[2p]\cap B_n[p^k]\big) \cong B_n[2p]/B_n[2p^k]. \qedhere \] \end{proof} \section{Main results}\label{sec:MainResults} In this section, we prove \autoref{thm:A}, \autoref{thm:B}, and \autoref{thm:C}, and our main results \autoref{thm:mainresult} and \autoref{thm:prob3.4}. \begin{lemB} Let $n,m,\ell \in\N$ with $\gcd(m,\ell) = 1$. Then \[ B_n/B_n[m\ell] \cong B_n/B_n[m] \times B_n/B_n[\ell].\] \end{lemB} \begin{proof} For any relatively prime integers $m$ and $\ell$, we have \[B_n/B_n[m\ell]\stackrel{(1)}{\cong} (B_n[m]\cdot B_n[\ell])/(B_n[m]\cap B_n[\ell]) \stackrel{(2)}{\cong} B_n/B_n[m]\times B_n/B_n[\ell],\] where (1) follows from $B_n=B_n[m]\cdot B_n[\ell]$ by \autoref{lem:mlProduct} and $B_n[m\ell]=B_n[m]\cap B_n[\ell]$ by \autoref{lem:mlInterset}, and (2) follows from the isomorphism theorem $HK/(H\cap K)\cong HK/H\times HK/K$. \end{proof} To move forward to \autoref{thm:B}, we make some more general observations. For any $m$, define $\hat{\rho}_m\colon B_n/B_n[m]\rightarrow \Gamma_{n-1}/\Gamma_{n-1}[m]$ by \[\hat{\rho}_m(b\cdot B_n[m])=\rho(b)\cdot\Gamma_{n-1}[m]\] which is well defined as $\rho(B_n[m])\leq \Gamma_{n-1}[m]$ by definition. For any $\ell$ dividing $m$, the restriction $\hat{\rho}_m\Rest\colon B_n[\ell]/B_n[m]\rightarrow \Gamma_{n-1}[\ell]/\Gamma_{n-1}[m]$ is well defined as $\rho(B_n[\ell])\leq \Gamma_{n-1}[\ell]$ by definition. Then we get the following monomorphism of short exact sequences. 
\begin{equation} \begin{tikzcd} 1 \arrow[r]& B_n[p]/B_n[p^k]\arrow[d,"\hat{\rho}_{p^k}\downharpoonright"',hook]\arrow[r,hook] & B_n/B_n[p^k]\arrow[d,"\hat{\rho}_{p^k}",hook] \arrow[r, two heads] &B_n/B_n[p]\arrow[d,"\hat{\rho}_{p}",hook]\arrow[r]& 1 \\ 1 \arrow[r]& \Gamma_{n-1}[p]/\Gamma_{n-1}[p^k]\arrow[r,hook] & \Gamma_{n-1}/\Gamma_{n-1}[p^k] \arrow[r, two heads] &\Gamma_{n-1}/\Gamma_{n-1}[p]\arrow[r]& 1 \\ \end{tikzcd}\label{SES} \end{equation} The downward maps are injective because $B_n[m] = \rho^{-1}(\Gamma_{n-1}[m])$: we have $\rho(b) \in \Gamma_{n-1}[m]$ if and only if $b\in B_n[m]$. \begin{lemma}\label{lem:IntermediateIsos2} For all $n\geq 3$, the map $\hat{\rho}_{2^k}\Rest\colon B_n[2]/B_n[2^k]\rightarrow \Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]$ is an isomorphism. \end{lemma} \begin{proof} We have seen above that the map is injective. Surjectivity follows from \autoref{lem:BM} (Theorem 3.3 of Brendle--Margalit \cite{BM18}), which states that $B_n[2]\to\Gamma_{n-1}[2]$ is surjective for $n\geq 3$. \end{proof} \begin{lemC} Let $n \ge 4$. Then $B_n/B_n[2^k] $ is a non-split extension of $S_n$ by $\Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]$. More precisely, $S_n\cong \im \hat\rho_2$ can be considered as a subgroup of $\Gamma_{n-1}/\Gamma_{n-1}[2]$ and $B_n/B_n[2^k] $ is isomorphic to the preimage of $\im \hat\rho_2$ in $\Gamma_{n-1}/\Gamma_{n-1}[2^k]$. \end{lemC} \begin{proof} To prove the second part of the statement, we only need to find the image of $\hat\rho_{2^k}\colon B_n/B_n[2^k] \to \Gamma_{n-1}/\Gamma_{n-1}[2^k]$. Clearly, $\im \hat\rho_{2^k}$ is a subset of the preimage of $\im \hat\rho_2$ because the right square in \eqref{SES} commutes.
This gives us the morphism of short exact sequences \begin{equation*} \begin{tikzcd} 1 \arrow[r]& B_n[2]/B_n[2^k]\arrow[d,"\hat{\rho}_{2^k}\downharpoonright"']\arrow[r,hook] & B_n/B_n[2^k]\arrow[d,"\hat{\rho}_{2^k}"] \arrow[r, two heads] &B_n/B_n[2]\arrow[d,"\hat{\rho}_{2}"]\arrow[r]& 1 \\ 1 \arrow[r]& \Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]\arrow[r,hook] & X\arrow[r, two heads] &S_n\arrow[r]& 1 \\ \end{tikzcd} \end{equation*} where $X$ denotes the preimage of $\im \hat\rho_2$ in $\Gamma_{n-1}/\Gamma_{n-1}[2^k]$. The five-lemma implies that $B_n/B_n[2^k] \to X$ is an isomorphism as $B_n[2]/B_n[2^k]\to \Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]$ and $B_n/B_n[2] \to S_n$ are both isomorphisms. This also implies the first statement, apart from the claim that the extension does not split. We conclude this from the following commutative diagram and the fact that $\mathcal{Z}_{n,2}$ is a non-split extension, as proven in Proposition 7.6 of \cite{KM19}. \[\begin{tikzcd} \Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]\arrow[r,hook]\arrow[dd,two heads] & \mathcal{Z}_{n,k}\arrow[dr,two heads]\arrow[dd,two heads] & \\ & & S_n \\ \Gamma_{n-1}[2]/\Gamma_{n-1}[4]\arrow[r,hook] & \mathcal{Z}_{n,2}\arrow[ur,two heads] & \end{tikzcd}\] The commutative diagram comes from the quotient map \[ \mathcal Z_{n,k} \cong B_n/B_n[2^k] \twoheadrightarrow B_n/B_n[4] \cong \mathcal Z_{n,2}.\] If $\mathcal Z_{n,k} \to S_n$ had a splitting $s \colon S_n \to \mathcal Z_{n,k}$, the composition \[ S_n \stackrel{s}{\longrightarrow} \mathcal Z_{n,k} \longrightarrow \mathcal Z_{n,2}\] would be a splitting of $\mathcal Z_{n,2} \to S_n$, which contradicts Proposition 7.6 of \cite{KM19}. \end{proof} We will now turn our attention to \autoref{thm:C} and thus odd primes. \begin{lemma}\label{lem:2p} For all $n\geq 3$, the map $\hat{\rho}_{2p^k}\Rest\colon B_n[2p]/B_n[2p^k]\rightarrow \Gamma_{n-1}[2p]/\Gamma_{n-1}[2p^k]$ is an isomorphism. \end{lemma} \begin{proof} As remarked above, it is clear that this map is injective.
For surjectivity, it is enough to prove that $B_n[2p] \to \Gamma_{n-1}[2p]$ is surjective. This follows from \autoref{cor:imageelleven}. \end{proof} \begin{lemma}\label{lem:commsq} The square \[\begin{tikzcd} B_n[2p]/B_n[2p^k]\arrow[d,"\hat{\rho}_{2p^k}\downharpoonright"] \arrow[r,"\psi"]&B_n[p]/B_n[p^k]\arrow[d,"\hat{\rho}_{p^k}"]\\ \Gamma_{n-1}[2p]/\Gamma_{n-1}[2p^k]\arrow[r,"\phi"] & \Gamma_{n-1}[p]/\Gamma_{n-1}[p^k] \end{tikzcd} \] with isomorphisms $\psi$ from \autoref{lem:W} and $\phi$ from \autoref{cor:sympstabquot} commutes. \end{lemma} \begin{proof} The isomorphism $\psi$ is induced by the inclusion $B_n[2p] \le B_n[p]$ and the isomorphism $\phi$ is induced by the inclusion $\Gamma_{n-1}[2p] \le \Gamma_{n-1}[p]$. Both $\hat \rho_{2p^k} \Rest$ and $\hat\rho_{p^k}$ are restrictions of the map $\rho\colon B_n \to \Gamma_{n-1}$. Hence the square commutes. \end{proof} \begin{lemma}\label{lem:IntermediateIsos} For all $n\geq 3$ and odd primes $p$, the map $\hat{\rho}_{p^k}\Rest\colon B_n[p]/B_n[p^k]\rightarrow \Gamma_{n-1}[p]/\Gamma_{n-1}[p^k]$ is an isomorphism. \end{lemma} \begin{proof} In the commutative square of \autoref{lem:commsq}, the maps $\psi$, $\hat\rho_{2p^k} \Rest$, and $\phi$ are known to be isomorphisms by \autoref{lem:W}, \autoref{lem:2p}, and \autoref{cor:sympstabquot}, respectively. This implies that the remaining map $\hat{\rho}_{p^k}\Rest$ is an isomorphism, too. \end{proof} \begin{lemD} Let $n\ge 3$ and let $p$ be an odd prime. Then \[ B_n/B_n[p^k] \cong \Gamma_{n-1}/\Gamma_{n-1}[p^k].\] \end{lemD} \begin{proof} A'Campo proved in \cite{ACampo} that $\hat{\rho}_p\colon B_n/B_n[p] \to \Gamma_{n-1}/\Gamma_{n-1}[p]$ is an isomorphism when $p$ is odd. Together with \autoref{lem:IntermediateIsos}, this implies that the two outer vertical maps of \eqref{SES} are isomorphisms. Then so is the middle map by the five-lemma. \end{proof} We are now ready to prove \autoref{thm:mainresult}.
\begin{thmA} For an integer $\ell=2^{k}m$ with $m$ odd, \[B_{n}/B_{n}[\ell]\cong \begin{cases} \Sp_2(\Z/\ell\Z) &\text{ for $n=3$,}\\ \mathcal{Z}_{n,k} \times \Sp_{n-1}(\Z/m\Z) &\text{ for odd $n\geq 5$},\\ \mathcal{Z}_{n,k} \times [\Sp_n(\Z/m\Z)]_{e_1} &\text{ for even $n\geq 4$,} \end{cases} \] where $\mathcal{Z}_{n,{k}} \cong B_n/B_n[2^k]$ is a non-split extension of $S_n$ by $\Gamma_{n-1}[2]/\Gamma_{n-1}[2^k]$. \end{thmA} \begin{proof} Let $n=3$. By \autoref{lem:n=3}, $\rho\colon B_3 \to \Gamma_2$ is surjective and thereby so is the composition $B_3 \to \Gamma_2/\Gamma_2[\ell]$. This implies that \[ B_3/B_3[\ell] \cong \Gamma_2/\Gamma_2[\ell]\cong \Sp_2(\Z/\ell\Z).\] Let $n\ge 4$. For an integer $\ell= 2^km$ with $m=\prod_i p_i^{k_i}$ for $p_i$ distinct odd primes, \begin{multline*}B_{n}/B_{n}[\ell]\stackrel{\text{\autoref{thm:A}}}{\cong} B_n/B_n[2^{k}]\times \prod_i B_n/B_n[p_i^{k_i}]\stackrel{\substack{\text{\autoref{thm:B},}\\\text{\autoref{thm:C}}}}{\cong} \mathcal{Z}_{n,k} \times \prod_i \Gamma_{n-1}/\Gamma_{n-1}[p_i^{k_i}]\\\stackrel{\text{\autoref{cor:Gammaprod}}}{\cong} \mathcal{Z}_{n,k} \times \Gamma_{n-1}/\Gamma_{n-1}[m]. \end{multline*} \end{proof} We conclude by proving \autoref{thm:prob3.4}. \begin{thmE} The image of $B_n[\ell]$ in $\GL_{n-1}(\Z)$ under the integral Burau representation is completely characterized as follows. \begin{enumerate} \item If $\ell$ is even or $n=3$, the image is $\Gamma_{n-1}[\ell].$ \item If $n\ge 4$ and $\ell$ is odd, the image is the preimage of \[S_n \subset \Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell] \cong \begin{cases} \Sp_{2g}(\Z/2\Z) &\text{if $n=2g+1$ and}\\ [\Sp_{2g}(\Z/2\Z)]_{e_1} &\text{if $n=2g$}\\ \end{cases}\] along the quotient map $\Gamma_{n-1}[\ell]\longrightarrow \Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell].$ \end{enumerate} \end{thmE} \begin{proof} For $n=3$, \autoref{lem:n=3} proves that the image of $B_3[\ell] \to \Gamma_2$ is simply $\Gamma_2[\ell]$.
For $\ell$ even, \autoref{cor:imageelleven} implies that the image of $B_n[\ell] \to \Gamma_{n-1}$ is $\Gamma_{n-1}[\ell]$. Let $n\ge 4$ and $\ell$ odd. Consider the following map of short exact sequences. \begin{equation*} \begin{tikzcd} 1 \arrow[r]& B_n[2\ell]\arrow[d,two heads]\arrow[r,hook] & B_n[\ell]\arrow[d,"\rho"] \arrow[r, two heads] &S_n\arrow[d,hook]\arrow[r]& 1 \\ 1 \arrow[r]& \Gamma_{n-1}[2\ell]\arrow[r,hook] & \Gamma_{n-1}[\ell]\arrow[r, two heads] &\Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell]\arrow[r]& 1 \end{tikzcd} \end{equation*} We want to prove that the image of $B_n[\ell] \to \Gamma_{n-1}[\ell]$ is the preimage of $S_n \subset \Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell]$ via the quotient map. The image is contained in the preimage from the above commutative diagram. Now let $x\in \Gamma_{n-1}[\ell]$ be in the preimage, i.e.\ $x$ maps to a permutation $y\in S_n \subset \Gamma_{n-1}[\ell]/\Gamma_{n-1}[2\ell]$ via the quotient map. Let $z \in B_n[\ell]$ be a preimage of $y$; then the difference $\rho(z)^{-1}x$ lies in $\Gamma_{n-1}[2\ell]$. Let $w\in B_n[2\ell]$ map to $\rho(z)^{-1}x$ via $\rho$. Then $z\cdot w \in B_n[\ell]$ maps to $x$ via $\rho$ as \[ \rho(z \cdot w) = \rho(z) \cdot \rho(w) = \rho(z) \cdot \rho(z)^{-1}x = x.\qedhere\] \end{proof} \bibliographystyle{alpha}
\section*{} \vspace{-1cm} \footnotetext{$^{a}$\emph{Centre for Sustainable Chemical Technologies and Dept. of Chemistry, University of Bath, Claverton Down, Bath BA2 7AY, UK}} \footnotetext{$^{b}$\emph{Global E$^3$ Institute and Department of Materials Science and Engineering, Yonsei University, Seoul 120-749, Korea}} \footnotetext{$^{\ddag}$ Current: \emph{EPFL Valais Wallis, EPFL LSMO, Rue de l'Industrie 17, Case postale 440, CH-1951 Sion, Switzerland}} \footnotetext{\dag~Electronic Supplementary Information (ESI) available: Tabulated free energy and enthalpy data. See DOI: 10.1039/C5SC03088A. Additional data and code available in external repositories with DOIs: 10.5281/zenodo.28536; 10.6084/m9.figshare.151373; 10.6084/m9.figshare.1513833. See Data Access Statement for more information.} \section{Introduction} \label{sec:orgheadline1} Sulfur is an abundant resource exploited by industry on a scale of tens of millions of tonnes per year.\cite{Nehb2000} While it may be found in its elemental form, the primary industrial source is hydrogen sulfide, a byproduct of the oil and gas industry. The vast majority of industrial sulfur is converted to sulfuric acid or sulfur dioxide before further use; this may explain the surprising shortage of data in the thermochemical literature regarding the vapour phase of elemental sulfur. Historically, the thermochemistry of sulfur has been studied experimentally and has been understood to be associated with a variable composition for over a century; Lewis and Randall remarked in 1914 that "no other element is known to occur in as many different forms as sulfur" while studying the free energy of a number of these forms.\cite{Lewis1914} (Carbon now has a higher number of known allotropes but the majority of these are not naturally occurring.)
However, contemporary reference data for sulfur still does not present a complete picture; the NIST-JANAF Thermochemical Tables (1998) give thermochemical data for two solid phases, one liquid phase, the ions S\(^{\text{+}}\) and S\(^{\text{-}}\) and eight gas allotropes S\(_{\text{1-8}}\).\cite{Chase1998} Of these, only S\(_{\text{2}}\) and S\(_{\text{8}}\) are from spectroscopic data. The allotropes S\(_{\text{3-7}}\) are assumed to exist and are assigned energies following an interpolation scheme suggested by Rau \emph{et al.} (1966), which also makes use of experimental data for S\(_{\text{6}}\).\cite{Rau1973a} That paper rules out the significant presence of tautomers, finding little evidence of a tautomer contribution and assuming that they have relatively high energy. The authors generally reserve speculation on the actual structures of the components of their equilibrium model. In recent years considerable attention has turned to metal chalcogenides; II-VI semiconductors such as ZnS, CdS, PbS are widely studied in many contexts.\cite{Yu2003} Copper indium gallium selenides (CIGS) and cadmium telluride (CdTe) are used as the basis for "second-generation" thin-film photovoltaic devices, and have seen a dramatic rise in production. Cu\(_{\text{2}}\)ZnSn(S,Se)\(_{\text{4}}\) (CZTS) and Cu\(_{\text{2}}\)SnS\(_{\text{3}}\) (CTS) devices have so far struggled to match these materials in terms of energy conversion efficiencies, but hold significant long-term promise due to their use of highly abundant elements; such availability is a prerequisite for terawatt-scale photovoltaics.\cite{Berg2012} As such, thin-film processing in sulfur atmospheres is of considerable interest, as the inherent safety of industrial processing may be improved by eliminating the use of toxic H\(_{\text{2}}\)S. 
In addition to chalcogen annealing, which is used to increase grain size, substitute other elements or directly form chalcogenides from elements, high-quality single-crystal samples may be produced using chemical vapour transport of elemental chalcogens.\cite{Lichtenstriger1961, Colombara2013, Burton2013} Previous work on the thermodynamics of such processing has tended to assume that sulfur adopts one particular gaseous allotrope (either S\(_{\text{2}}\) or S\(_{\text{8}}\)), but the validity of this assumption has not been explored in depth.\cite{Jackson2014,Kosyak2013,Scragg2011a} It is, however, undermined by the model derived by Rau \emph{et al.}, which predicts that no one component makes up more than 50\% of the gas mixture at temperatures between 800 and 1100~K.\cite{Rau1973a} Mass spectrometry at a relatively mild 105\(^{\circ}\)C has observed a series of charged clusters with the form (S\(_{\text{8n}}\))\(^{\text{+}}\).\cite{Martin1984} In the mid 1980s, a number of cyclic allotropes had been identified by crystallisation and X-ray diffraction, but this only covered the range \(n\) = 6 -- 20.\cite{Steudel1984} An \emph{ab initio} study was carried out for S\(_{\text{2}}\) through to S\(_{\text{13}}\) in an early application of the Car-Parrinello simulated annealing method.\cite{Hohl1988} Energies were calculated using density-functional theory with the local density approximation (LDA). While limited by the inherent difficulties in exploring the entire potential energy surface of the atomic positions, this thorough study generated 21 allotropes, finding a local maximum in the atomisation energy at \(n=8\).
A later (1990) paper used coupled-cluster electronic structure calculations to study the proposed tautomers of S\(_{\text{4}}\) in depth, concluding that the planar structure with \(C_{2v}\) symmetry is lowest in energy, with a trans (\(C_{2h}\)) structure also visible in experimental spectra; a more recent \emph{ab initio} study reached similar conclusions regarding stability while challenging the spectroscopic assignment of the phases.\cite{Quelch1990, Wong2003} The \(C_{2v}\) structure was ruled out in the simulated annealing study with LDA, although the authors noted the experimental evidence for its existence.\cite{Hohl1988} A 2003 review by \citet{Steudel2003} collects more recent data, including both experimental and theoretical studies of vapour-phase allotropes; this review notes the weakness of the widespread assumption that each size is represented by a single species. The work compares several sets of enthalpies relative to S\(_{\text{8}}\) that have been obtained experimentally; variability is high for the smaller allotropes while there is fairly good agreement for the larger allotropes. Studies are generally carried out at a single temperature, such that the temperature and pressure dependence of the thermochemistry must be derived from statistical mechanics and analysis of vibrational information. In this study, we develop a set of structures for S\(_{\text{2}}\)--S\(_{\text{8}}\), compute their Gibbs free energy from first principles and with empirical corrections, and solve the temperature-dependent chemical potential to describe the gaseous mixture. The potential function will be important for quantitative investigations of defect formation and phase stability in metal sulfide materials.
\section{Methods} \label{sec:orgheadline9} \subsection{Density functional theory} \label{sec:orgheadline2} Energies and forces of arbitrary clusters of sulfur atoms were computed within Kohn-Sham density-functional theory (DFT).\cite{Kohn1965} A range of exchange-correlation functionals were used in this work: PBE is a popular and elegant implementation of the Generalised Gradient Approximation (GGA) and PBEsol restores a periodic exchange contribution leading to improved performance for solids;\cite{Perdew1996,Perdew2008} B3LYP\footnote{Note that the implementation of B3LYP in FHI-aims uses a parameterisation of the local density contribution based on the Random Phase Approximation in order to match values obtained with Gaussian, another quantum chemistry code.\cite{Hertwig1997}} is a widely-used "hybrid" functional which combines pre-existing gradient corrections with "exact" Hartree-Fock exchange;\cite{Becke1993} PBE0 applies similar principles to the parameter-free PBE functional.\cite{Adamo1999} (While PBE is generally preferred to PBEsol for molecular calculations, PBEsol was included in this study for its compatibility with other all-electron work using this functional.) Calculations for the evolutionary algorithm search used the Vienna Ab Initio Simulations Package (VASP) with the PBE exchange-correlation functional and a plane-wave basis set with a 500 eV energy cutoff.\cite{Kresse1996b,Kresse1996c} As calculations in VASP employ a periodic boundary condition, orthorhombic bounding boxes were employed with 10~\AA{} of vacuum between each molecule and its periodic images. Electronic structure iteration used only the \(\Gamma\)-point of this large cell. Further calculations used the Fritz Haber Institute ab initio molecular simulations package (FHI-aims) to carry out all-electron DFT calculations with numerically-tabulated basis sets.\cite{Blum2009,Havu2009} All calculations were open-shell with S\(_2\) adopting its low-energy triplet spin configuration.
The recommended "tight" basis set was employed for initial relaxation and study with PBEsol, which extends the minimal set of occupied orbitals with 6 additional functions. This was extended further to the full "tier 2" set of 9 additional functions for calculations with the LDA, PBE0, and B3LYP functionals. \subsection{Global structure search} \label{sec:orgheadline3} Global structure optimisation was carried out with the USPEX package, which was originally developed for crystalline systems and has been adapted for use with clusters.\cite{Oganov2006,Oganov2011,Lyakhov2013} At this stage, molecules with \(n>8\) were disregarded, as experimental results anticipate high- and low-temperature limits dominated by S\(_{\text{2}}\) and S\(_{\text{8}}\), respectively. Clusters were generated for S\(_{\text{3-7}}\), and refined with an evolutionary algorithm to minimise the ground-state energy until a number of seemingly distinct clusters were identified by inspection. The atomic positions of these clusters were then optimised in FHI-aims calculations with PBEsol, using the BFGS algorithm to minimise the atomic forces to less than 10\(^{\text{-4}}\)~eV~\AA{}\(^{\text{-1}}\) and converge energy to within 10\(^{\text{-6}}\)~eV. Point groups were assigned to the structures using Materials Studio version 6.0, a proprietary package developed by Accelrys. \subsection{Vibrational frequencies} \label{sec:orgheadline4} Vibrational frequencies were calculated within the harmonic approximation by making finite displacements to each atomic position to obtain the local potential wells, and diagonalising the resulting dynamical matrix to obtain the normal modes and their frequencies. This is implemented as a script and diagonalisation routine provided with FHI-aims. 
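The finite-displacement scheme itself is straightforward; the following one-dimensional sketch illustrates it for a diatomic, with a toy Morse potential standing in for the DFT total energy (all parameters are illustrative, not fitted to sulfur):

```python
import math

# Toy Morse potential standing in for the DFT total energy of a diatomic,
# written as a function of the two (1D) atomic coordinates x1 and x2.
# D, a and r0 are illustrative values, not parameters fitted to sulfur.
D, a, r0 = 4.4, 1.9, 1.89

def energy(x1, x2):
    r = abs(x2 - x1)
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

def hessian(x, h=1.0e-4):
    """Matrix of second derivatives of the energy by central finite differences."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (energy(*xpp) - energy(*xpm)
                       - energy(*xmp) + energy(*xmm)) / (4.0 * h * h)
    return H

m = 32.06                 # mass of one S atom (arbitrary consistent units)
H = hessian([0.0, r0])    # Hessian at the equilibrium geometry
# The mass-weighted dynamical matrix H_ij/sqrt(m_i*m_j) of a homonuclear
# diatomic has eigenvalues 0 (translation) and 2*H[0][0]/m (the stretch).
omega_stretch = math.sqrt(2.0 * H[0][0] / m)
```

For the production calculations the same ingredients (displaced-geometry energies, mass weighting, diagonalisation) are handled by the FHI-aims tooling in full 3\(N\)-dimensional form.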
Improved vibrational frequencies may be obtained by applying an empirically-derived scale factor to the vibrational eigenvalues computed using DFT; collections of such scale factors have been published for large test-sets of molecules.\cite{Merrick2007,Alecu2010} The use of these factors is somewhat problematic when creating a systematic, transferable set of data but offers an opportunity to create the most realistic thermochemical model possible. Given that the calculations in this work involve a more limited subset of atomic interactions, we choose to fit a scaling factor to the experimentally-reported frequencies of S\(_8\) and S\(_2\). \subsection{Thermochemistry} \label{sec:orgheadline8} \subsubsection{Thermochemistry of individual gas species.~~} \label{sec:orgheadline5} \label{SEC:species-thermochem} Thermochemical properties were calculated within the ideal gas, rigid-rotor and harmonic vibration approximations. A set of textbook equations forms the chemical potential \(\mu\) for a nonlinear molecule from the ground-state electronic energy \(E_0\), given a set of vibrational energies \(\mathbf{\epsilon}\), the rotational symmetry number \(\sigma\) and the moments of inertia \(I_i\): \begin{align} \mu &= E_0 + E_\text{ZPE} + \int^T_0 C_{v}\,\mathrm{d}T + k_B T - T S \\ \intertext{where} C_v &= C_{v,\textrm{trans}} + C_{v,\textrm{vib}} + C_{v,\textrm{rot}}\\ \int^T_0 C_v\,\mathrm{d}T &\approx \frac{3}{2} k_B T + \sum_i \frac{\epsilon_i}{\exp(\epsilon_i / k_B T) - 1} + \frac{3}{2} k_B T \\ S &= S_\textrm{vib} + S_\textrm{trans} + S_\textrm{rot} \\ &= k_B \sum_i \left[ \frac{\epsilon_i / k_B T}{\exp (\epsilon_i / k_B T) - 1} - \ln \left( 1-\exp (-\epsilon_i / k_B T)\right) \right] \nonumber \\ &\quad + k_B \left[ \ln \left( \frac{2 \pi m k_B T}{h^2} \right)^{\tfrac{3}{2}} \frac{k_B T}{P_{\textrm{ref}}} + \frac{5}{2} \right] \nonumber \\ &\quad + k_B \left[ \ln \frac{\sqrt{\pi \prod_i I_i}}{\sigma} \left( \frac{8 \pi^2 k_B T}{h^2} \right)^{\tfrac{3}{2}} + \frac{3}{2} \right].
\nonumber \end{align} These were applied as implemented in the Atomic Simulation Environment (ASE) Python package.\cite{Bahn2002} (Note that the expressions for monatomic and linear molecules are slightly different.) The rotational symmetry numbers \(\sigma\) were assigned from the point groups. \subsubsection{Reference energies.~~} \label{sec:orgheadline6} A number of \emph{ab initio} methods have been applied. In order to compare the energies, a reference point is needed. Conventionally the enthalpy of the ground state is set to zero; however, in this case the ground-state phase \(\alpha\)-sulfur is relatively expensive to compute. We therefore use the experimental sublimation enthalpy \(\Delta H_{sub} = \tfrac{1}{8} H_{\textrm{S}_8} - H_{\textrm{S}_{\alpha}}\) to obtain a reference from the calculated enthalpy of S\(_8\): \begin{align} \Delta H_{\text{S}_x} &= H_{\text{S}_x} - x H_{\text{S}_\alpha} \\ \Delta H_{\text{S}_x} &= H_{\text{S}_x} - x \left( \frac{H_{\text{S}_8}}{8} + H_{\text{S}_\alpha} - \frac{H_{\text{S}_8}}{8} \right) \\ \Delta H_{\text{S}_x} &= H_{\text{S}_x} - x \left( \frac{H_{\text{S}_8}}{8} - \Delta H_{sub} \right) \end{align} The preferred experimental value for \(\Delta H_{sub}\) is \(100.416 / 8 = 12.552\) kJ mol\(^{-1}\), from experiments at 298~K.\cite{Chase1998} Note that the physical system does not in fact sublime at high temperatures, but passes through a molten phase. Nonetheless, it is more practical (and perfectly valid) to retain \(\alpha\)-S as the reference state over the whole temperature range studied. \subsubsection{Equilibrium modelling.~~} \label{sec:orgheadline7} \label{SEC:eqm_derivation} The following derivation closely follows the approach and notation of Ref.~\citenum{Smith1982}, which describes a generalised "non-stoichiometric method" for solving chemical equilibria. This approach is well-established and based on key work in Refs.~\citenum{Brinkley1946,Brinkley1947,White1967}.
We attempt to minimise the Gibbs free energy \begin{align} \min G(\mathbf{n}) &= \sum^N_{i=1} n_i \mu_i \label{eq:minG}\\ \intertext{subject to the mass balance constraint} \sum^N_{i=1} a_{i} n_i &= b \label{eq:mass_balance} \end{align} where \(N\) is the number of unique species \(i\) with stoichiometric coefficient \(a_i\); \(n_i\) is the quantity of species \(i\) and \(b\) is the total number of sulfur atoms. The classic approach for a constrained optimisation is the method of Lagrange multipliers. The Lagrangian is formed \begin{align} {\cal L}(\mathbf{n},\lambda) &= \sum^N_{i=1} n_i \mu_i + \lambda \left( b - \sum^N_{i=1} a_i n_i \right)\\ \intertext{and differentiated to form a set of equations defining the equilibrium state.} \left(\frac{\partial \cal{L}}{\partial n_i}\right)_{n_{j\ne i}, \lambda} &= \mu_i - a_i \lambda &= 0 \label{eq:eqm-cond1}\\ \intertext{and} \left(\frac{\partial \cal{L}}{\partial \lambda} \right)_\mathbf{n} &= b - \sum^N_{i=1} a_i n_i &= 0. \label{eq:eqm-cond2} \end{align} The species chemical potential \(\mu_i\) calculated as in Section~\ref{SEC:species-thermochem} is a function of both temperature and the partial pressure \(p_i = P \frac{n_i}{n_t}\), where \(P\) is the total pressure and the total quantity \(n_t = \sum^N_i n_i\). The temperature dependence is complex, and we solve the equilibrium separately at each temperature of interest; we therefore form a temperature-dependent standard free energy at a reference pressure \(P^\circ\), \(\mu_i^\circ(T) = \mu_i(T,P^\circ)\).
\begin{align} \mu_i(T,P,\mathbf{n}) &= \mu_i^\circ (T) + R T \ln \left(\frac{p_i}{P^\circ} \right) \\ &= \mu_i^\circ (T) + R T \ln\left(\frac{n_i}{n_t} \frac{P}{P^\circ} \right) \\ &= \mu_i^\circ (T) + R T \ln \left( \frac{P}{P^\circ} \right) + R T \ln\left(\frac{n_i}{n_t} \right) \label{eq:ideal_mu} \end{align} From here we drop the parenthetical indication that \(\mu_i^\circ\) is a function of temperature, and define the unit of pressure as the reference pressure, such that \(P^\circ = 1\). Substituting (\ref{eq:ideal_mu}) into (\ref{eq:eqm-cond1}), we obtain \begin{align} \mu_i^\circ + R T \ln\left(\frac{n_i}{n_t} P \right) - a_i\lambda &= 0 \\ \ln\left( \frac{n_i}{n_t}P \right) &= \frac{a_i\lambda - \mu_i^\circ}{R T} \label{eq:1}\\ \intertext{and summing over $i$} P &= \sum^N_{i=1} \exp\left(\frac{a_i\lambda - \mu_i^\circ}{RT}\right). \end{align} The only unknown variable in this expression is \(\lambda\); rearranging slightly, we obtain an equation that is polynomial in \(\exp(\lambda/RT)\) and is suitable for solving by standard numerical methods.
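To make the procedure concrete, the following minimal sketch solves the pressure equation above for \(\lambda\) by simple bisection and then recovers the composition. The two-species inputs (\(a_i\) and \(\mu_i^\circ\)) are invented for illustration and are not the computed sulfur data:

```python
import math

R = 8.314                # gas constant, J mol^-1 K^-1
T = 1000.0               # temperature, K
P = 1.0                  # total pressure in units of the reference pressure
a = [2, 8]               # atoms per species: a toy two-species (S2/S8) mixture
mu0 = [80.0e3, 100.0e3]  # invented standard chemical potentials, J mol^-1

def total_pressure(lam):
    """Right-hand side of P = sum_i exp((a_i*lam - mu_i^o)/(R*T))."""
    return sum(math.exp((ai * lam - mi) / (R * T)) for ai, mi in zip(a, mu0))

# total_pressure is strictly increasing in lam, so bisection converges;
# (the published calculations use Levenberg-Marquardt least squares via
# Scipy instead).  The bracket is chosen by hand for these toy numbers.
lo, hi = 0.0, 2.0e5
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if total_pressure(mid) < P:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)  # Lagrange multiplier = chemical potential per S atom

# Recover composition n_i/b through Phi_i = exp((a_i*lam - mu_i^o)/(R*T))
Phi = [math.exp((ai * lam - mi) / (R * T)) for ai, mi in zip(a, mu0)]
norm = sum(ai * pi for ai, pi in zip(a, Phi))
frac = [pi / norm for pi in Phi]  # n_i / b
```

With these inputs the larger cluster dominates the atom balance; repeating the solve over a grid of temperatures yields the mixture potential \(\mu_{\text{S}}(T)\) directly as \(\lambda\).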
The method employed in this work is the Levenberg-Marquardt least-squares algorithm, as implemented in Scipy.\cite{Marquardt1963,Scipy2001} \begin{equation} \sum^N_{i=1} \exp \left( \frac{-\mu_i^\circ}{RT} \right) \left[ \exp \left( \frac{\lambda}{RT} \right)\right]^{a_i} -P = 0 \label{eq:lagrange_polynomial} \end{equation} \noindent To recover the composition \(\mathbf{n}\), we rearrange (\ref{eq:1}): \begin{align} n_i &= \frac{n_t}{P} \exp\left( \frac{a_i\lambda}{RT} \right) \exp\left(\frac{-\mu_i^\circ}{R T}\right) \label{eq:3} \\ \intertext{and substitute into the second equilibrium condition (\ref{eq:mass_balance}) to obtain} b &= \frac{n_t}{P}\sum\limits^N_{i=1} a_i \exp \left( \frac{a_i \lambda -\mu_i^\circ}{RT}\right) \label{eq:2}\\ \intertext{combining (\ref{eq:3}) and (\ref{eq:2}) we eliminate $n_t$} \frac{n_i}{b} &= \frac{\exp\left(\frac{a_i \lambda - \mu_i^\circ}{RT} \right)}{\sum\limits^N_{j=1} a_j \exp \left(\frac{a_j \lambda - \mu_j^\circ}{RT} \right)} \\ \intertext{ and clean up the notation by denoting $\exp \left( \frac{a_i \lambda - \mu_i^\circ}{RT}\right)$ as $\Phi_i$} \frac{n_i}{b} &= \frac{\Phi_i}{\sum\limits^N_{j=1} a_j \Phi_j}. \label{eq:phi_frac} \end{align} Finally, to obtain the chemical potential of the mixture we note from (\ref{eq:eqm-cond1}) that \(\frac{\mu_i}{a_i}=\lambda\) for all \(i\). Therefore \begin{equation} \lambda = \mu_{\text{S}}, \end{equation} the normalised chemical potential of sulfur vapour on an atom basis. (A mathematical derivation is given in Appendix~\ref{SEC:lagrange_gibbs_proof}.) \section{Results} \label{sec:orgheadline23} \subsection{Sulfur allotropes} \label{sec:orgheadline18} \label{sec:results.types} A variety of candidate structures were generated in the evolutionary algorithm study with the PBE functional. The low-energy candidates following geometry optimisation are discussed in this section.
\begin{figure*} \centering \includegraphics[width=\textwidth]{S-montage.png} \caption{\label{fig:S-montage} Predicted low-energy sulfur clusters with symmetry assignment} \end{figure*} \subsubsection{S\(_{\text{2}}\).~~} \label{sec:orgheadline10} Diatomic sulfur has the point group \(D_{\infty h}\), in common with other homonuclear diatomics. The atoms were initially set 2 \AA{} apart, and relaxed to a bond length of 1.91 \AA{}. For the other functionals, structures were relaxed either from this distance or from 2 \AA{}. The resulting bond lengths are given in Table~\ref{tbl:S2_r}. \begin{table}[htb] \small \caption{\label{tbl:S2_r} Calculated and experimental bond length \(r\) in S\(_{\text{2}}\). Experimental value is NIST/JANAF-recommended distance.\cite{Chase1998}} \centering \begin{tabular}{lr} \toprule DFT functional & \(r\) / \AA{}\\ \midrule PBE & 1.911\\ PBEsol & 1.903\\ LDA & 1.895\\ PBE0 & 1.884\\ \midrule Experiment & 1.889\\ \bottomrule \end{tabular} \end{table} \subsubsection{S\(_{\text{3}}\).~~} \label{sec:orgheadline11} The evolutionary algorithm process eliminated all but a \(C_{2v}\) non-linear chain for S\(_3\). This corresponds to "thiozone", which has a well-characterised structure by rotational spectroscopy (bond length 1.917(1)~\AA{} and angle 117.36(6)\(^\circ\); the values from optimisation with PBE0 in this study are 1.901~\AA{} and 118.2$^{\circ}$).\cite{McCarthy2004} We have also considered the simple triangular allotrope, which is \(\sim 0.5\) eV higher in ground-state energy. \subsubsection{S\(_{\text{4}}\).~~} \label{sec:orgheadline12} A range of branched and cyclic structures were generated in the evolutionary algorithm. The structures included in the equilibrium modelling are shown in Fig.~\ref{fig:S-montage}. The lowest-energy structure identified was the `eclipsed' C\(_{\text{2v}}\) chain; this is in agreement with the high-level theoretical studies in Refs.~\citenum{Quelch1990,Wong2003}.
These studies identified a `trans' \(C_{2h}\) structure as being likely to exist; there is some spectroscopic evidence for the viability of this isomer as well as a branched chain, but we were not able to reproduce stable structures corresponding to these allotropes through geometry optimisation.\cite{Boumedien1999,Hassanzadeh1992} Various cyclic and tetrahedral candidate structures yielded a relatively flat puckered ring with \(D_{2d}\) symmetry. \subsubsection{S\(_{\text{5}}\).~~} \label{sec:orgheadline13} Although a wide range of branched and chain structures were generated, the main candidate is the 5-membered ring with \(C_{s}\) symmetry. \subsubsection{S\(_{\text{6}}\).~~} \label{sec:orgheadline14} In addition to a cyclic \(C_{2v}\) allotrope, relatively low-energy branched and chain variations were identified. Also of considerable interest is a structure which may be viewed as a stack of two S\(_3\) cycles, or alternatively as a cluster of S\(_2\) diatoms. This appears to be the \(D_{3h}\) "prism" structure identified by \citet{Wong2004}; the characteristic S-S bond lengths from that study were 190.1 and 276.2 pm, while the corresponding average distances from optimisation with the same hybrid XC functional (B3LYP) in this work were 189.0 and 275.7 pm. It is worth stressing that no explicit dispersion terms were included in any of the electronic structure calculations. \subsubsection{S\(_{\text{7}}\).~~} \label{sec:orgheadline15} The evolutionary algorithm results rapidly provided the same \(C_s\) cyclic structure as that obtained by energy minimisation from a regular polygon. A branched structure, generated early in the progress of the algorithm, was also selected as an interesting alternative to include. This was about 1 eV lower in energy than the other candidates at that stage. Geometry optimisation by force relaxation yielded a compact structure, also with \(C_s\) (mirror-plane) symmetry.
\subsubsection{S\(_{\text{8}}\).~~} \label{sec:orgheadline16} No evolutionary algorithm study was applied for S\(_8\), as its ring structure is quite well known. The initial geometry was extracted from the crystal structure for the condensed \(\alpha\)-S phase used in a previous study,\cite{Burton2012a} and relaxed to form an isolated \(D_{4d}\) ring. \subsubsection{Ground-state energies.~~} \label{sec:orgheadline17} An inspection of the ground-state energies from DFT reveals a trend of smoothly decreasing energy per atom with cluster size for the minimum-energy configuration at each size (Fig.~\ref{fig:energies}). The variation within the clusters included at each size is of the order 10 kJ mol\(^{-1}\) atom\(^{-1}\), which is comparable to the energy difference between neighbouring cluster sizes. \begin{figure}[htb] \centering \includegraphics[width=.9\linewidth]{energies.pdf} \caption{\label{fig:energies} Ground-state energies from DFT of the clusters included in this study. Energies are relative to the energy for S\(_8\) with each functional, and normalised to the number of atoms. A point is also included from reference data\cite{Chase1998}; this is derived from the enthalpies of formation at zero temperature, based on spectroscopic observations and equilibrium studies. While the energies from different exchange-correlation functionals diverge across the series, the S\(_2\) energy from PBE0 calculations agrees closely with this reference data.} \end{figure} \subsection{Vibrational properties} \label{sec:orgheadline20} Vibrational frequencies were calculated for all of the allotropes listed in section~\ref{sec:results.types}; frequencies for S\(_2\) and S\(_8\) are listed in Table~\ref{tbl:frequencies}.
\begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{empirical_freqs.pdf} \caption{\label{fig:empirical_freqs} Vibrational frequencies of S\(_8\) calculated with various DFT functionals, compared with recommended experimental values.\cite{Chase1998}} \end{figure} \begin{table}[htb] \small \caption{\label{tbl:frequencies} Calculated and experimental vibrational frequencies for S\(_{\text{2}}\) and S\(_{\text{8}}\).\cite{Chase1998} All frequencies in cm\(^{\text{-1}}\).} \centering \begin{tabular}{lrrrrrr} \toprule & LDA & PBEsol & PBE0 & PBE0 & B3LYP & Expt\\ & & & & (scaled) & & \\ \midrule S\(_{\text{2}}\) & 716 & 713 & 751 & 721 & 714 & 724\\ \midrule S\(_{\text{8}}\) & 73 & 73 & 74 & 71 & 74 & 56\\ & 73 & 73 & 75 & 72 & 74 & 56\\ & 136 & 136 & 150 & 144 & 145 & 152\\ & 136 & 136 & 150 & 144 & 145 & 152\\ & 188 & 187 & 197 & 189 & 191 & 191\\ & 188 & 187 & 197 & 189 & 191 & 191\\ & 217 & 215 & 223 & 214 & 214 & 218\\ & 228 & 228 & 248 & 238 & 242 & 243\\ & 248 & 247 & 256 & 246 & 249 & 248\\ & 248 & 247 & 256 & 246 & 249 & 248\\ & 391 & 382 & 434 & 417 & 381 & 411\\ & 418 & 411 & 454 & 436 & 407 & 437\\ & 418 & 411 & 454 & 436 & 407 & 437\\ & 473 & 467 & 492 & 472 & 455 & 471\\ & 473 & 467 & 492 & 472 & 455 & 471\\ & 479 & 474 & 493 & 473 & 461 & 475\\ & 479 & 474 & 493 & 473 & 461 & 475\\ & 486 & 482 & 497 & 477 & 470 & 475\\ \bottomrule \end{tabular} \end{table} \subsubsection{Empirical corrections.~~} \label{sec:orgheadline19} Empirical scale factors were determined by fitting the frequencies to the experimental spectrum for S\(_8\). Note that frequencies are linearly proportional to their corresponding zero-point energies \(E_\textrm{ZPE} = \tfrac{1}{2}h \nu\) and hence this may also be seen as fitting to zero-point energy on a per-mode basis. 
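Because a single multiplicative scale factor enters the residual linearly, the least-squares optimum has a closed form, \(s = \sum_i \nu^{\mathrm{calc}}_i \nu^{\mathrm{expt}}_i / \sum_i (\nu^{\mathrm{calc}}_i)^2\); for such a linear residual the Levenberg-Marquardt routine used in this work converges to the same optimum. A minimal Python sketch, using the PBE0 and experimental S\(_8\) columns of Table~\ref{tbl:frequencies}:

```python
# Closed-form least-squares fit of a single multiplicative scale factor s,
# minimising sum_i (s * calc_i - expt_i)**2.  For this linear model the
# optimum is s = sum(calc*expt) / sum(calc**2); an iterative solver such as
# Levenberg-Marquardt (as used in the text) reaches the same value.
# Frequencies in cm^-1 for S8: calculated (PBE0) and experimental columns.
calc = [74, 75, 150, 150, 197, 197, 223, 248, 256, 256,
        434, 454, 454, 492, 492, 493, 493, 497]
expt = [56, 56, 152, 152, 191, 191, 218, 243, 248, 248,
        411, 437, 437, 471, 471, 475, 475, 475]

s = sum(c * e for c, e in zip(calc, expt)) / sum(c * c for c in calc)
print("scale factor = %.4f" % s)  # ~0.96, close to the reported PBE0 factor
```

The rounded table frequencies give a factor slightly different from the value fitted to the full-precision data, but the procedure is the same.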
The factors were calculated for each functional (Table~\ref{tbl:scale_factors}); scaling the frequencies from PBE0 by 96\% was found to give the best overall fit, and is employed here as the reference "empirically-corrected" method. The resulting set of frequencies is illustrated in Fig.~\ref{fig:empirical_freqs} alongside the uncorrected and experimental values. Using this scale factor also gives good agreement (\(<\)4~cm\(^{\text{-1}}\) error) with the stretching frequency of S\(_2\), which was not used in the fit (Table~\ref{tbl:frequencies}). Least-squares fitting was carried out with the Levenberg-Marquardt algorithm as implemented in SciPy.\cite{Marquardt1963,Scipy2001} \begin{table}[htb] \small \caption{\label{tbl:scale_factors} Optimal scale factors for exchange-correlation functionals, fitting to ground-state frequencies of S\(_{\text{8}}\)\cite{Chase1998}. Standard deviations \(s\) for the least-squares fit are given over the set of frequencies, both in units of frequency and as the corresponding zero-point energy per sulfur atom.} \centering \begin{tabular}{lrrr} \toprule Functional & scale factor & \(s\) / cm\(^{\text{-1}}\) & \(s\) / eV (ZPE)\\ \midrule LDA & 1.0085 & 11.57 & 0.00072\\ PBEsol & 1.0201 & 12.39 & 0.00077\\ PBE0 & 0.9596 & 6.41 & 0.00040\\ B3LYP & 1.0332 & 11.05 & 0.00068\\ \bottomrule \end{tabular} \end{table} \subsection{Equilibrium model} \label{sec:orgheadline21} Equilibrium compositions and free energies were computed as a function of temperature and pressure for all the data sets (Fig.~\ref{fig:composition}). There is significant disagreement between the predictions of the local exchange-correlation functionals LDA and PBEsol and the predicted composition from the hybrid functional PBE0, both before and after frequency scaling. While the "lower-level" calculations predict a diverse mixture of phases, hybrid DFT strongly supports the dominance of S\(_8\) and S\(_2\), at low and high temperatures respectively.
In all cases, this simplicity is strongest at low total pressure. The other phases that are present in any significant quantity are the cyclic allotropes where \(N\) = 4--7, in the range 600--1000 K. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{composition.pdf} \caption{\label{fig:composition} Compositions of modelled S\(_{x}\) mixtures over a range of equilibrium temperatures and pressures. Results are presented for density functional theory with one local (LDA), one semi-local (PBEsol) and one non-local exchange-correlation functional with empirical corrections. Composition is given in units of atom fraction. It is expected that the most accurate results are obtained using PBE0 with scaled frequencies.} \end{figure*} The corresponding free energies are also plotted in Fig.~\ref{fig:mu_functionals}; we note that agreement between the methods is much stronger at low temperatures where the mixture is dominated by larger molecules. This may be an artefact of referencing the free energies to S\(_{8}\); divergence in the energies of the smaller molecules leads to the disagreement at high temperatures. The other trend of note is the presence of a sharp bend in the \(\mu\)--\(T\) curve, particularly at low pressure, corresponding to the presence of S\(_2\) molecules. The point of onset depends on the data source, but the curve for PBE0 with empirical corrections closely tracks the minimum of the two curves from reference data. This represents a challenge to the formation of a simple parameterised model function, as it suggests the presence of a spike in the second derivative. Popular parameterisations of thermochemical properties, such as those in the NIST "WebBook", employ multiple temperature regions. This is usually viewed as a limitation, as it introduces non-physical discontinuities; with care, they could be aligned to an apparently physical discontinuity in the function.
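The origin of this sharp bend can be illustrated with a toy two-allotrope equilibrium, \(4\,\mathrm{S}_2 \rightleftharpoons \mathrm{S}_8\), in an ideal-gas mixture at fixed total pressure. The Python sketch below uses an illustrative, linear-in-\(T\) standard reaction free energy (placeholder values, not the fitted data of this work) and solves the pressure balance by bisection:

```python
import math

# Toy equilibrium between S2 and S8 only: 4 S2 <-> S8 in an ideal-gas
# mixture at total pressure P (bar).  With K(T) = exp(-dG0 / (kB * T)) and
# a 1 bar standard state, equilibrium requires p_S8 = K * p_S2**4.
kB = 8.617e-5  # Boltzmann constant, eV / K

def dG0(T):
    # Illustrative standard free energy of 4 S2 -> S8 (eV per reaction):
    # negative (S8 favoured) at low T, positive at high T.  Placeholder
    # values only -- not the fitted free energies of this study.
    return -3.0 + 3.5e-3 * T

def equilibrium_atom_fractions(T, P):
    """Atom fractions (x_S2, x_S8); solves p_S2 + p_S8 = P by bisection."""
    K = math.exp(-dG0(T) / (kB * T))
    lo, hi = 0.0, P
    for _ in range(200):
        p2 = 0.5 * (lo + hi)
        p8 = K * p2**4
        if p2 + p8 > P:       # total pressure exceeded: reduce p_S2
            hi = p2
        else:
            lo = p2
    n2, n8 = 2 * p2, 8 * p8   # 2 and 8 atoms per molecule
    return n2 / (n2 + n8), n8 / (n2 + n8)

x2_cold, x8_cold = equilibrium_atom_fractions(400.0, 1.0)
x2_hot, x8_hot = equilibrium_atom_fractions(1500.0, 1.0)
print(x8_cold, x2_hot)  # S8 dominates when cold, S2 when hot
```

Because the equilibrium constant enters through \(p_{\mathrm{S}_8} \propto p_{\mathrm{S}_2}^4\), the composition flips from almost pure S\(_8\) to almost pure S\(_2\) over a narrow temperature window, producing the kink in \(\mu(T)\).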
Taking the PBE0 results with empirical corrections as our preferred model, the free energy of the mixture is plotted with the chemical potentials of its component species on an atomic basis (Fig.~\ref{fig:mu_contributions}). \begin{figure*} \centering \includegraphics[width=\textwidth]{{mu_functionals}.pdf} \caption{\label{fig:mu_functionals} Chemical potential of S vapours per mole of atoms, given at several pressures according to a range of calculation methods. Data for S\(_2\) and S\(_8\) are also provided from the thermochemical literature.\cite{Chase1998} At low pressures, the free energy diverges by more than 50 kJ mol\(^{-1}\) S atoms between the S\(_2\) and S\(_8\) allotropes at high temperatures, while at high pressures there is less variation. Results from hybrid DFT calculations with scaled frequencies closely track the minimal value from the literature, while the local and semi-local exchange-correlation functionals diverge from this data due to over-estimation of the formation energy of S\(_2\).} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{mu_contributions.pdf} \caption{\label{fig:mu_contributions} Chemical potential of S vapours over a range of T and P, compared with individual allotropes. The equilibrium mixture is lower in energy than any single allotrope, but in most T/P regimes lies close to the chemical potential of S\(_2\) or S\(_8\). Data from vibrational calculations with PBE0 and empirically-corrected frequencies.} \end{figure*} \begin{table*}[htb] \small \caption{\label{tbl:mu_pbe0_scaled}Gibbs free energy of S vapours, tabulated from calculations with PBE0 and empirical corrections, with reference state (H=0) $\alpha$-sulfur at 298.15~K. Energies in kJ mol$^{\text{-1}}$, column headers in log$_{\text{10}}$(pressure/Pa).
Tables are provided with more values and greater decimal precision in the supplementary information.} \centering \begin{tabular}{rrrrrrrrrrr} \toprule \multirow{2}{*}{T/K} & \multicolumn{10}{c}{$\log_{10}(p/\mathrm{Pa})$} \\ & 1.00 & 1.67 & 2.33 & 3.00 & 3.67 & 4.33 & 5.00 & 5.67 & 6.33 & 7.00\\ \cmidrule(lr){1-1} \cmidrule(lr){2-11} 100 & 4.73 & 4.88 & 5.04 & 5.20 & 5.36 & 5.52 & 5.68 & 5.84 & 6.00 & 6.16\\ 150 & 2.29 & 2.53 & 2.77 & 3.01 & 3.25 & 3.49 & 3.72 & 3.96 & 4.20 & 4.44\\ 200 & -0.39 & -0.07 & 0.25 & 0.57 & 0.89 & 1.21 & 1.53 & 1.85 & 2.17 & 2.49\\ 250 & -3.27 & -2.87 & -2.47 & -2.08 & -1.68 & -1.28 & -0.88 & -0.48 & -0.08 & 0.32\\ 300 & -6.34 & -5.86 & -5.39 & -4.91 & -4.43 & -3.95 & -3.47 & -2.99 & -2.51 & -2.03\\ 350 & -9.58 & -9.02 & -8.46 & -7.90 & -7.34 & -6.78 & -6.23 & -5.67 & -5.11 & -4.55\\ 400 & -12.97 & -12.33 & -11.69 & -11.05 & -10.41 & -9.77 & -9.13 & -8.49 & -7.85 & -7.21\\ 450 & -16.50 & -15.77 & -15.05 & -14.33 & -13.61 & -12.89 & -12.17 & -11.45 & -10.73 & -10.01\\ 500 & -20.20 & -19.37 & -18.56 & -17.75 & -16.94 & -16.14 & -15.33 & -14.53 & -13.73 & -12.93\\ 550 & -24.24 & -23.17 & -22.22 & -21.31 & -20.40 & -19.51 & -18.62 & -17.73 & -16.85 & -15.96\\ 600 & -29.74 & -27.46 & -26.12 & -25.03 & -24.01 & -23.01 & -22.03 & -21.05 & -20.08 & -19.11\\ 650 & -37.54 & -33.52 & -30.62 & -29.01 & -27.78 & -26.65 & -25.56 & -24.49 & -23.42 & -22.36\\ 700 & -45.63 & -41.17 & -36.83 & -33.61 & -31.81 & -30.45 & -29.22 & -28.04 & -26.87 & -25.72\\ 750 & -53.78 & -49.00 & -44.23 & -39.63 & -36.36 & -34.48 & -33.03 & -31.71 & -30.43 & -29.18\\ 800 & -61.99 & -56.89 & -51.79 & -46.72 & -41.99 & -38.90 & -37.03 & -35.51 & -34.10 & -32.74\\ 850 & -70.27 & -64.84 & -59.43 & -54.02 & -48.67 & -44.06 & -41.31 & -39.46 & -37.88 & -36.39\\ 900 & -78.59 & -72.85 & -67.11 & -61.38 & -55.67 & -50.16 & -46.04 & -43.61 & -41.79 & -40.15\\ 950 & -86.97 & -80.91 & -74.85 & -68.80 & -62.75 & -56.78 & -51.43 & -48.04 & -45.84 & -44.01\\ 1000 & -95.39 & -89.01 & 
-82.64 & -76.26 & -69.90 & -63.57 & -57.48 & -52.84 & -50.06 & -47.98\\ 1050 & -103.86 & -97.17 & -90.47 & -83.77 & -77.09 & -70.43 & -63.88 & -58.14 & -54.50 & -52.07\\ 1100 & -112.38 & -105.36 & -98.34 & -91.33 & -84.32 & -77.34 & -70.42 & -63.91 & -59.21 & -56.29\\ 1150 & -120.94 & -113.60 & -106.26 & -98.93 & -91.60 & -84.29 & -77.03 & -70.00 & -64.26 & -60.68\\ 1200 & -129.53 & -121.88 & -114.22 & -106.57 & -98.92 & -91.29 & -83.70 & -76.25 & -69.65 & -65.25\\ 1250 & -138.17 & -130.19 & -122.22 & -114.24 & -106.28 & -98.33 & -90.41 & -82.60 & -75.33 & -70.03\\ 1300 & -146.84 & -138.54 & -130.25 & -121.96 & -113.67 & -105.40 & -97.16 & -89.01 & -81.23 & -75.04\\ 1350 & -155.55 & -146.93 & -138.32 & -129.71 & -121.10 & -112.51 & -103.95 & -95.46 & -87.25 & -80.27\\ 1400 & -164.29 & -155.36 & -146.42 & -137.49 & -128.57 & -119.66 & -110.77 & -101.95 & -93.36 & -85.72\\ 1450 & -173.06 & -163.81 & -154.56 & -145.31 & -136.07 & -126.84 & -117.63 & -108.49 & -99.53 & -91.33\\ \bottomrule \end{tabular} \end{table*} \begin{figure}[htb] \centering \includegraphics[width=8.3cm]{surface.pdf} \caption{\label{fig:surface} Temperature-pressure map of approximations to free energy of mixture. At dashed line \(\tfrac{1}{2} \mu_{\text{S}_2} = \tfrac{1}{8} \mu_{\text{S}_8}\); in shaded region the error in chemical potential \(\mu\) associated with assuming a single phase S\(_2\) or \(S_8\) exceeds 1 kJ/mol S atoms; in unshaded regions the corresponding single-phase free energy is close to the energy of the mixture.} \end{figure} The depression in free energy due to mixing of allotropes and presence of minor components can be quantified by subtracting the chemical potential of the mixture from the minimum of the chemical potentials of the majority components S\(_2\) and S\(_8\). The resulting plot (Fig.~\ref{fig:mu_mix_contribution}) shows that this has an impact ranging from around 1--4 kJ mol\(^{-1}\), depending on the pressure. This is illustrated as a contour plot in Fig. 
\ref{fig:surface}; within each unshaded region a single-phase model is adequate to within 1 kJ mol\(^{-1}\) S atoms. \begin{figure}[htb] \centering \includegraphics[width=8.3cm]{mu_mix_contribution.pdf} \caption{\label{fig:mu_mix_contribution} Depression in chemical potential of sulfur vapour \(\mu_{\mathrm{S}}\) due to mixing and presence of minor allotropes. \(\Delta \mu_{\mathrm{mixture}} = \mu_{\mathrm{S}} - \mathrm{min}\left( \frac{\mu_{\mathrm{S}_2}}{2}, \frac{\mu_{\mathrm{S}_8}}{8}\right)\)} \end{figure} \subsection{Parameterisation} \label{sec:orgheadline22} For convenience, a parameterised fit has been generated for the chemical potential of S over the T, P range 400--1500~K, 10\(^{\text{0}}\)--10\(^{\text{7}}\) Pa, incorporating an error function "switch" between S\(_{\text{2}}\)- and S\(_{\text{8}}\)-dominated regions and a Gaussian correction for the free energy depression where there is substantial mixing of phases. In eV per S atom, for $T$ in K, the form of the parameterisation is \begin{align} \mu_{\mathrm{S}}(T,P) &= \frac{1}{2} \left[ \mathrm{erfc}\left( \frac{T - T_{tr}}{w} \right) \frac{\mu_{\mathrm{S}_8}}{8} + \left( \mathrm{erf} \left(\frac{T - T_{tr}}{w}\right) + 1 \right) \frac{ \mu_{\mathrm{S}_2}}{2} \right] \nonumber \\ &\hphantom{=} - a(P) \exp \left( - \frac{\left(T - T_{tr} + b \right)^2}{2 c^2} \right) \end{align} where \(\mu_{\mathrm{S}_8} (T,P) = \num{7.620e-1} - \num{2.457e-3} T - \num{4.012e-6} T^2 + \num{1.808e-9} T^3 - \num{3.810e-13} T^4 + k_B T \ln\left(\frac{P}{\mathrm{1\,bar}}\right)\), \(\mu_{\mathrm{S}_2} (T,P) = \num{1.207} - \num{1.848e-3} T - \num{8.566e-7} T^2 + \num{4.001e-10} T^3 - \num{8.654e-14} T^4 + k_B T \ln\left(\frac{P}{\mathrm{1\,bar}}\right)\). \(T_{tr}\), the transition temperature obtained by solving \(\tfrac{1}{2} \mu_{\mathrm{S}_2} = \tfrac{1}{8} \mu_{\mathrm{S}_8}\), is approximated by the polynomial \(T_{tr} = \num{5.077e2} + \num{7.272e1}\log_{10}P - \num{8.295e0}(\log_{10}P)^2 + \num{1.828e0}(\log_{10}P)^3\).
The height of the Gaussian correction is \(a(P) = \num{1.414e3} - \num{2.041e2}\log_{10}P + \num{6.663e1}(\log_{10}P)^2\); the width and offset parameters are assigned more arbitrarily as \(b=10\), \(c = 80\) and \(w = 100\). It is noted that this parameterisation contains many fitting parameters; however, given its physically-motivated form the resulting function is smooth and well-behaved over the region studied, while the fits to \(\mu_{\mathrm{S}_2}\), \(\mu_{\mathrm{S}_8}\) and \(T_{tr}\) have some value in their own right. The fitting error is plotted in Fig.~\ref{fig:param_error}, and, while somewhat irregular, remains below 1 kJ mol\(^{-1}\). \begin{figure}[htb] \centering \includegraphics[width=8.3cm]{param_error.pdf} \caption{\label{fig:param_error} Error of parameterisation in kJ mol\(^{-1}\). Error is reduced to less than 1 kJ mol\(^{-1}\), but is highly non-uniform. Parameterisation is recommended for convenient application over wide T--P ranges; the full equilibrium solution is required to correctly capture fine detail.} \end{figure} \section{Conclusions} \label{sec:orgheadline24} The chemical potential of sulfur vapours has been studied by solving the thermodynamic equilibrium of 13 gas-phase allotropes, including the dominant components S\(_2\) and S\(_8\). Thermochemical data was obtained from first-principles calculations and corrected with an empirical scaling factor for the vibrational frequencies. The transition between these dominant phases is highly pressure-dependent, and the free energy is further depressed at the transition temperature by the presence of additional phases, especially at elevated pressures. Selection of an inappropriate gas phase can lead to errors of the order 50 kJ mol\(^{-1}\) atoms, while the minor phases contribute free energy of the order 1 kJ mol\(^{-1}\) atoms.
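For reference, the parameterisation of the preceding subsection can be evaluated directly; a minimal Python sketch follows. Two points are assumptions made explicit here rather than statements from the fit itself: the pressure term is taken in the ideal-gas form \(k_B T \ln(P/1\,\mathrm{bar})\), and the Gaussian height \(a(P)\) is interpreted as J mol\(^{-1}\) (converted to eV below); with these choices the sketch reproduces the tabulated value at 1000~K and 10\(^5\) Pa to well within the quoted fitting error.

```python
import math

# Evaluate the parameterised mu_S(T, P) in eV per S atom (T in K, P in Pa).
# Assumptions (see lead-in): ideal-gas pressure term kB*T*ln(P / 1 bar);
# Gaussian height a(P) taken in J mol^-1 and converted to eV.
KB = 8.617e-5                     # Boltzmann constant, eV K^-1
EV_TO_KJMOL = 96.485              # 1 eV per particle = 96.485 kJ mol^-1
J_PER_MOL_TO_EV = 1.0 / 96485.0

def _poly(coeffs, x):
    # Polynomial with coefficients ordered from constant term upwards.
    return sum(c * x**i for i, c in enumerate(coeffs))

def mu_S(T, P):
    """Chemical potential of sulfur vapour, eV per atom."""
    logP = math.log10(P)
    p_term = KB * T * math.log(P / 1.0e5)   # 1 bar = 1e5 Pa
    mu_S8 = _poly([7.620e-1, -2.457e-3, -4.012e-6, 1.808e-9, -3.810e-13], T) + p_term
    mu_S2 = _poly([1.207, -1.848e-3, -8.566e-7, 4.001e-10, -8.654e-14], T) + p_term
    T_tr = _poly([5.077e2, 7.272e1, -8.295, 1.828], logP)
    a = _poly([1.414e3, -2.041e2, 6.663e1], logP) * J_PER_MOL_TO_EV
    b, c, w = 10.0, 80.0, 100.0
    x = (T - T_tr) / w
    mu = 0.5 * (math.erfc(x) * mu_S8 / 8.0 + (math.erf(x) + 1.0) * mu_S2 / 2.0)
    return mu - a * math.exp(-(T - T_tr + b) ** 2 / (2.0 * c**2))

print(mu_S(1000.0, 1.0e5) * EV_TO_KJMOL)  # ~ -57.5 kJ mol^-1, cf. the table
```

The reference implementation in the repository linked below remains the authoritative version; this sketch is intended only to make the functional form unambiguous.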
The resulting chemical potential data is made available as tabulated data, as a parameterised model with error of the order 0.5 kJ mol\(^{-1}\) atoms, and through open-source code; the reference energy is compatible with the NIST-JANAF thermochemical tables for the solid \(\alpha\)-sulfur phase.\cite{Chase1998} This phase is frequently used as a reference state for thermodynamic studies of defects and stability in metal chalcogenides; the application of this gas-phase potential may allow such studies to examine a wide range of reactions involving sulfur vapours, taking into account the equilibrium within the vapour phase. The selection of appropriate chemical potentials is also critical for the development and interpretation of phase diagrams. \section{Data Access Statement} \label{sec:orgheadline25} The reference implementation of this model, complete with Python 2.7 code to generate all the plots in this paper as well as tabulated data in the form of Table~\ref{tbl:mu_pbe0_scaled}, is available online at \url{https://github.com/WMD-Bath/sulfur-model} and a snapshot of the code at the point of submission of this article is hosted by Zenodo and available with the DOI: \texttt{10.5281/zenodo.28536}. In addition, full tables are provided with this paper in the ESI\(^{\dag}\) for the composition, enthalpy and chemical potential from the calculations with PBE0 and empirical corrections; one set of enthalpy and chemical potential data follows Table~\ref{tbl:mu_pbe0_scaled} and uses the enthalpy of \(\alpha\)-S as a reference energy (for use with other tabulated data) while the other employs the ground state of S\(_8\) as a reference energy (for use with first-principles calculations). The code and its dependencies are Free Software, using a range of licenses. Input and output files from DFT calculations with FHI-aims have been deposited with Figshare and are available with the DOI: \texttt{10.6084/m9.figshare.1513736}.
A set of data generated during the evolutionary search, consisting of candidate structures and the DFT energies used to rank them, has been deposited with Figshare and is available with the DOI: \texttt{10.6084/m9.figshare.1513833}. \section{Acknowledgements} \label{sec:orgheadline26} The authors thank J. M. Skelton and J. M. Frost for useful discussions. USPEX/VASP calculations with PBE were carried out using the University of Bath's High Performance Computing facilities. Hybrid DFT calculations were carried out using ARCHER, the UK's national high-performance computing service, via our membership of the UK's HPC Materials Chemistry Consortium, which is funded by EPSRC (grant no. EP/L000202). A.J.J. is part of the EPSRC Doctoral Training Centre in Sustainable Chemical Technologies (grant no. EP/G03768X/1). The contribution of D.T. was supported by ERC Starting Grant 277757.
\section{} Hydrodynamics is one of the main tools to study the collective flow in high-energy nuclear collisions. Here we discuss results on elliptic flow obtained with the hydrodynamical code NeXSPheRIO. It is a combination of two codes: NeXus and SPheRIO. The SPheRIO code is used to compute the hydrodynamical evolution. It is based on Smoothed Particle Hydrodynamics, a method originally developed in astrophysics and adapted to relativistic heavy ion collisions \cite{spherio}. Its main advantage is that any geometry in the initial conditions can be incorporated. The NeXus code is used to compute the initial conditions $T_{\mu \nu}$, $j^{\mu}$ and $u^{\mu}$ on a proper time hypersurface \cite{IC}. NeXSPheRIO is run many times, corresponding to many different events or initial conditions. At the end, an average over final results is performed. This mimics experimental conditions. This differs from the canonical approach in hydrodynamics, where very smooth initial conditions are adjusted to reproduce some selected data. This code has been used to study a range of problems concerning relativistic nuclear collisions: effect of fluctuating initial conditions on particle distributions \cite{FIC}, energy dependence of the kaon effective temperature \cite{kaon}, interferometry at RHIC \cite{HBT}, transverse mass distributions at SPS for strange and non-strange particles \cite{strange}, effect of the different theoretical and experimental binnings \cite{BJP}, effect of the nature of the quark-hadron transition and of the particle emission mechanism \cite{QM05}. Here we study the evaluation of elliptic flow using the so-called standard method. The version of NeXSPheRIO used here has a first-order quark-hadron transition, sudden freeze out and no strangeness conservation.
The only parameter, the freeze out temperature, was assumed to be 150 MeV, since this gives good agreement for the charged particle pseudo-rapidity and transverse momentum distributions for all PHOBOS centrality windows. Theoretically, the impact parameter $\vec{b}$ is known and varies in the range of the centrality window chosen. The theoretical, or true, elliptic flow parameter at a given pseudo-rapidity $\eta$ is defined as \begin{equation} <v_2^b(\eta)>=<\frac{\int d^2N/d\phi d\eta \cos[2(\phi-\phi_b)]\, d\phi} {\int d^2N/d\phi d\eta \, d\phi}> \end{equation} where $\phi_b$ is the angle between $\vec{b}$ and some fixed reference axis. The average is performed over all events in the centrality bin. Experimentally, the impact parameter angle $\phi_b$ is not known. Instead, an approximation to it, $\psi_2$, is estimated, and the elliptic flow parameter with respect to this angle, $v_2^{obs}(\eta)$, is calculated. Then a correction is applied to $v_2^{obs}(\eta)$ to account for the reaction plane resolution, leading to the experimentally reconstructed elliptic flow parameter $v_2^{rec}(\eta)$. For example, in a PHOBOS-like way \cite{phobos} \begin{equation} <v_2^{rec}(\eta)>=<\frac{v_2^{obs}(\eta)} {\sqrt{<\cos[2(\psi_2^{<0}-\psi_2^{>0})]>}}> \end{equation} where \begin{equation} v_2^{obs}(\eta)=\frac{\sum_i d^2N/d\phi_i d\eta \cos[2(\phi_i-\psi_2)]} {\sum_i d^2N/d\phi_i d\eta} \end{equation} and \begin{equation} \psi_2=\frac{1}{2} \tan^{-1} \frac{\sum_i \sin 2 \phi_i}{\sum_i \cos 2 \phi_i} \end{equation} In the hit-based method, $\psi_2^{<0}$ and $\psi_2^{>0}$ are determined for subevents $\eta < 0$ and $>0$ respectively, and if $v_2$ is computed for a positive (negative) $\eta$, the sums in $\psi_2$, eq. (4), are over particles with $\eta < 0$ ($\eta > 0$).
(4), are over particles in both subevents, $v_2$ is obtained for particles around $0<\eta < 1.8$ and reflected (to account for the different multiplicities between a subevent and the sums in eq. (4), there is also an additional $\sqrt{2\alpha}$ with $\alpha\sim 1$, in the reaction plane correction in eq. (2)). Since both methods are in agreement but only the hit-based method covers a large pseudo-rapidity interval, we use the latter method. We want to check whether the theoretical and experimental estimates are in agreement, i.e., $<v_2^b(\eta)>=<v_2^{rec}(\eta)>$. A necessary condition for this, from eq. (2), is $<v_2^b(\eta)>\geq <v_2^{obs}(\eta)>$. In figure 1, we show the results for $<v_2^b(\eta)>$ (solid line) and $<v_2^{obs}(\eta)>$ (dashed line). We see that $<v_2^b(\eta)>\le <v_2^{obs}(\eta)>$ for most $\eta$'s. So, as shown also in the figure, dividing by a cosine to get $<v_2^{rec}(\eta)>$ (dotted curve) makes the disagreement worse: $<v_2^{b}(\eta)>$ and $<v_2^{rec}(\eta)>$ {\em are} different. This is true for all three PHOBOS centrality windows and more pronounced in the most central window. \begin{center} \begin{figure} \epsfig{file=fig1.eps,height=4.cm} \caption[]{Comparison between various ways of computing $v_2$ using NeXSPheRIO for the PHOBOS 15--25\% centrality window\cite{phobos}: solid line is $v_2^b$, obtained using the known impact parameter angle $\phi_b$, dashed (dotted) line is $v_2^{obs}$ ($v_2^{rec}$), obtained using the reconstructed impact parameter angle $\psi_2$ without (with) reaction plane correction.} \end{figure} \end{center} Since the standard way to include the correction for the reaction plane resolution (eq. (2)) seems inapplicable, we need to understand why. When we look at the distribution $d^2N/d\phi d\eta$ obtained in a NeXSPheRIO event (presumably also in a true event), it is not symmetric with respect to the reaction plane.
(We recall that the reaction plane is the plane defined by the impact parameter vector and the beam axis.) This happens because (i) the incident nuclei have a granular structure and (ii) the number of produced particles is finite. The symmetry might be better with respect to the plane with inclination $\psi_2$ in relation to the reference axis and containing the beam axis. Therefore we must write for each event \begin{widetext} \begin{eqnarray} \hspace*{-0.5cm} \frac{d^2N}{d\phi d\eta}& =& v_0(\eta) [1+ \sum_n 2 v^b_n(\eta) \cos(n(\phi-\phi_b))+ \sum_n 2 v'^{b}_n(\eta) \sin(n(\phi-\phi_b)) ]\\ & = & v_0(\eta) [1+ \sum_n 2 v^{obs}_n(\eta) \cos(n(\phi-\psi_2))+ \sum_n 2 v'^{obs}_n(\eta) \sin(n(\phi-\psi_2)) ] \end{eqnarray} \end{widetext} It follows that \begin{equation} v_2^{obs}(\eta)=v_2^b(\eta) \cos[2(\psi_2-\phi_b)] + v'^{b}_2(\eta) \sin[2(\psi_2-\phi_b)] \end{equation} We see that due to the sine term, we can indeed have $<v_2^{obs}(\eta)>\,>\,<v_2^b(\eta)>$, and therefore $<v_2^{rec}(\eta)>\,>\,<v_2^b(\eta)>$ as in figure 1. The sine term does not vanish upon averaging over events because if a choice such as eq. (4) is made for $\psi_2$, $v'^{b}_2(\eta)$ and $\sin(2(\psi_2-\phi_b))$ have the same sign. This can be visualized with fig. 2a. If the momentum distribution, instead of being symmetric with respect to the reaction plane (for example $v_2^b> 0, v'^{b}_2=0$), has a positive sine term added ($v'^b_2>0$), it now points at an angle between 0 and $\pi/4$ above the reaction plane. This angle is in fact $\psi_2$ and is determined experimentally with eq. (4). Therefore $v'^b_2\,\sin(2(\psi_2-\phi_b))>0$. Similarly, if $v'^b_2<0$, $\psi_2$ is between $-\pi/4$ and 0 and $v'^b_2\,\sin(2(\psi_2-\phi_b))>0$. (Rigorously, this sign condition is true if $\psi_2$ is computed for the same $\eta$ as $v'^b_2(\eta)$.
Due to the actual way of experimentally extracting $\psi_2$, we expect this condition to be approximately satisfied only for particles with small or moderate pseudorapidity, which are close enough to where $\psi_2$ was computed.) \begin{center} \begin{figure} \epsfig{file=fig2top.eps,height=3.5cm,angle=0} \epsfig{file=fig2bot.eps,height=3.5cm,angle=0} \caption[]{Assuming (top) $d^2N/d\phi\,d\eta=1+2v_2^b\cos(2(\phi-\phi_b)) +2v'^b_2\sin(2(\phi-\phi_b))$ with $\phi_b=0$: dash-dotted momentum distribution is symmetric with respect to the reaction plane ($v_2^b>0,v'^b_2=0$) and solid is asymmetric ($v_2^b>0,v'^b_2>0$); assuming (bottom) $d^2N/d\phi\,d\eta=1+2v_2^{obs}\cos(2(\phi-\psi_2)) +2v'^{obs}_2\sin(2(\phi-\psi_2))$ with $\phi_b=0$: dashed momentum distribution is symmetric with respect to the plane with inclination $\psi_2$ above the impact parameter and containing the beam axis ($v_2^{obs}>0,v'^{obs}_2=0$) and solid is asymmetric ($v_2^{obs}>0, v'^{obs}_2>0$).} \end{figure} \end{center} In the standard approach, for example as in the PHOBOS analysis, it is {\em assumed} that $d^2N/d\phi d\eta$ is symmetric with respect to the reaction plane and there are no sine terms in the Fourier decomposition (eq. (5)); eq. (7) leads to (for the hit-based or track-based method) \begin{equation} <v_2^b(\eta)>=<v_2^{obs}(\eta)>/<\cos[2(\psi_2-\phi_b)]> \end{equation} Then using $<\cos[2(\psi_2-\phi_b)]>=<\cos[2(\psi_2^{>}-\phi_b)]> =<\cos[2(\psi_2^{<}-\phi_b)]>$ and $<\cos[2(\psi_2^{>}-\psi_2^{<})]>=<\cos[2(\psi_2^{>}-\phi_b)]><\cos[2(\psi_2^{<}-\phi_b)]>=<\cos[2(\psi_2^{>}-\phi_b)]>^2$ (where it is assumed that the distributions of $\psi_2^{>}-\phi_b$ and $\psi_2^{<}-\phi_b$ are symmetrical with respect to the reference axis and $\psi_2^{>}-\phi_b$ and $\psi_2^{<}-\phi_b$ are independent), eq. (2) follows.
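The role of the resolution correction at finite multiplicity can be illustrated with a toy Monte Carlo (an illustrative sketch, not the NeXSPheRIO analysis): when $\psi_2$ of eq. (4) is estimated from the same particles used in eq. (3), $v_2^{obs}$ overshoots the true $v_2$ even for a distribution that is, on average, perfectly symmetric about the reaction plane, which is why subevents and corrections such as eq. (2) are introduced in the first place.

```python
import math, random

# Toy Monte Carlo for the event-plane ("standard") method, eqs. (3)-(4):
# sample particles from dN/dphi ~ 1 + 2*v2*cos(2*(phi - phi_b)) with the
# true reaction-plane angle phi_b = 0, estimate psi_2 from the particles
# themselves, and compare v2_obs with the true v2.
random.seed(1)
V2_TRUE, N_PART, N_EVENTS = 0.05, 200, 2000

def sample_phi():
    # Acceptance-rejection sampling against the bound 1 + 2*V2_TRUE.
    while True:
        phi = random.uniform(0.0, 2.0 * math.pi)
        if random.uniform(0.0, 1.0 + 2.0 * V2_TRUE) <= \
                1.0 + 2.0 * V2_TRUE * math.cos(2.0 * phi):
            return phi

v2_obs_sum = 0.0
for _ in range(N_EVENTS):
    phis = [sample_phi() for _ in range(N_PART)]
    s = sum(math.sin(2 * p) for p in phis)
    c = sum(math.cos(2 * p) for p in phis)
    psi2 = 0.5 * math.atan2(s, c)                              # eq. (4)
    v2_obs_sum += sum(math.cos(2 * (p - psi2)) for p in phis) / N_PART  # eq. (3)

v2_obs = v2_obs_sum / N_EVENTS
print(v2_obs)  # systematically larger than V2_TRUE = 0.05
```

With 200 particles per event the bias is sizeable; it shrinks as the multiplicity grows, since the event-plane estimate then locks onto the true reaction plane.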
However, as explained above, the use of NeXus initial conditions leads to $d^2N/d\phi d\eta$ not symmetric with respect to the reaction plane (and presumably this is also the case in each real event), so eqs. (8) and (2) are not valid. As already mentioned, the symmetry might be better with respect to the plane with inclination $\psi_2$ in relation to the reference axis and containing the beam axis. From eqs. (5) and (6), we have \begin{equation} v_2^b(\eta)=v_2^{obs}(\eta)\times \cos[2(\psi_2-\phi_b)] + v'^{obs}_2(\eta)\times \sin[2(\psi_2-\phi_b)]. \end{equation} If the symmetry is perfect, $v'^{obs}_2=0$. Otherwise, looking at fig. 2b, if the angular distribution, instead of being symmetric with respect to the axis with inclination $\psi_2$ in relation to the impact parameter (for example $v_2^{obs}> 0, v'^{obs}_2=0$), has a positive sine term added ($v'^{obs}_2>0$), it now points at an angle $\psi_2^{new}$ greater than $\psi_2$. If a negative sine term is added ($v'^{obs}_2<0$), it now points at an angle $\psi_2^{new}$ smaller than $\psi_2$. Both possibilities are equally likely for a given $\psi_2$ but lead to opposite signs for $v'^{obs}_2(\eta)\times \sin[2(\psi_2^{new}-\phi_b)]$ (in general). Therefore $<v'^{obs}_2(\eta)\times \sin[2(\psi_2-\phi_b)]>=0$. So whether the symmetry is perfect or approximate, $<v_2^b(\eta)>\sim <v_2^{obs}(\eta)\times \cos[2(\psi_2-\phi_b)]>$ and instead of eq. (2) we would have \begin{equation} <v_2^{Rec}(\eta)>=<v_2^{obs}(\eta)\times \sqrt{<\cos[2(\psi_2^{<0}-\psi_2^{>0})]>}> \end{equation} In figure 3, we show $<v_2^{Rec}(\eta)>$ (dash-dotted line) and $<v_2^b(\eta)>$ (solid line). We see that the agreement between both methods is improved compared to figure 1.
\begin{center} \begin{figure} \epsfig{file=fig3top.eps,height=4.0cm}\\ \vspace*{0.5cm} \epsfig{file=fig3med.eps,height=4.0cm}\\ \vspace*{0.4cm} \epsfig{file=fig3bot.eps,height=4.0cm} \caption[]{Comparison between true elliptic flow $v_2^b$ (solid line) and suggested method to compute reconstructed elliptic flow from data $v_2^{Rec}$ (dash-dotted) for the three PHOBOS centrality windows\cite{phobos}. Squares represent PHOBOS data (black error bars are 1$\sigma$ statistical errors and grey bands, systematic uncertainties at $\sim$90\% confidence level). } \end{figure} \end{center} We have also computed the elliptic flow parameter as a function of transverse momentum for charged hadrons with $0<\eta<1.5$ for the 50\% most central collisions. We found that $<v_2^b(p_\perp)>$ computed as in eq. (1) is well approximated by $<v_2^{Rec}(p_\perp)>$ computed as in eq. (10). In summary, from figure 1, elliptic flow estimated from the standard method with reaction plane correction is an overestimate of true elliptic flow ($v_2^{rec}>v_2^b$). From figure 3, using a method that takes into account the more symmetrical nature of the particle distribution in relation to the plane with inclination $\psi_2$ with respect to the reference axis and containing the beam axis (rather than with respect to the reaction plane), we get a better agreement between reconstructed and true elliptic flows ($v_2^{Rec}\sim v_2^b$). As for overestimating the true elliptic flow, a similar conclusion was reached in \cite{miller} and \cite{zhu}. In \cite{miller}, elliptic flow was assumed proportional to eccentricity and eccentricity was computed event-by-event using a Monte Carlo Glauber calculation. As in our case, $\vec{b}$ is known. It was found that the integrated true $v_2^b$ is smaller than $v_2^{rec}$ computed with a two-particle cumulant method (for all centralities) and larger than $v_2^{rec}$ computed with higher-order cumulants (for centralities 0--80\%).
In \cite{zhu}, elliptic flow was computed event-by-event within the UrQMD model. Again $\vec{b}$ is known. It was found that the integrated true $v_2^b$ is smaller than $v_2^{rec}$ computed with a two-particle cumulant method (for all centralities) and equal to $v_2^{rec}$ computed with higher-order cumulants (for centralities 10-50\%). Differential elliptic flow was also studied, leading to similar conclusions. In these two papers, it is expected \cite{miller,zhu} that there will be differences between $v_2^b$ and $v_2^{rec}$ calculated with the reaction-plane method or two-particle cumulant method, both because of the so-called non-flow correlations (overall momentum conservation, resonance decays, jet production, etc.) and event-by-event fluctuations (mostly eccentricity fluctuations). In principle, higher-order cumulant methods take care of non-flow effects. If there is still disagreement between the true elliptic flow and higher-order cumulant methods, as in \cite{miller}, then fluctuations are important. If there is agreement, as in \cite{zhu}, then non-flow effects are important and not fluctuations. In addition to the disagreement between their conclusions, \cite{miller} and \cite{zhu} do not (nor are they expected to) reproduce the RHIC data. So an interesting question is whether a more accepted hydrodynamical description would lead to a sizable effect. Using NeXSPheRIO, we found that the true elliptic flow $v_2^b(\eta=0)$ is overestimated by $\sim$ 15-30\% (depending on centrality) with the reaction-plane method, and $v_2^b(p_\perp)$ by $\sim$ 30\% at $p_\perp$=0.5 GeV. In our case, since $<v_2^b> \sim <v_2^{Rec}>$, a large part of the difference between the true $<v_2^b>$ and reconstructed $<v_2^{rec}>$ is due to the (wrong) assumption of symmetry of the particle distribution around the reaction plane, made to obtain $<v_2^{rec}>$.
Finally, we would like to emphasize that it is important to have a precise experimental determination of elliptic flow, in particular one free from the assumption of symmetry that we discussed. Elliptic flow, in principle, teaches us about the initial conditions and thermalization. In this manner, in \cite{Hirano1}, the author showed that with his hydrodynamical code plus freeze-out he could not reproduce $v_2(\eta)$, in particular at large $\eta$, and therefore concluded that there might be a lack of thermalization at these large $\eta$'s. In \cite{Hirano2}, it was shown that agreement with the $v_2(\eta)$ data could be obtained for central collisions with a similar hydrodynamical code but with color glass initial conditions and, instead of freeze-out, a transport code matched to the hydrodynamical code to describe particle emission. It was therefore concluded that some viscosity was necessary in the hadronic phase. Lastly, in \cite{Hirano3} (see figures 3 and 4), it was shown that for all centralities, Glauber-type initial conditions plus hadronic dissipation lead to a reasonable agreement with the $v_2(\eta)$ data while color glass condensate initial conditions plus hadronic dissipation do not, except in the most central window (unless some additional dissipation occurs in the early quark gluon plasma phase). Both sets of initial conditions without hadronic dissipation tend to underestimate the $v_2(\eta)$ data if $T_{f.out}=169$ MeV and overpredict them if $T_{f.out}=100$ MeV. However, these conclusions would be affected if the $v_2(\eta)$ data were lower, as we think they should be. (Incidentally, though our objective was not to reproduce data, note that our model with freeze-out (no transport code) reproduces reasonably well both the $v_2(\eta)$ data, as in \cite{Hirano3} (figure 3), and the $v_2(p_\perp)$ data (not shown).) Therefore, to know e.g.
what the initial conditions are or whether there is viscosity and in what phase, we need to settle the question of whether event-by-event fluctuations are important and take them into account in the experimental analysis. We acknowledge financial support by FAPESP (2004/10619-9, 2004/13309-0, 2004/15560-2), CAPES/PROBRAL, CNPq, FAPERJ and PRONEX.
\section*{Introduction} The empirical evaluation of many theories in comparative politics, ranging from government coalitions to voting behaviour, requires data on the policy positions of political parties. Yet, despite the promise and availability of several cross-national data sources, the methods used to estimate parties' positions continue to be a highly contested area of political science. In the debate regarding the appropriateness of competing methods, the computer-assisted analysis of political text has offered particularly promising insights \cite{Grimmer2013}. One prominent method in this area is the \emph{Wordscores} scaling method as proposed by \citeasnoun{Laver2003}. \emph{Wordscores} can be seen as an application of correspondence analysis to words as data \cite[366--368]{Lowe2008}. In a nutshell, the vocabulary of a set of `reference' texts, for which the position on the dimension of interest is known, is used as a training set for estimating the unknown positions of another set of `virgin' texts. To position documents and hence political actors, \emph{Wordscores} makes a series of assumptions regarding the distribution of reference documents across the dimension of interest, the distribution of words across reference documents, and the use of words as data more generally \cite{Lowe2008}. As \citeasnoun{Grimmer2013} note, however, most of these assumptions might not hold in practice, so it is crucial to evaluate the performance of computer-assisted methods for analysing political text. Nevertheless, despite the `validate, validate, validate' recommendation by \citeasnoun{Grimmer2013}, our review of the published studies using \emph{Wordscores} revealed that very few studies have assessed the validity of \emph{Wordscores} output. Moreover, most of the few attempts to assess the validity of \emph{Wordscores} in the context of estimating parties' positions were rather limited in scope.
In this paper, we present the most rigorous approach to date in validating \emph{Wordscores}.\footnote{Full replication material, including .do files and all associated source documents, will be made available through a public dataverse on publication.} After a short explanation of the \emph{Wordscores} assumptions, we review previous attempts to validate the \emph{Wordscores} output and outline the design of our study. Our analysis consists of an extensive application of \emph{Wordscores} to estimate the positions of 164 parties across 23 countries over four widely used policy dimensions. We furthermore check the robustness of our estimation by employing multiple reference scores for the reference texts and multiple methods of transforming the raw \emph{Wordscores} output. Following estimation, we attempt a rigorous assessment of validity in the framework laid out by \citeasnoun{Carmines1979}. We conclude that, despite the promise in the original expos\'e \cite{Laver2003}, \emph{Wordscores} cannot produce valid estimates of parties' positions in a cross-national context. Our findings have important implications for those who use \emph{Wordscores} in their empirical analyses. \section*{\emph{Wordscores} as a popular method of automated text analysis} The \emph{Wordscores} method was originally proposed by \citeasnoun{Laver2003}. According to the method, it is possible to estimate the positions of documents (called `virgin' texts) on an a priori defined dimension of interest by comparing them to a set of documents (called `reference' texts) whose position on the dimension of interest is known. \emph{Wordscores} can therefore be described as a supervised scaling model \cite{Grimmer2013}, in the sense that documents are placed on a priori defined policy scales, with the `reference texts' and the scores assigned to them serving as a training set in a machine learning framework.
As such, \emph{Wordscores} makes the `bag-of-words' assumption by treating individual words as `data' irrespective of their syntactic context, and assumes that the relative frequencies of specific words provide manifestations of underlying political positions \cite[748]{Klemmensen2007}. Over the years, \emph{Wordscores} has proven to be highly popular due to its ease of use and its implementation in two popular statistical programmes (Stata and R). As of October 2016, Google Scholar gives 1021 citations to \citeasnoun{Laver2003} who introduced \emph{Wordscores} (hereafter Laver et al.). Some of the most prominent applications of the method involve the analysis of election manifestos to estimate the policy preferences of political parties and the use of these measurements to empirically test a wide range of questions. For instance, \emph{Wordscores} has been used to explain government coalitions at the national and sub-national level \cite{Back2013,Debus2009a,Linhart2010,Proksch2006}, to study party competition by mapping parties in multi-dimensional ideological space \cite{Laver2006}, to study similarity in the context of intra-party politics \cite{Coffe2011,Debus2009b}, to investigate whether parties keep their policy promises \cite{Debus2008}, to explain the success of bills in legislatures \cite{Brunner2008} and the choice of putting the EU's constitutional treaty to a referendum \cite{Hug2007a}, to establish the policy preferences of sub-national parties and governments \cite{Klingelhofer2014,Muller2009}, or simply to map the positions of political parties across time \cite{Kritzinger2004}. Moreover, \emph{Wordscores} has been used extensively to estimate the positions of documents other than party manifestos.
These include speeches delivered by MPs in Ireland, Italy, Germany, and Spain \cite{Bernauer2009,Giannetti2009,Laver2002,Leonisio2012}, speeches by US state governors \cite{Weinberg2010}, leaders of Russian regional parliaments \cite{Baturo2013}, delegates at the Convention on the Future of Europe \cite**{Benoit2005} and the head of state in the UK \cite{Hakhverdian2009}. Furthermore, novel applications of \emph{Wordscores} outside comparative politics include analyses of reports from US state lotteries \cite{Charbonneau2009}, Chinese newspaper articles \cite{Chen2011}, public statements by US Senators justifying their votes \cite{Bertelli2006}, advocacy briefs in the US Supreme Court \cite{Evans2007a}, press releases of the European Commission \cite{Kluver2009}, and even open-ended questions in surveys \cite{Baek2011}. \begin{figure}[!htb] \caption{Analysis of citations to the Laver et al. article} \includegraphics[width=.45\textwidth]{Wordscores-Fig1a.pdf} \includegraphics[width=.45\textwidth]{Wordscores-Fig1b.pdf} \floatfoot{Note: The plot on the left shows mere citations compared to empirical applications, while the plot on the right shows the empirical applications published in peer-reviewed journals compared to other outlets.} \end{figure} Despite this breadth and wealth of applications, one could argue that \emph{Wordscores} is becoming increasingly outdated as a method, especially due to the advent of more sophisticated methods of automated text analysis in political science \citeaffixed{Grimmer2013}{see}. To investigate this possibility, we performed a rigorous review of all the citations to the Laver et al.
article that were captured by Google Scholar.\footnote{A spreadsheet with the details of the review can be found in the replication materials.} Our review revealed that there is a total of 146 uses of \emph{Wordscores} in empirical analyses, 78 of which have been published in peer-reviewed journals, with the remaining appearing in monographs, chapters in edited volumes, working papers, and conference papers. Interestingly, as Figure 1 shows, the publication of empirical analyses using \emph{Wordscores} constitutes a relatively stable fraction of the total citations to the Laver et al. article, whereas the trend of publications of empirical analyses in peer-reviewed journals closely mirrors the trend of publications in other outlets. Finally, as shown in Figure 2, our review shows no evidence that the empirical analyses using \emph{Wordscores} are now published in lower-quality journals (at least judging from their impact factor) compared to previous years. We therefore conclude that, despite the advent of more sophisticated methods of automated text analysis, \emph{Wordscores} deserves a rigorous evaluation in its own right as it remains a popular automated text analysis method in the literature. \begin{figure}[!htb] \caption{Journal impact factors of articles using \emph{Wordscores} in empirical analyses.} \includegraphics[width=.5\textwidth]{Wordscores-Fig2.pdf} \floatfoot{Note: Trend line is a locally adjusted regression curve (loess, bandwidth=.7).} \end{figure} \section*{Estimation and assumptions} The estimation process begins with the researcher defining a set of reference texts whose positions on a dimension can be assumed with some confidence (for example, when they are obtained from an expert survey). Reference texts therefore need to be informative with regard to their content (words), and need to have a known position on the dimension of interest.
\emph{Wordscores}, implemented as a user-written package in Stata and R, begins by counting the frequency of words in each reference text and assigns a score to each of these words. To do so, \emph{Wordscores} calculates the probability $P$ that a word $w$ appears in reference text $r$ as follows: \begin{equation} P_{wr}=\frac{F_{wr}}{\sum_{r}F_{wr}} \end{equation} where $F_{wr}$ is the frequency of word $w$ in reference text $r$. Using these probabilities, \emph{Wordscores} calculates a score for each word $w$ on each dimension of interest $d$ as follows: \begin{equation} S_{wd}=\sum_{r}P_{wr}A_{rd} \end{equation} where $A_{rd}$ is the known position of reference text $r$ on dimension $d$. To score each virgin text $v$ on dimension $d$, \emph{Wordscores} uses the word scores $S_{wd}$ obtained from the reference texts as follows: \begin{equation} S_{vd}=\sum_{w}F_{wv}S_{wd} \end{equation} According to \citeasnoun[316]{Laver2003}, $F_{wv}$ in equation 3 denotes `the relative frequency of each virgin text word [$w$], as a proportion of the \emph{total number of words in the virgin text} [$v$]' (emphasis added). However, all the statistical packages that have been written to implement \emph{Wordscores},\footnote{These are the \texttt{wordscores} package in Stata (written by Kenneth Benoit), and the \texttt{austin} (written by Will Lowe) and \texttt{quanteda} (written by Kenneth Benoit and Paul Nulty) packages in R.} use a different definition of $F_{wv}$. Here the relative frequency of each virgin text word $w$ is taken as a proportion of the total number of words \emph{co-occurring between the reference and the virgin texts}. This inconsistency between the Laver et al. article and the software implementations is of no particular concern to how \emph{Wordscores} works, but it does challenge the proof-of-concept validation presented in the Laver et al. article, as we will see in the following section.
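As a minimal sketch of equations (1)--(3), the following is not the actual Stata or R implementation (tokenisation here is naive whitespace splitting, and all function and variable names are our own); it uses the software packages' definition of $F_{wv}$, i.e. relative frequency over words co-occurring in reference and virgin texts:

```python
from collections import Counter

def wordscores(ref_texts, ref_scores, virgin_text):
    """Score a virgin text from scored reference texts (eqs. 1-3)."""
    counts = [Counter(t.split()) for t in ref_texts]
    vocab = set().union(*counts)
    word_scores = {}
    for w in vocab:
        f = [c[w] for c in counts]        # F_wr for each reference r
        total = sum(f)
        # Eq. (1): P_wr = F_wr / sum_r F_wr; Eq. (2): S_w = sum_r P_wr * A_r
        word_scores[w] = sum(fr / total * a for fr, a in zip(f, ref_scores))
    # Eq. (3), software definition of F_wv: relative frequency over the
    # words co-occurring between reference and virgin texts
    vc = Counter(virgin_text.split())
    overlap = {w: n for w, n in vc.items() if w in word_scores}
    n_overlap = sum(overlap.values())
    return sum(n / n_overlap * word_scores[w] for w, n in overlap.items())
```

For instance, with reference texts `"left left word"` and `"right right word"` scored $-1$ and $+1$, the shared word `word` receives a centrist score of $0$, illustrating why frequent overlapping words pull virgin scores towards the middle.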
Nevertheless, irrespective of how one defines $F_{wv}$, the $S_{vd}$ scores only indicate the relative position of virgin texts to each other on dimension $d$. To be able to compare the scores of virgin texts to the scores of reference texts, we need one more step. \emph{Wordscores} transforms the raw scores back to the original metric of the reference scores, as this allows us to compare the raw scores of the virgin texts with the assigned scores of the reference texts. In their original paper, Laver et al. suggest the following transformation: \begin{equation} S^{*}_{vd}=(S_{vd}-\bar{S}_{vd})\left(\frac{SD_{rd}}{SD_{vd}}\right)+\bar{S}_{vd} \end{equation} Here, $S^{*}_{vd}$ is the transformed score, $S_{vd}$ the raw score, $\bar{S}_{vd}$ the average raw score of the virgin texts, and $SD_{rd}$ and $SD_{vd}$ the standard deviations of the reference and virgin text scores respectively. This metric preserves the mean of the virgin text scores, but sets their variance equal to that of the reference text scores, thus allowing for comparison. \citeasnoun{Lowe2008} points out that the LBG transformation assumes that the raw virgin text scores have the correct mean, but the incorrect variance. However, due to the large number of overlapping words, the virgin score mean is invariably close to the reference text mean---an effect called shrinkage. These overlapping words are often words such as `the' or `and', and as they occur frequently in all documents, they get centrist scores. As such, the distances between the virgin texts are shrunken, and all texts are pulled towards the middle of the scale. Laver et al. fix this by recouping the original variance, but falsely assume that the newly derived mean is correct. This is no problem when the variance and mean are expected to be the same for both reference and virgin texts.
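A minimal sketch of the LBG transformation in eq. (4) follows; the function and variable names are our own, and the use of population standard deviations is an assumption, since the equation does not specify the variant:

```python
import statistics

def lbg_transform(virgin_raw, ref_scores):
    """Eq. (4): keep the mean of the raw virgin scores, but rescale
    their dispersion to match that of the assigned reference scores."""
    mean_v = statistics.mean(virgin_raw)
    sd_r = statistics.pstdev(ref_scores)
    sd_v = statistics.pstdev(virgin_raw)
    return [(s - mean_v) * (sd_r / sd_v) + mean_v for s in virgin_raw]
```

Because the rescaling uses the mean and standard deviation of the virgin set itself, adding or removing a single virgin text changes every transformed score in the set.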
However, as \citeasnoun[359--360]{Lowe2008} notes, increasing polarisation between parties, or a joint movement of a set of parties towards the extremes, makes it hard, if not impossible, to discern whether the mean of the virgin texts is centrist due to the reference scores or a shrinkage artifact. \citeasnoun[95--97]{Martin2008} agree with the above criticism and note several more shortcomings of the Laver et al. transformation method. First, as the transformation uses the standard deviation of the virgin text raw scores, it depends on the set of virgin texts themselves. This makes the scores non-robust with regard to the virgin texts, and any difference in the set of virgin texts automatically leads to a difference in the scores. This way, a researcher could obtain different positions for the virgin texts solely because of a different selection of virgin texts. Second, despite what Laver et al. claim, their method fails to recover the accurate relative distance ratios and therefore to put the transformed scores and the reference scores on the same metric. This is due to shrinkage, as we pointed out above. To combat these problems, \citeasnoun{Martin2008} provide a new transformation based on the idea of relative distance ratios $D_{i}$: \begin{equation} D_{i}=\frac{S_{i}-S_{R1}}{S_{R2}-S_{R1}} \end{equation} where two `anchoring texts' $R1$ and $R2$ are chosen, and the placement of all other texts is expressed in relation to this `standard unit' \cite[97]{Martin2008}. They then use these ratios to construct a new transformation: \begin{equation} S^{*}_{vd}=\left((S_{vd}-S_{R1})\frac{A_{R2}-A_{R1}}{S_{R2}-S_{R1}}\right)+A_{R1} \end{equation} Here, $S^{*}_{vd}$ is the transformed score, $S_{vd}$ the raw score, $A_{R1}$ and $A_{R2}$ are the scores assigned to reference texts $R1$ and $R2$ (where $R1$ is located to the left of $R2$), and $S_{R1}$ and $S_{R2}$ are the reference texts' raw scores.
In their article, Martin \& Vanberg use two reference texts, or `anchor texts', located to the left and right of the virgin texts. As seen in equation (6) above, both assigned scores for the reference texts are recovered, and the virgin texts are thus placed on the original metric. However, as soon as more than two reference texts are used---as \citeasnoun{Laver2003} strongly advise---not all the original exogenous scores of the reference texts can be recovered exactly, as only two texts can be used to define the metric. Martin \& Vanberg thus suggest a change to the transformation: \begin{equation} S^{*}_{vd}=\left((S_{vd}-S_{Rmin})\frac{A_{Rmax}-A_{Rmin}}{S_{Rmax}-S_{Rmin}}\right)+A_{Rmin} \end{equation} Here $A_{Rmin}$ and $A_{Rmax}$ denote the lowest and highest placed reference text on the original metric, and $S_{Rmin}$ and $S_{Rmax}$ their raw scores. The positions of these two texts will be recovered exactly, while the scores of the other texts will be distorted, as the relative distance ratios of the raw scores do not correspond to the relative distance ratios of the reference scores. Comparison between reference and virgin texts thus becomes difficult, and researchers face a trade-off between increased accuracy of the dictionary and internal consistency on the one hand, and the ability to make valid comparisons on the other \cite{Martin2008} (see Appendix F). To conclude, while the transformation by \citeasnoun{Laver2003} depends on the virgin texts and is indifferent to the composition of the reference texts, the transformation by \citeasnoun{Martin2008} depends on the reference texts and is indifferent to the composition of the set of virgin texts \cite[360]{Lowe2008}. Moreover, Laver et al. assume that the variances of both the set of reference texts and the set of virgin texts are the same, while the Martin \& Vanberg transformation does not do so \cite[110]{Benoit2008}. In this paper, we use both transformation methods, as we have no use for the raw scores and neither transformation has so far proven to be the most appropriate in all circumstances.
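The Martin \& Vanberg transformation in eq. (7) amounts to a linear map anchored at the two extreme reference texts, which can be sketched as follows (names are our own; a sketch, not the published implementation):

```python
def mv_transform(raw_scores, a_min, a_max, s_min, s_max):
    """Eq. (7): linear map that recovers the lowest (a_min) and highest
    (a_max) assigned reference positions exactly; s_min and s_max are
    the raw Wordscores of those two anchoring reference texts."""
    scale = (a_max - a_min) / (s_max - s_min)
    return [(s - s_min) * scale + a_min for s in raw_scores]
```

Only the two anchors are recovered exactly; the remaining transformed scores depend on the choice of reference texts but, unlike the LBG transformation, not on which other virgin texts happen to be in the set.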
More generally, \citeasnoun{Lowe2008} criticised \emph{Wordscores} for its heavy dependence on reference texts. \citeasnoun[366--368]{Lowe2008} views \emph{Wordscores} as an approximation to correspondence analysis and goes on to treat the method as a statistical ideal point model for words. In doing so, he identified six conditions that \emph{Wordscores} needs to fulfil in order to ensure consistent and unbiased estimation of the parameters of the ideal point model: \begin{enumerate} \item The word scores of the virgin texts need to be equally spaced and extend over the whole range of word scores for the reference texts \item The word scores of the virgin texts need to be spaced relative to the informativeness term (all texts are thus informative) \item The reference scores of the reference texts need to be equally spaced and extend past each word score of the virgin texts in both directions \item The word scores of the reference texts need to be spaced relative to the informativeness term (all texts are thus informative) \item All the words need to be equally informative \item The probability of seeing a word needs to be the same for all words \end{enumerate} According to \citeasnoun[369]{Lowe2008}, conditions 5 and 6 will never hold for word count data because any text, regardless of genre, exhibits a highly skewed word frequency distribution and contains many uninformative words. Nevertheless, we can significantly reduce these problems by filtering out uninformative words such as stop words (function words that do not convey meaning but primarily serve grammatical functions), very uncommon words, and words which appear in less than 1\% or more than 99\% of the documents in the corpus \cite{Grimmer2013}. Doing this makes the probability of seeing a word more equal, and removes non-informative words. Conditions 1 and 2 will be less likely to hold when there is not enough overlap between the word distributions of the reference documents.
However, by using many documents as reference texts (as Laver et al. advised), these conditions might be well approximated. Condition 2, however, suffers from the fact that some documents are small, and thus contain little to no information. This not only increases the confidence intervals around the estimates, but also creates a large bias in the estimates, negatively influencing the validity of the virgin document scores. Conditions 3 and 4 are similar to 1 and 2, but as words are more plentiful than texts, the chances of insufficient overlap are considerably lower, and the conditions are thus less important. Lowe even states `we might hope that they [words] may relatively evenly spread out across a policy dimension' \cite[369]{Lowe2008}, which makes the conditions even more plausible. Last, \citeasnoun[369]{Lowe2008} considers that conditions 1 and 3 can never hold simultaneously, as this would require an infinite data set---and thus concludes that bias in \emph{Wordscores} is inevitable. \section*{Previous validation attempts and their shortcomings} Considering the comprehensive critique of \citeasnoun{Lowe2008}, one could conclude that \emph{Wordscores} could find little use in political science. However, as \citeasnoun[270]{Grimmer2013} note, the question is not whether computer-assisted methods satisfy assumptions with regard to how language works and texts are generated, but whether methods can be evaluated on the basis of `their ability to perform some useful social scientific task'. In this respect, we should not focus on the assumptions, but on validation. As \citeasnoun[271]{Grimmer2013} note, validation in supervised methods such as \emph{Wordscores} should involve demonstrating that the computer-assisted method can reproduce the results in a set of documents for which the true scores of the quantity of interest are known.
When true scores are not known, the output of computer-assisted methods can be validated against human judgement \citeaffixed{Lowe2013}{see, for instance, the validation of another method by}. Validation, however, is more difficult in the case of parties' ideological positions because the true scores of the quantity of interest are unknown and it is difficult to estimate them reliably using human judgement \citeaffixed{Mikhaylov2012}{see}. In such instances, researchers often resort to assessing the `face validity' of estimates of party positions, in other words whether the positions `appear' to be valid in the eyes of the researcher. As \citeasnoun[363]{Sartori2007} pointed out, however, demonstrating a measure's face validity might be comforting when other types of validity cannot be assessed due to a lack of resources, but this strategy is not adequate. Face validity should be seen as a necessary but not sufficient condition for good measurement. In the absence of face validity, one could certainly question the usefulness of the measuring instrument. However, face validity by itself is not enough, and researchers need to assess the additional types of validity outlined in Table 1 \cite{Carmines1979}. These three additional types of validity should not be considered interchangeable \cite[537]{Adcock2001}. If we fail to validate a measure in terms of one type of validity, this cannot be compensated by showing that the measure fares well in terms of another.
\begin{table}[!htb]\footnotesize{ \caption{Types of validity and their assessment.} \begin{tabularx}{\textwidth}{l X l} \toprule Type& Assesses the degree to which our measure\ldots & The assessment is\ldots \\ \midrule Face & \ldots appears to be valid in light of heuristic knowledge & \ldots qualitative \\ Content & \ldots contains indicators that reflect the construct that is being measured & \ldots qualitative \\ Criterion & \ldots correlates with other known measures of the concept that is being measured & \ldots quantitative \\ Construct & \ldots is associated with measures of other concepts in a way that conforms to the theoretical expectations & \ldots quantitative \\ \bottomrule \multicolumn{3}{l}{Adapted from \citeasnoun{Carmines1979} and \citeasnoun{Sartori2007}}\\ \end{tabularx} } \end{table} More specifically, in the case of estimating parties' ideological positions, \citeasnoun[271]{Grimmer2013} argue that validation `requires numerous and substance-based evaluations', and propose that `scholars must combine experimental, substantive, and statistical evidence' to demonstrate that the output of computer-assisted methods such as \emph{Wordscores} can be considered valid. Nevertheless, while these recommendations have been stated in classic works on social \cite{Zeller1980} and political science \cite{Adcock2001} measurement, and on content analysis \cite{Krippendorff2004}, our review of the literature showed that most of the published studies have used the \emph{Wordscores} routines in Stata or R without validating the output. As expected, the first study that attempted to validate the \emph{Wordscores} output was the original article by \citeasnoun{Laver2003}. In their article, Laver et al. use the 1992 manifestos of British and Irish parties as reference texts and assign to them reference scores from expert surveys conducted in 1992, in order to estimate the positions of the parties' 1997 election manifestos on both economic and social policy dimensions.
Laver et al. then assess the criterion validity of the estimates by comparing the \emph{Wordscores} output against the estimates of an expert survey conducted in 1997. Laver et al. also used a similar approach to estimate parties' positions for the German election of 1994 but, in the absence of comparable expert survey data, assessed the German estimates only in terms of face validity. Our replication of the Laver et al. analysis not only revealed the inconsistencies between the definitions in the article and the way \emph{Wordscores} is implemented in R and Stata, but, more importantly, that the results presented in the article are not particularly robust. More specifically, we found that the addition of the manifestos of smaller parties to the analysis drastically changes the estimates provided by \emph{Wordscores}, making them inconsistent with the expert survey estimates. We report these findings in detail in Appendix A. Furthermore, we argue that if \emph{Wordscores} aims to be a useful tool for estimating parties' positions on policy dimensions, its validity needs to be evaluated beyond such simple `proof of concept' demonstrations, especially when these demonstrations are shown not to be robust. In this respect, \citeasnoun{Budge2007} compared the estimates given by \emph{Wordscores} to those of the Manifesto Project on the left-right dimension for British parties across time. Their results were unfavourable, as they found that \emph{Wordscores} produces flat scores across time compared to the Manifesto Project estimates. However, in a response, \citeasnoun{Benoit2007} dismissed these findings because \emph{Wordscores} was not properly implemented (Budge \& Pennings merged several manifestos before using them as reference texts) and because the Manifesto Project estimates were used as a benchmark, something which, the authors argue, can easily be contested.
\citeasnoun{Klemmensen2007} performed a similar evaluation by using \emph{Wordscores} to estimate the positions of Danish parties on the left-right dimension. Although their article has been widely cited as a successful validation of \emph{Wordscores}, a closer investigation of the results shows that this is not actually the case. The correlations reported by Klemmensen et al. show that \emph{Wordscores} performs worse than the Manifesto Project estimates when compared to a common benchmark (expert surveys). If the proponents of \emph{Wordscores} argue that the Manifesto Project estimates are problematic because they do not always correlate with expert surveys \citeaffixed{Benoit2007,Benoit2007a}{e.g.}, then it should follow that the \emph{Wordscores} estimates are even worse. Most recently, \citeasnoun**{Hjorth2015} repeated this exercise in both Denmark and Germany, validating the \emph{Wordscores} output against placements by experts and voters using rank-order correlations. The results of this validation indicated that the \emph{Wordscores} output correlated better with independent measures of party positions than the output produced by another popular text scaling method (\emph{Wordfish}). However, the rank-order correlations examined by the authors constitute a far too lenient test of a method which promises to deliver interval-level measurements of party positions (point estimates with associated 95\% confidence intervals). The most comprehensive validation so far has been conducted by \citeasnoun{Brauninger2013}, who used \emph{Wordscores} to estimate parties' left-right positions across 13 West European countries between 1980 and 2010 in a study specifically aimed at assessing the validity of the technique. Their results were mixed, concluding that \emph{Wordscores} estimates correlated well with the Manifesto Project in some countries, but not in others.
We note that the results of this comparative study were far more cautious compared to the earlier investigations based on single countries (including the original proof of concept in Laver et al.). The Br\"auninger et al. study, however, had its own limitations, namely that it only assessed estimates on a single dimension (left-right), using a single benchmark (the Manifesto Project data) which is controversial in itself as previously argued.\footnote{\citeasnoun{Ruedin2013} and \citeasnoun{Hug2007} compared \emph{Wordscores} estimates against many other methods aiming to measure parties' positions. Their comparisons, however, did not focus on \emph{Wordscores} as such but rather showed how results might differ across the various methods.} In general, all of the previous studies that attempted to assess the validity of \emph{Wordscores} in the context of party positions looked at criterion validity, neglecting other, equally important, types of validity as discussed above. Moreover, the correlation coefficients used to assess criterion validity were either Pearson's product-moment or Spearman's rank-order, which do not take into account systematic measurement error. Finally, none of the studies attempted to investigate the robustness of estimation by using different sources for the reference scores and different transformation methods. Our study addresses all these limitations and provides the most rigorous validation approach to date. We use \emph{Wordscores} to estimate parties' positions in 23 countries, across four different policy/ideological dimensions, using three different sets of reference scores and two different transformation methods, and we assess the estimates in terms of content, criterion, and construct validity using appropriate statistical measures. 
\section*{Study design} We applied \emph{Wordscores} to the manifestos of political parties published on the occasion of the 2009 elections to the European Parliament (hereafter we refer to these documents as `Euromanifestos') across 23 countries using the 2004 EP elections Euromanifestos as reference texts.\footnote{The countries in our study include all EU member-states up to 2009 with the exclusion of Luxembourg and Malta where no appropriate reference scores were available for 2004. The names of parties used in the study can be found in Appendix B.} We chose the elections to the EP over national elections to improve the comparability of estimates across countries. National election campaigns involve more idiosyncratic uses of political text than campaigns for the EP elections, which take place at the same time and within a shared political context. Moreover, we avoid stretching the comparison across time (unlike Br{\"a}uninger et al.) in order to ensure that our comparisons are not affected by changes in the political discourse. This way we provide a very favourable context to test the validity of \emph{Wordscores}, much like Laver et al. did. Instead of tracking down all these documents ourselves, we rely on an off-the-shelf collection provided by the Euromanifestos Project.\footnote{The collection can be accessed at http://www.ees-homepage.net/. The names of the documents used can be found in Appendix B. Moreover, following the advice by \citeasnoun[272--273]{Grimmer2013}, we processed these documents to make them suitable for computer-assisted analysis. We present our processing method in Appendix C.} These are the documents collected and coded (according to a hand-coding scheme similar to the Manifesto Project) by country-specific coders of the Euromanifestos Project \cite{Braun2010}. As also shown in the case of the Manifesto Project \cite{Gemenis2012,Hansen2008}, the collection of these documents is fraught with problems. 
Along with `genuine' Euromanifestos, the collection includes all sorts of documents of dubious usefulness in terms of estimating parties' positions. Amongst them, there are small pamphlets that do not present a broad policy profile, and documents that contain irrelevant or misleading sections (e.g. references to \emph{other} parties' positions). Evidently, such documents are highly problematic to use with computer-assisted methods for content analysis \citeaffixed{Proksch2009}{see}. We nevertheless decided to use this off-the-shelf database in order to test the method in a realistic context, as researchers are more likely to rely on off-the-shelf collections for their cross-country comparative analyses than to construct their own using country experts \citeaffixed[328]{Hug2007a,Pennings2006}{e.g.}. Unlike all the previous studies, we do not limit our validation to the left-right dimension, but estimate parties' positions on three additional dimensions: European integration, economic left-right, and the socio-cultural liberal-conservative dimension. These are dimensions that have been used extensively to analyse party competition in the context of (elections to) the EP \cite{Hix1999,Hix2006a,Hooghe1999,Hooghe2002,McElroy2007}. In addition, unlike previous studies, we use a variety of sources for the reference scores, as well as various sources of party positions to compare the \emph{Wordscores} estimates against. To begin with, we do not use the estimates from the Manifesto Project, as we agree with Benoit and Laver \citeyear{Benoit2007,Benoit2007a} that they are fraught with measurement error and, as such, should not be used as a `gold standard' for evaluating the validity of other methods. The reasons for this position are further explained elsewhere \citeaffixed{Gemenis2013}{see}. Instead, we use expert survey estimates, as Laver et al. and most of the empirical applications that we cited earlier on have done. 
Of course, expert surveys have their own problems, so we cross-validate the \emph{Wordscores} estimates using estimates from an alternative, less used, but highly useful approach: the judgemental estimation of party positions using manifestos and other document sources. For the advantages and shortcomings of the judgemental approach to coding see \citeasnoun[2293--2296]{Gemenis2015b}. We \emph{further} cross-validate the findings by employing two different data sources within each approach. For expert surveys, we use the 2003 \citeasnoun{Benoit2006} and the 2002 and 2010 Chapel Hill Expert Surveys \cite**{Bakker2012,Hooghe2010}, and for judgemental coding, the \emph{overall} position coders assigned to the party on the basis of the whole document in the Euromanifestos Project dataset \cite{Braun2010}, and the estimates from the 2009 EU Profiler dataset \cite{Trechsel2010} as scaled in \citeasnoun{Gemenis2013c}. Table 2 gives a summary of these sources, while the exact wording of the questions and scales used in our study is presented in Appendix D. \begin{table}[!htb]\footnotesize{ \caption{Party position data sources used in this study.} \begin{tabularx}{\textwidth}{X X X} \toprule Source type& Used for reference scores (2004) & Used for the validation (2009) \\ \midrule Expert survey&BL 2003&-\\ Expert survey&CHES 2002&CHES 2010\\ Judgemental coding&EMP 2004&EMP 2009\\ Judgemental coding&-&EUP 2009\\ \bottomrule \end{tabularx} } \end{table} Finally, unlike previous studies, we cross-validate the results by employing two different transformations for each set of \emph{Wordscores} estimates: the transformation originally proposed by Laver et al. (hereafter referred to as LBG) and the alternative transformation proposed by \citeasnoun{Martin2008}, hereafter referred to as MV.\footnote{Following Laver et al., we use all available documents for 2004 as reference texts when using the LBG transformation. 
This way, the texts more or less extend over the whole range as required by the first assumption made by \emph{Wordscores} (see section on \emph{Wordscores} assumptions). In Appendix E, we show which two documents we selected for each country to serve as anchors for estimation according to the MV transformation.} The use of all of these sources and methods for transforming the raw scores allows us to perform the most extensive validation of \emph{Wordscores} to date. \section*{Results} The combination of different sources of reference scores and transformation methods across the examined dimensions and countries means that we ran the \emph{Wordscores} scaling model a whopping 600 times for the validation: 25 countries/territories (including separate analyses for Flanders, Wallonia, and Northern Ireland) $\times$ 4 dimensions $\times$ 3 sources of reference scores $\times$ 2 transformation methods. All the \emph{Wordscores} estimates from these analyses were copied to a meta-dataset with parties as the unit of analysis and merged with estimates from the sources listed in the last column of Table 2. This meta-dataset was used for the subsequent analyses presented below. \subsection*{Content validity} According to \citeasnoun{Carmines1979}, content validity refers to whether the method used for measuring a latent construct represents all of its facets. If one uses multiple indicators that are scaled in a single index, then these indicators should represent all facets of the construct. Alternatively, if one uses a single indicator (as done, for instance, in surveys asking for a left-right placement) then this indicator has to capture all different facets of the construct. Moreover, a measure that includes facets that do not belong to the construct would be problematic in terms of content validity. 
As noted in the section about the previous validation attempts, the evaluation of content validity is usually qualitative in nature, so it would be difficult to see how it could be assessed in the context of the output presented by \emph{Wordscores}. We propose a workaround for this problem by conceptualising the construct in the context of \emph{Wordscores} as being represented by the words used in the reference texts. When \emph{Wordscores} places virgin texts on a dimension of interest it does so by calculating a wordscore for each of the words occurring in the reference texts. As \emph{Wordscores} is non-discriminating and scores all words on all dimensions, treating all words as equally informative of the dimension of interest is problematic in terms of content validity. This is because we should not expect each and every word in a reference text to be associated with a dimension of interest, no matter what this dimension is. This problem of \emph{Wordscores} is known, of course, but here we are interested in quantifying the degree of content validity in order to investigate how big a problem it is for estimating parties' positions. To do so, we decided to treat each of the words scored in the reference texts as an indicator of the latent concept, and evaluated whether these words relate to the latent concept/dimension of interest. To assess this, following \citeasnoun[101--102]{Krippendorff2004} we looked at the context in which these words appear. For example, the word `committee' can be indicative of a party's position in the dimension of EU integration when it refers to an EU committee, but not when it refers to other types of committees. We therefore hand-coded \emph{each and every word} in the reference texts to see how many of the words used to score the virgin texts were actually used in the context of the dimension of interest. 
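The bookkeeping behind this hand-coding can be sketched in a few lines of Python. Everything here is illustrative: the `EU_CUES` keyword rule is a hypothetical stand-in for the human judgment we actually applied, and sentences are assumed to be delimited by `.', `?', `!' or `;':

```python
import re

# Hypothetical stand-in for the human coder's judgment: a sentence is
# treated as referring to European integration if it contains a cue term.
EU_CUES = {"eu", "european", "europe", "brussels", "integration"}


def sentences(text):
    """Split a text into sentences on the delimiters '.', '?', '!' and ';'."""
    return [s.strip() for s in re.split(r"[.?!;]", text) if s.strip()]


def word_relevance(text):
    """Code every word occurrence 1 if its sentence refers to the dimension
    of interest (here, EU integration), and 0 otherwise."""
    codes = {}
    for sent in sentences(text):
        words = re.findall(r"[a-z']+", sent.lower())
        relevant = int(bool(EU_CUES & set(words)))
        for w in words:
            codes.setdefault(w, []).append(relevant)
    return codes


def average_relevance(text):
    """Share of word occurrences that appear in a relevant context."""
    codes = word_relevance(text)
    flat = [c for cs in codes.values() for c in cs]
    return sum(flat) / len(flat)
```

For a toy text such as ``We support European integration. The committee met on Tuesday.'', the four words of the first sentence are coded relevant and the five words of the second are not, giving an average relevance of $4/9$, the kind of per-document rate plotted in Figure 3.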
As this is a particularly time-consuming process, we restricted this analysis to British documents and the European integration dimension. Our choice of British parties should be fair to \emph{Wordscores} given that British Euromanifestos are some of the best documents in terms of relevance for assessing parties' positions on European integration. For our hand-coding exercise we defined the context as a natural sentence that starts with a capital letter and ends with one of the following delimiters: `.', `?', `!', `;' \cite**[942]{Daubler2012}. Items in (bullet-pointed) lists were considered as separate sentences. Each word was coded as one (1) when it was used in a context referring to European integration and zero (0) otherwise. In Figure 3 we plot the distribution of the average hand-coding evaluation among all the words used in each virgin document of each British party. What is clear from the figure is that the vast majority of words used by \emph{Wordscores} to estimate party positions are not particularly informative if one looks at the context in which they appear. It appears that \emph{Wordscores} uses far more noise than signal to estimate party positions. \begin{figure}[!htb] \caption{Assessing content validity in the European integration dimension.} \includegraphics[width=1\textwidth]{Wordscores-Fig3.pdf} \floatfoot{Note: The horizontal axis refers to the rate at which words were considered by the hand-coding to be relevant.} \end{figure} If one considers that all the noise brought by the non-informative words which are automatically used in \emph{Wordscores} moves party positions towards the middle of the scale, one can understand the logic behind the LBG transformation which stretches the party scores towards the end points of the scale. 
Although we agree that one needs to make some kind of transformation to account for the presence of noise that leads to the centrist bias in party positions, we do not agree that such a fundamental problem in the content validity of \emph{Wordscores} can be solved by a simple transformation of the raw scores. To give an example, we examine closely the wordscoring of the 2009 UKIP manifesto. UKIP is well-known for its extreme anti-EU stance, which should leave no doubts about where the party should be placed. The \emph{Wordscores} raw placement for UKIP is 11.5 [11.2, 11.8] and the LBG transformed one is 9.3 [5.5, 13]. In either case, the party is placed in the middle of the scale. The transformation only improves this placement by specifying that this counter-intuitive middle placement is estimated with a lot of uncertainty. \emph{Wordscores} tells us that UKIP could be placed on either side of the scale even though one should not have much difficulty in establishing the position of the party simply by looking at the UKIP Euromanifesto. One could argue, of course, that this is a problem of the 2009 UKIP Euromanifesto being very short. However, the size of the document should only contribute to making the confidence interval around the point estimate larger. The problem here, rather, is that the UKIP point estimate is counter-intuitively placed in the middle of the scale. This is not because the UKIP document is short, but because \emph{Wordscores} is unable to accurately estimate the party position due to all the noise that was introduced by the scoring of non-informative words. This is clearly shown in Figure 4, where we plotted all the words scored in the UKIP 2009 Euromanifesto according to their wordscore. 
Most of the words scored by \emph{Wordscores} are not informative with regard to placing UKIP on the European integration dimension, and since most of the words have wordscores near the middle of the scale, the point estimate for UKIP was counter-intuitively given at 11.5 (transformed by LBG to 9.3). \begin{figure}[!htb] \caption{Wordscoring the UKIP 2009 Euromanifesto on the European integration dimension.} \includegraphics[width=1\textwidth]{Wordscores-Fig4.pdf} \floatfoot{Note: Word size corresponds to the frequency of appearance in the UKIP virgin text; words that were hand-coded as being relevant in at least 50\% of the instances are plotted in black.} \end{figure} The problem is therefore deeper than the uncertainty that comes with the size of the documents, and this can be established simply by looking at the cases of parties with much larger documents than UKIP. The fundamental problem lies in the content validity of \emph{Wordscores}. The lack of content validity brought by scoring each and every word irrespective of its relevance in providing information about the dimension of interest pushes scores towards the middle of the scale. Transforming the raw scores will pull the estimates towards the endpoints of the scale, but there is no guarantee that the estimates will be pulled in the right direction. This will become evident in the next section where we examine the criterion validity of the \emph{Wordscores} estimates across all countries. \FloatBarrier \subsection*{Criterion validity} Criterion validity refers to the extent to which a measure correlates with another measure which reflects the same concept \cite{Carmines1979}. Here, we assess the criterion validity of \emph{Wordscores} by comparing its estimates to alternative measures of party positions on each dimension as outlined in the study design section. As we have argued, this comparison needs to be made using appropriate correlation coefficients. 
Neither Pearson's product-moment correlation coefficient nor Spearman's rank-order correlation coefficient is able to capture the presence of systematic measurement error. As has been pointed out by \citeasnoun[144]{Krippendorff1970}, both Pearson's and Spearman's coefficients are based on the presumption of linearity ($Y=bX$), which is not the same as agreement between two measurements ($Y=X$). It is therefore possible for two measures to correlate perfectly (according to Pearson's or Spearman's coefficients) without them being identical measures. Therefore, all the studies that have used such coefficients to assess the criterion validity of measures of party positions (including all previous validation studies involving \emph{Wordscores}) are likely to \emph{overestimate} the degree of validity in the presence of systematic measurement error. In order to overcome these problems, we use the concordance correlation coefficient (CCC) \cite{Lin1989}, defined as: \begin{equation} \rho_{c}=\frac{2\rho\sigma_{x}\sigma_{y}}{\sigma^{2}_{x}+\sigma^{2}_{y}+(\mu_{x}-\mu_{y})^{2}} \end{equation} where $\mu_{x}$ and $\mu_{y}$ are the means of the two measures, $\sigma^{2}_{x}$ and $\sigma^{2}_{y}$ are the corresponding variances, and $\rho$ is Pearson's product-moment correlation coefficient between the two measures. Put more simply, the CCC is conceptualised as \begin{equation} \rho_{c}=\rho C_{b} \end{equation} or, in other words, as the product of Pearson's product-moment correlation coefficient $\rho$, which measures dispersion (i.e. the degree of random measurement error), and a bias correction factor $C_{b}$, which measures the deviation from the 45 degree line of perfect concordance. A $\rho_{c}$ of 0 denotes absence of concordance, a $\rho_{c}$ of 1 denotes perfect concordance, and a $\rho_{c}$ of -1 perfect negative concordance. To estimate and interpret the CCC, we further need to consider two complicating factors. Firstly, the CCC requires both measures to be on the same scale. 
Normally, one could rescale all estimates of party positions from 0 to 1 using the well-known $\frac{\text{estimate} - \min}{\max - \min}$ formula. Although this is straightforward using the expert survey and judgemental coding data, where the scale minimum and maximum are clearly defined, this is not the case with \emph{Wordscores} estimates. Despite the promise made by the LBG transformation that it puts the estimates on the same metric as the reference texts \cite[317]{Laver2003}, this does not always happen in practice. For instance, our \emph{Wordscores} estimates on the left-right range from -2.09 to 22.45 when the BL expert survey that was used for the reference scores ranges from 0 to 20. The question is thus how to treat such counter-intuitive results. Following other studies that used the CCC with the Manifesto Project estimates that suffer from the same problem \cite{Gemenis2012,Gemenis2013}, we use the empirical scale minimum and maximum as given in the \emph{Wordscores} output. In one approach, we do this per dimension (in the aforementioned example, we use -2.09 and 22.45 as min and max in the formula respectively), and in another we implement this process per individual country. This way, we can check whether our inferences are robust to this rescaling. Secondly, we need to set beforehand an objective criterion of what will be considered the minimum accepted correlation for criterion validity. Unfortunately, all previous studies have interpreted correlation coefficients (as strong, moderate, etc.) on entirely subjective criteria. Given that Lin's original strength-of-agreement criterion $\rho_{c}>.9$ is too stringent for social science measurement, we use as the criterion the CCCs between the various estimates to which we compare the \emph{Wordscores} estimates.\footnote{We would like to thank Oliver Treib for suggesting this.} This way, we have a clear, precise, and objective criterion for our assessment. 
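The rescaling and the coefficient are both straightforward to compute; the following is a minimal Python sketch (our own illustration, not the \emph{Wordscores} software), using the biased ($1/n$) moments as in Lin (1989):

```python
def minmax_rescale(xs, lo=None, hi=None):
    """Rescale estimates to [0, 1] via (estimate - min) / (max - min).
    When the scale endpoints are unknown, fall back on the empirical min/max."""
    lo = min(xs) if lo is None else lo
    hi = max(xs) if hi is None else hi
    return [(x - lo) / (hi - lo) for x in xs]


def ccc(x, y):
    """Lin's concordance correlation coefficient (equation 1):
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

The example in the text is easy to reproduce: for $x=(1,2,3)$ and $y=(2,3,4)$, Pearson's $r$ is exactly 1, while the CCC is only $4/7\approx .57$, because the constant offset between the two measures is penalised by the $(\mu_{x}-\mu_{y})^{2}$ term; this is precisely the systematic measurement error that Pearson's and Spearman's coefficients ignore.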
If \emph{Wordscores} promises to estimate party positions accurately, then these positions should correlate with other measures of party positions at least as highly as these other measures correlate with one another. Finally, we introduce a measure of uncertainty for the CCC, based on 95\% z-transformed confidence intervals. To be as lenient as possible, we consider the estimates successful in terms of criterion validity when the upper CI (not the point estimate) of the CCC is higher than the three CCCs obtained when comparing the three other datasets of party positions to one another. Despite the objective but lenient terms of our evaluation, Figures 5 and 6 clearly show that the \emph{Wordscores} estimates cannot be considered as valid estimates of party positions in terms of criterion validity (for a detailed overview of the concordance correlations see Appendix G). No matter the dimension (left-right, European integration, economic, or socio-cultural), the source of reference scores (BL, CHES, or EMP), the method of transformation (LBG or MV), the rescaling used to estimate the CCC (whole dimension or per country), or the dataset to which we compared them (CHES, EMP, or EUP), the correlation of \emph{Wordscores} with other datasets never attained a CCC as high as the other datasets attained when compared to one another.\footnote{Detailed results and additional figures are available in Appendix G.} To be sure, one could argue that this pessimistic conclusion could be due to the constraints imposed by the rescaling and the calculation of the CCC. Nevertheless, the simple Pearson's $r$ correlation coefficients on the estimates before the rescaling needed for the CCC (available in Appendix H) were also very low. 
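For illustration, a generic z-transformed interval for a correlation coefficient can be computed as below. Note that \citeasnoun{Lin1989} derives a CCC-specific asymptotic standard error; the textbook $1/\sqrt{n-3}$ approximation for Pearson's $r$ is used here only as a sketch:

```python
import math


def fisher_ci(r, n, crit=1.96):
    """95% confidence interval for a correlation via Fisher's z-transformation:
    z = atanh(r), z +/- crit * se, back-transformed with tanh.
    The se = 1/sqrt(n-3) approximation is the generic one for Pearson's r;
    a CCC-specific standard error (Lin 1989) would replace it in practice."""
    z = math.atanh(r)
    se = 1 / math.sqrt(n - 3)
    return math.tanh(z - crit * se), math.tanh(z + crit * se)
```

Because the interval is computed on the z scale and back-transformed, it respects the $[-1, 1]$ bounds of the coefficient and is asymmetric around the point estimate, unlike a naive $r \pm 1.96\,se$ interval.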
\begin{figure}[ht] \caption{Assessing criterion validity on left-right and European integration dimensions.} \includegraphics[width=.8\textwidth]{Wordscores-Fig5a.pdf} \includegraphics[width=.8\textwidth]{Wordscores-Fig5b.pdf} \floatfoot{Note: Vertical lines represent the CCC between CHES/EMP (solid), CHES/EUP (dotted), EMP/EUP (dash).} \end{figure} \begin{figure}[ht] \caption{Assessing criterion validity on economic and socio-cultural dimensions.} \includegraphics[width=.8\textwidth]{Wordscores-Fig6a.pdf} \includegraphics[width=.8\textwidth]{Wordscores-Fig6b.pdf} \floatfoot{Note: Vertical lines represent the CCC between CHES/EMP (solid), CHES/EUP (dotted), EMP/EUP (dash).} \end{figure} \FloatBarrier \subsection*{Construct validity} Construct validity refers to the extent to which our measure behaves as expected within a given theoretical context. To assess construct validity, we formulate a simple hypothesis about the relationship between party positions and membership in the political groups of the EP. This relationship has been used before to illustrate the use of the Manifesto Project \cite[36--39]{Klingemann2007}, and expert survey \cite{McElroy2007} data. In this paper, we take this hypothesis a step further, arguing that we can predict with some confidence party membership in the political groups of the EP on the basis of national parties' positions on the socio-economic and European integration dimensions. To do so, we estimate a multinomial regression model, where the dependent variable takes eight values, one for each of the seven party groups in the EP (as of 2009), with non-attached parties forming the eighth group. To assess the explanatory power of the model we use count $R^{2}$, which is simply the proportion of correct predictions, as well as McFadden's pseudo-$R^{2}$, which compares the explanatory power added by the independent variables to that of a model that includes only the intercept. 
We compare the explanatory power of the model using the three predictor variables as estimated by \emph{Wordscores} (using all possible configurations of reference scores and transformations) to the explanatory power of models using exactly the same predictors as measured by three alternative datasets as shown in Table 2: the 2010 Chapel Hill Expert Survey, and the judgemental coding of the Euromanifestos Project and EU Profiler. As can be seen from Figure 7, in none of the cases do the \emph{Wordscores} estimates perform better than estimates from other datasets in predicting membership in the EP party groups. To avoid misleading evaluations as to how much better one model is compared to the other, we use the Bayesian Information Criterion (BIC) as a measure of overall fit. In every case, the difference in BIC between models using the \emph{Wordscores} estimates and models using estimates from the other datasets is larger than 10. This indicates `very strong' evidence \citeaffixed[87]{Long2001}{see} against the model using the \emph{Wordscores} estimates. What does this imply for \emph{Wordscores}? According to \citeasnoun[82]{Zeller1980}, construct validation requires `a pattern of consistent findings' across different hypotheses and studies in order for a measure to establish a high degree of construct validity. Our study did not provide such extensive evidence, but it is rather instructive that \emph{Wordscores} failed the very simple construct validation test that has been used elsewhere in the literature. 
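The three fit statistics are simple functions of the predicted group probabilities; the following is a minimal Python sketch with illustrative (made-up) data, not our actual estimation code:

```python
import math


def count_r2(y, p):
    """Count R^2: the proportion of observations whose observed group
    equals the group with the highest predicted probability."""
    hits = sum(1 for yi, pi in zip(y, p)
               if yi == max(range(len(pi)), key=pi.__getitem__))
    return hits / len(y)


def log_lik(y, p):
    """Multinomial log-likelihood of the observed groups y under
    the predicted probabilities p."""
    return sum(math.log(pi[yi]) for yi, pi in zip(y, p))


def mcfadden_r2(y, p_full, p_null):
    """McFadden's pseudo-R^2: 1 - LL_full / LL_null, where the null
    model is the intercept-only model (observed group frequencies)."""
    return 1 - log_lik(y, p_full) / log_lik(y, p_null)


def bic(y, p, k):
    """Bayesian Information Criterion: k*ln(n) - 2*LL; lower is better."""
    return k * math.log(len(y)) - 2 * log_lik(y, p)
```

A BIC difference larger than 10 between two models fitted to the same data is conventionally read as `very strong' evidence in favour of the model with the lower BIC, which is the decision rule applied above.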
\begin{figure}[ht] \caption{Assessing construct validity by predicting membership in EP party groups.} \includegraphics[width=.6\textwidth]{Wordscores-Fig7a.pdf} \includegraphics[width=.6\textwidth]{Wordscores-Fig7b.pdf} \floatfoot{Note: Vertical lines represent the R squared of models using estimates from CHES (solid), EUP (dashed), and EMP (dotted).} \end{figure} \FloatBarrier \section*{Conclusions} In their proof-of-concept, \citeasnoun[329]{Laver2003} promised that \emph{Wordscores} can deliver `effective' estimates of political actors' policy positions in a matter of seconds. Our replication of Laver et al. revealed inconsistencies in the software implementations of \emph{Wordscores} and showed that the results presented in their proof-of-concept are not particularly robust. Following Grimmer \& Stewart's \citeyear{Grimmer2013} advice to `validate, validate, validate', we subjected \emph{Wordscores} to a rigorous validation under conditions that should be favourable to the method. Hence, we focused on a cross-sectional rather than longitudinal \citeaffixed{Brauninger2013}{cf.} comparison, where we should not expect significant changes in the discourse that could compromise the effectiveness of the method. Moreover, we used an `off-the-shelf' collection of documents and data from expert surveys and the judgemental coding of party manifestos, which are consistent with how the method is used in practice. In contrast to what was promised by Laver et al., our findings showed that the \emph{Wordscores} estimates of party positions cannot be considered valid. The examination of content validity showed that the \emph{Wordscores} estimates are contaminated by the scoring of irrelevant words, and that this cannot be corrected by the LBG rescaling method. The examination of criterion validity showed that the \emph{Wordscores} estimates correlate far more weakly with other estimates of party positions than the other estimates correlate with one another. 
Moreover, the examination of construct validity showed that \emph{Wordscores} estimates have significantly lower predictive power when used in statistical models compared to other estimates of parties' positions. Finally, these findings were shown to be robust across different configurations of reference scores and rescaling methods. In general, our negative conclusions imply that \emph{Wordscores} should not be used to estimate parties' policy positions using electoral manifestos as reference and virgin texts. However, we need to qualify this conclusion. As the performance of \emph{Wordscores} has been shown to vary widely depending on the circumstances of estimation \citeaffixed{Brauninger2013}{see}, we outline three ways in which the \emph{Wordscores} estimates can be improved, namely by careful document selection, pre-processing, and parsing. With regard to document selection, we note that our results could be driven by the fact that we used Euromanifestos rather than national election manifestos. However, the most comprehensive validation study using national election manifestos found mixed results \citeaffixed{Brauninger2013}{see}. It seems that the problem is not so much the electoral context in which the documents are produced, but rather the quality of the documents as sources of party positions. In our validation we used the off-the-shelf collection of the Euromanifestos Project, which is less than ideal. One could possibly improve the validity of \emph{Wordscores} estimates by carefully selecting the documents to be analysed, as already pointed out by \citeasnoun{Proksch2009} for the case of Germany. Second, researchers can further improve the validity of \emph{Wordscores} estimates by using a more rigorous document pre-processing procedure than the one we used in this paper. Instead of removing the most frequently occurring words as we did, researchers could consider removing stop words even more rigorously using a pre-defined list. 
Removing stop words would reduce the amount of noise, which tends to push \emph{Wordscores} estimates towards the middle of the scale irrespective of the informative content of the documents. It is also worth mentioning that this problem has already been accounted for by another popular scaling method, \emph{Wordfish}, which applies weights `capturing the importance of [words] in discriminating between party positions' \cite[709]{Slapin2008}. Third, researchers should consider using only those parts of the documents they are interested in. So, when the object of investigation is foreign policy, only the paragraphs directly dealing with foreign policy should be used, and not the document as a whole. Parsing documents into different policy areas depending on the estimated policy dimension is required in text scaling methods like \emph{Wordfish} that assume that the text is unidimensional \cite{Slapin2008}. The same logic can be extended to \emph{Wordscores}, assuming that the content of policy areas one is not interested in would only add noise to the estimates. Nevertheless, while these three suggestions can improve the validity of the estimates, they come at the expense of considerable investment in time and resources. Document selection requires considerable expertise in party politics, which is often difficult to assemble and manage in a cross-national project. Lists of stop words are often context dependent, while compound words can cause considerable problems in identifying stop words by automated software. Moreover, parsing documents into policy-related sections requires knowledge of the language the documents were written in, something which goes against the promise of \emph{Wordscores} as a method where it is `not necessary for an analyst [using the technique] to understand or even read the text to which the technique is applied' \cite[329]{Laver2003}. 
\emph{Wordscores} could potentially produce valid estimates of party positions, but only after some serious investment in time and in language- and country-related expertise. We leave to the reader the question of whether this investment negates the original promise of a quick and easy method \cite[226, 312]{Laver2003}. What we showed here is that, when the method is used as a language-blind and quick way to estimate party positions, it does not deliver what it promises. Therefore, any researcher who wishes to use \emph{Wordscores} `as is' should always demonstrate the validity of the output using a carefully designed validation study as shown here. \newpage \theendnotes \newpage \section*{Appendix A: Reanalysis of Laver, Benoit \& Garry (2003)} Much of the initial validation for \emph{Wordscores} rested on scoring the 1997 Irish manifestos on a social and an economic dimension using the 1992 manifestos as reference texts \cite{Laver2003}. We attempted to replicate the findings in the paper using the manifestos, code, and reference scores as available on the \emph{Wordscores} website \url{http://www.tcd.ie/Political_Science/wordscores/index.html}. Unfortunately, we were not able to replicate the results published in Laver et al. using the materials from the website. Upon closer examination, we realized that replication is not possible for two reasons. First, the reference texts provided on the \emph{Wordscores} website are not the same as the ones used in the Laver et al. article. As is clear from the number of words, the documents provided on the website have been cleaned differently compared to the documents used in the Laver et al. article. This cleaning refers to the removal of numbers, special characters, document formatting content (tables of contents, headers, footers), and occasionally stop words, which is an important step in computer-assisted text analysis. 
Moreover, the website includes in the set of reference texts the manifestos of two additional parties (the Greens and Sinn Fein), unlike the Laver et al. article, which uses as reference texts the manifestos of only five parties. Second, and most importantly, the current (as of \today) `23-June-2009' version of \texttt{wordscores} for Stata gives different results than the older version `v0.36' that was used to produce the results in the Laver et al. article. The differences in the output given by these two versions can be attributed to changes in the code with regard to how $F_{wv}$ (equation 3 in the main text) is calculated. According to \citeasnoun[316]{Laver2003}, $F_{wv}$ denotes `the relative frequency of each virgin text word [w], as a proportion of the total number of words in the virgin text [v]' (emphasis added). This is what has been implemented in the `23-June-2009' version of the Stata \texttt{wordscores} package. Conversely, `v0.36' and the two packages that implement \emph{Wordscores} in R (`austin' and `quanteda') define $F_{wv}$ as the relative frequency of each virgin text word $w$ as a proportion of the total number of words co-occurring between the reference and the virgin texts. In an e-mail communication, Kenneth Benoit clarified that the `correct' implementation of \emph{Wordscores} is in the R packages and the `v0.36' version of \texttt{wordscores} for Stata. This implies that the definition of $F_{wv}$ given in Laver et al. is incorrect. It also implies that all those who used the `23-June-2009' version in their (published) papers got the `wrong' \emph{Wordscores} results. In our communication, Kenneth Benoit also indicated that the change in how $F_{wv}$ is defined does not make much difference, as the results correlate highly. 
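The consequence of the two definitions can be made concrete with a toy sketch (our own illustration, not the code of any of the packages; the word scores and tokens below are invented): a virgin text is scored as the frequency-weighted average of the reference word scores, and the two implementations differ only in the denominator used for $F_{wv}$.

```python
from collections import Counter

def virgin_scores(virgin_tokens, wordscores, all_words_denominator=True):
    """Score a virgin text given a dict of reference word scores (S_w).

    all_words_denominator=True:  F_wv uses ALL words in the virgin text
        as the denominator (the definition printed in Laver et al. 2003).
    all_words_denominator=False: F_wv uses only virgin-text words that
        co-occur with the reference texts (the `v0.36'/R behaviour).
    """
    counts = Counter(virgin_tokens)
    if all_words_denominator:
        denom = sum(counts.values())
    else:
        denom = sum(n for w, n in counts.items() if w in wordscores)
    return sum(n / denom * wordscores[w]
               for w, n in counts.items() if w in wordscores)

# Invented toy data: two scored words plus one word absent from the
# reference texts.
scores = {"tax": -1.0, "welfare": 1.0}
text = ["tax", "tax", "welfare", "umbrella"]
s_all = virgin_scores(text, scores, all_words_denominator=True)
s_co = virgin_scores(text, scores, all_words_denominator=False)
```

In this toy example the unscored word `umbrella' inflates the denominator under the all-words definition, shrinking the estimate towards zero, which is one way to see how the choice of denominator moves the resulting party positions.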
\begin{figure}[!htbp] \caption{Comparing the results of the two implementations of $F_{wv}$ in \texttt{wordscores} for Stata.} \includegraphics[width=1\textwidth]{totalcomparison.pdf} \end{figure} We tested this claim by running the two versions of \texttt{wordscores} for Stata (`v0.36' and `23-June-2009') across all the parties in our analysis for four different dimensions (left-right, European integration, economic, social), using the \citeasnoun{Benoit2006} expert survey for the reference text scores and the LBG transformation. Figure A1 shows the results, which clearly contradict the claim that the results of the two implementations would correlate highly (`about .97'). The concordance between the two sets of scores, measured by the concordance correlation coefficient, is .44 (left-right), .53 (European integration), .33 (economic), and .32 (social). The respective Pearson correlation coefficients are .55, .62, .41, and .38. The correlations are similar when different sources for the reference text scores are used. This is clear evidence that changing the definition of $F_{wv}$ changes the \emph{Wordscores} estimates radically. Nevertheless, the most important point here is that the inconsistency between the Laver et al. article and the software implementations challenges the proof-of-concept validation presented in that article. In the figures presented in Table 1 below, we show how the \emph{Wordscores} estimates for Irish party positions vary when one uses different sets of documents as reference texts (five parties, as in the Laver et al. article, versus seven parties, as in the replication material found on the \emph{Wordscores} website) and different implementations of \texttt{wordscores} for Stata (`v0.36' versus `23-June-2009'). The results in the top left quadrant of Table 1 attempt to replicate the findings of Laver et al. 
by using the manifestos of five Irish parties (FF, FG, Labour, DL, PD) and the `v0.36' \texttt{wordscores} for Stata (which is identical to the \texttt{austin} and \texttt{quanteda} packages in R). They are almost identical, save for some minor differences due to the way the documents were cleaned for the analysis in Laver et al. As pointed out in that article, the results look reasonable and consistent with how the parties have been placed in expert surveys (e.g. DL and Labour on the economic left, the other parties on the economic right). However, when we change the definition of $v$ from the total number of words co-occurring in the virgin and reference texts (as implemented in `v0.36') to `the total number of words in the virgin text', as stated in the original article \cite[316]{Laver2003} and implemented in the `23-June-2009' version of \texttt{wordscores} for Stata, we get the markedly different results presented in the bottom left quadrant. It is clear from the figure that changing the definition of $v$ produces estimates that move parties in a way that does not make much sense (for instance, Fianna Fail as the most economically left party) and otherwise makes it impossible to distinguish between the parties given the confidence intervals of the estimates. The change in the definition of $v$ that was implemented on 23 June 2009 produces party positions that appear reasonable and intuitive only if one adds the manifestos of the Greens and Sinn Fein to the set of reference texts, as shown in the bottom right quadrant. However, if we add these two manifestos to the set of reference texts but keep the definition of $v$ used for the original Laver et al. results (`v0.36'), we get the results in the top right quadrant. Again, these results do not make much sense, since the confidence intervals overlap significantly and many of the point estimates are rather implausible (e.g. the Greens and Sinn Fein are in the middle of both scales). 
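The concordance correlation coefficient used earlier in this appendix differs from Pearson's $r$ in that it also penalizes systematic differences in location and scale between the two sets of estimates; a self-contained sketch of Lin's coefficient (our own illustration, not the code behind Figure A1) is:

```python
import math

def pearson(x, y):
    # Standard Pearson product-moment correlation.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return sxy / (sx * sy)

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient.

    Equals Pearson's r scaled down by any difference in the means or
    variances of x and y, so ccc <= |r|, with equality only when the
    two series agree in location and scale.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

# A constant shift leaves Pearson's r at 1 but lowers the concordance.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 3.0, 4.0, 5.0]  # x shifted by +1
```

This is why two sets of estimates can correlate moderately in the Pearson sense while agreeing even less in the concordance sense, as in the figures reported above.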
We find it strange that the documents for the Greens and Sinn Fein were not included in the APSR article, but were included in the replication of the article as implemented on the \emph{Wordscores} website, which contained a different Stata \texttt{wordscores} code. Why did the authors not include the SF and Greens documents in their original analysis as presented in the APSR article? We believe that this was not done because the addition of these two parties in 2003, under the definition of $v$ which is used in R and is favoured by Kenneth Benoit (as per our e-mail communication), would have given results that are inconsistent with expert surveys. Similarly, when the \texttt{wordscores} code was changed and the results appeared to be implausible, the two documents were added as reference texts in the replication materials on the \emph{Wordscores} website to improve the validity of the results. Since the positions of parties under the Laver et al. transformation (which is used in the APSR article) are sensitive to the inclusion/exclusion of virgin texts, as shown by \citeasnoun{Martin2008}, we ask whether the exclusion of SF and the Greens from the analysis in Laver et al., combined with their inclusion in the `replication' of the analysis on the \emph{Wordscores} website, constitutes an attempt to `cherry-pick' among different possible results in a way that supports the argument in favour of \emph{Wordscores}. \begin{landscape} \begin{table}[htbp] \caption{Replication of the original scores} \centering \begin{tabular}{m{2.5cm}cc} \toprule \multicolumn{3}{c}{Number of Parties}\\ Stata Version & 5 parties & 7 parties\\ \midrule \multirow{9}{*}{v0.36} & & \\ & \includegraphics[scale=.9]{Topleft.pdf} & \includegraphics[scale=.9]{Topright.pdf} \\ & Laver et al. (2003) & \\ & \\ \multirow{-8}{*}{23-June-2009} & \includegraphics[scale=.9]{Bottomleft.pdf} & \includegraphics[scale=.9]{Bottomright.pdf} \\ & & Laver et al. 
(2003) Replication Material \\ \bottomrule \end{tabular} \end{table} \end{landscape} \newpage \begin{landscape} \section*{Appendix B: Documents used in the analysis} \small \begin{longtable}{m{1.8cm}m{1cm}m{3cm}m{7cm}m{7.2cm}m{1.5cm}m{1.5cm}} \toprule Country & Year & Party & Full Name & Title & Total Words* & Unique Words* \\ \midrule \endhead AT & 2004 & FP\"{O}& Freiheitliche Partei \"{O}sterreichs & T\"{u}rkei in der EU? & 1236 & 792 \\ AT & 2009 & FP\"{O} & Freiheitliche Partei \"{O}sterreichs & Echte Volksvertreter statt EU-Verr\"{a}ter & 704 & 448 \\ AT & 2004 & Gr\"{u}nen & Die Gr\"{u}nen – Die Gr\"{u}ne Alternative & Bestimmen Sie! Ihre Zukunt in Europa & 3699 & 1894 \\ AT & 2009 & Gr\"{u}nen & Die Gr\"{u}nen – Die Gr\"{u}ne Alternative & Vorw\"{a}rts Gr\"{u}n!& 3585 & 1830 \\ AT & 2009 & HPM & Liste Hans-Peter Martin & Nur er kontrolliert die M\"{a}chtigen & 119 & 106 \\ AT & 2009 & LF & Liberales Forum & Europa als Chance ergreifen & 5335 & 2308 \\ AT & 2004 & \"{O}VP & \"{O}sterreichische Volkspartei & Europa-Manifest zur Europawahl 2004 & 2226 & 1145 \\ AT & 2009 & \"{O}VP & \"{O}sterreichische Volkspartei & Wahlmanifest Zur Europawahl 2009 & 4238 & 1822 \\ AT & 2004 & SP\"{O} & Sozialdemokratische Partei \"{O}sterreichs & \"{O}sterreich Muss Wieder Geh\"{o}rt Werden! & 985 & 570 \\ AT & 2009 & SP\"{O} & Sozialdemokratische Partei \"{O}sterreichs & Wahlmanifest SP\"{O} & 2268 & 1197 \\ BE (FR) & 2004 & CDH & Centre D\'{e}mocrate Humaniste & Programme europ\'{e}en 2004 du CDH & 11184 & 3341 \\ BE (FR) & 2009 & CDH & Centre D\'{e}mocrate Humaniste & Un autre monde, une autre Europe! 
& 15247 & 3995 \\ BE (FR) & 2004 & ECOLO & Ecolo & Projet pour l'Europe & 4665 & 1969 \\ BE (FR) & 2009 & ECOLO & Ecolo & Programme Ecole \'{E}lections 2009 & 7760 & 2741 \\ BE (FR) & 2009 & FN & Front National & Le Manifeste du FN & 7004 & 2846 \\ BE (FR) & 2004 & MR & Mouvement R\'{e}formateur & 25 Propositions pour l'Europe & 3346 & 1486 \\ BE (FR) & 2009 & MR & Mouvement R\'{e}formateur & Le Programme Complet du Mouvement R\'{e}formateur \'{e}lections 2009 & 9592 & 3041 \\ BE (FR) & 2004 & PS & Parti Socialiste & Programme du PS pour les \'{e}lections europ\'{e}ennes & 15640 & 3836 \\ BE (FR) & 2009 & PS & Parti Socialiste & Programme Union Europ\'{e}enne 2009 & 12213 & 3522 \\ BE (NL) & 2004 & CD\&V & Christen-Democratisch en Vlaams & Europees verkiezingsprogramma CD\&V 13 Juni 2004 & 5391 & 1976 \\ BE (NL) & 2009 & CD\&V & Christen-Democratisch en Vlaams & Europa op maat van de globalisering & 3237 & 1435 \\ BE (NL) & 2004 & Groen! & Groen! & Europa kan zoveel beter - Jij beslist! & 6945 & 2612 \\ BE (NL) & 2009 & Groen! & Groen! 
& Groene wegen voor een beter Europa & 14811 & 4434 \\ BE (NL) & 2009 & LDD & Libertair, Direct, Democratisch - Lijst Dedecker & Europees Programma LDD - LDD, de Eurorealisten & 6353 & 2452 \\ BE (NL) & 2004 & NVA & Nieuw-Vlaamse Alliantie & Verkiezingsprogramma N-VA Europese verkiezingen 13 juni 2004 & 1774 & 867 \\ BE (NL) & 2009 & NVA & Nieuw-Vlaamse Alliantie & NVA Europees programma 2009 & 10955 & 3387 \\ BE (NL) & 2004 & SPA & Socialistische Partij Anders & Europees programme 13 juni 2004 & 7247 & 2433 \\ BE (NL) & 2009 & SPA & Socialistische Partij Anders & Mensen op 1 - Een eerlijke koers voor Europa & 5535 & 1860 \\ BE (NL) & 2004 & VB & Vlaams Belang & Vlaamse Staat, Europese Natie & 15197 & 4429 \\ BE (NL) & 2009 & VB & Vlaams Belang & Dit is ons land & 10178 & 3451 \\ BE (NL) & 2004 & VLD & Vlaamse Liberalen en Democraten & Programma VLD - Vlaamse en Europese verkiezingen 13 juni 2004 & 748 & 503 \\ BE (NL) & 2009 & VLD & Vlaamse Liberalen en Democraten & Top 15 van de Europese Liberalen voor de verkiezingen van het Europees parlement & 4696 & 1956 \\ CY & 2004 & AKEL & \selectlanguage{greek}{Ανορθωτικό Κόμμα Εργαζόμενου Λαού} & \selectlanguage{greek}{Προγραμματικη Διακηρυξη} & 2155 & 1180 \\ CY & 2009 & AKEL & \selectlanguage{greek}{Ανορθωτικό Κόμμα Εργαζόμενου Λαού} & \selectlanguage{greek}{Στην Ευρώπη Διεκδικητές και όχι Χειροκροτητές} & 989 & 638 \\ CY & 2004 & DIKO & \selectlanguage{greek}{Δημοκρατικό Κόμμα} & \selectlanguage{greek}{Ισχυρή Κύπρο στην Ευρώπη!} & 1698 & 1002 \\ CY & 2009 & DIKO & \selectlanguage{greek}{Δημοκρατικό Κόμμα} & \selectlanguage{greek}{Στείλε καθαρό µήνυµα στην Ευρώπη } & 1092 & 643 \\ CY & 2004 & DISY & \selectlanguage{greek}{Δημοκρατικός Συναγερμός} & \selectlanguage{greek}{Η καλύτερη ομάδα} & 1769 & 985 \\ CY & 2009 & DISY & \selectlanguage{greek}{Δημοκρατικός Συναγερμός} & \selectlanguage{greek}{Πρόταση Πολιτικής } & 1796 & 1055 \\ CY & 2004 & EDEK & \selectlanguage{greek}{Κίνημα Σοσιαλδημοκρατών ΕΔΕΚ} & 
\selectlanguage{greek}{Έχουμε θέση στην Ευρώπη} & 465 & 282 \\ CY & 2009 & EDEK & \selectlanguage{greek}{Κίνημα Σοσιαλδημοκρατών ΕΔΕΚ} & \selectlanguage{greek}{Ομιλία Γιαννάκη Ομήρου στην Κεντρική Συγκέντρωση } & 1154 & 698 \\ CY & 2004 & KOP & \selectlanguage{greek}{Κίνημα Οικολόγων Περιβαλλοντιστών} & \selectlanguage{greek}{Εκλογικο Μανιφεστο Των Ευρωεκλογων Του 2004 ΟμοσπονδΙασ Πρασινων Ευρωπαικων Κομματων} & 1466 & 827 \\ CZ & 2004 & CSSD & Cesk\'{a} strana soci\'{a}lne demokratick\'{a} & Za Evropu bezpec\'{i}, m\'{i}ru, prosperity a soci\'{a}ln\'{i}ch jistot & 1138 & 664 \\ CZ & 2009 & CSSD & Cesk\'{a} strana soci\'{a}lne demokratick\'{a} & Jistota 2009 & 876 & 627 \\ CZ & 2004 & KDU-CSL & Krestansk\'{a} a demokratick\'{a} unie – Ceskoslovensk\'{a} strana lidov\'{a} & Evropsk\'{y} volebn\'{i} program KDU - CSL & 2602 & 1443 \\ CZ & 2009 & KDU-CSL & Krestansk\'{a} a demokratick\'{a} unie – Ceskoslovensk\'{a} strana lidov\'{a} & Volebn\'{i} Program Pro Volby Do EP 2009-2014 & 1754 & 1173 \\ CZ & 2004 & KSCM & Komunistick\'{a} strana Cech a Moravy & S v\'{a}mi a pro v\'{a}s doma i v EU & 1771 & 1155 \\ CZ & 2009 & KSCM & Komunistick\'{a} strana Cech a Moravy & Otevren\'{y} volebn\'{i} program KSCM pro volby do - Evropsk\'{e}ho parlamentu 2009 & 698 & 519 \\ CZ & 2009 & NEZ & Politck\'{e} Hnut\'{i} Nezt\'{a}vislt\'{i} & Volby do Evropsk\'{e}ho parlamentu 2009 & 785 & 596 \\ CZ & 2004 & ODS & Obcansk\'{a} demokratickt\'{a} strana & Stejnt\'{e} \u{s}ance pro v\v{s}echny - Program pro volby do Evropsk\'{e}ho Parlamentu & 1439 & 976 \\ CZ & 2009 & ODS & Obcansk\'{a} demokratickt\'{a} strana & Volebnt\'{i} Program ODS & 5608 & 2865 \\ CZ & 2009 & SNK-ED & SNK Evrop\v{s}t\'{i} demokrat\'{e} & Spolecne uka\v{z}me Evrope sebevedomou tv\'{a}r Cesk\'{e} republiky, kter\'{a} um\'{i} vyu\v{z}\'{i}t sv\'{y}ch \v{s}anc\'{i}! & 2285 & 1365 \\ DE & 2004 & B90/GR\"{U}NEN & B\"{u}ndnis 90/Die Gr\"{u}nen & Europa Besser Machen - Du Entscheidest! 
& 24984 & 7243 \\ DE & 2009 & B90/GR\"{U}NEN & B\"{u}ndnis 90/Die Gr\"{u}nen & F\"{u}r ein besseres Europa! & 1263 & 756 \\ DE & 2004 & CDU & Christlich Demokratische Union Deutschlands & Europa-Manifest der CDU & 1773 & 999 \\ DE & 2009 & CDU & Christlich Demokratische Union Deutschlands & Starkes Europa – Sichere Zukunft & 3771 & 1759 \\ DE & 2004 & CSU & Christlich-Soziale Union in Bayern e. V. & F\"{u}r ein starkes Bayern in Europa & 1904 & 1062 \\ DE & 2009 & CSU & Christlich-Soziale Union in Bayern e. V. & CSU-Europawahlprogramm 2009 & 3217 & 1462 \\ DE & 2004 & FDP & Freie Demokratische Partei & Wir k\"{o}nnen Europa besser! - F\"{u}r ein freies und faires Europa& 6600 & 2664 \\ DE & 2009 & FDP & Freie Demokratische Partei & Ein Europa der Freiheit - f\"{u}r die Welt des 21. Jahrhunderts & 6523 & 2829 \\ DE & 2004 & DIELINKE & Partei des Demokratischen Sozialismus - DIE LINKE & Alternativen sind machbar: F\"{u}r ein soziales, demokratisches und friedliches Europa! & 12869 & 4777 \\ DE & 2009 & DIELINKE & Partei des Demokratischen Sozialismus - DIE LINKE & Solidarit\"{a}t, Demokratie, Frieden - Gemeinsam f\"{u}r den Wechsel in Europa! & 9718 & 3835 \\ DE & 2009 & REP & Die Republikaner & F\"{u}r die deutsche Republik – Raus aus dieser EU! 
& 444 & 320 \\ DE & 2004 & SPD & Sozialdemokratische Partei Deutschlands & Europamanifest der SPD & 1965 & 1019 \\ DE & 2009 & SPD & Sozialdemokratische Partei Deutschlands & Europamanifest & 5853 & 2404 \\ DK & 2004 & A & Socialdemokraterne & Socialdemokraternes Visioner for Fremtidens Europa & 2362 & 1199 \\ DK & 2009 & A & Socialdemokraterne & F\ae llesskab & 2758 & 1283 \\ DK & 2004 & B & Det Radikale Venstre - Danmarks social-liberale parti & Program til Europa-Parlamentsvalg 2004 & 1422 & 830 \\ DK & 2009 & B & Det Radikale Venstre - Danmarks social-liberale parti & Europa & 2338 & 1178 \\ DK & 2004 & C & Det Konservative Folkeparti & Sund konservativ fornuft i Europa & 948 & 530 \\ DK & 2009 & C & Det Konservative Folkeparti & Konservatives EP-valgprogram & 2847 & 1283 \\ DK & 2004 & F & Socialistisk Folkeparti & Fremtidens Europa - SFs valgprogram til Europaparlamentsvalg 2004 & 4151 & 1804 \\ DK & 2009 & F & Socialistisk Folkeparti & Et ansvarligt Europa & 473 & 338 \\ DK & 2009 & J & Juni Bev\ae gelsen & F\r{a} Tilsendt Hanne Dahls Nye Bog Helt Gratis & 417 & 263 \\ DK & 2009 & N & Folkebev\ae gelsen mod EU & Valggrundlag - opstillingsgrundlag og rammer & 1019 & 572 \\ DK & 2004 & O & Dansk Folkeparti & Den Europ\ae iske Union & 791 & 509 \\ DK & 2009 & O & Dansk Folkeparti & Den Europ\ae iske Union & 1452 & 816 \\ DK & 2004 & V & Venstre, Danmarks liberale parti & En st\ae rk stemme i det ny Europa – Venstres Valgprogram til EP valg 2004 & 2687 & 1360 \\ DK & 2009 & V & Venstre, Danmarks liberale parti & Venstres handlingsprogram til Europa-Parlamentsvalget 2009 & 4287 & 1709 \\ EE & 2004 & EKRP-EKD & Erakond Eesti Kristlikud Demokraadid-Eesti Kristlik Rahvapartei & Kaitse Eesti Krooni, Vali Rahvaliit & 1062 & 794 \\ EE & 2004 & IL & Erakond Isamaaliit & Eesti Eest Euroopas! 
& 987 & 782 \\ EE & 2004 & K & Eesti Keskerakond & Eesti Keskerakonna Valimisprogramm Euroopa Parlamendi Valimisteks & 848 & 696 \\ EE & 2009 & K & Eesti Keskerakond & Eesti Vajab Vahetust! & 1060 & 859 \\ EE & 2004 & RE & Eesti Reformierakond & Reformierakonna Platvorm Euroopa Parlamendi Valimisteks & 726 & 559 \\ EE & 2009 & RE & Eesti Reformierakond & Plaan Eesti Majanduskasvu Taastamiseks & 1421 & 1022 \\ EE & 2004 & RESP & Erakond Res Publica & Res Publica Teekaart Euroopas & 4258 & 2675 \\ EE & 2009 & RESP & Isamaa ja Res Publica Liit & Isamaa Ja Res Publica Liidu Programm Europarlamendi Valimistel & 833 & 639 \\ EE & 2004 & RM-SDE & Rahvaerakond M\~{o}\~{o}dukad-Sotsiaaldemokraatlik Erakond & Sotsiaaldemokraatliku Erakonna P\~{o}him\~{o}tted Ja Lubadused T\"{o}\"{o}ks Euroopa Parlamendis & 877 & 704 \\ EE & 2009 & RM-SDE & Rahvaerakond M\~{o}\~{o}dukad-Sotsiaaldemokraatlik Erakond & Inimesed Eelk\~{o}ige: Uus Suund Euroopale & 1397 & 1102 \\ ES & 2009 & BNG & Bloque Nacionalista Galego & Imos A Europa. V\'{e}s?& 5552 & 1840 \\ ES & 2009 & CDS & Centro Democrático y Social/Coalici\'{o}n Foro & Programa Electoral Para Las Elecciones Europeas -2009 & 595 & 366 \\ ES & 2009 & CIUCDCUDC & Convergència i Uni\'{o} & Programa Electoral Ciu Eleccions Europees 2009 & 22238 & 4931 \\ ES & 2009 & ERC & Esquerra Republicana de Catalunya & Programma Electoral - Eleccions Al Parlament Europeu 2009 & 4461 & 1741 \\ ES & 2004 & IU & Izquierda Unida & Programa De Izquierda Unida & 12489 & 3908 \\ ES & 2009 & IU & Izquierda Unida & Programa Electoral Elecciones Europeas 2009. 
Izquierda Unida & 16479 & 4534 \\ ES & 2009 & Los Verdes & Confederaci\'{o}n de los Verdes & Programa Electoral Los Verdes & 18814 & 5034 \\ ES & 2004 & PNV-EAJ & Partido Nacionalista Vasco-Euzko Alderdi Jeltzalea & Una Nueva Europa Ampliada Abierta A Las Personas Y Al Mundo & 22489 & 4968 \\ ES & 2009 & PNV-EAJ & Partido Nacionalista Vasco-Euzko Alderdi Jeltzalea & Programa Electoral Europeas-09 & 7285 & 2699 \\ ES & 2004 & PP & Partido Popular & Programa Electoral Elleciones Europeas & 6244 & 2140 \\ ES & 2009 & PP & Partido Popular & Programa Electoral Extenso Elecciones Al Parlamento Europeo & 17745 & 4591 \\ ES & 2004 & PSOE & Partido Socialista Obrero Espa\~{n}ol & Manifiesto Europeas 2004 & 4120 & 1744 \\ ES & 2009 & PSOE & Partido Socialista Obrero Espa\~{n}ol & Manifiesto-Programa Electoral Psoe 'Europeas 2009' & 5566 & 2201 \\ ES & 2009 & UPD & Uni\'{o}n Progreso y Democracia & Programa Electoral & 5971 & 2302 \\ FI & 2004 & KD & Suomen Kristillisdemokraatit & Kristillisdemokraattien & 313 & 285 \\ FI & 2009 & KD & Suomen Kristillisdemokraatit & Tehtävä EU:ssa & 4867 & 3050 \\ FI & 2004 & KESK & Suomen Keskusta & Keskustan Eurooppa-kannanotto & 2510 & 1732 \\ FI & 2009 & KESK & Suomen Keskusta & Urhoutta Eurooppaan & 3444 & 2347 \\ FI & 2004 & KOK & Kansallinen Kokoomus & "Jotta Suomella menisi paremmin" - Kokoomuksen eurovaalijulistus & 1847 & 1325 \\ FI & 2009 & KOK & Kansallinen Kokoomus & Kokoomuksen eurovaaliohjelma 2009 & 985 & 805 \\ FI & 2009 & PERUS & Perussuomalaiset & Perussuomalaisten Eu-Vaaliohjelma 2009 & 1626 & 1114 \\ FI & 2004 & RKP/SFP & Suomen ruotsalainen kansanpuolue/Svenska folkpartiet i Finland & Eurooppa Koskee Sinua & 1343 & 1026 \\ FI & 2009 & RKP/SFP & Suomen ruotsalainen kansanpuolue/Svenska folkpartiet i Finland & Moninaisuus tuo lis\"{a}arvoa. 
RKP – yhteinen tekij\"{a} & 749 & 602 \\ FI & 2004 & SDP & Suomen Sosialidemokraattinen Puolue & Ihmisten Eurooppaan & 1491 & 1129 \\ FI & 2009 & SDP & Suomen Sosialidemokraattinen Puolue & Euroopan Parlamentin Vaalien - Vaaliohjelma 2009 & 2331 & 1667 \\ FI & 2004 & VAS & Vasemmistoliitto & Meid\"{a}n Eurooppa & 574 & 470 \\ FI & 2009 & VAS & Vasemmistoliitto & Parempi Eurooppa on mahdollinen & 1474 & 1097 \\ FI & 2004 & VIHR & Vihre\"{a} liitto & Vihre\"{a}n liiton EU-ohjelma & 198 & 179 \\ FI & 2009 & VIHR & Vihre\"{a} liitto & Green new deal - uusi vihre\"{a} sopimus Euroopalle & 2115 & 1543 \\ FR & 2009 & EE & Europe Écologie & Le Contrat Ecologiste Pour L'Europe & 8427 & 3058 \\ FR & 2009 & FG & Front de Gauche & D\'{e}claration de principes du Front de Gauche pour Changer d'Europe & 1508 & 855 \\ FR & 2004 & FN & Front National & Les Abberations de l'Europe & 6120 & 2424 \\ FR & 2009 & FN & Front National & «Leur» Europe N'est Pas La Notre ! Voila L'europe Que Nous Voulons & 1344 & 803 \\ FR & 2009 & Libertas & Libertas & Le Projet & 490 & 321 \\ FR & 2009 & LO & Lutte ouvrière & Lutte Ouvrière dans les élections européennes & 837 & 482 \\ FR & 2009 & MODEM & Mouvement Démocrate & Nous l'Europe & 1683 & 870 \\ FR & 2004 & PCF & Parti communiste fran\c{c}ais & L'Europe: oui. Mais pas celle-l\`{a}! & 2310 & 1037 \\ FR & 2004 & PRG & Parti Radical de Gauche & De nouveaux caps pour l'Europe & 1313 & 735 \\ FR & 2004 & PS & Parti socialiste & Une Ambition Socialiste pour L'Europe& 4676 & 1853 \\ FR & 2009 & PS & Parti socialiste & L'Europe face \`{a} la crise: la relance des socialistes & 1119 & 640 \\ FR & 2004 & UDF & Union pour la D\'{e}mocratie Française & Nous avons besoin d'Europe & 8721 & 2941 \\ FR & 2004 & UMP & Union pour un mouvement populaire & Avec l'Europe, Voyons la France en Grand! 
& 1873 & 945 \\ FR & 2009 & UMP & Union pour un mouvement populaire & 30 Propositions pour une Europe Qui Prot\`{e}ge et Qui Agit & 4748 & 1841 \\ GR & 2004 & KKE & \selectlanguage{greek}{Κομμουνιστικό Κόμμα Ελλάδας} & \selectlanguage{greek}{∆ιακηρυξη Της Κεντρικης Επιτροπης Του ΚΚΕ} & 2810 & 1599 \\ GR & 2009 & KKE & \selectlanguage{greek}{Κομμουνιστικό Κόμμα Ελλάδας} & \selectlanguage{greek}{Διακηρυξη Της Κεντρικης Επιτροπης Του Κκε Για Τις} & 4179 & 2218 \\ GR & 2009 & LAOS & \selectlanguage{greek}{Λαικός Ορθόδοξος Συνδεσμός} & \selectlanguage{greek}{Ευρωεκλογες 2009} & 1163 & 705 \\ GR & 2004 & ND & \selectlanguage{greek}{Νέα Δημοκρατία} & \selectlanguage{greek}{Πολιτικα Κειμενα} & 9031 & 3225 \\ GR & 2009 & ND & \selectlanguage{greek}{Νέα Δημοκρατία} & \selectlanguage{greek}{Νεα Δημοκρατία Η Αυθεντική Ευρωπαϊκή Επιλογή} & 1686 & 1158 \\ GR & 2009 & OP & \selectlanguage{greek}{Οικολόγοι Πράσινοι} & \selectlanguage{greek}{Διακήρυξη για τις Ευρωεκλογές 2009} & 1392 & 906 \\ GR & 2004 & PASOK & \selectlanguage{greek}{Πανελλήνιο Σοσιαλιστικό Κίνημα} & \selectlanguage{greek}{Ευρωεκλογες 2004 - Το Όραµα, Οι Θέσεις, Οι Δεσµεύσεις µας} & 2037 & 1049 \\ GR & 2009 & PASOK & \selectlanguage{greek}{Πανελλήνιο Σοσιαλιστικό Κίνημα} & \selectlanguage{greek}{Ψηφιζουμε Για Την Ευρώπη - Αποφασιζουμε Για Την Ελλάδα} & 2236 & 1130 \\ GR & 2004 & SYRIZA & \selectlanguage{greek}{Συνασπισμός Ριζοσπαστικής Αριστεράς - Ενωτικό Κοινωνικό Μέτωπο} & \selectlanguage{greek}{Συνασπισμός Της Αριστεράς Των Κινημάτων Και Της Οικολογίας} & 2531 & 1328 \\ GR & 2009 & SYRIZA & \selectlanguage{greek}{Συνασπισμός Ριζοσπαστικής Αριστεράς - Ενωτικό Κοινωνικό Μέτωπο} & \selectlanguage{greek}{Διακηρυξη Για Τισ Ευρωεκλογεσ}& 1050 & 694 \\ HU & 2004 & FIDESZ-MPP & Fidesz – Magyar Polg\'{a}ri Sz\"{o}vets\'{e}g & Csak egyr\"{u}tt siker\"{u}lhet! 
& 23616 & 9144 \\ HU & 2009 & FIDESZ-MPP & Fidesz – Magyar Polg\'{a}ri Sz\"{o}vets\'{e}g & El\^{o}sz\'{o} & 64743 & 18442 \\ HU & 2009 & JOBBIK & Jobbik Magyarorsz\'{a}g\'{e}rt Mozgalom & Magyarorsz\'{a}g a magyarok\'{e} & 14393 & 7280 \\ HU & 2004 & MDF & Magyar Demokrata F\'{o}rum & "A norm\'{a}lis Magyarorsz\'{a}g\'{e}rt!" & 1514 & 1096 \\ HU & 2009 & MDF & Magyar Demokrata F\'{o}rum & Mi\'{e}rt IGEN az MDF list\'{a}j\'{a}ra j\'{u}nius 7-\'{e}n? & 2681 & 1614 \\ HU & 2004 & MSZP & Magyar Szocialista P\'{a}rt & A Sikeres Eur\'{o}pai Magyarorsz\'{a}g\'{e}rtM & 772 & 543 \\ HU & 2009 & MSZP & Magyar Szocialista P\'{a}rt & \'{U}jult erovel & 1570 & 1066 \\ HU & 2004 & SZDSZ & Szabad Demokrat\'{a}k Sz\"{o}vets\'{e}ge & Egy \'{U}j, Kibov\'{i}tett Eur\'{o}pa, Mely Nyitott \'{A}llampolg\'{a}rai & 7478 & 3776 \\ HU & 2009 & SZDSZ & Szabad Demokrat\'{a}k Sz\"{o}vets\'{e}ge & 200 001 Szabad, Demokrata Szavaz\'{o} & 456 & 375 \\ IE & 2004 & FF & Fianna F\'{a}il & Fianna F\'{a}il 2004 & 4707 & 1510 \\ IE & 2009 & FF & Fianna F\'{a}il & Europe, we are better working together & 8014 & 2265 \\ IE & 2004 & FG & Fine Gael & Fine Gael European Parliament Elections 2004 & 5861 & 1872 \\ IE & 2009 & FG & Fine Gael & Securing Ireland's Future in Europe & 6404 & 1882 \\ IE & 2004 & GREENS & Green Party & Manifesto 2004 - European and Local Elections & 3948 & 1595 \\ IE & 2009 & GREENS & Green Party & A Green New Deal for Europe & 2445 & 1033 \\ IE & 2004 & LAB & Labour Party & Making the Difference in Europe & 3080 & 1116 \\ IE & 2009 & LAB & Labour Party & Putting people, jobs and fairness at the heart of Europe & 4441 & 1533 \\ IE & 2004 & SF & Sinn F\'{e}in & An Ireland of Equals in a Europe of Equals & 12062 & 2610 \\ IE & 2009 & SF & Sinn F\'{e}in & Europe '09 & 5008 & 1369 \\ IE & 2009 & SP & Socialist Party & We Want a Europe Fit for Workers & 3332 & 1412 \\ IT & 2009 & Altra & Altra Italia & Programma Unitario Per Le Elezioni Europee & 2181 & 1171 \\ IT & 2004 & AN & Alleanza 
Nazionale & Programma - Alleanza Nazionale & 2015 & 1047 \\ IT & 2009 & Auton. & L'Autonomia & Nasce il Polo dell'Autonomia & 382 & 271 \\ IT & 2004 & DSULIVO & Uniti nell'Ulivo & L'Europa contro le nostre paure & 14761 & 3910 \\ IT & 2004 & FI & Forza Italia & Elezioni Per Il Parlamento Europeo & 3591 & 1371 \\ IT & 2009 & IDV & Italia dei Valori & Torniamo In Europa & 244 & 181 \\ IT & 2004 & LN & Lega Nord & Programma Per Le Elezioni Europee 2004 & 6306 & 2401 \\ IT & 2009 & LN & Lega Nord & Proposte e Obiettivi & 21632 & 5568 \\ IT & 2009 & PDL & Il Popolo della Libert\`{a} & Elezioni 2009: Manifesto del Partito Popolare Europeo & 777 & 501 \\ IT & 2004 & PRC & Partito della Rifondazione Comunista & La Sinistra, L'altra Europa & 29371 & 6610 \\ IT & 2009 & SEL & Sinistra e Liberta & Sinistra A Liberta - Programma Elettorale & 3963 & 1729 \\ IT & 2009 & UDC & Unione dei Democratici Cristiani e di Centro & UDC 2009 & 356 & 259 \\ LT & 2009 & DP & Darbo partija & Geroves Lietuvai Europoje – Svarbiausias Yra Tavo Balsas ! & 681 & 547 \\ LT & 2004 & LiCS & Liberalu ir centro sajunga & "Padarykime Europa Naudinga Lietuvai" & 3325 & 1856 \\ LT & 2009 & LiCS & Liberalu ir centro sajunga & Liberalu Ir Centro Sajungos Rinkimu I Europos Parlamenta & 4282 & 2333 \\ LT & 2004 & LKD & Lietuvos krik\u{s}cionys demokratai & 2004 Metu Rinkimu I Europos Parlamenta Programa & 2699 & 1662 \\ LT & 2009 & LLRA & Lietuvos lenku rinkimu akcija & Lietuvos Lenku Rinkimu Akcijos Kandidatu I Europos Parlamenta Rinkimu Deklaracija & 1291 & 914 \\ LT & 2009 & LRLS & Lietuvos Respublikos Liberalu sajudis & Programa 2009 – 2013 M. Europos Parlamento Kadencijai & 2599 & 1441 \\ LT & 2004 & LSDP & Lietuvos socialdemokratu partija & Su Europa - Už Lietuva Veikime Kartu! 
& 2490 & 1534 \\ LT & 2009 & LSDP & Lietuvos socialdemokratu partija & Lietuvos Socialdemokratu Partijos Rinkimu I Europos Parlamenta 2009 Metais Programa & 4766 & 2433 \\ LT & 2009 & LVLS & Lietuvos valstieciu liaudininku sajunga & Lietuvos Valstieciu Liaudininku Sajungos (Lvls) Rinkimu I Europos Parlamenta Programa & 1877 & 1239 \\ LT & 2004 & NS & Naujoji Sajunga (Socialliberalai) & Naujosios Sajungos Programa 2004 Metu Europos Parlamento Rinkimams & 7545 & 3399 \\ LT & 2004 & TS & Tevynes Sajunga & Tevynes Sajungos Rinkimu I Europos Parlamenta Programa & 5537 & 2954 \\ LT & 2009 & TS-LKD & Tevynes sajunga - Lietuvos krik\u{s}\u{s}cionys demokratai & Tevynes Sajungos-Lietuvos Krik\u{s}cioniu Demokratu Rinkimu I Europos Parlamenta Programines Nuostatos & 873 & 634 \\ LT & 2009 & TT & Tvarka ir teisingumas - Liberalu Demokratu Partija & 2009 Metu Europos Parlamento Rinkimu Programa & 855 & 643 \\ LV & 2004 & JL & Jaunais Laiks & Jaunais laiks priek\u{s}vele\u{s}anu programma 2004.gada Eiropas Parlamenta vele\u{s}anam & 349 & 306 \\ LV & 2009 & JL & Jaunais Laiks & Jaunais laiks priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta vele\u{s}anam & 375 & 295 \\ LV & 2004 & LC & Latvijas Cel\u{s} & Savieniba "Latvijas cel\u{s}" priek\u{s}vele\u{s}anu programma 2004.gada Eiropas Parlamenta vele\u{s}anam & 367 & 309 \\ LV & 2009 & LPP/LC & Latvijas Pirma partija/Latvijas Cel\u{s} & Partija "LPP/LC" priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta vele\u{s}anam & 352 & 297 \\ LV & 2004 & PCTVL & Par cilveka tiesibam vienota Latvija & Politisko organizaciju apvieniba "Par cilveka tiesibam vienota Latvija" priek\u{s}vele\u{s}anu programma 2004.gada Eiropas Parlamenta vele\u{s}anam & 357 & 302 \\ LV & 2009 & PCTVL & Par cilveka tiesibam vienota Latvija & PCTVL - Par cilveka tiesibam vienota Latvija priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta vele\u{s}anam & 371 & 298 \\ LV & 2009 & PS & Pilsoniska Savieniba & "Pilsoniska savieniba" 
priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta vele\u{s}anam & 390 & 329 \\ LV & 2009 & SC & Saskanas Centrs & Politisko partiju apvieniba "Saskanas Centrs" priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta vele\u{s}anam & 377 & 307 \\ LV & 2004 & TB/LNNK & Tevzemei un Brivibai/LNNK & Apvieniba "Tevzemei un Brivibai"/LNNK priek\u{s}vele\u{s}anu programma 2004.gada Eiropas Parlamenta velešanam & 434 & 353 \\ LV & 2009 & TB/LNNK & Tevzemei un Brivibai/LNNK & Apvieniba "Tevzemei un Brivibai"/LNNK priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta velešanam & 463 & 394 \\ LV & 2004 & TP & Tautas Partija & Tautas partija priek\u{s}vele\u{s}anu programma 2004.gada Eiropas Parlamenta vele\u{s}anam & 349 & 294 \\ LV & 2009 & TP & Tautas Partija & Tautas partija priek\u{s}vele\u{s}anu programma 2009.gada Eiropas Parlamenta vele\u{s}anam & 406 & 346 \\ LV & 2004 & ZZS & Zalo un Zemnieku Savieniba & Zalo un Zemnieku savieniba priek\u{s}vele\u{s}anu programma 2004.gada Eiropas Parlamenta vele\u{s}anam & 230 & 209 \\ NL & 2004 & CDA & Christen-Democratisch App\`{e}l & Verkiezingsmanifest CDA 2004 & 1042 & 560 \\ NL & 2009 & CDA & Christen-Democratisch App\`{e}l & Kracht en Ambitie & 6278 & 2271 \\ NL & 2004 & CUSGP & ChristenUnie-Staatskundig Gereformeerde Partij & Geloofwaardige keuzes - Manifest voor Christelijke politiek in Europa & 6431 & 2540 \\ NL & 2009 & CUSGP & ChristenUnie-Staatskundig Gereformeerde Partij & Samenwerking Ja, Superstaat Nee & 9119 & 2894 \\ NL & 2004 & D66 & Democraten '66 & Een succesvol Europa & 3651 & 1505 \\ NL & 2009 & D66 & Democraten '66 & Europa gaat om mensen! & 10035 & 3120 \\ NL & 2004 & GL & GroenLinks & Eigenwijs Europees & 16119 & 5296 \\ NL & 2009 & GL & GroenLinks & Nieuwe Energie voor Europa & 11997 & 4197 \\ NL & 2004 & LPF & Lijst Pim Fortuyn & ..... Is U iets Gevraagd ? 
& 1427 & 782 \\ NL & 2004 & PVDA & Partij van de Arbeid & Een Sterk en Sociaal Europa & 5669 & 2080 \\ NL & 2009 & PVDA & Partij van de Arbeid & Verkiezingsprogramma Europees Parlement 2009-2014 & 8552 & 2818 \\ NL & 2009 & PVV & Partij voor de Vrijheid & Partij voor de Vrijheid - Verkiezingsprogramma Europees Parlement 2009 & 234 & 157 \\ NL & 2004 & SP & Socialistische Partij & Wie zwijgt stemt toe! & 8343 & 2888 \\ NL & 2009 & SP & Socialistische Partij & Een Beter Europa Begint in Nederland & 6659 & 2304 \\ NL & 2004 & VVD & Volkspartij voor Vrijheid en Democratie & Een nieuw, Uitgereid Europa, open voor zijn burgers en open voor de wereld & 7552 & 2313 \\ NL & 2009 & VVD & Volkspartij voor Vrijheid en Democratie & Voor een werkend Europa & 1892 & 965 \\ PL & 2009 & PDP-CL & Porozumienie dla Przyszlosci -CentroLewica & Europa To Ludzie & 1665 & 1056 \\ PL & 2004 & PiS & Prawo i Sprawiedliwosc & Deklaracja Krakowska & 504 & 418 \\ PL & 2009 & PiS & Prawo i Sprawiedliwosc & Nowoczesna Solidarna Bezpieczna Polska & 3909 & 2078 \\ PL & 2004 & PO & Platforma Obywatelska & Program Europejski Platformy Obywatelskiej & 996 & 725 \\ PL & 2009 & PO & Platforma Obywatelska & Projekt dokumentu wyborczego EPL 2009r. & 13178 & 4912 \\ PL & 2004 & PSL & Polskie Stronnictwo Ludowe & Zadbamy O Polske ! 
& 765 & 515 \\ PL & 2009 & PSL & Polskie Stronnictwo Ludowe & Narodowe Priorytety Europejskiej Polityki PSL & 3014 & 1354 \\ PL & 2004 & SLD & Sojusz Lewicy Demokratycznej & Manifest Europejski SLD & 555 & 444 \\ PL & 2009 & SLD & Sojusz Lewicy Demokratycznej & Po pierwsze, czlowiek & 5714 & 2599 \\ PL & 2009 & SRP & Samoobrona Rzeczpospolitej Polskiej & Przedstawiciele Samoobrony w Parlamencie Europejskim & 498 & 379 \\ PL & 2004 & UW & Unia Wolnosci & Ruszyla kampania wyborcza Unii Wolnosci & 231 & 180 \\ PT & 2004 & BE & Bloco de Esquerda & Refundar a Europa Mudar Portugal & 1913 & 1003 \\ PT & 2009 & BE & Bloco de Esquerda & Compromisso Eleitoral Da Candidatura Do Bloco \`{A}s Europeias & 3461 & 1629 \\ PT & 2009 & CDS-PP & Centro Democr\'{a}tico e Social – Partido Popular & Manifesto Eleitoral Europeias 2009 & 1439 & 825 \\ PT & 2004 & CDU-PCP/PEV & Partido Comunista Portugu\^{e}s/Partido Ecologista "Os Verdes" & Declara\c{c}\~{a}o Program\'{a}tica2004 & 5023 & 1767 \\ PT & 2009 & CDU-PCP/PEV & Partido Comunista Portugu\^{e}s/Partido Ecologista "Os Verdes" & Declara\c{c}\~{a}o Program\'{a}tica do PCP para as Elei\c{c}\~{o}es Europeias de 2009 & 4701 & 1642 \\ PT & 2004 & PPD/PSD & Partido Social Democrata & For\c{c}a Portugal & 2370 & 1214 \\ PT & 2009 & PPD/PSD & Partido Social Democrata & Pelo Interesse Nacional & 690 & 449 \\ PT & 2004 & PS & Partido Socialista & Pela Europa, pelos portugueses & 5553 & 2118 \\ PT & 2009 & PS & Partido Socialista & As Pessoas Primeiro - Um Novo Rumo Para A Europa & 3903 & 1625 \\ SE & 2004 & C & Centerpartiet & Smalare men vassare! 
& 2953 & 1336 \\ SE & 2009 & C & Centerpartiet & Europas f\"{o}renta krafter & 1043 & 630 \\ SE & 2009 & FP & Folkpartiet Liberalerna & Ja till Europa & 1985 & 1089 \\ SE & 2009 & JL & Junilistan & Junilistans valplattform 2009 & 548 & 380 \\ SE & 2004 & KD & Kristdemokraterna & Inf\"{o}r valet till Europaparlamentet 13 juni 2004 & 7580 & 2933 \\ SE & 2009 & KD & Kristdemokraterna & Ett tryggt Europa – v\r{a}r v\"{a}g dit. & 699 & 498 \\ SE & 2004 & M & Moderata samlingspartiet & Europasamarbetet kan g\"{o}ra Sverige bättre & 1420 & 751 \\ SE & 2009 & M & Moderata samlingspartiet & Tid f\"{o}r ansvar & 1478 & 807 \\ SE & 2004 & MP & Milj\"{o}partiet de Gröna & Ja till samarbete, nej till EU-stat - f\"{o}r ett gr\"{o}nt och solidariskt Europa & 3406 & 1565 \\ SE & 2009 & MP & Milj\"{o}partiet de Gröna & Valmanifest - Gr\"{o}nt Klimatval 2009 & 284 & 240 \\ SE & 2009 & PP & Piratpartiet & Principprogram version 3.3 & 1349 & 815 \\ SE & 2004 & S & Sveriges Socialdemokratiska arbetarpart & Valmanifest 2004 & 638 & 414 \\ SE & 2009 & S & Sveriges Socialdemokratiska arbetarpart & Valmanifest - Jobben först & 735 & 432 \\ SE & 2004 & V & V\"{a}nsterpartiet & V\"{a}nsterpartiets EU-Valplattform & 1529 & 927 \\ SE & 2009 & V & V\"{a}nsterpartiet & Valplattform inför EU-parlamentsvalet & 2141 & 1182 \\ SI & 2009 & LDS & Liberalna demokracija Slovenije & Poslanica LDS za evropske volitve & 788 & 600 \\ SI & 2004 & NSI & Nova Slovenija – kr\v{s}canska ljudska stranka & Volitve V Evropski Parlament & 492 & 391 \\ SI & 2009 & NSI & Nova Slovenija – kr\v{s}canska ljudska stranka & Nova Slovenija Kr\v{s}\`{e}anski Ljudska Stranka & 587 & 441 \\ SI & 2009 & SD & Socialni demokrati & Manifest Stranke evropskih socialdemokratov & 5870 & 2491 \\ SI & 2004 & SDS & Slovenska demokratska stranka & Spletna Stran - Program & 1653 & 1066 \\ SI & 2009 & SDS & Slovenska demokratska stranka & Nova pot - 20 let slovenske pomladi & 328 & 245 \\ SI & 2004 & SLS & Slovenska ljudska stranka & »Vec 
Slovenije V Evropi, Vec Evrope V Sloveniji« & 2285 & 1291 \\ SI & 2009 & SLS & Slovenska ljudska stranka & SLO: SLS + SKD Slovenska Ljudska Stranka & 161 & 136 \\ SI & 2009 & Zares & Zares – socialno-liberalni & Vzemimo Evropo Zares & 14802 & 5045 \\ SI & 2004 & ZLSD & Zdru\v{z}ena lista socialnih demokratov & V Evropi za dobro Slovenije! & 2073 & 1185 \\ SK & 2004 & KDH & Krestanskodemokratick\'{e} hnutie & Volebn\'{y} program KDH do volieb do Eur\'{o}pskeho parlamentu & 1464 & 1002 \\ SK & 2009 & KDH & Krestanskodemokratick\'{e} hnutie & Volebn\'{y} program KDH do Eur\'{o}pskeho parlamentu & 1735 & 1135 \\ SK & 2004 & LS-HZDS & Ludov\'{a} strana - Hnutie za demokratick\'{e} Slovensko & Odpovede na ot\'{a}zky: Irena Belohorsk\'{a}, kandid\'{a}tka na poslanca EP za HZDS & 788 & 563 \\ SK & 2009 & LS-HZDS & Ludov\'{a} strana - Hnutie za demokratické Slovensko & Slovensko – Stabilné Srdce Eur\'{o}py & 5084 & 2641 \\ SK & 2004 & SDKU-DS & Slovensk\'{a} demokratick\'{a} a krestansk\'{a} \'{u}nia - Demokratick\'{a} strana & Manifest SDK\'{U} pre nov\'{u} Eur\'{o}pu & 1805 & 1020 \\ SK & 2009 & SDKU-DS & Slovensk\'{a} demokratick\'{a} a krestansk\'{a} \'{u}nia - Demokratick\'{a} strana & Za Prosperuj\'{u}ce Slovensko V Silnej Európe & 5312 & 2317 \\ SK & 2004 & SMER-SD & Smer – soci\'{a}lna demokracia & silnej\v{s}ie Slovensko v soci\'{a}lnej Eur\'{o}pe& 2150 & 1121 \\ SK & 2009 & SMER-SD & Smer – soci\'{a}lna demokracia & Soci\'{a}lna Eur\'{o}pa – Odpoved Na Kr\'{i}zu & 461 & 303 \\ SK & 2004 & SMK-MKP & Strana madarskej komunity - Magyar K\"{o}z\"{o}ss\'{e}g P\'{a}rtja & Hely\"{u}nk Eur\'{o}p\'{a}ban & 2506 & 1556 \\ SK & 2009 & SMK-MKP & Strana madarskej komunity - Magyar K\"{o}z\"{o}ss\'{e}g P\'{a}rtja & Na\v{s}a bud\'{u}cnost v Eur\'{o}pe & 3944 & 2117 \\ SK & 2009 & SNS & Slovensk\'{a} n\'{a}rodn\'{a} strana & Jaroslav Pa\v{s}ka: Priority na najbli\v{z}\v{s}\'{i}ch 5 rokov v Eur\'{o}pskom parlamente & 180 & 153 \\ UK & 2009 & BNP & British National Party & 2009 
Manifesto for the European Elections & 964 & 489 \\ UK & 2004 & CON & Conservative Party & Putting Britain First & 7128 & 2070 \\ UK & 2009 & CON & Conservative Party & Vote for Change& 4742 & 1611 \\ UK & 2009 & DUP & Democratic Unionist Party & Strong Leadership in Challenging Times & 385 & 278 \\ UK & 2009 & GREEN & Green Party of England and Wales & "it's the economy, stupid" & 7831 & 2389 \\ UK & 2004 & LAB & Labour Party & Britain is working & 4289 & 1273 \\ UK & 2009 & LAB & Labour Party & Winning the fight for Britain's future & 4910 & 1357 \\ UK & 2004 & LD & Liberal Democrats & Making Europe Work For You & 7986 & 2162 \\ UK & 2009 & LD & Liberal Democrats & Stronger Together, poorer apart & 5355 & 1590 \\ UK & 2004 & PC & Plaid Cymru – the Party of Wales & Fighting Hard For Wales & 2184 & 932 \\ UK & 2009 & PC & Plaid Cymru – the Party of Wales & European Manifesto & 2914 & 1232 \\ UK & 2009 & SDLP & Social Democratic and Labour Party & A Vision For Europe - Ambition For You & 7055 & 2223 \\ UK & 2009 & SF & Sinn F\'{e}in & Sinn F\'{e}in European Election Manifesto 2009 & 4920 & 1372 \\ UK & 2004 & SNP & Scottish National Party & Vote for Scotland & 3447 & 1248 \\ UK & 2009 & SNP & Scottish National Party & We've got what it takes & 3764 & 1211 \\ UK & 2009 & UKIP & UK Independence Party & UKIP Manifesto 2009 & 295 & 197 \\ UK & 2009 & UUP & Ulster Unionist Party & Vote For Change & 4742 & 1611 \\ \bottomrule \multicolumn{7}{l}{* Refers to the number of words after the documents were cleaned} \end{longtable} \end{landscape} \newpage \section*{Appendix C: Document preparation} \subsection*{Document Selection} We obtained the manifestos from the \textit{Euromanifestos Project} website.\footnote{\url{http://www.ees-homepage.net/}} For all countries, text files were available for the 2009 manifestos, while for the 2004 manifestos, only some parties in Germany and the United Kingdom were available in this format. 
We thus used the stored portable document files, which we converted into UTF-8 text files to ensure compatibility and the preservation of non-English characters. When conversion from .pdf was not possible because the file was saved as an image, we used optical character recognition (OCR) software. While OCR will never convert a text 100\% faithfully, sufficient results can be obtained, especially as the software we used allowed us to manually correct mistakes and instances where the software was uncertain. For some countries, not all the released manifestos were stored in the database, or the stored document was something other than a true Euromanifesto, in which case we looked for the document in other online sources. Both the resulting .txt and .pdf versions of these source documents can be found among our replication files. \subsection*{Pre-processing} From all text files, we removed headers and footers, page numbering, section headings, graphs, numbers, currency symbols and tables. We then imported these texts into Wordfreq (cite) to generate the frequency tables for each country. From these frequency tables, we then deleted stop-words, as they carry minimal information value \cite[332]{Slapin2008}. While not all studies using Wordscores remove stop-words, a significant number do \cite{Ruedin2013,Ruedin2013a,Slapin2008}. Moreover, the practice seems to be common in automatic content analysis \cite{Grimmer2013}, and seems especially suited for Wordscores, as the method falsely assumes that all scored words carry the same informative value. However, a word such as 'immigration' adds information to a text in a way words like 'the' or 'and' do not. Nevertheless, as these words occur often in all texts, their scores will be close to the mean of the reference texts, and will thus cause the scores for the virgin texts to cluster around the mean. As such, they are indistinguishable from truly centrist words, causing parties to appear more centrist than they really are \cite[360--361]{Lowe2008}.
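The frequency-based pruning described above can be sketched in a few lines of Python. This is a minimal illustration under our own simplifying assumptions (a naive tokenizer and an ad hoc `drop_most_frequent` helper); it is not the actual Wordfreq implementation:

```python
from collections import Counter
import re


def frequency_table(texts):
    """Word-frequency table over all documents of one country."""
    counts = Counter()
    for doc in texts:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts


def drop_most_frequent(texts, n=20):
    """Drop the n most frequent words, a proxy for stop-word removal."""
    stop = {word for word, _ in frequency_table(texts).most_common(n)}
    return [
        " ".join(w for w in re.findall(r"[a-z']+", doc.lower()) if w not in stop)
        for doc in texts
    ]
```

Dropping the top of the frequency table removes words such as 'the' and 'and' without requiring language-specific stop-word lists, which matters here since the corpus spans more than twenty languages.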
Removing these words thus increases the discriminative power of Wordscores. Here, we follow \citeasnoun{Ruedin2013a} and remove the 20 most frequently occurring words for each country in both 2004 and 2009. We do not use stemming, as this decreases the effectiveness of the method \cite{Ruedin2013a} and because it is not beneficial for all languages. This is especially the case for languages in which compound words are common, such as German or Finnish, where stemming may lead to a loss of information. Table \ref{tab:title3} shows the 20 most frequently occurring words that were dropped for Great Britain. Most of these words can easily be considered non-informative, as they are either adjectives, adverbs or prepositions. Even words such as \textit{european} or \textit{europe} can be argued to function mostly as adjectives, as would be expected in manifestos for European elections. The .dta files with these words removed may be found in the replication files. \begin{table}[ht] \centering \caption{Words dropped for Great Britain} \label{tab:title3} \begin{tabular}{llll} \toprule \multicolumn{2}{c}{2004} & \multicolumn{2}{c}{2009} \\ \cmidrule(lr){1-2} \cmidrule(lr){3-4} Word & Count & Word & Count\\ \midrule the & 2626 & the & 2785 \\ to & 1337 & and & 1814 \\ and & 1335 & to & 1770 \\ of & 1110 & of & 1252 \\ in & 844 & in & 1115 \\ a & 641 & a & 795 \\ eu & 555 & for & 739 \\ for & 543 & we & 707 \\ that & 448 & that & 527 \\ is & 419 & is & 476 \\ be & 344 & eu & 459 \\ we & 329 & will & 453 \\ european & 327 & our & 399 \\ on & 316 & on & 394 \\ our & 256 & european & 340 \\ europe & 255 & are & 305 \\ are & 250 & be & 300 \\ will & 240 & as & 299 \\ has & 232 & europe & 294 \\ it & 230 & with & 292 \\ \bottomrule \end{tabular} \end{table} \subsection*{Wordcount} The table below shows the word count for the documents.
Using the wordscores package for Stata, we calculated the mean and standard deviation for the total words in the documents and the unique words (referring to words only occurring in a single document). In addition, \textit{New} indicates whether the 2004 European election was the first election the country participated in. Documents from the new countries were significantly shorter in 2004, but showed an increase in 2009, while the number of unique words changed little. The number of documents analysed was higher in 2009 than in 2004, which is mostly due to the availability of existing digital copies. The number of words per manifesto differs significantly per country and also within countries, as shown by the standard deviation. This implies that the size and scope of the documents differ, and that when performing an analysis, scholars need to be aware of what the document under investigation covers and whether all documents are comparable. \begin{table}[htbp] \centering \caption{Total and Unique word count for the used documents} \label{tab:title0} \begin{tabular}{lcrrrrrrrrrr} \toprule && \multicolumn{5}{c}{2004} & \multicolumn{5}{c}{2009} \\ \cmidrule(lr){3-7} \cmidrule(lr){8-12} &&& \multicolumn{2}{c}{Total}&\multicolumn{2}{c}{Unique}&&\multicolumn{2}{c}{Total}&\multicolumn{2}{c}{Unique}\\ \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){9-10} \cmidrule(lr){11-12} Country & New & Obs & Mean & SD & Mean & SD & Obs & Mean & SD & Mean& SD \\ \midrule AT & No & 4 & 2037 & 1231 & 1100 & 580 & 6 & 2708 & 2046 & 1285 & 864 \\ BE(FR) & No & 4 & 8709 & 5753 & 2658 & 1111 & 5 & 10363 & 3389 & 3229 & 523 \\ BE(NL) & No & 6 & 6217 & 5149 & 2137 & 1405 & 7 & 7966 & 4128 & 2711 & 1077 \\ CY & Yes & 5 & 1511 & 635 & 855 & 344 & 4 & 1258 & 365 & 759 & 200 \\ CZ & Yes & 4 & 1738 & 632 & 1060 & 326 & 6 & 2001 & 1876 & 1191 & 890 \\ DE & No & 6 & 8349 & 9228 & 2961 & 2567 & 7 & 4398 & 3219 & 1909 & 1216 \\ DK & No & 6 & 2060 & 1274 & 1039 & 509 & 8 & 1949 & 1348 & 930 & 515 \\ EE &
Yes & 6 & 1460 & 1376 & 1035 & 808 & 4 & 1178 & 283 & 906 & 204 \\ ES & No & 4 & 11336 & 8241 & 3190 & 1513 & 10 & 10471 & 7523 & 3024 & 1627 \\ FI & No & 7 & 1182 & 858 & 878 & 580 & 8 & 2199 & 1369 & 1528 & 823 \\ FR & No & 6 & 4169 & 2887 & 1656 & 896 & 8 & 2520 & 2722 & 1109 & 909 \\ GR & No & 4 & 4102 & 3301 & 1800 & 976 & 6 & 1951 & 1171 & 1135 & 567 \\ HU & Yes & 4 & 8345 & 10614 & 3640 & 3932 & 5 & 16769 & 27399 & 5755 & 7605 \\ IE & No & 5 & 5932 & 3576 & 1741 & 556 & 6 & 4941 & 2033 & 1582 & 432 \\ IT & No & 5 & 11209 & 11281 & 3068 & 2273 & 7 & 4219 & 7797 & 1383 & 1932 \\ LT & Yes & 5 & 4319 & 2171 & 2281 & 840 & 8 & 2153 & 1598 & 1273 & 752 \\ LV & Yes & 6 & 348 & 66 & 296 & 47 & 7 & 391 & 36 & 324 & 36 \\ NL & No & 8 & 6279 & 4795 & 2246 & 1482 & 8 & 6846 & 4025 & 2341 & 1268 \\ PL & Yes & 5 & 610 & 288 & 456 & 196 & 6 & 4663 & 4544 & 2063 & 1597 \\ PT & No & 4 & 3715 & 1839 & 1526 & 510 & 5 & 2839 & 1700 & 1234 & 561 \\ SE & No & 6 & 2921 & 2504 & 1321 & 890 & 9 & 1140 & 645 & 675 & 323 \\ SI & Yes & 4 & 1626 & 800 & 983 & 405 & 6 & 3756 & 5831 & 1493 & 1945 \\ SK & Yes & 5 & 1743 & 660 & 1052 & 354 & 6 & 2786 & 2294 & 1444 & 1069 \\ UK & No & 5 & 5007 & 2464 & 1537 & 546 & 12 & 3990 & 2447 & 1297 & 691 \\ \bottomrule \end{tabular} \end{table} \newpage \begin{landscape} \section*{Appendix D: Data sources and question wording} \small \begin{longtable}{p{3.4cm}p{4.4cm}p{4.4cm}p{4.4cm}p{4.4cm}} \toprule & \multicolumn{1}{c}{\textbf{LR - Left-Right}} & \multicolumn{1}{c}{\textbf{EU - EU Integration}} & \multicolumn{1}{c}{\textbf{EC - Economic}} & \multicolumn{1}{c}{\textbf{SO - Social}} \\ \midrule &&&& \endhead Benoit \& Laver Expert Survey \cite{Benoit2006}& Left-Right - Please locate each party on a general left-right dimension, taking all aspects of party policy into account & \dag EU Authority (AT, BE, UK, DK, FI, DE, GR, IT, NL, NI, PT, ES, SE), EU Larger \& Stronger (FR), \dag EU Strengthening (IE) & Economic (Spending vs.
Taxes) & Social \\ & & & & \\ & Left (1) & Favours (1)& Promotes raising taxes to increase public services (1) & Favours liberal policies on matters such as abortion, homosexuality, and euthanasia (1) \\ & & & & \\ & Right (20) & Opposes (20) & Promotes cutting public services to cut taxes (20) & Opposes liberal policies on matters such as abortion, homosexuality, and euthanasia (20) \\ & & Countries excluded are CZ, EE, HU, LV, LT, PL, SK, SI, CY & & \\ Chapel Hill Expert Survey 2002 \cite**{Hooghe2010}& LRGEN = position of the party in 2002 in terms of its broad ideological stance, where & POSITION = overall orientation of the party leadership towards European integration in 2002, where & LRECON = position of the party in 2002 in terms of its ideological stance on economic issues (role of government in economy), where & GALTAN = position of the party in 2002 in terms of its ideological stance on democratic freedoms and rights (role of government in life choices), where \\ &&&&\\ & 0 indicates that a party is at the extreme left of the ideological spectrum &1 = Strongly opposed to European integration&0 indicates that a party is at the extreme left of the ideological spectrum&0 indicates that a party is at the extreme left of the ideological spectrum\\ &&&&\\ & 5 means that it is at the center &4 = Neutral, no stance on the issue of European integration&5 means that it is at the center&5 means that it is at the center\\ &&&&\\ & 10 indicates that it is at the extreme right &7 = Strongly in favour of European integration&10 indicates that it is at the extreme right&10 indicates that it is at the extreme right\\ & & & & \\ & & & & \\ & & & & \\ & & & & \\ Chapel Hill Expert Survey 2010 \cite**{Bakker2012}& LRGEN = position of the party in 2010 in terms of its overall ideological stance & POSITION = overall orientation of the party leadership towards European integration in 2010 & LRECON = position of the party in 2010 in terms of its ideological stance on economic 
issues & GALTAN = position of the party in 2010 in terms of its ideological stance on democratic freedoms and rights \\ &&&&\\ & 0 = extreme left & 1 = strongly opposed & 0 = extreme left & 0 = extreme left \\ & (-) & (-) & (-) & (-) \\ & 5 = center & 4 = neutral & 5 = center & 5 = center \\ & (-) & (-) & (-) & (-) \\ & 10 = extreme right & 7 = strongly in favour & 10 extreme right & 10 extreme right \\ & & & & \\ \hline & & & & \\ Euromanifestos Project 2004 \cite{Braun2010}& LEFT - placement of Euromanifesto according to the coder on a left-right scale & \dag EU - placement of Euromanifesto according to coder on a pro-anti-EU-integration scale & STATE - placement of Euromanifesto according to coder on a state interventionism vs. free enterprise scale & LIB - placement of Euromanifesto according to coder on a libertarian-authoritarian scale. \\ & & & & \\ & 1=left & 1 = pro & 1=state interventionism & 1=libertarian \\ & 10=right & 10 = anti & 10=free enterprise & 10=authoritarian \\ & & & & \\ Euromanifestos Project 2009 \cite{Braun2010}& LEFT - Left - Right & \dag INTEGRATION - Pro EU-Integration - Anti-EU-Integration & STATE - State Interventionism - Free Enterprise & LIBERTA - Libertarian - Authoritarian \\ &&&&\\ & Coder rating on a 10-point-scale & Coder rating on a 10-point-scale & Coder rating on a 10-point-scale & Coder rating on a 10-point-scale \\ &&&&\\ \hline & & & & \\ EU Profiler 2009 \cite{Trechsel2010} \ddag & Modified Left-Right - using items 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 14, 16, 18, 19 and 20, with missing values recoded to 4 (Neutral) & Original EU Integration (Y axis), using items 12, 21, 22, 23, 24, 26 and 27 & Scale composed of items 1, 2, 11, 14, 16, and 18 & Scale composed of items 5, 6, 7, 8, 9, 10, 19, 20 and 25 \\ & & & & \\ \bottomrule & & & & \\ \multicolumn{5}{l}{\dag Denotes variables that have been reversed for subsequent analysis} \\ \multicolumn{5}{l}{\ddag EU Profiler data were scaled according to \citeasnoun{Gemenis2013c}.}\\ 
\end{longtable} \end{landscape} \newpage \begin{landscape} \section*{Appendix E: Documents selected for the Martin-Vanberg transformation} \small \begin{longtable}{m{1.4cm}m{0.7cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}m{1.4cm}} \toprule \multicolumn{1}{c}{Country*} & & \multicolumn{4}{c}{BL} & \multicolumn{4}{c}{CHES} & \multicolumn{4}{c}{EMP}\\ \cmidrule(lr){3-6} \cmidrule(lr){7-10} \cmidrule(lr){11-14} & & \multicolumn{1}{c}{LR} & \multicolumn{1}{c}{EU} & \multicolumn{1}{c}{EC} & \multicolumn{1}{c}{SO} & \multicolumn{1}{c}{LR} & \multicolumn{1}{c}{EU} & \multicolumn{1}{c}{EC} & \multicolumn{1}{c}{SO} & \multicolumn{1}{c}{LR} & \multicolumn{1}{c}{EU} & \multicolumn{1}{c}{EC} & \multicolumn{1}{c}{SO}\\ \midrule \endhead \multirow{2}{*}{AT} & low & GR\"{U}N & FP\"{O} & GR\"{U}N & GR\"{U}N & GR\"{U}N & FP\"{O} & GR\"{U}N & GR\"{U}N & GR\"{U}N & FP\"{O} & GR\"{U}N & GR\"{U}N \\ & high & FP\"{O} & GR\"{U}N & \"{O}VP & FP\"{O} & FP\"{O} & \"{O}VP & \"{O}VP & FP\"{O} & FP\"{O} & \"{O}VP & FP\"{O} & FP\"{O} \\ \multirow{2}{*}{BE (FR)} & low & ECOLO & MR & ECOLO & ECOLO & ECOLO & ECOLO & ECOLO & ECOLO & PS & MR & PS & PS \\ & high & MR & CDH & MR & CDH & MR & CDH & MR & CDH & CDH & ECOLO & CDH & ECOLO \\ \multirow{2}{*}{BE (NL)} & low & GROEN & VB & GROEN & GROEN & GROEN & VB & GROEN & GROEN & SPA & VB & SPA & VLD \\ & high & VB & CDV & VLD & VB & VB & CDV & VB & VB & VLD & GROEN & VLD & SPA \\ \multirow{2}{*}{CY} & low & AKEL & - & AKEL & DISY & - & - & - & - & KOP & KOP & AKEL & KOP \\ & high & DISY & - & DISY & AKEL & - & - & - & - & DIKO & DISY & DIKO & EDEK \\ \multirow{2}{*}{CZ} & low & KSCM & - & KSCM & CSSD & KSCM & KSCM & KSCM & ODS & KSCM & ODS & KSCM & CSSD \\ & high & ODS & - & ODS & KDUCSL & ODS & CSSD & KDUCSL & KSCM & ODS & CSSD & ODS & KSCM \\ \multirow{2}{*}{DK} & low & F & O & F & B & F & O & F & F & F & O & F & V \\ & high & O & V & C & C & O & V & V & O & O & V & O & O \\ \multirow{2}{*}{EE} & low & K & 
- & SDE & SDE & - & - & - & - & SDE & RE & SDE & RESP \\ & high & RE & - & RESP & EKRP & - & - & - & - & RE & SDE & RESP & IL \\ \multirow{2}{*}{FI} & low & VAS & KESK & VAS & VIHR & VAS & KD & VAS & VIHR & VAS & KD & VAS & KESK \\ & high & KOK & SDP & KOK & KD & KOK & KOK & KOK & KD & KOK & KESK & KOK & KD \\ \multirow{2}{*}{FR} & low & - & FN & PCF & PS & PCF & FN & PCF & PS & PCF & FN & PCF & PRG \\ & high & - & UDF & FN & FN & FN & PS & FN & FN & FN & PCF & UDF & FN \\ \multirow{2}{*}{DE} & low & LINKE & CSU & LINKE & B90GR\"{U} & LINKE & LINKE & LINKE & B90GR\"{U} & LINKE & CSU & LINKE & SPD \\ & high & CSU & B90GR\"{U} & FDP & CDU & CSU & CDU & FDP & CSU & CSU & LINKE & CDU & CDU \\ \multirow{2}{*}{GR} & low & KKE & KKE & KKE & SYRIZA & SYRIZA & KKE & KKE & SYRIZA & KKE & KKE & SYRIZA & SYRIZA \\ & high & ND & ND & ND & ND & ND & PASOK & ND & ND & ND & ND & ND & KKE \\ \multirow{2}{*}{HU} & low & MSZP & - & FIDESZ & SZDSZ & MSZP & FIDESZ & FIDESZ & SZDSZ & FIDESZ & SZDSZ & MDF & SZDSZ \\ & high & FIDESZ & - & SZDSZ & FIDESZ & FIDESZ & SZDSZ & SZDSZ & FIDESZ & SZDSZ & MSZP & SZDSZ & MSZP \\ \multirow{2}{*}{IE} & low & GREENS & GREENS & SF & GREENS & GREENS & SF & SF & GREENS & SF & SF & SF & SF \\ & high & FF & FG & FF & FF & FG & FG & FG & FF & FG & FF & FG & FF \\ \multirow{2}{*}{IT} & low & PRC & LN & PRC & PRC & PRC & LN & PRC & DSULIVO & PRC & PRC & PRC & FI \\ & high & AN & DSULIVO & FI & AN & AN & DSULIVO & FI & AN & FI & DSULIVO & FI & LN \\ \multirow{2}{*}{LV} & low & PCTVL & - & PCTVL & JL & PCTVL & PCTVL & PCTVL & TP & PCTVL & ZZS & PCTVL & LC \\ & high & TP & - & TP & LC & TBLNNK & JL & TP & LC & TBLNNK & JL & LC & JL \\ \multirow{2}{*}{LT} & low & LSDP & - & LSDP & LICS & LSDP & LKD & NS & LICS & LSDP & LSDP & LKD & LICS \\ & high & LICS & - & LICS & LKD & TS & TS & TS & LKD & TS & NS & TS & LKD \\ \multirow{2}{*}{NL} & low & SP & LPF & SP & GL & SP & LPF & SP & D66 & SP & LPF & SP & VVD \\ & high & LPF & D66 & VVD & CUSGP & LPF & D66 & LPF & 
CUSGP & LPF & VVD & CDA & SP \\ \multirow{2}{*}{PL} & low & SLDUP & - & SLDUP & SLDUP & SLDUP & PSL & PSL & SLDUP & SLDUP & PIS & PSL & UW \\ & high & PIS & - & PO & PIS & PIS & UW & PO & PIS & PIS & UW & PO & PIS \\ \multirow{2}{*}{PT} & low & BE & BE & CDU & BE & CDU & CDU & CDU & CDU & CDU & CDU & CDU & BE \\ & high & PSD & PS & PSD & PSD & PSD & PS & PSD & PSD & PSD & PS & PSD & CDU \\ \multirow{2}{*}{SK} & low & SMER & - & SMER & SMER & SMER & SMER & SMER & SDKUDS & SMER & KDH & SMER & SMER \\ & high & SDKUDS & - & KDH & KDH & KDH & SDKUDS & SDKUDS & KDH & KDH & SMER & KDH & LSHZDS \\ \multirow{2}{*}{SI} & low & ZLSD & - & ZLSD & ZLSD & ZLSD & SLS & ZLSD & ZLSD & ZLSD & SLS & ZLSD & SDS \\ & high & NSI & - & NSI & NSI & NSI & SDS & SLS & NSI & NSI & NSI & NSI & SLS \\ \multirow{2}{*}{ES} & low & PSOE & PP & IU & IU & IU & IU & IU & IU & IU & IU & IU & IU \\ & high & PP & PSOE & PP & PP & PP & PSOE & PP & PP & PNVEAJ & PSOE & PNVEAJ & PP \\ \multirow{2}{*}{SE} & low & V & V & V & V & V & MP & V & MP & V & V & V & MP \\ & high & M & M & M & KD & M & M & M & KD & KD & M & M & V \\ \multirow{2}{*}{GB} & low & PC & CON & PC & LD & SNP & CON & SNP & LD & PC & CON & PC & SNP \\ & high & CON & LD & CON & CON & CON & LD & CON & CON & CON & PC & CON & CON \\ \multirow{2}{*}{NI} & low & SF & SDLP & SF & SF & - & - & - & - & SF & SUP & DUP & SF \\ & high & UUP & DUP & UUP & DUP & - & - & - & - & UUP & SDLP & UUP & DUP \\ \bottomrule \multicolumn{14}{l}{*Low and high refer to the party with either the lowest score or the highest score on a dimension} \end{longtable} \end{landscape} \newpage \section*{Appendix F: Investigating the Martin \& Vanberg transformation} In their original article, \citeasnoun{Martin2008} (hereafter MV) advise in a footnote to calculate the difference between the exogenously assigned scores and the scores as used in their transformation, in order to gauge the size of the trade-off scholars have to make between increased accuracy of the dictionary and
internal consistency and the ability to make valid comparisons. While this step is not necessary to validate the applicability of the MV transformation in our study, as we do not compare our scores against the reference scores, we decided to calculate these differences in order to test the transformation and give a preliminary assessment of the trade-off for scholars who want to use the transformation in the future. To calculate the trade-off, we input the reference documents a second time as the virgin documents. The difference between the transformed score and the exogenously assigned score then indicates the degree of trade-off. In addition, it provides the user with an extra tool to assess whether the actual word usage of the texts is reflected in the exogenously assigned score. A large difference then means that the exogenous score does not match what is reflected in the words. This difference can be either negative or positive, depending on the direction (either lower or higher on the dimension of interest). To give an idea of how this works, we calculate the difference on the EU integration dimension in the Netherlands using the reference scores from the Benoit \& Laver dataset. \begin{table}[htbp] \centering \caption{Differences for the Netherlands on the EU integration dimension} \label{tab:title1} \begin{tabular}{lrrrr} \toprule Party & Exogenous score & MV altered score & Difference & \% Difference \\ \midrule LPF & 5.1667 & 5.1667 & 0 & 0 \\ SP & 5.4706 & 7.407 & 1.9364 & 35.41 \\ CU-SGP & 7.3572 & 8.7889 & 1.4317 & 19.41 \\ VVD & 8.4 & 9.7341 & 1.3341 & 15.88 \\ CDA & 11.3 & 12.1469 & 0.8469 & 7.49 \\ GL & 11.4737 & 11.882 & 0.4083 & 3.59 \\ PVDA & 13.5263 & 13.2584 & $-0.2679$ & $-2.01$ \\ D66 & 13.9 & 13.9 & 0 & 0 \\ \bottomrule \end{tabular} \end{table} As Table \ref{tab:title1} shows, the scores of the anchor texts (LPF and D66) are fully recovered, while the scores of the texts in between have changed.
These changes range from $-2.01\%$ for the PvdA to $35.41\%$ for the SP, indicating that the words in the documents suggest a lower score for the PvdA and a higher score for the SP than the exogenous reference scores imply. Nevertheless, the SP document, which shows the largest difference, retains its position relative to the other parties, as the CU-SGP score also increases. However, a reversal does take place between the CDA and GL. Based on the exogenous scores, the GL document is more positive about European integration than the CDA, while the MV transformation switches these positions. Besides the PvdA and the two anchor texts, all parties receive a higher score than exogenously assigned, ranging from a modest 3.59\% for GL to 35.41\% for the SP. While \citeasnoun{Martin2008} do not give a criterion as to what the maximum acceptable difference should be, we consider the differences between the exogenous scores and the scores given by the MV transformation to be sufficiently large to warrant closer inspection. We therefore extend our calculation to all countries and dimensions, to rule out the possibility that these differences arise out of peculiarities of this specific example.\\ As the table below shows, the results of this analysis follow a similar pattern. However, in some cases the positions of the parties are switched, and large differences such as the $35.41\%$ for the SP above are not uncommon. Therefore, if scholars choose to use the MV transformation in the future, we would strongly advise them to calculate these differences. Not only will this help them assess the size of the trade-off; the MV-calculated scores for the reference documents will also be a more valid benchmark against which to compare the transformed scores for the virgin texts. Additionally, the differences can be used as a (partial) check on how well the exogenously assumed relative distances between the reference texts are reflected in the actual word use \cite{Martin2008}.
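The trade-off calculation itself is straightforward: feed the reference documents back in as virgin texts and compare the resulting scores with the exogenously assigned ones. A sketch (the function name is ours, and the score dictionaries below hold illustrative values from Table \ref{tab:title1} rather than the output of an actual Wordscores run):

```python
def score_differences(exogenous, transformed):
    """Absolute and percentage difference between the score a reference
    document receives when scored as a virgin text and its exogenously
    assigned score."""
    return {
        party: (transformed[party] - exo, 100.0 * (transformed[party] - exo) / exo)
        for party, exo in exogenous.items()
    }


# Illustrative values (Netherlands, EU integration dimension):
exogenous = {"LPF": 5.1667, "SP": 5.4706, "D66": 13.9}
transformed = {"LPF": 5.1667, "SP": 7.4070, "D66": 13.9}
diffs = score_differences(exogenous, transformed)
# The anchor texts (LPF, D66) are recovered exactly, so their difference is 0.
```

A large positive or negative entry flags a party whose actual word usage sits away from its exogenously assigned position.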
Especially when the differences are large, this can warrant a closer inspection of the exogenously assigned score for the party and of why it differs from the actual word use. \begin{landscape} \begin{longtable}{llrrrrrrrrrrrr} \caption{Difference between exogenous and calculated scores, in percentages} \label{tab:title2} \\ \multirow{2}{*}{Country} & \multirow{2}{*}{Party} & \multicolumn{4}{c}{Benoit \& Laver} & \multicolumn{4}{c}{CHES} & \multicolumn{4}{c}{EMP} \\ \cmidrule(r){3-6} \cmidrule(r){7-10} \cmidrule(r){11-14} & & LR & EU & EC & SO & LR & EU & EC & SO & LR & EU & EC & SO \\ \midrule &&&&&&&&&&&&&\\ \endfirsthead \multirow{2}{*}{Country} & \multirow{2}{*}{Party} & \multicolumn{4}{c}{Benoit \& Laver} & \multicolumn{4}{c}{CHES} & \multicolumn{4}{c}{EMP} \\ \cmidrule(r){3-6} \cmidrule(r){7-10} \cmidrule(r){11-14} & & LR & EU & EC & SO & LR & EU & EC & SO & LR & EU & EC & SO \\ \midrule &&&&&&&&&&&&&\\ \endhead \multirow{1}{*}{AT} & FP\"{O} & $-0.03$ & $-0.05$ & $8.66$ & 0.00 & 0.00 & 0.00 & 9.91 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & GR\"{U}NEN & $-0.05$ & 0.02 & $-0.09$ & $-0.13$ & 0.00 & 1.92 & 0.00 & 0.00 & 0.00 & 2.35 & 0.00 & 0.00 \\ & \"{O}VP & $-8.86$ & 0.81 & $-0.02$ & $-10.44$ & $-9.29$ & 0.00 & 0.00 & $-11.41$ & $-11.71$ & 0.00 & $-11.76$ & $-9.86$ \\ & SP\"{O} & $-3.02$ & 0.95 & 2.22 & $-2.47$ & $-2.27$ & 2.06 & 1.90 & $-3.78$ & $-2.48$ & 17.74 & $-4.50$ & $-2.62$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{BE (FR)} & CDH & $-18.39$ & $-0.03$ & $-12.28$ & 0.03 & $-14.05$ & 0.00 & $-16.97$ & 0.00 & 0.00 & $-2.27$ & 0.00 & 0.02 \\ & ECOLO & 0.00 & 2.05 & $-0.02$ & $-0.10$ & 0.00 & 0.00 & 0.00 & 0.00 & $-20.15$ & 0.00 & 8.25 & 0.25 \\ & MR & 0.00 & 0.04 & 0.00 & 20.38 & 0.00 & 1.50 & 0.00 & 17.21 & $-33.68$ & 0.00 & $-38.26$ & $-0.04$ \\ & PS & 25.81 & 3.77 & 22.60 & 43.32 & 10.02 & 1.04 & 29.10 & 20.10 & 0.00 & $-2.57$ & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{BE (NL)} & CDV & $-5.25$ & $-0.03$ & $-4.89$ & $-8.46$ & $-3.52$ & 0.00 & $-4.64$ & $-7.03$ &
$-10.98$ & $-2.91$ & $-11.22$ & 7.21 \\ & GROEN & 0.00 & 3.24 & $-0.04$ & 0.04 & 0.00 & 3.71 & 0.00 & 0.00 & $-20.21$ & 0.00 & $-11.97$ & 8.14 \\ & NVA & 6.93 & $-4.36$ & 2.61 & 10.42 & 7.84 & $-3.93$ & 5.46 & 9.66 & $-9.95$ & $-7.13$ & $-16.83$ & 5.55 \\ & SPA & 10.84 & 2.85 & 7.07 & 16.63 & 10.49 & 4.84 & 15.88 & 5.34 & 0.00 & $-0.17$ & 0.00 & 0.00 \\ & VB & 0.02 & $-0.06$ & $-0.83$ & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & $-3.59$ & 0.00 & $-10.79$ & 4.47 \\ & VLD & $-0.91$ & 2.96 & 0.01 & $-7.30$ & $-0.41$ & 4.26 & 0.71 & $-6.62$ & 0.00 & $-0.12$ & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{CY} & AKEL & 0.00 & $-$ & 0.00 & 0.00 & $-$ & $-$ & $-$ & $-$ & 9.42 & 0.70 & 0.00 & 4.44 \\ & DIKO & $-4.04$ & $-$ & $-3.17$ & 0.76 & $-$ & $-$ & $-$ & $-$ & 0.00 & 1.31 & 0.00 & $-6.88$ \\ & DISY & 0.00 & $-$ & 0.00 & 0.00 & $-$ & $-$ & $-$ & $-$ & $-3.54$ & 0.00 & $-4.05$ & $-11.62$ \\ & EDEK & $-23.13$ & $-$ & $-3.82$ & 1.77 & $-$ & $-$ & $-$ & $-$ & $-5.97$ & 2.39 & 0.40 & 0.00 \\ & KOP & 2.19 & $-$ & 1.85 & $-0.40$ & $-$ & $-$ & $-$ & $-$ & 0.00 & 0.00 & $-7.24$ & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{CZ} & CSSD & 1.83 & $-$ & 2.40 & $-0.02$ & 1.86 & 0.00 & 3.36 & $-0.36$ & 2.18 & 0.00 & 1.10 & 0.00 \\ & KDU-CSL & $-0.21$ & $-$ & 0.48 & 0.01 & $-0.13$ & $-0.68$ & 0.00 & $-1.07$ & $-0.34$ & $-0.56$ & $-0.29$ & $-0.10$ \\ & KSCM & $-0.04$ & $-$ & 0.10 & 0.21 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & $-0.32$ & 0.00 & 0.00 \\ & ODS & 0.00 & $-$ & $-0.03$ & 0.55 & 0.00 & 1.26 & 1.22 & 0.00 & 0.00 & 0.00 & 0.00 & $-$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{DE} & B90/GR\"{U}NEN & $-15.73$ & $-0.01$ & $-1.63$ & $-0.07$ & $-15.61$ & 1.59 & $-2.06$ & 0.00 & $-11.31$ & 4.96 & $-9.09$ & $-5.99$ \\ & CDU & $-1.50$ & 1.23 & 6.95 & 0.00 & 0.51 & 0.00 & 7.03 & 0.87 & 0.17 & $-3.66$ & 0.00 & 0.00 \\ & CSU & 0.00 & $-0.03$ & 7.90 & 2.76 & 0.00 & 0.89 & 7.84 & 0.00 & 0.00 & 0.00 & 0.57 & 2.96 \\ & FDP & $-6.09$ & $-1.31$ & 0.01 & 32.09 & $-5.05$ & $-2.40$ & 0.00 & 18.33 & $-6.37$ & $-0.78$ & 
$-3.74$ & $-3.19$ \\ & PDS/DIELINKE & 0.09 & 1.06 & $-0.03$ & 18.18 & 0.00 & 0.00 & 0.00 & 5.02 & 0.00 & 0.00 & 0.00 & $-11.29$ \\ & SPD & 0.10 & $-3.78$ & 6.65 & 15.74 & 0.28 & 0.28 & 6.81 & 6.89 & 2.71 & $-1.75$ & $-1.15$ & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{DK} & A & $-9.23$ & 5.48 & $-2.51$ & $-4.47$ & $-11.73$ & 4.15 & $-0.74$ & $-15.46$ & $-11.13$ & 0.44 & $-5.83$ & $-0.78$ \\ & B & $-10.90$ & 2.67 & $-3.71$ & 0.00 & $-12.94$ & 4.67 & 0.87 & $-12.58$ & $-14.37$ & 3.70 & $-12.51$ & $-0.59$ \\ & C & $-6.78$ & 2.53 & 0.00 & 0.00 & $-8.76$ & 3.75 & 6.00 & $-9.88$ & $-9.45$ & $-1.53$ & $-10.43$ & 4.87 \\ & F & $-0.07$ & 32.98 & 0.00 & 9.81 & 0.00 & 30.51 & 0.00 & 0.00 & 0.00 & 1.69 & 0.00 & 4.34 \\ & O & $-0.02$ & $-0.07$ & $-4.85$ & 9.97 & 0.00 & 0.00 & 7.91 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & V & $-13.90$ & $-0.03$ & $-6.39$ & $-7.63$ & $-15.61$ & 0.00 & 0.00 & $-17.21$ & $-17.72$ & 0.00 & $-17.37$ & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{EE} & EKRP-EKD & 0.09 & $-$ & 1.47 & 0.00 & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-$ \\ & IL & 0.41 & $-$ & 3.65 & $-0.48$ & $-$ & $-$ & $-$ & $-$ & 0.41 & 0.21 & 3.12 & 0.00 \\ & K & $-0.03$ & $-$ & 0.73 & $-1.23$ & $-$ & $-$ & $-$ & $-$ & 0.05 & 0.04 & 2.35 & $-6.11$ \\ & RE & $-0.02$ & $-$ & 2.16 & $-0.93$ & $-$ & $-$ & $-$ & $-$ & 0.00 & 0.00 & 2.43 & $-0.29$ \\ & RESP & $-2.46$ & $-$ & $-0.02$ & $-0.06$ & $-$ & $-$ & $-$ & $-$ & $-$ & $-1.17$ & 0.00 & 0.00 \\ & SDE & 1.35 & $-$ & 0.05 & 0.00 & $-$ & $-$ & $-$ & $-$ & 0.00 & 0.00 & 0.00 & $-0.55$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{ES} & IU & $-$ & 0.03 & $-0.01$ & 0.06 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & PNV-EAJ & 3.13 & $-4.06$ & $-1.27$ & $-4.12$ & $-3.12$ & $-2.88$ & $-2.58$ & $-3.32$ & 0.00 & $-0.22$ & 0.00 & $-1.15$ \\ & PP & $-0.02$ & 0.01 & $-0.01$ & $-0.01$ & 0.00 & 0.01 & 0.00 & 0.00 & 6.25 & 0.64 & 7.86 & 0.00 \\ & PSOE & $-0.03$ & 0.03 & $-0.33$ & $-6.68$ & $-1.67$ & 0.00 & $-0.74$ & $-3.51$ & $-0.37$ & 0.00 & 1.79 & $-1.33$ \\
&&&&&&&&&&&&&\\ \multirow{1}{*}{FI} & KD & 7.91 & $-25.89$ & 2.81 & $-0.02$ & 7.34 & 0.00 & 6.12 & 0.00 & 7.68 & 0.00 & 1.30 & 0.00 \\ & KESK & 3.60 & $-0.02$ & 8.04 & $-8.25$ & 1.56 & 8.38 & 5.98 & $-7.82$ & 3.54 & 0.00 & 5.68 & 0.00 \\ & KOK & $-0.03$ & $-2.13$ & $-0.01$ & $-2.94$ & 0.00 & 0.00 & 0.00 & $-1.52$ & 0.00 & 3.00 & 0.00 & 2.07 \\ & RKP-SFP & 3.43 & $-0.20$ & 4.49 & 10.28 & 2.83 & 1.64 & 3.81 & 4.14 & 3.81 & 0.54 & $-$ & $-$ \\ & SDP & 8.98 & 0.01 & 9.30 & 9.30 & 9.32 & 1.66 & 9.08 & 8.69 & 7.91 & 0.75 & 5.06 & $-3.56$ \\ & VAS & 0.10 & $-5.38$ & 0.08 & 8.18 & 0.00 & 4.99 & 0.00 & 3.39 & 0.00 & 2.31 & 0.00 & $-7.37$ \\ & VIHR & $-6.83$ & 1.12 & $-5.02$ & 0.01 & $-6.34$ & 2.43 & $-6.40$ & 0.00 & 2.80 & 1.95 & $-0.54$ & $-7.99$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{FR} & FN & $-$ & 0.00 & 0.02 & $-0.02$ & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 9.08 & 0.00 \\ & PCF & $-$ & $-3.48$ & 0.00 & $-28.59$ & 0.00 & 16.30 & 0.00 & $-22.97$ & 0.00 & 0.21 & 0.00 & $-$ \\ & PRG & $-$ & $-$ & $-$ & $-$ & $-12.55$ & 8.73 & $-11.64$ & $-27.39$ & $-12.48$ & 0.00 & 1.96 & 0.00 \\ & PS & $-$ & $-10.03$ & 18.34 & 0.08 & 12.84 & 3.62 & 2.91 & 0.00 & 11.45 & $-6.74$ & 13.64 & $-$ \\ & UDF & $-$ & $-0.02$ & $-6.53$ & $-11.54$ & $-0.98$ & 0.00 & $-7.73$ & $-9.52$ & $-1.87$ & $-8.68$ & 0.00 & $-$ \\ & UMP & $-$ & $-19.23$ & $-3.83$ & $-9.68$ & $-4.75$ & 11.63 & $-4.66$ & $-10.73$ & $-2.81$ & $-3.27$ & $-$ & $-$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{GR} & KKE & $-0.08$ & 0.14 & $-0.03$ & $-4.58$ & 3.20 & 0.00 & 0.00 & $-2.87$ & 0.00 & 0.00 & $-$ & 0.00 \\ & ND & 0.02 & 2.10 & 0.00 & 0.00 & 0.00 & 2.57 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & $-10.60$ \\ & PASOK & $-0.93$ & $-0.03$ & $-2.42$ & -5.12 & $-0.45$ & 0.00 & $-2.81$ & $-6.94$ & $-2.65$ & $-4.06$ & $-5.83$ & $-$ \\ & SYRIZA & 7.18 & $-3.40$ & 4.96 & 0.05 & 0.00 & $-1.73$ & 0.45 & 0.00 & 2.25 & $-2.38$ & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{HU} & FIDESZ-MPP & $-0.03$ & $-$ & 0.03 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 
0.00 & $-2.10$ & $-0.54$ & $-13.11$ \\ & MDF & 5.53 & $-$ & $-6.12$ & 9.96 & 6.81 & $-0.10$ & $-3.76$ & 8.73 & $-1.93$ & $-1.14$ & 0.00 & $-10.30$ \\ & MSZP & 0.05 & $-$ & $-9.25$ & $-15.63$ & 0.00 & 9.55 & $-6.12$ & $-29.95$ & 5.03 & 0.00 & $-3.29$ & 0.00 \\ & SZDSZ & 23.11 & $-$ & 0.00 & 0.00 & 22.82 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{IE} & FF & 0.02 & 9.52 & 0.03 & 0.01 & 0.27 & 1.72 & 0.38 & 0.00 & 0.32 & 0.00 & 0.83 & 0.00 \\ & FG & 0.92 & 0.02 & 0.45 & 5.34 & 0.00 & 0.00 & 0.00 & 4.37 & 0.00 & $-1.00$ & 0.00 & 0.54 \\ & GREENS & $-0.07$ & 0.03 & $-15.43$ & 0.00 & 0.00 & $-23.05$ & $-21.31$ & 0.00 & $-21.09$ & $-13.63$ & $-16.75$ & $-11.49$ \\ & LAB & 8.95 & 1.36 & $-0.30$ & 16.23 & 9.07 & $-1.63$ & $-1.58$ & 13.08 & $-1.51$ & $-1.13$ & $-2.36$ & 0.39 \\ & SF & 11.94 & 32.25 & 0.00 & 2.88 & 12.77 & 0.00 & 0.00 & 0.17 & 0.00 & 0.00 & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{IT} & AN & 0.03 & 0.83 & $-6.32$ & 0.00 & 0.00 & 6.73 & $-2.63$ & 0.00 & 7.12 & $-24.13$ & $-7.46$ & 6.21 \\ & DS/ULIVO & $-12.74$ & 0.03 & $-6.11$ & $-5.73$ & $-7.78$ & 0.00 & $-9.92$ & 0.00 & $-6.07$ & 0.00 & $-5.41$ & 9.22 \\ & FI & $-8.87$ & 5.67 & 0.00 & $-10.91$ & $-8.16$ & 7.15 & 0.00 & $-9.65$ & 0.00 & 12.03 & 0.00 & 0.00 \\ & LN & $-11.62$ & $-0.08$ & $-3.62$ & $-12.46$ & $-10.36$ & 0.00 & $-4.08$ & $-11.18$ & $-4.40$ & $-12.75$ & $-4.13$ & 0.00 \\ & PRC & $-0.09$ & 7.39 & $-0.11$ & $-0.07$ & 0.00 & 9.49 & 0.00 & $-2.05$ & 0.00 & 0.00 & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{LT} & LICS & $-0.02$ & $-$ & 0.01 & 0.02 & 3.38 & 0.68 & 0.03 & 0.00 & 1.41 & 1.23 & 0.87 & 0.00 \\ & LKD & 0.93 & $-$ & 1.24 & 0.01 & 2.54 & 0.00 & $-6.94$ & 0.00 & 1.52 & 1.13 & 0.00 & 0.00 \\ & LSDP & 0.06 & $-$ & $-0.06$ & $-2.99$ & 0.00 & 0.52 & $-8.86$ & $-0.60$ & 0.00 & 0.00 & 0.08 & 0.30 \\ & NS & 7.78 & $-$ & 6.37 & 3.65 & 15.84 & 1.44 & 0.00 & $-0.95$ & 6.08 & 0.00 & 0.13 & $-0.07$ \\ & TS & 0.11 & $-$ & 1.47 & $-3.46$ & 0.00 & 0.00 & 0.00 
& $-1.38$ & 0.00 & 1.91 & 0.00 & $-1.41$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{LV} & JL & $-2.77$ & $-$ & $-2.22$ & $-0.04$ & $-1.89$ & 0.00 & $-3.94$ & 1.62 & $-1.49$ & 0.00 & $-0.59$ & 0.00 \\ & LC & $-0.66$ & $-$ & $-0.42$ & $-0.03$ & $-0.25$ & 1.00 & $-2.56$ & 0.00 & $-1.17$ & 0.38 & 0.00 & $-$ \\ & PCTVL & $-0.15$ & $-$ & 0.00 & $-2.09$ & 0.00 & 0.00 & 0.00 & 0.58 & 0.00 & 0.97 & 0.00 & $-$ \\ & TB/LNNK & $-0.22$ & $-$ & 0.14 & $-0.61$ & 0.00 & 1.20 & $-0.95$ & 0.76 & 0.00 & 0.25 & $-$ & 1.27 \\ & TP & $0.03$ & $-$ & $-0.03$ & $-0.99$ & 0.48 & 2.16 & 0.00 & 0.00 & 1.03 & 1.79 & $-0.23$ & $-$ \\ & ZZS & 2.13 & $-$ & 0.96 & $-0.27$ & 4.76 & 1.24 & 1.75 & 1.64 & 4.97 & 0.00 & $-$ & $-$ \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{NL} & CDA & $-7.82$ & 7.49 & 5.59 & 15.46 & $-8.25$ & 5.30 & $-8.24$ & 10.55 & $-6.77$ & $-4.41$ & 0.00 & 3.17 \\ & CU-SGP & $-17.19$ & 19.41 & $-4.36$ & $-0.02$ & $-17.48$ & 12.69 & $-18.14$ & 0.00 & $-15.03$ & 18.46 & $-10.61$ & 0.21 \\ & D66 & $-11.50$ & 0.00 & 0.71 & 22.23 & $-11.01$ & 0.00 & $-13.59$ & 0.00 & $-13.05$ & $-4.48$ & $-9.22$ & 12.95 \\ & GL & $-18.64$ & 3.59 & $-12.79$ & 0.05 & $-16.94$ & 3.10 & $-26.68$ & $-32.66$ & $-14.52$ & $-6.20$ & $-15.43$ & 10.33 \\ & LPF & $-0.01$ & 0 & 14.93 & 17.57 & $-0.06$ & 0.00 & 0.00 & 11.79 & 0.00 & 0.00 & 3.94 & $-16.60$ \\ & PVDA & $-10.31$ & $-2.01$ & 1.54 & 8.10 & $-9.99$ & $-0.60$ & $-11.90$ & $-2.76$ & $-6.74$ & $-1.04$ & 2.35 & $-5.93$ \\ & SP & -0.15 & 35.41 & $-0.03$ & 2.43 & 0.00 & 27.02 & 0.00 & $-7.21$ & 0.00 & 31.51 & 0.00 & 0.00 \\ & VVD & $-14.00$ & 15.88 & 0.02 & $-15.41$ & $-14.29$ & 11.70 & $-13.98$ & $-1.57$ & $-13.09$ & 0.00 & $-7.36$ & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{PL} & PIS & 0.00 & 0.00 & 0.19 & $-0.03$ & 0.00 & $-0.14$ & 3.23 & 0.00 & 0.00 & 0.00 & 2.48 & 0.00 \\ & PO & $-1.93$ & $-1.93$ & 0.00 & $-1.98$ & $-1.42$ & $-3.50$ & 0.00 & $-2.60$ & $-2.43$ & $-3.53$ & 0.00 & 0.70 \\ & PSL & $-4.09$ & $-4.09$ & $-10.47$ & 0.80 & $-3.58$ & 0.00 & 0.00 & 1.20 & $-8.49$ & 
$-2.05$ & 0.00 & $-$ \\ & SLD-UP & 0.04 & 0.04 & 0.00 & 0.03 & 0.00 & $-3.87$ & 5.96 & 0.00 & 0.00 & $-3.43$ & 4.63 & 10.66 \\ & UW & $-0.73$ & $-0.73$ & 7.20 & $-17.42$ & $-1.23$ & 0.00 & 8.33 & $-31.82$ & $-2.30$ & 0.00 & 6.95 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{PT} & BE & 0.08 & 0.00 & $-13.57$ & 0.11 & $-$ & $-$ & $-$ & $-$ & $-18.64$ & 0.86 & $-10.04$ & 0.00 \\ & CDU & 17.82 & 8.90 & $-0.11$ & 18.61 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & PS & 4.52 & 0.00 & 0.96 & 7.64 & 0.09 & 0.00 & 1.40 & 5.35 & 1.20 & 0.00 & 2.84 & 18.43 \\ & PSD & $-0.02$ & 6.13 & 0.00 & 0.00 & 0.00 & 4.84 & 0.00 & 0.00 & 0.00 & 6.75 & 0.00 & 2.88 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{SE} & C & 6.32 & 13.24 & 7.23 & 11.58 & 6.09 & $-0.98$ & 7.51 & 2.27 & 8.68 & 14.05 & 4.72 & $-4.78$ \\ & KD & $-2.82$ & $-4.21$ & $-0.16$ & $-0.02$ & $-2.74$ & $-6.24$ & $-0.66$ & 0.00 & 0.00 & $-4.42$ & $-1.29$ & $-3.57$ \\ & M & 0.02 & 0.00 & $-0.02$ & 8.41 & 0.00 & 0.00 & 0.00 & 1.42 & 3.71 & 0.00 & 0.00 & 0.00 \\ & MP & 12.67 & 38.07 & 12.31 & 30.48 & 13.11 & 0.00 & 10.34 & 0.00 & 23.30 & 35.47 & 8.52 & 1.22 \\ & S & 3.31 & 9.03 & 4.39 & 5.13 & 2.64 & 4.90 & 4.25 & $-1.74$ & 3.84 & 8.69 & 2.76 & $-6.15$ \\ & V & $-0.09$ & $-0.02$ & $-0.04$ & 0.05 & 0.00 & $-33.98$ & 0.00 & $-20.93$ & 0.00 & 0.00 & 0.00 & 0.00 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{SI} & NSI & $-0.02$ & $-$ & 0.00 & $-0.01$ & 0.00 & 0.00 & 1.20 & 0.00 & 0.00 & 0.00 & 0.00 & $-0.61$ \\ & SDS & $-3.71$ & $-$ & $-1.75$ & $-3.53$ & $-2.32$ & 0.00 & 0.64 & $-2.35$ & $-3.42$ & $-0.30$ & $-2.79$ & 0.00 \\ & SLS & $-5.09$ & $-$ & $-1.10$ & $-5.61$ & $-4.10$ & 0.00 & 0.00 & $-4.68$ & $-6.45$ & 0.00 & $-4.23$ & 0.00 \\ & ZLSD & 0.07 & $-$ & 0.00 & $-0.08$ & 0.00 & -1.48 & 0.00 & 0.00 & 0.00 & $-0.72$ & 0.00 & 0.85 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{SK} & KDH & 0.01 & $-$ & $-0.03$ & $-0.02$ & 0.00 & $-0.67$ & 3.32 & 0.00 & 0.00 & 0.00 & 0.00 & $-0.99$ \\ & LSHZDS & $-5.66$ & $-$ & $-3.90$ & $-9.14$ & $-6.99$ & $-1.51$ & $-6.83$ 
& 0.32 & $-4.07$ & 2.93 & $-6.30$ & 0.00 \\ & SDKUDS & $-3.25$ & $-$ & $-2.14$ & $-4.29$ & $-3.31$ & 0.00 & 0.00 & 0.00 & $-2.88$ & 0.35 & $-3.18$ & $-1.12$ \\ & SMER & $-0.03$ & $-$ & 0.02 & 0.02 & 0.00 & 0.00 & 0.00 & $-1.57$ & 0.00 & 0.00 & 0.00 & 0.00 \\ & SMK-MKP & 1.84 & $-$ & 4.51 & 10.11 & 9.08 & 11.60 & 1.29 & $-7.02$ & 3.25 & 12.40 & $-5.16$ & 0.83 \\ &&&&&&&&&&&&&\\ \multirow{1}{*}{UK} & CON & $-0.02$ & 0.10 & 0.01 & 0.02 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & LAB & 14.18 & 0.78 & 20.02 & $-5.84$ & 8.94 & 2.07 & 9.87 & $-1.60$ & 21.02 & $-12.60$ & 27.25 & 7.66 \\ & LD & 30.74 & 0.03 & 40.65 & 0.07 & 18.73 & 0.00 & 17.56 & 0.00 & 22.54 & $-14.73$ & 18.21 & 20.66 \\ & PC & $-0.02$ & 23.04 & 0.04 & $-11.52$ & $-17.42$ & 14.65 & $-20.92$ & $-17.86$ & 0.00 & 0.00 & 0.00 & 9.79 \\ & SNP & 17.86 & 12.72 & 18.38 & $-3.85$ & 0.00 & 6.87 & 0.00 & $-6.10$ & 31.37 & $-8.42$ & 37.88 & 0.00 \\ &&&&&&&&&&&&&\\ \bottomrule \end{longtable} \end{landscape} \newpage \begin{landscape} \section*{Appendix G: Concordance Correlations} The four tables below show the concordance correlations between the wordscores estimates for the virgin texts and the expert scores, for all the different combinations of exogenous reference score, type of benchmark, transformation, and rescaling.\\ \centering \begin{longtable}[c]{lllcrcrrcr} \caption{Left-Right Dimension}\\ \toprule Reference & Benchmark & Transformation & Rescale & rho\_c & \# of Observations & CI\_low & CI\_high & Pearson's \textit{r} & C\_b \\ \midrule BL & CHES & LBG & pc & 0.624 & 133 & 0.527 & 0.704 & 0.687 & 0.907 \\ BL & EMP & LBG & pc & 0.653 & 151 & 0.561 & 0.73 & 0.69 & 0.946 \\ BL & EUP & LBG & pc & 0.497 & 147 & 0.395 & 0.587 & 0.595 & 0.836 \\ BL & CHES & LBG & wd & 0.638 & 138 & 0.529 & 0.726 & 0.644 & 0.99 \\ BL & EMP & LBG & wd & 0.587 & 158 & 0.48 & 0.676 & 0.617 & 0.951 \\ BL & EUP & LBG & wd & 0.438 & 154 & 0.316 & 0.545 & 0.49 & 0.893 \\ BL & CHES & MV & pc & 0.624 & 133 & 0.527 & 0.704 
& 0.687 & 0.907 \\ BL & EMP & MV & pc & 0.653 & 151 & 0.561 & 0.73 & 0.69 & 0.946 \\ BL & EUP & MV & pc & 0.507 & 147 & 0.407 & 0.596 & 0.607 & 0.836 \\ BL & CHES & MV & wd & 0.267 & 138 & 0.123 & 0.401 & 0.296 & 0.904 \\ BL & EMP & MV & wd & 0.213 & 158 & 0.089 & 0.329 & 0.263 & 0.809 \\ BL & EUP & MV & wd & 0.079 & 154 & -0.052 & 0.207 & 0.095 & 0.825 \\ CHES & CHES & LBG & pc & 0.597 & 134 & 0.494 & 0.683 & 0.653 & 0.915 \\ CHES & EMP & LBG & pc & 0.673 & 147 & 0.583 & 0.747 & 0.71 & 0.948 \\ CHES & EUP & LBG & pc & 0.464 & 142 & 0.351 & 0.564 & 0.546 & 0.851 \\ CHES & CHES & LBG & wd & 0.642 & 138 & 0.533 & 0.731 & 0.643 & 1 \\ CHES & EMP & LBG & wd & 0.565 & 158 & 0.454 & 0.658 & 0.595 & 0.949 \\ CHES & EUP & LBG & wd & 0.445 & 154 & 0.323 & 0.552 & 0.497 & 0.896 \\ CHES & CHES & MV & pc & 0.597 & 134 & 0.494 & 0.683 & 0.653 & 0.915 \\ CHES & EMP & MV & pc & 0.673 & 147 & 0.583 & 0.747 & 0.71 & 0.948 \\ CHES & EUP & MV & pc & 0.464 & 142 & 0.351 & 0.564 & 0.546 & 0.851 \\ CHES & CHES & MV & wd & 0.314 & 138 & 0.186 & 0.431 & 0.377 & 0.832 \\ CHES & EMP & MV & wd & 0.215 & 158 & 0.09 & 0.333 & 0.262 & 0.821 \\ CHES & EUP & MV & wd & 0.176 & 154 & 0.044 & 0.303 & 0.209 & 0.844 \\ EMP & CHES & LBG & pc & 0.485 & 138 & 0.365 & 0.59 & 0.535 & 0.906 \\ EMP & EMP & LBG & pc & 0.590 & 158 & 0.487 & 0.677 & 0.62 & 0.951 \\ EMP & EUP & LBG & pc & 0.428 & 154 & 0.317 & 0.527 & 0.508 & 0.841 \\ EMP & CHES & LBG & wd & 0.235 & 138 & 0.169 & 0.299 & 0.607 & 0.387 \\ EMP & EMP & LBG & wd & 0.320 & 158 & 0.25 & 0.387 & 0.667 & 0.48 \\ EMP & EUP & LBG & wd & 0.298 & 154 & 0.221 & 0.372 & 0.591 & 0.505 \\ EMP & CHES & MV & pc & 0.485 & 138 & 0.365 & 0.59 & 0.535 & 0.906 \\ EMP & EMP & MV & pc & 0.590 & 158 & 0.487 & 0.677 & 0.62 & 0.951 \\ EMP & EUP & MV & pc & 0.428 & 154 & 0.317 & 0.527 & 0.508 & 0.841 \\ EMP & CHES & MV & wd & 0.070 & 138 & 0.04 & 0.099 & 0.409 & 0.17 \\ EMP & EMP & MV & wd & 0.093 & 158 & 0.06 & 0.126 & 0.446 & 0.208 \\ EMP & EUP & MV & wd & 0.083 & 154 & 
0.046 & 0.12 & 0.361 & 0.229 \\ \bottomrule \multicolumn{9}{l}{ * wd = whole dimension, pc = per country} \end{longtable} \end{landscape} \begin{landscape} \centering \begin{longtable}[c]{lllcrcrrcr} \caption{EU Integration Dimension}\\ \toprule Reference & Benchmark & Transformation & Rescale & rho\_c & \# of Observations & CI\_low & CI\_high & Pearson's \textit{r} & C\_b \\ \midrule BL & CHES & LBG & pc & 0.518 & 98 & 0.365 & 0.644 & 0.539 & 0.961 \\ BL & EMP & LBG & pc & 0.452 & 107 & 0.289 & 0.588 & 0.458 & 0.987 \\ BL & EUP & LBG & pc & 0.489 & 104 & 0.332 & 0.619 & 0.499 & 0.979 \\ BL & CHES & LBG & wd & 0.452 & 138 & 0.309 & 0.575 & 0.453 & 0.998 \\ BL & EMP & LBG & wd & 0.466 & 159 & 0.335 & 0.579 & 0.467 & 0.999 \\ BL & EUP & LBG & wd & 0.403 & 154 & 0.263 & 0.526 & 0.406 & 0.993 \\ BL & CHES & MV & pc & 0.518 & 98 & 0.365 & 0.644 & 0.539 & 0.961 \\ BL & EMP & MV & pc & 0.452 & 107 & 0.289 & 0.588 & 0.458 & 0.987 \\ BL & EUP & MV & pc & 0.489 & 104 & 0.332 & 0.619 & 0.499 & 0.979 \\ BL & CHES & MV & wd & 0.248 & 138 & 0.087 & 0.396 & 0.251 & 0.987 \\ BL & EMP & MV & wd & 0.312 & 159 & 0.170 & 0.44 & 0.323 & 0.966 \\ BL & EUP & MV & wd & 0.216 & 154 & 0.063 & 0.359 & 0.22 & 0.983 \\ CHES & CHES & LBG & pc & 0.430 & 134 & 0.294 & 0.55 & 0.462 & 0.931 \\ CHES & EMP & LBG & pc & 0.345 & 148 & 0.202 & 0.474 & 0.361 & 0.957 \\ CHES & EUP & LBG & pc & 0.406 & 142 & 0.269 & 0.527 & 0.431 & 0.944 \\ CHES & CHES & LBG & wd & 0.508 & 138 & 0.389 & 0.611 & 0.566 & 0.899 \\ CHES & EMP & LBG & wd & 0.405 & 159 & 0.279 & 0.516 & 0.447 & 0.906 \\ CHES & EUP & LBG & wd & 0.334 & 154 & 0.211 & 0.446 & 0.4 & 0.834 \\ CHES & CHES & MV & pc & 0.430 & 134 & 0.294 & 0.55 & 0.462 & 0.931 \\ CHES & EMP & MV & pc & 0.345 & 148 & 0.202 & 0.474 & 0.361 & 0.957 \\ CHES & EUP & MV & pc & 0.406 & 142 & 0.269 & 0.527 & 0.431 & 0.944 \\ CHES & CHES & MV & wd & 0.361 & 138 & 0.231 & 0.478 & 0.421 & 0.858 \\ CHES & EMP & MV & wd & 0.256 & 159 & 0.123 & 0.381 & 0.289 & 0.886 \\ CHES & EUP & 
MV & wd & 0.140 & 154 & 0.002 & 0.273 & 0.16 & 0.873 \\ EMP & CHES & LBG & pc & 0.370 & 138 & 0.237 & 0.489 & 0.423 & 0.875 \\ EMP & EMP & LBG & pc & 0.296 & 159 & 0.165 & 0.416 & 0.337 & 0.878 \\ EMP & EUP & LBG & pc & 0.401 & 154 & 0.278 & 0.511 & 0.455 & 0.88 \\ EMP & CHES & LBG & wd & 0.202 & 138 & 0.141 & 0.261 & 0.577 & 0.35 \\ EMP & EMP & LBG & wd & 0.180 & 159 & 0.125 & 0.235 & 0.517 & 0.348 \\ EMP & EUP & LBG & wd & 0.223 & 154 & 0.165 & 0.279 & 0.624 & 0.357 \\ EMP & CHES & MV & pc & 0.370 & 138 & 0.237 & 0.489 & 0.423 & 0.875 \\ EMP & EMP & MV & pc & 0.296 & 159 & 0.165 & 0.416 & 0.337 & 0.878 \\ EMP & EUP & MV & pc & 0.401 & 154 & 0.278 & 0.511 & 0.455 & 0.88 \\ EMP & CHES & MV & wd & 0.082 & 138 & 0.047 & 0.117 & 0.41 & 0.201 \\ EMP & EMP & MV & wd & 0.075 & 159 & 0.043 & 0.107 & 0.374 & 0.199 \\ EMP & EUP & MV & wd & 0.093 & 154 & 0.058 & 0.126 & 0.45 & 0.206 \\ \bottomrule \multicolumn{9}{l}{ * wd = whole dimension, pc = per country} \end{longtable} \end{landscape} \begin{landscape} \centering \begin{longtable}[c]{lllcrcrrcr} \caption{Economic Dimension}\\ \toprule Reference & Benchmark & Transformation & Rescale & rho\_c & \# of Observations & CI\_low & CI\_high & Pearson's \textit{r} & C\_b \\ \midrule BL & CHES & LBG & pc & 0.449 & 138 & 0.330 & 0.553 & 0.52 & 0.863 \\ BL & EMP & LBG & pc & 0.424 & 158 & 0.303 & 0.531 & 0.472 & 0.898 \\ BL & EUP & LBG & pc & 0.433 & 154 & 0.322 & 0.532 & 0.526 & 0.823 \\ BL & CHES & LBG & wd & 0.576 & 138 & 0.453 & 0.677 & 0.579 & 0.995 \\ BL & EMP & LBG & wd & 0.481 & 158 & 0.356 & 0.589 & 0.498 & 0.966 \\ BL & EUP & LBG & wd & 0.527 & 154 & 0.415 & 0.623 & 0.585 & 0.901 \\ BL & CHES & MV & pc & 0.449 & 138 & 0.330 & 0.553 & 0.52 & 0.863 \\ BL & EMP & MV & pc & 0.424 & 158 & 0.303 & 0.531 & 0.472 & 0.898 \\ BL & EUP & MV & pc & 0.433 & 154 & 0.322 & 0.532 & 0.526 & 0.823 \\ BL & CHES & MV & wd & 0.242 & 138 & 0.140 & 0.339 & 0.367 & 0.659 \\ BL & EMP & MV & wd & 0.209 & 158 & 0.123 & 0.292 & 0.359 & 0.583 \\ BL & 
EUP & MV & wd & 0.192 & 154 & 0.109 & 0.272 & 0.355 & 0.541 \\ CHES & CHES & LBG & pc & 0.463 & 134 & 0.345 & 0.566 & 0.542 & 0.854 \\ CHES & EMP & LBG & pc & 0.431 & 147 & 0.308 & 0.539 & 0.49 & 0.878 \\ CHES & EUP & LBG & pc & 0.411 & 142 & 0.295 & 0.516 & 0.51 & 0.807 \\ CHES & CHES & LBG & wd & 0.553 & 138 & 0.428 & 0.658 & 0.563 & 0.983 \\ CHES & EMP & LBG & wd & 0.401 & 158 & 0.273 & 0.515 & 0.436 & 0.919 \\ CHES & EUP & LBG & wd & 0.397 & 154 & 0.279 & 0.503 & 0.479 & 0.828 \\ CHES & CHES & MV & pc & 0.467 & 128 & 0.347 & 0.572 & 0.545 & 0.857 \\ CHES & EMP & MV & pc & 0.438 & 142 & 0.313 & 0.547 & 0.494 & 0.886 \\ CHES & EUP & MV & pc & 0.451 & 136 & 0.333 & 0.554 & 0.545 & 0.827 \\ CHES & CHES & MV & wd & 0.114 & 138 & -0.04 & 0.263 & 0.124 & 0.919 \\ CHES & EMP & MV & wd & 0.092 & 158 & -0.038 & 0.218 & 0.111 & 0.828 \\ CHES & EUP & MV & wd & 0.049 & 154 & -0.068 & 0.164 & 0.066 & 0.737 \\ EMP & CHES & LBG & pc & 0.348 & 138 & 0.216 & 0.467 & 0.4 & 0.87 \\ EMP & EMP & LBG & pc & 0.427 & 158 & 0.307 & 0.534 & 0.477 & 0.896 \\ EMP & EUP & LBG & pc & 0.383 & 154 & 0.268 & 0.487 & 0.469 & 0.815 \\ EMP & CHES & LBG & wd & 0.202 & 138 & 0.123 & 0.279 & 0.429 & 0.472 \\ EMP & EMP & LBG & wd & 0.271 & 158 & 0.190 & 0.348 & 0.501 & 0.541 \\ EMP & EUP & LBG & wd & 0.320 & 154 & 0.226 & 0.408 & 0.495 & 0.647 \\ EMP & CHES & MV & pc & 0.348 & 138 & 0.216 & 0.467 & 0.4 & 0.87 \\ EMP & EMP & MV & pc & 0.427 & 158 & 0.307 & 0.534 & 0.477 & 0.896 \\ EMP & EUP & MV & pc & 0.383 & 154 & 0.268 & 0.487 & 0.469 & 0.815 \\ EMP & CHES & MV & wd & 0.038 & 138 & -0.007 & 0.083 & 0.144 & 0.266 \\ EMP & EMP & MV & wd & 0.106 & 158 & 0.056 & 0.156 & 0.333 & 0.32 \\ EMP & EUP & MV & wd & 0.093 & 154 & 0.033 & 0.154 & 0.242 & 0.386 \\ \bottomrule \multicolumn{9}{l}{ * wd = whole dimension, pc = per country} \end{longtable} \end{landscape} \begin{landscape} \centering \begin{longtable}[c]{lllcrcrrcr} \caption{Social Dimension}\\ \toprule Reference & Benchmark & Transformation & Rescale 
& rho\_c & \# of Observations & CI\_low & CI\_high & Pearson's \textit{r} & C\_b \\ \midrule BL & CHES & LBG & pc & 0.569 & 138 & 0.459 & 0.662 & 0.617 & 0.923 \\ BL & EMP & LBG & pc & 0.217 & 151 & 0.077 & 0.348 & 0.243 & 0.892 \\ BL & EUP & LBG & pc & 0.475 & 154 & 0.359 & 0.576 & 0.522 & 0.909 \\ BL & CHES & LBG & wd & 0.609 & 138 & 0.495 & 0.703 & 0.62 & 0.982 \\ BL & EMP & LBG & wd & 0.243 & 151 & 0.096 & 0.381 & 0.257 & 0.948 \\ BL & EUP & LBG & wd & 0.54 & 154 & 0.419 & 0.642 & 0.544 & 0.993 \\ BL & CHES & MV & pc & 0.569 & 138 & 0.459 & 0.662 & 0.617 & 0.923 \\ BL & EMP & MV & pc & 0.217 & 151 & 0.077 & 0.348 & 0.243 & 0.892 \\ BL & EUP & MV & pc & 0.475 & 154 & 0.359 & 0.576 & 0.522 & 0.909 \\ BL & CHES & MV & wd & 0.279 & 138 & 0.173 & 0.379 & 0.403 & 0.694 \\ BL & EMP & MV & wd & 0.052 & 151 & -0.057 & 0.16 & 0.076 & 0.681 \\ BL & EUP & MV & wd & 0.226 & 154 & 0.113 & 0.334 & 0.302 & 0.751 \\ CHES & CHES & LBG & pc & 0.552 & 134 & 0.435 & 0.651 & 0.588 & 0.939 \\ CHES & EMP & LBG & pc & 0.161 & 141 & 0.014 & 0.301 & 0.181 & 0.891 \\ CHES & EUP & LBG & pc & 0.445 & 142 & 0.318 & 0.556 & 0.483 & 0.921 \\ CHES & CHES & LBG & wd & 0.585 & 138 & 0.464 & 0.684 & 0.587 & 0.996 \\ CHES & EMP & LBG & wd & 0.154 & 151 & 0.019 & 0.283 & 0.183 & 0.842 \\ CHES & EUP & LBG & wd & 0.455 & 154 & 0.323 & 0.57 & 0.465 & 0.978 \\ CHES & CHES & MV & pc & 0.552 & 134 & 0.435 & 0.651 & 0.588 & 0.939 \\ CHES & EMP & MV & pc & 0.161 & 141 & 0.014 & 0.301 & 0.181 & 0.891 \\ CHES & EUP & MV & pc & 0.445 & 142 & 0.318 & 0.556 & 0.483 & 0.921 \\ CHES & CHES & MV & wd & 0.208 & 138 & 0.085 & 0.324 & 0.274 & 0.759 \\ CHES & EMP & MV & wd & -0.005 & 151 & -0.118 & 0.108 & -0.007 & 0.705 \\ CHES & EUP & MV & wd & 0.166 & 154 & 0.027 & 0.299 & 0.188 & 0.887 \\ EMP & CHES & LBG & pc & 0.228 & 138 & 0.075 & 0.37 & 0.244 & 0.936 \\ EMP & EMP & LBG & pc & 0.094 & 151 & -0.048 & 0.232 & 0.106 & 0.883 \\ EMP & EUP & LBG & pc & 0.177 & 154 & 0.033 & 0.313 & 0.193 & 0.916 \\ EMP & CHES & LBG & 
wd & 0.070 & 138 & 0.019 & 0.122 & 0.233 & 0.303 \\ EMP & EMP & LBG & wd & 0.080 & 151 & 0.009 & 0.151 & 0.181 & 0.443 \\ EMP & EUP & LBG & wd & 0.056 & 154 & 0.005 & 0.106 & 0.177 & 0.316 \\ EMP & CHES & MV & pc & 0.228 & 138 & 0.075 & 0.37 & 0.244 & 0.936 \\ EMP & EMP & MV & pc & 0.094 & 151 & -0.048 & 0.232 & 0.106 & 0.883 \\ EMP & EUP & MV & pc & 0.177 & 154 & 0.033 & 0.313 & 0.193 & 0.916 \\ EMP & CHES & MV & wd & 0.022 & 138 & -0.011 & 0.055 & 0.114 & 0.194 \\ EMP & EMP & MV & wd & 0.042 & 151 & -0.007 & 0.091 & 0.138 & 0.305 \\ EMP & EUP & MV & wd & 0.019 & 154 & -0.015 & 0.053 & 0.089 & 0.215 \\ \bottomrule \multicolumn{9}{l}{ * wd = whole dimension, pc = per country} \end{longtable} \end{landscape} \newpage The four graphs below show scatterplot matrices between the wordscores obtained with the LBG transformation (as reported in the tables above) and the 2009 expert scores. The matrices were constructed in R using the \texttt{car} package and show the relations between the six data sets, with a density plot on the diagonal.\\ \begin{figure}[!htbp] \centering \includegraphics[scale=.9]{ApG-LR.pdf} \caption{Left-Right Dimension} \label{fig:leftrightmatrix} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[scale=.9]{ApG-EU.pdf} \caption{European Integration Dimension} \label{fig:euintmatrix} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[scale=.9]{ApG-EC.pdf} \caption{Economic Dimension} \label{fig:economicmatrix} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[scale=.9]{ApG-SO.pdf} \caption{Social Dimension} \label{fig:socialmatrix} \end{figure} \newpage \section*{Appendix H: Pearson's \textit{r} without Rescaling} \begin{table}[!ht] \centering \caption{LBG Transformation} \begin{tabular}{cccc} \toprule Dimension & Reference & Benchmark & Pearson's r \\ \midrule EC & BL & CHES & 0.5796 \\ EC & BL & EUP & 0.5859 \\ EC & BL & EMP & 0.4934 \\ EC & CHES & CHES & 0.6045 \\ EC & CHES & EUP & 0.5891 \\ EC & CHES & EMP & 0.5489
\\ EC & EMP & CHES & 0.4291 \\ EC & EMP & EUP & 0.4951 \\ EC & EMP & EMP & 0.5005 \\ EU & BL & CHES & 0.6217 \\ EU & BL & EUP & 0.5864 \\ EU & BL & EMP & 0.5148 \\ EU & CHES & CHES & 0.6521 \\ EU & CHES & EUP & 0.6151 \\ EU & CHES & EMP & 0.5568 \\ EU & EMP & CHES & 0.5772 \\ EU & EMP & EUP & 0.6275 \\ EU & EMP & EMP & 0.5162 \\ LR & BL & CHES & 0.6928 \\ LR & BL & EUP & 0.6104 \\ LR & BL & EMP & 0.6882 \\ LR & CHES & CHES & 0.6988 \\ LR & CHES & EUP & 0.5909 \\ LR & CHES & EMP & 0.7125 \\ LR & EMP & CHES & 0.6119 \\ LR & EMP & EUP & 0.5907 \\ LR & EMP & EMP & 0.6689 \\ SO & BL & CHES & 0.6211 \\ SO & BL & EUP & 0.5416 \\ SO & BL & EMP & 0.2592 \\ SO & CHES & CHES & 0.6367 \\ SO & CHES & EUP & 0.5382 \\ SO & CHES & EMP & 0.2446 \\ SO & EMP & CHES & 0.2106 \\ SO & EMP & EUP & 0.1399 \\ SO & EMP & EMP & 0.1764 \\ \bottomrule \end{tabular} \end{table} \newpage \begin{table}[!ht] \centering \caption{MV Transformation} \begin{tabular}{cccc} \toprule Dimension & Reference & Benchmark & Pearson's r \\ \midrule EC & BL & CHES & 0.3675 \\ EC & BL & EUP & 0.3548 \\ EC & BL & EMP & 0.3589 \\ EC & CHES & CHES & 0.3296 \\ EC & CHES & EUP & 0.3977 \\ EC & CHES & EMP & 0.3296 \\ EC & EMP & CHES & 0.1436 \\ EC & EMP & EUP & 0.2424 \\ EC & EMP & EMP & 0.3326 \\ EU & BL & CHES & 0.4582 \\ EU & BL & EUP & 0.4921 \\ EU & BL & EMP & 0.3726 \\ EU & CHES & CHES & 0.4864 \\ EU & CHES & EUP & 0.489 \\ EU & CHES & EMP & 0.4651 \\ EU & EMP & CHES & 0.4097 \\ EU & EMP & EUP & 0.4496 \\ EU & EMP & EMP & 0.3745 \\ LR & BL & CHES & 0.3614 \\ LR & BL & EUP & 0.241 \\ LR & BL & EMP & 0.3508 \\ LR & CHES & CHES & 0.522 \\ LR & CHES & EUP & 0.3684 \\ LR & CHES & EMP & 0.4603 \\ LR & EMP & CHES & 0.4092 \\ LR & EMP & EUP & 0.3612 \\ LR & EMP & EMP & 0.4462 \\ SO & BL & CHES & 0.4025 \\ SO & BL & EUP & 0.3015 \\ SO & BL & EMP & 0.0763 \\ SO & CHES & CHES & 0.4069 \\ SO & CHES & EUP & 0.2971 \\ SO & CHES & EMP & 0.0599 \\ SO & EMP & CHES & 0.1135 \\ SO & EMP & EUP & 0.0888 \\ SO & EMP & EMP & 0.1383 \\ 
\bottomrule \end{tabular} \end{table} \newpage \bibliographystyle{apsr}
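The concordance correlations reported in Appendix G follow Lin's decomposition $\rho_c = r \cdot C_b$, where $r$ is Pearson's correlation and $C_b$ is the bias correction factor; for instance, the first row of the Left-Right table gives $0.687 \times 0.907 \approx 0.624$. The following is a minimal sketch of this statistic in Python; it assumes the standard sample definition of Lin's coefficient and is not part of the replication code (the analysis itself was carried out in R).

```python
from math import sqrt

def concordance(x, y):
    """Lin's concordance correlation rho_c together with its
    decomposition rho_c = r * C_b (Pearson's r times the bias
    correction factor C_b), as reported in the Appendix G tables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # biased (1/n) sample moments, as in Lin's original definition
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    rho_c = 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
    r = sxy / sqrt(sx2 * sy2)
    return rho_c, r, rho_c / r
```

Since $C_b \le 1$, $|\rho_c| \le |r|$ always holds: the concordance correlation penalizes estimates that correlate well with the expert scores but are shifted or rescaled relative to them.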
\section{Introduction}\label{introduction} Many Banach spaces which play an important role in functional analysis and its applications are obtained in a special way: the norms of these spaces are generated by positive sublinear operators and by $L_p$-norms. In connection with the Hardy and Copson operators $$ (Pf)(x) : = \frac{1}{x} \int_0^x f(t)\,dt \qq \mbox{and} \qq (Qf)(x) : = \int_x^{\infty} \frac{f(t)}{t}\,dt,\qq (x > 0), $$ the classical Ces\`{a}ro function space $$ \ces(p) = \bigg\{ f:\, \|f\|_{\ces(p)} : = \bigg( \int_0^{\infty} \bigg( \frac{1}{x} \int_0^x |f(t)|\,dt \bigg)^p\,dx \bigg)^{\frac{1}{p}} < \infty \bigg\}, $$ and the classical Copson function space $$ \cop(p) = \bigg\{ f:\, \|f\|_{\cop(p)} : = \bigg( \int_0^{\infty} \bigg( \int_x^{\infty} \frac{|f(t)|}{t}\,dt \bigg)^p\,dx \bigg)^{\frac{1}{p}} < \infty \bigg\}, $$ where $1 < p \le \infty$, with the usual modifications if $p = \infty$, are of interest. The classical Ces\`{a}ro function spaces $\ces(p)$ were introduced in 1970 by Shiue \cite{shiue} and subsequently studied in \cite{hashus}. These spaces were defined analogously to the Ces\`{a}ro sequence spaces that had appeared two years earlier in \cite{prog}, when the Dutch Mathematical Society posted a problem to find a representation of their dual spaces. This problem was resolved by Jagers \cite{jagers} in 1974, who gave an explicit isometric description of the dual of the Ces\`{a}ro sequence space. In \cite{syzhanglee}, Sy, Zhang and Lee gave a description of the dual spaces of $\ces(p)$ based on Jagers' result. In 1996 a different, isomorphic description, due to Bennett, appeared in \cite{bennett1996}. For a long time, Ces\`{a}ro function spaces did not attract a lot of attention, contrary to their sequence counterparts. In fact, there is a quite rich literature concerning different topics studied in Ces\`{a}ro sequence spaces, as for instance in \cites{CuiPluc,cmp,cuihud1999,cuihud2001,chencuihudsims,cuihudli}.
However, recently in a series of papers \cites{astasmal2008,astasmal2009,astasmal2010,astasmalig10,astashkinmaligran11,asmal12,asmal13,astas5}, Astashkin and Maligranda started to study the structure of Ces\`{a}ro function spaces. Among others, in \cite{astasmal2009} they investigated the dual spaces of $\ces (p)$ for $1 < p < \infty$. Their description can be viewed as analogous to the one given for sequence spaces in \cite{bennett1996} (for more detailed information about the history of classical Ces\`{a}ro spaces, see the recent survey paper \cite{asmalsurvey}). In \cite[Theorem 21.1]{bennett1996} Bennett observes that the classical Ces\`{a}ro function space and the classical Copson function space coincide for $p > 1$. He also derives estimates for the norms of the corresponding inclusion operators. The same result, with different estimates, is due to Boas \cite{boas1970}, who in fact obtained the integral analogue of the Askey-Boas theorem \cite[Lemma 6.18]{boas1967} and \cite[Lemma]{askeyboas}. These results were generalized in \cite{grosse} using the blocking technique. Let $A$ be any measurable subset of $\I$. By $\mp (A)$ we denote the set of all measurable functions on $A$. The symbol $\mp^+ (A)$ stands for the collection of all $f\in\mp (A)$ which are non-negative on $A$. The family of all weight functions (also called just weights) on $A$, that is, of all functions which are measurable, positive and finite a.e. on $A$, is denoted by $\W (A)$. For $p\in (0,\i]$, we define the functional $\|\cdot\|_{p,A}$ on $\mp (A)$ by \begin{equation*} \|f\|_{p,A} : = \left\{\begin{array}{cl} \bigg(\int_A |f(x)|^p \,dx \bigg)^{\frac{1}{p}} & \qq\mbox{if}\qq p<\i, \\ \esup_{A} |f(x)| & \qq\mbox{if}\qq p=\i. \end{array} \right. \end{equation*} If $w\in \W(A)$, then the weighted Lebesgue space $L_p(w,A)$ is given by \begin{equation*} L_p(w,A) \equiv L_{p,w}(A) : = \{f\in \mp (A):\,\, \|f\|_{p,w,A} : = \|fw\|_{p,A} < \i\}, \end{equation*} and it is equipped with the quasi-norm $\|\cdot\|_{p,w,A}$.
When $A = \I$, we often write simply $L_{p,w}$ and $L_p(w)$ instead of $L_{p,w}(A)$ and $L_p(w,A)$, respectively. We adopt the following usual conventions. \begin{conv}\label{Notat.and.prelim.conv.1.1} {\rm (i)} Throughout the paper we put $0/0 = 0$, $0 \cdot (\pm \i) = 0$ and $1 / (\pm\i) =0$. {\rm (ii)} We put $$ p' : = \left\{\begin{array}{cl} \frac p{1-p} & \text{if} \quad 0<p<1,\\ \infty &\text{if}\quad p=1, \\ \frac p{p-1} &\text{if}\quad 1<p<\infty,\\ 1 &\text{if}\quad p=\infty. \end{array} \right. $$ {\rm (iii)} If $I = (a,b) \subseteq \R$ and $g$ is a monotone function on $I$, then by $g(a)$ and $g(b)$ we mean the limits $\lim_{x\rw a+}g(x)$ and $\lim_{x\rw b-}g(x)$, respectively. \end{conv} To state our results we use the notation $p \rw q$ for $0 < p,\,q \le \infty$ defined by $$ \frac{1}{p \rw q} = \frac{1}{q} - \frac{1}{p} \qq \mbox{if} \qq q < p, $$ and $p \rw q = \infty$ if $q \ge p$ (see, for instance, \cite[p. 30]{grosse}). Throughout the paper, we always denote by $c$ and $C$ a positive constant which is independent of the main parameters but may vary from line to line. However, a constant with a subscript or superscript, such as $c_1$, does not change in different occurrences. By $a\lesssim b$ ($b\gtrsim a$) we mean that $a\leq \la b$, where $\la>0$ depends on inessential parameters. If $a\lesssim b$ and $b\lesssim a$, we write $a\approx b$ and say that $a$ and $b$ are equivalent. We will denote by $\bf 1$ the function ${\bf 1}(x) = 1$, $x \in \R$. Given two quasi-normed vector spaces $X$ and $Y$, we write $X=Y$ if $X$ and $Y$ are equal in the algebraic and the topological sense (their quasi-norms are equivalent). The symbol $X\hookrightarrow Y$ ($Y \hookleftarrow X$) means that $X\subset Y$ and the natural embedding $\Id$ of $X$ in $Y$ is continuous, that is, there exists a constant $c > 0$ such that $\|z\|_Y \le c\|z\|_X$ for all $z\in X$. The best constant of the embedding $X\hookrightarrow Y$ is $\|\Id\|_{X \rw Y}$.
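Since the duality arguments employed later rely on it, let us also record the classical fact that the operators $P$ and $Q$ from the beginning of this section are formally adjoint to each other: by Fubini's theorem, for all $f,\, g \in \mp^+ \I$,

```latex
\int_0^{\infty} (Pf)(x)\, g(x)\,dx
  = \int_0^{\infty} \frac{g(x)}{x} \int_0^x f(t)\,dt \,dx
  = \int_0^{\infty} f(t) \int_t^{\infty} \frac{g(x)}{x}\,dx \,dt
  = \int_0^{\infty} f(t)\, (Qg)(t)\,dt .
```

This identity lies behind the close relationship between Ces\`{a}ro and Copson spaces discussed below.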
The weighted Ces\`{a}ro and Copson function spaces are defined as follows: \begin{defi}\label{defi.2.1} Let $0 <p, q \le \infty$, $u \in \mp^+ \I$ and $v\in \W\I$. The weighted Ces\`{a}ro and Copson spaces are defined by \begin{align*} \ces_{p,q} (u,v) : & = \bigg\{ f \in \mp^+ \I: \|f\|_{\ces_{p,q} (u,v)} : = \big\| \|f\|_{p,v,(0,\cdot)} \big\|_{q,u,\I} < \i \bigg\}, \\ \intertext{and} \cop_{p,q} (u,v) : & = \bigg\{ f \in \mp^+ \I: \|f\|_{\cop_{p,q} (u,v)} : = \big\| \|f\|_{p,v,(\cdot,\i)} \big\|_{q,u,\I} < \i \bigg\}, \end{align*} respectively. \end{defi} Many function spaces from the literature, in particular from Harmonic Analysis, are covered by the spaces $\ces_{p,q}(u,v)$ and $\cop_{p,q}(u,v)$. Let us only mention the Beurling algebras $A^p$ and $A^*$, see \cite{gil1970,johnson1974,belliftrig}. Note that the function spaces $C$ and $D$ defined by Grosse-Erdmann in \cite{grosse} are related to our definition in the following way: $$ \ces_{p,q}(u,v) = C(p,q,u)_v \qq \mbox{and} \qq \cop_{p,q}(u,v) = D(p,q,u)_v. $$ We use the notations $\ces_p(u) : = \ces_{1,p}(u,{\bf 1})$ and $\cop_p(u) : = \cop_{1,p}(u,{\bf 1})$. Obviously, $\ces(p) = \ces_p (x^{-1})$ and $\cop(p) = \cop_p (x^{-1})$. In \cite{kamkub}, Kami{\'n}ska and Kubiak computed the dual norm of the Ces\`{a}ro function space $\ces_{p}(u)$ for $1 < p < \infty$ and an arbitrary positive weight $u$. The description presented in \cite{kamkub} resembles the approach of Jagers \cite{jagers} for sequence spaces. Our principal goal in this paper is to investigate the embeddings between weighted Copson and Ces\`{a}ro function spaces, that is, the embeddings \begin{align} \cop_{p_1,q_1}(u_1,v_1) & \hra \ces_{p_2,q_2}(u_2,v_2), \label{mainemb1}\\ \ces_{p_1,q_1}(u_1,v_1) & \hra \cop_{p_2,q_2}(u_2,v_2). \label{mainemb2} \end{align} This is a very difficult and technically complicated task.
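To make Definition~\ref{defi.2.1} concrete, the following numerical sketch (ours, with an ad hoc truncation of $(0,\infty)$ and invented names) approximates $\|f\|_{\ces_{p,q}(u,v)}$ by nested trapezoid sums:

```python
import numpy as np

def cesaro_norm(f, u, v, p, q, a=1e-6, b=1000.0, n=1_000_001):
    """Approximate ||f||_{ces_{p,q}(u,v)} = || t -> ||f||_{p,v,(0,t)} ||_{q,u,(0,inf)},
    truncating (0, inf) to (a, b) and using trapezoid sums on a uniform grid."""
    x = np.linspace(a, b, n)
    dx = x[1] - x[0]
    inner = (f(x) * v(x)) ** p
    # cumulative trapezoid sums approximating int_a^{x_j} (f v)^p
    cum = np.concatenate(([0.0], np.cumsum(dx * (inner[1:] + inner[:-1]) / 2.0)))
    g = cum ** (1.0 / p) * u(x)  # integrand of the outer norm, weighted by u
    gq = g ** q
    return (dx * (gq[0] / 2 + gq[1:-1].sum() + gq[-1] / 2)) ** (1.0 / q)
```

For the classical space $\ces(2) = \ces_{1,2}(x^{-1},{\bf 1})$ and $f(x)=e^{-x}$ one can check by hand (integration by parts and the Frullani integral) that $\|f\|_{\ces(2)}^2 = \int_0^{\infty} \big((1-e^{-x})/x\big)^2\,dx = 2\ln 2$; moreover, with $p=q$ the routine reproduces, up to quadrature error, the weighted Lebesgue norm from Lemma~\ref{Cespp} below.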
We develop an approach consisting of a duality argument combined with estimates of the optimal constants of the embeddings between weighted Ces\`{a}ro and Copson spaces and weighted Lebesgue spaces, that is, \begin{align} L_s(w) & \hra \ces_{p,q}(u,v), \label{emb1}\\ L_s(w) & \hra \cop_{p,q}(u,v), \label{emb2}\\ L_s(w) & \hookleftarrow \ces_{p,q}(u,v), \label{emb3}\\ L_s(w) & \hookleftarrow \cop_{p,q}(u,v), \label{emb4} \end{align} which reduces the problem to the solution of the iterated Hardy-type inequalities \eqref{eq.4.11}. In order to characterize embeddings \eqref{emb1} - \eqref{emb4}, we use direct and reverse Hardy-type inequalities. Note that embeddings \eqref{mainemb1} - \eqref{mainemb2} contain embeddings \eqref{emb1} - \eqref{emb4} as special cases. Indeed, for instance, if $p = q$ and $v(x) = w(x) / \|u\|_{p,(x,\infty)}$, then $\ces_{p,q}(u,v) = L_p(w)$. Similarly, if $p = q$ and $v(x) = w(x) / \|u\|_{p,(0,x)}$, then $\cop_{p,q}(u,v) = L_p(w)$. Moreover, by the change of variables $x = {1} / {t}$ it is easy to see that \eqref{mainemb2} is equivalent to the embedding $$ \cop_{p_1,q_1}(\tilde{u}_1,\tilde{v}_1) \hra \ces_{p_2,q_2}(\tilde{u}_2,\tilde{v}_2), $$ where $\tilde{u}_i (t) = t^{- {2} / {q_i}}u_i\big({1} / {t}\big)$, $\tilde{v}_i (t) = t^{- {2} / {p_i}} v_i\big({1} / {t}\big)$, $i=1,2$, $t > 0$. This observation allows us to concentrate on the characterization of \eqref{mainemb1}. On the negative side, we have to admit that the duality approach works only in the case when, in \eqref{mainemb1} - \eqref{mainemb2}, one has $p_2 \le q_2$. Unfortunately, in the case when $p_2 > q_2$ the characterization of these embeddings remains open. It should be noted that none of the above would exist if it were not for the now classical characterizations of the weights for which the Hardy inequality holds.
This subject, which is by now one hundred years old, is indispensable in this part of mathematics. In our proofs below such results will be used heavily, as well as the more recent characterizations of the weighted reverse inequalities (cf. \cite{ego2008} and \cite{mu2015}). It is mentioned in \cite[p. 30]{grosse} that multipliers between Ces\`{a}ro and Copson spaces are more difficult to treat. It is worth mentioning that, by using the characterizations of \eqref{mainemb1} - \eqref{mainemb2}, it is possible to solve the multiplier problem between weighted Ces\`{a}ro and Copson function spaces; we are going to present this in a future paper. In particular, we obtain two-sided estimates of the optimal constant $c$ in the inequality \begin{equation}\label{emb.as.ineq.} \bigg( \int_0^{\infty} \bigg( \int_0^t f(\tau)^{p_2}v_2(\tau)\,d\tau\bigg)^{\frac{q_2}{p_2}} u_2(t)\,dt\bigg)^{\frac{1}{q_2}} \le c \bigg( \int_0^{\infty} \bigg( \int_t^{\infty} f(\tau)^{p_1} v_1(\tau)\,d\tau\bigg)^{\frac{q_1}{p_1}} u_1(t)\,dt\bigg)^{\frac{1}{q_1}}, \end{equation} where $p_1,\,p_2,\,q_1,\,q_2 \in (0,\infty)$, $p_2 \le \min \{p_1,\, q_2\}$ and $u_1,\,u_2,\,v_1,\,v_2$ are weights on $\I$ (it is shown in Lemma \ref{triviality} that inequality \eqref{emb.as.ineq.} holds true only for trivial functions $f$ when $p_1 < p_2$, for any $q_1,\,q_2 \in (0,\infty]$). The most innovative feature is that possibly different parameters $p_1$ and $p_2$ and possibly different inner weights $v_1$ and $v_2$ are allowed. Note that \eqref{emb.as.ineq.} was characterized in the particular cases when $p_1 = p_2 = 1$, $q_1 = q_2 = p > 1$, $u_1(t) = t^{\b p - 1}$, $u_2(t) = t^{-\a p - 1}$, $v_1(t) = t^{-\b - 1}$, $v_2(t) = t^{\a - 1}$, $t > 0$, where $\a > 0$ and $\b > 0$, in \cite[p.
61]{boas1970}, and, when $p_1 = p_2 = 1$, $q_1 = p$, $q_2 = q$, $u_1(t) = v(t)$, $u_2(t) = t^{-q}w(t)$, $v_1(t) = t^{-1}$, $v_2 (t) = 1$, $t > 0$, where $0 < p \le \infty$, $1 \le q \le \infty$ and $v,\,w$ are weight functions on $\I$, in \cite[Theorem 2.3]{cargogmarpick}, respectively. The paper is organized as follows. We start with notation and preliminary results in Section~\ref{s.2}. In Section \ref{s3} we recall characterizations of direct and reverse weighted Hardy-type inequalities. Solutions of embeddings \eqref{emb1} - \eqref{emb2} and \eqref{emb3} - \eqref{emb4} are given in Sections \ref{s4} and \ref{s5}, respectively. In Section \ref{s6} we recall characterizations of weighted iterated Hardy-type inequalities. The characterization of the embeddings between weighted Ces\`{a}ro and Copson spaces is obtained in Section \ref{s7}. \section{Notations and preliminaries}\label{s.2} Let $A,\,B$ be some sets and $\vp,\,\psi$ be non-negative functions defined on $A \times B$ (it may happen that $\vp (\a,\b)= \i$ or $\psi (\a,\b) = \i$ for some $\a \in A$, $\b \in B$). We say that $\vp$ is dominated by $\psi$ (or $\psi$ dominates $\vp$) on $A \times B$ uniformly in $\a \in A$ and write $$ \vp (\a,\b) \ls \psi (\a,\b) \quad \mbox{uniformly in} \quad \a \in A, $$ or $$ \psi (\a,\b) \gs \vp (\a,\b) \quad \mbox{uniformly in} \quad \a \in A, $$ if for each $\b \in B$ there exists $C(\b) > 0$ such that $$ \vp (\a,\b) \le C(\b) \psi (\a,\b) $$ for all $\a \in A$. We also say that $\vp$ is equivalent to $\psi$ on $A \times B$ uniformly in $\a \in A$ and write $$ \vp (\a,\b) \ap \psi (\a,\b) \quad \mbox{uniformly in} \quad \a \in A, $$ if $\vp$ and $\psi$ dominate each other on $A \times B$ uniformly in $\a \in A$. We need the following auxiliary results. \begin{lem}\label{NontrivialCesCopLemma} Let $0<p,q\le \i$, $v\in \W\I$ and let $u\in \M^+\I$. Then $\ces_{p,q} (u,v)$ and $\cop_{p,q} (u,v)$ are non-trivial, i.e.
do not consist only of functions equivalent to $0$ on $\I$, if and only if \begin{equation*} \|u\|_{q,(t,\i)} < \i, \qq \mbox{for some} \qq t >0, \end{equation*} and \begin{equation*} \|u\|_{q,(0,t)} < \i,\qq \mbox{for some} \qq t >0, \end{equation*} respectively. \end{lem} \begin{proof} \textit{Necessity.} Let $u\in \M^+\I$ be such that $\|u\|_{q,(t,\i)}=\i$ for all $t>0$. Assume that $f$ is not equivalent to $0$ on $\I$. Then $\|f\|_{p,v,(0,t_0)}>0$ for some $t_0>0$, and \begin{align*} \|f\|_{\ces_{p,q}(u,v)} \geq \big\| \|f\|_{p,v,(0,\cdot)} \big\|_{q,u,(t_0,\i)} \geq \|{\bf{1}}\|_{q,u,(t_0,\i)} \|f\|_{p,v,(0,t_0)} = \|u\|_{q,(t_0,\i)} \|f\|_{p,v,(0,t_0)}. \end{align*} Hence $\|f\|_{\ces_{p,q}(u,v)}=\i$. Consequently, if $ \|f\|_{\ces_{p,q}(u,v)} < \i $, then $f = 0$ a.e., that is, $\ces_{p,q}(u,v)=\{0\}$. \textit{Sufficiency.} Assume that $\|u\|_{q,(t,\i)}<\i$ for some $t>0$. If $f\in L_p(v)$ is such that $\supp f \subset (\tau,\infty)$ for some $\tau \ge t$, then $f \in \ces_{p,q}(u,v)$. Indeed: \begin{align*} \|f\|_{\ces_{p,q}(u,v)} = \big\| \|f\|_{p,v,(0,\cdot)} \big\|_{q,u,(\tau, \i)} \leq \|{\bf{1}}\|_{q,u,(\tau, \i)} \|f\|_{p,v,(0,\i)} = \|u\|_{q,(\tau, \i)} \|f\|_{p,v,(0,\i)} < \i. \end{align*} The same conclusion can be deduced for the Copson spaces. \end{proof} \begin{lem}\label{shrinkagelemma} If $\|u\|_{q,(t_1,\infty)}=\infty$ for some $t_1 > 0$, then $$ f\in \ces_{p,q}(u,v) \Rightarrow f = 0 \quad \mbox{a.e. on} \quad (0,t_1). $$ If $\|u\|_{q,(0,t_2)}=\infty$ for some $t_2 > 0$, then $$ f\in \cop_{p,q}(u,v) \Rightarrow f = 0 \quad \mbox{a.e. on} \quad (t_2,\infty). $$ \end{lem} \begin{proof} Assume that $\|u\|_{q,(t_1,\infty)}=\infty$ for some $t_1 > 0$ and let $f\in \ces_{p,q}(u,v)$. Then \begin{align*} \|f\|_{\ces_{p,q}(u,v)} \geq \big\| \|f\|_{p,v,(0,\cdot)} \big\|_{q,u,(t_1,\infty)} \geq \|u\|_{q,(t_1,\infty)} \|f\|_{p,v,(0,t_1)}. \end{align*} Therefore, $\|f\|_{p,v,(0,t_1)}=0$. Hence, $f = 0$ a.e. on $(0,t_1)$.
Assume now that $\|u\|_{q,(0,t_2)}=\infty$ for some $t_2 > 0$ and let $f\in \cop_{p,q}(u,v)$. Then \begin{align*} \|f\|_{\cop_{p,q}(u,v)} \geq \big\| \|f\|_{p,v,(\cdot,\infty)} \big\|_{q,u,(0,t_2)} \geq \|u\|_{q,(0,t_2)} \|f\|_{p,v,(t_2,\infty)}. \end{align*} Consequently, $\|f\|_{p,v,(t_2,\infty)}=0$. This yields that $f = 0$ a.e. on $(t_2,\infty)$. \end{proof} \begin{rem} In view of Lemmas~\ref{NontrivialCesCopLemma} and \ref{shrinkagelemma}, it is enough to take $u\in \M^+(0,\infty)$ such that $\|u\|_{q,(t,\infty)}<\infty$ for all $t>0$, when considering $\ces_{p,q}(u,v)$ spaces. Similarly, it is enough to take $u\in \M^+(0,\infty)$ such that $\|u\|_{q,(0,t)}<\infty$ for all $t>0$, when considering $\cop_{p,q}(u,v)$ spaces. \end{rem} \begin{defi} Let $0<q\leq \i$. We denote by $\O_q$ the set of all functions $u \in \mp^+ \I$ such that $$ 0<\|u\|_{q,(t,\i)}<\i,~~ t>0, $$ and by $\dual{\O}_q$ the set of all functions $u \in \mp^+ \I$ such that $$ 0<\|u\|_{q,(0,t)}<\i,~~ t>0. $$ \end{defi} Let $v \in \W\I$. It is easy to see that $\ces_{p,q} (u,v)$ and $\cop_{p,q} (u,v)$ are quasi-normed vector spaces when $u \in \O_q$ and $u \in \dual{\O}_q$, respectively. Note that $\ces_{p,p}(u,v)$ and $\cop_{p,p}(u,v)$ coincide with some weighted Lebesgue spaces. \begin{lem}\label{Cespp} Let $0<p\le \i$, $u\in \O_p$ and $v\in \W\I$. Then $\ces_{p,p}(u,v)=L_p(w)$, where \begin{equation}\label{WeightCesaro} w(x) := v(x)\|u\|_{p,(x,\i)}, ~ x > 0. \end{equation} \end{lem} \begin{proof} Assume first that $p<\i$. Applying Fubini's Theorem, we have \begin{align*} \|f\|_{\ces_{p,p}(u,v)}&=\bigg(\int_0^\i u^p(t) \int_0^t f(\tau)^p v(\tau)^p \,d\tau \,dt\bigg)^{\frac{1}{p}}\\ &= \bigg(\int_0^\i f(\tau)^p v(\tau)^p \int_{\tau}^\i u(t)^p \, dt\, d\tau\bigg)^{\frac{1}{p}}\\ &=\|f\|_{p,w,\I}, \end{align*} where $w$ is defined by \eqref{WeightCesaro}. 
If $p=\i$, by exchanging suprema, we have \begin{align*} \|f\|_{\ces_{\i,\i}(u,v)}&=\esup_{t\in \I} u(t) \esup_{\tau\in(0,t)} f(\tau)v(\tau)\\ &=\esup_{t\in \I} f(t)v(t) \esup_{\tau \in (t,\i)} u(\tau)\\ &= \|f\|_{\i,w,\I}. \end{align*} \end{proof} \begin{lem}\label{Coppp} Let $0<p\le \i$, $u\in \dual{\O_p}$ and $v\in \W\I$. Then $\cop_{p,p}(u,v)=L_p(w)$, where \begin{equation}\label{WeightCopson} w(x) := v(x)\|u\|_{p,(0,x)}, ~ x > 0. \end{equation} \end{lem} \begin{proof} This follows by the same method as in Lemma~\ref{Cespp}. \end{proof} \section{Some Hardy-type inequalities}\label{Hardy Type Inequalities}\label{s3} In this section we recall characterizations of direct and reverse weighted Hardy-type inequalities. Denote by $$ (Hf) (t) : = \int_0^t f(x)\,dx, \qq (H^*f) (t) : = \int_t^{\i} f(x)\,dx, \qq f \in \mp^+\I,\qq t \ge 0. $$ The well-known two-weight Hardy-type inequalities \begin{equation}\label{eq.2.2} \|Hf\|_{q,w,\I}\leq c \|f\|_{p,v,\I} \end{equation} and \begin{equation}\label{eq.2.200} \|H^*f\|_{q,w,\I}\leq c \|f\|_{p,v,\I} \end{equation} for all non-negative measurable functions $f$ on $(0,\infty)$, where $0 < p,\,q \le \infty$ and $c$ is a constant independent of $f$, have a broad variety of applications and represent nowadays a basic tool in many parts of mathematical analysis, in particular in the study of weighted function inequalities. For the results, history and applications of this problem, see \cites{ok,kp, kufmalpers}. \begin{thm}\label{HardyIneq} Let $1 \le p \le \i$, $0 < q \le \i$, $v,\,w \in \mp^+\I$.
Then inequality \eqref{eq.2.2} holds for all $f \in \mp^+\I$ if and only if $A(p,q) < \i$, and the best constant in \eqref{eq.2.2}, that is, \begin{equation*} B(p,q) : = \sup_{f\in \mp^+\I} \|Hf\|_{q,w,\I} / \|f\|_{p,v,\I} \end{equation*} satisfies $B(p,q) \ap A(p,q)$, where {\rm (i)} for $p \le q$, $$ A(p,q): = \sup_{t \in \I} \big\|v^{-1}\big\|_{p',(0,t)} \|w\|_{q,(t,\i)} \,; $$ {\rm (ii)} for $q<p$ and $\frac{1}{r} = \frac{1}{q} - \frac{1}{p}$, $$ A(p,q) : = \bigg(\int_0^{\infty} \big\|v^{-1}\big\|_{p',(0,t)}^{r} d \bigg(- \|w\|_{q,(t,\i)}^r\bigg)\bigg)^{\frac{1}{r}}. $$ \end{thm} \begin{thm}\label{CopsonIneq} Let $1 \le p \le \i$, $0 < q \le \i$, $v,\,w \in \mp^+\I$. Then inequality \eqref{eq.2.200} holds for all $f \in \mp^+\I$ if and only if $A^*(p,q) < \i$, and the best constant in \eqref{eq.2.200}, that is, \begin{equation*} B^*(p,q) : = \sup_{f\in \mp^+\I} \|H^*f\|_{q,w,\I} / \|f\|_{p,v,\I} \end{equation*} satisfies $B^*(p,q) \ap A^*(p,q)$. Here {\rm (i)} for $p \le q$, $$ A^*(p,q): = \sup_{t \in \I} \big\|v^{-1}\big\|_{p',(t,\infty)} \|w\|_{q,(0,t)} \,; $$ {\rm (ii)} for $q < p$ and $\frac{1}{r} = \frac{1}{q} - \frac{1}{p}$, $$ A^*(p,q) : = \bigg(\int_0^{\infty} \big\|v^{-1}\big\|_{p',(t,\infty)}^{r} d \bigg(\|w\|_{q,(0,t)}^r\bigg)\bigg)^{\frac{1}{r}}. $$ \end{thm} \begin{thm}\label{HardyforSupremal} Let $0 < q < \i$, $v,\,w \in \mp^+\I$. Denote by $$ (Sf) (t) : = \esup_{x \in (0,t)} f(x), \qq f \in \mp^+\I,\qq t \ge 0. $$ Then the inequality \begin{equation*} \|Sf\|_{q,w,\I}\leq c \|f\|_{\i,v,\I} \end{equation*} holds for all $f \in \mp^+\I$ if and only if $$ \bigg(\int_0^{\infty} \big\|v^{-1}\big\|_{\infty,(0,t)}^q d \bigg(- \|w\|_{q,(t,\i)}^q\bigg)\bigg)^{\frac{1}{q}} <\i, $$ and \begin{equation*} \sup_{f\in \mp^+\I} \|Sf\|_{q,w,\I} / \|f\|_{\i,v,\I} \ap \bigg(\int_0^{\infty} \big\|v^{-1}\big\|_{\infty,(0,t)}^q d \bigg(- \|w\|_{q,(t,\i)}^q\bigg)\bigg)^{\frac{1}{q}}. 
\end{equation*} \end{thm} \begin{thm}\label{CopsonforSupremal} Let $0 < q < \i$, $v,\,w \in \mp^+\I$. Denote by $$ (S^*f) (t) : = \esup_{x \in (t,\i)} f(x), \qq f \in \mp^+\I,\qq t \ge 0. $$ Then the inequality \begin{equation*} \|S^*f\|_{q,w,\I}\leq c \|f\|_{\i,v,\I} \end{equation*} holds for all $f \in \mp^+\I$ if and only if $$ \bigg(\int_0^{\infty} \big\|v^{-1}\big\|_{\infty,(t,\infty)}^q d \bigg(\|w\|_{q,(0,t)}^q\bigg)\bigg)^{\frac{1}{q}} < \i, $$ and \begin{equation*} \sup_{f\in \mp^+\I} \|S^*f\|_{q,w,\I} / \|f\|_{\i,v,\I} \ap \bigg(\int_0^{\infty} \big\|v^{-1}\big\|_{\infty,(t,\infty)}^q d \bigg(\|w\|_{q,(0,t)}^q\bigg)\bigg)^{\frac{1}{q}}. \end{equation*} \end{thm} For the convenience of the reader we repeat the relevant material from \cite{ego2008} and \cite{mu2015} without proofs, thus making our exposition self-contained. Let $\vp$ be a non-decreasing and finite function on the interval $I : = (a,b)\subseteq \R$. We assign to $\vp$ the function $\la$ defined on subintervals of $I$ by \begin{align} \la ([y,z]) & = \vp(z+) - \vp(y-), \notag\\ \la ([y,z)) & = \vp(z-) - \vp(y-), \label{Notat.and.prelim.eq.1.4}\\ \la ((y,z]) & = \vp(z+) - \vp(y+), \notag\\ \la ((y,z)) & = \vp(z-) - \vp(y+). \notag \end{align} The function $\la$ is a non-negative, additive and regular function of intervals. Thus (cf. \cite{r}, Chapter 10), it admits a unique extension to a non-negative Borel measure $\la$ on $I$. Note also that the associated Borel measure can be determined simply by putting $$ \la ([y,z]) = \vp(z+) - \vp(y-) \qq \mbox{for any}\qq [y,z]\subset I $$ (since the Borel subsets of $I$ can be generated by subintervals $[y,z]\subset I$). If $J\subseteq I$, then the Lebesgue-Stieltjes integral $\int_J f\,d\vp$ is defined as $\int_J f\,d\la$. We shall also use the Lebesgue-Stieltjes integral $\int_J f\,d\vp$ when $\vp$ is non-increasing and finite on the interval $I$. In such a case we put $$ \int_J f\,d\vp : = - \int_J f\,d(-\vp).
$$ We adopt the following conventions. \begin{conv} \label{conv:3.3} Let $I=(a,b)\subseteq \R$, $f:I\to [0,\infty]$ and $h:I\to [0,\i]$. Assume that $h$ is non-decreasing and left-continuous on $I$. If $h:I\to [0,\infty)$, then the symbol $\int_{I}f\,dh$ means the usual Lebesgue-Stieltjes integral (with the measure $\la$ associated to $h$ given by $\la([\a,\b))= h(\b)-h(\a)$ if $[\a, \b)\subset (a,b)$ -- cf. \eqref{Notat.and.prelim.eq.1.4}). However, if $h = \infty$ on some subinterval $(c,b)$ with $c\in I$, then we define $\int_{I}f\,dh$ only if $f=0$ on $[c,b)$ and we put $$ \int_{I}f\,dh=\int_{(a,c)}f\,dh. $$ \end{conv} \begin{conv} Let $I=(a,b)\subseteq \R$, $f:I\to [0,+\infty]$ and $h:I\to [-\infty,0]$. Assume that $h$ is non-decreasing and right-continuous on $I$. If $h:I\to (-\infty,0]$, then the symbol $\int_{I}f\,dh$ means the usual Lebesgue-Stieltjes integral. However, if $h= -\infty$ on some subinterval $(a,c)$ with $c\in I$, then we define $\int_{I}f\,dh$ only if $f=0$ on $(a,c]$ and we put $$ \int_{I}f\,dh=\int_{(c,b)}f\,dh. $$ \end{conv} \begin{thm}\cite[Theorems 5.1 and 5.4]{ego2008}\label{ReverseHardyIneq} Let $w \in \mp^+\I$ and $u \in \mp^+\I$ be such that $\|u\|_{q,(t,\i)} <\infty$ for all $t\in (0,\i)$. {\rm (i)} Assume that $0< q\le p\le1$. Then \begin{equation} \label{5.1} \Vert g\Vert _{p,w,\I}\leq c \Vert Hg\Vert _{q,u,\I} \end{equation} holds for all $g \in \mp^+\I$ if and only if \begin{equation} \label{5.2} C(p,q) : =\sup _{t\in (0,\i)}\| w \|_{{p'},(t,\i)} \|u\|_{q,(t,\i)}^{-1} <\infty. \end{equation} The best possible constant in \eqref{5.1}, that is, \begin{equation*} D(p,q) : = \sup _{g \in \mp^+\I}\| g \|_{p,w,\I} / \| Hg\|_{q,u,\I} \end{equation*} satisfies $D(p,q)\approx C(p,q)$. {\rm (ii)} Let $0< p\le1$, $p<q\le\infty$ and $\frac{1}{r} = \frac{1}{p} - \frac{1}{q}$.
Then \eqref{5.1} holds if and only if $$ C(p,q) : =\bigg( \int_{\I} \| w \|_{{p'},(t,\i)}^{r}\,d\bigg(\|u\|_{q,(t-,\i) }^{-r}\bigg) \bigg) ^{\frac{1}{r}}+\frac{\|w \|_{{p'},\I}} {\|u\|_{q,\I}}<\infty, $$ and $D(p,q)\approx C(p,q)$. \end{thm} \begin{thm}\cite[Theorems 4.1 and 4.4]{ego2008}\label{ReverseCopsonIneq} Let $w \in \mp^+\I$ and $u \in \mp^+\I$ be such that $\|u\|_{q,(0,t)} <\infty$ for all $t\in (0,\i)$. {\rm (i)} Assume that $0< q\le p\le1$. Then \begin{equation} \label{5.100} \Vert g \Vert _{p,w,\I}\leq c \Vert H^*g\Vert_{q,u,(0,\i)} \end{equation} holds for all $g \in \mp^+\I$ if and only if \begin{equation} \label{5.200} C^*(p,q) : =\sup _{t\in (0,\i)}\| w \|_{{p'},(0,t)} \|u\|_{q,(0,t)}^{-1} <\infty . \end{equation} The best possible constant in \eqref{5.100}, that is, \begin{equation*} D^*(p,q) : = \sup _{g \in \mp^+\I}\| g \|_{p,w,\I} / \|H^*g\|_{q,u,\I} \end{equation*} satisfies $D^*(p,q)\approx C^*(p,q)$. {\rm (ii)} Let $0< p\le1$, $p<q\le\infty$ and $\frac{1}{r} = \frac{1}{p} - \frac{1}{q}$. Then \eqref{5.100} holds if and only if $$ C^*(p,q) : =\bigg( \int_{\I} \| w \|_{p',(0,t)}^{r}\,d\bigg(-\|u\|_{q,(0,t+) }^{-r}\bigg) \bigg) ^{\frac{1}{r}}+\frac{\|w \|_{{p'},\I}} {\|u\|_{q,\I}}<\infty, $$ and $D^*(p,q)\approx C^*(p,q)$. \end{thm} \begin{rem} \label{R:5.5} Let $q<\infty$ in Theorems~\ref{ReverseHardyIneq} and \ref{ReverseCopsonIneq}. Then $$ \| u\|_{q,(t-,\i)}=\| u\|_{q,(t,\i)}\quad \mbox{and} \qq \|u\|_{q,(0,t+) } = \|u\|_{q,(0,t)} \qq\text{for all} \quad t\in \I, $$ which implies that $$ C(p,q) = \bigg( \int_{(0,\infty)} \Vert w \Vert _{{p'},(t,\i)}^{r} \,d\bigg(\Vert u\Vert _{q,(t,\i)}^{-r}\bigg) \bigg) ^{\frac1{r}}+\frac{\Vert w \Vert _{{p'},\I}} {\Vert u\Vert_{q,\I}}, $$ and $$ C^*(p,q) = \bigg( \int_{\I} \| w \|_{p',(0,t)}^{r}\,d\bigg(-\|u\|_{q,(0,t) }^{-r}\bigg) \bigg) ^{\frac{1}{r}} +\frac{\|w\|_{p',\I}} {\|u\|_{q,\I}}.
$$ \end{rem} \begin{thm}\cite[Theorem 4.1]{mu2015}\label{ReverseHardySupremal} Let $w \in \mp^+\I$ and $u \in \mp^+\I$ be such that $\|u\|_{q,(t,\i)} <\infty$ for all $t\in (0,\i)$. {\rm (i)} Assume that $0< q\le p\le \i$. Then \begin{equation} \label{9.1} \Vert g \Vert _{p,w,\I}\leq c \Vert Sg\Vert_{q,u,(0,\i)} \end{equation} holds for all $g \in \mp^+\I$ if and only if \begin{equation} \label{9.2} E(p,q) : = \sup_{t \in \I} \| w \|_{{p},(t,\i)} \|u\|_{q,(t,\i)}^{-1} <\infty . \end{equation} The best possible constant in \eqref{9.1}, that is, \begin{equation*} F(p,q) : = \sup _{g \in \mp^+\I}\| g \|_{p,w,\I} / \| Sg\|_{q,u,\I} \end{equation*} satisfies $F(p,q)\approx E(p,q)$.\\ {\rm (ii)} Let $0 < p < q \le +\i$ and $\frac{1}{r} = \frac{1}{p} - \frac{1}{q}$. Then \eqref{9.1} holds if and only if $$ E(p,q) : =\bigg( \int_{\I} \| w \|_{{p},(t,\i)}^{r} \, d\bigg(\|u\|_{q,(t,\i)}^{-r} \bigg) \bigg)^{\frac{1}{r}} +\frac{\|w\|_{{p},\I}} {\|u\|_{q,\I}}<\infty, $$ and $F(p,q)\approx E(p,q)$. \end{thm} \begin{thm}\cite[Theorem~3.4]{mu2015}\label{ReverseCopsonSupremal} Let $w \in \mp^+\I$ and $u \in \mp^+\I$ be such that $\|u\|_{q,(0,t)} <\infty$ for all $t\in (0,\i)$. {\rm (i)} Assume that $0< q\le p\le \i$. Then \begin{equation} \label{9.100} \Vert g \Vert _{p,w,\I}\leq c \Vert S^*g\Vert_{q,u,(0,\i)} \end{equation} holds for all $g \in \mp^+\I$ if and only if \begin{equation} \label{9.200} E^*(p,q) : =\sup _{t\in (0,\i)}\| w \|_{{p},(0,t)} \|u\|_{q,(0,t)}^{-1} <\infty . \end{equation} The best possible constant in \eqref{9.100}, that is, \begin{equation*} F^*(p,q) : = \sup _{g \in \mp^+\I}\| g \|_{p,w,\I} / \|S^*g\|_{q,u,\I} \end{equation*} satisfies $F^*(p,q)\approx E^*(p,q)$. {\rm (ii)} Let $0 < p < q \le +\i$ and $\frac{1}{r} = \frac{1}{p} - \frac{1}{q}$. 
Then \eqref{9.100} holds if and only if $$ E^*(p,q) : =\bigg( \int_{\I} \| w \|_{p,(0,t)}^{r}\,d\bigg(-\|u\|_{q,(0,t) }^{-r}\bigg) \bigg) ^{\frac{1}{r}} +\frac{\|w\|_{p,\I}} {\|u\|_{q,\I}}<\infty, $$ and $F^*(p,q)\approx E^*(p,q)$. \end{thm} \section{Characterizations of $L_{p_1}(v_1) \hra \ces_{p_2,q}(u,v_2)$ and $L_{p_1}(v_1) \hra \cop_{p_2,q}(u,v_2)$}\label{s4} In this section we characterize \eqref{emb1} and \eqref{emb2}. The following theorem is true. \begin{thm}\label{Emb-Lp-Ces} Let $0 < p_2 \le p_1 \le \i$, $0 < q \le \i$, $v_1,\,v_2 \in \W\I$ and $u \in \O_q$. {\rm (i)} If $p_1 \le q$, then \begin{equation*} \|\Id\|_{L_{p_1}(v_1) \rw \ces_{p_2,q}(u,v_2)} \ap \sup_{t \in \I} \big\| v_1^{-1} v_2 \big\|_{p_1 \rw p_2, (0,t)}\|u\|_{q,(t,\i)} \end{equation*} uniformly in $u \in \O_q$. {\rm (ii)} If $q < p_1$, then \begin{equation*} \|\Id\|_{L_{p_1}(v_1) \rw \ces_{p_2,q}(u,v_2)} \ap \bigg( \int_{(0,\infty)} \big\| v_1^{-1} v_2 \big\|_{p_1 \rw p_2,(0,t)}^{p_1 \rw q} \,d \bigg( - \|u\|_{q,(t,\i)}^{p_1 \rw q}\bigg) \bigg)^{\frac{1}{p_1 \rw q}} \end{equation*} uniformly in $u \in \O_q$. \end{thm} \begin{proof} Let $p_2 < \infty$. Since \begin{align*} \|\Id\|_{L_{p_1}(v_1) \rw \ces_{p_2,q}(u,v_2)} = \sup_{f \in \M^+ \I} \frac{\big\| \|f\|_{p_2,v_2,(0,\cdot)}\big\|_{q,u,(0,\infty)}}{\|f\|_{p_1,v_1,\I}} = \left(\sup_{g \in \M^+ \I} \frac{\| H (|g|)\|_{\frac{q}{p_2},u^{p_2},\I}}{\|g\|_{\frac{p_1}{p_2},[v_1v_2^{-1}]^{p_2},\I}}\right)^{\frac{1}{p_2}}, \end{align*} it remains to apply Theorem \ref{HardyIneq}. If $p_2 = \infty$, then \begin{align*} \|\Id\|_{L_{\infty}(v_1) \rw \ces_{\i,q}(u,v_2)} = \sup_{f \in \M^+ \I} \frac{\big\| \|f\|_{\i,v_2,(0,\cdot)}\big\|_{q,u,(0,\infty)}}{\|f\|_{\i,v_1,\I}} = \sup_{g \in \M^+ \I} \frac{\| (S(|g|))u \|_{q,\I} }{\big\|gv_1v_2^{-1}\big\|_{\i,\I}}, \end{align*} and the statement follows by Theorem \ref{HardyforSupremal}. \end{proof} The following statement can be proved analogously. 
\begin{thm}\label{Emb-Lp-Cop} Let $0 < p_2 \le p_1 \le \i$, $0 < q \le \i$, $v_1,\,v_2 \in \W\I$ and $u \in \dual{\O}_q$. {\rm (i)} If $p_1 \le q$, then \begin{equation*} \|\Id\|_{L_{p_1}(v_1) \rw \cop_{p_2,q}(u,v_2)} \ap \sup_{t \in \I} \big\| v_1^{-1} v_2 \big\|_{p_1 \rw p_2, (t,\infty)}\|u\|_{q,(0,t)} \end{equation*} uniformly in $u \in \dual{\O}_q$. {\rm (ii)} If $q < p_1$, then \begin{equation*} \|\Id\|_{L_{p_1}(v_1) \rw \cop_{p_2,q}(u,v_2)} \ap \bigg( \int_{(0,\infty)} \big\| v_1^{-1} v_2 \big\|_{p_1 \rw p_2,(t,\infty)}^{p_1 \rw q} \,d \bigg( \|u\|_{q,(0,t)}^{p_1 \rw q}\bigg) \bigg)^{\frac{1}{p_1 \rw q}} \end{equation*} uniformly in $u \in \dual{\O}_q$. \end{thm} \section{Characterizations of $ \ces_{p_2,q}(u,v_2) \hra L_{p_1}(v_1) $ and $ \cop_{p_2,q}(u,v_2)\hra L_{p_1}(v_1)$}\label{s5} In this section we characterize the embeddings \eqref{emb3} and \eqref{emb4}. \begin{thm}\label{Emb-Ces-Lp} Let $0 < p_1 \le p_2 \le \infty$, $0 < q \le \infty$, $v_1,\,v_2 \in \W\I$ and $u \in \O_{q}$. {\rm (i)} If $q \le p_1$, then \begin{equation*} \|\Id\|_{\ces_{p_2,q}(u,v_2) \rw L_{p_1}(v_1)} \ap \sup_{t \in \I} \big\|v_1 v_2^{-1} \big\|_{p_1 \rw p_2,(t,\i)}\|u\|_{q,(t,\infty)}^{-1} \end{equation*} uniformly in $u \in \O_q$. {\rm (ii)} If $p_1 < q$, then \begin{align*} \|\Id\|_{\ces_{p_2,q}(u,v_2) \rw L_{p_1}(v_1)} \ap & \bigg( \int_{\I} \big\| v_1 v_2^{-1} \big\|_{p_1 \rw p_2,(t,\i)}^{q \rw p_1} d \bigg( \|u\|_{q,(t-,\infty)}^{- q \rw p_1} \bigg) \bigg)^{\frac{1}{q \rw p_1}} + \frac{\big\|v_1 v_2^{-1}\big\|_{p_1 \rw p_2,\I}}{\|u\|_{q,(0,\infty)}} \end{align*} uniformly in $u \in \O_q$. \end{thm} \begin{proof} Let $p_2 < \infty$.
Since \begin{align*} \|\Id\|_{\ces_{p_2,q}(u,v_2) \rw L_{p_1}(v_1)} = \sup_{f \in \M^+ \I} \frac{\|f\|_{p_1,v_1,\I}} {\big\| \|f\|_{p_2,v_2,(0,\cdot)} \big\|_{q,u,(0,\infty)}} = \left(\sup_{g \in \M^+ \I} \frac{\big\|g ( v_1 v_2^{-1})^{p_2} \big\|_{\frac{p_1}{p_2},\I}}{\big\| H(|g|) u^{p_2} \big\|_{\frac{q}{p_2},\I}}\right)^{\frac{1}{p_2}}, \end{align*} it remains to apply Theorem \ref{ReverseHardyIneq}. If $p_2 = \infty$, then \begin{align*} \|\Id\|_{\ces_{p_2,q}(u,v_2) \rw L_{p_1}(v_1)} = \sup_{f \in \M^+ \I} \frac{\|f\|_{p_1,v_1,\I}} {\big\| \|f\|_{p_2,v_2,(0,\cdot)} \big\|_{q,u,(0,\infty)}} = \sup_{g \in \M^+ \I} \frac{\big\|g v_1 v_2^{-1} \big\|_{p_1,\I}}{\big\| S(|g|) u \big\|_{q,\I}}, \end{align*} and the statement follows by Theorem \ref{ReverseHardySupremal}. \end{proof} The following statement can be proved analogously. \begin{thm}\label{Emb-Cop-Lp} Let $0 < p_1 \le p_2 \le \infty$, $0 < q \le \infty$, $v_1,\,v_2 \in \W\I$ and $u \in \dual\O_q$. {\rm (i)} If $q \le p_1$, then \begin{equation*} \|\Id\|_{\cop_{p_2,q}(u,v_2) \rw L_{p_1}(v_1)} \ap \sup_{t \in \I} \big\|v_1 v_2^{-1} \big\|_{p_1 \rw p_2,(0,t)}\|u\|_{q,(0,t)}^{-1} \end{equation*} uniformly in $u \in \dual\O_q$. {\rm (ii)} If $p_1 < q$, then \begin{align*} \|\Id\|_{\cop_{p_2,q}(u,v_2) \rw L_{p_1}(v_1)} \ap & \bigg( \int_{\I} \big\| v_1 v_2^{-1} \big\|_{p_1 \rw p_2,(0,t)}^{q \rw p_1} d \bigg( - \|u\|_{q,(0,t+)}^{- q \rw p_1} \bigg) \bigg)^{\frac{1}{q \rw p_1}} + \frac{\big\|v_1 v_2^{-1}\big\|_{p_1 \rw p_2,\I}}{\|u\|_{q,(0,\infty)}} \end{align*} uniformly in $u \in \dual\O_q$. \end{thm} \begin{defi} Let $X$ be a set of functions from $\M\I$, endowed with a positively homogeneous functional $\|\cdot \|_X$, defined for every $f\in \M\I$ and such that $f\in X$ if and only if $\|f\|_X<\i$. We define the associate space $X'$ of $X$ as the set of all functions $f\in \M\I$ such that $\|f\|_{X'}<\i$, where $$ \|f\|_{X'}=\sup \bigg\{\int_{\I}|f(x)g(x)|\,dx : \,\,\|g\|_X \leq 1\bigg\}.
$$ \end{defi} In particular, Theorems \ref{IterHar3.2.0} and \ref{Emb-Cop-Lp} allow us to give a characterization of the associate spaces of weighted Ces\`{a}ro and Copson function spaces. \begin{thm}\label{thm.6.6} Assume $1\leq p<\i$, $0 < q \leq \i$. Let $u \in \O_q$ and $v \in \W\I$. Set $$ X=\ces_{p,q}(u,v). $$ {\rm (i)} Let $0 < q \leq 1$. Then $$ \|f\|_{X'}\ap \sup_{t \in \I} \|f \|_{p',v^{- 1},(t,\i)} \|u\|_{q,(t,\i)}^{-1}, $$ with the positive constants in the equivalence independent of $f$. {\rm (ii)} Let $1 < q \leq \i$. Then $$ \|f\|_{X'}\ap \bigg(\int_{\I} \|f \|_{{p'},v^{-1},(t,\i)}^{q'}d \bigg(\|u\|_{q,(t-,\i)}^{-q'}\bigg)\bigg)^{\frac{1}{q'}}+\|f\|_{p',v^{-1},\I}\|u\|_{q,\I}^{-1}, $$ with the positive constants in the equivalence independent of $f$. \end{thm} \begin{thm}\label{thm.6.5} Assume $1 \leq p<\i$, $0 < q \leq \i$. Let $u \in \dual{\O}_q$ and $v \in \W\I$. Set $$ X=\cop_{p,q}(u,v). $$ {\rm (i)} Let $0 < q \leq 1$. Then $$ \|f\|_{X'}\ap \sup_{t \in \I} \|f \|_{p',v^{- 1},(0,t)} \|u\|_{q,(0,t)}^{-1}, $$ with the positive constants in the equivalence independent of $f$. {\rm (ii)} Let $1 < q \leq \i$. Then $$ \|f\|_{X'}\ap \bigg(\int_{\I} \|f \|_{p',v^{-1},(0,t)}^{q'} d \bigg(- \|u\|_{q,(0,t+)}^{-q'} \bigg) \bigg)^{\frac{1}{q'}} + \|f\|_{p',v^{-1},\I} \|u\|_{q,\I}^{-1}, $$ with the positive constants in the equivalence independent of $f$. \end{thm} \section{The iterated Hardy-type inequalities}\label{Iterated Hardy Type Inequalities}\label{s6} In this section we recall characterizations of the weighted iterated Hardy-type inequalities \begin{equation}\label{eq.4.11} \big\|\|H^* f\|_{p,u,(0,\cdot)}\big\|_{q,w,\I}\leq c \,\|f\|_{\theta,v,\I},~f \in \M^+\I, \end{equation} where $0 < p,\,q \le \infty$, $1 < \theta < \infty$.
Note that weighted iterated Hardy-type inequalities have been intensively investigated recently (see, for instance, \cite{gmp} and \cite{GogMusPers2}, when $0 < p < \infty$, $0 < q \le \infty$, $1 \le \theta \le \infty$, and \cite{gop}, when $p = \infty$, $0 < q < \infty$, $1 \le \theta < \infty$; for more detailed information see the recent papers \cite{GogMusIHI} and \cite{GogMusISI}). There exist different solutions of these inequalities. We will use the characterizations from \cite{gmp} and \cite{gop}. Everywhere in this section, $u$, $v$ and $w$ are weights on $\I$, and we denote $$ U(t)=\int_0^t u(\tau)\,d\tau \qquad \mbox{and}\qquad V_{\t}(t)= \int_t^{\i}{v(\tau)}^{1-\t'}d\tau \quad \mbox{for}\,\, 1<\t<\i. $$ We assume that $u$ is such that $U(t)>0$ for every $t\in\I$. \begin{defi} Let $U$ be a continuous, strictly increasing function on $[0,\i)$ such that $U(0)=0$ and $\lim_{t\rw\i}U(t)=\i$. Then we say that $U$ is admissible. \end{defi} Let $U$ be an admissible function. We say that a function $\vp$ is $U$-quasiconcave if $\vp$ is equivalent to an increasing function on $(0,\i)$ and ${\vp} / {U}$ is equivalent to a decreasing function on $(0,\infty)$. We say that a $U$-quasiconcave function $\vp$ is non-degenerate if $$ \lim_{t\rw 0+} \vp(t) = \lim_{t\rw \i} \frac{1}{\vp(t)} = \lim_{t\rw \i} \frac{\vp(t)}{U(t)} = \lim_{t\rw 0+} \frac{U(t)}{\vp(t)} =0. $$ The family of non-degenerate $U$-quasiconcave functions is denoted by $Q_U$. \begin{defi} Let $U$ be an admissible function, and let $w$ be a non-negative measurable function on $(0,\i)$. We say that the function $\vp$ defined by \begin{equation*} \vp(t)=U(t)\int_0^{\infty} \frac{w(\tau)\,d\tau}{U(\tau)+U(t)}, \qq t\in (0,\i), \end{equation*} is a fundamental function of $w$ with respect to $U$. We will also say that $w(s)\,ds$ is a representation measure of $\vp$ with respect to $U$. \end{defi} Denote by $$ {\mathcal U}(x,t): = \frac{U(x)}{U(t)+U(x)}.
$$ \begin{rem}\label{nondegrem} Let $\vp$ be the fundamental function of $w$ with respect to $U$. Assume that \begin{equation*} \int_0^{\infty}\frac{w(\tau)\,d\tau}{U(\tau)+U(t)}<\i, ~ t> 0, \qq \int_0^1 \frac{w(\tau)\,d\tau}{U(\tau)}=\int_1^{\infty}w(\tau)\,d\tau=\infty. \end{equation*} Then $\vp\in Q_{U}$. \end{rem} First we recall the characterization of \eqref{eq.4.1}, when $p < \infty$ and $q < \infty$. \begin{thm}\cite[Theorem 3.1]{gmp}\label{IterHar3.1.0} Let $0<q<\i$, $0<p<\i$, $1<\t < \i$ and let $u,\,v,\,w$ be weights. Assume that $u$ is such that $U$ is admissible and $\vp \in Q_{U^{\frac{q}{p}}}$, where $\vp$ is defined by \begin{equation}\label{eq.4.3} \vp(x) =\int_0^{\i}{\mathcal U}(x,\tau)^{\frac{q}{p}}w(\tau)\,d\tau \qquad\mbox{for all} \qquad x\in \I. \end{equation} Then the inequality \begin{equation}\label{eq.4.1} \bigg(\int_0^{\i}\bigg(\frac{1}{U(t)}\int_0^t\bigg(\int_{\tau}^{\i}h(z)dz\bigg)^p u(\tau)\,d\tau\bigg) ^{\frac{q}{p}}w(t)dt\bigg)^{\frac{1}{q}}\leq c \,\bigg(\int_0^{\infty} h(t)^{\theta} v(t)\,dt\bigg)^{\frac{1}{\theta}}, \end{equation} holds for every measurable function $h$ on $\I$ if and only if {\rm (i)} $\t \leq \min\{p,q\}$ and $$ A_1:=\sup_{x\in (0,\i)}\bigg(\int_0^{\i}{\mathcal U}(x,\tau)^{\frac{q}{p}}w(\tau)\,d\tau\bigg)^{\frac{1}{q}} \sup_{t\in (0,\infty)}{\mathcal U}(t,x)^{\frac{1}{p}}{V_{\t}(t)}^{\frac{1}{\t'}}<\i. $$ Moreover, the best constant $c$ in \eqref{eq.4.1} satisfies $c\ap A_1$. {\rm (ii)} $q<\t \le p$, $l=\frac{\t q}{\t-q}$ and \begin{gather*} A_2:=\bigg(\int_0^{\i} \bigg(\int_0^{\i}{\mathcal U}(x,\tau)^{\frac{q}{p}}w(\tau)\,d\tau\bigg)^{\frac{l-q}{q}}{w(x)} \sup_{t \in \I} {\mathcal U}(t,x)^{\frac{l}{p}}{V_{\t}(t)}^{\frac{l}{\t'}}dx\bigg)^{\frac{1}{l}}<\i. \end{gather*} Moreover, the best constant $c$ in \eqref{eq.4.1} satisfies $c\ap A_2$.
{\rm (iii)} $p<\t \leq q$, $r=\frac{\t p}{\t-p}$ and
\begin{gather*}
A_3:= \sup_{x\in\I}\bigg(\int_0^{\i} {\mathcal U}(x,\tau)^{\frac{q}{p}}w(\tau)\,d\tau\bigg)^{\frac{1}{q}} \bigg( \int_0^{\i}{\mathcal U}(t,x)^{\frac{r}{p}} {V_{\t}(t)}^{\frac{r}{p'}}{v(t)}^{1-\t'}dt \bigg)^{\frac{1}{r}}<\i.
\end{gather*}
Moreover, the best constant $c$ in \eqref{eq.4.1} satisfies $c\ap A_3$.

{\rm (iv)} $\max\{p,q\}<\t$, $r=\frac{\t p}{\t-p}$, $l=\frac{\t q}{\t-q}$ and
\begin{gather*}
A_4:=\bigg(\int_0^{\i}\bigg(\int_0^{\i}{\mathcal U}(x,\tau)^{\frac{q}{p}}w(\tau)\,d\tau\bigg)^{\frac{l-q}{q}}w(x) \bigg( \int_0^{\i}{\mathcal U}(t,x)^{\frac{r}{p}} {V_{\t}(t)}^{\frac{r}{p'}}{v(t)}^{1-\t'}dt \bigg)^{\frac{l}{r}}dx\bigg)^{\frac{1}{l}}<\i.
\end{gather*}
Moreover, the best constant $c$ in \eqref{eq.4.1} satisfies $c\ap A_4$.
\end{thm}
\begin{rem}\label{limsupcondition}
Suppose that $\vp (x) < \i$ for all $x \in (0,\i)$, where $\vp$ is defined by
\begin{equation*}
\vp(x)=\esup_{t\in(0,x)}{U(t)} \esup_{\tau\in(t,\i)}\frac{w(\tau)}{U(\tau)},~~x\in\I.
\end{equation*}
If
$$
\limsup_{t \rightarrow 0 +} w(t) = \limsup_{t \rightarrow +\infty} \frac{1}{w(t)} = \limsup_{t \rightarrow 0 +} \frac{U(t)}{w(t)} = \limsup_{t \rightarrow +\infty} \frac{w(t)}{U(t)} = 0,
$$
then $\vp\in Q_{U}$.
\end{rem}
We now state the announced characterization of \eqref{eq.4.2}, when $p < \infty$ and $q = \infty$.
\begin{thm}\cite[Theorem 3.2]{gmp}\label{IterHar3.2.0}
Let $0<p<\i$, $1<\t<\i$ and let $u,\,v,\,w$ be weights. Assume that $u$ is such that $U$ is admissible and $\vp \in Q_{U^{\frac{1}{p}}}$, where $\vp$ is defined by
\begin{equation}\label{eq.2.0}
\vp(x)=\esup_{t\in\I}w(t){\mathcal U}(x,t)^{\frac{1}{p}},~~x\in\I.
\end{equation}
Then the inequality
\begin{equation}\label{eq.4.2}
\esup_{t\in\I}w(t)\bigg(\frac{1}{U(t)}\int_0^t\bigg(\int_{\tau}^{\i}h(z)dz\bigg)^pu(\tau)\,d\tau\bigg) ^{\frac{1}{p}} \leq c \,\bigg(\int_0^{\infty} h(t)^{\theta} v(t)\,dt\bigg)^{\frac{1}{\theta}},
\end{equation}
holds for every measurable function $h$ on $\I$ if and only if

{\rm (i)} $\t\leq p$ and
\begin{gather*}
B_1:=\sup_{x\in\I}\esup_{\tau\in\I}w(\tau){\mathcal U}(x,\tau)^{\frac{1}{p}} \sup_{t\in (0,\i)}{\mathcal U}(t,x)^{\frac{1}{p}} {V_{\t}(t)}^{\frac{1}{\t'}}<\i.
\end{gather*}
Moreover, the best constant $c$ in \eqref{eq.4.2} satisfies $c\ap B_1$.

{\rm (ii)} $p<\t$, $r=\frac{\t p}{\t-p}$ and
\begin{gather*}
B_2:= \sup_{x\in\I}\esup_{\tau\in\I}w(\tau){\mathcal U}(x,\tau)^{\frac{1}{p}} \bigg(\int_0^{\i}{\mathcal U}(t,x)^{\frac{r}{p}} {V_{\t}(t)}^{\frac{r}{p'}}{v(t)}^{1-\t'}dt\bigg)^{\frac{1}{r}}<\i.
\end{gather*}
Moreover, the best constant $c$ in \eqref{eq.4.2} satisfies $c\ap B_2$.
\end{thm}
For a given weight $v$, $0 \le a < b \le \infty$ and $1 \le \theta < \infty$, we denote
$$
v_{\theta} (a,b) =
\begin{cases}
\bigg ( \int\limits_a^b [v(t)]^{1-\theta'}dt\bigg)^{\frac{1}{\theta'}} & \qquad \mbox{when} ~ 1 < \theta < \infty, \\
\esup\limits_{t \in (a,b)} \, [v(t)]^{-1} & \qquad \mbox{when} ~ \theta = 1.
\end{cases}
$$
Finally, recall the characterization of \eqref{ISI.2}, when $p = \infty$ and $q < \infty$.
\begin{thm}\cite[Theorems 4.1 and 4.4]{gop} \label{IterHarSupremal.1}
Let $1 \le \theta < \infty$, $0 < q < \infty$ and let $u \in \W\I \cap C\I$. Assume that $v,\,w \in \W\I$ are such that
$$
0 < \int_x^{\infty} v(\tau)\,d\tau < \i \qquad \mbox{and} \qquad 0 < \int_x^{\infty} w(\tau)\,d\tau < \infty \qq \mbox{for all} \qq x > 0.
$$ Then inequality \begin{equation}\label{ISI.2} \bigg(\int_0^{\infty} \bigg( \sup_{\tau \in (0,t)} \,u(\tau) \int_{\tau}^{\infty} h(z)\,dz\bigg)^q w(t)\,dt \bigg)^{\frac{1}{q}} \leq c \,\bigg(\int_0^{\infty} h(t)^{\theta} v(t)\,dt\bigg)^{\frac{1}{\theta}} \end{equation} is satisfied with the best constant $c$ if and only if: {\rm (i)} $\theta \le q$, and in this case $c \ap A_1$, where $$ A_1: = \sup_{x \in \I}\bigg( \bigg[\sup_{\tau \in (0,x)} u(\tau)\bigg]^q \int_x^{\infty} w(\tau)\,d\tau + \int_0^x \bigg[\sup_{\tau \in (0,t)} u(\tau)\bigg]^q w(t)\,dt\bigg)^{\frac{1}{q}}v_{\theta}(x,\infty); $$ {\rm (ii)} $q < \theta$ and $\frac{1}{r} = \frac{1}{q} - \frac{1}{\theta}$, and in this case $c \ap B_1 + B_2$, where \begin{align*} B_1: & = \bigg(\int_0^{\infty} \bigg(\int_0^x \bigg[\sup_{\tau \in (0,t)} u(\tau)\bigg]^q w(t)\,dt\bigg)^{\frac{r}{\theta}} \bigg[\sup_{\tau \in (0,x)} u(\tau)\bigg]^q \bigg[v_{\theta}(x,\infty)\bigg]^r w(x)\,dx \bigg)^{\frac{1}{r}}, \\ B_2: & = \bigg(\int_0^{\infty} \bigg( \int_x^{\infty} w(\tau)\,d\tau \bigg)^{\frac{r}{\theta}} \bigg[\sup_{\tau \in (0,x)} \bigg[\sup_{y \in (0,\tau)} u(y)\bigg] v_{\theta} (\tau,\infty) \bigg]^r w(x)\,dx \bigg)^{\frac{1}{r}}. \end{align*} \end{thm} \section{Embeddings Between $\cop_{p_1,q_1}(u_1,v_1)$ and $\ces_{p_2,q_2}(u_2,v_2)$}\label{s7} In this section we characterize the embeddings between weighted Copson and Ces\`{a}ro function spaces. From now on, we will denote $$ v(x) : = v_1(x)^{-1} v_2(x), \quad V(x) : = \| v\|_{p_1 \rw p_2,(0,x)}, \quad \mbox{and} \quad {\mathcal V}(t,x):= \frac{V(t)}{V(t)+V(x)}, ~ (t > 0,\,x > 0). $$ \begin{lem}\label{triviality} Let $0<p_1, p_2, q_1, q_2\le \i$ and $p_1<p_2$. Assume that $v_1, v_2\in \W\I$, $u_1\in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$. Then $\cop_{p_1,q_1}(u_1,v_1)\not \hookrightarrow \ces_{p_2,q_2}(u_2,v_2)$. \begin{proof} Assume that $\cop_{p_1,q_1}(u_1,v_1) \hookrightarrow \ces_{p_2,q_2}(u_2,v_2)$ holds. 
Then there exists $c>0$ such that
\begin{equation*}
\|f\|_{\ces_{p_2,q_2}(u_2,v_2)} \leq c \,\|f\|_{\cop_{p_1,q_1}(u_1,v_1)}
\end{equation*}
holds for all $f \in \M^+\I$. Let $\tau\in \I$ and $f=0$ on $(\tau,\i)$. Thus, we have
\begin{align}
\|f\|_{\ces_{p_2,q_2}(u_2,v_2)}&= \big\| \|f\|_{p_2,v_2,(0,\cdot)} \big\|_{q_2,u_2,\I}\notag\\
&\geq \big\| \|f\|_{p_2,v_2,(0,\cdot)} \big\|_{q_2,u_2,(\tau,\i)}\notag\\
&\geq \| u_2 \|_{q_2,(\tau,\i)} \|f\|_{p_2,v_2,(0,\tau)} \label{1}
\end{align}
and
\begin{align}
\|f\|_{\cop_{p_1,q_1}(u_1,v_1)}&= \big\| \|f\|_{p_1,v_1,(\cdot,\i)} \big\|_{q_1,u_1,\I}\notag\\
&\leq \big\| \|f\|_{p_1,v_1,(\cdot,\i)} \big\|_{q_1,u_1,(0,\tau)}\notag\\
&\leq \| u_1 \|_{q_1,(0,\tau)} \|f\|_{p_1,v_1,(0,\tau)} \label{2}.
\end{align}
Combining \eqref{1} with \eqref{2}, we can assert that
\begin{equation*}
\| u_2 \|_{q_2,(\tau,\i)} \|f\|_{p_2,v_2,(0,\tau)}\leq c \| u_1 \|_{q_1,(0,\tau)} \|f\|_{p_1,v_1,(0,\tau)}.
\end{equation*}
Since $u_1\in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$, we conclude that $L_{p_1}(v_1)\hookrightarrow L_{p_2}(v_2)$, which is a contradiction.
\end{proof}
\end{lem}
\begin{thm}
Let $0<p_1=q_1<\i$, $0<p_2=q_2<\i$, $v_1, v_2\in \W\I$, $u_1\in \dual{\O_{q_1}}$ and $u_2 \in \O_{q_2}$. Then
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \bigg\| \|u_1\|_{p_1,(0,\cdot)}^{- 1} \|u_2\|_{p_2,(\cdot,\infty)} \bigg\|_{p_1 \rw p_2,v,\I}.
\end{equation*}
\end{thm}
\begin{proof}
In view of Lemmas \ref{Cespp} and \ref{Coppp}, we have that
$$
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} = \|\Id\|_{L_{p_1}(w_1) \rw L_{p_2}(w_2)}
$$
with $w_1(x) = v_1(x)\|u_1\|_{p_1,(0,x)}$ and $w_2(x) = v_2(x)\|u_2\|_{p_2,(x,\infty)}$, $x>0$. The statement now follows from the known characterization of the embedding $L_{p_1}(w_1) \hookrightarrow L_{p_2}(w_2)$ between weighted Lebesgue spaces.
\end{proof}
\begin{thm}
Let $0<p_1,p_2,q_1,q_2<\i$, $p_1=q_1$ and $p_2\neq q_2$. Let $v_1, v_2\in \W\I$, $u_1\in \dual{\O_{q_1}}$ and $u_2 \in \O_{q_2}$.
{\rm (i)} If $p_1\le q_2$, then
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap \sup_{t \in \I} \big\| \|u_1\|_{p_1,(0,\cdot)}^{-1} \big\|_{p_1 \rw p_2,v,(0,t)}\|u_2\|_{q_2,(t,\infty)},
\end{equation*}
{\rm (ii)} If $q_2 < p_1$, then
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap \bigg(\int_{(0,\infty)} \big\| \|u_1\|_{p_1,(0,\cdot)}^{-1} \big\|_{p_1 \rw p_2,v,(0,t)}^{p_1 \rw q_2} d \,\bigg(- \|u_2\|_{q_2,(t,\infty)}^{p_1 \rw q_2} \bigg)\bigg)^{\frac{1}{p_1 \rw q_2}}.
\end{align*}
\end{thm}
\begin{proof}
In view of Lemma~\ref{Coppp}, we have that
$$
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} = \|\Id\|_{L_{p_1}(w_1) \rw \ces_{p_2,q_2}(u_2,v_2)}
$$
with $w_1(x) = v_1(x)\|u_1\|_{p_1,(0,x)}$, $x>0$. Then the result follows from Theorem~\ref{Emb-Lp-Ces}.
\end{proof}
\begin{thm}
Let $0<p_1,p_2,q_1,q_2<\i$, $p_1 \neq q_1$ and $p_2= q_2$. Let $v_1, v_2\in \W\I$, $u_1\in \dual{\O_{q_1}}$ and $u_2 \in \O_{q_2}$.

{\rm (i)} If $q_1 \le p_2$, then
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap \sup_{t \in \I} \|u_1\|_{q_1,(0,t)}^{-1} \,\big\| \|u_2\|_{p_2,(\cdot,\infty)} \big\|_{p_1 \rw p_2,v,(0,t)},
\end{equation*}
{\rm (ii)} If $p_2<q_1$, then
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap & \, \bigg(\int_0^\i \big\| \|u_2\|_{p_2,(\cdot,\infty)}\big\|_{p_1 \rw p_2,v,(0,t)}^{q_1 \rw p_2} d \,\bigg( -\|u_1\|_{q_1,(0,t)}^{- q_1 \rw p_2}\bigg)\bigg)^{\frac{1}{q_1 \rw p_2}} \\
& + \|u_1\|_{q_1,(0,\infty)}^{-1} \,\big\| \|u_2\|_{p_2,(\cdot,\infty)}\big\|_{p_1 \rw p_2,v,(0,\infty)}.
\end{align*}
\end{thm}
\begin{proof}
In view of Lemma~\ref{Cespp}, we have that
$$
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} = \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw L_{p_2}(w_2)}
$$
with $w_2(x) = v_2(x)\|u_2\|_{p_2,(x,\infty)}$, $x>0$. Then the result follows from Theorem~\ref{Emb-Cop-Lp}.
\end{proof}
The following lemma holds.
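Its proof rests on the classical duality in weighted Lebesgue spaces, which we record here in the notation of this paper for the reader's convenience (a standard fact: since $p_2 < q_2$, the exponent $r := \frac{q_2}{p_2}$ exceeds $1$ and $r' = \frac{q_2}{q_2-p_2}$). Writing $F(\tau):=\int_0^{\tau} f(x)^{p_2} v_2(x)^{p_2}\,dx$, one has
$$
\|f\|_{\ces_{p_2,q_2}(u_2,v_2)}^{p_2} = \big\| F \big\|_{\frac{q_2}{p_2},\,u_2^{p_2},\,\I} = \sup_{g\in \M^+\I} \frac{\int_0^{\i} F(\tau)\,g(\tau)\,d\tau}{\|g\|_{\frac{q_2}{q_2-p_2},\,u_2^{-p_2},\,\I}}.
$$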
\begin{lem}\label{mainlemma} Let $0<p_1, p_2, q_1, q_2<\i$, $p_2\le p_1$ and $p_2<q_2$. Let $v_1, v_2\in \W\I$, $u_1 \in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$. Then $$ \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} = \left\{\sup_{g\in \M^+\I} \ddfrac{\big\|\Id\big\|_{\cop_{p_1,q_1}(u_1,v_1)\rw L_{p_2}\big(v_2 H^*(g)^{\frac{1}{p_2}}\big)}^{p_2}}{\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}} \right\}^{\frac{1}{p_2}}. $$ \end{lem} \begin{proof} By duality, interchanging suprema, we have that \begin{align*} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} & \\ & \hspace{-3cm} =\sup_{f\in \M^+\I} \ddfrac{\|f\|_{\ces_{p_2,q_2}(u_2,v_2)}}{\|f\|_{\cop_{p_1,q_1}(u_1,v_1)}} \\ & \hspace{-3cm} = \sup_{f\in \M^+\I} \ddfrac{1}{\|f\|_{\cop_{p_1,q_1}(u_1,v_1)}} \sup_{g\in \M^+\I} \ddfrac{\bigg(\int_0^\i \bigg(\int_0^{\tau} f(x)^{p_2}v_2(x)^{p_2}\,dx\bigg)\,g(\tau)\,d\tau\bigg)^{\frac{1}{p_2}}} {\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}^{\frac{1}{p_2}}}\\ & \hspace{-3cm} = \sup_{g\in \M^+\I} \ddfrac{1}{\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}^{\frac{1}{p_2}}} \sup_{f\in \M^+\I} \ddfrac{\bigg(\int_0^\i \bigg(\int_0^{\tau} f(x)^{p_2}v_2(x)^{p_2}\,dx\bigg)\,g(\tau)\,d\tau\bigg)^{\frac{1}{p_2}}} {\|f\|_{\cop_{p_1,q_1}(u_1,v_1)}}. 
\end{align*}
Applying Fubini's theorem, we get that
\begin{align}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} & \notag \\
& \hspace{-3cm} = \sup_{g\in \M^+\I} \ddfrac{1}{\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}^{\frac{1}{p_2}}} \sup_{f\in \M^+\I} \ddfrac{\bigg(\int_0^\i f(x)^{p_2}v_2(x)^{p_2} \bigg(\int_x^\i g(\tau)\,d\tau\bigg) dx \bigg)^{\frac{1}{p_2}}} {\|f\|_{\cop_{p_1,q_1}(u_1,v_1)}}\notag\\
& \hspace{-3cm} = \sup_{g\in \M^+\I} \ddfrac{1}{\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}^{\frac{1}{p_2}}} \,\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw L_{p_2}\big(v_2 H^*(g)^{\frac{1}{p_2}}\big)}.\label{GommeninIlkDenkligi}
\end{align}
\end{proof}
\begin{thm}\label{maintheorem1}
Let $0<p_1, p_2, q_1, q_2<\i$, $p_2 < p_1$, $q_1 \le p_2 < q_2$. Let $v_1, v_2\in \W\I$, $u_1 \in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$. Assume that $V$ is admissible and
$$
\vp_1(x):= \esup_{t\in \I} V(t){\mathcal V}(x,t) \|u_1\|_{q_1,(0,t)}^{-1} \in Q_{V^{\frac{1}{p_1 \rw p_2}}}.
$$
{\rm (i)} If $p_1\le q_2$, then
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \sup_{x\in \I} \vp_1(x) \sup_{t\in \I} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}.
\end{equation*}
{\rm (ii)} If $q_2<p_1$, then
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \sup_{x\in \I} \vp_1(x) \bigg(\int_0^\i {\mathcal V}(t,x)^{p_1 \rw q_2} d\bigg( - \|u_2\|_{q_2,(t,\infty)}^{p_1 \rw q_2}\bigg) \bigg)^{\frac{1}{p_1 \rw q_2}}.
\end{equation*}
\end{thm}
\begin{proof}
By Lemma \ref{mainlemma}, we have that
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} = \sup_{g\in \M^+\I} \frac{1}{\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}^{\frac{1}{p_2}}} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw L_{p_2}(v_2 H^*(g)^{\frac{1}{p_2}})}.
\end{align*}
Since $q_1\le p_2$, applying Theorem~[\ref{Emb-Cop-Lp}, (i)], we obtain that
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap \left\{ \sup_{g\in \M^+\I} \ddfrac {\sup_{t\in \I} \|u_1\|_{q_1,(0,t)}^{- p_2} \|H^*g\|_{\frac{p_1}{p_1 - p_2},v^{p_2},(0,t)}} {\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}}\right\}^{\frac{1}{p_2}}.
\end{align*}
{\rm (i)} If $p_1\le q_2$, then applying Theorem~[\ref{IterHar3.2.0}, (i)], we arrive at
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \sup_{x\in \I} \vp_1(x) \sup_{t\in \I} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}.
\end{equation*}
{\rm (ii)} If $q_2 < p_1$, then applying Theorem~[\ref{IterHar3.2.0}, (ii)], we arrive at
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \sup_{x\in \I} \vp_1(x) \bigg(\int_0^\i {\mathcal V}(t,x)^{p_1 \rw q_2} d\bigg( - \|u_2\|_{q_2,(t,\infty)}^{p_1 \rw q_2}\bigg) \bigg)^{\frac{1}{p_1 \rw q_2}}.
\end{equation*}
\end{proof}
\begin{rem}
In view of Remark~\ref{limsupcondition}, if
\begin{align*}
\limsup_{t\rw 0+} V(t) \|u_1\|_{q_1,(0,t)}^{-1} = \limsup_{t\rw +\infty} V(t)\|u_1\|_{q_1,(0,t)} = \limsup_{t\rw 0+}\|u_1\|_{q_1,(0,t)} = \limsup_{t\rw +\infty} \|u_1\|_{q_1,(0,t)}^{-1} = 0,
\end{align*}
then $\vp_1 \in Q_{V^{\frac{1}{p_1 \rw p_2}}}$.
\end{rem}
\begin{thm}\label{maintheorem3}
Let $0<p_1, p_2, q_1, q_2<\i$, $p_2 < p_1$ and $p_2 < \min\{q_1,q_2\}$. Let $v_1, v_2\in \W\I$, $u_1 \in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$. Assume that $V$ is admissible and
$$
\vp_2(x):= \bigg(\int_0^\i [{\mathcal V}(x,t)V(t)]^{q_1 \rw p_2} d \bigg( - \|u_1\|_{q_1,(0,t)}^{- q_1 \rw p_2}\bigg)\bigg)^{\frac{1}{q_1 \rw p_2}} \in Q_{V^{\frac{1}{p_1 \rw p_2}}}.
$$ {\rm (i)} If $\max\{p_1,q_1\} \leq q_2$, then \begin{align*} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap & \sup_{x\in \I} \vp_2(x) \sup_{t\in \I} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}\\ & + \|u_1\|_{q_1,\I}^{-1} \sup_{t\in \I} V(t) \|u_2\|_{q_2,(t,\infty)}. \end{align*} {\rm (ii)} If $p_1\le q_2<q_1$, then \begin{align*} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1) \rw \ces_{p_2,q_2}(u_2,v_2)} & \\ & \hspace{-3cm} \ap \bigg( \int_0^\i \vp_2(x)^{\frac{q_1 \rw q_2 \cdot q_1 \rw p_2}{q_2 \rw p_2}} V(x)^{q_1 \rw p_2} \bigg(\sup_{t\in(0,\infty)} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}\bigg)^{q_1 \rw q_2} d \bigg( - \|u_1\|_{q_1,(0,x)}^{- q_1 \rw p_2}\bigg) \bigg)^{\frac{1}{q_1 \rw q_2}} \\ & \hspace{-2.5cm} + \|u_1\|_{q_1,\I}^{-1} \sup_{t\in \I} V(t) \|u_2\|_{q_2,(t,\infty)}. \end{align*} {\rm (iii)} If $q_1\le q_2<p_1$, then \begin{align*} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1) \rw \ces_{p_2,q_2}(u_2,v_2)} \ap & \sup_{x\in\I} \vp_2(x) \bigg(\int_0^\i {\mathcal V}(t,x)^{p_1 \rw q_2} d \bigg( - \|u_2\|_{q_2, (t,\infty)}^{p_1 \rw q_2}\bigg)\bigg)^{\frac{1}{p_1 \rw q_2}}\\ & + \|u_1\|_{q_1,\I}^{-1} \bigg( \int_0^\i V(t)^{p_1 \rw q_2} d \bigg( - \|u_2\|_{q_2, (t,\infty)}^{p_1 \rw q_2}\bigg) \bigg)^{\frac{1}{p_1 \rw q_2}}. \end{align*} {\rm (iv)} If $q_2<\min\{p_1,q_1\}$, then \begin{align*} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1) \rw \ces_{p_2,q_2}(u_2,v_2)} & \\ & \hspace{-4cm} \ap \bigg( \int_0^\i \vp_2(x)^{\frac{q_1 \rw q_2 \cdot q_1 \rw p_2}{q_2 \rw p_2}} V(x)^{q_1 \rw p_2} \bigg(\int_0^{\infty}{\mathcal V}(t,x)^{p_1 \rw q_2} d \bigg(-\|u_2\|_{q_2,(t,\infty)}^{p_1 \rw q_2} \bigg) \bigg)^{\frac{q_1 \rw q_2}{p_1 \rw q_2}} d\bigg(-\|u_1\|_{q_1,(0,x)}^{- q_1 \rw p_2} \bigg) \bigg)^{\frac{1}{q_1 \rw q_2}} \\ & \hspace{-3.5cm} +\|u_1\|_{q_1,\I}^{-1} \bigg( \int_0^\i V(t)^{p_1 \rw q_2} d \bigg( - \|u_2\|_{q_2, (t,\infty)}^{p_1 \rw q_2}\bigg) \bigg)^{\frac{1}{p_1 \rw q_2}}. 
\end{align*} \end{thm} \begin{proof} By Lemma \ref{mainlemma}, applying Theorem~[\ref{Emb-Cop-Lp}, (ii)], we have that \begin{align*} \|\Id\|_{\cop_{p_1,q_1}(u_1,v_1) \rw \ces_{p_2,q_2}(u_2,v_2)} & \\ & \hspace{-3.5cm} \ap \|u_1\|_{q_1,\I}^{-1} \left\{\sup_{g\in \M^+\I} \ddfrac{ \|H^*g\|_{\frac{p_1}{p_1 - p_2},v^{p_2},(0,\infty)}} {\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}} \right\}^{\frac{1}{p_2}} \\ & \hspace{-3cm} + \left\{\sup_{g\in \M^+\I} \ddfrac{\bigg(\int_0^\i \|H^* g\|_{\frac{p_1}{p_1 - p_2},v^{p_2},(0,t)}^{\frac{q_1}{q_1-p_2}} d \bigg( - \|u_1\|_{q_1,(0,t)}^{- \frac{q_1 p_2}{q_1 - p_2}} \bigg)\bigg)^{\frac{q_1-p_2}{q_1}}} {\|g\|_{\frac{q_2}{q_2-p_2},u_2^{-p_2},\I}} \right\}^{\frac{1}{p_2}}\\ & \hspace{-3.5cm} := C_1 + C_2. \end{align*} Note that $$ C_1 = \|u_1\|_{q_1,\I}^{-1} \left\{ \|\Id\|_{L_{\frac{q_2}{q_2-p_2}}\big(u_2^{-p_2}\big) \rw \cop_{1, \frac{p_1}{p_1-p_2}}\big(v^{p_2}, {\bf 1}\big)}\right\}^{\frac{1}{p_2}} $$ Assume first that $p_1\leq q_2$. Applying Theorem~[\ref{Emb-Lp-Cop}, (i)], we arrive at \begin{align}\label{C_1 1} C_1 \ap \|u_1\|_{q_1,\I}^{-1} \sup_{t\in \I} V(t)\, \|u_2\|_{q_2,(t,\infty)}. \end{align} {\rm (i)} Let $q_1\le q_2$. Using Theorem~[\ref{IterHar3.1.0}, (i)], we obtain that \begin{equation*} C_2 \ap \sup_{x\in \I} \vp_2(x) \, \sup_{t\in (0,\infty)} {\mathcal V} (t,x) \,\|u_2\|_{q_2,(t,\infty)}. \end{equation*} Consequently, the proof is completed in this case. {\rm (ii)} Let $q_2<q_1$. Applying Theorem~[\ref{IterHar3.1.0}, (ii)], we have that \begin{align*} C_2 \ap \bigg( \int_0^\i \vp_2(x)^{\frac{q_1 \rw q_2 \cdot q_1 \rw p_2}{q_2 \rw p_2}} V(x)^{q_1 \rw p_2} \bigg(\sup_{t\in(0,\infty)} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}\bigg)^{q_1 \rw q_2} d \bigg( - \|u_1\|_{q_1,(0,x)}^{- q_1 \rw p_2}\bigg) \bigg)^{\frac{1}{q_1 \rw q_2}}, \end{align*} and the statement follows in this case. Let us now assume that $q_2 < p_1$. 
Then using Theorem~[\ref{Emb-Lp-Cop}, (ii)], we have that
\begin{align}\label{C_1 2}
C_1\ap \|u_1\|_{q_1,\I}^{-1} \bigg( \int_0^\i V(t)^{p_1 \rw q_2} d \bigg( - \|u_2\|_{q_2, (t,\infty)}^{p_1 \rw q_2}\bigg) \bigg)^{\frac{1}{p_1 \rw q_2}}.
\end{align}
{\rm (iii)} Let $q_1\le q_2$, then Theorem~[\ref{IterHar3.1.0}, (iii)] yields that
\begin{align*}
C_2\ap \sup_{x\in\I} \vp_2(x) \bigg(\int_0^\i {\mathcal V}(t,x)^{p_1 \rw q_2} d \bigg( - \|u_2\|_{q_2, (t,\infty)}^{p_1 \rw q_2}\bigg)\bigg)^{\frac{1}{p_1 \rw q_2}},
\end{align*}
which completes the proof in this case.
{\rm (iv)} If $q_2 < q_1$, then using Theorem~[\ref{IterHar3.1.0}, (iv)], we arrive at
\begin{align*}
C_2 \ap \bigg( \int_0^\i \vp_2(x)^{\frac{q_1 \rw q_2 \cdot q_1 \rw p_2}{q_2 \rw p_2}} V(x)^{q_1 \rw p_2} \bigg(\int_0^{\infty}{\mathcal V}(t,x)^{p_1 \rw q_2} d \bigg(-\|u_2\|_{q_2,(t,\infty)}^{p_1 \rw q_2} \bigg) \bigg)^{\frac{q_1 \rw q_2}{p_1 \rw q_2}} d\bigg(-\|u_1\|_{q_1,(0,x)}^{- q_1 \rw p_2} \bigg) \bigg)^{\frac{1}{q_1 \rw q_2}},
\end{align*}
and the proof follows.
\end{proof}
\begin{rem}
Assume that $\vp_2(x) < \infty, ~ x > 0$. In view of Remark~\ref{nondegrem}, if
$$
\int_0^1 \bigg(\int_0^t u_1^{q_1} \bigg)^{-\frac{q_1}{q_1-p_2}} u_1^{q_1}(t) \,dt = \int_1^{\infty} V(t)^{\frac{q_1 p_2}{q_1-p_2}} \bigg(\int_0^t u_1^{q_1} \bigg)^{-\frac{q_1}{q_1-p_2}} u_1^{q_1}(t) \,dt = \infty,
$$
then $\vp_2 \in Q_{V^{\frac{1}{p_1 \rw p_2}}}$.
\end{rem}
Now consider the case when $p_1 = p_2 = p$.
\begin{thm}\label{maintheorem2}
Let $0<q_1< p < q_2<\i$. Assume that $v_1, v_2\in \W\I$, $u_1 \in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$. Then
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \sup_{t \in \I} \big\| \|u_1\|_{q_1,(0,\cdot)}^{-1} \big\|_{\i,v,(0,t)} \|u_2\|_{q_2,(t,\i)}.
\end{equation*}
\end{thm}
\begin{proof}
By Lemma \ref{mainlemma}, applying Theorem~[\ref{Emb-Cop-Lp}, (i)], we have that
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} & \ap \left\{\sup_{g\in \M^+\I} \ddfrac{ \|H^*g\|_{\infty,v^p \|u_1\|_{q_1,(0,\cdot)}^{-p},\I}} {\|g\|_{\frac{q_2}{q_2-p},u_2^{-p},\I}} \right\}^{\frac{1}{p}}\\
& = \left\{\|\Id\|_{L_{\frac{q_2}{q_2-p}}\big(u_2^{-p}\big)\rw\cop_{1,\i}\big(v^p \|u_1\|_{q_1,(0,\cdot)}^{-p}, {\bf 1}\big)}\right\}^{\frac{1}{p}}.
\end{align*}
Therefore, by Theorem~[\ref{Emb-Lp-Cop}, (i)],
\begin{equation*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)}\ap \sup_{t \in \I} \big\| \|u_1\|_{q_1,(0,\cdot)}^{-1} \big\|_{\i,v,(0,t)} \|u_2\|_{q_2,(t,\i)}.
\end{equation*}
\end{proof}
Before proceeding to the case $p = p_1 = p_2 < q_2$, we prove another variant of a ``gluing'' lemma. The idea of the proof is along the same lines as in \cite[Theorem 3.1]{gogperstepwall}.
\begin{lem}\label{gluing.lem}
Let $\b$ be a positive number and let $u \in \W\I$, $g \in \M^+\I$. Assume that $h$ is a non-negative continuous function on $\I$. Then
\begin{align*}
\int_0^{\infty} \bigg( \int_0^{\infty} {\mathcal U}(x,t)g(t)\,dt \bigg)^{\b - 1} \bigg(\sup_{t\in \I} {\mathcal U}(t,x) h(t)\bigg)^{\b}g(x)\,dx & \\
& \hspace{-5.5cm} \ap \int_0^{\infty} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg(\sup_{t \in (x, \infty)} h(t)\bigg)^{\b} g(x)\,dx \\
& \hspace{-5cm} + \int_0^{\infty} \bigg( \int_x^{\infty} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \bigg( \sup_{t \in (0,x)} U(t)h(t)\bigg)^{\b}\,U(x)^{-1}g(x)\,dx.
\end{align*}
\end{lem}
\begin{proof}
Denote
\begin{align*}
A_1 : & = \int_0^{\infty} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg(\sup_{t \in (x, \infty)} h(t)\bigg)^{\b} g(x)\,dx, \\
A_2 : & = \int_0^{\infty} \bigg( \int_x^{\infty} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \bigg( \sup_{t \in (0,x)} U(t)h(t)\bigg)^{\b}\,U(x)^{-1}g(x)\,dx.
\end{align*} Obviously, \begin{align*} \int_0^{\infty} \bigg( \int_0^{\infty} {\mathcal U}(x,t)g(t)\,dt \bigg)^{\b - 1} \bigg(\sup_{t\in \I} {\mathcal U}(t,x) h(t)\bigg)^{\b}\,g(x)dx & \\ & \hspace{-7cm} \ap \int_0^{\infty} \bigg( \int_0^x g(t)\,dt + U(x) \int_x^{\infty} U(t)^{-1}g(t) \,dt\bigg)^{\b - 1} \bigg(U(x)^{-1} \sup_{t \in (0,x)} U(t)h(t) + \sup_{t \in (x,\infty)}h(t)\bigg)^{\b} g(x)\,dx \\ & \hspace{-7cm} \ap A_1 + A_2 + B_1 + B_2, \end{align*} where \begin{align*} B_1 : & = \int_0^{\infty} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg( U(x)^{-1} \sup_{t \in (0,x)} U(t)h(t)\bigg)^{\b} g(x)\,dx, \\ B_2 : & = \int_0^{\infty} \bigg( \int_x^{\infty} U(t)^{-1}g(t) \, dt\bigg)^{\b - 1} \bigg(U(x)\sup_{t \in (x,\infty)} h(t)\bigg)^{\b} U(x)^{- 1}g(x)\,dx. \end{align*} It is enough to show that $B_i \ls A_1 + A_2$, $i = 1,2$. Let us show that $B_1 \ls A_1 + A_2$. We will consider the case when $\int_0^{\infty} g(t)\,dt < \infty$ (The case when $\int_0^{\infty} g(t)\,dt = \infty$ is much simpler to treat). Define a sequence $\{x_k\}_{k = -\infty}^M$ such that $\int_0^{x_k} g(t)\,dt = 2^k$ if $-\infty < k \le M$ and $2^M \le \int_0^{\infty} g(t)\,dt < 2^{M+1}$. Then \begin{align*} B_1 & \le \int_0^{\infty} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg( \sup_{y \in (x,\infty)} U(y)^{-1} \sup_{t \in (0,y)} U(t)h(t)\bigg)^{\b} g(x)\,dx \\ & \ap \sum_{k = - \infty}^M 2^{k \b}\bigg( \sup_{y \in (x_k,\infty)} U(y)^{-1} \sup_{t \in (0,y)} U(t)h(t)\bigg)^{\b} \\ & \ap \sum_{k = - \infty}^M 2^{k \b}\bigg( \sup_{y \in (x_k,x_{k+1})} U(y)^{-1} \sup_{t \in (0,y)} U(t)h(t)\bigg)^{\b}. \end{align*} For every $-\infty < k \le M$ there exists $y_k \in (x_k,x_{k+1})$ such that $$ \sup_{y \in (x_k,x_{k+1})} U(y)^{-1} \sup_{t \in (0,y)} U(t)h(t) \le 2 U(y_k)^{-1} \sup_{t \in (0,y_k)} U(t)h(t). 
$$ Therefore, \begin{align*} B_1 & \ls \sum_{k = - \infty}^M 2^{k \b}\bigg( U(y_k)^{-1} \sup_{t \in (0,y_k)} U(t)h(t)\bigg)^{\b} \\ & \ap \sum_{k = - \infty}^M 2^{k \b}\bigg( U(y_k)^{-1} \sup_{t \in (0,y_{k-2})} U(t)h(t)\bigg)^{\b} + \sum_{k = - \infty}^M 2^{k \b}\bigg( U(y_k)^{-1} \sup_{t \in (y_{k-2},y_k)} U(t)h(t)\bigg)^{\b} = : I + II. \end{align*} Note that $2^k \le \int_0^{y_k} g(x)\,dx \le 2^{k+1}$ and $2^{k-1} \le \int_{y_{k-2}}^{y_k} g(x)\,dx \le 2^{k+1}$, $- \infty < k \le M$. It yields that \begin{align*} I & \ls \sum_{k = - \infty}^M \int_{y_{k-2}}^{y_k} \bigg( \int_x^{y_k} g(t)\,dt\bigg)^{\b - 1} g(x)\,dx \cdot \bigg( U(y_k)^{-1} \sup_{t \in (0,y_{k-2})} U(t)h(t)\bigg)^{\b} \\ & \le \sum_{k = - \infty}^M \int_{y_{k-2}}^{y_k} \bigg( \int_x^{y_k} U(t)^{-1}g(t)\,dt\bigg)^{\b - 1} U(x)^{-1}g(x)\,dx \cdot \bigg( \sup_{t \in (0,y_{k-2})} U(t)h(t)\bigg)^{\b} \\ & \le \sum_{k = - \infty}^M \int_{y_{k-2}}^{y_k} \bigg( \int_x^{\infty} U(t)^{-1}g(t)\,dt\bigg)^{\b - 1} \bigg( \sup_{t \in (0,x)} U(t)h(t)\bigg)^{\b} U(x)^{-1}g(x)\,dx \\ & \ls \int_0^{\infty} \bigg( \int_x^{\infty} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \bigg( \sup_{t \in (0,x)} U(t)h(t)\bigg)^{\b}\,U(x)^{-1}g(x)\,dx = A_2. \end{align*} For $II$ we have that \begin{align*} II & \ls \sum_{k = - \infty}^M \int_{y_{k-4}}^{y_{k-2}} \bigg( \int_{y_{k-4}}^x g(t)\,dt\bigg)^{\b - 1} g(x)\,dx \cdot \bigg( U(y_k)^{-1} \sup_{t \in (y_{k-2},y_k)} U(t)h(t)\bigg)^{\b} \\ & \le \sum_{k = - \infty}^M \int_{y_{k-4}}^{y_{k-2}} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} g(x)\,dx \cdot \, \bigg(\sup_{t \in (y_{k-2},\infty)} h(t)\bigg)^{\b} \\ & \le \sum_{k = - \infty}^M \int_{y_{k-4}}^{y_{k-2}} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg(\sup_{t \in (x,\infty)} h(t)\bigg)^{\b} g(x)\,dx \\ & \ls \int_0^{\infty} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg(\sup_{t \in (x, \infty)} h(t)\bigg)^{\b} \,g(x)dx = A_1. \end{align*} Combining, we get that $B_1 \ls A_1 + A_2$. Now we show that $B_2 \ls A_1 + A_2$. 
We will consider the case when $\int_0^{\infty} U(t)^{-1}g(t)\,dt < \infty$ (It is much simpler to deal with the case when $\int_0^{\infty} U(t)^{-1}g(t)\,dt = \infty$). Define a sequence $\{x_k\}_{k = N}^{\infty}$ such that $2^{-k} = \int_{x_k}^{\infty} U(\tau)^{-1} g(\tau)\,d\tau$ if $N \le k < \infty$ and $2^{-N} < \int_0^{\infty} U(\tau)^{-1} g(\tau)\,d\tau \le 2^{- N + 1}$. By using elementary calculations, we find that \begin{align*} B_2 & \le \int_0^{\infty} \bigg( \int_x^{\infty} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \bigg(\sup_{y \in (0,x)} U(y)\sup_{t \in (y,\infty)} h(t)\bigg)^{\b}\,U(x)^{-1}g(x)dx \\ & \ap \sum_{k = N}^{\infty} 2^{- k \b} \bigg(\sup_{y \in (0,x_k)} U(y)\sup_{t \in (y,\infty)} h(t)\bigg)^{\b} \\ & \ap \sum_{k = N}^{\infty} 2^{- k \b} \bigg(\sup_{y \in (x_{k-1},x_k)} U(y)\sup_{t \in (y,\infty)} h(t)\bigg)^{\b}. \end{align*} For every $k = N,\, N+1,\ldots$ there exists $y_k \in (x_{k-1},x_k)$ such that $$ \sup_{y \in (x_{k-1},x_k)} U(y)\sup_{t \in (y,\infty)} h(t) \le 2 U(y_k) \sup_{t \in (y_k,\infty)} h(t). $$ Hence \begin{align*} B_2 & \ls \sum_{k = N}^{\infty} 2^{- k \b} \bigg( U(y_k) \sup_{t \in (y_k,\infty)} h(t) \bigg)^{\b} \\ & \ap \sum_{k = N}^{\infty} 2^{- k \b} \bigg( U(y_k) \sup_{t \in (y_k,y_{k+2})} h(t) \bigg)^{\b} + \sum_{k = N}^{\infty} 2^{- k \b} \bigg( U(y_k) \sup_{t \in (y_{k+2},\infty)} h(t)\bigg)^{\b} = : III + IV. 
\end{align*}
Since $2^{-k-1} \le \int_{y_k}^{\infty} U(\tau)^{-1} g(\tau)\,d\tau \le 2^{-k}$ and $2^{-k-2} \le \int_{y_k}^{y_{k+2}} U(\tau)^{-1} g(\tau)\,d\tau \le 2^{-k}$, $k=N,\,N+1,\ldots$, we have that
\begin{align*}
III & \ls \sum_{k = N}^{\infty} \int_{y_{k+2}}^{y_{k+4}} \bigg( \int_x^{y_{k+4}} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \,U(x)^{-1}g(x)\,dx \cdot \bigg( U(y_k) \sup_{t \in (y_k,y_{k+2})} h(t) \bigg)^{\b} \\
& \le \sum_{k = N}^{\infty} \int_{y_{k+2}}^{y_{k+4}} \bigg( \int_x^{\infty} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \bigg( \sup_{t \in (0,x)} U(t)h(t) \bigg)^{\b}\,U(x)^{-1}g(x)\,dx \\
& \ls \int_0^{\infty} \bigg( \int_x^{\infty} U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \bigg( \sup_{t \in (0,x)} U(t)h(t)\bigg)^{\b}\,U(x)^{-1}g(x)\,dx \ap A_2.
\end{align*}
Moreover,
\begin{align*}
IV & \ls \sum_{k = N}^{\infty} \int_{y_k}^{y_{k+2}} \bigg( \int_{y_k}^x U(\tau)^{-1} g(\tau)\,d\tau\bigg)^{\b - 1} \,U(x)^{-1}g(x)dx \cdot \bigg( U(y_k) \sup_{t \in (y_{k+2},\infty)} h(t)\bigg)^{\b} \\
& \le \sum_{k = N}^{\infty} \int_{y_k}^{y_{k+2}} \bigg( \int_{y_k}^x g(\tau)\,d\tau\bigg)^{\b - 1} g(x)\,dx \cdot \bigg( \sup_{t \in (y_{k+2},\infty)} h(t)\bigg)^{\b} \\
& \le \sum_{k = N}^{\infty} \int_{y_k}^{y_{k+2}} \bigg( \int_0^x g(\tau)\,d\tau\bigg)^{\b - 1} \bigg( \sup_{t \in (x,\infty)} h(t)\bigg)^{\b} g(x)\,dx \\
& \ls \int_0^{\infty} \bigg( \int_0^x g(t)\,dt\bigg)^{\b - 1} \bigg(\sup_{t \in (x, \infty)} h(t)\bigg)^{\b} \,g(x)dx \ap A_1.
\end{align*}
Therefore, we obtain $B_2 \ls A_1 + A_2$. The proof is complete.
\end{proof}
\begin{thm}\label{maintheorem4}
Let $0<q_1, q_2<\i$ and $0 < p<q_2$. Let $v_1, v_2\in \W\I$, $u_1 \in \dual{\O_{q_1}}$ and $u_2\in \O_{q_2}$. Assume that $v\in \W\I \cap C\I$ and $0< \|u_2^{-1}\|_{q_2 \rw p,(x,\infty)}<\i,\, x > 0$.
{\rm (i)} If $q_1\le q_2$, then
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1)\rw \ces_{p_2,q_2}(u_2,v_2)} \ap & \, \sup_{x\in \I} \vp_2(x) \, \sup_{t\in (0,\infty)} {\mathcal V} (t,x) \,\|u_2\|_{q_2,(t,\infty)} \\
& + \|u_1\|_{q_1,\I}^{-1} \sup_{t \in \I} V(t) \|u_2\|_{q_2,(t,\infty)}.
\end{align*}
{\rm (ii)} If $q_2 < q_1$, then
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1) \rw \ces_{p_2,q_2}(u_2,v_2)} & \\
& \hspace{-3.5cm} \ap \bigg( \int_0^\i \vp_2(x)^{\frac{q_1 \rw q_2 \cdot q_1 \rw p}{q_2 \rw p}} V(x)^{q_1 \rw p} \bigg(\sup_{t\in(0,\infty)} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}\bigg)^{q_1 \rw q_2} d \bigg( - \|u_1\|_{q_1,(0,x)}^{- q_1 \rw p}\bigg) \bigg)^{\frac{1}{q_1 \rw q_2}}\\
& \hspace{-3cm} + \|u_1\|_{q_1,\I}^{-1} \sup_{t \in \I} V(t) \|u_2\|_{q_2,(t,\infty)}.
\end{align*}
\end{thm}
\begin{proof}
By Lemma \ref{mainlemma}, applying Theorem~[\ref{Emb-Cop-Lp}, (ii)], we get that
\begin{align*}
\|\Id\|_{\cop_{p_1,q_1}(u_1,v_1) \rw \ces_{p_2,q_2}(u_2,v_2)} \ap & \|u_1\|_{q_1,\I}^{-1} \left\{\sup_{g\in \M^+\I} \ddfrac{ \|H^*g\|_{\infty,v^p,\I}} {\|g\|_{\frac{q_2}{q_2-p},u_2^{-p},\I}} \right\}^{\frac{1}{p}}\\
& + \left\{\sup_{g\in \M^+\I} \ddfrac{\bigg(\int_0^\i \|H^*g\|_{\i,v^p,(0,t)}^{\frac{q_1}{q_1-p}} d \bigg(- \|u_1\|_{q_1,(0,t)}^{-\frac{q_1 p}{q_1 - p}}\bigg)\bigg)^{\frac{q_1-p}{q_1}}} {\|g\|_{\frac{q_2}{q_2-p},u_2^{-p},\I}} \right\}^{\frac{1}{p}}\\
:= & C_3 + C_4.
\end{align*}
Note that
$$
C_3 = \|u_1\|_{q_1,\I}^{-1} \left[\|\Id\|_{L_{\frac{q_2}{q_2-p}}\big(u_2^{-p}\big)\rw \cop_{1, \i}(v^p, {\bf 1})}\right]^{\frac{1}{p}}.
$$
Using Theorem~[\ref{Emb-Lp-Cop}, (i)], we have that
\begin{equation*}
C_3 \ap \|u_1\|_{q_1,\I}^{-1} \sup_{t \in \I} V(t) \|u_2\|_{q_2,(t,\infty)}.
\end{equation*}
{\rm (i)} Let $q_1 \le q_2$, then Theorem~[\ref{IterHarSupremal.1}, (i)] yields that
\begin{equation*}
C_4 \ap \sup_{x\in \I} \bigg(\int_0^\i [{\mathcal V}(x,t)V(t)]^{q_1 \rw p} d \bigg( - \|u_1\|_{q_1,(0,t)}^{- q_1 \rw p}\bigg) \bigg)^{\frac{1}{q_1 \rw p}} \|u_2\|_{q_2,(x,\infty)}.
\end{equation*}
Since $\vp_2 / V$ is equivalent to a decreasing function, we have that
\begin{align*}
\sup_{x\in \I} \vp_2(x) \|u_2\|_{q_2,(x,\infty)} & = \sup_{x\in \I} \vp_2(x) V(x)^{-1}\, \sup_{t\in (0,x)} V(t) \,\|u_2\|_{q_2,(t,\infty)} \\
& = \sup_{x\in \I} \vp_2(x) \, \sup_{t\in (0,\infty)} {\mathcal V} (t,x) \,\|u_2\|_{q_2,(t,\infty)}.
\end{align*}
{\rm (ii)} Let $q_2 < q_1$, then Theorem~[\ref{IterHarSupremal.1}, (ii)] yields that
\begin{align*}
C_4 \ap & \bigg(\int_0^{\infty}\bigg(\int_x^\i d \bigg(- \|u_1\|_{q_1,(0,t)}^{- q_1 \rw p}\bigg)\bigg)^{\frac{q_1 \rw q_2}{q_2 \rw p}} \bigg( \sup_{0 < \tau \le x} V(\tau) \|u_2\|_{q_2,(\tau,\infty)}\bigg)^{q_1 \rw q_2} d \bigg( - \|u_1\|_{q_1,(0,x)}^{- q_1 \rw p}\bigg) \bigg)^{\frac{1}{q_1 \rw q_2}} \\
& +\bigg(\int_0^\i \bigg(\int_0^x V(t)^{q_1 \rw p} d \bigg( - \|u_1\|_{q_1,(0,t)}^{- q_1 \rw p}\bigg)\bigg)^{\frac{q_1 \rw q_2}{q_2 \rw p}} V(x)^{q_1 \rw p} \|u_2\|_{q_2,(x,\infty)}^{q_1 \rw q_2} d \bigg( - \|u_1\|_{q_1,(0,x)}^{- q_1 \rw p}\bigg)\bigg)^{\frac{1}{q_1 \rw q_2}}\\
\ap & \bigg( \int_0^\i \vp_2(x)^{\frac{q_1 \rw q_2 \cdot q_1 \rw p}{q_2 \rw p}} V(x)^{q_1 \rw p} \bigg(\sup_{t\in(0,\infty)} {\mathcal V}(t,x) \|u_2\|_{q_2,(t,\infty)}\bigg)^{q_1 \rw q_2} d \bigg( - \|u_1\|_{q_1,(0,x)}^{- q_1 \rw p}\bigg) \bigg)^{\frac{1}{q_1 \rw q_2}}.
\end{align*}
In the last equivalence we have used Lemma \ref{gluing.lem} with
$$
u(x) = V(x)^{q_1 \rw p - 1}v(x), ~ g(t)dt = V(t)^{q_1 \rw p}d \bigg(- \|u_1\|_{q_1,(0,t)}^{- q_1 \rw p}\bigg),~ \b = \frac{q_1 \rw q_2}{q_1 \rw p} ~\mbox{and}~ h(t) = \|u_2\|_{q_2,(t,\infty)}^{q_1 \rw p}.
$$ It is clear that $U(x) \ap V(x)^{q_1 \rw p}$ and ${\mathcal U}(x,t) \ap {\mathcal V}(x,t)^{q_1 \rw p}$. \end{proof} \end{document}
\section{Motivation} Historically, the gray wolf and coyote populations have coexisted at Yellowstone National Park (YNP) \cite{Merkle:2009aa}. However, because of their predation on farmers' livestock and the negative connotation associated with such predators, they became the focus of predator control programs in the late 1800s and early 1900s \cite{Hayward:2009aa,Fritts:1997aa}. These programs utilized various methods to accelerate the extirpation of the species, including hunting, poisoning and the introduction of the parasitic mite \textit{Sarcoptes scabiei} by state veterinarians \cite{Weaver:1978aa,Merkle:2009aa}. The predator control programs aimed at eliminating both the wolf and coyote populations were only partially successful. Between 1914 and 1926 a total of 136 wolves were killed inside YNP \cite{Weaver:1978aa}, and by 1930 wolves were completely eliminated from the park \cite{Merkle:2009aa}. Despite similar persecution, by the end of 1930 there were still 400 coyotes present at YNP \cite{Murie:1940aa}. In an attempt to reintroduce the now threatened and endangered gray wolf into its natural habitat and restore the original ecosystem of YNP, a total of forty-one wolves were transported from Alberta, Canada to Yellowstone National Park between 1995 and 1996, after an absence of more than 60 years \cite{Smith:2007aa,Fritts:1997aa}. The reintroduction was initially successful, and the total number of wolves increased steadily, with the wolf growth rate averaging about 17\% a year \cite{Hayward:2009aa}, until it reached a high of 174 wolves at the end of 2003 \cite{Smith:2011aa}. However, within the last 9 years there has been a decline in the wolf population, with the northern range population decreasing by 60\% since 2007 and the interior range population decreasing by 23\% over the same period. The decrease in the population was rapid, and is suggestive of disease-induced death \cite{Smith:2011aa}. 
The wolves at YNP have been affected by many diseases, including canine distemper virus (CDV), canine herpes virus (CHV), canine papovirus, \textit{Brucella canis} and sarcoptic mange \cite{Smith:2007aa,Almberg:2012aa}. While all of these diseases have affected the wolf, in this study we focus on the effects of sarcoptic mange on the wolf population, as the effects of the other diseases have been studied previously \cite{Smith:2007aa,Almberg:2010aa,Almberg:2012aa}. Moreover, since the disease was initially introduced as a control measure for the wolf population in 1914, park management should consider treatment in cases of extreme infection \cite{Smith:2007aa}. Sarcoptic mange is a highly contagious skin disease caused by the parasitic mite \textit{Sarcoptes scabiei}, which burrows into the epidermis of the host species \cite{Jimenez:2010aa,Polley:2002aa}. Transmission of the mite occurs through direct contact and through contact with infected areas such as dens \cite{Jimenez:2010aa}. However, the pathogens can survive off the host for days and sometimes weeks under certain microclimate conditions at the drop-off site \cite{Arlian:1989aa}. The wolves experience an allergic response to the waste secreted by the mites, which causes irritation and pruritus, and leaves the infected animals suffering from alopecia, hyperkeratosis, seborrhea, scabs, ulcerations and lesions \cite{Jimenez:2010aa,Almberg:2009aa}. In severe cases the disease can affect the host's overall health, leading to poor body condition and leaving the animal susceptible to secondary infections, or to hypothermia in the winter due to the hair loss \cite{Jimenez:2010aa}. Moreover, some research suggests that wolves suffering from sarcoptic mange may change their social behavior: weak and afflicted wolves have been observed choosing to leave their pack and travel alone, and such wolves are unlikely to survive, especially in the winter \cite{Jimenez:2010aa,Wydeven:2003aa}. 
Sarcoptic mange was first observed in the reintroduced wolves at YNP in 2003, when a wolf was sighted at Daly Creek with hair loss \cite{Smith:2004aa}. Some populations, like the coyote, can survive a sarcoptic mange epizootic. It is important to note that while coyotes do survive the epizootic, sarcoptic mange does reduce ovulation and pregnancy rates in coyotes, as well as increasing mortality rates by $\sim$70\% \cite{Pence:1994aa}. The effects on the population dynamics of gray wolves, however, should be studied and monitored closely, since the disease is considered a threat to small, recovering populations \cite{Almberg:2012aa}. Another major threat to the recovering gray wolf population is humans. Although gray wolves are not hunted within the park, they have still been affected by vehicular death, park management actions, legal kills (i.e., if a wolf has killed a farmer's livestock) and illegal hunting. Between 1995 and 2003, 38\% of reported wolf deaths at Yellowstone were human-related \cite{Smith:2003ab}. Because of the high number of deaths attributed to human activity, human-caused mortality needs to be considered in order to understand the population dynamics of wolves. The goal of this study is to understand how human-related mortality, disease, and other factors may contribute to the decline in population size of a dominant predator, and so we also consider a subordinate predator, the coyote. This sympatric predator has thrived at Yellowstone National Park despite facing similar persecution and environmental factors \cite{Merkle:2009aa}. 
We develop and analyze a mathematical model of two competing species that are affected by a host-specific disease and experience human-related mortality, in order to gain insight into why the dominant predator in the YNP ecosystem, the wolf, is seemingly unable to sustain a stable population while the less dominant predator, the coyote, has thrived \cite{Merkle:2009aa,Berger:2007aa}. Because wolves and coyotes compete for ungulate carcasses and habitat \cite{Merkle:2009aa}, we consider interference competition between the species in our model. Several models have informed the development of the framework presented in this paper. Models of two competing species with an infectious disease have been developed both by Han and Pugliese \cite{Han:2009aa} and by Saenz and Hethcote \cite{Saenz:2006aa}. Our model differs from both of these models in multiple ways, including the inclusion of human-related mortality. It is markedly different from the model developed by Han and Pugliese in that we assume competition acts not only upon the death rate but also upon the birth rate. We also refrain from considering interspecies transmission of disease, as the two previous models do, since research has suggested that \textit{S. scabiei} shows a high degree of host-specificity \cite{Pence:2002aa}; moreover, for social animals like wolves and coyotes, intraspecies transmission will likely be much higher than interspecies transmission \cite{Almberg:2012aa}, so the latter can be assumed negligible. Since the reintroduction of gray wolves has benefited the ecosystem by, for example, regulating the size of various species of ungulates and of coyotes \cite{Fortin:2005aa,Switalski:2003aa,Berger:2007aa}, it is important to ensure the continued success of the reintroduction. 
By studying how different factors could lead to extinction of a dominant predator in an ecosystem, we hope to provide insight to park management about some of the factors that could be contributing to the decline of gray wolves at Yellowstone National Park. \section{Competing Species with Infectious Disease and Human-Related Mortality Model} We consider a model of two competing species based on the competing-species-with-infectious-disease models developed by Han and Pugliese, and by Saenz and Hethcote, with the addition of human-caused mortality. It is important to note that although wolves do kill coyotes, coyote mortality due to wolves is low \cite{Berger:2007aa}, and it is therefore not included in our model. Moreover, we assume a disease which displays host-specificity, since our motivation is the \textit{S. scabiei} mite \cite{Pence:2002aa}; therefore we assume no interspecies transmission of the disease. We also assume that the disease has no effect on birth rates. Although research has shown a reduction in reproduction as a result of sarcoptic mange for coyotes \cite{Pence:1994aa} and for wolves \cite{Smith:2009aa}, we reduce the complexity of our model by initially assuming no effect of mange on reproduction or pup survival. Also, while some research suggests that some mammals may develop temporary immunity to sarcoptic mange \cite{Pence:1994aa,Polley:2002aa}, no conclusive argument has been reached on the existence or length of this immunity \cite{Pence:2002aa}, and we therefore exclude a recovery class from our model. Thus we model the progression of the disease using an SIS approach for each species. 
The coupled model is listed below: \begin{align} \frac{dS_1}{dt} &= \alpha_1 N_1 \left(1-\frac{N_1+\omega_{12}N_2}{k_1}\right) + \gamma_1 I_1 - \eta_1 S_1 - \beta_{1} S_1 I_1 \nonumber\\ \frac{dI_1}{dt} &= \beta_{1} S_1 I_1 - G_1 I_1 \nonumber \\ \frac{dS_2}{dt} &= \alpha_2 N_2 \left(1-\frac{N_2+\omega_{21}N_1}{k_2}\right) + \gamma_2 I_2 - \eta_2 S_2 - \beta_{2} S_2 I_2 \nonumber \\ \frac{dI_2}{dt} &= \beta_{2} S_2 I_2 - G_2 I_2 \end{align} \noindent where \begin{align} G_1 &= \eta_1 + \delta_1 + \gamma_1 \nonumber \\ G_2 &= \eta_2 + \delta_2 + \gamma_2 \end{align} A compartmental diagram of the model is shown in Figure \ref{fig:compartmental_model}. The total population density of species 1 is given by $N_1 = S_1 + I_1$. Species 1 grows with an intrinsic growth rate $\alpha_1$, and is limited by a carrying capacity $k_1$. The inhibitory effect of species 1 on itself is represented by $\frac{1}{k_1}$, and the inhibitory effect of species 2 on species 1 is represented by $\frac{\omega_{12}}{k_1}$. The competition coefficient $\omega_{12}$ is defined as the degree to which an individual of one species affects, through competition, the growth of the second species \cite{Schoener:1974aa}. Individuals of species 1 die from human activities at a per capita rate $\eta_1$, and are infected by the disease with transmission potential $\beta_1$. They recover from the disease at rate $\gamma_1$ and die from the disease at rate $\delta_1$. Analogous classes and parameters exist for species 2. A summary of the class and parameter definitions, and their values, is given in Table \ref{parameter_definitions}. 
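The model above can be explored numerically. Below is a minimal sketch that integrates the system with a classical fixed-step Runge-Kutta scheme; the parameter values and initial conditions are illustrative choices for a dominant/subordinate predator pair, not values fitted to Yellowstone data.

```python
def model_rhs(y, p):
    """Right-hand side of the coupled model: two competing species,
    each with S and I classes, interference competition and
    human-caused mortality."""
    S1, I1, S2, I2 = y
    N1, N2 = S1 + I1, S2 + I2
    G1 = p["eta1"] + p["delta1"] + p["gamma1"]
    G2 = p["eta2"] + p["delta2"] + p["gamma2"]
    dS1 = (p["alpha1"] * N1 * (1 - (N1 + p["w12"] * N2) / p["k1"])
           + p["gamma1"] * I1 - p["eta1"] * S1 - p["beta1"] * S1 * I1)
    dI1 = p["beta1"] * S1 * I1 - G1 * I1
    dS2 = (p["alpha2"] * N2 * (1 - (N2 + p["w21"] * N1) / p["k2"])
           + p["gamma2"] * I2 - p["eta2"] * S2 - p["beta2"] * S2 * I2)
    dI2 = p["beta2"] * S2 * I2 - G2 * I2
    return [dS1, dI1, dS2, dI2]


def rk4(f, y0, p, t_end, dt=0.05):
    """Classical fourth-order Runge-Kutta integrator with a fixed step."""
    y, t = list(y0), 0.0
    while t < t_end:
        k1 = f(y, p)
        k2 = f([yi + 0.5 * dt * ki for yi, ki in zip(y, k1)], p)
        k3 = f([yi + 0.5 * dt * ki for yi, ki in zip(y, k2)], p)
        k4 = f([yi + dt * ki for yi, ki in zip(y, k3)], p)
        y = [yi + dt * (a + 2 * b + 2 * c + d) / 6
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += dt
    return y


# Hypothetical parameter set (species 1 = dominant predator).
params = {"alpha1": 0.5, "alpha2": 0.6, "k1": 100.0, "k2": 200.0,
          "w12": 0.1, "w21": 0.2, "gamma1": 0.1, "gamma2": 0.1,
          "eta1": 0.05, "eta2": 0.05, "delta1": 0.1, "delta2": 0.1,
          "beta1": 0.005, "beta2": 0.005}

state = rk4(model_rhs, [50.0, 1.0, 80.0, 1.0], params, t_end=50.0)
```

With these illustrative values both basic reproduction numbers exceed one ($R_1 = 1.8$, $R_2 \approx 3.67$), so the infection persists in both species.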
\begin{center} \begin{figure}[H] \includegraphics[scale = 0.65]{./compartmental_model} \caption{Compartmental Model} \label{fig:compartmental_model} \end{figure} \end{center} \begin{table}[H] \centering \caption{Parameters and Classes} \label{parameter_definitions} \begin{tabular}{|l|l|l|} \hline Class & Definition & \\ \hline $N_1$ & Total population of species 1 & \\ $N_2$ & Total population of species 2 & \\ $S_1$ & Susceptibles of species 1 & \\ $S_2$ & Susceptibles of species 2 & \\ $I_1$ & Infected of species 1 & \\ $I_2$ & Infected of species 2 & \\ \hline Parameter & Definition & Unit \\ \hline $k_1$ & Species 1 carrying capacity & species 1 \\ $k_2$ & Species 2 carrying capacity & species 2 \\ $\alpha_1$ & Intrinsic growth rate of species 1 & species 1/time \\ $\alpha_2$ & Intrinsic growth rate of species 2 & species 2/time\\ $\omega_{12}$ & Competition coefficient & species 1/species 2 \\ $\omega_{21}$ & Competition coefficient & species 2/species 1 \\ $\gamma_1$ & Per capita recovery rate for species 1 & 1/time \\ $\gamma_2$ & Per capita recovery rate for species 2 & 1/time \\ $\eta_1$ & Per capita death rate of species 1 by humans & 1/time \\ $\eta_2$ & Per capita death rate of species 2 by humans & 1/time \\ $\delta_1$ & Per capita disease death rate of species 1 & 1/time \\ $\delta_2$ & Per capita disease death rate of species 2 & 1/time \\ $\beta_{1}$ & Transmission coefficient for species 1 & 1/time \\ $\beta_{2}$ & Transmission coefficient for species 2 & 1/time \\ \hline \end{tabular} \end{table} \section{Analysis of Competing Two Predators Model in the Presence of Disease} Using the Next-Generation Matrix \cite{Diekmann:1990aa}, we find that the basic reproduction number $R_0$ for the entire system is $$R_0 = \max (R_1, R_2)$$ \noindent where $R_1$ and $R_2$ are the basic reproduction numbers for species 1 and species 2, respectively, defined as \begin{align*} R_1 = \frac{\beta_1 k_1 (\alpha_1 - \eta_1)}{G_1 \alpha_1 } 
\\ R_2 = \frac{\beta_2 k_2 (\alpha_2 - \eta_2)}{G_2 \alpha_2 } \end{align*} Analyzing $R_1$, we see that the threshold for species 1 depends on the product of the transmission coefficient $\beta_1$ and the average infectious period $\frac{1}{G_1}$, multiplied by the number of susceptibles of species 1 at equilibrium when there is no infection. We arrive at similar conclusions if we analyze $R_2$. We note that because the basic reproduction number cannot be negative, we impose the restriction on our system that $\eta_1 < \alpha_1$ and $\eta_2 < \alpha_2$, i.e., that the per capita rate at which each species is hunted is not greater than its intrinsic growth rate. Setting the right-hand side of the system to zero, we found that there are at least five equilibrium points, namely the extinction state $(E_0)$, two one-host disease-free states $(E_1,E_2)$, two one-host endemic states $(E_3,E_4)$ and a two-host disease-free state $(E_5)$. We found the coexistence endemic equilibrium to be algebraically intractable. \subsection{Trivial Equilibrium Point} The trivial equilibrium point is $$E_0 = (0,0,0,0)$$ \noindent Two of the eigenvalues of the Jacobian $J$ of the system evaluated at $E_0$ are always negative ($-\eta_1$ and $-\eta_2$). The other two eigenvalues are negative only when the following inequalities hold: $$\alpha_1 < \eta_1 \text{ and } \alpha_2 < \eta_2$$ \noindent However, these cannot hold, because the basic reproduction numbers would then be negative, which is not possible. Therefore the trivial equilibrium is unstable. \subsection{One-Host Disease Free Equilibrium and Stability} \noindent For each of the species there is a one-host disease-free equilibrium point. These two equilibrium points are \begin{align} E_1 &= \left(\frac{k_1}{{\alpha_1} }\left(\alpha_1 - \eta_1\right),0,0,0 \right) \nonumber \\ E_2 &= \left(0,0,\frac{k_2}{\alpha_2} \left(\alpha_2 - \eta_2 \right),0\right) \nonumber \end{align} \noindent For $E_1$ to exist, we need $S^{*}_1 > 0$. 
So we need \begin{align*} \frac{k_1}{\alpha_1} \left(\alpha_1 - \eta_1 \right) > 0 \end{align*} \noindent which always holds, since $\eta_1 < \alpha_1$. Two of the eigenvalues of the Jacobian evaluated at $E_1$ are always negative ($-G_2$ and $(\eta_1 - \alpha_1)$). Hence we need the other two eigenvalues to be negative in order to have local asymptotic stability; the corresponding inequalities are as follows: \begin{align} -G_1 + \frac{\beta_1 k_1}{\alpha_1} \left(\alpha_1 - \eta_1 \right) &< 0 \label{eqpt1:eigenvalue1} \\ \frac{\alpha_2 k_1 \omega_{21} (\eta_1 - \alpha_1) + \alpha_1 k_2 (\alpha_2 - \eta_2)}{\alpha_1 k_2} &<0 \label{eqpt1:eigenvalue2} \end{align} \noindent Rearranging inequality \eqref{eqpt1:eigenvalue1}, we get that \begin{align*} \frac{k_1}{\alpha_1} \left(\alpha_1 - \eta_1 \right) &< \frac{G_1}{\beta_1} \\ R_1 &< 1 \end{align*} \noindent If $R_1 < 1$ then the disease will die out in species 1. We derive the second condition for stability from inequality \eqref{eqpt1:eigenvalue2} as follows: \begin{align*} \frac{k_1}{\alpha_1} (\eta_1 - \alpha_1) \cdot \frac{\alpha_2 \omega_{21}}{k_2} + (\alpha_2 - \eta_2) &< 0 \\ \frac{k_2 (\alpha_2 - \eta_2)}{\alpha_2 \omega_{21}} &< S^{*}_1 \end{align*} \noindent which means that the number of susceptibles of species 1 that remain when there is no infection must exceed a multiple, determined by the competition coefficient $\omega_{21}$, of the disease-free equilibrium level of species 2. We can continue to rearrange the above inequality so that \begin{align*} \frac{\beta_1 k_2 (\alpha_2 - \eta_2)}{G_1 \alpha_2 \omega_{21}} &< R_1 \\ \frac{\beta_1 k_2 (\alpha_2 - \eta_2)}{G_1 \alpha_2 \omega_{21}} \cdot \frac{\beta_2}{G_2} \cdot \frac{G_2}{\beta_2} &< R_1 \\ \frac{\beta_1 G_2}{G_1 \beta_2 \omega_{21}} R_2 &< R_1 \end{align*} \noindent Uniting these conditions, we get that $E_1$ is locally asymptotically stable if $$\frac{\beta_1 G_2}{G_1 \beta_2 \omega_{21}} R_2 < R_1 < 1$$ \noindent Similar conditions can be defined for $E_2$. 
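These thresholds are straightforward to evaluate numerically. The sketch below computes $R_1$ and $R_2$ and checks the local stability condition for $E_1$ derived above; the parameter values are hypothetical, chosen only so that both reproduction numbers fall below one.

```python
def reproduction_numbers(p):
    """R_1 and R_2 as obtained from the next-generation matrix."""
    G1 = p["eta1"] + p["delta1"] + p["gamma1"]
    G2 = p["eta2"] + p["delta2"] + p["gamma2"]
    R1 = p["beta1"] * p["k1"] * (p["alpha1"] - p["eta1"]) / (G1 * p["alpha1"])
    R2 = p["beta2"] * p["k2"] * (p["alpha2"] - p["eta2"]) / (G2 * p["alpha2"])
    return R1, R2


def e1_locally_stable(p):
    """E_1 is locally asymptotically stable iff
    (beta1*G2)/(G1*beta2*w21) * R2 < R1 < 1."""
    G1 = p["eta1"] + p["delta1"] + p["gamma1"]
    G2 = p["eta2"] + p["delta2"] + p["gamma2"]
    R1, R2 = reproduction_numbers(p)
    lower = (p["beta1"] * G2) / (G1 * p["beta2"] * p["w21"]) * R2
    return lower < R1 < 1.0


# Hypothetical parameters with both reproduction numbers below one.
params = {"alpha1": 0.5, "alpha2": 0.6, "k1": 100.0, "k2": 50.0,
          "w21": 2.0, "gamma1": 0.1, "gamma2": 0.1,
          "eta1": 0.05, "eta2": 0.05, "delta1": 0.1, "delta2": 0.1,
          "beta1": 0.001, "beta2": 0.001}

R1, R2 = reproduction_numbers(params)
```

For this parameter set $R_1 = 0.36$ and $R_2 \approx 0.183$, and the stability condition for $E_1$ is satisfied.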
\subsection{One-Host Endemic Equilibrium} There are two one-host endemic equilibrium points, one for each of the species in our system. These two equilibrium points are as follows: \begin{align*} E_3 &= \left(\frac{G_1}{\beta_1},I^{*}_1,0,0 \right) \\ E_4 &= \left(0,0,\frac{G_2}{\beta_2},I^{*}_2 \right) \end{align*} \noindent where \begin{align} A &= \frac{k_1 \beta_1 \left(\alpha_1 + \gamma_1 - G_1\right) - 2 G_1 \alpha_1 \pm \sqrt{\beta_1 k_1 \left[\beta_1 k_1 \left(G_1 - \alpha_1 - \gamma_1\right)^2 + 4 G_1 \alpha_1 \delta_1\right]}}{2 \alpha_1 \beta_1} \label{A}\\ B &= \frac{k_2 \beta_2 \left(\alpha_2 + \gamma_2 - G_2\right) - 2 G_2 \alpha_2 \pm \sqrt{\beta_2 k_2 \left[\beta_2 k_2 \left(G_2 - \alpha_2 - \gamma_2\right)^2 + 4 G_2 \alpha_2 \delta_2\right]}}{2 \alpha_2 \beta_2} \label{B} \end{align} \noindent and $I^{*}_1$ is the positive value of $A$ and $I^{*}_2$ is the positive value of $B$. We have shown in Appendix \ref{unique_pos} that only one of the two values of $A$ (and, likewise, of $B$) is positive at any given time. Because the stability of $E_3$ and $E_4$ is algebraically intractable, we have run numerical simulations to determine the behavior of our system around these equilibrium points and have found them to be locally stable. 
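Numerically, $I^{*}_1$ can be recovered as a root of the quadratic obtained by substituting $S^{*}_1 = G_1/\beta_1$ into $dS_1/dt = 0$ with species 2 absent, which is the quadratic whose roots are given by \eqref{A}. The sketch below uses hypothetical parameter values with $R_1 > 1$ and confirms that exactly one root is positive.

```python
import math

def endemic_roots(alpha, k, beta, eta, delta, gamma):
    """Both roots of the quadratic for I* obtained by substituting
    S* = G/beta into dS/dt = 0 for a single species (no competitor)."""
    G = eta + delta + gamma
    s = G / beta  # susceptible level at the endemic state
    # Quadratic alpha*I^2 + b*I + c = 0:
    b = 2.0 * s * alpha + k * (eta + delta - alpha)
    c = alpha * s * (s - k * (alpha - eta) / alpha)
    disc = b * b - 4.0 * alpha * c
    root = math.sqrt(disc)
    return (-b + root) / (2.0 * alpha), (-b - root) / (2.0 * alpha)


# Hypothetical species-1 parameters giving R_1 = 3.6 > 1.
i_plus, i_minus = endemic_roots(alpha=0.5, k=100.0, beta=0.01,
                                eta=0.05, delta=0.1, gamma=0.1)
```

For these values the two roots are approximately $51.5$ and $-31.5$, so $I^{*}_1 \approx 51.5$, consistent with only one root being positive.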
\subsection{Two-Host Disease Free Equilibrium } Our two-host disease-free equilibrium is as follows: $$E_5 = \left( \frac{\alpha_1 k_2 \omega_{12} (\eta_2 - \alpha_2) + \alpha_2 k_1 (\alpha_1 - \eta_1)}{\alpha_2 \alpha_1 (1-\omega_{12} \omega_{21})}, 0, \frac{\alpha_2 k_1 \omega_{21} (\eta_1 - \alpha_1) + \alpha_1 k_2 (\alpha_2 - \eta_2)}{\alpha_2 \alpha_1 (1-\omega_{12} \omega_{21})},0 \right) $$ \noindent and it will exist if $$\eta_1 <\alpha_1\text{ and }\eta_2 < \alpha_2 \text{ and } \omega_{12}\omega_{21} < 1,$$ \\ \noindent which means that the intrinsic growth rate of each population is greater than the rate at which it is hunted, and that the competition between the species is relatively low, so that neither species monopolizes the shared resources. We are unable to determine conditions for stability using either the eigenvalues of the Jacobian evaluated at $E_5$ or the Routh-Hurwitz criterion (see Appendix \ref{app:one-host_DFE}). We have run numerical simulations to determine the stability around the two-host disease-free equilibrium and have found it to be locally stable. \subsection{Coexistent Endemic Equilibrium} The expression for the coexistent endemic equilibrium is algebraically intractable. It is possible to have several endemic equilibria with mass-action disease transmission \cite{Bokil:2010aa}. Therefore we use numerical simulations to determine the behavior of the system as it approaches the coexistent endemic equilibrium. \section{Numerical Simulations} In order to understand the competitive dynamics of predators, we assume the dominance of one species over the other. In the scenarios below, species 1 is the dominant predator and species 2 is the subordinate predator. 
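The components of $E_5$ can be checked directly. The sketch below uses hypothetical parameter values satisfying the three existence conditions and verifies that the computed point makes both disease-free equations vanish.

```python
def two_host_dfe(p):
    """Susceptible components (S1*, S2*) of the coexistence
    disease-free equilibrium E_5."""
    a1, a2 = p["alpha1"], p["alpha2"]
    k1, k2 = p["k1"], p["k2"]
    e1, e2 = p["eta1"], p["eta2"]
    w12, w21 = p["w12"], p["w21"]
    denom = a1 * a2 * (1.0 - w12 * w21)
    S1 = (a1 * k2 * w12 * (e2 - a2) + a2 * k1 * (a1 - e1)) / denom
    S2 = (a2 * k1 * w21 * (e1 - a1) + a1 * k2 * (a2 - e2)) / denom
    return S1, S2


# Hypothetical parameters meeting eta_i < alpha_i and w12*w21 < 1.
params = {"alpha1": 0.5, "alpha2": 0.6, "k1": 100.0, "k2": 200.0,
          "eta1": 0.05, "eta2": 0.05, "w12": 0.1, "w21": 0.2}

S1, S2 = two_host_dfe(params)
```

For these values $S_1^* \approx 73.1$ and $S_2^* \approx 168.7$, and substituting the point back into the disease-free equations gives residuals at machine precision.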
\begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario1_total_pop} \end{subfigure} \caption{$\omega_{21} = 2 \cdot \omega_{12}$ and $k_2 = 2 \cdot k_1$, other parameters fixed} \label{fig:scenario1} \end{figure} Although species 1 is the dominant predator, the land can support more of the subordinate predator (its carrying capacity is larger), so the subordinate population will grow and stabilize at larger numbers and limit the growth of the dominant predator. \begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario2_total_pop} \end{subfigure} \caption{$\omega_{21} = 2 \cdot \omega_{12}$ and $\beta_1 = 2 \cdot \beta_2$, other parameters fixed} \label{fig:scenario2} \end{figure} Although species 1 is the dominant predator, its infection rate is twice as high (perhaps because it is a more social creature), so the subordinate predator's population will grow and stabilize at larger numbers than that of the dominant predator. \begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario3} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario3_total_pop} \end{subfigure} \caption{$\omega_{21} = 2 \cdot \omega_{12}$ and $\delta_1 = 3 \cdot \delta_2$, other parameters fixed} \label{fig:scenario3} \end{figure} Although species 1 is the dominant predator, its disease-related death rate is three times as high (perhaps because it is more prone to secondary infections), so the subordinate predator's population will grow and stabilize at larger numbers than that of the dominant predator.
\begin{figure}[H] \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario4} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[scale = 0.7]{./scenario4_total_pop} \end{subfigure} \caption{$\omega_{21} = 2 \cdot \omega_{12}$ and $\eta_1 = 3 \cdot \eta_2$, other parameters fixed} \label{fig:scenario4} \end{figure} Although species 1 is the dominant predator, its human-related mortality rate is three times as high (perhaps because it is hunted more), so the subordinate predator's population will grow and stabilize at larger numbers than that of the dominant predator. \section{Conclusions} In this study we analyzed a four-dimensional system of differential equations modeling interference competition with an infectious disease and human-related mortality. We developed the model in order to understand the dynamics of competing predators that face both disease and human-related mortality. We have established the stability of the three boundary equilibria, including the trivial equilibrium and the two one-host disease-free equilibria. We characterized the parameter space in which the one-host endemic equilibria exist, but we were unable to define algebraically the region in which they are stable; through numerical simulations, however, we showed that both of these equilibrium points are locally asymptotically stable in some region of parameter space. The parameter space in which both species can coexist, with and without the disease, is difficult to define algebraically, but we have shown that there is a region of parameter space in which the coexistent endemic equilibrium and the coexistent disease-free equilibrium exist and are locally asymptotically stable. Moreover, we have shown that a subordinate predator can play a role in controlling the population size of the more dominant predator.
In Scenario 1, where the carrying capacity of the subordinate species $(k_2)$ is larger than that of the dominant species $(k_1)$, as shown in Figure \ref{fig:scenario1}, the environment can support more of the subordinate species, so its population size gives it a competitive advantage over the dominant species. In Scenario 2, shown in Figure \ref{fig:scenario2}, when the dominant predator's transmission coefficient $\beta_1$ is larger, for example because it is a more social species or because its immune system is weaker than the other species', the subordinate predator's population grows larger and helps regulate the other species both through competition and through its sheer population size. In the other scenarios we ran, shown in Figure \ref{fig:scenario3} and Figure \ref{fig:scenario4}, we needed to increase the human-related mortality and the disease-related deaths of the dominant species to three times those of the subordinate species before these factors determined which population grew larger and which species won the competition. The outcomes, however, depend strongly on the initial conditions and the parameters, and these simulations should not be taken as definitive answers to the outcome of competition in every case. Still, in some cases we see that disease-related death and human-related mortality can shift the dynamics of the system so that the subordinate predator wins the competition. Further work will be needed to determine the stability of the entire system, and should include the addition of social structure. For example, in the case of the wolves and coyotes of Yellowstone National Park, the more social and dominant creature, the wolf, will be at a disadvantage due to higher disease transmission within its larger packs. Moreover, if the population sizes are small, we could also consider a spatially explicit model to more accurately reflect the dynamics of the disease.
The eco-epidemiological framework here serves as a basis to further explore the dynamics of competitive predators under different environmental influences. \section*{Acknowledgments} We would like to thank Dr. Carlos Castillo-Chavez, Executive Director of the Mathematical and Theoretical Biology Institute (MTBI), for giving us this opportunity to participate in this research program. We would also like to thank Co-Executive Summer Directors Dr.~Omayra Ortega and Dr.~Baojun Song for their efforts in planning and executing the day-to-day activities of MTBI. We would like to give special thanks to Komi Messan and Juan Renova for their help and patience. This research was conducted in MTBI at the Simon A. Levin Mathematical, Computational and Modeling Sciences Center (SAL MCMSC) at Arizona State University (ASU). This project has been partially supported by grants from the National Science Foundation (DMS-1263374 and DUE-1101782), the National Security Agency (H98230-14-1-0157), the Office of the President of ASU, and the Office of the Provost of ASU. \newpage
\section{Introduction} The theory of noncommutative martingales is a fast-expanding area of mathematics, and its fruitful connections with the theory of operator algebras and noncommutative harmonic analysis have been evidenced in numerous articles; see for instance \cite{pisierxu, PisierShlyakhtenko, Xu2006, R3, Mei, MP,parcet,JOW-continuous-dif-sub, JOWA,JOW2,JNWZ}. One of the primary goals of this paper is to study maximal inequalities for operator-valued martingales in the presence of a weight, i.e., a nonnegative and integrable function. To present the results from the appropriate perspective, let us discuss several closely related areas in the literature. For the relevant definitions and notation, we refer the reader to the next section. The fundamental results of Doob assert that if $x=(x_n)_{n\geq 0}$ is a martingale on some classical probability space $(\Omega,\F,\mathbb{P})$, then we have the weak-type estimate $$ \lambda \mathbb{P}(\sup_{n\geq 0}|x_n|\geq \lambda)\leq \|x\|_{L_1},\qquad \lambda>0,$$ and its strong-type analogue $$ \left\|\sup_{n\geq 0}|x_n|\right\|_{L_p}\leq \frac{p}{p-1}\|x\|_{L_p},\qquad 1<p\leq \infty.$$ One may ask about the noncommutative version of the above estimates. In this new context the martingale becomes a sequence of operators, and one of the difficulties which needs to be overcome is the lack of maximal functions. In the celebrated paper \cite{cucu}, Cuculescu proposed the following approach towards the weak-type estimate. Suppose that $x=(x_n)_{n\geq 0}$ is an $L_1$-bounded martingale on a filtered, tracial von Neumann algebra $(\mathcal{M},\tau)$. Then for any $\lambda>0$ there is a projection $q_\lambda$ such that $q_\lambda x_nq_\lambda\leq \lambda q_\lambda$ for each $n$ and \begin{displaymath} \lambda\tau\left(I-q_\lambda\right) \leq \|x\|_{L_1(\mathcal{M})}.
\end{displaymath} It is easy to see that this estimate extends the above weak-type bound: the projection $I-q_\lambda$ plays the role of the indicator function of the event $\{\sup_{n\geq 0}|x_n|\geq \lambda\}$. Thirty years later, the corresponding strong-type bound was proved in \cite{doob}, thus yielding the noncommutative analogue of the classical result of Doob. The main idea is to introduce the maximal $L_p$-norm of a martingale directly, exploiting the vector-valued $L_p$ spaces $L_p\left(\mathcal{M};\ell_{\infty}\right)$ introduced by Pisier \cite{pisier} in the mid-nineties. The result can be formulated as \begin{equation}\label{noncommdoob} \|x\|_{L_p(\mathcal{M};\ell_\infty)}\lesssim \left(\frac{p}{p-1}\right)^2 \|x\|_{L_p(\mathcal{M})},\qquad 1<p\leq \infty, \end{equation} where $\lesssim$ means that the inequality holds true up to some absolute constant and the quadratic order $O((p-1)^{-2})$ as $p\to 1$ is the best possible (see \cite{JX2}). In the proof of the above inequality, the author transferred the problem to the dual estimate \begin{equation}\label{dualdoob} \left\|\sum_{n\geq 0} \mathcal{E}_na_n\right\|_{L_p(\mathcal{M})}\lesssim p^2 \left\|\sum_{n\geq 0}a_n\right\|_{L_p(\mathcal{M})},\qquad 1\leq p<\infty, \end{equation} and established it with the use of complex interpolation and Hilbert module theory arguments. A different proof, based on real interpolation, was given in \cite{jungexu}. The motivation for the results obtained in this paper comes from a very natural question about the weighted analogue of \eqref{noncommdoob}. Let us recall some basic facts from the commutative setting, in which the theory of weighted estimates has been widely developed. Let $d\geq 1$ be a fixed dimension.
The Hardy-Littlewood maximal operator $M$ on $\R^d$ acts on locally integrable functions $f:\R^d\to \R$ by the formula \begin{displaymath} Mf\left(x\right)=\sup \frac{1}{|Q|} \int_Q |f\left(y\right)|dy, \end{displaymath} where the supremum is taken over all cubes $Q$ containing $x$, having sides parallel to the axes. Let $w$ be a weight, i.e., a nonnegative and locally integrable function, and let $1<p<\infty$ be a fixed exponent. In the seminal paper \cite{mucken}, Muckenhoupt characterized those $w$ for which the maximal operator is bounded as an operator on $L_p(w)$, i.e., those $w$ for which there exists a finite constant $C_{p,w}$ depending only on the parameters indicated such that \begin{equation}\label{weightlp} \int_{\mathbb{R}^d} \left(Mf\right)^pw\mbox{d}x \leq C_{p,w} \int_{\mathbb{R}^d} |f|^pw\mbox{d}x. \end{equation} He also studied the analogous problem for the weak-type $(p,p)$ inequality: \begin{equation}\label{weightweak} \lambda^p\int_{\{ x: Mf\left(x\right)\geq \lambda\}} w\mbox{d}x \leq C_{p,w} \int_{\mathbb{R}^d} |f|^pw\mbox{d}x. \end{equation} It turns out that both inequalities are true if and only if $w$ satisfies the so-called $A_p$ condition. The latter means that the $A_p$ characteristic of $w$, given by \begin{equation}\label{Ap} [w]_{A_p} := \sup_Q \left(\frac{1}{|Q|} \int_Q w\mbox{d}x \right)\left(\frac{1}{|Q|}\int_Q w^{\frac{1}{1-p}}\mbox{d}x\right)^{p-1} \end{equation} (the supremum is taken over all cubes $Q\subset \R^d$ with sides parallel to the axes), is finite. Soon after the appearance of \cite{mucken}, it was shown that the $A_p$ condition characterizes the weighted $L_p$ and weak-$L_p$ boundedness of large families of classical operators, including the Hilbert transform (Hunt, Muckenhoupt and Wheeden \cite{HMW}), general Calder\'on-Zygmund singular integrals (Coifman and Fefferman \cite{CF}), fractional and Poisson integrals (Sawyer \cite{Sa1,Sa2}), area functionals (Buckley \cite{buckley}, Lerner \cite{Le1}) and many more.
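As a toy illustration of \eqref{Ap}, the sketch below estimates the $A_p$ characteristic of the power weight $w(x)=x^{a}$ on $[0,1)$, restricting the supremum to dyadic intervals and replacing the averages by midpoint-sampled means; such a weight belongs to $A_p$ precisely when $-1<a<p-1$. By H\"older's inequality the computed characteristic is always at least $1$. All numerical choices are illustrative.

```python
import numpy as np

p, a, K = 2.0, 0.5, 12
N = 2 ** K
x = (np.arange(N) + 0.5) / N            # midpoints of a uniform grid on [0,1)
w = x ** a                               # power weight, in A_p since -1 < a < p-1
sigma = w ** (1.0 / (1.0 - p))           # the "dual" density w^{1/(1-p)}

char = 0.0
for level in range(K + 1):
    blocks = 2 ** level                  # number of dyadic intervals at this level
    w_avg = w.reshape(blocks, -1).mean(axis=1)
    s_avg = sigma.reshape(blocks, -1).mean(axis=1)
    char = max(char, float(np.max(w_avg * s_avg ** (p - 1))))
```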
In addition, following the work of Ikeda and Kazamaki \cite{IK} (see also Kazamaki \cite{K}), most of the results have been successfully transferred from the analytic to the probabilistic, martingale context (for some recent progress in this direction, see \cite{BO0,BO,BO1,O1,O2,O3}). There is a very interesting aspect of the weight theory, concerning the extraction of the optimal dependence of the constants involved on the characteristic of a weight. Let us illustrate this problem with the estimate \eqref{weightlp} above. As we have already discussed, if $w\in A_p$, then the inequality holds with some finite constant $C_{p,w}$. The question is: given $1<p<\infty$, what is the optimal (i.e., the least) exponent $\kappa_p$ such that $C_{p,w}\leq c_p[w]_{A_p}^{\kappa_p}$ for some constant $c_p$ depending only on $p$? This topic appeared for the first time in Buckley's work \cite{buckley}, where it was shown that in the context of maximal functions, the exponent $\kappa_p=1/(p-1)$ is the best possible. The breakthrough on the $A_2$ conjecture for general Calder\'on-Zygmund operators was achieved by Hyt\"onen \cite{Hyt}. For similar results for other classical operators, see e.g. \cite{BO,LMPT,Le1,O3} and the references therein. There is a natural question of how much of the weighted theory can be carried over to the noncommutative setting discussed previously. A partial answer to this question was provided in the context of matrix weights, which has been developed very intensively during the last decade. We will discuss here only the extension of Muckenhoupt's estimates. Suppose that $n>1$, $d\geq 1$ are fixed integers. A matrix weight $W$ is an $n\times n$ self-adjoint matrix function on $\R^d$ (with locally integrable entries) such that $W(x)$ is nonnegative-definite for almost all $x\in \R^d$.
Given $1\leq p<\infty$ and an $n\times n$ matrix weight $W$ on $\R^d$, we define the associated weighted space $L_p(W)$ to be the class of all measurable, vector-valued functions $f:\R^d\to \R^n$ such that $$ \|f\|_{L_p(W)}=\left(\int_{\R^d}|W(x)^{1/p}f(x)|^p\mbox{d}x\right)^{1/p}<\infty.$$ One of the challenging problems (see also \eqref{noncommdoob} above) is to generalize efficiently the Hardy-Littlewood maximal operator $M$ to this new setting. Another question arising immediately concerns the appropriate interpretation of the boundedness of this operator on weighted spaces: since $M$ acts between different spaces (it is reasonable to expect $Mf$ to be a nonnegative function on $\R^d$), the symbol $\|Mf\|_{L_p(W)}$ simply makes no sense. To handle this difficulty, it is instructive to inspect the following change-of-measure argument. Namely, for a given linear operator $T$, its boundedness on the space $L_p(W)$ is equivalent to the boundedness of $W^{1/p}TW^{-1/p}$ on the (unweighted) space $L_p(\R^n;\R^d)$. This suggests that for the maximal operator, one should compose it appropriately with powers of the weight $W$, and then study the boundedness of the resulting operator on the usual unweighted spaces. This idea has turned out to be successful, and it has been generalized to the wide class of Calder\'on-Zygmund singular integral operators by a number of authors (cf. \cite{BPW,CIM,CPO,Go,NPTV}), as well as to the context of fractional operators \cite{CIM}. We mention that the first sharp $L_2$ estimate for the dyadic square function with a matrix weight is due to Hyt\"onen et al. in \cite{HPV}. However, essentially nothing is known in the context of martingales on tracial von Neumann algebras. While it is natural to treat the weights and martingales as operators, the noncommutativity makes the analysis of the joint behavior of these objects extremely difficult.
We have decided to restrict ourselves to the special, semicommutative setting, in which many technicalities disappear, but on the other hand, the questions are still interesting and challenging. Namely, we will assume that the underlying von Neumann algebra is of the form $\mathcal{M}=L_\infty(X,\F,\mu)\overline\otimes \mathcal{N}$, where $(X,\F,\mu)$ is a classical $\sigma$-finite measure space and $\mathcal{N}$ is another von Neumann algebra. We will consider filtrations which act on the first component only (i.e., are of the form $L_\infty(X,\F_n,\mu)\overline \otimes \mathcal{N}$, $n=0,\,1,\,2,\,\ldots$). Furthermore, a weight will be a nonnegative operator of the form $w\otimes I$, and hence it will commute with every element of $\mathcal{M}$: this, in particular, allows a very simple (and natural) definition of the $A_p$ characteristic: $[w\otimes I]_{A_p}:=[w]_{A_p}$. With no risk of confusion, we will often simplify the notation and write $w$ instead of $w\otimes I$. Note that $\mathcal{M}$ can be identified with $L_\infty(X;\mathcal{N})$, the space of functions on $X$ taking values in $\mathcal{N}$. That is, we will consider the case of operator-valued martingales on $X$, with weights being elements of the commutant. This semicommutative context has been studied by many authors and applied in various problems of noncommutative harmonic analysis; we mention here the excellent exposition \cite{Mei} by Mei. We will establish the following statement. \begin{theorem}\label{ref} Let $1<p<\infty$. Then for any $x\in \mathcal{M}$ and any weight $w\in A_p$, \begin{equation}\label{wdoob} \|x\|_{L_p^{w}(\mathcal{M};\ell_\infty)}\lesssim_p [w]_{A_p}^{1/(p-1)}\|x\|_{L_p^{w}(\mathcal{M})}. \end{equation} The exponent $1/(p-1)$ is the best possible, since it is already optimal in the classical case. \end{theorem} An important comment about the commutative setting is in order.
The classical version of the above statement was first obtained by Buckley \cite{buckley} in the special dyadic case, with the use of interpolation and self-improving properties of Muckenhoupt's weights. An alternative proof in the commutative setting, based on the Bellman function method, can be found in \cite{O34}, but it still exploits some regularity of martingales; indeed, the continuity of paths is used there. Without any additional regularity assumptions, the classical version of Theorem \ref{ref} for martingales adapted to a general filtration was established in \cite{lerner, O1}. We point out that the proofs provided in \cite{lerner, O1} rely on the development of new ideas: change-of-measure arguments, sparse operators and the Bellman function method. The traditional techniques which are frequently used in the theory of weights (e.g., self-improvement, reverse H\"older inequalities) simply fail to hold for general filtrations (see Remark \ref{nonhomog}). In our considerations below, we will study the estimate \eqref{wdoob} without any assumption on the filtration. Both `commutative' proofs, presented in \cite{lerner} and \cite{O1}, exploit a number of pointwise estimates which are no longer valid in the context of operators. The proof of Theorem \ref{ref} will be divided into two cases according to the exponent $p$. For $1<p\leq 2$, we adopt a duality approach based on a sharp weighted dual Doob inequality. However, for $2<p<\infty$, we have to devise a completely different method. The paper is organized as follows. Some preliminary results and notation are presented in Section $2$. In Section $3$, we prove the main theorem, the noncommutative weighted Doob inequality with optimal dependence on the characteristic $[w]_{A_p}$. A corresponding weak weighted $L_p$-bound with optimal dependence on $[w]_{A_p}$ is also provided in this section. Sections $4$ and $5$ contain applications of Theorem \ref{ref}.
In Section 4, we study a weighted version of the noncommutative $L_p$ bound in the context of maximal operators on general metric spaces satisfying the doubling condition. These results, which can be regarded as noncommutative versions of \eqref{weightlp} and \eqref{weightweak}, extend Mei's result \cite[Chapter 3]{Mei} to the weighted case. Section 5 is devoted to a weighted noncommutative $L_p$ bound for maximal truncations of a certain wide class of singular integral operators on $\R$. In the Appendix, the final part of the paper, we present alternative proofs of \eqref{wdoob}. Although the arguments yield suboptimal dependence on the characteristic (and exploit stronger regularity assumptions on the filtration), we believe that they are of independent interest. \section{Preliminaries} We will introduce and discuss here some basic facts from operator theory which will be needed in our later considerations. For a detailed and systematic exposition of the subject we refer the reader to the monographs \cite{operator2} and \cite{operator1}. \subsection{Measurable operators} Throughout, the letter $\mathcal{M}$ will stand for a semifinite von Neumann algebra of operators acting on some given Hilbert space $\mathcal{H}$, equipped with a faithful and normal trace $\tau$. Let $x$ be a densely defined self-adjoint operator on $\mathcal{H}$, with the spectral resolution \begin{displaymath} x = \int_{-\infty}^{\infty} sde^x_s. \end{displaymath} Then for any Borel set $B \subset \mathbb{R}$, we define the associated spectral projection by \begin{displaymath} {I}_B\left(x\right) = \int_{-\infty}^{\infty} \chi_B\left(s\right)de^x_s. \end{displaymath} One similarly introduces the operator $f(x)$ for sufficiently regular functions $f$ on $\R$.
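In finite dimensions the spectral projection ${I}_B(x)$ can be computed explicitly: diagonalize the self-adjoint matrix and apply the indicator $\chi_B$ to the eigenvalues. The following sketch (with an arbitrary random matrix and the illustrative choice $B=(0,\infty)$) exhibits the defining properties: ${I}_B(x)$ is a self-adjoint idempotent commuting with $x$.

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(size=(4, 4))
x = (a + a.T) / 2                        # a self-adjoint matrix

def spectral_projection(x, indicator):
    """I_B(x): apply the indicator of the Borel set B to the spectrum of x."""
    lam, u = np.linalg.eigh(x)           # x = u diag(lam) u^*
    return u @ np.diag(indicator(lam).astype(float)) @ u.conj().T

P = spectral_projection(x, lambda s: s > 0)   # I_{(0,infty)}(x)
```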
A closed, densely defined operator $x$ on $\mathcal{H}$ is said to be \textit{affiliated} with $\mathcal{M}$ if for all unitary operators $u$ belonging to the commutant $\mathcal{M}'$ of $\mathcal{M}$, we have the identity $u^*xu=x$. An operator $x$ affiliated with $\mathcal{M}$ is said to be $\tau$-\textit{measurable}, if there is $s\geq 0$ such that $\tau\left(I_{\left(s,\infty\right)}\left(|x|\right)\right) < \infty$, where $|x| = \left(x^*x\right)^{1/2}$. We denote the space of all $\tau$-measurable operators by $L_0\left(\mathcal{M},\tau\right)$. Then, for all $0 < p \leq \infty$, the noncommutative $L_p$ space associated with $\left(\mathcal{M},\tau\right)$ is $$L_p\left(\mathcal{M},\tau\right) = \lbrace x\in L_0\left(\mathcal{M},\tau\right) : \tau\left(|x|^p\right) < \infty \rbrace.$$ The associated (semi-)norm is defined by $\displaystyle \|x\|_p = \left(\tau\left(|x|^p\right)\right)^{1/p},$ which is understood as the standard operator norm in the boundary case $p=\infty$. \subsection{Noncommutative martingales and martingale transforms} A filtration is an increasing sequence $\left(\mathcal{M}_n\right)_{n \geq 0}$ of von Neumann subalgebras of $\mathcal{M}$ such that the union $\bigcup_{n \geq 0} \mathcal{M}_n$ is w$^\ast$-dense in $\mathcal{M}$. In such a case, for each $n \geq 0$ there is a conditional expectation $\mathcal{E}_n:\mathcal{M} \to \mathcal{M}_n$ associated with $\mathcal{M}_n$: one defines this object as the dual map of the natural inclusion $i : L_1\left(\mathcal{M}_n\right) \to L_1\left(\mathcal{M}\right)$. It can be easily verified that $ \mathcal{E}_n\left(axb\right) = a\mathcal{E}_n\left(x\right)b$ for all $x\in \mathcal{M}$ and $a,\,b\in \mathcal{M}_n $; furthermore, $\mathcal{E}_n$ is $\tau$-preserving, i.e., we have $ \tau \circ \mathcal{E}_n = \tau$. In addition, the collection of conditional expectations satisfies the tower property $\mathcal{E}_n\mathcal{E}_m = \mathcal{E}_m\mathcal{E}_n = \mathcal{E}_{\min{\left(m,n\right)}} $.
Finally, for any $1\leq p\leq \infty$, the operator $\mathcal{E}_n$ extends to a contractive projection from $L_p\left(\mathcal{M},\tau\right)$ onto $L_p\left(\mathcal{M}_n,\tau_{|\mathcal{M}_n}\right)$. A sequence $x = \left(x_n\right)_{n \geq 0} \subset L_1\left(\mathcal{M}\right)+L_\infty(\mathcal{M})$ is called a martingale with respect to $\left(\mathcal{M}_n\right)_{n \geq 0}$, if the equality $\mathcal{E}_{n}\left(x_{n+1}\right) = x_{n}$ holds for all $n \geq 0$. If, in addition, $x_n \in L_p\left(\mathcal{M}\right)$ for all $n\geq 0$, then $x$ is called an $L_p$-martingale with respect to $\left(\mathcal{M}_n\right)_{n \geq 0}$ and we set \begin{displaymath} \|x\|_p = \sup_{n \geq 0}\|x_n\|_p. \end{displaymath} The martingale $x$ is said to be $L_p$-bounded if $\|x\|_p<\infty$. Given a martingale $x=(x_n)_{n\geq 0}$, we define its difference sequence $dx=(dx_n)_{n\geq 0}$ by $dx_0=x_0$ and $dx_n=x_n-x_{n-1}$, $n=1,\,2,\,\ldots$. A martingale $y=(y_n)_{n\geq 0}$ is called a transform of $x=(x_n)_{n\geq 0}$, if there is a deterministic sequence $\e=(\e_n)_{n\geq 0}$ with values in $[-1,1]$ such that $dy_n=\e_ndx_n$ for all $n\geq 0$. Martingale transforms satisfy the $L_p$ estimate \begin{equation}\label{Lp} \|y_n\|_p\leq C_p\|x_n\|_p,\qquad n=0,\,1,\,2\,\ldots,\,\, 1<p<\infty, \end{equation} for some constant $C_p$ depending only on $p$. Actually, it can be shown that the optimal orders, as $p\to 1$ or $p\to \infty$, are $O((p-1)^{-1})$ and $O(p)$, respectively. Furthermore, one can allow a slightly larger class of transforming sequences $\e$. See \cite{pisierxu} and \cite{rand} for more on this subject. \smallskip \subsection{Maximal spaces} Now we discuss the suitable space required to define meaningful maximal functions. Let $1\leq p\leq \infty$. 
We define $L_p\left(\mathcal{M}; \ell_\infty\right)$ as the space of all sequences $x = \left(x_n\right)_{n \geq 0} \subset L_p\left(\mathcal{M}\right)$ which admit the decomposition \begin{displaymath} x_n = ay_n b \ \ \ \ \ \ \ \ \hbox{for all} \ n \geq 0, \end{displaymath} for some $a,\,b \in L_{2p}\left(\mathcal{M}\right)$ and $y = \left(y_n\right)_{n \geq 0} \subset L_\infty\left(\mathcal{M}\right)$. We equip this space with the norm \begin{displaymath} \|x\|_{L_p\left(\mathcal{M}; \ell_\infty\right)} = \inf \left\lbrace\|a\|_{2p} \sup_{n \geq 0} \|y_n\|_{\infty} \|b\|_{2p}\right\rbrace, \end{displaymath} where the infimum runs over all factorizations of $x$ as above. The following dual reformulation will be important to us later. Namely, we define $L_p\left(\mathcal{M}; \ell_1\right)$ as the space of all sequences $x = \left(x_n\right)_{n \geq 0} \subset L_p\left(\mathcal{M}\right)$, which are of the form \begin{displaymath} x_n = \sum_{k \geq 0} u_{kn}^\ast v_{kn} \ \ \ \ \ \ \ \hbox{for all} \ n \geq 0, \end{displaymath} where the families $\left(u_{kn}\right)_{k,n \geq 0}, \left(v_{kn}\right)_{k,n \geq 0} \subset L_{2p}\left(\mathcal{M}\right)$ satisfy \begin{displaymath} \sum_{k,n \geq 0} u_{kn}^\ast u_{kn} \in L_p\left(\mathcal{M}\right) \ \ \ \ \ \hbox{and} \ \ \ \ \ \sum_{k,n \geq 0} v_{kn}^\ast v_{kn} \in L_p\left(\mathcal{M}\right). \end{displaymath} The space $L_p\left(\mathcal{M}; \ell_1\right)$ is equipped with the norm \begin{displaymath} \|x\|_{L_p\left(\mathcal{M}; \ell_1\right)} = \inf \Bigg\lbrace\left\|\sum_{k,n \geq 0} u_{kn}^\ast u_{kn}\right\|_p^{\frac{1}{2}} \left\|\sum_{k,n \geq 0} v_{kn}^\ast v_{kn}\right\|_p^{\frac{1}{2}}\Bigg\rbrace, \end{displaymath} where the infimum runs over all decompositions of $x$ as above. Both $L_p\left(\mathcal{M}; \ell_\infty\right)$ and $L_p\left(\mathcal{M}; \ell_1\right)$ are Banach spaces and the following theorem is true (see \cite{doob}).
\begin{theorem} \label{dual} Let $1 \leq p < \infty$ and $p'$ be the conjugate of $p$. Then \begin{displaymath} L_{p}\left(\mathcal{M}; \ell_1\right)^*=L_{p'}\left(\mathcal{M}; \ell_\infty\right) \ \ \ \ \ \hbox{isometrically} \end{displaymath} with the duality bracket given by \begin{displaymath} \left(x,y\right) = \sum_{n \geq 0} \tau\left(x_ny_n\right) \end{displaymath} for $x \in L_p\left(\mathcal{M}; \ell_1\right)$ and $y \in L_{p'}\left(\mathcal{M}; \ell_\infty\right)$. \end{theorem} The above spaces have a much simpler description when restricted to nonnegative operators. Consider $x = \left(x_n\right)_{n \geq 0}$, where $x_n \geq 0$ for all $n \geq0$. Then we have $$ \|x\|_{L_{p}\left(\mathcal{M}; \ell_1\right)} = \Big\|\sum_{n \geq 0} x_n\Big\|_{L_p}.$$ Furthermore, $x$ belongs to $L_p\left(\mathcal{M}; \ell_\infty\right)$ if and only if there exists a positive operator $a \in L_p\left(\mathcal{M}\right)$ such that $x_n \leq a$ for all $n \geq 0$. In addition, $\|x\|_{ L_p\left(\mathcal{M}; \ell_\infty\right)} = \inf\lbrace \|a\|_{L_p} : x_n \leq a\mbox{ for all }n\rbrace$. We conclude with the remark that the definition of $L_p(\mathcal{M};\ell_\infty)$ extends easily to the case in which the sequences are indexed by an arbitrary set $I$: the relevant factorization makes perfect sense. Denoting the corresponding space by $L_p(\mathcal{M};\ell_\infty(I))$, it is not difficult to check the identity \begin{equation}\label{not_difficult} \|x\|_{L_p(\mathcal{M};\ell_\infty(I))}=\sup_{J\text{ finite}}\|x\|_{L_p(\mathcal{M};\ell_\infty(J))}. \end{equation} This observation, with $I=\mathbb{Z}$ or $I=[0,\infty)$, will be important for our applications below. \smallskip \subsection{Martingale weights.} In this subsection, we introduce some basic information on weighted theory in the commutative context.
Suppose that $(X,\F,\mu)$ is a classical measure space, filtered by $(\F_n)_{n\geq 0}$, a nondecreasing family of sub-$\sigma$-fields of $\F$ such that $\sigma\left(\bigcup_{n\geq 0}\F_n\right)=\F$ and such that $(X,\F_0,\mu)$ is $\sigma$-finite. A weight is a positive function belonging to $L_1(X)+L_\infty(X)$; typically, such an object will be denoted by $u$, $w$ or $v$. Any weight $w$ gives rise to the corresponding measure on $X$, also denoted by $w$, and defined by $w(A)=\int_A w\mbox{d}\mu$ for all $A\in \mathcal{F}$. Given $1<p<\infty$, a weight $w$ satisfies the (martingale) $A_p$ condition, if the $A_p$ characteristic $$ [w]_{A_p}=\sup_{n\geq 0}\left\|\mathbb{E}_n (w) \mathbb{E}_n(w^{1/(1-p)})^{p-1}\right\|_{L_\infty(X)}$$ is finite. If the filtration is atomic, i.e., for each $n$ the $\sigma$-field $\F_n$ is generated by pairwise disjoint sets of positive and finite measure, then the characteristic can be rewritten in the more usual form $$ [w]_{A_p}=\sup \left(\frac{1}{\mu(Q)}\int_Q w\mbox{d}\mu\right)\left(\frac{1}{\mu(Q)}\int_Q w^{1/(1-p)}\mbox{d}\mu\right)^{p-1},$$ where the supremum is taken over atoms $Q$ of the filtration. The dual weight to $w\in A_p$ is given by $v=w^{1/(1-p)}$. It follows directly from the definition of the characteristic that $v\in A_{p'}$ and $[v]_{A_{p'}}=[w]_{A_p}^{1/(p-1)}$. There are versions of the $A_p$ condition in the boundary cases $p\in \{1,\infty\}$, which can be obtained by a simple passage to the limit. We will only present here the case $p=1$, as the choice $p=\infty$ will not be needed in our considerations. Namely, a weight $w$ satisfies Muckenhoupt's condition $A_1$, if its characteristic $$ [w]_{A_1}=\sup_{n\geq 0} \left\|\mathbb{E}_n(w)/w\right\|_{L_\infty(X)}$$ is finite.
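The dual-weight relation $[v]_{A_{p'}}=[w]_{A_p}^{1/(p-1)}$ follows directly from the definition, and it can also be confirmed numerically on a toy atomic filtration. The sketch below uses a dyadic filtration on $2^K$ equiprobable atoms with a random positive weight; all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
p, K = 3.0, 8
p_prime = p / (p - 1)                       # conjugate exponent
w = rng.uniform(0.5, 2.0, size=2 ** K)      # random weight on 2^K atoms

def ap_char(weight, p, K):
    """Martingale A_p characteristic over a dyadic filtration on 2^K atoms."""
    dual = weight ** (1.0 / (1.0 - p))
    best = 0.0
    for level in range(K + 1):
        w_avg = weight.reshape(2 ** level, -1).mean(axis=1)
        d_avg = dual.reshape(2 ** level, -1).mean(axis=1)
        best = max(best, float(np.max(w_avg * d_avg ** (p - 1))))
    return best

v = w ** (1.0 / (1.0 - p))                  # dual weight
lhs = ap_char(v, p_prime, K)                # [v]_{A_{p'}}
rhs = ap_char(w, p, K) ** (1.0 / (p - 1))   # [w]_{A_p}^{1/(p-1)}
```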
If the filtration is atomic, then we have the identity $$ [w]_{A_1}=\sup_Q \operatorname*{esssup}_X \frac{\frac{1}{\mu(Q)}\int_Q w\mbox{d}\mu}{w}.$$ \subsection{Noncommutative weighted $L_p$ spaces} Assume that $(X,\F,\mu)$ is a classical measure space and let $(\F_n)_{n\geq 0}$ be a discrete-time filtration such that $\sigma(\bigcup_{n\geq 0}\F_n)=\F$ and such that $(X,\F_0,\mu)$ is $\sigma$-finite. Suppose further that $\mathcal{N}$ is a given semifinite von Neumann algebra with a faithful, normal trace $\nu$. We set $\mathcal{M}=L_\infty(X,\F,\mu)\overline\otimes \mathcal{N}$ and endow this algebra with the standard tensor trace $\tau=\mu\otimes \nu$ and the filtration $\mathcal{M}_n=L_\infty(X,\F_n,\mu)\overline\otimes \mathcal{N}$, $n=0,\,1,\,2,\,\ldots$. Then the associated conditional expectations are given by $\mathcal{E}_n=\E(\cdot|\F_n)\otimes I_\mathcal{N}$, where $\E(\cdot|\F_n)$ is the classical conditional expectation with respect to $\mathcal{F}_n$. Furthermore, the elements of $\mathcal{M}$ can be regarded as bounded functions taking values in $\mathcal{N}$, and the $L_p$-bounded martingales in this context can be identified with $L_p$-bounded martingales on $(X,\F,\mu)$ with values in $L_p(\mathcal{N})$. In our considerations below, a weight will be a positive operator of the form $w\otimes I$, where $w$ is a classical weight on $(X,\F,\mu)$. Such operators commute with all elements of $\mathcal{M}$ and all conditional expectations $\mathcal{E}_n(w\otimes I)$ also enjoy this property. We say that $w\otimes I$ satisfies Muckenhoupt's condition $A_p$ (or belongs to the $A_p$ class), if the scalar weight $w$ has this property. Furthermore, we set $[w\otimes I]_{A_p}=[w]_{A_p}$. From now on, we will skip the tensor and identify $w\otimes I$ with $w$; this should not lead to any confusion.
Given $1\leq p<\infty$ and $w$ as above, the associated noncommutative weighted $L_p$ space is defined by $$ L_p^w(\mathcal{M})=\left\{x\in L_0(\mathcal{M},\tau)\,:\,xw^{1/p}\in L_p(\mathcal{M})\right\}.$$ That is to say, $L_p^w(\mathcal{M})$ is the usual noncommutative $L_p$ space with respect to the weighted trace $\tau^{w}(x):=\tau(xw)$, $x\in \mathcal{M}$. The fact that $w$ is positive and commutes with all the elements of $\mathcal{M}$ implies that $\tau^{w}$ is indeed a trace. This change-of-measure argument, based on passing from one trace to another, will play an important role in our considerations below. In particular, we will need the following simple fact. Here and in what follows, $\mathcal{E}_n^{w}$ denotes the conditional expectation with respect to $\mathcal{M}_n$ and the trace $\tau^{w}$ (while $\mathcal{E}_n$ is the usual conditional expectation, relative to the unweighted trace $\tau$). \begin{lemma} For any $x\in L_1^w(\mathcal{M})$ we have \begin{equation} \label{1} \mathcal{E}^{w}_n\left(x\right) = \mathcal{E}_n\left(xw\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}. \end{equation} \end{lemma} \begin{proof} Let us verify that the right-hand side of \eqref{1} enjoys all the defining properties of the conditional expectation. Obviously, it belongs to $\mathcal{M}_n$.
Furthermore, if $a,\,b$ are arbitrary elements of $\mathcal{M}_n$, then by the commuting property of $w$, $$\mathcal{E}_n\left(axbw\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}= \mathcal{E}_n\left(axwb\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}= a\mathcal{E}_n\left(xw\right)b\left(\mathcal{E}_n\left(w\right)\right)^{-1} =a\mathcal{E}_n\left(xw\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}b.$$ Finally, the right-hand side of \eqref{1} preserves the trace $\tau^{w}$: indeed, \begin{align*} \tau^{w}\left(\mathcal{E}_n\left(xw\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}\right) &=\tau\left(\mathcal{E}_n\left(xw\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}w\right)\\ &=\tau\left(\mathcal{E}_n\left(xw\right)\left(\mathcal{E}_n\left(w\right)\right)^{-1}\mathcal{E}_n(w)\right)=\tau\left(\mathcal{E}_n\left(xw\right)\right)=\tau(xw)=\tau^{w}(x). \end{align*} This proves the claim. \end{proof} \begin{remark} The above lemma has a very transparent meaning if the underlying filtration $(\F_n)_{n\geq 0}$ is atomic. In such a case, we have the following explicit formula for $\mathcal{E}_n^w$: if we identify $\mathcal{M}$ with operator-valued random variables, then \begin{displaymath} \mathcal{E}_n^w x = \sum_{Q \in {At}_n} \frac{1}{w\left(Q\right)}\int_{Q} x\left(\omega\right) w(\omega)\,\mu(\mbox{d}\omega) \cdot \raisebox{2pt}{$\chi$}_Q, \end{displaymath} where $At_n$ is the collection of all atoms of $\F_n$ and $w\left(Q\right) = \int_Q w\,\mbox{d}\mu$. \end{remark} By a similar change-of-measure procedure, which rests on the passage from the trace $\tau$ to its weighted version $\tau^w$, one defines the appropriate weighted maximal spaces $L_p^w(\mathcal{M};\ell_\infty)$ and $L_p^w(\mathcal{M};\ell_1)$.
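In the atomic commutative case, identity \eqref{1} simply says that the weighted conditional expectation averages against $w\,\mbox{d}\mu$ over each atom. A small numerical sketch of this (our own illustration, assuming a dyadic filtration on $[0,1)$ with functions encoded by their values on equal cells):

```python
import numpy as np

def cond_exp(f, n):
    """Unweighted conditional expectation E_n onto level-n dyadic atoms
    of [0,1); f is given by its values on 2^m equal cells."""
    f = np.asarray(f, dtype=float)
    out = np.empty_like(f)
    size = len(f) // 2 ** n
    for j in range(2 ** n):
        blk = slice(j * size, (j + 1) * size)
        out[blk] = f[blk].mean()
    return out

rng = np.random.default_rng(1)
w = rng.uniform(0.5, 4.0, size=2 ** 5)   # a weight
x = rng.normal(size=2 ** 5)              # a scalar stand-in for the operator x
n = 2

# formula (1): E_n^w(x) = E_n(x w) E_n(w)^{-1} ...
lhs = cond_exp(x * w, n) / cond_exp(w, n)

# ... equals the weighted average over each atom, as in the remark
rhs = np.empty_like(lhs)
size = len(x) // 2 ** n
for j in range(2 ** n):
    blk = slice(j * size, (j + 1) * size)
    rhs[blk] = (x[blk] * w[blk]).sum() / w[blk].sum()
assert np.allclose(lhs, rhs)

# E_n^w preserves the weighted trace tau^w(y) = int y w dmu
assert np.isclose((lhs * w).mean(), (x * w).mean())
```

The last assertion is the trace-preservation property checked in the proof of the lemma, here for Lebesgue measure on $[0,1)$.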
\bigskip \section{Weighted Doob's maximal inequality} In this section, we provide the proof of Theorem \ref{ref} and also establish its weak-type version. Both results exhibit the optimal dependence on the characteristic of the weight involved. \subsection{Weighted maximal inequalities} The purpose of this subsection is to prove Theorem \ref{ref}. We start with the following statement, which extends the dual Doob inequality to the weighted case. \begin{theorem} \label{doob2} Let $1\leq p <\infty$ and $w \in A_p$. For any sequence $\left(a_n\right)_{n\geq 0}$ of positive elements of $L_p^w(\mathcal{M})$ we have \begin{equation}\label{idual} \left\|\sum_{n\geq 0} \mathcal{E}_n\left(a_n\right)\right\|_{L_p^w(\mathcal{M})} \leq c_p[w\,]_{A_p}\left\|\sum_{n \geq 0}a_n\right\|_{L_p^w(\mathcal{M})}, \end{equation} where $c_p$ depends only on $p$. The exponent of $[w\,]_{A_p}$ is the best possible. \end{theorem} \begin{proof}[Proof of Theorem \ref{doob2} for $p=1$ and $p\geq 2$] By the $\sigma$-finiteness of $(X,\F_0,\mu)$, a simple splitting argument allows us to assume that $\mu(X)<\infty$; then, in particular, $w$ is integrable. Now, if $p=1$, then we have $$ \left\|\sum_{n\geq 0} \mathcal{E}_n\left(a_n\right)\right\|_{L_1^w(\mathcal{M})}=\tau\left(\sum_{n\geq 0} \mathcal{E}_n\left(a_n\right)w\right)=\tau\left(\sum_{n\geq 0} a_n\mathcal{E}_n\left(w\right)\right)\leq [w]_{A_1}\tau\left(\sum_{n\geq 0} a_nw\right),$$ so the desired bound holds with $c_1=1$. Next, suppose that $p\geq 2$ and let $v=w^{1/(1-p)}$ be the dual weight to $w$. Muckenhoupt's condition implies that for any $n\geq 0$, \begin{equation}\label{App} \mathcal{E}_n(w)\mathcal{E}_n(v)=\E(w|\F_n)\E(v|\F_n)\otimes I\leq [w]_{A_p}(\E(v|\F_n))^{2-p}\otimes I\leq [w]_{A_p}\E(v^{2-p}|\F_n)\otimes I, \end{equation} where the last passage is due to Jensen's inequality (and the assumption $p\geq 2$).
Now, fix a positive operator $g\in \mathcal{M}$ satisfying $\|g\|_{L_{p'}^ { {v}}} \leq 1$. Note that $g\in L_1(\mathcal{M})$: by H\"older's inequality, $\|g\|_{L_1(\mathcal{M})}\leq \|g\|_{L_{p'}^ { v}}\|w\|_{L_1}^{1/p}<\infty$. By properties of conditional expectations, we may write \begin{displaymath} \tau\left(\sum_{n\geq 0}\mathcal{E}_n(a_n)g\right)=\sum_{n \geq 0} \tau\left(\mathcal{E}_n\left(a_n\right)g\right) = \sum_{n \geq 0} \tau\left(\mathcal{E}_n\left(a_n\right)\mathcal{E}_n\left(g\right)\right). \end{displaymath} Using the identity $\left(\ref{1}\right)$ and the commuting properties of $ { {w}}$, $ { {v}}$ and their conditional expectations, we obtain \begin{align*} \sum_{n \geq 0} \tau\left(\mathcal{E}_n\left(a_n\right)\mathcal{E}_n\left(g\right)\right) &= \sum_{n \geq 0} \tau\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right)\mathcal{E}_n\left( { {v}}\right)\mathcal{E}_n\left( { {w}}\right)\right), \end{align*} which, by \eqref{App}, does not exceed \begin{align*} \sum_{n \geq 0} \tau\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right)[ { {w}}]_{A_p}\mathcal{E}_n\left( { {v}}^{2-p}\right)\right) &= [ { {w}}]_{A_p} \sum_{n \geq 0} \tau\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right) { {v}}^{2-p}\right)\\ &= [ { {w}}]_{A_p} \sum_{n \geq 0} \tau\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right) { {v}}^{\frac{1}{p}} { {w}}^{\frac{1}{p'}}\right). \end{align*} As we mentioned above, $g$ belongs to the space $L_1(\mathcal{M})$ and hence $g { w}^{-1}\in L_1^ { w}(\mathcal{M})$. 
By noncommutative Doob's inequality in $L_{p'}$, applied to the nonnegative martingale $\left(\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right)\right)_{n \geq 0}$ (on von Neumann algebra $\left(\mathcal{M}, \tau^ { {w}}\right)$), there exists an operator $a$ such that $\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right)\leq a$ for every $n \geq 0$ and \begin{displaymath} \|a\|_{L_{p'}^ { {w}}(\mathcal{M})} \leq C_{p'}\|g { {w}}^{-1}\|_{L_{p'}^ { {w}}(\mathcal{M})} = C_{p'}\left(\tau\left(g^{p'} { {w}}^{-p'} { {w}}\right)\right)^{\frac{1}{p'}} = C_{p'}\|g\|_{L_{p'}^ { {v}}} \leq C_{p'}. \end{displaymath} Here the last estimate follows from the assumption $\|g\|_{L_{p'}^ { {v}}(\mathcal{M})}\leq 1$ we imposed at the beginning. Consequently, by the tracial property (and the fact that $ { {w}}$ and $ { {v}}$ commute with all elements of $\mathcal{M}$) we get $$ \tau\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)\mathcal{E}^{ { {w}}}_n\left(g { {w}}^{-1}\right) { {v}}^{\frac{1}{p}} { {w}}^{\frac{1}{p'}}\right)\leq \tau\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)a { {v}}^{\frac{1}{p}} { {w}}^{\frac{1}{p'}}\right).$$ Therefore, by the H\"older inequality, \begin{align*} \tau\left(\sum_{n \geq 0} \mathcal{E}_n\left(a_n\right)g\right) &\leq [ { {w}}]_{A_p} \tau\left(\sum_{n \geq 0}\left(\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right) { {v}}^{\frac{1}{p}}\right)a { {w}}^{\frac{1}{p'}}\right)\\ &\leq [ { w}\,]_{A_p} \left\|\sum_{n \geq 0}\mathcal{E}^{ { {v}}}_n\left(a_n { {v}}^{-1}\right)\right\|_{L_p^ { {v}}(\mathcal{M})}\|a\|_{L_{p'}^ { {w}}(\mathcal{M})}\\ &\leq C_{p'}C_p[ { {w}}]_{A_p} \left\|\sum_{n \geq 0}a_n { {v}}^{-1}\right\|_{L_p^ { {v}}(\mathcal{M})} = c_p[ { {w}}]_{A_p} \left\|\sum_{n \geq 0}a_n\right\|_{L_p^ { {w}}(\mathcal{M})}. 
\end{align*} Here, in the last line, we have exploited the dual form of Doob's inequality \eqref{dualdoob}, applied to the nonnegative sequence $\left(a_nv^{-1}\right)_{n \geq 0}$ on the von Neumann algebra $\left(\mathcal{M}, \tau^v\right)$. To finish the proof, we specify $g=(\sum_{n\geq 0}\mathcal{E}_na_n)^{p-1}w/\|\sum_{n\geq 0}\mathcal{E}_na_n\|_{L_p^w(\mathcal{M})}^{p-1}$: then $\|g\|_{L_{p'}^v}=1$ and $$ \tau\left(\sum_{n \geq 0} \mathcal{E}_n\left(a_n\right)g\right) =\left\|\sum_{n\geq 0} \mathcal{E}_n\left(a_n\right)\right\|_{L_p^w(\mathcal{M})}. \qquad \qquad \qedhere$$ \end{proof} \begin{remark} The above reasoning can be repeated in the case $1<p<2$, but then \eqref{App} does not hold any more. Instead, we may write \begin{align*} \mathcal{E}_n(w)\mathcal{E}_n(v)&=\E(w|\F_n)\E(v|\F_n)\otimes I\\ &=\E(v^{1-p}|\F_n)\E(v|\F_n)\otimes I\\ &=\E(v^{\frac{1}{1-p'}}|\F_n)^{p'-1}\E(v|\F_n)\E(v^{\frac{1}{1-p'}}|\F_n)^{2-p'}\otimes I\\ &\leq [v]_{A_{p'}} \E(v^{1-p}|\F_n)^{2-p'} \otimes I\\ &\leq [v]_{A_{p'}} \E(v^{(1-p)(2-p')}|\F_n) \otimes I \\ & = [v]_{A_{p'}} \E(v^{2-p}|\F_n) \otimes I, \end{align*} where the last inequality is due to Jensen's inequality and the assumption $p<2$. Note that $[v]_{A_{p'}}=[w]_{A_p}^{1/(p-1)}$, so we get \eqref{idual}, but with the worse, nonlinear dependence $[w]_{A_p}^{1/(p-1)}$. To overcome this difficulty, we will use a different approach. \end{remark} The proof of Theorem \ref{doob2} in the range $1<p<2$ is postponed for the moment. We now use the duality approach to prove Theorem \ref{ref} for $1<p\leq 2$. \begin{proof}[Proof of Theorem \ref{ref} for $1<p\leq 2$] We deduce the assertion from the previous statement. Again, we may assume that $\mu(X)<\infty$. Pick an arbitrary positive element $x$ of $L_p^w\left(\mathcal{M}\right)$.
Then $$\|x\|_{L_1(\mathcal{M})}\leq \|x\|_{L_p^ { w}(\mathcal{M})}\| { v}\|_{L_1(\mathcal{M})}^{1/p'}<\infty$$ and hence $(x_n)_{n\geq 0} = (\mathcal{E}_n\left(x\right))_{n\geq 0}$ is a well-defined $L_1$-bounded martingale on $(\mathcal{M},\tau)$. This sequence is contained in $L_p^ { w}(\mathcal{M})$, by \eqref{idual}. Next, consider an arbitrary operator $y \in L_{p'}^ { {w}}\left(\mathcal{M}; \ell_1\right)$ and let $\left(a_{kn}\right)_{k,n \geq 0}, \left(b_{kn}\right)_{k,n \geq 0}$ be families of elements of $L_{2p'}^ { w}\left(\mathcal{M}\right)$, satisfying \begin{displaymath} y_n = \sum_{k \geq 0} a_{kn}^\ast b_{kn} \ \ \ \ \ \ \ \hbox{for all} \ n \geq 0. \end{displaymath} Then, by H\"older's inequality and properties of conditional expectations, \begin{align*} \Bigg \vert\sum_{n \geq 0} \tau^ { w} \left(x_ny_n\right)\Bigg\vert = \Bigg\vert\sum_{n,k \geq 0} \tau \left(\mathcal{E}_n\left(x\right)a_{kn}^\ast b_{kn} { w}\right)\Bigg\vert &= \Bigg\vert\sum_{n,k \geq 0} \tau \left(\mathcal{E}_n\left(a_{kn}^\ast b_{kn} { w}\right)x\right)\Bigg\vert\\ &= \Bigg\vert \tau \left(\sum_{n,k \geq 0} \left(\mathcal{E}_n\left(a_{kn}^\ast b_{kn} { w}\right) { w}^{-\frac{1}{p}}\right)x { w}^{\frac{1}{p}}\right)\Bigg\vert\\ &\leq \left\|xw^{\frac{1}{p}}\right\|_{L_{p}\left(\mathcal{M}\right)}\left\|{\sum_{n,k \geq 0} \mathcal{E}_n\left(a_{kn}^\ast b_{kn} { w}\right) { w}^{-\frac{1}{p}}}\right\|_{L_{p'}\left(\mathcal{M}\right)}\\ &=\|x\|_{L_{p}^ { w}\left(\mathcal M\right)}\left\|{\sum_{n,k \geq 0} \mathcal{E}_n\left(a_{kn}^\ast b_{kn} { w}\right)}\right\|_{L_{p'}^ { v}\left(\mathcal{M}\right)}. 
\end{align*} Now, by the H\"older inequality (\cite[Proposition 2.15]{doob}) and Theorem \ref{doob2} applied to $ { v}\in A_{p'}$ (note that $p'\geq 2$), we may proceed as follows: \begin{align*} \left\|\sum_{n,k \geq 0} \mathcal{E}_n\left(a_{kn}^\ast { w}^{\frac{1}{2}} b_{kn} { w}^{\frac{1}{2}}\right)\right\|_{L_{p'}^ { v}\left(\mathcal M\right)} &\leq \left\|\sum_{n,k \geq 0} \mathcal{E}_n\left(a_{kn}^\ast a_{kn} { w}\right)\right\|_{L_{p'}^ { v}\left(\mathcal{M}\right)}^{\frac{1}{2}}\left\|\sum_{n,k \geq 0} \mathcal{E}_n\left(b_{kn}^\ast b_{kn} { w}\right)\right\|_{L_{p'}^ { v}\left(\mathcal{M}\right)}^{\frac{1}{2}}\\ &\leq c_{p'}[ { v}]_{A_{p'}} \left\|\sum_{n,k \geq 0} a_{kn}^\ast a_{kn} { w}\right\|_{L_{p'}^ { v}\left(\mathcal M\right)}^{\frac{1}{2}}\left\|\sum_{n,k \geq 0} b_{kn}^\ast b_{kn} { w}\right\|_{L_{p'}^ { v}\left(\mathcal M\right)}^{\frac{1}{2}}\\ &= c_{p'}[ { v}]_{A_{p'}} \left\|\sum_{n,k \geq 0} a_{kn}^\ast a_{kn}\right\|_{L_{p'}^ { w}\left(\mathcal M\right)}^{\frac{1}{2}}\left\|\sum_{n,k \geq 0} b_{kn}^\ast b_{kn}\right\|_{L_{p'}^ { w}\left(\mathcal M\right)}^{\frac{1}{2}}\\ &\leq c_{p'}[ { v}]_{A_{p'}} \|y\|_{L_{p'}^ { w}\left(\mathcal{M}; \ell_1\right)} \\ &= c_{p'}[ { w}]_{A_{p}}^{1/(p-1)} \|y\|_{L_{p'}^ { w}\left(\mathcal{M}; \ell_1\right)}. \end{align*} Since $\sum_{n \geq 0} \tau^ { w} \left(x_ny_n\right)$ is the duality bracket between $L_p^ { w}(\mathcal{M};\ell_\infty)$ and $L_{p'}^ { w}(\mathcal{M};\ell_1)$, we obtain the desired estimate $$ \|x\|_{L_p^ { w}(\mathcal{M};\ell_\infty)}\leq c_p[ { w}\,]_{A_p}^{1/(p-1)}\|x\|_{L_p^ { w}(\mathcal{M})}$$ for positive $x$. The passage to general operators follows from a standard argument. \end{proof} Next, we turn to the case $2<p<\infty$. In this case, we have to invent a different method. Moreover, this argument does not work for $1\leq p<2.$ \begin{proof}[Proof of Theorem \ref{ref} for $p>2$] Fix $ { w}\in A_p$. 
By standard decomposition, it is enough to show the claim for positive operators $x\in L_p^w(\mathcal{M})$. Our goal is to majorize the martingale $(x_n)_{n\geq 0}$ by an operator whose norm in $L_p^w(\mathcal{M})$ is at most $[w]_{A_p}^{1/(p-1)}\|x\|_{L_p^w(\mathcal{M})}$, up to a constant depending only on $p$. We begin with the observation that $x^{p-1}v^{1-p}$ is positive and belongs to $L_{p'}^v(\mathcal{M})$: this is due to the identity $\|x^{p-1}v^{1-p}\|_{L_{p'}^v(\mathcal{M})}=\|x\|_{L_p^w(\mathcal{M})}^{p-1}$. Thus, we may apply Doob's inequality in $L_{p'}^v(\mathcal M)$ to the nonnegative martingale $\left(\mathcal{E}_n^{v}\left(x^{p-1}v^{1-p}\right)\right)_{n \geq 0}$ in $(\mathcal{M},\tau^v)$, obtaining an operator $a$ such that $ \mathcal{E}_n^{v}\left(x^{p-1}v^{1-p}\right)\leq a$ for every $n \geq 0$ and \begin{equation}\label{bounda1} \|a\|_{L_{p'}^v(\mathcal{M})}\leq c_p\|x\|_{L_p^w(\mathcal{M})}^{p-1}. \end{equation} Next, we apply Doob's inequality again, this time in $L_{p'}^{w}(\mathcal{M})$, to the nonnegative martingale $\left(\mathcal{E}_n^{w}\left(aw^{-1}\right)\right)_{n \geq 0}$. As a result, we get an operator $b$ such that $\mathcal{E}_n^{w}\left(aw^{-1}\right)\leq b$ for $n \geq 0$ and whose norm satisfies \begin{equation}\label{boundb1} \|b\|_{L_{p'}^w(\mathcal{M})} \leq c_{p}\|aw^{-1}\|_{L_{p'}^w\left(\mathcal M\right)}=c_p\|a\|_{L_{p'}^{v}(\mathcal{M})}.
\end{equation} Using the change of measure formula \eqref{1}, the fact that $ \mathcal{E}_n\left( { w}\right)\left(\mathcal{E}_n\left( { v}\right)\right)^{p-1}\leq [ { w}\,]_{A_p}$ and the estimate $\mathcal{E}^{ { v}}_n\left(x { v}^{-1}\right)\leq \mathcal{E}^{ { v}}_n\left(x^{p-1} { v}^{1-p}\right)^{1/(p-1)}$ which follows from the operator concavity of the function $t\mapsto t^{1/(p-1)}$ (here we use the assumption $p\geq 2$), we obtain \begin{align*} [ { w}\,]^{-\frac{1}{p-1}}_{A_p}x_n &\leq \Biggl(\left(\mathcal{E}_n\left( { w}\right)\right)^{-1}\left(\mathcal{E}_n\left( { v}\right)\right)^{1-p}\left(\mathcal{E}_n\left(x\right)\right)^{p-1}\Biggr)^{\frac{1}{p-1}} \\ &= \left(\mathcal{E}_n\left( { w}\right)\right)^{-\frac{1}{p-1}}\,\mathcal{E}^{ { v}}_n\left(x { v}^{-1}\right) \\ & \leq \left(\mathcal{E}_n\left( { w}\right)\right)^{-\frac{1}{p-1}}\left(\mathcal{E}^{ { v}}_n\left(x^{p-1} { v}^{1-p}\right)\right)^\frac{1}{p-1}. \end{align*} However, by the definition of $a$ and the operator monotonicity of the function $t\mapsto t^{1/(p-1)}$ (again, here we use the assumption $p\geq 2$) we get $$\Big(\mathcal{E}^{ { v}}_n\left(x^{p-1} { v}^{1-p}\right)\Big)^\frac{1}{p-1}=\Big(\mathcal{E}_n\Big[\mathcal{E}^{ { v}}_n\left(x^{p-1} { v}^{1-p}\right)\Big]\Big)^\frac{1}{p-1}\leq \big(\mathcal{E}_n(a)\big)^\frac{1}{p-1}.$$ Therefore, we can proceed with the previous bound as follows: \begin{align*} [ { w}\,]^{-\frac{1}{p-1}}_{A_p}x_n &\leq \Big(\mathcal{E}_n\left( { w}\right)^{-1} \mathcal{E}_n(a)\Big)^{\frac{1}{p-1}}= \Big(\mathcal{E}_n^{ { w}}\left(a { w}^{-1}\right)\Big)^{\frac{1}{p-1}} \leq b^{\frac{1}{p-1}}, \end{align*} where the last bound is due to the definition of $b$ and the operator monotonicity of $t\mapsto t^{1/(p-1)}$. 
Thus we have obtained the majorant $[ { w}\,]^{\frac{1}{p-1}}_{A_p}b^{\frac{1}{p-1}}$ for the nonnegative martingale $(x_n)_{n \geq 0}$, and it remains to apply \eqref{bounda1} and \eqref{boundb1} to get \begin{displaymath} \|b^{\frac{1}{p-1}}\|_{L_p^ { w}\left(\mathcal{M}\right)} = \|b\|_{L_{p'}^ { w}\left(\mathcal{M}\right)}^{\frac{1}{p-1}} \leq c_{p}^{\frac{1}{p-1}}\|a\|^{\frac{1}{p-1}}_{L_{p'}^ { v}\left(\mathcal{M}\right)} \leq c_{p}^\frac{2}{p-1}\|x\|_{L_{p}^ { w}\left(\mathcal M\right)}. \end{displaymath} That is, we have found the majorant of $(x_n)_{n\geq 0}$ whose $L_p^ { {w}}$ norm is bounded by $c_{p}^{\frac{2}{p-1}}[ { w}\,]^{\frac{1}{p-1}}_{A_p}\|x\|_{L_p^ { {w}}(\mathcal{M})}$, as desired. \end{proof} We are ready to complete the proof of Theorem \ref{doob2}. \begin{proof}[Proof of Theorem \ref{doob2} for $1<p<2$] Again, we proceed by duality. Fix a weight $ { w}\in A_p$, an arbitrary finite sequence $(a_n)_{n\geq 0}$ of positive operators contained in $L_p^ { w}(\mathcal{M})$ and any $g\in L_{p'}^ { v}(\mathcal{M})$ of norm one. By Theorem \ref{ref}, there exists a majorant $b$ of the martingale $(\mathcal{E}_n(g))_{n\geq 0}$, satisfying $\|b\|_{L_{p'}^ { v}(\mathcal{M})}\leq c_{p'}[v]_{A_{p'}}^{1/(p'-1)}=c_{p'}[w]_{A_p}$. Therefore, by H\"older's inequality, $$ \tau\left(\sum_{n\geq 0} \mathcal{E}_n\left(a_n\right)g\right)=\tau\left(\sum_{n\geq 0}a_n\mathcal{E}_n(g)\right)\leq \tau\left(\sum_{n\geq 0}a_n b\right)\leq c_{p'}[w]_{A_p}\left\|\sum_{n \geq 0}a_n\right\|_{L_p^ { w}(\mathcal{M})}.$$ The proof is completed by taking the supremum over all $g$ as above. \end{proof} The above proof works without any assumption on the regularity of the filtration. We would like to conclude this section by an example showing that in this general context, the standard self-improving properties and reverse H\"older inequalities may fail for $A_p$ weights. \begin{remark}\label{nonhomog} Consider the sequence $a_n=2^{-n}(n!)^{-1}$, $n=0,\,1,\,2,\,\ldots$. 
On the (commutative) probability space $([0,1],\mathcal{B}([0,1]),|\cdot|)$, consider the filtration $(\F_n)_{n\geq 0}$, where $\F_n$ is generated by the intervals $[0,a_n]$, $(a_n,a_{n-1}]$, $(a_{n-1},a_{n-2}]$, $\ldots$, $(a_1,a_0]$. Let $w$ be the weight given by $w=\sum_{n=0}^\infty n! \chi_{(a_{n+1},a_n]}$. This is an $A_1$ weight with $[w]_{A_1}\leq 2$: indeed, all the atoms of the filtration are of the form $(a_{n+1},a_n]$ or $[0,a_n]$ for some $n\geq 0$, and \begin{align*} \frac{1}{|(a_{n+1},a_n]|}\int_{a_{n+1}}^{a_n} w\,\mbox{d}x\leq \frac{1}{|[0,a_n]|}\int_0^{a_n} w\,\mbox{d}x&=2^n\cdot n!\sum_{k=n}^\infty k! \frac{2k+1}{2^{k+1}(k+1)!}\\ &\leq 2^n\cdot n!\sum_{k=n}^\infty \frac{1}{2^k}=2\cdot n!=2\operatorname*{essinf}_{[0,a_n]}w=2\operatorname*{essinf}_{(a_{n+1},a_n]}w. \end{align*} Furthermore, for any $\alpha>1$ the function $w^\alpha$ is not integrable: the series $$ \sum_{n=0}^\infty \frac{(n!)^\alpha(2n+1)}{2^{n+1}(n+1)!}$$ diverges. Therefore, $w$ cannot satisfy any reverse H\"older inequality. Similarly, the self-improvement property does not hold. Given any $1<p<\infty$, we know that $w\in A_{p'}$ (since $A_1\subset A_{p'}$) and hence the dual weight $v=w^{1/(1-p')}=w^{1-p}$ belongs to $A_p$. However, if $v$ belonged to $A_{p-\e}$ for some $0<\e<p-1$, then $v^{1/(1-p+\e)}=w^{(1-p)/(1-p+\e)}$ would be integrable, a contradiction. \end{remark} \subsection{A weighted weak-type bound} The following weighted weak-type inequality is inspired by the result due to Cuculescu \cite{cucu}. As we mentioned before, the projection $I-q_\lambda$ plays the role of the indicator function of the event $\{\sup_{n\geq 0}|x_n|\geq \lambda\}$. Therefore, this result can be regarded as a noncommutative probabilistic version of \eqref{weightweak}. \begin{theorem}\label{weak_theorem} Let $1\leq p<\infty$ and $w \in A_p$.
Then for any positive $x \in L_p^w\left(\mathcal{M}\right)$ and any $\lambda > 0$ there exists a projection $q \in \mathcal{M}$ such that $q\mathcal{E}_n\left(x\right)q \leq \lambda$ for all $n \geq 0$ and \begin{equation}\label{weak_type} \lambda\Big[\tau^w\left(I-q\right)\Big]^{1/p} \leq [w]_{A_p}^{1/p}\|x\|_{L_p^w\left(\mathcal M\right)}. \end{equation} The dependence $[w]_{A_p}^{1/p}$ on the characteristic cannot be improved (i.e., the exponent $1/p$ cannot be decreased) already in the commutative case. \end{theorem} \begin{proof} We study the case $p>1$ only; the argument in the boundary case $p=1$ is analogous, and we leave the details to the reader. By homogeneity, it is enough to consider the case $\lambda = 1$. Furthermore, using the $\sigma$-finiteness of $(X,\F_0,\mu)$, we may assume that $\mu(X)<\infty$. We recall the construction of Cuculescu's projections: let $q_{-1}=I$ and for $n \geq 0$ define $q_n$ inductively by the equation \begin{displaymath} q_n = q_{n-1}{I}_{\left[0,1\right]}\left(q_{n-1}\mathcal{E}_n\left(x\right)q_{n-1}\right). \end{displaymath} The sequence $\left(q_n\right)_{n\geq -1}$ is nonincreasing and enjoys the following properties (for detailed proofs, see \cite{cucu} or \cite{rand}): \begin{enumerate}[\rm (i)] \item for every $n \geq 0$, $q_n \in \mathcal{M}_n$; \item $q_n$ commutes with $q_{n-1}\mathcal{E}_n\left(x\right)q_{n-1}$; \item $q_n\mathcal{E}_n\left(x\right)q_n \leq q_n$; \item $\left(q_{n-1}-q_n\right)\mathcal{E}_n\left(x\right)\left(q_{n-1}-q_n\right) \geq q_{n-1}-q_n$. \end{enumerate} Set $q = \bigwedge_{n=0}^\infty q_n$.
Then $q\mathcal{E}_n\left(x\right)q = qq_n\mathcal{E}_n\left(x\right)q_nq \leq qq_nq \leq I$, so by the above properties, we obtain \begin{align*} \tau^ { w}\left(I-q_n\right) = \sum_{k=0}^n \tau\left(\left(q_{k-1}-q_k\right) { w}\right) &\leq \sum_{k=0}^n \tau\left(\left(q_{k-1}-q_k\right)\mathcal{E}_k\left(x\right)\left(q_{k-1}-q_k\right) { w}\right) \\ & = \sum_{k=0}^n \tau\left(\left(q_{k-1}-q_k\right)x\left(q_{k-1}-q_k\right)\mathcal{E}_k\left( { w}\right)\right)\\ & = \tau\left(\sum_{k=0}^n \left(q_{k-1}-q_k\right)\mathcal{E}_k\left( { w}\right) { w}^{-\frac{1}{p}}x { w}^{\frac{1}{p}}\right), \end{align*} where in the last line we have exploited the tracial property and commuting of $ { w}$ with all elements of $\mathcal{M}$. By H\"older's inequality, mutual orthogonality of projections $\left(\left(q_{k-1}-q_k\right)\right)_{k \geq 0}$ and the definition of $[w]_{A_p}$, we may proceed as follows (recall that $ { v}= { w}^{1-p'}$ is the dual weight of $ { w}$): \begin{align*} \tau^ { w}\left(I-q_n\right) &\leq \tau\left(\left(\sum_{k=0}^n \left(q_{k-1}-q_k\right)\mathcal{E}_k\left(w\right)\right)^{p'} { w}^{-\frac{1}{p-1}}\right)^{\frac{1}{p'}}\tau\left(x^p { w}\right)^{\frac{1}{p}}\\ & = \left(\sum_{k=0}^n\tau\left( \left(q_{k-1}-q_k\right)\left(\mathcal{E}_k\left( { w}\right)\right)^{p'} { v}\right)\right)^{\frac{1}{p'}}\|x\|_{L_p^ { w}\left(\mathcal{M}\right)}\\ & = \left(\sum_{k=0}^n\tau\left( \left(q_{k-1}-q_k\right)\mathcal{E}_k\left( { w}\right)\left(\mathcal{E}_k\left( { w}\right)\right)^{\frac{1}{p-1}}\mathcal{E}_k\left( { v}\right)\right)\right)^{\frac{1}{p'}}\|x\|_{L_p^ { w}\left(\mathcal{M}\right)}\\ & \leq \left([ { w}]_{A_p}^{\frac{1}{p-1}}\sum_{k=0}^n\tau\left(\left(q_{k-1}-q_k\right)\mathcal{E}_k\left( { w}\right)\right)\right)^{\frac{1}{p'}}\|x\|_{L_p^ { w}\left(\mathcal{M}\right)}\\ & = [ { w}]_{A_p}^{\frac{1}{p}} \left(\sum_{k=0}^n \tau\left(\left(q_{k-1}-q_k\right) { w}\right)\right)^{\frac{1}{p'}} \|x\|_{L_p^ { 
w}\left(\mathcal{M}\right)}\\ & = [w]_{A_p}^{\frac{1}{p}} \left( \tau^w\left(I-q_n\right)\right)^{\frac{1}{p'}} \|x\|_{L_p^w\left(\mathcal{M}\right)}. \end{align*} But the assumption $\mu(X)<\infty$ implies $w\in L_1$ and hence the trace $\tau^w\left(I-q_n\right)$ is finite. Therefore, multiplying both sides by $\left( \tau^w\left(I-q_n\right)\right)^{-\frac{1}{p'}}$ we obtain $\left( \tau^w\left(I-q_n\right)\right)^{\frac{1}{p}} \leq [w]_{A_p}^{\frac{1}{p}} \|x\|_{L_p^w\left(\mathcal{M}\right)}$, and letting $n \to \infty$ gives the weighted weak-type inequality. \end{proof} \begin{remark}\label{sigma-fin} In our considerations below, we will need versions of Theorem \ref{ref} and Theorem \ref{weak_theorem} for filtrations indexed by $\mathbb{Z}$. Using \eqref{not_difficult}, one easily obtains these statements under the assumption that $(X,\F_n,\mu)$ is $\sigma$-finite for each $n$. \end{remark} \section{Maximal inequalities on metric spaces} In this section, as an application of Theorem \ref{ref}, we establish the noncommutative weighted Hardy-Littlewood maximal inequalities on metric spaces. These results can be considered as noncommutative versions of \eqref{weightlp} and \eqref{weightweak}. In particular, Mei's results \cite[Chapter 3]{Mei} are extended to the weighted case. Suppose that $(X,d)$ is a metric space equipped with the $\sigma$-field $\F$ of its Borel subsets and a Radon measure $\mu$. The symbol $B(x,r)=\{y\in X\,:\,d(y,x)\leq r\}$ stands for the closed ball with center $x$ and radius $r$. We assume the non-degeneracy condition $0<\mu(B)<\infty$ for any ball $B$ of positive radius. Furthermore, we will work with measures $\mu$ satisfying the so-called doubling condition: there exists a finite constant $\kappa$ such that $\mu(B(x,2r))\leq \kappa\mu(B(x,r))$ for all $x\in X$ and $r>0$.
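The doubling condition is easy to probe numerically. The sketch below (our own illustration, for measures given by a density on $\R$ with balls $B(x,r)=[x-r,x+r]$) confirms that Lebesgue measure is doubling with $\kappa=2$, while the measure with density $e^{|t|}$ is not: away from the origin the ratio $\mu(B(x,2r))/\mu(B(x,r))$ equals $\sinh(2r)/\sinh(r)=2\cosh r$, which is unbounded in $r$.

```python
import numpy as np

def ball_mass(density, a, b, n=100_000):
    """mu([a, b]) for the measure with the given density (midpoint rule)."""
    h = (b - a) / n
    t = np.linspace(a, b, n, endpoint=False) + h / 2
    return h * density(t).sum()

def doubling_ratio(density, x, r):
    """mu(B(x, 2r)) / mu(B(x, r)) for closed balls B(x, r) = [x - r, x + r]."""
    return ball_mass(density, x - 2 * r, x + 2 * r) / ball_mass(density, x - r, x + r)

# Lebesgue measure is doubling with kappa = 2: the ratio is identically 2.
assert abs(doubling_ratio(lambda t: np.ones_like(t), 3.0, 1.0) - 2.0) < 1e-9

# The measure with density e^{|t|} is NOT doubling: for balls to the right
# of the origin the ratio is 2*cosh(r), hence unbounded in r.
rho = lambda t: np.exp(np.abs(t))
assert abs(doubling_ratio(rho, 20.0, 1.0) - 2 * np.cosh(1.0)) < 1e-3
assert doubling_ratio(rho, 20.0, 5.0) > 100
```

This is only a sanity check on the hypothesis; none of the results below use it beyond the stated inequality.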
Given $1<p<\infty$ and a weight $w$ on $X$, we say that $w$ satisfies Muckenhoupt's condition $A_p$ if its $A_p$ characteristic $$ [w]_{A_p}:=\sup_{x\in X,\,r>0} \left(\frac{1}{\mu(B(x,r))}\int_{B(x,r)}w\,\mbox{d}\mu\right)\left(\frac{1}{\mu(B(x,r))}\int_{B(x,r)}w^{1/(1-p)}\,\mbox{d}\mu\right)^{p-1}$$ is finite. A weight $w$ belongs to the class $A_1$ if there is a constant $c$ such that for all $r>0$ and all $x\in X$, $$ \frac{1}{\mu(B(x,r))}\int_{B(x,r)}w\,\mbox{d}\mu \leq c\operatorname*{essinf}_{B(x,r)}w.$$ The smallest $c$ with the above property is denoted by $[w]_{A_1}$ and called the $A_1$ characteristic of $w$. Finally, let $\mathcal{N}$ be a semifinite von Neumann algebra with a faithful, normal trace $\nu$, and put $\mathcal{M}=L_\infty(X,\F,\mu)\bar{\otimes}\mathcal{N}$. Given $1\leq p<\infty$ and $r>0$, define the averaging operator $\mathcal{A}_r$, acting on locally integrable functions $f:X\to L_p(\mathcal{N})$, by the formula $$ \mathcal{A}_rf(x)=\frac{1}{\mu(B(x,r))}\int_{B(x,r)} f\,\mbox{d}\mu,\qquad x\in X.$$ In particular, if $1\leq p<\infty$ and $w\in A_p$, then $\mathcal{A}_rf$ is well defined for $f\in L_p^w(\mathcal{M})$: any $f\in L_p^w(\mathcal{M})$ is locally integrable as a function from $X$ to $L_1(\mathcal{N})$. Indeed, if $p>1$, then H\"older's inequality gives $$ \int_{B(x,r)}\|f\|_{L_1(\mathcal{N})}\,\mbox{d}\mu\leq \|f\|_{L_p^w(\mathcal{M})}\left(\int_{B(x,r)} w^{1/(1-p)}\,\mbox{d}\mu\right)^{(p-1)/p}<\infty.$$ For $p=1$ the argument is even simpler: $\int_{B(x,r)}\|f\|_{L_1(\mathcal{N})}\,\mbox{d}\mu\leq \|f\|_{L_1^w(\mathcal{M})}\operatorname*{esssup}_{B(x,r)}w^{-1}<\infty$, the essential supremum being finite since the $A_1$ condition guarantees that $w$ is bounded away from zero on every ball. The main result of this section is stated below. It can be regarded as the noncommutative version of \eqref{weightlp} and \eqref{weightweak}, with the extraction of the optimal dependence on $[w]_{A_p}$. \begin{theorem}\label{thm_HL} Let $1\leq p<\infty$ and assume that $w$ is an $A_p$ weight on $X$.
Then for any $f\in L_p^w(\mathcal{M})$ and any $\lambda>0$ there is a projection $q\in \mathcal{M}$ satisfying $$ \lambda \Big[ \tau^w(I-q)\Big]^{1/p}\lesssim_p [w]_{A_p}^{1/p}\|f\|_{L_p^w(\mathcal{M})}$$ and $q\mathcal{A}_rfq\leq \lambda q$ for all $r>0$. Furthermore, if $p>1$, then there exists a constant $c_p$ depending only on $p$ such that for any $f\in L_p^w(\mathcal{M})$, $$ \|(\mathcal{A}_rf)_{r>0}\|_{L_p^w(\mathcal{M};\ell_\infty)}\leq c_p[w]_{A_p}^{1/(p-1)}\|f\|_{L_p^w(\mathcal{M})}.$$ \end{theorem} Our argument will exploit the following fact, proved in \cite[Theorem 4.1]{HK}. \begin{lemma}\label{partit} Let $(X,d)$ be a metric space equipped with a Radon measure $\mu$ satisfying the above requirements. Then there exist a constant $C$ and a finite collection of families $\mathcal{P}^1$, $\mathcal{P}^2$, $\ldots$, $\mathcal{P}^N$, where each $\mathcal{P}^k=(\mathcal{P}^k_j)_{j\in \mathbb{Z}}$ is a sequence of partitions of $X$, such that the following holds. \begin{enumerate}[\rm (i)] \item For each $1\leq k\leq N$ and each $j\in \mathbb{Z}$, the partition $\mathcal{P}^{k}_{j+1}$ is a refinement of $\mathcal{P}^k_j$. \item For all $x\in X$ and $r>0$, there are $1\leq k\leq N$, $j\in \mathbb{Z}$ and an element $Q\in \mathcal{P}^k_j$ such that $B(x,r)\subseteq Q$ and $\mu(Q)\leq C\mu(B(x, r))$. \item Any $Q\in \bigcup_{k,j}\mathcal{P}^k_j$ is contained within some ball $B(x,r)$ such that $\mu(B(x,r))\leq C\mu(Q)$. \end{enumerate} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm_HL}] By a standard decomposition argument, we may assume that $f$ is nonnegative: $f(\omega)\geq 0$ for every $\omega\in X$. Let $N$ be the number guaranteed by the above lemma and fix $k\in \{1,2,\ldots,N\}$. For $n\in \mathbb{Z}$, let $\mathfrak{F}^k_n$ be the $\sigma$-field generated by $\mathcal{P}^k_n$ and denote by $\mathcal{E}^k_n$ the associated conditional expectation.
Note that $w$ satisfies the $A_p$ condition with respect to the filtration $(\mathfrak{F}^k_n)_{n\in \mathbb{Z}}$: by Lemma \ref{partit} (iii), for any $Q\in \bigcup_{n\in \mathbb{Z}} \mathcal{P}^k_n$ we have \begin{align*} & \left(\frac{1}{\mu(Q)}\int_Q w\,\mbox{d}\mu\right)\left(\frac{1}{\mu(Q)}\int_Q w^{1/(1-p)}\,\mbox{d}\mu\right)^{p-1}\\ &\leq C^p \left(\frac{1}{\mu(B(x,r))}\int_{B(x,r)}w\,\mbox{d}\mu\right)\left(\frac{1}{\mu(B(x,r))}\int_{B(x,r)}w^{1/(1-p)}\,\mbox{d}\mu\right)^{p-1}\leq C^p[w]_{A_p}, \end{align*} where $B(x,r)$ is a ball containing $Q$ with $\mu(B(x,r))\leq C\mu(Q)$. An analogous argument works for $p=1$. By Theorem \ref{weak_theorem} and Remark \ref{sigma-fin}, applied to the martingale $(\mathcal{E}^k_nf)_{n\in \mathbb{Z}}$, for any $\lambda>0$ there exists a projection $q_k$ such that $q_k\mathcal{E}_n^kfq_k\leq \lambda$ for all $n\in \mathbb{Z}$ and $\lambda \big[\tau^{w}(I-q_k)\big]^{1/p}\leq C[w]_{A_p}^{1/p}\|f\|_{L_p^w(\mathcal{M})}$. Take $q=\bigwedge_{k=1}^N q_k $, the projection onto the intersection of the ranges of $q_1,\,q_2,\,\ldots,\,q_N$. Since $I-\bigwedge_{k=1}^N q_k\leq \sum_{k=1}^N (I-q_k)$, we get $$ \lambda \big[\tau^w(I-q)\big]^{1/p}\leq N^{1/p}C[w]_{A_p}^{1/p}\|f\|_{L_p^w(\mathcal{M})}.$$ Now we apply the second part of Lemma \ref{partit}: given an arbitrary ball $B(x,r)$, there are $k$, $n$ and a set $Q\in \mathcal{P}^k_n$ such that $B(x,r)\subseteq Q$ and $\mu(Q)\leq C\mu(B(x,r))$. Therefore, \begin{equation}\label{bodd} \mathcal{A}_rf(x)=\frac{1}{\mu(B(x,r))}\int_{B(x,r)}f\,\mbox{d}\mu\leq \frac{C}{\mu(Q)}\int_Q f\,\mbox{d}\mu=C\mathcal{E}^k_nf(x) \end{equation} and consequently $q\mathcal{A}_rfq\leq C\lambda$ for all $r$. This proves the weighted weak-type inequality for $(\mathcal{A}_rf)_{r>0}$.
Concerning the strong-type estimate, note that \eqref{bodd} yields \begin{align*} \|(\mathcal{A}_rf)_{r>0}\|_{L_p^w(\mathcal{M};\ell_\infty)}&\leq C\left\|\left(\sum_{k=1}^N \mathcal{E}^k_nf\right)_{n\in \mathbb{Z}}\right\|_{L_p^w(\mathcal{M};\ell_\infty)}\\ &\leq C\sum_{k=1}^N\left\|\left( \mathcal{E}^k_nf\right)_{n\in \mathbb{Z}}\right\|_{L_p^w(\mathcal{M};\ell_\infty)}\leq C'N[w]_{A_p}^{1/(p-1)}\|f\|_{L_p^w(\mathcal{M})}. \end{align*} This gives the claim. \end{proof} \begin{remark} In particular, one may apply the above estimates in the context where $X$ is a locally compact group $G$, equipped with an invariant metric $d$ and the right-invariant Haar measure $m$. The averaging operators $$ \mathcal{A}_rf(g)=\frac{1}{m(B(g,r))}\int_{B(g,r)}f(h)\mbox{d}m(h)=\frac{1}{m(B(e,r))}\int_{B(e,r)} f(gh)\mbox{d}m(h)$$ appear naturally in the study of ergodic theorems concerning the action of amenable groups on noncommutative $L_p$ spaces (cf. \cite{HLW}). \end{remark} \section{A weighted inequality for maximal singular integrals} The next application of Theorem \ref{ref} concerns weighted bounds for maximal singular integrals of operator-valued functions in dimension one. Let us start with some motivation. The Hilbert transform $\mathcal{H}$, a fundamental object in harmonic analysis, is the operator which acts on locally integrable functions $f:\R\to \R$ by $$ \mathcal{H}f(s)=\mbox{p.v.}\frac{1}{\pi}\int_\R \frac{f(t)}{s-t}\mbox{d}t.$$ Here `p.v.' refers to the principal value of the integral: $ \mathcal{H}f(s)=\lim_{\e\downarrow 0} \mathcal{H}^{\e}f(s),$ where $$ \mathcal{H}^\e f(s)=\frac{1}{\pi}\int_{|s-t|>\e} \frac{f(t)}{s-t}\mbox{d}t$$ is the truncated Hilbert transform. The above limiting procedure makes sense for certain vector-valued functions as well: one can define $\mathcal{H}f$ for $f$ taking values in the so-called UMD Banach spaces. Recall that a Banach space $\mathbb{B}$ is UMD (Unconditional Martingale Differences) if the following holds.
For some (equivalently, for all) $1<p<\infty$, there exists a finite constant $c_{p,\mathbb{B}}$ such that for any (classical, commutative) martingale difference $d=(d_k)_{k\geq 0}$ with values in $\mathbb{B}$, given on some filtered probability space $(\Omega,\F,(\F_k)_{k\geq 0},\mathbb{P})$, and any deterministic sequence $\e=(\e_k)_{k\geq 0}$ with values in $[-1,1]$ we have $$ \left\|\sum_{k=0}^n \e_kd_k\right\|_{L_p(\Omega;\mathbb{B})}\leq c_{p,\mathbb{B}}\left\|\sum_{k=0}^n d_k\right\|_{L_p(\Omega;\mathbb{B})},\qquad n=0,\,1,\,2,\,\ldots.$$ Here the probability space, as well as the filtration, are allowed to vary. Note that for any $1<p<\infty$ and any von Neumann algebra $\mathcal{N}$, the space $L_p(\mathcal{N})$ is UMD: this follows directly from \eqref{Lp}, applied to $\mathcal{M}=L_\infty(\Omega,\F,\mathbb{P})\overline{\otimes} \mathcal{N}$. Next, a well-known result of Burkholder \cite{B1.1} asserts that if $\mathbb{B}$ is a UMD space, then $\|\mathcal{H}\|_{L_p(\R;\mathbb{B})\to L_p(\R;\mathbb{B})}\lesssim c_{p,\mathbb{B}}^2.$ Putting all the above facts together, we see that the action of the Hilbert transform on $L_p(\mathcal{M})=L_p(L_\infty(\R)\overline{\otimes} \mathcal{N})$, the space of $L_p(\mathcal{N})$-valued functions on $\R$, is well defined and bounded for $1<p<\infty$. One can also study analogous \emph{weighted} $L_p$ estimates for martingale transforms and the Hilbert transform. It follows from the results of Lacey \cite{Lac} that if $d=(d_k)_{k\geq 0}$ is a martingale difference with values in a UMD space $\mathbb{B}$, $\e=(\e_k)_{k\geq 0}$ is a predictable sequence of signs and $w$ is an $A_p$ weight on $\Omega$, then we have \begin{equation}\label{w_transform} \left\|\sum_{k=0}^n \e_kd_k\right\|_{L_p^w(\Omega;\mathbb{B})}\lesssim [w]_{A_p}^{\max\{1/(p-1),1\}}\left\|\sum_{k=0}^n d_k\right\|_{L_p^w(\Omega;\mathbb{B})}.
\end{equation} Here $\|f\|_{L_p^w(\Omega;\mathbb{B})}=\left(\int_\Omega \|f\|_\mathbb{B}^p\,w\,\mbox{d}\mathbb{P}\right)^{1/p}$. Moreover, we have $\|\mathcal{H}\|_{L_p^w(\R;\mathbb{B})\to L_p^w(\R;\mathbb{B})}\lesssim [w]_{A_p}^{\max\{(p-1)^{-1},1\}}$. The exponent ${\max\{(p-1)^{-1},1\}}$ is optimal in both estimates above. In particular, specifying $\mathbb{B}=L_p(\mathcal{N})$ and $\mathcal{M}=L_\infty(\R)\overline{\otimes} \mathcal{N}$, as above, we get the corresponding version for noncommutative martingale transforms and \begin{equation}\label{Ha} \|\mathcal{H}\|_{L_p^w(\mathcal{M})\to L_p^w(\mathcal{M})}\lesssim [w]_{A_p}^{\max\{(p-1)^{-1},1\}}. \end{equation} There is another, related operator playing an important role in harmonic analysis: the so-called maximal truncation $\mathcal{H}^*$, given by $\mathcal{H}^*f=\sup_{\e>0}|\mathcal{H}^\e f|$. This operator also satisfies the weighted bound \eqref{Ha}, which can be handled with the use of Cotlar's inequality or the direct majorization in terms of sparse operators (see \cite{Lac}). Both these approaches exploit a number of pointwise estimates which cannot be used in the noncommutative context. The purpose of this section is to establish a noncommutative version of \eqref{Ha} for the maximal truncations (with a slightly worse dependence on $[w]_{A_p}$). On the positive side, we will work with the more general class of convolution-type singular integrals on $\R$. Throughout, we assume that $K:(-\infty,0)\cup(0,\infty)\to \R$ is an odd, twice differentiable function (in the sense that $K'$ is absolutely continuous) which satisfies \begin{equation}\label{cond1} \lim_{s\to \infty} K(s)=\lim_{s\to \infty}K'(s)=0 \end{equation} and \begin{equation}\label{cond2} s^3K''(s)\in L^\infty(\R).
\end{equation} We denote by $T_K$ the associated one-dimensional singular integral operator, defined by $$ T_Kf(s)=\operatorname{p.v.}\int_\R f(t)K(s-t)\mbox{d}t=\lim_{\e\downarrow 0}T_K^\e f(s),$$ where $T_K^\e f(s)$ is the truncation at level $\e$: $$ T_K^\e f(s)=\int_{|s-t|>\e} f(t)K(s-t)\mbox{d}t.$$ In analogy to the above setting, we may also introduce the maximal truncation $T^*_K$ by $T^*_Kf=\sup_{\e>0}|T^\e_Kf|$. In all the above definitions, $f$ is allowed to be vector-valued. Note that the choice $K(s)=1/(\pi s)$ brings us back to the context of the Hilbert transform. As shown by Vagharshakyan \cite[Theorem 2.4]{Va}, the operator $T_K$ can be expressed as an average of appropriate one-dimensional dyadic shifts. To recall the necessary definitions, let $\varphi$, $\psi:\R\to \R$ be two functions supported on the unit interval $[0,1]$ and given there by the formulas $$ \varphi(x)=\begin{cases} -1 & \mbox{if }0\leq x<1/4,\\ 1 & \mbox{if }1/4\leq x< 3/4,\\ -1 & \mbox{if }3/4\leq x\leq 1 \end{cases} \qquad \mbox{and}\qquad \psi(x)=\begin{cases} 7 &\mbox{if }0\leq x<1/4,\\ -1 & \mbox{if }1/4\leq x< 1/2,\\ 1 & \mbox{if }1/2\leq x<3/4,\\ -7 & \mbox{if }3/4\leq x\leq 1.
\end{cases}$$ For any (real or vector-valued) function $f$ on $\R$ and any interval $I=[a,b]$, we define the scaled function $f_I$ by $$ f_I(x)=\frac{1}{\sqrt{b-a}}f\left(\frac{x-a}{b-a}\right),\qquad x\in \R.$$ For any $\beta=\{\beta_l\}\in \{0,1\}^\mathbb{Z}$ and any $r\in [1,2)$, we define the dyadic grid $\mathbb{D}_{r,\beta}$ to be the following collection of intervals (see \cite{NTV} for the motivation and basic properties of this family): $$ \mathbb{D}_{r,\beta}=\left\{ r2^n\left([0,1)+k+\sum_{i<n} 2^{i-n}\beta_i\right)\right\}_{n\in\mathbb{Z},k\in \mathbb{Z}}.$$ We equip $\{0,1\}^\mathbb{Z}$ with the uniform probability measure $\mu$, uniquely determined by the requirement $$ \mu(\{\beta:(\beta_{i_1},\beta_{i_2},\ldots,\beta_{i_n})=a\})=2^{-n}$$ for any $n$, any sequence $i_1<i_2<\ldots<i_n$ of integers and any $a\in \{0,1\}^n$. The aforementioned result of Vagharshakyan asserts the following. \begin{theorem}\cite[Theorem 2.4]{Va} Suppose that the kernel $K$ satisfies \eqref{cond1} and \eqref{cond2}. Then there exists a coefficient function $\gamma:(0,\infty)\to \R$ satisfying $$ \|\gamma\|_\infty\leq C\|s^3K''(s)\|_\infty$$ such that \begin{equation}\label{defK} K(t-s)=\int_{\{0,1\}^\mathbb{Z}}\int_1^2 \sum_{I\in \mathbb{D}_{r,\beta}} \gamma(|I|)\varphi_I(s)\psi_I(t)\frac{\mbox{d}r}{r}\mbox{d}\mu(\beta) \end{equation} for all $s\neq t$. Here $C$ is some absolute constant and the series on the right is absolutely convergent almost everywhere. \end{theorem} In other words, $T_K$ can be expressed as an average of the Haar shift operators $$ T_{r,\beta}f=\sum_{I\in \mathbb{D}_{r,\beta}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I,$$ where $\langle f,g\rangle=\int_\R fg$. Such objects can be handled with the use of martingale methods. We are ready to establish the main result of this section.
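Before doing so, let us record a quick numerical sanity check of the objects just introduced. The sketch below is a toy computation: a finitely supported bit pattern stands in for a general $\beta\in\{0,1\}^\mathbb{Z}$. It verifies that $\varphi$ and $\psi$ have mean zero and are orthogonal on $[0,1]$, and that the shifted grid $\mathbb{D}_{r,\beta}$ is nested: every interval sits inside an interval of the next scale.

```python
import math
import numpy as np

# phi and psi from the text, sampled at midpoints of a fine partition of [0,1]
x = (np.arange(4000) + 0.5) / 4000
phi = np.where(x < 0.25, -1.0, np.where(x < 0.75, 1.0, -1.0))
psi = np.where(x < 0.25, 7.0,
      np.where(x < 0.5, -1.0, np.where(x < 0.75, 1.0, -7.0)))

assert abs(phi.mean()) < 1e-12          # zero mean
assert abs(psi.mean()) < 1e-12
assert abs((phi * psi).mean()) < 1e-12  # orthogonality on [0, 1]

def interval(r, beta, n, k):
    """The interval r*2^n*([0,1) + k + sum_{i<n} 2^(i-n)*beta_i)."""
    s = sum(2.0 ** (i - n) * b for i, b in beta.items() if i < n)
    a = r * 2.0 ** n * (k + s)
    return a, a + r * 2.0 ** n

beta = {-2: 1, 0: 1, 1: 1}   # all other bits are zero (toy choice)
r = 1.3
for n in range(-3, 3):
    for k in range(-5, 5):
        a, b = interval(r, beta, n, k)
        kp = math.floor((k - beta.get(n, 0)) / 2)   # index of the parent
        A, B = interval(r, beta, n + 1, kp)
        assert A <= a + 1e-12 and b <= B + 1e-12    # the grid is nested
```

The parent index formula $k'=\lfloor (k-\beta_n)/2\rfloor$ is a consequence of the relation $s_{n+1}=(s_n+\beta_n)/2$ between the shifts at consecutive scales.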
In what follows, $\mathcal{M}$ is the von Neumann algebra $L_\infty(\R)\overline\otimes \mathcal{N}$, and hence $L_p^w(\mathcal{M})$ can be identified with the class of appropriately integrable operator-valued functions on $\R$. \begin{theorem} For any $1<p<\infty$, any kernel $K$ satisfying the above assumptions and any $A_p$ weight $w$ on the real line, we have the estimate $$ \|(T_K^\e f)_{\e>0}\|_{L_p^w(\mathcal{M};\ell_\infty)}\leq \tilde{C}_p[w]_{A_p}^{1/(p-1)+\max\{1/(p-1),1\}}\|f\|_{L_p^w(\mathcal{M})}.$$ \end{theorem} \begin{proof} Fix $\e>0$ and $f\in L_p^w(\mathcal{M})$: we may treat it as a function on $\R$ with values in $L_0(\mathcal{N})$. Take two real numbers $s$, $t$ satisfying $|s-t|>\e$. Since both $\varphi_I$ and $\psi_I$ are supported on $I$, we see that $\varphi_I(s)\psi_I(t)=0$ if $|I|\leq \e$ and we may rewrite the identity \eqref{defK} in the form $$ K(t-s)=\int_{\{0,1\}^\mathbb{Z}}\int_1^2 \sum_{I\in \mathbb{D}_{r,\beta}, |I|> \e} \gamma(|I|)\varphi_I(s)\psi_I(t)\frac{\mbox{d}r}{r}\mbox{d}\mu(\beta).$$ Therefore we have $$ T_K^\e f(s)=\int_{\{0,1\}^\mathbb{Z}}\int_1^2 \sum_{I\in \mathbb{D}_{r,\beta}, |I|>\e} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I(s)\frac{\mbox{d}r}{r}\mbox{d}\mu(\beta)$$ and hence by Minkowski's inequality, \begin{align*} \|(T_K^\e f)_{\e>0}\|_{L_p^w(\mathcal{M};\ell_\infty)}&\leq \int_{\{0,1\}^\mathbb{Z}}\int_1^2 \left\|\left(\sum_{I\in \mathbb{D}_{r,\beta}, |I|>\e} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I\right)_{\e>0}\right\|_{L_p^w(\mathcal{M};\ell_\infty)}\frac{\mbox{d}r}{r}\mbox{d}\mu(\beta)\\ &\leq \int_{\{0,1\}^\mathbb{Z}}\int_1^2 \left\|\left(\sum_{I\in \mathbb{D}_{r,\beta}, |I|>\e,\atop I\text{ odd}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I\right)_{\e>0}\right\|_{L_p^w(\mathcal{M};\ell_\infty)}\frac{\mbox{d}r}{r}\mbox{d}\mu(\beta)\\ &\quad +\int_{\{0,1\}^\mathbb{Z}}\int_1^2 \left\|\left(\sum_{I\in \mathbb{D}_{r,\beta}, |I|>\e, \atop I\text{ even}} \gamma(|I|)\langle f,\varphi_I\rangle
\psi_I\right)_{\e>0}\right\|_{L_p^w(\mathcal{M};\ell_\infty)}\frac{\mbox{d}r}{r}\mbox{d}\mu(\beta). \end{align*} Here and below, $I\in \mathbb{D}_{r,\beta}$ is called odd (even), if so is the number $\log_2(|I|/r)$. From now on, we will restrict our analysis to `even sums' only; the first summand in the last line above can be dealt with analogously. The sequence $$ \sum_{I\in \mathbb{D}_{r,\beta}, |I|\geq 4^{n},\atop I\text{ even}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I,\qquad n\in \mathbb{Z},$$ is a martingale with respect to its natural filtration. It is crucial here that we assume the `double spread' on $\log_2(|I|/r)$ (i.e., we assume that $\log_2(|I|/r)$ has the fixed parity): thanks to this condition, $(\sum_{I\in \mathbb{D}_{r,\beta}, |I|= r4^{n}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I)_{n\in \mathbb{Z}}$ is a martingale difference sequence. The application of Theorem \ref{ref} yields $$ \left\|\left(\sum_{I\in \mathbb{D}_{r,\beta}, |I|>\e,\atop I\text{ even}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I\right)_{\e>0}\right\|_{L_p^w(\mathcal{M};\ell_\infty)}\leq C_p [w]_{A_p}^{1/(p-1)}\left\|\sum_{I\in \mathbb{D}_{r,\beta},\atop I\text{ even}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I\right\|_{L_p^w(\mathcal{M})}.$$ The next step is to prove that the right-hand side is controlled by $\|f\|_{L_p^w(\mathcal{M})}$. This will follow from the Theorem \ref{UMD_theorem} below. \end{proof} From now on, we move to the classical context; all the functions and processes considered below are commutative. \begin{theorem}\label{UMD_theorem} Suppose that $\mathbb{B}$ is a UMD space and $f:\R\to \mathbb{B}$ is a Bochner integrable function. Then for $1<p<\infty$ and any $A_p$ weight $w$ on $\R$ we have \begin{equation}\label{weightedd} \left\|\sum_{I\in \mathbb{D}_{r,\beta},\atop I\text{ even}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I\right\|_{L_p^w(\R;\mathbb{B})}\leq C_p\|\gamma\|_\infty [w]_{A_p}^{\max\{1/(p-1),1\}}\|f\|_{L_p^w(\R;\mathbb{B})}. 
\end{equation} The same estimate holds if the sum on the left is taken over the odd intervals $I\in \mathbb{D}_{r,\beta}$. \end{theorem} In the real-valued case this result follows from Theorem 5.1 in \cite{HPTV}; the context of UMD spaces requires more effort. We expect that the result is known in the vector-valued setting as well; however, the proof presented below uses a number of novel arguments from martingale theory. We will exploit the so-called sparse operators, which have gained a lot of interest in the recent literature (we mention here the convenient references: Domelevo and Petermichl \cite{DP}, Lerner \cite{Le2}, Lorist \cite{Lo}, which contain argumentation related to that below). Let us briefly outline our approach. The idea is to control pointwise the sum in \eqref{weightedd} by a similar expression in which the summation is taken over a much smaller collection of intervals, satisfying the so-called \emph{sparseness} condition (the formal definitions will appear later). The proof of such a domination rests on an unweighted, weak-type version of \eqref{weightedd}, which will be obtained with the use of classical martingales; having established the control, one shows the weighted estimate by a change-of-measure argument, similar to that used in Section 3. We proceed to the formal analysis. The starting point is the following $L_p$ bound, the unweighted version of Theorem \ref{UMD_theorem}. \begin{theorem}\label{UMD_theorem2} Suppose that $\mathbb{B}$ is a UMD space and $f:\R\to \mathbb{B}$ is a Bochner integrable function. Then for $1<p<\infty$ and any bounded sequence $(\gamma(I))_{I\in \mathbb{D}_{r,\beta}}$ we have $$ \left\|\sum_{I\in \mathbb{D}_{r,\beta},\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \psi_I\right\|_{L_p(\R;\mathbb{B})}\leq C_p\|\gamma\|_\infty \|f\|_{L_p(\R;\mathbb{B})}.$$ The same estimate holds if the sum on the left is taken over the odd intervals $I\in \mathbb{D}_{r,\beta}$.
\end{theorem} \begin{proof} We will apply the $L_p$ estimate for martingale transforms three times, with respect to different filtrations. \smallskip \emph{Step 1.} Consider the truncated version of $\psi$, given by $$ \zeta(x)=\begin{cases} 1 &\mbox{if }0<x<1/4,\\ -1 & \mbox{if }1/4\leq x< 1/2,\\ 1 & \mbox{if }1/2\leq x<3/4,\\ -1 & \mbox{if }3/4\leq x\leq 1. \end{cases}$$ Then for any even integers $b<c$ we have \begin{equation}\label{first_step} \begin{split} &\left\|\sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \psi_I\right\|_{L_p(\R;\mathbb{B})} \leq c_p\left\|\sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \zeta_I\right\|_{L_p(\R;\mathbb{B})}. \end{split} \end{equation} To see this, split the function $\psi$ into two, the outer and the inner part: $$ \psi^{\text{out}}(x)=\begin{cases} 7 & \mbox{if }0\leq x<1/4,\\ 0 & \mbox{if }1/4\leq x< 3/4,\\ -7 & \mbox{if }3/4\leq x\leq 1 \end{cases} \qquad \mbox{and}\qquad \psi^{\text{inn}}(x)=\begin{cases} 0 &\mbox{if }0<x<1/4,\\ -1 & \mbox{if }1/4\leq x< 1/2,\\ 1 & \mbox{if }1/2\leq x<3/4,\\ 0 & \mbox{if }3/4\leq x\leq 1. \end{cases}$$ Introduce the corresponding versions for $\zeta$: then $\zeta^{\text{out}}=\psi^{\text{out}}/7$ and $\zeta^{\text{inn}}=\psi^{\text{inn}}$.
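A quick numerical check of this outer/inner decomposition (a toy computation on a discretized copy of $[0,1)$): the pieces recombine to $\psi$ and $\zeta$, have mean zero, and satisfy $\zeta^{\text{out}}=\psi^{\text{out}}/7$, $\zeta^{\text{inn}}=\psi^{\text{inn}}$ pointwise, which is exactly what makes the two martingales below transforms of each other with multipliers in $\{1,7\}$.

```python
import numpy as np

x = (np.arange(400) + 0.5) / 400      # midpoints of a partition of [0, 1)

def step(vals):
    """Function constant on the four quarters of [0, 1)."""
    return np.select([x < .25, x < .5, x < .75, x >= .75],
                     [float(v) for v in vals])

psi      = step([7, -1, 1, -7])
zeta     = step([1, -1, 1, -1])
psi_out  = step([7,  0, 0, -7])
psi_inn  = step([0, -1, 1,  0])
zeta_out = step([1,  0, 0, -1])
zeta_inn = step([0, -1, 1,  0])

assert np.allclose(psi,  psi_out + psi_inn)
assert np.allclose(zeta, zeta_out + zeta_inn)
assert np.allclose(zeta_out, psi_out / 7)   # transform coefficient 7
assert np.allclose(zeta_inn, psi_inn)       # transform coefficient 1
for g in (psi_out, psi_inn, zeta_out, zeta_inn):
    assert abs(g.mean()) < 1e-12            # mean zero: martingale differences
```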
We have \begin{equation}\label{mart1} \sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \psi_I= \sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \bigg[\gamma(I)\langle f,\varphi_I\rangle \psi_I^{\text{out}}+ \gamma(I)\langle f,\varphi_I\rangle \psi_I^{\text{inn}}\bigg] \end{equation} and \begin{equation}\label{mart2} \sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \zeta_I= \sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \bigg[\gamma(I)\langle f,\varphi_I\rangle \zeta_I^{\text{out}}+ \gamma(I)\langle f,\varphi_I\rangle \zeta_I^{\text{inn}}\bigg]. \end{equation} Since $\psi^{\text{inn}}$, $\psi^{\text{out}}$ have integral zero, the partial sums corresponding to the right-hand sides of \eqref{mart1} and \eqref{mart2} are martingales. Specifically, if $n$ is an even integer between $b$ and $c$, then the $n$-th differences are $$\sum_{I\in \mathbb{D}_{r,\beta},|I|=r2^n}\gamma(I)\langle f,\varphi_I\rangle \psi_I^{\text{out}}\qquad\mbox{and}\qquad \sum_{I\in \mathbb{D}_{r,\beta},|I|=r2^n}\gamma(I)\langle f,\varphi_I\rangle \zeta_I^{\text{out}},$$ while for odd $n$ (satisfying $b\leq n-1\leq c$), the differences are $$\sum_{I\in \mathbb{D}_{r,\beta},|I|=r2^{n-1}}\gamma(I)\langle f,\varphi_I\rangle \psi_I^{\text{inn}}\qquad\mbox{and}\qquad \sum_{I\in \mathbb{D}_{r,\beta},|I|=r2^{n-1}}\gamma(I)\langle f,\varphi_I\rangle \zeta_I^{\text{inn}}.$$ Furthermore, by the above discussion, the martingale associated with \eqref{mart1} is the transform of the martingale in \eqref{mart2} by a predictable sequence with values in $\{1,7\}$. This yields \eqref{first_step}.
\smallskip \emph{Step 2.} Now we will prove that for any even integers $b<c$ we have \begin{equation}\label{second_step} \begin{split} &\left\|\sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \zeta_I\right\|_{L_p(\R;\mathbb{B})} \leq c_p\left\|\sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \varphi_I\right\|_{L_p(\R;\mathbb{B})}. \end{split} \end{equation} The argument is the same as previously, but we need a different filtration. Namely, we take $ \zeta^{\text{out}}=\zeta \chi_{[0,1/2)}$, $\zeta^{\text{inn}}=\zeta \chi_{[1/2,1)}$ and similarly for $\varphi^{\text{out}}$ and $\varphi^{\text{inn}}$. Then $\zeta^{\text{out}}=-\varphi^{\text{out}}$ and $\zeta^{\text{inn}}=\varphi^{\text{inn}}$, so the corresponding `finer' martingales associated with the left- and the right-hand side of \eqref{second_step} are transforms of each other by a predictable sequence of signs. \smallskip \emph{Step 3.} The final part is to note that \begin{equation}\label{third_step} \begin{split} &\left\|\sum_{I\in \mathbb{D}_{r,\beta},2^b\leq |I|/r\leq 2^c,\atop I\text{ even}} \gamma(I)\langle f,\varphi_I\rangle \varphi_I\right\|_{L_p(\R;\mathbb{B})}\leq c_p\|\gamma\|_\infty \|f\|_{L_p(\R;\mathbb{B})}. \end{split} \end{equation} Let $\zeta^{\text{inn}}$ and $\zeta^{\text{out}}$ be the functions introduced in Step 1 above. It is easy to see that the collection $\{\varphi_I,\zeta_I^{\text{inn}},\zeta_I^{\text{out}}\}_{I\in \mathbb{D}_{r,\beta},\,I\text{ even}}$ is a basis in $L_p(\R;\mathbb{B})$ for any fixed $r$ and $\beta$: it spans the same subspaces as the Haar basis, under scaling and translation.
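The spanning claim can be checked by a small rank computation: on a single interval, the quarter-values of $\varphi$, $\zeta^{\text{inn}}$, $\zeta^{\text{out}}$ (the Step 1 versions) form three linearly independent mean-zero vectors, hence together with the constants they span everything the three Haar functions living on that interval span. A minimal numerical sketch:

```python
import numpy as np

# Quarter-values on a single interval I of phi, zeta_inn, zeta_out (Step 1)
phi      = np.array([-1.,  1., 1., -1.])
zeta_inn = np.array([ 0., -1., 1.,  0.])
zeta_out = np.array([ 1.,  0., 0., -1.])
const    = np.ones(4)

M = np.vstack([phi, zeta_inn, zeta_out])
assert np.allclose(M @ const, 0.0)       # all three have mean zero
assert np.linalg.matrix_rank(M) == 3     # they span the mean-zero space
assert np.linalg.matrix_rank(np.vstack([M, const])) == 4
```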
Expanding $f\in L_p(\R;\mathbb{B})$ into this basis, we get $$ f=\sum_{I\in \mathbb{D}_{r,\beta},\,I \text{ even}} \Big(\langle f,\varphi_I\rangle \varphi_I+\langle f,\zeta_I^{\text{inn}}\rangle \zeta_I^{\text{inn}}+\langle f,\zeta_I^{\text{out}}\rangle \zeta^{\text{out}}_I\Big)$$ and we see that the sum on the left of \eqref{third_step} is obtained by skipping some of the above terms and multiplying the others by the corresponding coefficients $\gamma(I)$. Thus \eqref{third_step} follows from the $L_p$ estimate for martingale transforms, where the transforming sequence takes values in the set $\{0,\gamma(I)\}_{I\in \mathbb{D}_{r,\beta}}$. Putting the above three steps together and letting $b\to-\infty$, $c\to \infty$, we get the desired assertion. \end{proof} \begin{remark} One might repeat the above argument, replacing the $L_p$ space with its weighted version $L_p^w$. Then one gets the estimate \eqref{weightedd}, but with a worse dependence on the characteristic: $[w]_{A_p}^{3\max\{1/(p-1),1\}}$. \end{remark} Now let us fix some additional notation. From now on, we will work with a single dyadic lattice $\mathbb{D}_{1,0}$. Given $\Omega\in \mathbb{D}_{1,0}$ with $|\Omega|=4^N$ for some integer $N$, we introduce its filtration $(\F_n^\Omega)_{n\geq 0}$ defined by $\F_0^\Omega=\{\emptyset,\Omega\}$ and, for any $n\geq 0$, \begin{align*} \F_{2n+1}^\Omega&=\sigma\Big(\big\{\varphi_I\,:\,I\mbox{ is a dyadic subinterval of }\Omega,\,|I|=4^{-n}|\Omega|\big\}\Big),\\ \F_{2n+2}^\Omega&=\sigma\Big(\big\{\psi_I\,:\,I\mbox{ is a dyadic subinterval of }\Omega,\,|I|=4^{-n}|\Omega|\big\}\Big). \end{align*} Next, suppose that $f\in L_1(\R;\mathbb{B})$ is a given function, let $\gamma=\{\gamma(I)\}_{I\in \mathbb{D}_{1,0}}$ be an arbitrary sequence bounded by $1$ and define $g^{\Omega}=\sum_{I\in \mathbb{D}_{1,0},\,I\subseteq \Omega,\,I\text{ even}}\gamma(I)\langle f,\varphi_I\rangle \psi_I$.
Let $(f_n^\Omega)_{n\geq 0}$, $(g_n^\Omega)_{n\geq 0}$ be the martingales generated by $f|_\Omega$ and $g^\Omega|_\Omega$, relative to the filtration $\F^\Omega$. It is easy to check that the associated differences are $df_0^\Omega=\frac{1}{|\Omega|}\int_\Omega f$, $dg_0^\Omega=0$ and for $n\geq 0$, $$\begin{array}{lll} \displaystyle &\displaystyle df_{2n+1}^\Omega=\sum_{|I|/|\Omega|=4^{-n}} \langle f,\varphi_I\rangle \varphi_I,\qquad &\displaystyle dg_{2n+1}^\Omega=0\\ &\displaystyle df_{2n+2}^\Omega=\sum_{|I|/|\Omega|=4^{-n}}\Big(\langle f,\zeta_I^{\text{inn}}\rangle \zeta_I^{\text{inn}}+\langle f,\zeta_I^{\text{out}}\rangle \zeta^{\text{out}}_I\Big), \qquad &\displaystyle dg_{2n+2}^\Omega=\sum_{|I|/|\Omega|=4^{-n}} \gamma(I)\langle f,\varphi_I\rangle \psi_I, \end{array}$$ where $\zeta^{\text{inn}}$, $\zeta^{\text{out}}$ have been defined in Step 1 of the proof of the previous theorem. Observe that $\|dg_{2n+2}^\Omega\|_\mathbb{B}\leq 7\|df_{2n+1}^\Omega\|_\mathbb{B}$ for all $n$. Furthermore, note that the real-valued variables $(\|df_n^\Omega\|_\mathbb{B})_{n\geq 0}$ are predictable: for any $n\geq 1$, $\|df_n^\Omega\|_\mathbb{B}$ is $\F_{n-1}^\Omega$-measurable. \begin{theorem}\label{UMD_theorem3} Under the above notation, there is a universal constant $C$ for which \begin{equation}\label{auxil_weak} \left\|\sup_{n\geq 0}\Big\|g_n^\Omega\Big\|_\mathbb{B}\right\|_{L_{1,\infty}(\Omega;\R)} \leq C\left\|f\right\|_{L_{1}(\Omega;\mathbb{B})}. \end{equation} \end{theorem} \begin{proof} We will use the previous theorem combined with the extrapolation (good-lambda) method of Burkholder and Gundy. 
Fix $\beta>1$, $\delta \in (0,1)$ (the values will be specified later) and introduce the stopping times $\mu,\,\nu,\,\sigma$ by \begin{align*} \mu&=\inf\{n\geq 0: \|g_n^\Omega\|_\mathbb{B}\geq 1 \},\\ \nu& =\inf\{n\geq 0:\|g_n^\Omega\|_\mathbb{B}\geq \beta\}\\ \sigma&=\inf\left\{n\geq 0: \|f_n^\Omega\|_\mathbb{B} \vee \|df_{n+1}^\Omega\|_\mathbb{B}\geq \delta \right\}, \end{align*} with the standard convention $\inf\emptyset=\infty$ and $a\vee b=\max\{a,b\}$. To see that $\sigma$ is also a stopping time, one needs to refer to the predictability of $(\|df_n^\Omega\|_\mathbb{B})_{n\geq 0}$ discussed above. Denoting by $a\wedge b$ the minimum of $a$ and $b$, we may write \begin{equation}\label{Chebyshev} \begin{split} \mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq \beta,\,\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee\|df_{n+1}^\Omega\|_\mathbb{B})<\delta \right)&=\mathbb{P}(\mu\leq \nu <\infty,\,\sigma=\infty)\\ &\leq \mathbb{P}(\|g_{\nu\wedge \sigma}^\Omega-g_{\mu\wedge \sigma}^\Omega\|_\mathbb{B}\geq \beta-1-7\delta). \end{split} \end{equation} Here the latter passage is due to the triangle inequality: on the set $\{\mu\leq \nu <\infty,\,\sigma=\infty\}$ we have $\|g_{\nu\wedge \sigma}^\Omega\|_\mathbb{B}\geq \beta$ and $ \|g_{\mu\wedge \sigma}^\Omega\|_\mathbb{B}\leq \|g_{\mu\wedge \sigma-1}^\Omega\|_\mathbb{B}+7\|df_{\mu\wedge \sigma-1}^\Omega\|_\mathbb{B}\leq 1+7\delta$. Now, by Chebyshev's inequality, the last expression in \eqref{Chebyshev} does not exceed $\|g_{\nu\wedge \sigma}^\Omega-g_{\mu\wedge \sigma}^\Omega\|_{L_2(\Omega;\mathbb{B})}^2/(\beta-1-7\delta)^2$. The previous theorem implies that $$\|g_{\nu\wedge \sigma}^\Omega-g_{\mu\wedge \sigma}^\Omega\|_{L_2(\Omega;\mathbb{B})}\leq C_2\|f_{\nu\wedge\sigma}^\Omega-f_{\mu\wedge \sigma}^\Omega\|_{L_2(\Omega;\mathbb{B})}.$$ (Indeed, set $f:=f_{\nu\wedge\sigma}^\Omega-f_{\mu\wedge \sigma}^\Omega$ and use the same transforming sequence $(\gamma(I))_{I\in \mathbb{D}_{1,0}}$). 
Hence we obtain \begin{align*} \mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq \beta,\,\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee\|df_{n+1}^\Omega\|_\mathbb{B})<\delta \right)&\leq \frac{C_2^2\E \|f_{\nu\wedge\sigma}^\Omega-f_{\mu\wedge \sigma}^\Omega\|_\mathbb{B}^2}{(\beta-1-7\delta)^2}\\ &=\frac{C_2^2\E \|f_{\nu\wedge\sigma}^\Omega-f_{\mu\wedge \sigma}^\Omega\|_\mathbb{B}^2\chi_{\{\mu<\infty\}}}{(\beta-1-7\delta)^2}, \end{align*} where the latter passage is due to the identity $f_{\nu\wedge\sigma}^\Omega=f_{\mu\wedge \sigma}^\Omega$ on the set $\{\mu=\infty\}$. But by the definition of $\sigma$, we have $ \|f_{\nu\wedge\sigma}^\Omega-f_{\mu\wedge \sigma}^\Omega\|_\mathbb{B}\leq \|f_{\nu\wedge\sigma}^\Omega\|_\mathbb{B}+\|f_{\mu\wedge \sigma}^\Omega\|_\mathbb{B}\leq 4\delta.$ Now, since $\mathbb{P}(\mu<\infty)=\mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq 1\right)$, putting all the above observations together gives $$ \mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq \beta,\,\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee\|df_{n+1}^\Omega\|_\mathbb{B})<\delta \right)\leq \frac{16C_2^2\delta^2}{(\beta-1-7\delta)^2}\mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq 1\right).$$ Now we specify $\beta=3$ and $\delta=(32C_2)^{-1}$, and apply a homogeneity argument to obtain that $$ \mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq 3\lambda ,\,\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee\|df_{n+1}^\Omega\|_\mathbb{B})<\delta \lambda\right)\leq \frac{1}{12}\mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq \lambda\right)$$ for $\lambda>0$ (here we used the fact that $\delta\leq 1/32$, since $C_2\geq 1$, and hence $\beta-1-7\delta\geq 2-7/32\geq \sqrt{3}$).
This implies $$ \mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq 3\lambda\right)\leq \mathbb{P}\left(\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee\|df_{n+1}^\Omega\|_\mathbb{B})\geq \delta \lambda\right)+\frac{1}{12}\mathbb{P}\left(\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\geq \lambda\right)$$ and hence, multiplying both sides by $\lambda$ and taking the supremum over $\lambda>0$, we obtain \begin{align*} &\frac{1}{3}\left\|\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\right\|_{L_{1,\infty}(\Omega;\R)}\leq 32C_2\left\|\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee \|df_{n+1}^\Omega\|_\mathbb{B})\right\|_{L_{1,\infty}(\Omega;\R)}+\frac{1}{12}\left\|\sup_{n\geq 0}\|g_n^\Omega\|_\mathbb{B}\right\|_{L_{1,\infty}(\Omega;\R)}. \end{align*} It remains to observe that by the triangle inequality and the weak-type $(1,1)$ bound for the (sub-)martingale maximal function, \begin{align*} \left\|\sup_{n\geq 0}(\|f_n^\Omega\|_\mathbb{B}\vee \|df_{n+1}^\Omega\|_\mathbb{B})\right\|_{L_{1,\infty}(\Omega;\R)}&\leq \left\|\sup_{n\geq 0}\|f_n^\Omega\|_\mathbb{B}\right\|_{L_{1,\infty}(\Omega;\R)}+\left\|\sup_{n\geq 0}\|df_{n+1}^\Omega\|_\mathbb{B}\right\|_{L_{1,\infty}(\Omega;\R)}\\ &\leq 5\left\|\sup_{n\geq 0}\|f_n^\Omega\|_\mathbb{B}\right\|_{L_{1,\infty}(\Omega;\R)}\leq 5\|f\|_{L_1(\Omega;\mathbb{B})}. \end{align*} The proof is complete. \end{proof} We turn our attention to the sparse domination. Let $\mathscr{D}$ denote the class of all dyadic subintervals of $[0,1)$ having measure $4^{-n}$ for some $n$. \begin{definition} A collection $\mathscr{S}\subset \mathscr{D}$ is called sparse, if there is a family $\{E(\Omega)\}_{\Omega\in \mathscr{S}}$ of pairwise disjoint sets such that $E(\Omega)\subseteq \Omega$ and $|E(\Omega)|\geq |\Omega|/2$ for all $\Omega\in \mathscr{S}$. \end{definition} \begin{proposition} Let $f:\R\to \mathbb{B}$ be a Bochner integrable function and let $\gamma=\{\gamma(I)\}_{I\in \mathscr{D}}$ be a sequence with values in $[-1,1]$.
Then there exists a sparse family $\mathscr{S}\subset\mathscr{D}$ for which we have \begin{equation}\label{sparse_domination} \left\|\sum_{I\in \mathscr{D}} \gamma(I)\langle f,\varphi_I\rangle \psi_I\right\|_\mathbb{B}\leq (2C+7)\sum_{\Omega\in \mathscr{S}} \left(\frac{1}{|\Omega|}\int_\Omega \|f\|_\mathbb{B}\right)\chi_\Omega \end{equation} almost everywhere on $[0,1)$. Here $C$ is the weak-type constant in \eqref{auxil_weak}. \end{proposition} \begin{proof}The collection $\mathscr{S}$ will be obtained by the following algorithm. \smallskip \emph{Step 1.} We put $[0,1)$ into $\mathscr{S}$ and mark it as `unused'. \smallskip \emph{Step 2.} We pick an unused element $\Omega \in \mathscr{S}$ of maximal measure and define $\lambda_\Omega=\frac{2C}{|\Omega|}\int_\Omega \|f\|_\mathbb{B}$. Consider the martingale $(g^\Omega_n)_{n\geq 0}$ and split the set $ \{\omega\in \Omega:\sup_{n\geq 0}\|g^\Omega_n\|_\mathbb{B}\geq \lambda_\Omega\}$ into the union of pairwise disjoint and maximal elements $\Omega_1$, $\Omega_2$, $\ldots$ of $\mathscr{D}$. There may be finitely or infinitely many such elements; we put them all into $\mathscr{S}$ and mark them as `unused'. \smallskip \emph{Step 3.} We define $ E(\Omega)=\{\omega\in \Omega:\sup_{n\geq 0}\|g^\Omega_n\|_\mathbb{B}<\lambda_\Omega\}$, mark $\Omega$ as `used' and go to Step 2. \smallskip Let us study the properties of the above objects. The class $\mathscr{S}$ we obtain is indeed contained in $\mathscr{D}$. By construction, the sets $\{E(\Omega)\}_{\Omega\in \mathscr{S}}$ are pairwise disjoint; furthermore, the weak-type inequality \eqref{auxil_weak}, applied with the threshold $\lambda_\Omega$, implies $|E(\Omega)|\geq |\Omega|/2$ for any $\Omega\in \mathscr{S}$. This in particular gives $\sum_{\Omega \in \mathscr{S}}|\Omega|\leq 2$ and hence almost all $\omega\in [0,1)$ belong to only finitely many elements of $\mathscr{S}$.
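The bookkeeping behind sparseness can be illustrated on a hypothetical toy family (not the one produced by the algorithm above): take $E(\Omega)$ to be the left half of each $\Omega$ and let the selected children of $\Omega$ be the two quarters of its right half. Then the sets $E(\Omega)$ are pairwise disjoint, $|E(\Omega)|=|\Omega|/2$, and the total measure $\sum_{\Omega\in\mathscr{S}}|\Omega|$ stays below $2$:

```python
# Toy sparse family on [0, 1): E(Omega) = left half of Omega, and the
# children of Omega are the two quarters of its right half (a hypothetical
# model of the selection, chosen so that |E(Omega)| = |Omega|/2 exactly).
def build(a, length, depth, family):
    family.append((a, length))
    if depth > 0:
        q = length / 4
        build(a + 2 * q, q, depth - 1, family)   # third quarter
        build(a + 3 * q, q, depth - 1, family)   # fourth quarter

family = []
build(0.0, 1.0, 6, family)

# total measure of the family is bounded by 2
assert sum(l for _, l in family) <= 2.0

# the sets E(Omega) are pairwise disjoint intervals
E = sorted((a, a + l / 2) for a, l in family)
assert all(b1 <= a2 + 1e-12 for (_, b1), (a2, _) in zip(E, E[1:]))
```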
For almost every $\omega\in [0,1)$, let $j(\omega)$ be the unique positive integer such that $\omega\in E(\Omega_{j(\omega)}^\omega)\subset \Omega_{j(\omega)}^\omega\subset \Omega_{j(\omega)-1}^\omega\subset \ldots \subset \Omega_1^\omega=[0,1)$, with $\Omega_j^\omega\in \mathscr{S}$. We are ready to verify \eqref{sparse_domination}. Outside $[0,1)$ both sides vanish, and for $\omega\in [0,1)$ we write \begin{align*} &\left\|\sum_{I\in \mathscr{D}} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B}\\ &\leq \sum_{k=2}^{j(\omega)}\left\|\sum_{I\in \mathscr{D},\Omega_{k-1}^\omega\supseteq I\supsetneq \Omega_k^\omega} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B}+\left\|\sum_{I\in \mathscr{D},\Omega_{j(\omega)}^\omega\supseteq I} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B}. \end{align*} However, for any $\Omega$, the partial sums of $\sum_{I\in \mathscr{D},\Omega\supseteq I} \gamma(I)\langle f,\varphi_I\rangle \psi_I$ form the martingale $g^\Omega$. Thus, by the very definition of the splitting procedure in Step 2, we have $$ \left\|\sum_{I\in \mathscr{D},\Omega_{j(\omega)}^\omega\supseteq I} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B}\leq \frac{2C}{|\Omega_{j(\omega)}^\omega|}\left(\int_{\Omega_{j(\omega)}^\omega} \|f\|_\mathbb{B}\right) \chi_{\Omega_{j(\omega)}^\omega}(\omega).$$ For the expression $$ \left\|\sum_{I\in \mathscr{D},\Omega_{k-1}^\omega\supseteq I\supsetneq \Omega_k^\omega} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B}$$ we proceed similarly; however, a small modification is needed, since the above construction does not exclude the possibility that this expression is \emph{larger} than $\frac{2C}{|\Omega_{k-1}^\omega|}\int_{\Omega_{k-1}^\omega}\|f\|_\mathbb{B}$.
Denoting the parent of $\Omega_k^\omega$ in $\mathscr{D}$ by $(\Omega_k^\omega)'$, we obtain \begin{align*} & \left\|\sum_{I\in \mathscr{D},\Omega_{k-1}^\omega\supseteq I\supsetneq \Omega_k^\omega} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B}\\ &\leq \left\|\sum_{I\in \mathscr{D},\Omega_{k-1}^\omega\supseteq I\supsetneq (\Omega_k^\omega)'} \gamma(I)\langle f,\varphi_I\rangle \psi_I(\omega)\right\|_\mathbb{B} +7\|\gamma\|_\infty \|\langle f,\varphi_{(\Omega_k^\omega)'}\rangle\|_\mathbb{B}|(\Omega_k^\omega)'|^{-1/2}\\ &\leq 2C\left(\frac{1}{|\Omega_{k-1}^\omega|}\int_{\Omega_{k-1}^\omega}\|f\|_\mathbb{B}\right)\chi_{\Omega_{k-1}^\omega}(\omega)+7\left(\frac{1}{|\Omega_k^\omega|}\int_{\Omega_k^\omega}\|f\|_\mathbb{B}\right)\chi_{\Omega_k^\omega}(\omega). \end{align*} This gives the claim. \end{proof} Finally, we are ready for the proof of the weighted estimate \eqref{weightedd}. The change-of-measure argument used below is inspired by \cite{Mo}. \begin{proof}[Proof of Theorem \ref{UMD_theorem}] We start with reductions. It suffices to show the claim for $p\geq 2$; the case $1<p<2$ then follows by duality. Next, by approximation, scaling, and translation, it is enough to show that $$ \left\|\sum_{I\in \mathscr{D}} \gamma(|I|)\langle f,\varphi_I\rangle \psi_I\right\|_{L_p^w(\R;\mathbb{B})}\leq C_p\|\gamma\|_\infty [w]_{A_p}^{\max\{1/(p-1),1\}}\|f\|_{L_p^w(\R;\mathbb{B})}. $$ By homogeneity, we may and do assume that $\|\gamma\|_\infty\leq 1$. Therefore, using \eqref{sparse_domination}, we will be done if we prove the estimate $$ \left\|\sum_{\Omega\in \mathscr{S}} \left(\frac{1}{|\Omega|}\int_\Omega \|f\|_\mathbb{B}\right)\chi_\Omega\right\|_{L_p^w(\R;\mathbb{B})}\leq C_p\|\gamma\|_\infty [w]_{A_p}^{\max\{1/(p-1),1\}}\|f\|_{L_p^w(\R;\mathbb{B})}.$$ To this end, we let $v=w^{1/(1-p)}$ be the dual weight to $w$ and pick an arbitrary nonnegative $h\in L_{p'}^v(\R;\R)$.
For any $I\in \mathscr{D}$ and any weight $u$, the symbol $\mathcal{E}_{I}^uf=\frac{1}{u(I)}\int_I fu\mbox{d}\omega$ will stand for the average of $f$ over $I$ with respect to the measure $ud\omega$. Then \begin{align*} &\int_0^1 \left(\sum_{\Omega\in \mathscr{S}} \left(\frac{1}{|\Omega|}\int_\Omega \|f\|_\mathbb{B}\right)\chi_\Omega\right)h \mbox{d}\omega\\ &=\sum_{\Omega \in \mathscr{S}} \frac{w(\Omega)v(\Omega)^{p-1}}{|\Omega|^p} \cdot |\Omega|^{p-1}v(\Omega)^{2-p}\mathcal{E}_\Omega^v (\|f\|_\mathbb{B}v^{-1})\mathcal{E}_\Omega^w(hw^{-1})\\ &\leq [w]_{A_p} \sum_{\Omega \in \mathscr{S}} |\Omega|^{p-1}v(\Omega)^{2-p}\mathcal{E}_\Omega^v (\|f\|_\mathbb{B}v^{-1})\mathcal{E}_\Omega^w(hw^{-1})\\ &\leq 2^{p-1}[w]_{A_p}\sum_{\Omega \in \mathscr{S}} |E(\Omega)|^{p-1}v(\Omega)^{2-p}\mathcal{E}_\Omega^v (\|f\|_\mathbb{B}v^{-1})\mathcal{E}_\Omega^w(hw^{-1}), \end{align*} where in the last passage we have used the sparseness estimate $|\Omega|\leq 2|E(\Omega)|$ for $\Omega\in\mathscr{S}$. Since $p\geq 2$ and $E(\Omega)\subset \Omega$, we have $v(\Omega)^{2-p}\leq v(E(\Omega))^{2-p}$. 
Furthermore, by H\"older's inequality, we see that $ |E(\Omega)|\leq w(E(\Omega))^\frac{1}{p}v(E(\Omega))^\frac{1}{p'},$ so $ |E(\Omega)|^{p-1}v(E(\Omega))^{2-p}\leq v(E(\Omega))^\frac{1}{p}w(E(\Omega))^\frac{1}{p'}.$ Plugging these observations above and applying H\"older's inequality again, we get \begin{align*} &\sum_{\Omega \in \mathscr{S}} |E(\Omega)|^{p-1}v(E(\Omega))^{2-p}\mathcal{E}_\Omega^v (\|f\|_\mathbb{B}v^{-1})\mathcal{E}_\Omega^w(hw^{-1})\\ &\leq \sum_{\Omega\in \mathscr{S}}v(E(\Omega))^\frac{1}{p}w(E(\Omega))^\frac{1}{p'}\cdot \mathcal{E}_\Omega^v (\|f\|_\mathbb{B}v^{-1})\mathcal{E}_\Omega^w(hw^{-1}) \\ &\leq \left(\sum_{\Omega\in \mathscr{S}} \left(\mathcal{E}_\Omega^v (\|f\|_\mathbb{B}v^{-1})\right)^pv(E(\Omega))\right)^\frac{1}{p}\left(\sum_{\Omega\in \mathscr{S}} \left(\mathcal{E}_\Omega^w(hw^{-1})\right)^{p'}w(E(\Omega))\right)^\frac{1}{p'}\\ &\leq \|M_v (\|f\|_{\mathbb{B}}v^{-1})\|_{L_p^v(\R;\R)}\|M_w (hw^{-1})\|_{L_{p'}^w(\R;\R)}\\ &\leq pp'\big\|\|f\|_{\mathbb{B}}v^{-1}\big\|_{L_p^v(\R;\R)}\big\|hw^{-1}\big\|_{L_{p'}^w(\R;\R)}=pp'\|f\|_{L_p^w(\R;\mathbb{B})}\|h\|_{L_{p'}^v(\R;\R)}. \end{align*} Here $M_w$ and $M_v$ are the classical dyadic maximal operators with respect to the measures $w$ and $v$, respectively. This yields the desired assertion by taking the supremum over all $h$ as above. \end{proof}
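The quantities driving the proof above, the $A_p$ characteristic $[w]_{A_p}$ and the dual weight $v=w^{1/(1-p)}$, can be explored numerically. Below is a small sketch (our own illustration, not part of the argument) computing the dyadic $A_p$ characteristic of a weight sampled on $2^n$ equal cells of $[0,1)$.

```python
# Numerical sketch (our illustration): the dyadic A_p characteristic of a weight
# w on [0,1), i.e. the sup over dyadic I of (avg_I w) * (avg_I w^{1/(1-p)})^(p-1),
# with w sampled on 2**n equal cells.
def dyadic_ap(w, p):
    n, best, size = len(w), 0.0, len(w)
    while size >= 1:
        for a in range(0, n, size):
            block = w[a:a + size]
            avg_w = sum(block) / size
            avg_v = sum(x ** (1.0 / (1.0 - p)) for x in block) / size  # dual weight v
            best = max(best, avg_w * avg_v ** (p - 1))
        size //= 2
    return best
```

By Jensen's inequality the characteristic is always at least $1$, with equality for constant weights.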
\section{Introduction} \label{sec:introduction} A {\em hypergraph} $H$ is a pair $(V,\mathcal{E})$ where $\mathcal{E}\subseteq 2^V$. The elements of $V$ and $\mathcal{E}$ are referred to as {\em vertices} and {\em hyperedges} respectively. For $k\in \setN$, a {\em $k$-colouring} of the vertices of $H$ is a function $\varphi$ from $V$ to a set of cardinality $k$, e.g., the set $\{1,\ldots,k\}$. The colouring is \emph{proper} if for every $h \in \mathcal{E}$ with $|h| \geq 2$ there exist $x,y \in h$ with $\varphi(x) \neq \varphi(y)$. This extends the definition of the classical proper colouring of a graph. Denote by $\chi(H)$ the least integer $k$ such that $H$ admits a proper $k$-colouring. The following notion of CF-colouring is a further restriction of proper colouring: \begin{definition}\label{def:cf-coloring} A {\em conflict-free colouring} ({\em CF-colouring} for short) of $V$ is a colouring $\varphi$ such that for any nonempty $h \in \mathcal{E}$ at least one vertex $x \in h$ is uniquely coloured, meaning $\varphi(y) \neq \varphi(x)$ for all $y \in h \setminus \{x\}$. Denote by $\chi_{CF}(H)$ the least integer $k$ such that $H$ admits a CF-colouring with $k$ colours. Clearly $\chi_{CF}(H)\geq \chi(H)$. \end{definition} This notion was introduced to model radio frequency allocation to antennas while avoiding interference \cite{ELRS,SmPHD}. This spawned a new area of research in combinatorics and computational geometry, with dozens of follow-up papers and theses. For more on CF-colouring and its applications see the survey \cite{CF-survey} and the references therein. In this paper we study an extension of the notion of CF-colouring to subsets of vertices. \begin{definition} Let $H=(V,\mathcal{E})$ be a hypergraph and let $t\in \setN$.
A \emph{$t$-subset-CF-colouring} of $H$ with $k$ colours is a function $\varphi$ from $\binom{V}{t}$ to a $k$-element set such that for every hyperedge $h\in \mathcal{E}$ with $|h|>t$ there exists a $t$-subset $s \in \binom h t$ whose colour $\varphi (s)$ is distinct from all colours of other $t$-subsets in $\binom h t$. The \emph{$t$-subset-CF-chromatic number} $\chi^t_{CF}(H)$ is the least integer $k$ such that $H$ admits a $t$-subset-CF-colouring with $k$ colours. \end{definition} \subsection{Sparse hypergraphs} We show that in hypergraphs $H$ with $n$ vertices exhibiting a certain kind of sparsity property, as is the case in many geometrically defined hypergraphs, we have $\chi^t_{CF}(H) = O(\log n)$, where the constant of proportionality in the big-Oh notation depends on $t$ and the sparsity parameter. To make this statement precise we need the following definitions: The {\em Delaunay graph} of a hypergraph $H=(V,\mathcal{E})$ is the graph $\Del(H)=(V,\{h\in \mathcal{E}\colon |h|=2\})$. For a subset $V' \subset V$ the {\em induced sub-hypergraph} $H[V']$ is the hypergraph $(V', \{h \cap V': h \in \mathcal{E}\})$. \begin{definition} A hypergraph $H=(V,\mathcal{E})$ has the {\em Hereditary Linear Delaunay} (HLD) property with parameter $c \in \setR_{>0}$ if for every subset of vertices $V'\subset V$ the graph $\Del (H[V'])$ has at most $c|V'|$ edges. \end{definition} \begin{theorem}\label{thm:linear-delaunay} Let $H=(V,\mathcal{E})$ be a hypergraph. If $H$ has the HLD property with parameter $c\in \setR_{>0}$ then $\chi_{CF}^t(H)=O(ct^2 \log |V|)$. Moreover, there are such hypergraphs for which $\chi_{CF}^t(H)=\Omega(\log |V|)$. \end{theorem} In order to prove~\cref{thm:linear-delaunay} we first need to prove a common generalization of previous results on so-called {\em strong CF-colouring} \cite{ABG+05, hks09}. Then, we use it as an auxiliary colouring for subset-CF-colouring.
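The defining condition of a $t$-subset-CF-colouring can be checked mechanically on small instances; here is a minimal sketch (function and variable names are ours):

```python
from itertools import combinations

# Minimal checker for a t-subset-CF-colouring: psi maps frozensets of size t to
# colours; every hyperedge h with |h| > t must contain a t-subset whose colour
# occurs exactly once among the t-subsets of h.
def is_t_subset_cf(psi, hyperedges, t):
    for h in hyperedges:
        if len(h) <= t:
            continue
        colours = [psi[frozenset(s)] for s in combinations(sorted(h), t)]
        if not any(colours.count(c) == 1 for c in colours):
            return False
    return True
```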
Next, we prove the following result on hypergraphs which are derived from ``well-behaved'' hypergraphs by allowing unions of hyperedges: Given a hypergraph $H=(V,\mathcal{E})$, define a new hypergraph on the same vertex set by $H^{\cup} = (V, \{ e \cup f \colon e, f \in \mathcal{E}\})$. \begin{theorem} \label{thm:union} If $H=(V,\mathcal{E})$ has the HLD property for some parameter $c\in\setR_{>0}$, then $\chi^2_{CF}(H^{\cup})=O(c\log |V|)$. \end{theorem} Given two families of sets $\famB$ and $\famC$, the \emph{intersection hypergraph} of $\famB$ with respect to $\famC$ is the hypergraph $H(\famB,\famC)$ on the vertex-set $\famB$, in which any $c \in \famC$ defines a hyperedge $\{b \in \famB \colon b \cap c \neq \emptyset \}$. Many geometrically defined hypergraphs (also called {\em range spaces}) admit the HLD property. This includes $H(P,\mathcal{D})$ where $P$ is a set of points and $\mathcal{D}$ is the set of all discs in $\setR^2$ (since $\Del(H(P,\mathcal D))$ is then the standard Delaunay graph, which is planar). This was also extended to $H(D,\mathcal{D})$ for a finite family of discs $D$ \cite{KellerSm20}. Another example is $H(P,\mathcal{H})$ where $P$ is a set of points and $\mathcal{H}$ is the set of all halfspaces in $\setR^3$ (by \cite[Lem.~3.1]{smoro}, combined with the fact that the union of two planar graphs on $n\geq 3$ vertices has at most $6n-12$ edges). Pseudo-disc families are another example: \begin{definition} A finite set of simple closed Jordan regions in $\setR^2$ is a \emph{family of pseudo-discs} if the boundaries of any two regions in the family intersect at most twice. \end{definition} \begin{theorem}[\cite{Keszegh18}]\label{thm:Kesegh} If $\famB$ and $\famC$ are families of pseudo-discs, then $\Del( H(\famB,\famC))$ is planar. \end{theorem} In particular, $H(\famB, \famC)$ has the HLD property with $c=3$.
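On tiny examples, the HLD property itself can be verified by brute force over all vertex subsets (exponential in $|V|$, so purely a sanity check; the helper below is our own illustration):

```python
from itertools import combinations

# Brute-force check of the HLD property on a tiny hypergraph: for every
# nonempty V' of V, the Delaunay graph of H[V'] (vertex pairs realised as a
# trace h & V' of size exactly 2) must have at most c|V'| edges.
# Hyperedges are passed as frozensets.  Names are ours.
def has_hld(vertices, hyperedges, c):
    vertices = list(vertices)
    for k in range(1, len(vertices) + 1):
        for sub in combinations(vertices, k):
            s = set(sub)
            delaunay = {frozenset(h & s) for h in hyperedges if len(h & s) == 2}
            if len(delaunay) > c * k:
                return False
    return True
```

For instance, points on a line with intervals as ranges pass the check with $c=1$ (induced Delaunay edges are consecutive pairs), while the complete graph fails for $c=1$.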
For all these hypergraphs, denoting by $n$ the number of vertices, Theorem~\ref{thm:linear-delaunay} and Theorem~\ref{thm:union} give $\chi^t_{CF}(H)=O(t^2 \log n)$ and $\chi^2_{CF}(H^{\cup})=O(\log n)$. \subsection{Axis-parallel rectangles} Consider $H=H(P,\mathcal{R})$ where $P$ consists of $n$ points and $\mathcal{R}$ of axis-parallel rectangles, in $\setR^2$. The problem of bounding $\chi_{CF}(H)$ as a function of $n$ remains elusive after almost two decades of research; there is an exponential gap between the best known upper and lower bounds of $O(n^{0.368})$ and $\Omega(\frac{\log n}{\log^2 \log n})$ \cite{Chan12, ChenPST08}. In strong contrast to this, for $t\geq 2$ we prove the following result: \begin{theorem}\label{thm:axis-parralel-rectangles} Let $t\ge 2$ be an integer and let $P$ be a set of $n$ points and $\mathcal{R}$ a set of axis-parallel rectangles in the plane. Then $\chi^{t}_{CF}(H(P,\mathcal{R}))=O(t^2 \log^2 n)$. \end{theorem} Since such hypergraphs do not have linear Delaunay graphs, our proof of Theorem~\ref{thm:axis-parralel-rectangles} uses a different approach and is technically more involved. \subsection{General hypergraphs} Contrary to the above results, in general there is no quantitative relation between $\chi^t_{CF}$ and $\chi_{CF}$. For example, we show that there is no function $f$ such that $\chi^t_{CF}(H) \leq f(\chi_{CF}(H))$. \begin{theorem} \label{thm:unbounded} For any $t \geq 2$ there exists a sequence of hypergraphs $(H_i)_{i\in\setN}$ such that $\chi_{CF}(H_i)=2$ and $\lim_{i\to + \infty}\chi^t_{CF}(H_i) = +\infty$. \end{theorem} Notice also that there is no function $f$ such that $\chi_{CF}(H) \leq f(\chi^t_{CF}(H))$ already for $t=2$. One can take, for example, the complete graph $K_n$ and easily verify that $\chi^t_{CF}(K_n)=1$ and $\chi_{CF}(K_n)=n$ for any $n$. We also show that $\chi_{CF}(H)$ can be much larger than $\chi^t_{CF}(H)$, even in non-trivial geometrically-defined hypergraphs.
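The $K_n$ computation above can be confirmed by exhaustive search for small $n$; the following brute-force sketch (our own helper, exponential in $|V|$ and meant only for tiny instances) computes $\chi_{CF}$:

```python
from itertools import product

# Brute-force computation of chi_CF for tiny hypergraphs, e.g. to confirm
# chi_CF(K_n) = n: in K_n every pair must contain a uniquely coloured vertex,
# which forces all vertex colours to be distinct.
def is_cf(colouring, hyperedges):
    for h in hyperedges:
        cols = [colouring[v] for v in h]
        if cols and not any(cols.count(c) == 1 for c in cols):
            return False
    return True

def chi_cf(vertices, hyperedges):
    vertices = list(vertices)
    for k in range(1, len(vertices) + 1):
        for assignment in product(range(k), repeat=len(vertices)):
            if is_cf(dict(zip(vertices, assignment)), hyperedges):
                return k
    return 0
```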
\begin{theorem} \label{thm:unbounded2} For any $n \in \mathbb{N}$, there is a hypergraph $H=(V,\mathcal{E})$ with $|V|=n$ such that $\chi^2_{CF}(H)=O(\log n)$ and $\chi_{CF}(H)=n$. \end{theorem} The phenomenon witnessed by Theorem~\ref{thm:unbounded2} can be realised by geometric intersection hypergraphs defined with respect to points and halfspaces in $\setR^4$ (\cref{subsec:interval_union}). \subsection{Organization of the paper} In Section~\ref{sec:prelim} we present several additional definitions and facts that we use throughout the paper. In Section~\ref{sec:rectangles} we prove Theorem~\ref{thm:axis-parralel-rectangles}. In Section~\ref{sec:sparse_del} we prove Theorem~\ref{thm:linear-delaunay}. Finally, in Section~\ref{sec:negative-results} we prove Theorem~\ref{thm:unbounded} and Theorem~\ref{thm:unbounded2} and provide geometric realisations of the underlying hypergraphs. In that section we also prove Theorem~\ref{thm:union}. \section{Preliminaries} \label{sec:prelim} Fix a hypergraph $H=(V,\mathcal{E})$. We need several more definitions of various hypergraph colourings. We start with the definition of $t$-strong-CF-colouring extending that of CF-colouring, where we require at least $t$ uniquely coloured vertices in any hyperedge: \begin{definition}[\cite{hks09}] Let $t \in \setN$. A colouring of $H$ is a \emph{$t$-strong-CF-colouring} if each $h \in \mathcal{E}$ contains at least $\min \{|h|,t\}$ vertices whose colour is unique in $h$. Let $\chi_{t\text{-strong-CF}}(H)$ be the least number of colours required in a $t$-strong-CF-colouring of $H$. \end{definition} Next, consider yet another extension of CF-colouring: \begin{definition}[\cite{cheilarisUniqueMaximumConflictFreeColoring2013,CF-survey}] A \emph{unique-maximum colouring} of $H$ (UM-colouring for short) is a colouring of $V$ with an ordered set of colours (e.g.,\@ integers) in which the maximum colour in each hyperedge is also unique in the hyperedge.
Let $\chi_{\text{UM}}(H)$ be the least number of colours needed in a UM-colouring of $H$. \end{definition} Note that any UM-colouring of $H$ is also a CF-colouring. Similarly to $t$-strong CF-colouring, we extend UM-colouring and define $t$-unique-maximum colourings: \begin{definition} Let $t \in \setN$. A colouring with an ordered set of colours is a \emph{$t$-UM-colouring} of $V$ if in any $h \in \mathcal{E}$, the $\min \{ |h|,t \}$ largest colours are unique. Let $\chi_{t\text{-UM}}(H)$ be the least number of colours required in a $t$-UM-colouring of $H$. \end{definition} The following definition is yet another extension of the standard notion of proper colouring of a hypergraph: \begin{definition} [\cite{hks09}] Let $t\in \setN$. A colouring is $t$-\emph{colourful} if any hyperedge $h\in \mathcal{E}$ contains at least $\min \{|h|,t \}$ pairwise distinctly-coloured vertices. Let $\chi_{t\text{-colourful}}(H)$ be the least number of colours required in a $t$-colourful colouring. \end{definition} Observe that for all $t \in \setN$ we have $\chi_{t\text{-UM}} \geq \chi_{t\text{-strong-CF}} \geq \chi_{t\text{-colourful}}$, since $t$-UM colourings are also $t$-strong-CF and $t$-strong-CF colourings are also $t$-colourful. 
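The chain of implications just observed can be tested directly on small examples; below is a minimal sketch of checkers for these definitions (names are ours):

```python
# Checkers for the colouring notions above; phi maps vertices to integer colours.
def is_t_strong_cf(phi, hyperedges, t):
    # each hyperedge needs at least min(|h|, t) uniquely coloured vertices
    for h in hyperedges:
        cols = [phi[v] for v in h]
        uniques = sum(1 for c in cols if cols.count(c) == 1)
        if uniques < min(len(h), t):
            return False
    return True

def is_t_um(phi, hyperedges, t):
    # the min(|h|, t) largest colours in each hyperedge must each occur once
    for h in hyperedges:
        cols = sorted((phi[v] for v in h), reverse=True)
        if any(cols.count(c) != 1 for c in cols[:min(len(cols), t)]):
            return False
    return True
```

Every colouring passing `is_t_um` also passes `is_t_strong_cf`, matching the inequality $\chi_{t\text{-UM}} \geq \chi_{t\text{-strong-CF}}$.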
\subsection{Colouring meta-algorithm} \begin{algorithm}[H] \caption{A general hypergraph colouring scheme} \hspace*{\algorithmicindent} \textbf{Input:} $H=(V,\mathcal{E})$, hypergraph colouring subroutine $\mathbf{aux}$\\ \hspace*{\algorithmicindent} \textbf{Output:} a colouring $\phi$ of $H$ \label{alg:gen-CF-framework} \begin{algorithmic}[1] % \State $i\gets 1$ \Comment{$i$ denotes an unused colour} \While{$V\neq \emptyset$} \State $\phi \gets \mathbf{aux}(H[V])$ \Comment{auxiliary colouring} \State {$V' \gets$ largest colour class of $\phi$ (arbitrarily chosen in case of a tie)} \State colour all vertices of $V'$ with $i$ \State $V\gets V\setminus V'$ \State $i\gets i+1$ \EndWhile \end{algorithmic} \end{algorithm} \paragraph*{Analysis of \cref{alg:gen-CF-framework}} If the subroutine $\mathbf{aux}$ uses at most $f(n)$ colours on $n$-vertex hypergraphs, then the number of vertices remaining to be coloured after $i$ iterations is at most $u_i$, where $u_{i+1} = u_i \cdot (1 - 1 / f(u_i))$ and $u_0=n$ is the number of vertices of $H$. The number of iterations in \Cref{alg:gen-CF-framework}, which also equals the number of colours in its output, is at most $\min \{T\in \setN\colon u_T<1\}$. For example, if $f(n)=O(1)$ then $T=O(\log u_0)$, if $f(n) = O(\log^{\alpha} n)$ for some positive $\alpha$ then $T = O(\log^{\alpha+1} u_0)$ and if $f(n)=O(n^\alpha)$ then $T = O(u_0^\alpha)$. \Cref{alg:gen-CF-framework} was used in \cite{smoro} for finding a UM-colouring of a hypergraph, by using the subroutine \textbf{aux} to be a proper colouring. 
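A Python rendering of \cref{alg:gen-CF-framework} may clarify the scheme; \texttt{aux} is any auxiliary colouring subroutine (this sketch and its names are ours):

```python
# A minimal sketch of the meta-algorithm: repeatedly apply an auxiliary
# colouring, freeze its largest colour class with a fresh final colour, and
# recurse on the rest.  `aux` returns a dict {vertex: colour}.
def framework_colouring(vertices, aux):
    colouring, i = {}, 1
    remaining = set(vertices)
    while remaining:
        phi = aux(remaining)                      # auxiliary colouring of H[V']
        classes = {}
        for v, c in phi.items():
            classes.setdefault(c, set()).add(v)
        largest = max(classes.values(), key=len)  # ties broken arbitrarily
        for v in largest:
            colouring[v] = i                      # final colour i for this class
        remaining -= largest
        i += 1
    return colouring
```

With an auxiliary colouring whose largest class always covers more than half of the remaining vertices, the number of final colours is logarithmic in the number of vertices, as in the analysis above.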
It was then proved that UM-colourings and proper colourings are strongly related as is summarised in the following theorem: \begin{theorem}[\cite{smoro}]\label{thm:Proper-to-CF} For any finite hypergraph $H=(V,\mathcal{E})$, \[\chi_{\text{UM}}(H) = O\left( \max_{V'\subset V} \chi(H[V']) \cdot \log |V|\right).\] \end{theorem} In this paper we use the same meta-algorithm with other choices of auxiliary colourings that will be useful for our purposes. \subsection{Linear Delaunay graphs} We need one more technical lemma about hypergraphs with the HLD property. It was explicitly stated in \cite{AKP21}, based on earlier ideas from \cite[Lem.~2.6]{ADEP21} and \cite{BPR13}, and can be viewed as an abstract version of the Clarkson--Shor technique \cite{cs-arscg-89}: \begin{lemma} (\cite[Lem.~22]{AKP21}) \label{lem:k_good_pairs} Let $H=(V,\mathcal{E})$ be a hypergraph with the HLD property for some parameter $c\in \setR_{>0}$. Then, for any $k\in\setN$, \[ \left\lvert \left\{ \{v_1,v_2\} \in \binom{V}{2} \colon \exists h \in\mathcal{E},\ |h| \leq k \land \{v_1,v_2\} \subset h \right\} \right\rvert \leq c |V|ek, \] where $e$ is the base of the natural logarithm. \end{lemma} \section{Colouring subsets of points with respect to axis-parallel rectangles}\label{sec:rectangles} In this section we prove \cref{thm:axis-parralel-rectangles}. \begin{proof}[Proof of \cref{thm:axis-parralel-rectangles}] Let $n$, $t$, $P$ and ${\cal R}$ be as in the statement of the theorem. We consider the intersection hypergraph $H=H(P,{\cal R})$. Note that since $P$ is finite, we can assume that the set ${\cal R}$ is finite without changing the set of hyperedges of $H$ and while maintaining the properties mentioned below. We can assume, without loss of generality, that the points of $P$ lie on the $n\times n$ integer grid and furthermore that no two points in $P$ share an $x$- or $y$-coordinate. 
A small perturbation of $P$ and monotone transformations of the coordinates ensure both, without removing any hyperedge from $H(P,\mathcal{R})$. Put $I= \{-\lceil \log n\rceil, \dots, \lceil \log n\rceil\}$. For each $i\in I$, let $A_i$ be the set of all axis-parallel rectangles with width-to-height ratio $2^i$. Each $A_i$ is a family of pseudo-discs\footnote{After an infinitesimal perturbation of each $r \in A_i$ which does not affect $r\cap P$.} (see \cite{ACPIN2013}), and the following properties hold: \begin{enumerate} \item[(a)] For every $r \in \mathcal{R}$, there exist $i \in I$ and $r_1, r_2 \in A_i$ such that $r =r_1 \cup r_2$. \label{prop-cover} Indeed, if $r$ has width $w$ and height $h\leq w$ then it can be covered by two rectangles of height $h$ and width $2^{\lfloor \log (w/h)\rfloor} h$ which are contained in $r$ (see \cref{fig:propA}). The case $w< h$ is symmetric. \item[(b)] \label{prop-access}For every $r\in A_i$ such that $|r\cap P|\ge t+1$, there exists $d\in A_i$ such that $d \subset r$ and $|d\cap P|=t+1$. Such a rectangle $d$ can be obtained from $r$ by a translation and scaling, while maintaining its width-to-height ratio. \end{enumerate} \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{propA.pdf} \end{center} \caption{A rectangle $r$ covered by $r_1$ and $r_2$. The rectangle $r$ is of height 3 and width 8, and each $r_i$ is of height 3 and width 6.}\label{fig:propA} \end{figure} For $i\in I$, let \[ E_i = \left\{ \{p,q\}\in \binom {d \cap P} 2 \colon d\in A_i, |d\cap P|\le t+1\right\}.\] \Cref{lem:k_good_pairs} together with \cref{thm:Kesegh} applied to $H(P, A_i)$ give $|E_i|\leq 3e \cdot (t+1) |P|$. Let $G:=(P,\cup_{i\in I} E_i)$. First, we prove by induction on $|P|=n$ that $\chi(G) \leq 80t\log n +1$. The graph $G$ has at most $3e\cdot (t+1)|P|\cdot |I| \leq 40tn\log n$ edges, hence $G$ has a vertex $p \in P$ of degree at most $80t \log n$. 
Let $P'= P \setminus \{p\}$, then by the induction hypothesis, the graph $G'$ defined like $G$ on $P'$, satisfies $\chi(G') \leq 80t \log n +1$. To argue by induction that $\chi(G) \leq 80 t \log n +1$ we need to verify that if $\{x,y\}\in E(G)$ and $x,y \neq p$, then $\{x,y\}\in E(G')$. Indeed, if $\{x,y\}\in E(G)$ then there is some $d\in A_i$ for some $i\in I$ such that $|d\cap P|\le t+1$ and $\{x,y\}\in \binom {d \cap P} 2$, then $|d\cap P'|\le |d\cap P|\le t+1$ and $\{x,y\}\in \binom {d \cap P'} 2$, hence $\{x,y\}\in E(G')$, as required. Now, by colouring inductively $G'$ with at most $80 t \log n +1$ colours, and assigning $p$ a colour that differs from the colours of all its neighbours in $G$, we are done. The proper colourings of $G$ are, by Property ($b$), exactly the $(t+1)$-colourful colourings of $H(P,\cup_{i\in I} A_i)$. Similarly we obtain a $(t+1)$-colourful colouring $\phi^{P'}$ in $H(P',\cup_{i\in I} A_i)$ for any $P'\subset P$ with $O(t \log |P'|)$ colours. Applying \cref{alg:gen-CF-framework} with $\phi^{P'}$ as the auxiliary colouring, we obtain a UM-colouring $c$ of $H(P,\cup A_i)$ with $O(t \log^2 n)$ colours. \begin{claim}\label{claim:colourCount} Let $r\in \mathcal{R}$ and $P'$ be the set of points remaining after some iterations of \cref{alg:gen-CF-framework}. If $|r\cap P'| \leq t+2$, then each colour of $\phi^{P'}$ appears at most twice in $r \cap P'$, and if $|r \cap P'|= t+k$ (for $k \geq 3$), then each colour of $\phi^{P'}$ appears at most $k$ times in $r \cap P'$. \end{claim} \begin{proof} Let $i\in I$ and $r_1,r_2 \in A_i$ be such that $r=r_1\cup r_2$, as guaranteed by Property (a). Since the colouring $\phi^{P'}$ is $(t+1)$-colourful, each $r_i\cap P'$ ($i\in\{1,2\}$) receives at least $\min\{|r_i \cap P'|,t+1\}$ different colours. If either of $r_1\cap P'$ or $r_2\cap P'$ is equal to $r\cap P'$ the conclusion follows trivially. 
Otherwise, $|r_1\cap P'|, |r_2\cap P'|\leq t+1$, so all colours appear at most once in $r_1$ and at most once in $r_2$. The second part is similar. \end{proof} \paragraph*{Deriving the final colouring for $\binom P t$} Recall the colouring $c$ of $P$ defined before \Cref{claim:colourCount}. We now define the colouring $\psi$ of $\binom P t$ as follows. For any $t$-subset $S \subset P$, if under $c$ some colour appears at least three times in $S$ then $\psi (S)=\bot$ (a dummy colour). Otherwise, denote by $r(S)$ the minimal axis-parallel rectangle such that $S\subseteq r(S)$. For a point $p\in P$ denote by $x(p)$ the $x$-coordinate of $p$. Also let $m = \min c(S)$, the least colour assigned to $S$ under $c$. Define \begin{equation} \label{eq:2t} \psi(S) = \left( \sum_{q \in S} c(q),Q(S)\right), \end{equation} where the value $Q(S)$ encodes the following information about $S$:\footnote{We write ``$m$ appears in $S$'' whenever there exists a point in $S$ whose colour is $m$.} \begin{itemize} \item A. Whether $m$ appears exactly once or exactly twice in $S$, \item B. Whether $m$ appears exactly once, exactly twice, or more than twice in $r(S)$, \item C. In the event where $m$ appears only once in $S$, the location of the point of $S$ coloured $m$ in $r(S)$: on one of the four open edges, at one of the four corners, or in the interior of $r(S)$, \item D. In the event where $m$ appears only once in $S$ but exactly twice in $r(S)$, that is $p\in S$, $p'\in (r(S)\cap P)\setminus S$, and $c(p)=c(p')=m$, whether $x(p) < x(p')$. \end{itemize} While the precise encoding scheme is immaterial, $Q$ can be chosen to take a constant number of values. Therefore, $\psi$ uses $O(t^2 \log^2 n)$ colours. \paragraph*{Correctness} Let us now check that $\psi$ is conflict-free, that is, that any rectangle $r\in {\cal R}$ such that $|r\cap P|\ge t+1$ contains a $t$-subset whose colour under $\psi$ is unique.
Let $r\in \mathcal{R}$ and for every positive integer $\ell$ let $C_{\ell}$ be the set of points in $r\cap P$ receiving the $\ell$-th largest colour, in $r\cap P$, under $c$. Let $C_{\leq\ell}=\cup_{k\leq\ell} C_{k}$: the set of all points in $r \cap P$ which receive one of the $\ell$ largest colours, in $r\cap P$, under $c$. \Cref{claim:colourCount} can be restated as: if $|C_{\leq\ell}|\leq t + 2$ then $|C_1|, \dots, |C_\ell|\leq 2$, and if $|C_{\leq\ell}|= t + k$ for $k\geq 3$, then $|C_1|, \dots, |C_\ell|\leq k$. First, we consider the case where there is an index $\ell$ for which $|C_{\leq\ell}|=t$. In this case $C_{\leq\ell}$ is a $t$-subset with a sum of colours larger than the sum of any other $t$-subset of $r \cap P$ and hence uniquely coloured, by the first coordinate of \eqref{eq:2t}. Next, we consider the complementary case where there is no index $\ell$ such that $|C_{\leq\ell}|=t$; then, by \cref{claim:colourCount}, there must be an index $\ell$ such that $|C_{\leq\ell}|=t+1$ and $|C_{\ell}|=2$. (Actually, \cref{claim:colourCount} implies that every $C_i$ for $1\leq i \leq \ell$ contains either 1 or 2 points.) Let $p\neq p'$ be the two points getting the smallest colours in $C_{\leq \ell}$ under $c$, namely, $m=c(p)=c(p')=\min\{c(C_{\leq \ell})\}$. We further divide cases based on how many points out of $\{p,p'\}$ appear on the boundary of $r(C_{\leq\ell})$. We argue that in all but one of the cases below the sets $S=C_{\leq\ell}\setminus \{p\}$ and $S'=C_{\leq\ell}\setminus \{p'\}$ have $Q(S)\neq Q(S')$. As those subsets have the maximal sum out of all $t$-subsets in $r\cap P$ (this sum is the first coordinate of \eqref{eq:2t}), they are uniquely coloured out of all the $t$-subsets in $r\cap P$, i.e. we get two uniquely coloured subsets instead of the required one. Note that $m=c(p)=c(p')$ appears only once in each of $S$ and $S'$. \begin{itemize} \item 1. Neither $p$ nor $p'$ lies on the boundary of $r(C_{\leq\ell})$.
Then $m$ appears twice in $r(S)=r(S')=r(C_{\leq\ell})$, so item (D) in the definition of $Q$ guarantees $Q(S)\neq Q(S')$. \item 2. Exactly one of $p$ and $p'$ lies on the boundary of $r(C_{\leq\ell})$, say $p$. Then $m$ appears twice in $r(S')$ but only once in $r(S)$, so $Q(S)\neq Q(S')$ (item (B)). \item 3. Both $p$ and $p'$ lie on the boundary of $r(C_{\leq\ell})$, on different edges as no two points have the same $x$- or $y$-coordinates. As long as $p$ and $p'$ do not appear on the same relative corner of $r(S')$ and $r(S)$, that is, not both $p$ and $p'$ appear at the top-right corner of $r(S')$ and $r(S)$, respectively, (and the same holds for the top-left, bottom-right and bottom-left corners), item (C) ensures $Q(S)\neq Q(S')$. Otherwise, let $r=r(C_{\leq\ell})$ and consider the iteration of \cref{alg:gen-CF-framework} in which $r \cap P'=C_{\leq\ell}$. In this iteration, $p$ and $p'$ were assigned the same colour under $c$, hence the same colour in the auxiliary colouring of the algorithm. This means that in the covering $r=r_1\cup r_2$ obtained by property (a), without loss of generality, $p \in r_1 \setminus r_2$ and $p' \in r_2 \setminus r_1$. Since $p $ and $p'$ are in the same relative corner of $r(S')$ and $r(S)$, we have $S'= C_{\leq\ell} \cap r_1$ or $S= C_{\leq\ell} \cap r_2$. Hence all points in $C_{\leq\ell} \setminus \{ p,p' \}$ get distinct colours under $c$. In this case, the set $S''$ which is $C_{\leq\ell}$ minus its point coloured with the smallest colour larger than $c(p)=c(p')$ is uniquely coloured under $\psi$. It has the maximal sum out of the $t$-subsets with the minimum appearing exactly twice (item (A)). \end{itemize} See \cref{fig:rect} for an illustration of the different cases where $t=5$ and $|C_{\le \ell}|=6$. In each of the cases, the drawn rectangle is $r(C_{\le \ell})$. Note that the drawn rectangles can contain more points of $(P\cap r(C_{\le \ell}))\setminus C_{\le \ell}$ which are not depicted in the figure. 
In \cref{fig:case3a}, the points $p$ and $p'$ are not on the same relative corner of $r(S)$ and $r(S')$. In \cref{fig:case3b}, the points $p$ and $p'$ are on the same relative corner of $r(S)$ and $r(S')$. Let $r_1,r_2$ be the rectangles which cover $r$ by \cref{prop-cover}. The points $p$ and $p'$ do not belong to the same $r_i$. Therefore all the other four points in $C_{\le \ell}$ belong to the same rectangle, which is $r_2$, so those points get distinct colours under $c$. In this case the uniquely coloured five-tuple $S''$ consists of $p,p'$ and the three maximal colours among the remaining four. \begin{figure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.75 \linewidth]{case1} \caption{Case 1} \label{fig:case1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{case2} \caption{Case 2} \label{fig:case2} \end{subfigure} \newline \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{case3a} \caption{First possibility of case 3} \label{fig:case3a} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.8\linewidth]{case3b} \caption{Second possibility of case 3} \label{fig:case3b} \end{subfigure} \caption{The different cases in the proof of \cref{thm:axis-parralel-rectangles}. In the first possibility of Case 3, $p$ and $p'$ are not on the same relative corner of $r(S)$ and $r(S')$. In the second possibility of Case 3, both $p$ and $p'$ are on the same relative corner (bottom right) of $r(S)$ and $r(S')$. Here in the covering $r=r_1 \cup r_2$, the points $p$ and $p'$ do not belong to the same $r_i$. Therefore, all other 4 points belong to the same $r_i$ (which is $r_2$ in the figure), hence they get distinct colours under $c$. 
In this case, the uniquely-coloured 5-tuple, $S''$, consists of $p,p'$ and 3 out of the 4 other points, removing the fourth point whose colour under $c$ is the minimal among the four.} \label{fig:rect} \end{figure} \end{proof} \section{Hypergraphs with linear Delaunay graphs} \label{sec:sparse_del} \subsection{General combinatorial considerations} Horev et al. \cite{hks09} showed that using a $(t+1)$-colourful colouring subroutine in \cref{alg:gen-CF-framework} yields a valid $t$-strong-CF-colouring. However, a more careful analysis of this algorithm reveals that the colouring is, in fact, a $t$-UM-colouring. Hence, we have: \begin{claim}\label{cl:colourful2um} Let $t,k\in \setN$ and let $H=(V,\mathcal{E})$ with $|V|=n$ where for any $V' \subset V$, $\chi_{(t+1)\text{-colourful}}(H[V']) \leq k$. Then $\chi_{t\text{-UM}}(H)= O( k \log n)$. \end{claim} Let $H=(V,\mathcal{E})$ be a hypergraph where $V=\{v_1,\ldots,v_n\}$. Given a $t$-strong-CF-colouring $c$ of $V$ with $k$ colours define a colouring of $\binom V t$ by assigning every $t$-subset $S=\{v_{i_1},\ldots,v_{i_t}\}$ ($i_1 < i_2 < \cdots < i_t$) of $V$ a colour $c(S)$ which is the sequence $(c(v_{i_1}),\ldots,c(v_{i_t}))$. It is easy to check that the resulting colouring is a $t$-subset-CF-colouring and uses at most $k^t$ colours. Hence, we have the following observation: \begin{observation} \label{cl:t-strong} Let $H$ be a hypergraph with $\chi_{t\text{-strong-CF}}(H)=k$. Then $\chi_{CF}^t(H)=O(k^t)$. \end{observation} However, if the colouring $c$ of $V$ has the stronger property of being $t$-UM, the corresponding $t$-subsets CF-chromatic number is significantly smaller. \begin{lemma}\label{lem:t-UM} Let $H=(V,\mathcal{E})$ be a hypergraph with $\chi_{t\text{-UM}}(H)=k$. Then $\chi_{CF}^t(H)=O(tk)$. \end{lemma} \begin{proof} Let $\varphi \colon V \rightarrow \{1,\ldots,k\}$ be a $t$-UM-colouring of $H$. Define $\phi \colon \binom{V}{t} \rightarrow \{1,\ldots,tk\}$ by $\phi (S) = \sum_{v\in S} \varphi(v)$.
Since $\varphi$ is $t$-UM, in every hyperedge $h$ of size at least $t$ each of the $t$ largest colours appears exactly once; every other vertex of $h$ thus has a strictly smaller colour, so the $t$-subset formed by the $t$ vertices of largest colours has a strictly larger colour sum than any other $t$-subset of $h$, and its colour under $\phi$ is therefore unique. \end{proof} All results on $t$-strong CF-colouring of which we are aware rely on \cref{alg:gen-CF-framework} (via $(t+1)$-colourful auxiliary colourings), and therefore the resulting colourings are in fact $t$-UM. However, this $t$-UM property has not been used explicitly before, neither in the context of $t$-strong-CF-colouring, nor in the more general context of CF-colouring. In the following subsection we exploit this property, applying \cref{lem:t-UM}. \subsection{Geometric applications} \begin{corollary}\label{cor:pdUM} Let $\mathcal{D}$ be a collection of $n$ pseudo-discs and let $P$ be a set of points in the plane. Then $\chi_{CF}^t(H({\cal D},P))=O(t^2 \log n)$. \end{corollary} \begin{proof} It is known that $H({\cal D},P)$ admits a $t$-UM-colouring with $O(t \log n)$ colours\footnote{The more general result in \cite{hks09} applies to families with linear union complexity. It is well known that this includes pseudo-disc families.} \cite{hks09}. By \cref{lem:t-UM}, the bound follows. \end{proof} Similarly, since the intersection hypergraph of a family of $n$ axis-parallel rectangles with respect to points admits a $t$-UM-colouring with $O(t \log^2 n)$ colours \cite{hks09}, we have: \begin{corollary}\label{cor:recCF} Let $\mathcal{R}$ be a collection of $n$ axis-parallel rectangles and let $P$ be a set of points in the plane. Then $\chi_{CF}^t(H({\cal R},P))=O(t^2 \log^2 n)$. \end{corollary} \subsection{Colouring \texorpdfstring{$t$}{t}-subsets in hypergraphs with linear Delaunay graph} In this section we prove \cref{thm:linear-delaunay}. Note that \cref{cor:pdUM} above is an immediate consequence of \cref{thm:linear-delaunay} since, by \cref{thm:Kesegh}, the corresponding hypergraphs have the HLD property, while \cref{cor:recCF} does not follow from \cref{thm:linear-delaunay}.
In fact, for the corresponding hypergraphs their Delaunay graphs may have a quadratic number of edges. An easy example follows from drawing $n/2$ horizontal line segments and $n/2$ vertical line segments so that their Delaunay graph is the complete bipartite graph $K_{\frac{n}{2},\frac{n}{2}}$. The proof of \cref{thm:linear-delaunay} uses \cref{lem:t-UM}. Therefore, we start by showing that for a wide class of hypergraphs $H$ there is a $t$-UM-colouring with a `small' number of colours: \begin{theorem}\label{thm:Delaunay} Let $H=(V,\mathcal{E})$ be a hypergraph with the HLD property for some $c\in\setR_{>0}$. Then $\chi_{t\text{-UM}}(H)=O(c t \log |V|)$. \end{theorem} \cref{thm:Delaunay} generalises previous results for the intersection hypergraph of pseudo-discs and points \cite{hks09}, and for the intersection hypergraph of points and discs\footnote{The proof in \cite{ABG+05} relies on disc-specific arguments that fail for pseudo-discs.} \cite{ABG+05}. \begin{proof}[Proof of \cref{thm:Delaunay}] By \cref{cl:colourful2um}, it suffices to prove that any induced sub-hypergraph $H[V']$ on some $V' \subset V$ admits a $(t+1)$-colourful colouring with at most $2cet+1$ colours. Such a colouring can be obtained by induction on $|V|$. Assume we proved the assertion for hypergraphs with $|V|-1$ vertices. Define a graph $G_t$ in which we connect two vertices if they participate together in some hyperedge of size at most $t$ of $H$. By \cref{lem:k_good_pairs}, the number of edges in $G_t$ is at most $c|V|et$, hence the average degree of $G_t$ is at most $2cet$. So there must be a vertex $v \in V$ whose degree in $G_t$ is at most $2cet$. This means that $v$ belongs to at most $cet$ hyperedges each of which has cardinality at most $t$. By the induction hypothesis, $H[V \setminus \{v\}]$ admits a $(t+1)$-colourful colouring with at most $2cet+1$ colours. (Note that $H[V \setminus \{v\}]$ satisfies the conditions of the theorem.) 
Assign to $v$ a colour that differs from the colours of all its neighbours in $G_t$. To see that the resulting colouring is a valid $(t+1)$-colourful colouring, consider a hyperedge $h$. We can assume that $v \in h$, for otherwise $h$ satisfies the required conditions by the induction hypothesis. Also, if $|h| > t$, we are done by the induction hypothesis as $h\setminus\{v\}$ already contains the required number of colours. Otherwise, if $|h| \leq t$, then by the induction hypothesis $h\setminus\{v\}$ consists of pairwise distinctly coloured vertices. Since $v$ is a neighbour in $G_t$ of every vertex in $h\setminus\{v\}$, it is assigned a colour distinct from all of them. This completes the induction step. \end{proof} We are ready to prove \cref{thm:linear-delaunay}. \begin{proof}[Proof of \cref{thm:linear-delaunay}] By \cref{thm:Delaunay}, $H$ admits a $t$-UM-colouring with $O(c t \log n)$ colours. In view of \cref{lem:t-UM}, it follows that $\chi_{CF}^t(H)=O(c t^2 \log n)$. As for the tightness of this bound, consider for each $n\in\setN$ the hypergraph $H_n=H(P,{\cal I})$ where $P=[n]$ is the set $\{1,\ldots,n\}$ and ${\cal I}$ is the family of all intervals on the real line. Clearly, $H_n$ has the HLD property (with $c=1$). Let $\varphi$ be a $t$-subset-CF colouring of $H_{2n+1}$. Since there is an interval $I\in {\cal I}$ such that $I\cap P=P$, $I$ contains a uniquely-coloured $t$-subset $S$. Hence there exists $I'\in {\cal I}$ such that $I'$ does not contain $S$ and $|I'\cap P|\ge n$. Let $P'=I'\cap P$. The restriction of $\varphi$ to $H_{2n+1}[P']$ is still a $t$-subset-CF-colouring but does not use the colour $\varphi(S)$. Hence the recurrence relation $\chi^t_{CF}(H_{2n+1}) \geq 1 + \chi^t_{CF} (H_n)$ is satisfied and therefore $\chi^t_{CF}(H_n)= \Omega (\log n)$.
\end{proof} \section{Comparing \texorpdfstring{$t$}{t}-subset-CF-colouring and CF-colouring} \label{sec:negative-results} \subsection{\texorpdfstring{$\chi^t_{CF}$}{\textchi\^{}t\_{CF}} is not bounded in terms of \texorpdfstring{$\chi_{CF}$}{\textchi\_{CF}}}\label{subsec:comparing} In this subsection we prove \cref{thm:unbounded} and show that there exist hypergraphs with bounded $\chi_{CF}$ but arbitrarily large $\chi^t_{CF}$. \begin{proof}[Proof of \cref{thm:unbounded}] Consider $t,n\in \mathbb{N}$ such that $n\ge t \ge 2$ and let $H_n=(V,\mathcal{E})$ with $V=\{1,2,\ldots,n\}$ and $\mathcal{E}=\{\{1\}\cup s \mid s\in {\binom{\{2,3,\ldots,n\}}{t+1}}\}$. It is easy to see that $\chi_{CF}(H_n)=2$, with two colour classes $\{1\}$ and $\{2,3,\ldots,n\}$. On the other hand, for any $k \in \setN$, there exists $n\in \mathbb{N}$ such that $\chi^t_{CF}(H_n)>k$. Let \[ n = \twr_{t-1}(3(b-t+1) k \log k )+1, \] where $b=\twr_t(3k\log k)$, and the {\em tower function} is defined by $\twr_1(m)=m$ and $\twr_t(m)=2^{\twr_{t-1}(m)}$. Indeed, by the multicolour hypergraph Ramsey theorem~\cite{ER52}, for any $2 \leq r<s$ and $N \geq \twr_r ( 3(s-r) k \log k ) $, in any $k$-colouring of all the $r$-subsets of an $N$-element set there exists a monochromatic $s$-subset, namely an $s$-subset all of whose $r$-element subsets have the same colour. Assume to the contrary that $\varphi$ is a $t$-subset-CF colouring of $V$ with $k$ colours. Define a colouring $\psi$ of $\binom{ \{ 2,3,\ldots,n \} }{t-1}$ by $\psi(\{i_1,i_2,\ldots,i_{t-1}\}) = \varphi(\{1,i_1,i_2,\ldots,i_{t-1}\}) $. Since $n-1 = \twr_{t-1}(3(b-t+1) k \log k)$, there exists some $b$-subset $B$ of $\{ 2, \ldots ,n \}$, all of whose $(t-1)$-subsets are assigned the same colour by $\psi$. In other words, all $t$-subsets of type $\{\{1,i_1,i_2,\ldots,i_{t-1}\}\colon i_1,i_2,\ldots,i_{t-1} \in B\}$ are assigned the same colour in $\varphi$.
Since $|B|=b=\twr_t(3k\log k)$, using the same result as above, we can deduce that there is a subset $B' \subset B$, with $|B'|=t+1$, all of whose $t$-subsets receive the same colour under $\varphi$. Hence, $e=B' \cup \{1\}$ is a hyperedge of $H_n$ with no uniquely coloured $t$-subset under $\varphi$. Indeed, all $t$-subsets of $e$ containing 1 get the same colour, and all $t$-subsets that are contained in $B'$ get the same (possibly different) colour, a contradiction. \end{proof} \subsection{\texorpdfstring{$\chi^2_{CF}$}{\textchi\texttwosuperior\_{CF}} can be much smaller than \texorpdfstring{$\chi_{CF}$}{\textchi\_{CF}}} \subsubsection{Union of hyperedges} \label{subsec:union} Here we prove \Cref{thm:union}. \begin{proof}[Proof of Theorem~\ref{thm:union}] By \cref{thm:Delaunay}, $H$ admits a 2-UM-colouring $\psi$ with $O(c\log |V|)$ colours. Define the following pairs-colouring: \begin{align*} \phi \colon \{x,y\} \in \binom{V}{2} &\mapsto \left(\psi(x)+\psi(y), \begin{cases}0 \text{ if $\psi(x)=\psi(y)$,} \\ 1 \text{ otherwise} \end{cases}\right). \end{align*} Consider some hyperedge $h = h_1 \cup h_2$ of $H^{\cup}$ where $ h_1,h_2 \in \mathcal{E}$ with $|h| \geq 3$. Let $v_1,v_2,v_3$ be three vertices in $h$ attaining the three maximal values of $\psi$ on $h$, so that $\psi(v_1)\geq \psi(v_2)\geq \psi(v_3)$. Then (even without considering the second coordinate of $\phi$), the only case in which $\{v_1,v_2\}$ is not the uniquely-coloured pair is when $\psi(v_1)> \psi(v_2)= \psi(v_3)$. In this case, $\psi(v_2)= \psi(v_3)$ appears exactly once in each of $h_1,h_2$ (in one as the maximal colour, in the other as the second-maximal colour), and $\{v_2,v_3\}$ is the uniquely-coloured pair -- here we use the second coordinate of $\phi$.
\end{proof} \subsubsection{Points on a line with respect to interval unions}\label{subsec:interval_union} In some geometric hypergraphs, the constants of \cref{thm:union} can be improved: \begin{theorem}\label{thm:union_intervals} Let $H=H(P,\mathcal I)$ with $P$ a set of $n$ points on a line $\ell$ and $\mathcal I$ the set of intervals of $\ell$. Then $\chi^2_{CF}(H^{\cup}) \leq 4\log n$. \end{theorem} \begin{remark} The hypergraph $H^{\cup}$ in \cref{thm:union_intervals} has a geometric interpretation as (isomorphic to) a hypergraph whose vertices are points on the moment curve in $\setR^4$, and whose hyperedges are induced by intersections with half-spaces. Indeed, a hyperplane in $\setR^4$ intersects the moment curve at most 4 times, and for any choice of up to 4 points on the moment curve in $\setR^4$ there exists a hyperplane that intersects the moment curve exactly at these points. \end{remark} \begin{proof} Without loss of generality, the vertex set is $P=\{1,2,\ldots,n\}$ and $n=2^s-1$ for some integer $s=\lceil \log n\rceil$. Build a vertex colouring $\psi\colon P \to \{1,\dots,s\}$ as follows. The midpoint $(n+1)/2=2^{s-1}$ receives $\psi(2^{s-1})=s$. Each of the two halves (from $1$ to $2^{s-1}-1$ and from $2^{s-1}+1$ to $2^s -1$) has $2^{s-1} -1$ elements; colour them recursively with the colours from $1$ to $s-1$. It is easy to see that $\psi$ is a UM-colouring of $H$. As for the pair colouring, define \begin{align*} \phi &\colon \{i,i+1\} \mapsto \left( \max \{\psi(i), \psi(i+1)\}, \begin{cases}0 \text{ if $\psi(i)<\psi(i+1)$,} \\ 1 \text{ otherwise} \end{cases}\right), \intertext{and for non-adjacent vertices (that is, $|i-j|>1$):} \phi &\colon \{i,j\} \mapsto (\psi(i)+ \psi(j), 2).
\end{align*} The second coordinate $2$ ensures that the colour of a non-adjacent pair never coincides with the colour of an adjacent pair. To check that $\phi$ is a pairs-CF colouring of $H^{\cup}$, we have to consider two types of hyperedges: those consisting of a single integer interval (of the form $\iinterval i j$ with $i<j$) and those consisting of two disjoint intervals (of the form $\iinterval i j\cup \iinterval k l$ where $i \leq j <k \leq l$; if $k=j+1$ the union is a single interval, handled by the first case, so we may assume $k>j+1$). In the first case, $\psi$ attains a unique maximum $m$ over $\iinterval i j$, at some point $u$; at least one of the adjacent pairs $\{u-1,u\}$ and $\{u,u+1\}$ is contained in $\iinterval i j$, and its colour ($(m,0)$ or $(m,1)$, respectively) is present exactly once in the hyperedge, since only adjacent pairs containing $u$ can receive a colour with first coordinate $m$ and second coordinate in $\{0,1\}$. In the second case, $\psi$ attains respective maxima $m$ and $m'$ on the two intervals. If $m=m'$ then the colour $(2m,2)$ is present exactly once: the two maximum points are non-adjacent, and theirs is the only pair whose colours sum to $2m$. If $m\neq m'$, say $m>m'$, and the point $u$ of colour $m$ has a neighbour in the hyperedge, then, as in the first case, one of the colours $(m,0)$ and $(m,1)$ appears exactly once. Otherwise $u$ forms a singleton interval; let $u'$ be the unique point of colour $m'$ on the other interval. Then the non-adjacent pair $\{u,u'\}$ has colour $(m+m',2)$, which appears exactly once, since every other pair of the hyperedge has colour sum strictly smaller than $m+m'$. \end{proof} In \cref{subsec:comparing} we described a hypergraph whose $t$-subset CF-chromatic number is much larger than its CF-chromatic number. The hypergraph $H^{\cup}$ in \cref{thm:union_intervals} demonstrates the opposite situation: $\chi^2_{CF}(H^{\cup})=O(\log n)$, while clearly $\chi_{CF}(H^{\cup})=n$; this proves \cref{thm:unbounded2}. However, this is not very informative since the underlying lower bound is due to the hyperedges of size $2$, which we disregard for the pairs-CF colourings. It is therefore natural to consider the CF-chromatic number of $H^{\cup}$ restricted to its hyperedges of size at least $3$. \cref{thm:LBunion} below shows that even if we consider only hyperedges of size at least 3 the same phenomenon holds -- the pairs-CF chromatic number of $H^{\cup}$ is still at most logarithmic in the CF-chromatic number of $H^{\cup}$. \begin{theorem}\label{thm:LBunion} Let $H^{\cup}$ be as in \cref{thm:union_intervals}. The sub-hypergraph $H' \subset H^{\cup}$ containing all hyperedges of size at least $3$ has \[\chi_{CF}(H') \geq \sqrt{n-1}.\] \end{theorem} \begin{proof} Consider a CF-colouring of $H'$ with $\chi_{CF}(H')$ colours.
For every ordered pair of colours $(a,b)$ there is at most one pair of consecutive vertices that are coloured with $a$ and $b$ in this order, for otherwise there would be a hyperedge of size $4$ in which both colours $a$ and $b$ appear twice. Since there are $n-1$ consecutive pairs of vertices and $\chi_{CF}(H')^2$ ordered pairs of colours, we have $\chi_{CF}(H')^2 \geq n-1$ and the inequality follows. \end{proof}
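The recursive midpoint colouring from the proof of \cref{thm:union_intervals} is easy to experiment with: $\psi(i)$ equals one plus the number of trailing zero bits of $i$. The following sketch (ours, not part of the paper; the second coordinate of the pair colours is chosen so that adjacent-pair colours and the colour sums of non-adjacent pairs can never coincide) brute-force checks the pairs-CF property on all unions of two intervals in $\{1,\ldots,15\}$.

```python
from itertools import combinations
from collections import Counter

def psi(i):
    """Midpoint colour of i in {1, ..., 2^s - 1}: the midpoint 2^(s-1)
    gets colour s, the halves are coloured recursively; equivalently,
    1 + (number of trailing zero bits of i)."""
    return (i & -i).bit_length()

def phi(i, j):
    """Pair colour: adjacent pairs get (max colour, ascent/descent flag);
    non-adjacent pairs get (colour sum, 2), a disjoint family of colours."""
    i, j = min(i, j), max(i, j)
    if j == i + 1:
        return (max(psi(i), psi(j)), 0 if psi(i) < psi(j) else 1)
    return (psi(i) + psi(j), 2)

def has_unique_pair(hyperedge):
    """Does some 2-subset of the hyperedge get a colour used only once?"""
    counts = Counter(phi(i, j) for i, j in combinations(sorted(hyperedge), 2))
    return 1 in counts.values()

# every union of two integer intervals inside {1, ..., 15} of size >= 2
n = 15
intervals = [set(range(i, j + 1)) for i in range(1, n + 1) for j in range(i, n + 1)]
ok = all(has_unique_pair(I | J)
         for I in intervals for J in intervals if len(I | J) >= 2)
```

Running the check confirms that every such hyperedge contains a uniquely coloured pair (`ok` is `True`).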
\section{Introduction} Suppose that we intend to perform an experiment consisting of a set\footnote{Note that we implicitly assume that reordering of the trials does not influence the relevant properties of the experimental design.} of trials. Assume that the observed response in each trial depends on a design point chosen from a finite design space $\mathfrak X=\{\mathbf x_1,\ldots,\mathbf x_n\}$. For instance, $\mathfrak X$ may be the set of all available combinations of levels of several discrete factors\footnote{In some experimental situations, the set of available design points can be modeled as a continuous domain. However, in many applications, the design space is finite. This is the case if each factor has - in principle or effectively - only a finite number of levels that the experimenter can select, or if the optimal design problem corresponds to data sub-selection (see the examples in Section \ref{sec:EX}). Moreover, the method proposed in this paper can also be useful for solving the problems with continuous design spaces; cf. Section \ref{Sec:misc}.}. \bigskip An ``exact'' design (ED) is a selection $\xi$ of design points, not necessarily distinct, to be used for individual trials. 
We will formalize an ED $\xi$ as a non-negative integer-valued vector $(\xi_1,\ldots,\xi_n)^T \in \mathbb{N}_0^n$,\footnote{The symbols $\mathbb R$, $\mathbb R_+$, $\mathbb N$, $\mathbb N_0$, and $\mathbb R^{k \times n}$ denote the sets of real, non-negative real, natural, non-negative integer numbers, and the set of all $k \times n$ real matrices, respectively.} where $\xi_i$, called the $i$-th weight, represents the number of trials to be performed at the design point $\mathbf x_i$, $i=1,\ldots,n$.\footnote{Therefore, we do not represent designs by normalized (probability) measures, as is frequently done in optimal design, but by non-normalized vectors of numbers of trials.} An ``approximate'' design (AD), $\xi=(\xi_1,\ldots,\xi_n)^T \in \mathbb R_+^n$, is allowed to have general non-negative components, which means that the weight $\xi_i$ is a continuous relaxation of the integer number of trials to be performed at $\mathbf x_i$, $i=1,\ldots,n$.\footnote{Approximate designs are sometimes also called ``continuous'' designs, which refers to the continuity of the space of designs, not the design space.} Thus, an AD must be converted into an ED prior to its application in a real experiment. \bigskip Let $\Xi^E_{\mathbf A,\mathbf b}=\{\xi \in \mathbb N_0^n: \mathbf A \xi \leq \mathbf b\}$ be a non-empty set of permissible EDs, where $\mathbf A \in \mathbb R^{k \times n}$ and $\mathbf b \in \mathbb R^k$. In the classical situation\footnote{The symbols $\mathbf 1_n$, $\mathbf 0_n$, $\mathbf I_n$ and $\mathbf J_n$ denote the $n$-dimensional vector of ones, $n$-dimensional vector of zeros, the $n \times n$ unit matrix and the $n \times n$ matrix of ones, respectively.} $\mathbf A=\mathbf 1_n^T$ and $\mathbf b=N \in \mathbb N$; in that case we only restrict the number $N$ of trials, the so-called size of the experiment. Nevertheless, there are also many situations where $\mathbf A$ and $\mathbf b$ are more complex. 
They can correspond to various time, budget, material, unbiasedness and safety restrictions, or requirements on the form of the design (see, e.g., \citet{HBF} and Section \ref{sec:EX} of this paper). An important constraint necessary for applications to subsampling is that each design point (i.e., data point) can be used at most once; formally $\mathbf A=\mathbf I_n$ and $\mathbf b=\mathbf 1_n$.\footnote{In actual computation using integer programming solvers this ``without replication'' constraint can be forced by setting the type of variables to binary.} \bigskip Suppose that the information gained from an experiment based on $\xi \in \Xi^E_{\mathbf A,\mathbf b}$ can be represented by a matrix $\mathbf M(\xi)$. For instance, $\mathbf M(\xi)$ may be proportional to the Fisher information matrix for the unknown parameters of an underlying statistical model. In optimal experimental design, it is usual to select a concave function $\Phi$ with a target set $\mathbb R \cup \{-\infty\}$ to quantify the information content of $\mathbf M(\xi)$. Such an optimality criterion allows an experimenter to compare different designs and, in principle, to select a $\Phi$-optimal ED, i.e., a design that maximizes\footnote{Alternatively, it is possible to select a \emph{convex} criterion $\Phi$ such that $\Phi(\mathbf M(\xi))$ can be interpreted as a loss from the experiment that depends on the design $\xi$. In this case, the optimal design would minimize $\Phi(\mathbf M(\cdot))$ over $\Xi^E_{\mathbf A,\mathbf b}$. Note that some criteria do not depend on the design via its information matrix; we will not discuss them in this paper.} $\Phi(\mathbf M(\cdot))$ over the discrete set $\Xi^E_{\mathbf A,\mathbf b}$. In many cases, the information matrix can be consistently extended to ADs. Then, maximizing $\Phi(\mathbf M(\cdot))$ over the convex set $\Xi^A_{\mathbf A,\mathbf b}=\{ \xi \in \mathbb R_+^n: \mathbf A\xi \leq \mathbf b\}$ results in the so-called optimal AD.
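For concreteness, the two constraint systems mentioned above can be encoded and tested directly; a small sketch (ours, for illustration only):

```python
import numpy as np

def is_permissible_exact(xi, A, b):
    """Membership test for Xi^E_{A,b}: xi must be a non-negative
    integer vector satisfying A @ xi <= b componentwise."""
    xi = np.asarray(xi, dtype=float)
    if np.any(xi < 0) or np.any(xi != np.round(xi)):
        return False
    return bool(np.all(A @ xi <= b))

n = 4
# classical size constraint: A = 1_n^T, b = N (here N = 3)
A_size, b_size = np.ones((1, n)), np.array([3.0])
# 'without replication' constraint for subsampling: A = I_n, b = 1_n
A_rep, b_rep = np.eye(n), np.ones(n)
```

Dropping the integrality test in the sketch gives the corresponding membership test for the approximate design set $\Xi^A_{\mathbf A,\mathbf b}$.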
\bigskip The construction of optimal EDs is typically a difficult problem of discrete optimization. There are two general approaches to computing optimal or nearly-optimal EDs (see, e.g., \citet{MWY} for a survey): \begin{enumerate}[(i)] \item Convex computational methods, in which an optimal AD is first determined and a process called ``rounding'' is then used to obtain an ED; \item Computational methods of discrete optimization, including complete or partial enumeration methods, as well as various specialized or general-purpose solvers and heuristics of mathematical programming. \end{enumerate} It is usually much simpler to determine an optimal AD than an optimal ED, both theoretically and computationally. Therefore, a large part of the literature is concerned only with approximate designs (cf. \citet{Puk}, \citet{Pazman}). Although ADs cannot be directly used for conducting experiments, relatively little attention has been paid to their conversion into efficient EDs. \bigskip The standard methods for converting an AD into an ED are called rounding algorithms, developed for the classical, size-constrained problem. A rounding algorithm begins with an AD $\xi^*=(\xi_1^*,\ldots,\xi_n^*)^T \in \Xi^A_{\mathbf 1_n^T,N}$ and extracts a vector $\mathcal{W}=(\xi^*_{i_1}, \ldots, \xi^*_{i_s})^T$ of positive weights, where $\{i_1,\ldots,i_s\}=\{i: \xi^*_i>0\}$ is the support of $\xi^*$ and $s$ is the size of the support. Then, typically using simple rules, the algorithm converts $\mathcal{W}$ into a vector $\left(\xi^\pm_{i_1},\ldots,\xi^\pm_{i_s}\right)^T \in \mathbb N_0^s$ such that $\sum_j \xi^\pm_{i_j}=N$ and, finally, transforms the vector of rounded weights into an ED that belongs to $\Xi^E_{\mathbf 1_n^T,N}$. \bigskip The first notable rounding method was suggested by \citet{kiefer}, who formulated the rounding problem as the minimization of the maximum of the difference between the exact and approximate design weights.
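As a minimal illustration of this rounding template, here is a sketch of ours using the plain largest-remainder apportionment rule (not the specific procedure of \citet{kiefer} or the methods discussed next):

```python
import math

def largest_remainder_rounding(weights, N):
    """Round positive AD weights to non-negative integers summing to N:
    floor the scaled weights, then hand the remaining trials to the
    coordinates with the largest fractional parts."""
    total = sum(weights)
    scaled = [N * w / total for w in weights]
    n = [math.floor(x) for x in scaled]
    leftover = N - sum(n)
    for i in sorted(range(len(weights)),
                    key=lambda i: scaled[i] - n[i], reverse=True)[:leftover]:
        n[i] += 1
    return n
```

Any such rule returns integer weights summing to $N$; the methods discussed in this section differ in how the leftover trials are allocated and in the guarantees they provide.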
By using techniques similar to those applied in voting apportionment, \citet{PukRieder} arrived at a criterion-independent rounding algorithm known as efficient rounding (ER). More recent proposals include randomized rounding heuristics, e.g., proportional and pipage rounding, as well as incremental rounding, and bounds on the approximation ratios of the resulting designs have been presented (see \citet{Bouhtou} and \citet{Sagnol}). However, these methods are only applicable if the criterion function is submodular (e.g., $D$-optimality). We are not aware of rounding procedures for general $\Xi^E_{\mathbf A,\mathbf b}$, but for specific classes of constraints, it is not difficult to mimic the existing rounding procedures originally developed for the size-constrained problem. \bigskip ER and its variants, although prevalent to this day, have several major drawbacks. First, for any positive coordinate of the initial AD, the value of the corresponding coordinate of the resulting ED is forced to be at least $1$. This implies the restriction $N \geq s$, which can completely prevent the application of ER if the support size of the AD is large. From the opposite perspective, if a design point is not present in the support of the AD, then ER cannot add a corresponding design point into the resulting ED. Moreover, ER does not account for any design criterion nor any underlying statistical model, although it is based on an optimal AD, which itself can strongly depend on the adopted criterion and model. In addition, for many statistical models, an infinite number of optimal ADs exist, and it is unclear which of them should be used for the rounding operation. All of these disadvantages generally make approach (ii) preferable to (i) in practice; see, e.g., the examples in \citet{GoosJones}. \bigskip \citet{HF} proposed a substantially different approach to the use of an optimal AD for ED construction, which overcomes many disadvantages of ER and similar methods. 
In particular, it does not depend on the choice of the optimal AD if the AD is not unique, it is not restricted to the support of the optimal AD, and the resulting EDs are usually significantly more efficient than the EDs computed by ER. The method is based on a second-order approximation of the $D$-criterion in the neighborhood of the $D$-optimal approximate information matrix, and to arrive at an ED, it employs rapid off-the-shelf solvers for integer quadratic programming (IQP). \bigskip In this paper, we view the idea of a quadratic criterion approximation based on an optimal or nearly-optimal AD as a broadly applicable principle in computational experimental design, and we call this principle AQuA (ascent with quadratic assistance). As we will show, AQuA can be realized by means of heuristics but also via solvers of IQP or mixed integer conic quadratic programming (MICQP) in situations with various budget and structural constraints on the design. AQuA can also be used sequentially, similarly to sequential quadratic programming. \bigskip The new results of this paper demonstrate that AQuA can be applied to a wide range of criteria, including the important criteria of $A$- and $I$-optimality, and, utilizing a low-rank property of key quadratic forms, to much larger design spaces than competing methods.
Finally, Section \ref{sec:EX} presents examples of optimal designs that can be computed by the application of the AQuA approach. \section{The model and Kiefer's criteria}\label{sec:Kiefer} For a trial in $\mathbf x_i \in \mathfrak X$, $i=1,\ldots,n$, the observed response $Y(\mathbf x_i)$ is an $r$-dimensional random vector that is assumed to satisfy the linear regression model $E(Y(\mathbf x_i))=\mathbf A^T_i \beta$, where $\beta \in \mathbb R^m$ is a vector of unknown parameters and $\mathbf A_i \in \mathbb R^{m \times r}$ is a known matrix. For different observations, the errors are assumed to be independent and identically distributed with a finite and non-zero variance. Note that we consider a linear regression model with homoscedastic errors only for simplicity. It is straightforward to use the results of this paper for the construction of locally optimal designs for non-linear regression models (this only requires a linearization in a nominal parameter of the model; see, e.g., \citet{Atkinson}, Chap. 17) and to extend them to heteroscedastic observations (by means of a proper transformation of the model; see \citet{Atkinson}, Chap. 23). \bigskip The information matrix associated with a design $\xi$ on $\mathfrak X$, either exact or approximate, is \begin{equation*} \mathbf M(\xi)=\sum_{i=1}^n\xi_i \mathbf H_i, \end{equation*} where the $\mathbf H_i=\mathbf A_i\mathbf A^T_i$, $i=1,\ldots,n$, are non-negative definite ``elementary'' information matrices with dimensions of $m\times m$\footnote{For brevity, we will henceforth use $\mathcal{S}^m$, $\mathcal{S}^m_+$, and $\mathcal{S}^m_{++}$ to denote the sets of all symmetric, non-negative definite and positive definite $m \times m$ matrices, respectively.}. For the classical case with univariate observations, $\mathbf A_i=\mathbf f(\mathbf x_i) \in \mathbb R^m$, i.e., the $m$-dimensional regressor corresponding to $\mathbf x_i \in \mathfrak X$.
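In this classical univariate case, the information matrix can be assembled directly from the regressors; a small sketch (ours, for illustration; row $i$ of `F` holds the regressor $\mathbf f(\mathbf x_i)$):

```python
import numpy as np

def information_matrix(xi, F):
    """M(xi) = sum_i xi_i * f(x_i) f(x_i)^T for the classical case
    H_i = f(x_i) f(x_i)^T; row i of F is the regressor f(x_i)."""
    xi = np.asarray(xi, dtype=float)
    return F.T @ (xi[:, None] * F)
```

For instance, a design placing one trial at each of three points with regressors $(1,0)$, $(0,1)$ and $(1,1)$ yields the sum of the three corresponding rank-one matrices.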
The general form of the elementary information matrices may be useful for instance for problems with grouped or multivariate observations with possibly correlated components (see \citet{Pazman}, Sec. II.5.3.), optimal augmentation of a set of existing trials (as shown in \citet{HT}, Section 6) and elsewhere. \bigskip Let $\Phi: \mathcal{S}^m_+ \to \mathbb R \cup \{-\infty\}$ be a continuous optimality criterion that attains its smallest value for singular non-negative definite matrices. Note that a $\Phi$-optimal ED exists because $\Xi^E_{\mathbf A,\mathbf b}$ is finite and non-empty. Since $\Xi^A_{\mathbf A,\mathbf b}$ is a non-empty compact set, a $\Phi$-optimal AD is also guaranteed to exist. If $\xi^*$ is a $\Phi$-optimal exact (approximate) design, then $\mathbf M(\xi^*)$ is called a $\Phi$-optimal exact (approximate) information matrix. To make the optimal design problem non-trivial, we will suppose that there exists a $\xi \in \Xi^E_{\mathbf A,\mathbf b}$ such that $\mathbf M(\xi)$ is non-singular, which implies that both the approximate and exact $\Phi$-optimal information matrices are non-singular. We will also assume that $\Phi$ is twice differentiable in $\mathcal{S}^m_{++}$ and that there exists a version\footnote{By two versions of a criterion, we mean two criteria that induce the same ordering on the set of information matrices.} of $\Phi$ that is strictly concave in the optimal approximate information matrix $\mathbf M_*$. This assumption is satisfied for most models and standard optimality criteria, and it implies that the $\Phi$-optimal approximate information matrix is unique. \bigskip All properties stated above are satisfied by Kiefer's criteria, which are commonly used in practice. In the optimal design literature, several versions of Kiefer's criteria for $\Phi_p$-optimality appear, and usually the choice of the particular version does not affect the strength of the theoretical or computational results. 
However, it turns out that in general, criterion-approximation methods \emph{do} depend on the particular version of the criterion that is chosen. Therefore, we will consider two concave versions of $\Phi_p$ criteria, as follows. \bigskip The ``positive'' version (cf. \citet{Puk}): For $p \in \mathbb N$ and $\mathbf M \in \mathcal{S}^m_{++}$, let \begin{equation}\label{eq:Phiplus} \Phi^+_p(\mathbf M)=\left(\frac{1}{m}\mathrm{tr} (\mathbf M^{-p}) \right)^{-1/p} \end{equation} and $\Phi^+_p(\mathbf M)=0$ for a singular matrix $\mathbf M \in \mathcal{S}^m_{+}$. In particular, for $p=1$, we obtain the criterion $\Phi^+_1$ of $A$-optimality. The corresponding criterion of $D$-optimality is defined as $\Phi^+_0(\mathbf M)=\left(\det(\mathbf M)\right)^{1/m}$ for all $\mathbf M \in \mathcal{S}^m_{+}$. \bigskip The ``negative'' version (cf. \citet{Pazman}, Section IV.2.7): For $p \in \mathbb N$ and $\mathbf M \in \mathcal{S}^m_{++}$, let \begin{equation}\label{eq:Phiminus} \Phi^-_p(\mathbf M)=-\left(\frac{1}{m}\mathrm{tr}(\mathbf M^{-p}) \right)^{1/p} \end{equation} and $\Phi^-_p(\mathbf M)= -\infty$ for a singular matrix $\mathbf M \in \mathcal{S}^m_{+}$. In particular, $\Phi^-_1$ is a version of the $A$ criterion, and the corresponding $D$ criterion is $\Phi^-_0(\mathbf M)=-\left(\det(\mathbf M)\right)^{-1/m}$ for $\mathbf M \in \mathcal{S}^m_{++}$ or $\Phi^-_0(\mathbf M)= -\infty$ for a singular $\mathbf M \in \mathcal{S}^m_{+}$. \bigskip Another commonly used concave version of the $D$-optimality criterion is $\Phi^0_0(\mathbf M)=\log(\det(\mathbf M))$ for all $\mathbf M \in \mathcal{S}^m_{+}$ ($\log(0):=-\infty$); cf. \citet{Pazman}. This is the version of $D$-optimality used in \citet{HF}. \bigskip Note that both positive and negative versions of the criterion are smooth on the set of positive definite matrices, and the gradients are \begin{equation*} \nabla_\mathbf M\Phi^\pm_p(\mathbf M)=\pm\frac{\Phi^\pm_p(\mathbf M)}{\mathrm{tr} (\mathbf M^{-p})} \mathbf M^{-p-1}. 
\end{equation*} It is customary to evaluate the quality of a design with respect to the optimal AD. Let $\xi^*$ be the optimal AD, and let $\Phi$ be a non-negative, positively homogeneous criterion that is not constantly equal to zero (these conditions are satisfied by the positive version $\Phi^+_p$ of Kiefer's criteria). Then, the $\Phi$-efficiency of a design $\xi$ is defined as $\mathrm{eff}_{\Phi}(\xi)=\frac{\Phi(\mathbf M(\xi))}{\Phi(\mathbf M(\xi^*))}$, see \cite{Puk}, Section 5.15. \section{Quadratic approximations of Kiefer's criteria}\label{Sec:SQR} Suppose that we have a quadratic approximation $\Phi_Q: \mathcal{S}^m_+ \to \mathbb R$ of a concave criterion $\Phi$ in the neighborhood of $\mathbf M_*$. Our experience shows that in most optimal design problems, the ordering on $\Xi^E_{\mathbf A,\mathbf b}$ that is induced by $\Phi_Q$ largely coincides with the ordering induced by the original criterion $\Phi$. At the same time, the quadratic approximation criterion $\Phi_Q$ can be evaluated (or updated) much more rapidly than $\Phi$, it has simpler analytic properties, and there are powerful available solvers that can maximize $\Phi_Q$. \bigskip Let $\mathbf M_*\in\mathcal{S}^m_{++}$ denote the $\Phi$-optimal approximate information matrix\footnote{Note that the optimal approximate information matrix $\mathbf M_*$ with respect to $\Phi_p^+$ and $\Phi_p^-$ is non-singular for any $p \in \mathbb N_0$.} and let $\Phi$ be twice differentiable in $\mathcal{S}^m_{++}$.
Then, a second-order Taylor approximation of $\Phi$ in terms of $\mathbf M \in \mathcal{S}^m_{+}$ can be written as follows (see, e.g., \citet{Dattorro}, Appendix D) \begin{eqnarray} \Phi(\mathbf M) &\approx & \Phi(\mathbf M_*)+\partial\Phi(\mathbf M_*,\mathbf M-\mathbf M_*) \nonumber \\ && +\frac{1}{2}\partial^2\Phi(\mathbf M_*,\mathbf M-\mathbf M_*), \label{taylor} \end{eqnarray} where $\partial\Phi(\mathbf M,\mathbf N)$ denotes the directional derivative at the point $\mathbf M$ in the direction $\mathbf N$, i.e., $\partial\Phi(\mathbf M,\mathbf N)=\mathrm{tr}\left(\nabla_\mathbf M\Phi(\mathbf M)\mathbf N\right)$, and $\partial^2\Phi(\mathbf M,\mathbf N)$ denotes the second directional derivative at the point $\mathbf M$ in the direction $\mathbf N$, i.e., $\partial^2\Phi(\mathbf M,\mathbf N)=\mathrm{tr}\left(\nabla_\mathbf M\partial\Phi(\mathbf M,\mathbf N)\mathbf N\right)$, with $\nabla_\mathbf M\Phi(\mathbf M)$ denoting the gradient with respect to $\mathbf M$. \bigskip For $\Phi=\Phi_p^+$ as defined in \eqref{eq:Phiplus} with $p \in \mathbb N_0$, we obtain \begin{equation*} \partial\Phi_p^+(\mathbf M_*,\mathbf M-\mathbf M_*)=\frac{\Phi_p^+(\mathbf M_*)}{\mathrm{tr}(\mathbf M_*^{-p})}\mathrm{tr}(\mathbf M_*^{-p-1}(\mathbf M-\mathbf M_*)) \end{equation*} and \begin{eqnarray*} \partial^2\Phi_p^+(\mathbf M_*,\mathbf M-\mathbf M_*)=\frac{\Phi_p^+(\mathbf M_*)}{\mathrm{tr}(\mathbf M_*^{-p})}\Big[(p+1)\frac{\mathrm{tr}^2(\mathbf M_*^{-p-1}\mathbf M)}{\mathrm{tr}(\mathbf M_*^{-p})}\Big.\\ \Big. -\mathcal{F}_p(\mathbf M_*,\mathbf M, \mathbf M)\Big], \end{eqnarray*} where \begin{equation*} \mathcal{F}_p(\mathbf M_*,\mathbf M_1,\mathbf M_2)=\sum\limits_{r=1}^{p+1}\mathrm{tr}(\mathbf M_*^{-r}\mathbf M_1\mathbf M_*^{-p-2+r}\mathbf M_2). 
\end{equation*} Note that in particular, \begin{eqnarray*} \mathcal{F}_0(\mathbf M_*,\mathbf M, \mathbf M)&=&\mathrm{tr}([\mathbf M_*^{-1}\mathbf M]^2) \\ \mathcal{F}_1(\mathbf M_*,\mathbf M, \mathbf M)&=&2\mathrm{tr}(\mathbf M_*^{-2}\mathbf M\mathbf M_*^{-1}\mathbf M). \end{eqnarray*} According to \eqref{taylor}, we have $\Phi_p^+(\mathbf M)\approx \Phi_{pQ}^+(\mathbf M)$, where $\Phi_{pQ}^+$ is the second-order approximation of the criterion $\Phi_p^+$, given by \begin{eqnarray*} \Phi_{pQ}^+(\mathbf M)=\frac{\Phi_p^+(\mathbf M_*)}{\mathrm{tr}(\mathbf M_*^{-p})}\Big( \mathrm{tr}(\mathbf M_*^{-p-1}\mathbf M)+ \Big.\\ \Big.\frac{p+1}{2}\frac{\mathrm{tr}^2(\mathbf M_*^{-p-1}\mathbf M)}{\mathrm{tr}(\mathbf M_*^{-p})}-\frac{1}{2}\mathcal{F}_p(\mathbf M_*,\mathbf M,\mathbf M)\Big).\nonumber \end{eqnarray*} Similar computations can be performed for $\Phi=\Phi_p^-$, $p \in \mathbb N_0$, as defined in \eqref{eq:Phiminus}, leading to the quadratic approximation \begin{eqnarray*} \Phi_{pQ}^-(\mathbf M)=\frac{-3\Phi_p^-(\mathbf M_*)}{\mathrm{tr}(\mathbf M_*^{-p})}\Big(\mathrm{tr}(\mathbf M_*^{-p-1}\mathbf M)-\mathrm{tr}(\mathbf M_*^{-p})\Big.\\ \Big. +\frac{p-1}{6}\frac{\mathrm{tr}^2(\mathbf M_*^{-p-1}\mathbf M)}{\mathrm{tr}(\mathbf M_*^{-p})} -\frac{1}{6}\mathcal{F}_p(\mathbf M_*,\mathbf M, \mathbf M)\Big).\nonumber \end{eqnarray*} We remark that it is also possible to compute $\Phi_{pQ}^-$ based on the formulas for the Hessian of a modified version of $\Phi_p^-$ that were derived by \citet{YBT} (for integer values of $p$) and \citet{SY} (for $p=0,1$).
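The identities for $\mathcal F_0$ and $\mathcal F_1$ above are easy to check numerically. The following sketch (Python/NumPy as a stand-in for the paper's computing environment; the random matrices are purely illustrative, not taken from any model) implements $\mathcal F_p$ directly from its definition.

```python
import numpy as np

def F(Mstar, M1, M2, p):
    # F_p(M*, M1, M2) = sum_{r=1}^{p+1} tr(M*^{-r} M1 M*^{-(p+2-r)} M2)
    Minv = np.linalg.inv(Mstar)
    return sum(
        np.trace(np.linalg.matrix_power(Minv, r) @ M1
                 @ np.linalg.matrix_power(Minv, p + 2 - r) @ M2)
        for r in range(1, p + 2)
    )

rng = np.random.default_rng(1)
m = 4
A = rng.normal(size=(m, m))
Mstar = A @ A.T + m * np.eye(m)   # a positive definite "anchor" matrix
B = rng.normal(size=(m, m))
M = B @ B.T                       # a non-negative definite test matrix
Minv = np.linalg.inv(Mstar)
```

Cyclic invariance of the trace is what collapses the two terms of the $p=1$ sum into the single term $2\,\mathrm{tr}(\mathbf M_*^{-2}\mathbf M\mathbf M_*^{-1}\mathbf M)$.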
\bigskip Because the mapping $\xi \to \mathbf M(\xi)$ is linear, $\Phi_Q(\mathbf M(\cdot))$ is a quadratic function on $\mathbb R^n_+$; i.e., $\Phi_Q(\mathbf M(\cdot))=\phi_{\mathbf h,\mathbf Q}(\cdot)+c$ for some $\mathbf h \in \mathbb R^n$, $\mathbf Q \in \mathcal{S}^n_+$ and $c \in \mathbb R$, where \begin{equation}\label{eq:phihQ} \phi_{\mathbf h,\mathbf Q}(\xi)=\mathbf h^T\xi - \xi^T\mathbf Q\xi, \:\: \xi \in \mathbb R^n_+. \end{equation} Then, the problem of optimal ED based on the AQuA approach can be expressed as the integer quadratic problem \begin{equation}\label{opt} \left.\begin{array}{rl} \max_{\xi} & \phi_{\mathbf h,\mathbf Q}(\xi), \\ \hbox{subject to} & \xi\in\Xi^E_{\mathbf A,\mathbf b}. \end{array}\right. \end{equation} For a general criterion, there are several possible ways of constructing the appropriate vector $\mathbf h$ and matrix $\mathbf Q$, for instance, through the use of standard numerical differentiation techniques. However, for Kiefer's criteria with $p \in \mathbb N_0$, it is simple to derive analytical forms for $\mathbf h$ and $\mathbf Q$, as we show next. \bigskip Consider an ED $\xi=(\xi_1,\ldots,\xi_n)^T$ with the information matrix $\mathbf M=\mathbf M(\xi)$. 
Clearly, $\mathrm{tr}(\mathbf M_*^{-p-1}\mathbf M)=\sum_i \xi_i \mathrm{tr}(\mathbf M_*^{-p-1}\mathbf H_i)$, and \begin{equation*} \mathcal{F}_p(\mathbf M_*,\mathbf M,\mathbf M)=\sum_{i,j=1}^n\xi_i \xi_j \mathcal{F}_p(\mathbf M_*,\mathbf H_i,\mathbf H_j); \end{equation*} therefore, the maximization of $\Phi_{pQ}^+(\mathbf M(\cdot))$ over $\Xi^E_{\mathbf A,\mathbf b}$ is equivalent to the integer quadratic optimization problem expressed in \eqref{opt}, where $\mathbf h=\mathbf h_p^+$ has the components $(\mathbf h_p^+)_i=\mathrm{tr}(\mathbf M_*^{-p-1}\mathbf H_i)$, $i=1,\ldots,n$, and the matrix $\mathbf Q=\mathbf Q_p^+$ has the elements\footnote{Note that the matrix $\mathbf Q_p^+$ is symmetric, as is the matrix $\mathbf Q_p^-$ defined below, because $\mathrm{tr}(\mathbf M_1\mathbf H_1\mathbf M_2\mathbf H_2)=\mathrm{tr}(\mathbf M_1\mathbf H_2\mathbf M_2\mathbf H_1)$ for the symmetric non-negative definite matrices $\mathbf M_1$, $\mathbf M_2$, $\mathbf H_1$, and $\mathbf H_2$.} \begin{equation*} (\mathbf Q_p^+)_{i,j}=\frac{p+1}{2}\frac{(\mathbf h_p^+)_i(\mathbf h_p^+)_j}{\mathrm{tr}(\mathbf M_*^{-p})}-\frac{1}{2}\mathcal{F}_p(\mathbf M_*,\mathbf H_i,\mathbf H_j), \end{equation*} $i,j=1,\ldots,n$. Similarly, the maximization of $\Phi_{pQ}^-(\mathbf M(\cdot))$ over $\Xi^E_{\mathbf A,\mathbf b}$ is equivalent to the integer quadratic optimization problem expressed in \eqref{opt}, where $\mathbf h=\mathbf h_p^-$ has the components $(\mathbf h_p^-)_i=\mathrm{tr}(\mathbf M_*^{-p-1}\mathbf H_i)$, $i=1,\ldots,n$, and the matrix $\mathbf Q=\mathbf Q_p^-$ has the elements \begin{equation*} (\mathbf Q_p^-)_{i,j}=\frac{p-1}{6}\frac{(\mathbf h_p^-)_i(\mathbf h_p^-)_j}{\mathrm{tr}(\mathbf M_*^{-p})}-\frac{1}{6}\mathcal{F}_p(\mathbf M_*,\mathbf H_i,\mathbf H_j), \end{equation*} $i,j=1,\ldots,n$.
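A direct transcription of these formulas might look as follows (Python/NumPy; the single-factor quadratic regression used as input and the anchor matrix are illustrative assumptions, not an actual optimum):

```python
import numpy as np

def F(Mstar, M1, M2, p):
    # F_p(M*, M1, M2) = sum_{r=1}^{p+1} tr(M*^{-r} M1 M*^{-(p+2-r)} M2)
    Minv = np.linalg.inv(Mstar)
    return sum(np.trace(np.linalg.matrix_power(Minv, r) @ M1
                        @ np.linalg.matrix_power(Minv, p + 2 - r) @ M2)
               for r in range(1, p + 2))

def h_Q_plus(Mstar, H, p):
    # (h_p^+)_i  = tr(M*^{-p-1} H_i)
    # (Q_p^+)_ij = (p+1)/2 * h_i h_j / tr(M*^{-p}) - F_p(M*, H_i, H_j) / 2
    n = len(H)
    Minv = np.linalg.inv(Mstar)
    Mp1 = np.linalg.matrix_power(Minv, p + 1)
    h = np.array([np.trace(Mp1 @ Hi) for Hi in H])
    trMp = np.trace(np.linalg.matrix_power(Minv, p))
    Q = np.array([[(p + 1) / 2 * h[i] * h[j] / trMp - F(Mstar, H[i], H[j], p) / 2
                   for j in range(n)] for i in range(n)])
    return h, Q

# illustrative model: quadratic regression on 5 design points
x = np.linspace(-1.0, 1.0, 5)
H = [np.outer(f, f) for f in (np.array([1.0, xi, xi**2]) for xi in x)]
Mstar = sum(H) / len(H)   # a stand-in anchor matrix (not a true optimum)
h, Q = h_Q_plus(Mstar, H, p=1)
```

The symmetry of $\mathbf Q_p^+$ claimed in the footnote can be confirmed on the computed matrix.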
\section{Efficient computational approach to AQuA}\label{sec:AQuA} \subsection{A low-rank property of the quadratic approximations of Kiefer's criteria}\label{subsec:lowrank} We can use quadratic approximations of criteria in combination with many algorithms for optimal ED (e.g., \citet{Atkinson}, \citet{Dykstra}, \citet{Haines}). To do so, we must be able to compute the values of the quadratic function $\phi_{\mathbf h,\mathbf Q}$ given in \eqref{eq:phihQ} for designs $\xi$, as required by the algorithm. We will show that for the quadratic approximation criteria resulting from the optimal ED problem based on Kiefer's criteria, this computation can be performed rapidly, based on a low-rank property of the associated quadratic forms. As a key by-product, we will obtain a useful quadratic cone representation of the AQuA optimization problem. \bigskip The ability to evaluate multivariate quadratic functions of the form $\phi_{\mathbf h,\mathbf Q}$ efficiently generally depends on various specifics of the problem at hand, the known theoretical properties of $\mathbf h$ and $\mathbf Q$, the selected optimization algorithm, and the available hardware. Here, we will consider problems that are typical of optimal experimental design. In particular, we will assume that $m$ is a small number (usually less than $10$), whereas $n$ is a much larger number, possibly ranging from the order of tens to hundreds of thousands. \bigskip Let the function $\phi_{\mathbf h,\mathbf Q}$ be based on the quadratic approximation of a criterion defined on the set of information matrices. That is, $\phi_{\mathbf h,\mathbf Q}(\cdot)=\Phi_Q(\mathbf M(\cdot))$, up to an additive constant, where $\Phi_Q(\mathbf M)$ is a quadratic function of the elements of $\mathbf M$. For a design $\xi$, the most problematic part of computing $\phi_{\mathbf h,\mathbf Q}(\xi)$ is the evaluation of the quadratic form $\xi^T\mathbf Q\xi$ for the $n \times n$ matrix $\mathbf Q$, because $n$ is often large.
However, as we will show, we can construct a matrix $\mathbf S$ of dimensions $n \times t$, where $t \leq s:=m(m+1)/2 \ll n$, such that $\mathbf Q=\mathbf S \mathbf S^T$. Importantly, we can construct $\mathbf S$ without computing $\mathbf Q$; i.e., we can completely avoid working with potentially enormous matrices. \bigskip To this end, let the function $\Phi_Q: \mathcal{S}^m_+ \to \mathbb R$ be represented in the form \begin{equation*} \Phi_Q(\mathbf M)=a(\tilde{\mathbf h}^T\mathrm{vech}(\mathbf M) - (\mathrm{vech}(\mathbf M))^T\tilde{\mathbf Q}\:\mathrm{vech}(\mathbf M)) + c, \end{equation*} where $\tilde{\mathbf h} \in \mathbb R^s$, $\tilde{\mathbf Q} \in \mathcal{S}^s_+$, and $a>0$, $c$ are real numbers which do not influence the maximum. Let $\mathbf G_m\in\mathbb{R}^{m^2\times s}$ be the duplication matrix that relates the $\mathrm{vech}$ and $\mathrm{vec}$ operators\footnote{The symbols $\mathrm{vech}$ and $\mathrm{vec}$ denote the half-vectorization and the vectorization of a matrix, respectively.}; i.e., $\mathrm{vec}(\mathbf M)=\mathbf G_m \mathrm{vech}(\mathbf M)$. Then, the versions of Kiefer's criteria defined in the previous section can be represented using Theorem 16.2.2
from \citet{Harville} and the formulas \begin{eqnarray*} \mathrm{tr}(\mathbf N\mathbf M)&=&(\mathrm{vec}(\mathbf N))^T\mathbf G_m \mathrm{vech}(\mathbf M), \\ \mathrm{tr}^2(\mathbf N\mathbf M)&=&\mathrm{vech}(\mathbf M)^T \mathbf G_m^T \mathrm{vec}(\mathbf N)\\ &&(\mathrm{vec}(\mathbf N))^T \mathbf G_m \mathrm{vech}(\mathbf M),\\ \mathrm{tr}(\mathbf N_1\mathbf M\mathbf N_2\mathbf M)&=&\mathrm{vech}(\mathbf M)^T \mathbf G^T_m (\mathbf N_2\otimes \mathbf N_1)\\ &&\mathbf G_m \mathrm{vech}(\mathbf M), \end{eqnarray*} which are valid for all $\mathbf N, \mathbf M, \mathbf N_1, \mathbf N_2 \in \mathcal{S}^m$; thus, we obtain \begin{equation*} \Phi_{pQ}^{\pm}(\mathbf M)=a^{\pm}\left((\tilde{\mathbf h}_p^{\pm})^T\mathrm{vech}(\mathbf M) - \mathrm{vech}(\mathbf M)^T\tilde{\mathbf Q}_p^{\pm}\:\mathrm{vech}(\mathbf M)\right) + c^{\pm}, \end{equation*} where \begin{equation*} a^+=\frac{\Phi_p^+(\mathbf M_*)}{\mathrm{tr}(\mathbf M_*^{-p})}, \:\: a^-=\frac{-3\Phi_p^-(\mathbf M_*)}{ \mathrm{tr}(\mathbf M_*^{-p})}, \end{equation*} \begin{equation*} \tilde{\mathbf h}^+_p=\tilde{\mathbf h}^-_p=\mathbf G_m^T\mathrm{vec}(\mathbf M_*^{-p-1}), \end{equation*} \begin{eqnarray*} \tilde{\mathbf Q}^+_p&=&\mathbf G_m^T\Big[-\frac{1+p}{2}\frac{\mathrm{vec}(\mathbf M_*^{-p-1})(\mathrm{vec}(\mathbf M_*^{-p-1}))^T}{\mathrm{tr}(\mathbf M_*^{-p})}+\Big.\\ &&\Big.\frac{1}{2}\sum_{r=1}^{p+1}\mathbf M_*^{-p-2+r}\otimes \mathbf M_*^{-r}\Big]\mathbf G_m,\\ \tilde{\mathbf Q}^-_p&=&\mathbf G_m^T\Big[\frac{1-p}{6}\frac{\mathrm{vec}(\mathbf M_*^{-p-1})(\mathrm{vec}(\mathbf M_*^{-p-1}))^T}{\mathrm{tr}(\mathbf M_*^{-p})}+\Big.\\ &&\Big.\frac{1}{6}\sum_{r=1}^{p+1}\mathbf M_*^{-p-2+r}\otimes \mathbf M_*^{-r}\Big]\mathbf G_m, \end{eqnarray*} and $c^+=0$, $c^-= 3\Phi_p^-(\mathbf M_*)$. 
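The duplication matrix and the three trace identities above are straightforward to verify computationally. The sketch below (Python/NumPy, with random symmetric matrices as illustrative inputs) constructs $\mathbf G_m$ for the column-wise lower-triangle convention of $\mathrm{vech}$.

```python
import numpy as np

def duplication_matrix(m):
    # G_m satisfies vec(M) = G_m vech(M) for symmetric M,
    # with vech stacking the lower triangle column by column
    s = m * (m + 1) // 2
    G = np.zeros((m * m, s))
    col = 0
    for j in range(m):
        for i in range(j, m):
            G[i + j * m, col] = 1.0   # entry (i, j) of vec(M)
            G[j + i * m, col] = 1.0   # mirrored entry (j, i)
            col += 1
    return G

def vech(M):
    return np.concatenate([M[j:, j] for j in range(M.shape[0])])

def vec(M):
    return M.flatten(order='F')       # column-major vectorization

rng = np.random.default_rng(0)
m = 3
M = rng.normal(size=(m, m)); M = M + M.T
N1 = rng.normal(size=(m, m)); N1 = N1 + N1.T
N2 = rng.normal(size=(m, m)); N2 = N2 + N2.T
G = duplication_matrix(m)
```

The last identity combines $\mathrm{vec}(\mathbf A\mathbf X\mathbf B)=(\mathbf B^T\otimes\mathbf A)\mathrm{vec}(\mathbf X)$ with the symmetry of the matrices involved.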
\bigskip Next, we can construct a decomposition $\tilde{\mathbf Q}=\tilde{\mathbf C}\tilde{\mathbf C}^T$ such that the $s \times t$ matrix $\tilde{\mathbf C}$ is of rank $t$,\footnote{Note that $\tilde{\mathbf Q}$ can be a singular non-negative definite matrix; therefore, $t$ can be even smaller than $s$.} using, for instance, the Cholesky algorithm or the singular value decomposition. Taking $a=1$ and $c=0$, which do not influence the maximization, we have \begin{eqnarray} \phi_{\mathbf h,\mathbf Q}(\xi)&=&\Phi_Q\left(\sum_{i=1}^n\xi_i\mathbf H_i\right)=\sum_{i=1}^n\xi_i \tilde{\mathbf h}^T\mathrm{vech}(\mathbf H_i) \nonumber \\ &&- \sum_{i=1}^n\sum_{j=1}^n\xi_i\xi_j(\mathrm{vech}(\mathbf H_i))^T\tilde{\mathbf Q}\:\mathrm{vech}(\mathbf H_j)\nonumber\\ &=&\xi^T \mathbf H\tilde{\mathbf h} - \sum_{i=1}^n\sum_{j=1}^n\xi_i\xi_j(\mathbf H\tilde{\mathbf C}\tilde{\mathbf C}^T\mathbf H^T)_{i,j} \nonumber \\ &=&\mathbf h^T\xi - \|\mathbf S^T \xi\|^2,\label{eq:lrr} \end{eqnarray} where $\mathbf H=(\mathrm{vech}(\mathbf H_1),\ldots,\mathrm{vech}(\mathbf H_n))^T$ is an $n \times s$ matrix, $\mathbf h=\mathbf H\tilde{\mathbf h}$, and $\mathbf S=\mathbf H\tilde{\mathbf C}$. Equation \eqref{eq:lrr} allows us to compute $\phi_{\mathbf h,\mathbf Q}(\xi)$ without evaluating and storing $\mathbf Q$. \bigskip An advantage of the previous expression is that with the use of $\mathbf S$, $\phi_{\mathbf h,\mathbf Q}(\xi)$ can be rapidly evaluated; for instance, the exchange step in an exchange algorithm (\citet{Atkinson}, Sec.
12.3) can be performed based on the equation \begin{eqnarray*} &&\phi_{\mathbf h,\mathbf Q}(\xi+\mathbf e_l-\mathbf e_k)=\\ &&\phi_{\mathbf h,\mathbf Q}(\xi)+\mathbf h_l-\mathbf h_k-2(\mathbf S^T\xi)^T[\mathbf S_{l\cdot}- \mathbf S_{k\cdot}]-\\ &&\|\mathbf S_{l\cdot}\|^2+2(\mathbf S_{l\cdot})^T\mathbf S_{k\cdot}-\|\mathbf S_{k\cdot}\|^2, \end{eqnarray*} where $\mathbf e_l$, $\mathbf e_k$ are the $l$-th and $k$-th standard unit vectors, and $\mathbf S_{l\cdot}$, $\mathbf S_{k\cdot}$ are the $l$-th and the $k$-th rows of $\mathbf S$. Note that $\mathbf S^T\xi$ is updated as follows: $\mathbf S^T(\xi+\mathbf e_l-\mathbf e_k)=\mathbf S^T\xi+\mathbf S_{l\cdot}-\mathbf S_{k\cdot}$. If the $n$ values of $\|\mathbf S_{k\cdot}\|^2$, $k=1,\ldots,n$, are precomputed and stored in memory, then each update involves only $2t+2$ multiplications and $4t+4$ subtractions or additions. An example of how these formulas can be utilized with a heuristic exchange algorithm can be found in a preprint of the previous version of this paper; see \citet{aqua}. Here we will focus on a more versatile application of the low-rank property, as detailed in the next section. \subsection{Mixed integer conic quadratic programming formulation of AQuA}\label{Sec:MICQP} Having established the low-rank property $\mathbf Q=\mathbf S\mathbf S^T$, where $\mathbf S$ is an $n \times t$ matrix with $t \ll n$, we can use a well-known trick to reformulate the quadratic programming problem (e.g., \cite{moseka}, Chapter 10). Introducing an auxiliary continuous variable $r$, the optimization problem \eqref{opt} of the AQuA approach can be written as \begin{equation}\label{prob:aux} \left.\begin{array}{rl} \max_{\xi, r} & \mathbf h^T\xi - r\\ \hbox{s.t.} & \mathbf A\xi\leq\mathbf b,\ \xi\geq \mathbf 0_n,\ \xi \in \mathbb{Z}^n\\ & r \geq ||\mathbf S^T\xi||^2. \end{array}\right.
\end{equation} It is simple to verify that the last constraint in \eqref{prob:aux} can be expressed as $\mathbf V(1/2,r,\xi^T\mathbf S)^T \in Q^{2+t}$, where $Q^{2+t}$ is the second-order cone \begin{equation*} Q^{2+t}=\{(a,b,\mathbf v^T)^T: a \geq \|(b,\mathbf v^T)^T\| \} \end{equation*} and $\mathbf V$ is the orthogonal matrix \begin{equation*} \mathbf V= \left( {\begin{array}{ccc} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & \mathbf 0_t^T \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & \mathbf 0_t^T \\ \mathbf 0_t & \mathbf 0_t & \mathbf I_t \end{array} } \right). \end{equation*} We have thus obtained a mixed integer conic quadratic problem (MICQP), which can be formulated as follows: \begin{equation}\label{aqua.cone} \left.\begin{array}{rl} \max_{\xi,\mathbf v,r,a,b} & \mathbf h^T\xi - r\\ \hbox{s.t.} & \mathbf A\xi\leq\mathbf b,\ \xi\geq \mathbf 0_n,\ \xi \in \mathbb{Z}^n\\ & 2\sqrt{2}a-2r =1, \\ & 2\sqrt{2}b+2r =1, \\ & \mathbf S^T\xi-\mathbf v=\mathbf 0_t, \\ & (a,b,\mathbf v^T)^T \in Q^{2+t}. \end{array}\right. \end{equation} Note that the formulation \eqref{aqua.cone} has a linear objective function and does not require the potentially huge $n \times n$ matrix $\mathbf Q$ at all; it requires only the $n \times t$ matrix $\mathbf S$, which is typically much smaller in optimal design problems. Moreover, the number of variables of \eqref{aqua.cone}, $n+t+3$, is only marginally larger than the number $n$ of variables in the direct integer quadratic formulation \eqref{opt}. Indeed, for $t \ll n$, the formulation \eqref{aqua.cone} can be significantly more computationally efficient than \eqref{opt}, as we demonstrate in Section \ref{sec:EX}. \section{Miscellaneous comments}\label{Sec:misc} \subsection{Continuous design spaces}\label{Subs:cont} In some applications, it is possible to use a continuous design space $\tilde{\mathfrak X}$, instead of a finite one.
This is typical of factor experiments under the theoretical assumption that the levels of some factor can be any real numbers in a given interval. In such cases, AQuA cannot be directly applied\footnote{Of course, the same is true for a multitude of other popular design algorithms which work only on finite spaces.}. However, a straightforward strategy is to first apply AQuA to a finite subset of $\tilde{\mathfrak X}$, and then use its result as an initial design for any constrained continuous optimization method which adjusts the positions of the support points within $\tilde{\mathfrak X}$. Note that the search for optimal positions of design points in a continuous space is generally a highly non-convex problem, and a good initial feasible solution provided by a finite-space method such as AQuA can make a crucial difference. See Subsection 5.1 in \cite{aqua}, which demonstrates this approach for the full quadratic model ($m=15$) with $4$ continuous factors, i.e., $\tilde{\mathfrak X}=[-1,1]^4$. \subsection{Quadratic approximations of different versions of the same criterion}\label{Sec: gamma} We can regard criteria $\Phi^+_p$ and $\Phi^-_p$ as part of a larger class of concave criteria: for $\gamma \in [-1,1]$ and for $p \in \mathbb N_0$, we can define \[ \Phi^{(\gamma)}_p:=(1+\gamma)\Phi^+_p/2+(1-\gamma)\Phi^-_p/2, \] where we set $0 \times \infty=0$. Thus, $\Phi^+_p=\Phi^{(+1)}_p$ and $\Phi^-_p=\Phi^{(-1)}_p$ for all $p \in \mathbb N_0$. Clearly, $\Phi^{(\gamma)}_p$ is a concave version of the same criterion for all $\gamma \in [-1,1]$, and its quadratic approximation is \[ \Phi^{(\gamma)}_{pQ}=(1+\gamma)\Phi^+_{pQ}/2+(1-\gamma)\Phi^-_{pQ}/2.
\] Note that setting $p=0$ and $\gamma=\gamma_d=\frac{1-d^2}{1+d^2}$, where $d=(\det \mathbf M_*)^{1/m}$, leads to an optimization problem of the form \eqref{opt} with \begin{equation*} \tilde{\mathbf h}_0=\mathbf G_m^T\mathrm{vec}(\mathbf M_*^{-1}) \end{equation*} and \begin{equation*} \tilde{\mathbf Q}_0=\frac{1}{4}\mathbf G_m^T\left[\mathbf M_*^{-1}\otimes \mathbf M_*^{-1} \right]\mathbf G_m. \end{equation*} It is straightforward to verify that this choice of $\tilde{\mathbf h}_0$ and $\tilde{\mathbf Q}_0$ corresponds to the same quadratic approximation as the one that can be obtained from the $D$-optimality criterion in the form $\log(\det(\mathbf M))$, used in \citet{HF}. Note that we always have $\gamma_d \in (-1,1)$. That is, in the sense of the AQuA approach, the $\log\det$ criterion is always ``between'' the positive and the negative versions of $D$-optimality. \bigskip Different versions of the same criterion lead to different quadratic approximations. Nonetheless, our numerical observations suggest that the differences are minor (see Subsection \ref{SBW}). \subsection{Generalization of $I$-optimality and its conversion into $A$-optimality} \label{Sec:IV} Recently, there has been much interest in $I$-optimality\footnote{This criterion is sometimes called $IV$- or $V$-optimality (see Section 10.6 in \citet{Atkinson}).}, because $I$-optimality may be a more appropriate criterion than $D$-optimality if we are interested in the estimation of the mean value of the response (see, e.g., \citet{Mont}, \citet{LN}, and \citet{ABM}). The results for $A$-optimality can be easily adapted to compute $I$-optimal designs. Standard $I$-optimal designs are applied to models with one-dimensional observations ($r=1$), and they minimize the integral of the variances of the BLUEs of the response surface over a region $\mathfrak{Y}$ with respect to some measure.
We will generalize the notion of $I$-optimal design to potentially multivariate observations and show that $I$-optimal designs are $A$-optimal in a transformed model, which makes it possible to use the theory and algorithms developed for $A$-optimality. \bigskip Let $\mathfrak{Y} \subseteq \mathbb{R}^d$ be a measurable set representing a region of prediction interest, and let $\eta$ be a measure on $\mathfrak{Y}$. Suppose that for each $\mathbf x \in \mathfrak{Y}$, there is a matrix $\mathbf V(\mathbf x) \in \mathcal{S}^m_+$ such that $\mathrm{tr}(\mathbf M^{-1}\mathbf V(\mathbf x))$ is a measure of the variance of the response surface estimator at $\mathbf x$, provided that the information matrix for the parameters is $\mathbf M \in \mathcal{S}^m_{++}$. For a positive definite $\mathbf M$, we can define a (generalized) $I$-optimality criterion \begin{equation*} \Phi_{I}(\mathbf M)=-\int_{\mathbf x \in \mathfrak{Y}}\mathrm{tr}(\mathbf M^{-1}\mathbf V(\mathbf x))\mathrm{d}\eta(\mathbf x) =-\mathrm{tr}\left(\mathbf M^{-1}\mathbf L\right), \end{equation*} where $\mathbf L=\int_{\mathbf x \in \mathfrak{Y}}\mathbf V(\mathbf x)\mathrm{d}\eta(\mathbf x)$, and for a singular $\mathbf M$, we can set $\Phi_{I}(\mathbf M)=-\infty$. Suppose that $\mathbf L=\mathbf{S}\mathbf{S}^T$, where $\mathbf{S}$ is non-singular. Then, clearly, a design $\xi$ is $I$-optimal if and only if it is $A$-optimal in the model given by the elementary information matrices \begin{equation*} \mathbf{S}^{-1}\mathbf H_1(\mathbf{S}^T)^{-1},\ldots,\mathbf{S}^{-1}\mathbf H_n(\mathbf{S}^T)^{-1}. \end{equation*} The standard situation corresponds to $r=1$, $\mathfrak{Y}=\mathfrak X$, $\mathbf V(\mathbf x)=\mathbf f(\mathbf x)\mathbf f^T(\mathbf x)$, and $\eta$ being a uniform measure on $\mathfrak X$. \bigskip We demonstrate the computation of $I$-optimal designs using AQuA in Subsections \ref{ss:scheffe} and \ref{ss:wine}.
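The reduction of $I$-optimality to $A$-optimality can be sketched as follows (Python/NumPy; the one-dimensional quadratic model on a grid and the uniform measure are illustrative assumptions corresponding to the standard situation):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
x = np.linspace(0.0, 1.0, n)
Freg = np.column_stack([np.ones(n), x, x**2])   # rows f(x_i)^T; r = 1
H = [np.outer(f, f) for f in Freg]              # elementary information matrices

# L = int V(x) d eta(x) with V(x) = f(x) f(x)^T and eta uniform on the grid
L = sum(H) / n
Sfac = np.linalg.cholesky(L)                    # L = S S^T with S non-singular

def Phi_I(M):
    # generalized I-criterion: -tr(M^{-1} L)
    return -np.trace(np.linalg.solve(M, L))

# A-criterion in the transformed model S^{-1} H_i S^{-T}
Sinv = np.linalg.inv(Sfac)
H_tilde = [Sinv @ Hi @ Sinv.T for Hi in H]

def Phi_A_transformed(xi):
    Mt = sum(w * Ht for w, Ht in zip(xi, H_tilde))
    return -np.trace(np.linalg.inv(Mt))

xi = rng.random(n)
xi /= xi.sum()                                   # an arbitrary approximate design
M = sum(w * Hi for w, Hi in zip(xi, H))
```

Since $\tilde{\mathbf M}=\mathbf S^{-1}\mathbf M\mathbf S^{-T}$ gives $\mathrm{tr}(\tilde{\mathbf M}^{-1})=\mathrm{tr}(\mathbf M^{-1}\mathbf L)$, the two criteria agree on every design, not just on the optimum.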
\subsection{Iterative application of AQuA}\label{iter} The central idea of this paper is to apply integer quadratic programming to a problem constructed on the basis of the optimal approximate information matrix $\mathbf M_*$, which is often available, either theoretically or via an efficient algorithm of convex optimization. Note, however, that the approximation is quite precise even if the criterion is based on a matrix $\tilde{\mathbf M}_*$ (henceforth called the ``anchor matrix'') which is not perfectly optimal. Thus, in more difficult situations, in particular with a large design space and complex design restrictions, when $\mathbf M_*$ may be difficult to compute, we suggest applying the following heuristic iterative scheme, similar to the successive application of Newton's method in sequential quadratic optimization: \begin{enumerate} \item Compute a rough estimate $\tilde{\mathbf M}^{(0)}_*$ of $\mathbf M_*$ on a random subsample of $\mathfrak X$, or by neglecting some design constraints. Set $j$ to $0$. \item Use AQuA with the anchor matrix $\tilde{\mathbf M}^{(j)}_*$ instead of $\mathbf M_*$.\footnote{If this is not the last iteration of the algorithm, we can use AQuA without the integer constraints on the design. Indeed, this iterative approach can also be used for computing optimal \emph{approximate} designs, but we do not explore this possibility here.} Set $\tilde{\mathbf M}^{(j+1)}_*$ to be the information matrix of the resulting design. \item If a stopping rule is not satisfied, increase $j$ by one, and continue with the previous step. \end{enumerate} The previous scheme uses a sequence of successive quadratic optimization problems, which, in some cases, can be solved via the conic formulation of AQuA, despite the fact that we cannot solve the original optimal approximate problem because of its size or complexity. In the last subsection of the next section, we demonstrate that this approach can indeed lead to efficient EDs for large design spaces.
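A toy version of this iterative scheme might look as follows (Python/NumPy; the random regressors, the brute-force inner "solver", the log-det Taylor surrogate standing in for the quadratic approximation criterion, and the small ridge that keeps the anchor non-singular are all illustrative simplifications, not the paper's actual implementation):

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(5)
m, n, N = 3, 8, 6
Freg = rng.normal(size=(n, m))               # hypothetical regressors
H = [np.outer(f, f) for f in Freg]

def info(xi):
    return sum(w * Hi for w, Hi in zip(xi, H))

def quad(M, anchor):
    # second-order Taylor surrogate of log det(M) around the anchor
    # (additive constants dropped; they do not affect the argmax)
    X = np.linalg.solve(anchor, M)
    return 2.0 * np.trace(X) - 0.5 * np.trace(X @ X)

# the design space is tiny, so the inner solver is brute-force enumeration
designs = [np.bincount(c, minlength=n).astype(float)
           for c in combinations_with_replacement(range(n), N)]

anchor = info(np.full(n, N / n))             # step 1: a rough anchor matrix
for _ in range(5):                           # steps 2-3: AQuA iterations
    prev_anchor = anchor
    best = max(designs, key=lambda xi: quad(info(xi), prev_anchor))
    anchor = info(best) + 1e-6 * np.eye(m)   # re-anchor at the new design
```

In a realistic setting, the brute-force maximization would be replaced by the IQP or MICQP solver applied to the quadratic approximation anchored at $\tilde{\mathbf M}^{(j)}_*$.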
\subsection{Current limitations of AQuA} AQuA can be a valuable tool in the toolbox of computational methods of experimental design, as numerically demonstrated in Section \ref{sec:EX}. However, it currently has no theoretical underpinnings in the sense of lower bounds on the efficiency of the resulting designs depending on general properties of the problem at hand.\footnote{Note that after we already have a candidate exact design for a specific problem, we can compute a lower bound on its efficiency relative to the optimal approximate design. This often leads to a guarantee which is fully satisfactory for practical purposes. Moreover, many optimization heuristics which are eminently useful across sciences also lack theoretical bounds on the efficiency of the results that they generate.} Note that we have observed that AQuA sometimes produces significantly suboptimal designs for small design sizes $N \geq m$, in particular for $N=m$\footnote{See \citet{HF} for an example of a strongly suboptimal result of AQuA for $N=m$.}. Moreover, we do not have a theoretical proof of convergence of the sequential approach outlined in Subsection \ref{iter}. \bigskip With easily available hardware and software, the IQP formulation of AQuA can solve problems with medium-sized $n$ (up to thousands) and any $m \leq n$. On the other hand, the MICQP version of AQuA can solve ``tall'' problems with a large $n$ (up to hundreds of thousands) and a relatively small $m \ll n$. However, we do not know how to use AQuA to handle problems with both $n$ and $m$ large. \section{Numerical studies}\label{sec:EX} The principle of AQuA can be applied to a wide spectrum of optimal design problems in various creative ways.
Here, we choose several very different examples to inform the reader about general properties of AQuA, for instance: \begin{enumerate} \item the degree of reliability in achieving the optimal ED and the robustness with respect to the anchor matrix; \item the possibility to efficiently construct solutions to optimal ED problems with complex constraints on the structure of the design; \item the possibility to apply the conic version of AQuA to specific problems with a large design space, in particular to the problem of an information-based sub-selection of ``tall'' datasets. \end{enumerate} We will demonstrate the application and explore the performance of AQuA in the \texttt{R} computing environment (\citet{R}), employing the packages \texttt{OptimalDesign} (\citet{RLIB}), \texttt{matrixcalc} (\citet{matrixcalc}), and the mathematical programming solvers of \texttt{gurobi} (\citet{gurobi}). Note that there are also several other professional solvers that can handle IQP and MICQP problems, for instance \texttt{mosek} (\citet{mosekb}). The examples were computed on a 64-bit Windows 10 system with an Intel Core i5-5500U processor at 2.40 GHz and 8 GB of RAM. The codes and additional information can be found at \noindent \url{http://www.iam.fmph.uniba.sk/ospm/Harman/design/}. For the application of the provided R codes, the user only needs to create the model (the matrix of all possible regressors $\mathbf f(x)$), the constraints (in the form of $\mathbf A$ and $\mathbf b$), and choose the criterion ($D$, $A$, or $I$).
\subsection{Size-constrained $D$- and $A$-optimal exact designs for the model of spring balance weighing of $6$ items}\label{SBW} Consider the linear regression of the first degree without an intercept term on the vertices of the $m$-dimensional unit cube given by the formula \begin{equation}\label{sbw} E(Y(\mathbf x))=x_{1}\beta _1+\ldots+x_{m}\beta_m, \end{equation} where the components $x_{j}$ of $\mathbf x$ are chosen to be either $0$ or $1$. In \eqref{sbw}, the measurement $Y(\mathbf x)$ can be interpreted as the result of the weighing of items with unknown weights $\beta_1,\ldots,\beta_m$ on a spring balance, where $x_{j}$ denotes the presence or the absence of the item $j$. Here, the design space is the set of $n=2^m$ vertices of the unit cube in $\mathbb R^m$. For this example, we selected $m=6$ items, that is, $n=64$. \bigskip The AD theory for model \eqref{sbw} with the standard constraint on the size of the experiment has been worked out in great detail: see, e.g., \citet{Cheng}, who used the equivalence theorem to find $\Phi_p$-optimal ADs for all values of $p$. For the application of AQuA, we can use the well-known ``neighbor vertex'' $D$-optimal and $A$-optimal ADs as described in \citet{Puk}, Sec. 14.10. For non-normalized ADs of size $N$, and for $s\in[0,m]$, the neighbor vertex design is \begin{equation*} \xi_s=(1-(s-\lfloor s \rfloor))\zeta_{\lfloor s \rfloor} + (s-\lfloor s \rfloor)\zeta_{\lfloor s \rfloor+1}, \end{equation*} where $\zeta_j$ is a $j$-vertex design, i.e., $\zeta_j$ assigns the weight $N / \binom{m}{j}$ to each of the vertices of $\mathfrak X$ having $j$ components equal to $1$ and $m-j$ components equal to $0$, and $\lfloor s \rfloor$ denotes the largest integer not exceeding $s$. For our case of $m=6$, the design $\xi_s$ with $s=\frac{24}{7}$ is $D$-optimal; its support size is $35$, and its information matrix is $\mathbf M^*_D=\frac{2N}{7} \mathbf I_6+\frac{2N}{7}\mathbf J_6$.
Similarly, the design $\xi_s$ with $s=3$ is $A$-optimal; its support size is $20$, and its information matrix is $\mathbf M^*_A=\frac{3N}{10}\mathbf I_6+\frac{2N}{10}\mathbf J_6$. \bigskip In this model, the optimal ADs are not unique; the designs from Tables \ref{T:SBWD7} and \ref{T:SBWA7} are evidently not neighbor vertex designs, yet they are $D$- and $A$-optimal, respectively, which can be directly verified. Notice that the $D$-optimal approximate design from Table \ref{T:SBWD7} is evidently a $D$-optimal exact design of size $N=7k$, $k \in \mathbb{N}$, and the $A$-optimal approximate design from Table \ref{T:SBWA7} is evidently an $A$-optimal exact design of size $N=10k$, $k \in \mathbb{N}$.\footnote{We stress that it is not completely trivial to find these balanced small-support $D$- and $A$-optimal ADs in the class of all optimal ADs; in fact, we have found them using the integer programming capabilities of AQuA. In this respect, AQuA can also be very useful for the problem with a single size constraint.} \bigskip We remark that, according to our experience, for a problem of optimal ED constrained only by the experimental size, well-implemented heuristics such as the KL-exchange algorithm (\cite{Atkinson}, Section 12.6) will often outperform methods based on IP solvers, including AQuA, in terms of time required to achieve a practically optimal design. However, the existing heuristics and theoretical results for the selected size-constrained problem provide benchmarks that can be used to assess the properties of the AQuA method, as we show next. \bigskip For the numerical study of ED, we will use the experimental sizes of $N=6,7,\ldots,30$. For $m=6$, the $D$-optimal EDs are theoretically known (see \citet{NWZ}). For $A$-optimality and $m=6$ items, we are not aware of any publication which provides optimal EDs; therefore, we have computed the $A$-optimal EDs using the KL heuristic.
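The neighbor vertex construction and the two optimal information matrices quoted above can be verified directly; a short sketch (Python/NumPy) is:

```python
import numpy as np
from math import comb, floor
from itertools import combinations

m, N = 6, 70.0   # 6 items; the size N only scales the matrices

def zeta(j):
    # j-vertex design: weight N / C(m, j) on every vertex with exactly j ones
    M = np.zeros((m, m))
    for idx in combinations(range(m), j):
        x = np.zeros(m)
        x[list(idx)] = 1.0
        M += (N / comb(m, j)) * np.outer(x, x)
    return M

def M_neighbor(s):
    # information matrix of the neighbor vertex design xi_s
    lo = floor(s)
    frac = s - lo
    if frac == 0:
        return zeta(lo)
    return (1 - frac) * zeta(lo) + frac * zeta(lo + 1)

MD = M_neighbor(24 / 7)   # D-optimal approximate design
MA = M_neighbor(3)        # A-optimal approximate design
```

Since a $j$-vertex design contributes $Nj/m$ to each diagonal entry and $Nj(j-1)/(m(m-1))$ to each off-diagonal entry, the stated $\mathbf I_6$/$\mathbf J_6$ forms follow by mixing $\zeta_3$ and $\zeta_4$.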
We tested the AQuA approach realized by the integer quadratic solver of gurobi against the exact optimal values. To anchor the quadratic approximations, we used either the theoretically known optimal approximate information matrix $\mathbf M_*$, or a perturbed information matrix $\tilde{\mathbf M}_*$ that corresponds to a random design with efficiency $0.95$. The sub-optimal anchor matrix allows us to assess the robustness of the AQuA approach for problems where a precise optimal AD is unavailable. \bigskip The results, visualized in Figures \ref{F:sbwD} and \ref{F:sbwA}, can be summarized as follows: \begin{itemize} \item If $\mathbf M_*$ is precise (see the top panels of Figs. \ref{F:sbwD}, \ref{F:sbwA}), AQuA usually provides not only good, but perfectly optimal EDs. Less efficient results tend to occur for smaller sizes of $N$, in particular for $N=m$. \item The time to compute the solution generally increases with $N$ (see the right panels of Figs. \ref{F:sbwD}, \ref{F:sbwA}). However, if there is an optimal AD that coincides with an optimal ED of a given size, the computation tends to be rapid, in particular if $\mathbf M_*$ is precisely computed. \item AQuA is generally robust with respect to the choice of the anchor matrix (see the bottom panels of Figs. \ref{F:sbwD}, \ref{F:sbwA}). Even using a significantly sub-optimal anchor matrix $\tilde{\mathbf M}_*$, the resulting EDs are either perfectly optimal or reasonably efficient, without a significant increase of the computation time (except for a few specific values of $N$, as discussed in the previous comment). \item There are some numerical differences between the two approximations of the $D$- and $A$-criteria, but they do not tend to be pronounced.
\end{itemize} Note that the reported computation time corresponds to the moment at which the solver determines that its current design is good enough with respect to the quadratic criterion\footnote{We did not alter the default stopping rules and other options of the gurobi solver.}; the actual time at which the solver first obtains the resulting design may be shorter. \bigskip It is also worth noting that the standard ER procedure cannot be applied to the neighbor vertex optimal ADs, for $N < 35$ in the case of $D$-optimality, and for $N<20$ in the case of $A$-optimality. The reason is that the neighbor vertex ADs have too many support points for ER to be applicable. Even in the remaining cases in which ER can be applied, for instance if we use some auxiliary tools to obtain optimal ADs with a smaller support (such as those in Tables \ref{T:SBWD7} and \ref{T:SBWA7}), our computational experience suggests that the resulting EDs tend to be worse than those found by AQuA. \begin{table}[!h] \begin{center} \begin{tabular}{cccccc|c} $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $\xi^*_D(\mathbf x)$\\ \hline 1 & 1 & 0 & 1 & 0 & 0 & $N/7$ \\ 0 & 0 & 1 & 1 & 1 & 0 & $N/7$ \\ 0 & 1 & 1 & 0 & 0 & 1 & $N/7$ \\ 1 & 0 & 0 & 0 & 1 & 1 & $N/7$ \\ 1 & 1 & 1 & 0 & 1 & 0 & $N/7$ \\ 1 & 0 & 1 & 1 & 0 & 1 & $N/7$ \\ 0 & 1 & 0 & 1 & 1 & 1 & $N/7$ \end{tabular} \caption{A $D$-optimal AD of size $N$ for the model from Subsection \ref{SBW}}\label{T:SBWD7} \end{center} \end{table} \begin{table}[!h] \begin{center} \begin{tabular}{cccccc|c} $x_1$ & $x_2$ & $x_3$ & $x_4$ & $x_5$ & $x_6$ & $\xi^*_A(\mathbf x)$\\ \hline 1 & 1 & 0 & 1 & 0 & 0 & $N/10$\\ 1 & 0 & 1 & 1 & 0 & 0 & $N/10$\\ 1 & 0 & 1 & 0 & 1 & 0 & $N/10$\\ 0 & 1 & 1 & 0 & 1 & 0 & $N/10$\\ 0 & 1 & 0 & 1 & 1 & 0 & $N/10$\\ 1 & 1 & 0 & 0 & 0 & 1 & $N/10$\\ 0 & 1 & 1 & 0 & 0 & 1 & $N/10$\\ 0 & 0 & 1 & 1 & 0 & 1 & $N/10$\\ 1 & 0 & 0 & 0 & 1 & 1 & $N/10$\\ 0 & 0 & 0 & 1 & 1 & 1 & $N/10$ \end{tabular} \caption{An $A$-optimal AD of size $N$ for the
model from Subsection \ref{SBW}}\label{T:SBWA7} \end{center} \end{table} \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{figure_SBW_D.pdf} \caption{The efficiency and the computation times (in decadic logarithmic scale) of $D$-efficient designs for the model \eqref{sbw} with $m=6$ items and various numbers $N$ of measurements, as obtained via the direct IQP formulation of AQuA. The positive version of the quadratic approximation is denoted by $\bigtriangleup$ and the negative version of the quadratic approximation is denoted by $\bigtriangledown$. See the main text for details and discussion.}\label{F:sbwD} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{figure_SBW_A.pdf} \caption{The efficiency and the computation times (in decadic logarithmic scale) of $A$-efficient designs for the model \eqref{sbw} with $m=6$ items and various numbers $N$ of measurements, as obtained via the direct IQP formulation of AQuA. The positive version of the quadratic approximation is denoted by $\bigtriangleup$ and the negative version of the quadratic approximation is denoted by $\bigtriangledown$. See the main text for details and discussion.}\label{F:sbwA} \end{center} \end{figure} \subsection{Marginally constrained symmetric $D$- and $I$-optimal exact designs for the $3$-component Scheff\'{e} mixture model}\label{ss:scheffe} The most important applications of AQuA can be expected in those situations for which there are no specialized heuristics, such as for problems with complex constraints on the design weights\footnote{We would like to stress that here we do not focus on the constraints on the design region, which are trivial to incorporate (at least in the case of finite design spaces); we work with constraints on the design vector itself in the polyhedral set of designs in $\mathbb R^n$.}. For models with general linear constraints, few options are available if one wishes to find informative EDs. 
Namely, \citet{SagnolHarman} have shown that the $D$- and $A$-optimal EDs under general linear constraints on the design weights can be obtained by solving a specific mixed integer second-order cone programming problem (MISOCP). Although this approach is guaranteed, given enough time, to find a perfectly optimal ED, it is practically feasible only for problems that are small to medium in size (with currently common hardware, up to a thousand design points even with $m<10$). In this subsection, we will demonstrate that our approach can be superior to both the method of \citet{SagnolHarman} and the direct application of a quadratic approximation as suggested in \citet{HF}. \bigskip Consider a mixture of three components with the ratio of each varying between 0\% and 100\% in increments of 2.5\%. The response can be modeled by a quadratic Scheff\'{e} mixture model given by \begin{equation}\label{scheffe} E(Y(\mathbf x))=\sum_{j=1}^3 \beta_j x_j+\sum_{u<v} \beta_{(uv)}x_u x_v, \end{equation} where $\mathbf x=(x_1,x_2,x_3)$, $x_j\in\{0,0.025,0.05,\ldots,1\}$, $j=1,2,3$. Hence, the model contains $m=6$ unknown parameters, and the dimensionality of the set of designs is $n=861$ (for more details and applications of mixture designs see, e.g., \citet{Cornell} and \citet{GJS}). \bigskip Suppose that, in addition to the size constraint, we are required to compute a design that fulfils a set of marginal constraints which require that each level of each factor can be used at most once, i.e., for all permissible designs $\xi$ and all $\tilde{\mathbf x}=(\tilde{x}_1,\tilde{x}_2,\tilde{x}_3) \in \mathfrak X$ we have $\sum_{x_2,x_3} \xi(\tilde{x}_1,x_2,x_3) \leq 1$, $\sum_{x_1,x_3} \xi(x_1,\tilde{x}_2,x_3) \leq 1$ and $\sum_{x_1,x_2} \xi(x_1,x_2,\tilde{x}_3) \leq 1$. These ``non-collapsibility'' constraints can be justified similarly to the Latin hypercube designs \citep{LHD} and ``bridge'' designs \citep{bridge} on cubes.
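As a quick sanity check (ours, not part of the original computations), the reported size $n=861$ of the discrete design space can be recovered by enumerating the $2.5\%$ grid on the simplex:

```python
from itertools import product

# enumerate all mixtures (x1, x2, x3) on the 2.5% grid with x1 + x2 + x3 = 1;
# each coordinate is k * 0.025 for an integer k between 0 and 40
step = 0.025
grid = [(i * step, j * step, (40 - i - j) * step)
        for i, j in product(range(41), repeat=2) if i + j <= 40]

n = len(grid)  # dimensionality of the set of designs
print(n)       # -> 861
```

The count is $\sum_{i=0}^{40}(41-i)=861$, matching the value quoted in the text.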
In particular, they lead to designs without replications of design points, which is important for computer experiments. Additionally, we have imposed constraints of the form $\xi(x_1,x_2,x_3)=\xi(x_2,x_3,x_1)=\xi(x_3,x_1,x_2)$ that force the design to be symmetric. Therefore, we aim to find an optimal exact design which combines properties of non-collapsibility of individual factor levels, symmetry, and efficiency of parameter estimation. \bigskip In this setting, we computed the $D$- and $I$-optimal exact designs with the MISOCP approach of \citet{SagnolHarman} and with AQuA, realized by both the standard IQP solver and by the MICQP as proposed in Subsection \ref{Sec:MICQP}. The results are depicted in Figures \ref{F:scheffeD} and \ref{F:scheffeI} and described in Tables \ref{T:scheffeD} and \ref{T:scheffeI}. \bigskip We see that all three methods of computing EDs provide designs of similar efficiency, but the conic reformulation of AQuA can decrease the computation time by as much as two orders of magnitude. \bigskip Note that in this case, the ER method cannot be used at all to transform an AD into an ED. Besides the fact that the support of the ADs in this model is very large, as can be seen in Figures \ref{F:scheffeD} and \ref{F:scheffeI}, the marginal and symmetry constraints cannot be incorporated into ER without significant modification. \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{figure_MIX_D.pdf} \caption{(a) $D$-optimal approximate design for the Scheff\'{e} mixture model \eqref{scheffe} as obtained by the SOCP solver. (b) $D$-efficient exact designs obtained by the MISOCP solver. (c) $D$-efficient exact designs obtained by AQuA via the IQP solver. (d) $D$-efficient exact designs obtained by AQuA with the MICQP solver.
The vertices correspond to the pure mixtures $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, the gray dots represent the discrete design space, with the larger colored dots denoting the obtained designs.}\label{F:scheffeD} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{figure_MIX_I.pdf} \caption{(a) $I$-optimal approximate design for the Scheff\'{e} mixture model \eqref{scheffe} as obtained by the SOCP solver. (b) $I$-efficient exact designs obtained by the MISOCP solver. (c) $I$-efficient exact designs obtained by AQuA via the IQP solver. (d) $I$-efficient exact designs obtained by AQuA with the MICQP solver. The vertices correspond to the pure mixtures $(1,0,0)$, $(0,1,0)$ and $(0,0,1)$, the gray dots represent the discrete design space, with the larger colored dots denoting the obtained designs.}\label{F:scheffeI} \end{center} \end{figure} \begin{table}[!h] \begin{center} \begin{tabular}{ c | c | c | c | c } \hline Panel & type & method & efficiency & time \\ \hline (a) & appr. & SH-SOCP & $1.000$ & $7.59$ \\ (b) & exact & SH-MISOCP & $0.98377$ & $616.91$ \\ (c) & exact & AQuA-IQP & $0.98373$ & $625.53$ \\ (d) & exact & AQuA-MICQP & $0.98374$ & $8.56$ \\ \hline \end{tabular} \caption{$D$-optimality, related to Fig. \ref{F:scheffeD}. Approximate and exact designs of the mixture experiment analyzed in Subsection \ref{ss:scheffe} computed by methods SH-SOCP, SH-MISOCP (both described in \citet{SagnolHarman}), AQuA-IQP based on the direct use of an integer quadratic solver and AQuA-MICQP based on the low-rank reformulation in \ref{Sec:MICQP}. The efficiency is computed relative to the optimal approximate design.}\label{T:scheffeD} \end{center} \end{table} \begin{table}[!h] \begin{center} \begin{tabular}{ c | c | c | c | c } \hline Panel & type & method & efficiency & time\\ \hline (a) & appr. 
& SH-SOCP & $1.000$ & $3.87$ \\ (b) & exact & SH-MISOCP & $0.99627$ & $602.96$ \\ (c) & exact & AQuA-IQP & $0.98928$ & $636.16$ \\ (d) & exact & AQuA-MICQP & $0.99647$ & $4.54$ \\ \hline \end{tabular} \caption{$I$-optimality, related to Fig. \ref{F:scheffeI}. Approximate and exact designs of the mixture experiment analyzed in Subsection \ref{ss:scheffe} computed by methods SH-SOCP, SH-MISOCP (both described in \citet{SagnolHarman}), AQuA-IQP based on the direct use of an integer quadratic solver and AQuA-MICQP based on the low-rank reformulation in \ref{Sec:MICQP}. The efficiency is computed relative to the optimal approximate design.}\label{T:scheffeI} \end{center} \end{table} \subsection{$D$- and $I$-optimal subsampling of a dataset under an upper constraint on budget and lower constraint on average quality}\label{ss:wine} Lastly, we will show that the conic specification of AQuA can be used as a tool for computing EDs for large design spaces, in particular for a constrained information-based subsampling of ``tall'' datasets; see, e.g., \citet{iboss} for a justification of this approach. Here, the purpose is to select a subsample for screening, with its quality assessed via a linear regression model. In contrast to the existing information-based subsampling methods, we can require a subsample that satisfies limits on the numbers of selected objects within given strata and, simultaneously, a lower constraint on the quality as well as an upper constraint on the price of the subsample. \bigskip To this end we used the wine datafile of \citet{wine} that contains data on approximately $150000$ wine reviews from WineEnthusiast. The aim is to subsample this database for survey, marketing or educational purposes. \bigskip After removing duplicates and incomplete entries, we were left with $n=111534$ wine reviews containing variables on country of origin, description, points, price, province, title, variety, and winery.
Out of this dataset, we are to sample wines so that the combined price of the wines is at most \$1000, the average quality is at least $90$ points, and there is exactly one wine from each of the $42$ countries. To avoid selecting the same wine more than once, we added the upper bounds $\xi_i \leq 1$ for all $i=1,\ldots,n$. The model used was linear regression with $m=3$ parameters: an intercept, the quality points, and the logarithm of the price as explanatory variables. \bigskip We will use the robustness of AQuA with respect to the selection of the anchor matrix and the iterative approach explained in Subsection \ref{iter}. For the computation of the first AD, we used the SOCP formulation from \citet{SagnolHarman} applied to a random sub-selection comprising 1500 data-points. Then, we sequentially applied AQuA until convergence. \bigskip For both criteria, we ran the randomly initiated computation $5$ times, and in every case it converged to the same solution in as few as $4$ steps (including the first, SOCP computation), each taking less than $4$ seconds. The resulting $42$-element subsamples are visualized in Figures \ref{wineD} and \ref{wineI}. It turns out that for both resulting subsamples, the cumulative price of the $42$ wines is exactly \$1000 and the average quality is exactly $90$ points. To meet the restrictions, the subsample computed using the $D$-optimality criterion is automatically concentrated largely in the region of inexpensive wines of a good quality, while still making the samples diverse enough to permit precise estimation of the parameters of the linear model. The result based on $I$-optimality is similar, but since $I$-optimality minimizes the average variance over all points, the subsample is more concentrated in the area that is most densely populated.
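All of the requirements above (budget, average quality, one wine per country, no repeated wine) are linear in the $0$--$1$ design vector $\xi$. On synthetic stand-in data (the variable names and sizes below are ours, not taken from the actual datafile), the constraint system can be encoded and checked as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_countries = 500, 42          # synthetic stand-in for the cleaned wine data
price = rng.uniform(5.0, 60.0, n)
points = rng.integers(80, 101, n).astype(float)
country = rng.integers(0, n_countries, n)

def feasible(xi):
    """All constraints of the subsampling problem, linear in the 0/1 vector xi."""
    return (
        price @ xi <= 1000.0                      # total price at most $1000
        and points @ xi >= 90.0 * xi.sum()        # average quality at least 90
        and all(xi[country == c].sum() == 1       # exactly one wine per country
                for c in range(n_countries))
    )

# a simple greedy candidate: per country, the cheapest wine with >= 90 points
xi = np.zeros(n)
for c in range(n_countries):
    cand = np.flatnonzero((country == c) & (points >= 90.0))
    if cand.size:
        xi[cand[np.argmin(price[cand])]] = 1.0
```

Such a greedy candidate need not be feasible (the budget constraint may still be violated) and ignores the information criterion entirely; AQuA instead optimizes the quadratic surrogate of the criterion subject to exactly these linear constraints.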
\bigskip We also remark that, using a standard computer, AQuA based on the IQP solver (unlike the specific MICQP formulation of AQuA) cannot be applied to design spaces of size larger than a few thousand, because of the quadratic memory requirements. That is, here we again demonstrated the advantage of the proposed conic AQuA approach over the approach of AQuA from \citet{HF}, not only over methods directly based on a MISOCP formulation of the problem as in \citet{SagnolHarman}. \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{figure_SUB_D.pdf} \caption{The subsample (blue), chosen from the wine reviews data, based on the $D$-criterion. The area of the gray dots is proportional to the density of the full dataset. The inlay shows the convergence of the iterations of the sequential computation of the subsample (the vertical axis is the value of the $D$-criterion).}\label{wineD} \end{center} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[width=\linewidth]{figure_SUB_I.pdf} \caption{The subsample (blue), chosen from the wine reviews data, based on the $I$-criterion. The area of the gray dots is proportional to the density of the full dataset. The inlay shows the convergence of the iterations of the sequential computation of the subsample (the vertical axis is the value of the $I$-criterion).}\label{wineI} \end{center} \end{figure} \section{Conclusions} We extended the quadratic approximation from a single version of the $D$-criterion used in \citet{HF} to two versions of all Kiefer's criteria with an integer parameter, including the criterion of $A$-optimality and, via transformation, to the criterion of $I$-optimality. \bigskip Importantly, we also proved a low-rank property of the associated quadratic forms and used it to construct an efficient conic formulation of the integer quadratic programming problem. The formulation permits using the method of AQuA in the case of large design spaces that are out of reach of the previous methods.
On the other hand, for smaller design spaces (provided that $m \ll n$ is still satisfied), the conic formulation of AQuA can significantly speed up the computation; in particular, it can rapidly identify an optimal exact design in cases where one of the optimal approximate designs is also an optimal exact design. Moreover, using AQuA it is possible to obtain efficient exact designs for situations with simultaneous constraints on various characteristics of the experiment, e.g., its form, cost, and quality. \bigskip The basic AQuA approach presumes the knowledge of the optimal approximate information matrix. However, because the algorithms for computing optimal approximate designs are well developed and fast, this is not considered to be a drawback. Moreover, there is a large body of literature that provides theorems that explicitly yield optimal approximate designs. Note that with rounding procedures alone, the practical value of optimal approximate designs is weaker, since direct heuristic computational methods can often find better designs, entirely circumventing approximate design theory and computation. Our results demonstrate that optimal approximate designs carry more useful information for the construction of exact designs than is utilized by rounding procedures. \bigskip We also showed that the AQuA approach is generally robust with respect to the misspecification of the optimal information matrix, and can even be used sequentially, starting from an anchor matrix that is far from the approximate optimum. \bigskip Finally, the approach of AQuA can be extended to various criteria other than those analyzed in this paper; what is needed is only their quadratic approximation\footnote{For the application of the conic improvement, the quadratic forms must have low ranks.}, which can be found either analytically or numerically. This opens up new possibilities for the computation of optimal experimental designs with respect to criteria that are difficult to evaluate.
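To make the core idea concrete, here is a minimal, self-contained sketch of one AQuA-style step (our toy construction, not the paper's implementation): the $D$-criterion $\log\det M$ is replaced by its second-order Taylor expansion around an anchor matrix, and the resulting integer problem is solved exactly, here by brute force standing in for the IQP solver.

```python
import numpy as np
from itertools import combinations_with_replacement

# toy problem: quadratic regression on a 5-point grid (m = 3 parameters)
xs = np.linspace(-1.0, 1.0, 5)
F = np.column_stack([np.ones_like(xs), xs, xs**2])
n, m = F.shape
N = 6  # required exact-design size

# anchor matrix: here simply the information matrix of the uniform design
w = np.full(n, 1.0 / n)
M_star = F.T @ (w[:, None] * F)
A = np.linalg.inv(M_star)
ld_star = np.linalg.slogdet(M_star)[1]

def info_matrix(counts):
    return F.T @ ((np.asarray(counts) / N)[:, None] * F)

def logdet(M):
    sign, ld = np.linalg.slogdet(M)
    return ld if sign > 0 else -np.inf

def surrogate(M):
    # second-order expansion of log det around M_star (the AQuA idea)
    D = M - M_star
    return ld_star + np.trace(A @ D) - 0.5 * np.trace(A @ D @ A @ D)

designs = list(combinations_with_replacement(range(n), N))
counts = [np.bincount(d, minlength=n) for d in designs]
best = max(counts, key=lambda c: surrogate(info_matrix(c)))    # AQuA-style step
exact = max(counts, key=lambda c: logdet(info_matrix(c)))      # ground truth
eff = np.exp((logdet(info_matrix(best)) - logdet(info_matrix(exact))) / m)
```

The surrogate is concave (its quadratic part is $-\tfrac12\|A^{1/2}\Delta A^{1/2}\|_F^2$), which is what makes the integer program tractable for a real solver; `eff` is the $D$-efficiency of the surrogate optimum relative to the exact integer optimum.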
\bigskip \textbf{Acknowledgments} The work was supported by Grant No 1/0341/19 from the Slovak Scientific Grant Agency (VEGA).
\section{Locality} In the last several years the use of the overlap fermion has become more popular because the conceptual and technical clarity that results from its exact chiral symmetry on the lattice is seen to overcome its superficially higher computational cost as compared to conventional formulations of the fermion. Furthermore, the disparity in numerical intensity can be mitigated by using coarse lattice spacing, given that good scaling and localization properties are established. Hern\'{a}ndez, Jansen, and L\"{u}scher~\cite{Her99} showed numerically that Neuberger's overlap operator is local (using the Wilson gauge action on fine lattices). Recently, Golterman, Shamir, and Svetitsky~\cite{Gol05} have speculated that overlap simulations with a cutoff of $1\,{\rm GeV}$ (such as~\cite{Che04}) might have a range as long as 4 lattice units, and thus be afflicted by unphysical degrees of freedom as light as $0.25\,{\rm GeV}$. Here we show directly that such is not the case; the range is about 1 lattice unit (in Euclidean distance or 2 units of ``taxi-driver'' distance). All is well. \subsection{Lattice Details} We use the renormalization-group-improved Iwasaki~\cite{Iwa85} gauge action, on three different lattices; for each, the lattice size, lattice spacing, and number of configurations used are tabulated in Table~\ref{Table:lattices}. \begin{table}[hb] \begin{center} \begin{tabular}{llr} $N_s \times N_t$ & $a ({\rm fm})$ & $N_{\rm cfg}$ \\ \hline $16^3\times 28$ & $0.20$ & $300/10$ \\ $20^3\times 32$ & $0.17$ & $ 98/10$ \\ $28^3\times 44$ & $0.13$ & $ /10$ \\ \hline \end{tabular} \caption{\label{Table:lattices} Lattice size, lattice spacing, number of configurations (for scaling/locality). 
} \end{center} \end{table} For the associated scaling study of hadron masses, we use the overlap fermion~\cite{Neu98} and massive overlap operator~\cite{Ale00} \begin{eqnarray*} D(m_0) & = & (\rho + \frac{m_0a}{2}) + (\rho - \frac{m_0a}{2} ) \gamma_5 \epsilon (H) \end{eqnarray*} where $\epsilon (H) = H /\sqrt{H^2}$, $H = \gamma_5 D_w$, and $D_w$ is the usual Wilson fermion operator, except with a negative mass parameter $-\rho = 1/2\kappa -4$ in which $\kappa_c < \kappa < 0.25$; we take $\kappa = 0.19$ in our calculation which corresponds to $\rho = 1.368$. For the locality study, we set $m_0=0$ to look at the properties of the massless operator $D(0)$. Complete details are described in~\cite{Che04}. \subsection{Locality as Measured by Taxi-Driver Distance} It is convenient for formal reasons to discuss locality in terms of ``taxi-driver'' distance~\cite{Her99}. \begin{equation} r_{\rm TD} = || x-y||_1 = \sum_{\mu} |x_\mu - y_\mu| \end{equation} The locality of the overlap operator is then studied by plotting the quantity $|D(r)|$ ($f(r)$ in the notation of~\cite{Her99}) as a function of the taxi-driver distance for a localized source, $\psi_{\alpha}(x)=\delta(x)\delta_{\alpha\beta}$ for fixed Dirac-color index $\beta$. \begin{equation} |D(r)| = \max \{ ||D\psi(x)|| \,\, | \,\, \sum_{\mu} x_{\mu}=r \} \end{equation} For large $r$, the kernel of the Dirac-overlap operator decays exponentially with decay rate $\nu=r_{\rm ov}^{-1}$, where $r_{\rm ov}$ is the range (characteristic decay distance) measured in lattice units. \subsection{Results} In the left pane of Fig.~\ref{Fig:locality}, we plot $|D(r)|$ as a function of taxi-driver distance for each of three lattice spacings. At large distances, we fit to an exponentially decreasing function to extract the range $r_{\rm ov}$. These are tabulated in Table~\ref{Table:locality}. 
We note that our results are consistent with the results of Hern\'{a}ndez, Jansen and L\"{u}scher~\cite{Her99} on finer lattices for the overlap operator ($\rho=1.4$) with Wilson action at $\beta=6.0$, $6.2$, and $6.4$, where they find $\nu=r_{\rm ov}^{-1}=0.49$. \begin{table}[ht] \begin{center} \begin{tabular}{ll} $a$ & $r_{\rm ov}$ \\ \hline $0.20\,{\rm fm}$ & $1.93(1)$ \\ $0.17\,{\rm fm}$ & $1.83(1)$ \\ $0.13\,{\rm fm}$ & $1.81(1)$ \\ \hline \end{tabular} \caption{\label{Table:locality} The range (taxi driver metric) for three lattice spacings. It is less than two lattice units on a lattice as coarse as $0.20\,{\rm fm}$.} \end{center} \end{table} \begin{figure}[ht] \vspace{0cm} \begin{center} \includegraphics[angle=0,width=0.5\hsize]{figures/taxi_3lattice_finalcut_fit.eps}% \includegraphics[angle=0,width=0.5\hsize]{figures/taxi_final_result.eps} \vspace{-0.5cm} \caption{\label{Fig:locality} Left: For each of three lattice spacings, the expectation value of $|D(r)|$ as a function of the taxi-driver distance, $r$. For large $r$, $|D(r)|$ falls exponentially, with range $r_{\rm ov}$. Fitted values of $r_{\rm ov}$ are shown with fit intervals. Right: Taxi driver range in physical units (fm) as a function of lattice spacing. The range is small even at coarse lattice spacing and trends to zero in the continuum limit. } \vspace{0cm} \end{center} \end{figure} Furthermore, it is gratifying to see that even for our coarsest lattice, $0.20\,{\rm fm}$, the {\em measured\/} range is less than two lattice units. In the right pane of Fig.~\ref{Fig:locality} we plot the range in physical units as a function of lattice spacing; it trends to zero in the continuum limit. \subsection{Locality as Measured by Euclidean Distance} We conclude that it is perfectly acceptable to simulate overlap fermions with lattice spacing as coarse as $0.20\,{\rm fm}$, since for this we find that the range is not greater than two lattice units when measured in taxi-driver distance.
In fact, the situation is even better than it seems. To see this, we consider the more familiar standard Euclidean metric \begin{equation} r_{\rm E} = || x - y ||_2 = \sqrt{\sum_{\mu} |x_\mu - y_\mu|^2} \end{equation} \begin{figure}[ht] \vspace{0cm} \begin{center} \includegraphics[angle=0,width=0.6\hsize]{figures/eucl_16_cut1_bin_avg.eps} \vspace{0cm} \caption{\label{Fig:Euclidean} For the $16^3\times 28$ lattice, the expectation value of $|D(r)|$ as a function of Euclidean distance $r$ (for data cut to remove wrap-around effects). Data is plotted at each Euclidean distance in red. The blue points are averages within bins of width $\Delta{r}=0.5$. The exponential tail is then fit over the longest interval possible with acceptable $\chi^2$, as shown by the green line which has inverse slope $r_{\rm ov}=1.05(1)$, the range.} \vspace{0cm} \end{center} \end{figure} In Fig.~\ref{Fig:Euclidean} we plot $|D(r)|$ versus the Euclidean distance for our coarsest lattice. As compared to the corresponding plot in Fig.~\ref{Fig:locality} using the less-familiar taxi-driver distance, one sees that the data are more scattered due to violations of rotational symmetry, but are still clearly bounded by a worst-case decay rate. \begin{table}[ht] \begin{center} \begin{tabular}{ll} $a$ & $r_{\rm ov}$ \\ \hline $0.20\,{\rm fm}$ & $1.05(1)$ \\ $0.17\,{\rm fm}$ & $0.98(1)$ \\ $0.13\,{\rm fm}$ & $0.9(1)$ \\ \hline \end{tabular} \caption{\label{Table:Euclidean} The range (Euclidean metric) at three values of lattice spacing. It is less than about 1 lattice unit with lattice spacing as coarse as $0.20\,{\rm fm}$. } \end{center} \end{table} \pagebreak Again we fit the tail of the function with a decaying exponential to extract the range. We tabulate the results in Table~\ref{Table:Euclidean}.
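The binned tail fit used for Fig.~\ref{Fig:Euclidean} can be reproduced schematically on synthetic data (the decay length and noise level below are our choices, not the measured values):

```python
import numpy as np

rng = np.random.default_rng(0)
r_ov = 1.05                                 # decay length used to generate toy data
r = rng.uniform(0.5, 8.0, 2000)             # Euclidean distances (synthetic)
logD = -r / r_ov + 0.2 * rng.standard_normal(r.size)  # scattered log|D(r)|

# average log|D(r)| within bins of width 0.5, then fit a line to extract the range
edges = np.arange(0.5, 8.5, 0.5)
bins = np.digitize(r, edges)
centers = [0.5 * (edges[b - 1] + edges[b]) for b in range(1, len(edges))]
means = [logD[bins == b].mean() for b in range(1, len(edges))]
slope, intercept = np.polyfit(centers, means, 1)
range_est = -1.0 / slope                    # estimate of r_ov
```

With enough statistics the recovered `range_est` reproduces the input decay length; in the actual analysis the fit interval is additionally chosen to keep an acceptable $\chi^2$.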
Note that although the two ranges (taxi-driver and Euclidean) differ by a factor of two they are quite compatible heuristically; on an $L^4$ hypercube, the maximum taxi-driver distance is $4L$, and the maximum Euclidean distance is $\sqrt{4L^2}=2L$. So even at lattices as coarse as $a=0.2\,{\rm fm}$, the range is about 1 lattice unit (measured using Euclidean distance, or 2 units using taxi-driver distance). No unphysical degrees of freedom are induced at distances longer than the lattice cutoff. \section{Scaling} At Lattice 2004, Davies {\it et al.}~\cite{Dav04} collected world data to demonstrate that different quenched quark formulations could have a consistent continuum limit. \begin{figure}[ht] \vspace{0cm} \begin{center}\hspace*{2cm} \includegraphics[clip,angle=0,width=1.0\hsize]{figures/aokisq_fit.eps} \vspace{-0.5cm} \caption{\label{Fig:Aoki} ``Aoki'' plot for various quenched data, as obtained from~\cite{Dav04}. Our data is labeled ``overlap''.} \vspace{0cm} \end{center} \end{figure} The conclusion, as illustrated in Fig.~\ref{Fig:Aoki}, is that they could. But we emphasize that the constrained fit demanded that there exist a global continuum limit (by design). Furthermore, the global continuum limit differed substantially from the continuum limit obtained solely from the Wilson formulation, where the discretization errors are largest. The lesson learned is that with large discretization errors, it is quite possible to be misled when extrapolating to the continuum limit, even with high statistics and many lattice spacings. It is important to seek a formulation with very small discretization errors to be able to trust the continuum extrapolation. Here we add our data ``overlap'' to the global quenched spectrum data and find that of all the formulations, its discretization errors are smallest, allowing for viable computation at surprisingly coarse lattice spacing.
As another example of the efficacy of the overlap formulation, we made a non-perturbative computation~\cite{Zha05} of the renormalization constants of composite operators on the $16^{3}\times 28$ lattice using the regularization independent scheme. We found that the relations $Z_A=Z_V$ and $Z_S=Z_P$ hold to within 1\% above $m=1.6\,{\rm GeV}$. The $m_{{\Lambda}_{\rm QCD}}a^2$ and $(ma)^2$ corrections of the renormalization are small; the mass dependence is less than about 3\% up to $ma=0.6$. This is much superior to the competition. \section{Conclusions} It is viable to simulate quenched overlap fermions at surprisingly coarse lattice spacing. Locality is well under control; the range (characteristic exponential decay length) is about one lattice unit (of Euclidean distance, or about two lattice units of taxi-driver distance) for lattice spacing as coarse as $0.20\,{\rm fm}$ (such as in~\cite{Che04}), and trends to zero (in physical units) in the continuum limit. Scaling is remarkable. The Aoki plot is essentially flat up to $0.20\,{\rm fm}$. The overlap fermion outperforms all other formulations; discretization errors are smallest for overlap. Non-perturbative renormalization of operators shows little mass dependence~\cite{Zha05}; e.g.\ less than about 3\% up to $ma=0.6$ for the renormalization constants.
\section{Introduction and motivation} The classical Kre\u{\i}n-Feller differential operator $\Delta_{\nu, \Lambda}$, introduced in \cite{Fe57,KK68}, where $\nu$ denotes a non-atomic compactly supported Borel probability measure on $\mathbb{R}$ and where $\Lambda$ denotes the one-dimensional Lebesgue measure, has been investigated with respect to its spectral properties first by Fujita \cite{Fu87}, K\"uchler \cite{MR574035}, Langer \cite{MR0314125} and Kotani and Watanabe \cite{MR661628} and more recently by Arzt \cite{A15b}, Ehnes \cite{Ehnes2019} and Freiberg \cite{MR2017701,MR2030736,Fr05}. The case when $\nu$ is purely atomic has also been studied in \cite{MR2513598}, where it was shown that the eigenvalues of $\Delta_{\nu, \Lambda}$ depend not only on the positions of the atoms of $\nu$ but also on the weights of the atoms. Returning to the case when $\nu$ is non-atomic, it has been established that $\Delta_{\nu, \Lambda}$ is the infinitesimal generator of a Liouville Brownian motion (also known as a gap diffusion, skip-free diffusion, quasi-diffusion or generalised diffusion), see \cite{Burkhardt1983,MR2817342,MR3034785,X_Jin_2017,MR574035,MR0314125,MR3005002,MR3272822}. Here, we investigate generalised Kre\u{\i}n-Feller operators $\Delta_{\nu, \mu}$ for Borel measures $\nu$ and $\mu $ on the real line under the natural assumptions that $\operatorname{supp}(\nu) \subseteq \operatorname{supp}( \mu)$ and $\mu$ is atomless. In the case that $\nu=\mu =\Lambda$ the operator coincides with the classical second order weak derivative. For arbitrary $\mu=\nu$, atomless and compactly supported, a harmonic calculus for $\Delta_{\mu, \mu}$ was developed in \cite{FZ02} and, when $\mu$ is a self-similar measure supported on a Cantor set, it is now well established that the eigenvalue counting function of $\Delta_{\mu, \mu}$ is comparable to the square-root function.
In \cite{KSW16} the exact eigenvalues and eigenfunctions of $\Delta_{\mu, \mu}$ were obtained and it was shown that the eigenvalues do not depend on the given measure. Moreover, the eigenfunctions are given by a composition of the appropriate classical trigonometric functions with a phase space transformation induced by the distribution function of $\mu$. The case when the measure $\mu$ has a continuous as well as an atomic part was the subject of \cite{KSW17,KSW2019b}. There, it has been shown that, if $\mu$ has a continuous part, then the eigenvalues may depend on the position of the atoms, and otherwise not. In the present article, we elaborate on the connections between the generalised and the classical Kre\u{\i}n-Feller operators by establishing a suitable phase space transformation determined by the distribution function of $\mu$. As a first application of this observation we show that the spectral properties of the generalised Kre\u{\i}n-Feller operators can be reduced to those of the classical ones; and as a second application, we connect properties of the associated Liouville Brownian motions for generalised Kre\u{\i}n-Feller operators to those of the classical Kre\u{\i}n-Feller operators, with a special focus on the concept of walk dimension. This complements and partially resembles the general framework established by Dynkin \cite[Vol.\,I,\,\textsection\,6]{DynkinI_II}. \section{Setup and statement of main results} \subsection{Our setting} Let $\mu$ and $\nu$ denote two Borel probability measures on $[0,1]$ with $\operatorname{supp}(\nu) \subseteq \operatorname{supp}(\mu)$, $\nu(\{0,1\})=0$ and $\mu$ atomless. Denote the distribution functions of $\mu$ and $\nu$ by $F_{\mu}$ and $F_{\nu}$, respectively. Let $(C_{\nu, \mu}, \lVert \cdot \rVert_{\infty} )$ denote the Banach space of continuous functions on $[0,1]$ which are linear in $F_{\mu}$ on intervals where $F_{\nu}$ is constant.
Namely, on each connected component $J$ of $[0,1] \setminus \operatorname{supp}(\nu)$ the function $f$ is linear in $F_{\mu}$, that is $f(x)=a_J F_{\mu}(x)+b_J$ for all $x \in J$ and some $a_J, b_J \in \mathbb{R}$. As indicated above, we let $\Lambda$ denote the one-dimensional Lebesgue measure restricted to $[0,1]$. Set $\mathcal{S}^w \coloneqq L^2(\nu)$ and $\mathcal{S}^s \coloneqq C_{\nu,\mu}$, where $w$ stands for {\em weak} and $s$ stands for {\em strong}; we sometimes write $\mathcal{S}^*(\mu,\nu)$ instead of $\mathcal{S}^*$ to stress the dependence of the underlying measure spaces for $* \in \{s,w\}$. In what follows we will mainly be concerned with the Banach spaces $(\mathcal{S}^{w},\Vert\cdot\Vert_{2})$ and $(\mathcal{S}^{s},\Vert\cdot\Vert_{\infty})$. Now fix $* \in \{s,w\}$; a function $f$ belonging to the set $C([0,1] )$ of continuous functions on $[0,1] $ is said to lie in $\mathcal{D}^*(\Delta_{\nu,\mu})$ if there exist $a,b \in \mathbb{R}$ and $g \in \mathcal{S}^*$ with \begin{align}\label{KreinFeller} f(x)=a+bF_{\mu}(x)+\int_{[0,x]}( F_{\mu}(x)-F_{\mu}(y) ) g(y)\;\d \nu(y) \end{align} for all $x \in [0,1]$. Alternatively, by Fubini we can write $ f(x)=a+bF_{\mu}(x)+\int_{[0,x]}\int_{[0,y]} g(s)\;\d \nu(s)\;\d \mu(y)$. By the uniqueness of densities we observe that $f$ determines $a$, $b$ and $g$ uniquely and by setting $ \Delta_{\nu,\mu}f \coloneqq g$ we define an injective linear operator $\Delta_{\nu,\mu}:\mathcal{D}^*(\Delta_{\nu,\mu})\to \mathcal{S}^*$. The first derivative of $ f \in \mathcal{D}^{\text{*}}(\Delta_{\nu,\mu})$ is defined by \begin{align}\label{eq:first_derivative} \nabla_{\mu}f(x) \coloneqq \nabla_{\mu}f(0)+\int_{[0,x]} \Delta_{\nu,\mu}f(y)\;\d \nu(y), \;\; \text{where} \;\; \nabla_{\mu}f(0) \coloneqq \lim_{x \downarrow 0^+} \dfrac{f(x)-f(0)}{F_{\mu}(x)-F_{\mu}(0)} \;\; \text{and} \;\; 0^+ \coloneqq \inf (\operatorname{supp}(\mu)).
\end{align} The existence of the above limit follows from \eqref{KreinFeller} and the assumption that $\operatorname{supp}(\nu) \subseteq \operatorname{supp}( \mu)$. Additionally, from \eqref{KreinFeller} together with Lebesgue's dominated convergence theorem, we have that $f$ is constant on every interval of constancy of $F_{\mu}$, $a=f(0)$, $b= \nabla_{\mu} f(0)$ and $\mathcal{D}^s ( \Delta_{\nu,\mu} ) \subset \mathcal{D}^w (\Delta_{\nu,\mu}) \subset C_{\nu,\mu}$. Moreover, for all $f \in\mathcal{D}^*(\Delta_{\nu,\mu})$ and all $x \in (0,1)$, \begin{align*} \nabla_{\mu}(f)(x) = \lim_{ \substack{ y \downarrow x^+}} \dfrac{f(y)-f(x)}{F_{\mu}(y)-F_{\mu}(x)} + \Delta_{\nu,\mu} f (x)\nu(\{ x^+ \} ) (1 - \mathds{1}_{\{x^+\}}(x) ) = \lim_{ \substack{ y \uparrow x^-}} \dfrac{f(x)-f(y)}{F_{\mu}(x)-F_{\mu}(y)} + \Delta_{\nu,\mu} f (x)\nu(\{ x^- \} ). \end{align*} Here, $x^+ \coloneqq \inf\{ y \in [0,1] \colon F_{\mu}(x) < F_{\mu}(y) \}$ and $x^- \coloneqq \sup\{ y \in [0,1] \colon F_{\mu}(x)>F_{\mu}(y) \}$. Observe that, if $\nu$ is atomless, then $\nabla_{\mu}(f)$ is independent of $\nu$ and given by an `ordinary' differential quotient. For $ \gamma =(\alpha,\beta) \in [0,{\pi/2}]^{2}$ we consider the following eigenvalue problem for $\Delta_{\nu,\mu}$, see \cite{Fu87}, \begin{align}\label{EWC} \Delta_{\nu,\mu}f= \lambda f \end{align} with {\em Robin boundary conditions} \begin{align}\label{BC} f(0)\cos(\alpha)-\nabla_{\mu}f(0)\sin(\alpha)=0 \quad \text{and} \quad f(1)\cos(\beta)+\nabla_{\mu}f(1)\sin(\beta)=0. \end{align} We refer to the particular case $\gamma=(\pi/2,\pi/2)$ as the {\em Neumann case} and the case $\gamma=(0,0)$ as the {\em Dirichlet case}. We denote by $\mathcal{D}^{*}_{\gamma}( \Delta_{\nu,\mu} )$ the set of all $f \in \mathcal{D}^{*}( \Delta_{\nu,\mu} )$ which satisfy \eqref{EWC} and \eqref{BC}.
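For orientation, it is worth checking (a routine verification, not spelled out in the text) that in the classical case $\nu=\mu=\Lambda$ the defining identity \eqref{KreinFeller} reduces to the second order weak derivative mentioned in the introduction: since $F_{\Lambda}(x)=x$, equation \eqref{KreinFeller} reads
\begin{align*}
f(x)=a+bx+\int_{[0,x]} (x-y)\, g(y)\;\d y,
\end{align*}
so that $\nabla_{\Lambda}f=f'$, $\Delta_{\Lambda,\Lambda}f=f''$ in the weak sense, with $a=f(0)$ and $b=f'(0)$.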
Combining \eqref{KreinFeller} and \eqref{eq:first_derivative} with Fubini's theorem and the assumptions on $\nu,\mu$ one obtains a {\em Gauss-Green formula}; namely, for $f,g \in \mathcal{D}_{\gamma}^s( \Delta_{\nu,\mu} )$, \begin{align*} \int (\Delta_{\nu,\mu}f )g\;\mathrm{d} \nu &= \left( \nabla_{\mu}f (1)-\nabla_{\mu}f (0)\right)g(0)+ \int\nabla_{\mu}g(y)\left( \nabla_{\mu}f (1)+\Delta_{\nu,\mu}f (y)\nu \left(\{y\} \right)-\nabla_{\mu}f (y) \right) \d \mu(y) \\ &= \nabla_{\mu}f (1) g(1) - \nabla_{\mu}f (0) g(0) - \int \nabla_{\mu}f \nabla_{\mu} g \;\mathrm{d} \mu, \end{align*} where $\int \nabla_{\mu}g(y)\Delta_{\nu,\mu}f (y)\nu \left(\{y\} \right) \,\d\mu(y)=0$ since $\mu$ is atomless. Considering $g = f$ we obtain that $\Delta_{\nu,\mu}$ with domain $\mathcal{D}^{*}_{\gamma}( \Delta_{\nu,\mu} )$, for $\gamma = (\alpha,\beta) \in [0,\pi/2]^{2}$, is non-positive, and since \begin{align*} &\int (\Delta_{\nu,\mu}f )g\;\mathrm{d} \nu-\int (\Delta_{\nu,\mu}g )f \;\mathrm{d} \nu\\ &=-\tan(\beta) \nabla_{\mu}f(1) \nabla_{\mu}g(1)-\tan(\alpha) \nabla_{\mu}f(0) \nabla_{\mu}g(0)+\tan(\beta) \nabla_{\mu}f(1) \nabla_{\mu}g(1)+\tan(\alpha) \nabla_{\mu}f(0) \nabla_{\mu}g(0) =0, \end{align*} the boundary conditions force $\Delta_{\nu,\mu}$ also to be symmetric. \subsection{Main results} In \Cref{prop:laplace_backward} we establish a strong connection between $\Delta_{\nu,\mu}$ and $\Delta_{\nu \circ F_{\mu}^{-1}, \Lambda}$.
Indeed, by utilising the \textsl{pseudo-inverse} \begin{align*} \check{F}^{-1}_{\mu} \colon x \mapsto \inf\{ y \in [0,1] \colon F_{\mu}(y) \geq x \} \end{align*} of $F_{\mu}$, we prove, for $* \in \{s,w\}$ and $\gamma \in [0,\pi/2]^{2}$, that $\varphi: f \mapsto f \circ \check{F}_{\mu}^{-1}$ is an isometric isomorphism from $\mathcal{S}^{*}(\nu,\mu)$ onto $\mathcal{S}^{*}(\nu \circ F_{\mu}^{-1},\Lambda)$ with \begin{align*} \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda }\circ\varphi=\varphi\circ \Delta_{\nu,\mu }\quad \text{and}\quad\varphi ( \mathcal{D}^*_{\gamma}(\Delta_{\nu,\mu}))=\mathcal{D}^*_{\gamma}(\Delta_{\nu \circ F_{\mu}^{-1},\Lambda}). \end{align*} With this at hand, one may conclude that the spectral properties of $\Delta_{\nu \circ F_{\mu}^{-1},\Lambda}$ are inherited from $\Delta_{\nu ,\mu}$ and vice versa. For instance, using our correspondence theorem (\Cref{prop:laplace_backward}), we obtain that $\Delta_{\nu ,\mu}$ is a densely defined, self-adjoint operator with compact resolvent. Further, we are able to prove the following. \begin{enumerate}[leftmargin=*] \item We obtain \Cref{Freiberg} concerning the asymptotic growth rate of the eigenvalue counting function of $\Delta_{\nu,\mu}$, first provided in \cite{Fr05} for a certain class of self-similar measures. This is achieved via an application of \Cref{prop:laplace_backward} and the identification of $\eta = \nu \circ F_{\mu}^{-1}$ as a certain self-similar measure, in tandem with the corresponding result for the classical Kre\u{\i}n-Feller operator $\Delta_{\eta,\Lambda}$, see \cite{Fu87}. This generalises several of the results of \cite{KSW16}, where the case $\nu=\mu$ was considered.
\item Letting $(X_{t})_{t \geq 0}$ denote the Liouville Brownian motion with speed measure $\nu \circ F_{\mu}^{-1}$ (see \Cref{sec:LBM}), and utilising our correspondence theorem (\Cref{prop:laplace_backward}), we show that the infinitesimal generator of $(\check{F}^{-1}_{\mu}( X_t ))_{ t\geq 0}$ is the generalised Kre\u{\i}n-Feller operator $\Delta_{\nu,\mu}$ with Neumann boundary conditions. Additionally, we compute the walk dimension of $(X_t)_{t \geq 0}$ and $(\check{F}^{-1}_{\mu}( X_t ))_{ t\geq 0}$. \end{enumerate} \section{Kre\u{\i}n-Feller operators} \subsection{Properties of classical Kre\u{\i}n-Feller operators} In this section we collect important properties of classical Kre\u{\i}n-Feller operators, that is, we consider the case $\mu= \Lambda$, with respect to weak and strong solutions. Most of these results are nowadays folklore and can be found, for instance, in \cite[Behauptung 2.2, Satz 2.1, Satz 3.1]{MR0314125} with more or less detailed proofs. Since we could not find references where all the facts are proven in detail, we decided to give a quick overview here and to reduce all properties essentially to two key observations, namely the symmetry, which is a consequence of the Gauss-Green formula, and the surjectivity stated in the following lemma. \begin{Lem}\label{Surjec} For $\gamma=(\alpha, \beta) \in [0,{\pi/2}]^{2}$ and $* \in \{s,w\}$ we have that the map $\Delta_{\nu,\Lambda}: \mathcal{D}^{*}_{\gamma}( \Delta_{\nu,\Lambda} ) \rightarrow \mathcal{S}_{\gamma}^* $ is surjective, where $\mathcal{S}_{(\pi/2,\pi/2)}^*\coloneqq\left\{ g \in \mathcal{S}^* : \int g \d \nu=0 \right\}$ and $\mathcal{S}_{\gamma}^* \coloneqq \mathcal{S}^ *$ for $\gamma\not=(\pi/2,\pi/2)$.
For $\gamma=(\alpha, \beta) \in [0,{\pi/2}]^{2}\setminus\{(\pi/2,\pi/2)\}$ we have that $\Delta_{\nu,\Lambda}$ is also injective, and for its inverse $\Delta_{\nu,\Lambda}^{-1}: \mathcal{S}_{\gamma}^* \rightarrow \mathcal{D}^{*}_{\gamma}( \Delta_{\nu,\Lambda} )$ we have the following kernel representation \begin{align*} \Delta_{\nu,\Lambda}^{-1}g:x\mapsto \int K_{\alpha,\beta}(x,y) g(y) \,\d \nu(y) \end{align*} with continuous kernel given by \begin{align*} K_{\alpha,\beta}(x,y)\coloneqq \begin{cases} A_{\alpha,\beta}(1+\tan(\beta)-y)( \tan(\alpha)+x)+\mathbbm{1}_{[0,x]}(y)(x-y) &\mbox{for $\alpha,\beta\in [0,\pi/2)$},\\ -(\tan(\alpha)+x)+\mathbbm{1}_{[0,x]}(y)(x-y) &\mbox{for $\beta=\pi/2$ and $\alpha \in [0,\pi/2)$},\\ (y-1-\tan(\beta))+\mathbbm{1}_{[0,x]}(y)(x-y) &\mbox{for $\alpha= \pi/2$ and $\beta \in [0,\pi/2)$}, \end{cases} \end{align*} where $A_{\alpha,\beta}\coloneqq {-1}/(1+\tan(\alpha)+\tan(\beta))$ for $\alpha,\beta\in [0,\pi/2)$. For the Neumann case $\alpha=\beta=\pi/2$ the operator $\Delta_{\nu,\Lambda}$ is not injective; its kernel is given by $ \Delta_{\nu,\Lambda}^{-1}(\{0\})=\mathbb{R} \mathbbm{1}$. \end{Lem} \begin{proof} We only consider the case $\alpha,\beta \in [0,\pi/2)$; the other cases can be proved along the same lines. For fixed $ g \in \mathcal{S}^{*}$ and $x \in [0,1]$ we set \[ f(x)\coloneqq b\tan(\alpha)+bx+\int_{[0,x]} (x-y)g(y) \d \nu(y), \] with \[ b\coloneqq\dfrac{-1}{1+\tan(\alpha)+\tan(\beta)} \left(\tan(\beta)\int_{[0,1]}g(y) \d \nu(y)+ \int_{[0,1]} (1-y)g(y) \d \nu(y)\right). \] This imposes the right boundary conditions on $f$, and consequently we have $f \in \mathcal{D}^{*}_{\gamma}( \Delta_{\nu,\Lambda} ) $ with $\Delta_{\nu, \Lambda}f=g$. \end{proof} \begin{Rem} As a direct consequence it follows that only in the Neumann case does one have an eigenfunction with eigenvalue equal to zero.
\end{Rem} Recall the following abstract facts on linear operators. Assume that $A: \dom(A) \subset H \rightarrow H$ is a linear, symmetric and surjective operator on a Hilbert space $H$. Then one easily verifies that the annihilator of $\dom(A)$ is trivial, i.e.\ $\dom(A)^\perp=\{0\}$, or equivalently, that $\dom(A)$ is dense in $H$. We can deduce further that $A$ is also self-adjoint: The inclusion $\dom(A) \subset \dom(A^*)$ holds by symmetry of $A$, where $A^*$ denotes the adjoint of $A$. For the reverse inclusion note that for fixed $f \in \dom(A^*)$ there exists, by surjectivity of $A$, an element $g \in \dom(A)$ such that $A^*f=Ag$. Then, using symmetry again, for each $h \in \dom(A)$, we have $\langle f ,A h \rangle = \langle A^* f,h\rangle = \langle A g,h \rangle = \langle g , A h\rangle $ and by surjectivity of $A$ we deduce $f=g \in \dom(A)$. Now, we can apply this observation to our special setting; namely, for $\gamma \in [0,\pi/2]^2$ we set $H=\mathcal{S}_{\gamma}^w$, $A= \Delta_{\nu, \Lambda}$ and $\dom(A)=\mathcal{D}^w_{\gamma}(\Delta_{\nu, \Lambda} )\cap \mathcal{S}_{\gamma}^w$. In the case $\gamma \in [0, \pi/2]^2 \setminus \{ (\pi/2,\pi/2)\}$ it follows immediately that $ \Delta_{\nu, \Lambda}$ with domain $\mathcal{D}^w_{\gamma}(\Delta_{\nu, \Lambda} )$ is a densely defined, self-adjoint operator on $L^{2}(\nu)$. In the case of Neumann boundary conditions, i.e.\ $\gamma=(\pi/2,\pi/2)$, note that for the kernel we have $\Delta_{\nu, \Lambda}^{-1}(\{0\})= \mathbb{R}\mathds{1}$ and hence $L^2(\nu) = \mathcal{S}_{\gamma}^w \oplus \mathbb{R}\mathds{1}$. Therefore, $ \mathcal{D}^w_{\gamma}(\Delta_{\nu, \Lambda} )=\{f +a : f\in \mathcal{D}^w_{\gamma}(\Delta_{\nu, \Lambda} )\cap \mathcal{S}_{\gamma}^w, \ a \in \mathbb{R}\}$ is dense in $L^2(\nu)$. Using this observation it follows again that $\Delta_{\nu, \Lambda} $ with domain $\mathcal{D}^w_{\gamma}(\Delta_{\nu, \Lambda} )$ is a densely defined and self-adjoint operator on $L^2(\nu)$.
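Before proceeding, we illustrate the kernel representation of Lemma \ref{Surjec} with a simple consistency check in the classical Dirichlet case.
\begin{example}
Let $\nu=\Lambda$ and $\gamma=(0,0)$, so that $A_{0,0}=-1$ and $K_{0,0}(x,y)=-(1-y)x+\mathbbm{1}_{[0,x]}(y)(x-y)$. For $g\equiv 1$ we obtain
\[
\Delta_{\Lambda,\Lambda}^{-1}g(x)=\int_{[0,1]}K_{0,0}(x,y)\,\d y=-\frac{x}{2}+\frac{x^{2}}{2}=\frac{x(x-1)}{2},
\]
which indeed satisfies $f''=g\equiv 1$ together with the Dirichlet boundary conditions $f(0)=f(1)=0$.
\end{example}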
The following proposition summarizes this observation. \begin{Prop}\label{adjoint} The partially defined operator $\Delta_{\nu,\Lambda}:L^{2}(\nu)\to L^{2}(\nu) $ with domain $ \mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda}) $ for $\gamma \in [0,{\pi/2}]^{2}$ is self-adjoint, non-positive and, in particular, closed. \end{Prop} \begin{comment} \begin{proof} First, we prove that $\mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})$ is dense in $L^2(\nu)$ by showing that for the orthogonal complement $\mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})^\perp$ in $L^2(\nu)$ is equal to $\{0\}$. At first we consider the case $\alpha \neq \pi/2$ or $\beta \neq \pi/2$ Indeed, for $h \in L^2(\nu)$, by Lemma \ref{Surjec} we find $g \in \mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})$ such that $h= \Delta_{\nu,\Lambda}g$. For $h\in \mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})^\perp$, with this choice of $g$, and using the symmetry of $\Delta_{\nu,\Lambda}$ with domain $\mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})$ we have \[ \forall f \in \mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda}): \ 0=\langle f,h\rangle= \langle f, \Delta_{\nu,\Lambda}g \rangle = \langle \Delta_{\nu,\Lambda}f, g \rangle. \] Again by Lemma \ref{Surjec} this implies $g=0$ and hence $h=\Delta_{\nu,\Lambda}g=0$. For the case $\alpha=\beta=\pi/2$ for every $h \in \mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})^\perp$ there exists by Lemma \ref{Surjec} $g \in \mathcal{D}^{w}_{\gamma}( \Delta_{\nu,\Lambda})$ such that $\Delta_{\nu,\ \Lambda}g=h-\int h \d \nu$, and hence for all $f \in \mathcal{D}_{\gamma}^w \left( \Delta_{\nu,\ \Lambda} \right)$ \begin{align*}- \int f \d \nu \cdot \int h \d \nu= \langle f,h-\int h \d \nu \rangle=\langle f, \Delta_{\nu,\Lambda}g \rangle = \langle \Delta_{\nu,\Lambda} f, g \rangle. \end{align*} Choose $ k \coloneqq g-\int g \d \nu \in \mathcal{S}_{\gamma}^*$ and $n \in \mathbb N$ define \[ f(x)=n+\int_{[0,x]} (x-y)k(y) d \nu (y). 
\] This gives for all $n \in \mathbb N$ \begin{align*} \left( -n-\int \int_{[0,x]} (x-y)k(y) d \nu (y)\d \nu(x)\right) \int h \d \nu = \langle k, g \rangle. \end{align*} Thus, we must have \[ \int h d\nu =0. \] Again it follows $h=0$. It remains to prove $\dom(\Delta_{\nu,\Lambda})=\dom(\Delta_{\nu,\Lambda}^*)$. By symmetry we have $\dom(\Delta_{\nu,\Lambda}) \subset \dom(\Delta_{\nu,\Lambda}^*)$. For the reverse inclusion let $\gamma \in [0,\pi/2)^2$ and fix $f \in \dom(\Delta_{\nu,\Lambda}^*)$ and $g \in \dom(\Delta_{\nu,\Lambda})$ with $\Delta_{\nu,\Lambda}^*f=\Delta_{\nu,\Lambda}g$ which exists by Lemma \ref{Surjec}. Then, for every $h \in \dom(\Delta_{\nu,\Lambda})$, \begin{align*} \langle f ,\Delta_{\nu,\Lambda } h \rangle = \langle \Delta_{\nu,\Lambda }^* f,h\rangle = \langle \Delta_{\nu,\Lambda } g,h \rangle = \langle g , \Delta_{\nu,\Lambda } h\rangle. \end{align*} Again by Lemma \ref{Surjec} we therefore have $f=g$ implying $\dom(\Delta_{\nu,\Lambda}^*) \subset \dom(\Delta_{\nu,\Lambda})$. Now, we consider the case $\alpha=\beta=\pi/2$. Then we have for every $h \in \dom(\Delta_{\nu,\Lambda})$, \begin{align*} \langle f ,\Delta_{\nu,\Lambda } h \rangle = \langle \Delta_{\nu,\Lambda }^* f-\int \Delta_{\nu,\Lambda }^*f \d \nu ,h\rangle +\int \Delta_{\nu,\Lambda }^*f \d \nu\int h \d \nu . \end{align*} There exists $g \in \dom(\Delta_{\nu,\Lambda})$ such that \begin{align*} \langle \Delta_{\nu,\Lambda }^* f-\int \Delta_{\nu,\Lambda }^*f \d \nu ,h\rangle +\int \Delta_{\nu,\Lambda }^*f \d \nu\int h \d \nu &=\langle \Delta_{\nu,\Lambda }g ,h\rangle +\int \Delta_{\nu,\Lambda }^*f \d \nu\int h \d \nu \\ &=\langle g , \Delta_{\nu,\Lambda } h\rangle +\int \Delta_{\nu,\Lambda }^*f \d \nu\int h \d \nu . \end{align*} Therefore we obtain \[ \langle f-g ,\Delta_{\nu,\Lambda } h \rangle =\int \Delta_{\nu,\Lambda }^*f \d \nu\int h \d \nu \] from this it follows $\int \Delta_{\nu,\Lambda }^*f \d \nu=0$. 
Finally, it follows \[ \forall k \in \mathcal{S}_{\gamma}^*: \ \langle f-g ,k \rangle=0. \] Therefore, we have $f-g=\text{const.}$ and the statement follows. \end{proof} \end{comment} \begin{Cor} Fix $\gamma=(\alpha,\beta) \in [0,\pi/2]^2\setminus\{ (\pi/2, \pi/2)\}$; then the operator $R_0\coloneqq -\Delta_{\nu,\Lambda}^{-1}: \mathcal{S}_{\gamma}^w \to \mathcal{S}_{\gamma}^w$ is compact and self-adjoint. \end{Cor} \begin{proof} Lemma \ref{Surjec} shows that $R_0$ is a Hilbert-Schmidt operator with continuous (bounded) kernel function, and therefore compact. Further, the operator $R_0$ is self-adjoint; this follows from the symmetry of $\Delta_{\nu, \Lambda}$ in tandem with the fact that $R_0$ is also bounded. \end{proof} \begin{Cor}\label{Spectral} Let $\gamma=(\alpha,\beta) \in [0,\pi/2]^2$; then the operator $\Delta_{\nu,\Lambda}$ with domain $\mathcal{D}_{\gamma}^{w}(\Delta_{\nu,\Lambda} )$ gives rise to an orthonormal (possibly finite) basis of eigenfunctions with eigenvalues $\lambda_{n}\leq 0$. If $L^{2}(\nu)$ is not finite dimensional, then we have a countable number of eigenvalues with $\lim_{n \rightarrow \infty} -\lambda_n=\infty$; in particular, $\Delta_{\nu,\Lambda}$ is an unbounded operator. Otherwise, there are only finitely many eigenfunctions and $\Delta_{\nu,\Lambda}$ is bounded. \end{Cor} \begin{proof} First, we consider the case $\gamma=(\alpha,\beta) \in [0,\pi/2]^2\setminus\{ (\pi/2, \pi/2)\}$. Note that if $f \in \mathcal{D}^w_{\gamma}\left(\Delta_{\nu, \Lambda} \right)$ is an eigenfunction of $\Delta_{\nu, \Lambda}$ with eigenvalue $\lambda<0$, then applying Lemma \ref{Surjec} gives \[ \Delta_{\nu, \Lambda}f=\lambda f \Longleftrightarrow \lambda^{-1} f = \Delta_{\nu, \Lambda}^{-1}f. \] Then the statement follows directly from the spectral theorem for linear, compact and self-adjoint operators applied to $ \Delta_{\nu, \Lambda}^{-1}$.
For the case $\gamma=(\alpha,\beta)=(\pi/2,\pi/2)$ we consider the resolvent operator $R^{\lambda}_{\nu,\Lambda}\coloneqq(\lambda I -\Delta_{\nu,\Lambda})^{-1}$, $\lambda>0$, of $\Delta_{\nu,\Lambda}$ with domain $\mathcal{D}^w_{\gamma}\left(\Delta_{\nu, \Lambda} \right)$. From its integral representation given in \cite[\textsection\,1.2]{MR0314125}, one may conclude that $R^{\lambda}_{\nu,\Lambda}$ is compact and self-adjoint, see also \cite[Theorem 1, p. 251]{MR574035}. Again applying the spectral theorem proves the statement. \end{proof} \begin{Rem} Clearly, $L^{2}(\nu)$ is finite dimensional if and only if the support of $\nu$ is a finite set. \end{Rem} \begin{comment} \begin{proof} \[ \max\left\{\left|a_i-a_j \right|,\left|(b_i-b_j)\int_{[0,1]}x \,\d\nu(x) \right|\right \}\leq \left\|g_i-g_{j}\right\|_{L^1(\nu)}+\left\| f_i-f_j \right\|_{L^1(\nu)}. \] Therefore, both $(a_i)$ and $(b_{i})$ are Cauchy sequences, hence $a:=\lim_{i \rightarrow \infty} a_i$ and $b:=\lim_{i \rightarrow \infty} b_i$ exist. Combining this with \eqref{eq:closedness} shows that uniformly $f_i \rightarrow h$ with $h: x \mapsto a+bx+\int_{[0,x]}(x-y)g(y) d \nu(y)$ and $\nabla f_i (0)\rightarrow \nabla h(0)$. Thus, $f=h \in \mathcal{D}^w_{\gamma}\left( \Delta_{\nu, \Lambda} \right)$ and $\Delta_{\nu,\Lambda}f=g$. \end{proof} \end{comment} \begin{Lem}\label{densecm} For $\gamma=(\alpha,\beta )\in [0,{\pi/2}]^{2} $, set \begin{align*} C_{\nu,\Lambda}^{\gamma} \coloneqq \begin{cases} \left\{ f \in C_{\nu,\Lambda} \colon f(0)=0, \ f(1)=0\right\}& \text{ if } \alpha=\beta=0, \\ \left\{ f \in C_{\nu,\Lambda} \colon f(0)=0\right\}& \text{ if } \alpha=0, \ \beta \in (0,\pi/2],\\ \left\{ f \in C_{\nu,\Lambda} \colon f(1)=0\right\} &\text{ if } \alpha \in (0,\pi/2], \ \beta=0, \\ C_{\nu,\Lambda }& \text{ if } \alpha,\beta \in (0,\pi/2].
\end{cases} \end{align*} The set $\mathcal{D}^s_{\gamma}(\Delta_{\nu, \, \Lambda})$ is dense in $\left(C_{\nu,\Lambda}^{\gamma}, \lVert \cdot \rVert_{\infty}\right)$. \end{Lem} \begin{proof} The result can be found in \cite[Behauptung 2.4]{MR0314125} without a detailed proof. We will sketch a proof of this result for the case $\gamma=(\alpha,\beta) \in (0,\pi/2) \times [0,\pi/2)$; the other cases follow in a similar fashion. Fix $\Phi \in (C_{\nu,\Lambda}^{\gamma})' $ such that \[ \forall f \in \mathcal{D}^s_{\gamma}(\Delta_{\nu, \, \Lambda}): \ \Phi(f)=\int f(x) \,\d \phi(x)=0, \] where $\phi$ is the signed distribution function representing $\Phi$. By definition $\phi$ is of bounded variation and locally constant on the complement of $\operatorname{supp}(\nu)$. With $E\coloneqq-(1+\tan(\beta))A_{\alpha,\beta} $ it follows from the proof of Lemma \ref{Surjec} that, for all $g \in C_{\nu,\Lambda}^{\gamma}$, \begin{align} \int (E+s\cdot A_{\alpha,\beta})g(s)\,\d \nu(s) \cdot \int (\tan(\alpha)+x) \,\d\phi(x)&= \int \int (x-s)g(s) \,\d \nu (s)\,\d \phi(x) \label{eq:dual1}\\ &= \int \left(\phi(1)(1-s)- \int_{[s,1]}\phi(x) \,\d x \right)g(s)\,\d\nu(s).\nonumber \end{align} Further, we have \[ \int (\tan(\alpha)+x) \,\d\phi(x)=\tan(\alpha)\left( \phi(1)-\phi(0)\right)+\phi(1)-\int \phi(s) \,\d s\eqqcolon B_{\phi}. \] Combining these identities gives \begin{align*} \int \left(E \cdot B_{\phi} +s \cdot A_{\alpha,\beta} \cdot B_{\phi} -\phi(1)(1-s)+ \int_{[s,1]}\phi(x) \,\d x\right)g(s) \,\d \nu(s)=0. \end{align*} If we consider $g(s)\coloneqq E \cdot B_{\phi} +s \cdot A_{\alpha,\beta} \cdot B_{\phi} -\phi(1)(1-s)+ \int_{[s,1]}\phi(x) \,\d x \in C_{\nu,\Lambda}^{\gamma}$, it follows that, for all $s \in [0,1]$, \[ \int_{[s,1]}\phi(x) \,\d x=\phi(1)-E \cdot B_{\phi} -s( A_{\alpha,\beta} \cdot B_{\phi}+\phi(1)), \] which is only possible if $\phi(s)=A_{\alpha,\beta} B_{\phi}+\phi(1)$ for all $s \in [0,1]$.
Therefore, $\Phi$ is a Dirac measure at $0$ with weight $\phi(0)$, and by \eqref{eq:dual1} we have, for all $f \in C_{\nu,\Lambda}^{\gamma}$, \[ \phi(0) \ \tan(\alpha) \ \int\left(E+A_{\alpha,\beta} \cdot s\right)f(s) \,\d \nu(s)=0. \] For the particular choice $f \in C_{\nu,\Lambda}^{\gamma}$ given by $f:s\mapsto E+A_{\alpha,\beta} \cdot s$ the above integral is positive and hence $ \phi(0)=0$. Consequently, $\Phi=0$ and we have shown that the annihilator of $ \mathcal{D}^s_{\gamma}(\Delta_{\nu, \Lambda} )$ is trivial; therefore $ \mathcal{D}^s_{\gamma}(\Delta_{\nu, \, \Lambda})$ is dense in $ C_{\nu,\Lambda}^{\gamma}$. \end{proof} \subsection{Generalized Kre\u{\i}n-Feller operators and transformations of measure spaces} Let us first state two basic key observations. \begin{Lem}\label{identity} The function $\check{F}^{-1}_{\mu} \circ F_{\mu}$ equals the identity $\nu$-almost everywhere. \end{Lem} \begin{proof} Note that $\check{F}^{-1}_{\mu}( F_{\mu}(x) )\neq x$ if and only if there exists $\varepsilon>0$ with $F_{\mu}(x-\varepsilon)=F_{\mu}(x)$. This means, if $\check{F}^{-1}_{\mu}( F_{\mu}(x) )\neq x$, then $x$ belongs to an interval of constancy of $F_\mu$. This, in tandem with our hypothesis $\operatorname{supp}(\nu) \subseteq \operatorname{supp}(\mu)$, implies that the countable union of these intervals has $\nu$-measure zero. \end{proof} \begin{Lem}\label{lem: isomorphism} The mapping $\varphi: \mathcal{S}^{*}(\nu,\mu) \to \mathcal{S}^{*}(\nu\circ {F}^{-1}_{\mu} ,\Lambda)$ defined by $\varphi(f) \coloneqq f \circ \check{F}^{-1}_{\mu}$ is an isometric isomorphism with inverse $\varphi^{-1}(f) \coloneqq f \circ F_{\mu}$, for $*\in \{s,w\}$. \end{Lem} \begin{proof} This is a consequence of \Cref{identity} together with the push-forward formula for measures in the weak case and the definition of $C_{\nu,\mu}$ in the strong case.
\end{proof} The following theorem is the main observation in this section needed for all subsequent corollaries. \begin{Thm}\label{prop:laplace_backward} For $\gamma \in [0,{\pi/2}]^{2}$, we have that $\Delta_{\nu \circ F_{\mu}^{-1}, \Lambda }\circ\varphi=\varphi\circ \Delta_{\nu,\mu }$ and $\varphi ( \mathcal{D}^*_{\gamma}(\Delta_{\nu,\mu}))=\mathcal{D}^*_{\gamma}(\Delta_{\nu \circ F_{\mu}^{-1},\Lambda})$. \end{Thm} \begin{proof} If $f \in \mathcal{D}^*_{\gamma}(\Delta_{\nu,\mu})$, then, for all $x \in \operatorname{supp}(\mu)$, \begin{align*} f(x)=a+bF_{\mu}(x)+ \int \mathds{1}_{[0, x]}(y)(F_{\mu}(x)-F_{\mu}(y) ) \Delta_{\nu, \mu } f(y) \;\d \nu(y), \end{align*} where $a=f(0)$ and $b= \nabla_{\mu} f(0)$. Using \Cref{identity} and replacing $x$ with $\check{F}_{\mu}^{-1}(x)$ gives \begin{align*} f(\check{F}_{\mu}^{-1}(x)) &=a+bF_{\mu}(\check{F}_{\mu}^{-1}(x))+ \int \mathds{1}_{[0, \check{F}_{\mu}^{-1}(x)]}(y)(F_{\mu}(\check{F}_{\mu}^{-1}(x))-F_{\mu}(y) ) \Delta_{\nu, \mu }(f )(y)\;\d \nu(y) \\ &=a+b x+ \int \mathds{1}_{[0, \check{F}_{\mu}^{-1}(x)]}(y)(x-F_{\mu}(y) ) \Delta_{\nu, \mu }(f )(\check{F}_{\mu}^{-1}(F_{\mu}(y)))\;\d \nu(y) \\ &=a+bx+ \int \mathds{1}_{[0, x]}(F_{\mu}(y))(x-F_{\mu}(y) ) \Delta_{\nu, \mu }(f )(\check{F}_{\mu}^{-1}(F_{\mu}(y)))\;\d \nu(y) \\ &=a+bx+ \int \mathds{1}_{[0, x]}(y)(x-y ) \Delta_{\nu, \mu }(f ) \circ \check{F}_{\mu}^{-1}(y) \;\d (\nu \circ F_{\mu}^{-1})(y). \end{align*} Therefore, $a=f ( \check{F}^{-1}_{\mu}(0) )$ and $b=\nabla_{\mu}f ( \check{F}^{-1}_{\mu}(0) )$. This shows \begin{align*} f \circ \check{F}_{\mu}^{-1} \in \mathcal{D}^*(\Delta_{\nu \circ F_{\mu}^{-1}, \, \Lambda}) \quad \text{and} \quad \Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda} (f \circ \check{F}_{\mu}^{-1}) = \Delta_{\nu, \, \mu}f\circ \check{F}^{-1}_{\mu}. 
\end{align*} If $f \circ \check{F}_{\mu}^{-1} \in \mathcal{D}^*_{\gamma}(\Delta_{\nu \circ F_{\mu}^{-1}, \, \Lambda})$, then, as above, for $x \in \operatorname{supp}(\mu)$ with $F_{\mu}(x-\varepsilon)<F_{\mu}(x)$ for all $\varepsilon>0$, \begin{align*} f( x ) =f(\check{F}_{\mu}^{-1}( F_{\mu}( x)) ) &= c+dF_{\mu}(x) + \int \mathds{1}_{[0, F_{\mu}(x)]}(y)(F_{\mu}(x)-y ) \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda}(f \circ \check{F}_{\mu}^{-1})(y) \;\d (\nu \circ F_{\mu}^{-1})(y)\\ &= c+dF_{\mu}(x)+ \int \mathds{1}_{[0, F_{\mu}(x)]}(F_{\mu}(y))(F_{\mu}(x)-F_{\mu}(y) ) \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda} (f \circ \check{F}_{\mu}^{-1}) \circ F_{\mu}(y)\;\d \nu (y) \\ &= c+dF_{\mu}(x)+ \int \mathds{1}_{[0, x]}(y)(F_{\mu}(x)-F_{\mu}(y) ) \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda} (f \circ \check{F}_{\mu}^{-1}) \circ F_{\mu}(y)\;\d \nu (y), \end{align*} with $c=f (\check{F}^{-1}_{\mu}(0 ))$ and $d =\nabla_{\Lambda} (f \circ \check{F}^{-1}_{\mu}) (0 )$. The case $x \in \operatorname{supp}(\mu)$ with $F_{\mu}(x-\varepsilon)=F_{\mu}(x)$ for some $\varepsilon>0$ implies that $x$ lies in an interval of constancy. Notice that \begin{align*} \check{F}_{\mu}^{-1}( [0,1])=\{0\} \cup \left( \operatorname{supp}(\mu) \cap \{ x \in [0,1] \colon x \text{ is not a right endpoint of an interval of constancy of } F_{\mu} \} \right), \end{align*} thus we can consider a modification of $f$ such that $f$ is constant on each interval of constancy of $F_{\mu}$. Thus, we have $c=f(0)$ and $d=\nabla_{\mu}f(0)$. Therefore, we have \begin{align*} \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda}(f \circ \check{F}_{\mu}^{-1} ) \circ F_{\mu} = \Delta_{\nu,\mu} ( f ) \quad \text{and} \quad f\in \mathcal{D}^{*}(\Delta_{\nu, \mu}). \end{align*} It remains to verify that the boundary conditions are preserved.
Combining the above with \Cref{identity} gives \begin{align*} \nabla_{\mu}f(1)= \nabla_{\mu}f(0)+\int_{[0,1]} \Delta_{\nu,\mu}f(y)\;\d \nu(y) &= \nabla_{\Lambda}(f \circ \check{F}_{\mu}^{-1})(0)+\int_{[0,1]} \Delta_{\nu,\mu}f ( \check{F}_{\mu}^{-1}(y) )\;\d \nu \circ F_{\mu}^{-1}(y) \\ &= \nabla_{\Lambda}(f \circ \check{F}_{\mu}^{-1})(0)+\int_{[0,1]} \Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda} (f \circ \check{F}_{\mu}^{-1}) \;\d \nu \circ F_{\mu}^{-1}(y) = \nabla_{\Lambda} (f \circ \check{F}_{\mu}^{-1} )(1). \qedhere \end{align*} \end{proof} \begin{Cor} For $\gamma=(\alpha,\beta )\in [0,{\pi/2}]^{2} $, set \begin{align*} C_{\nu,\mu}^{\gamma} \coloneqq \begin{cases} \left\{ f \in C_{\nu,\mu} \colon f(0)=0, \ f(1)=0\right\}& \text{ if } \alpha=\beta=0, \\ \left\{ f \in C_{\nu,\mu} \colon f(0)=0\right\}& \text{ if } \alpha=0, \ \beta \in (0,\pi/2],\\ \left\{ f \in C_{\nu,\mu} \colon f(1)=0\right\} &\text{ if } \alpha \in (0,\pi/2], \ \beta=0, \\ C_{\nu,\mu }& \text{ if } \alpha,\beta \in (0,\pi/2]. \end{cases} \end{align*} The set $\mathcal{D}^s_{\gamma}(\Delta_{\nu, \, \mu})$ is dense in $(C_{\nu,\mu}^{\gamma}, \lVert \cdot \rVert_{\infty})$. \end{Cor} \begin{proof} This follows from \Cref{lem: isomorphism}, Lemma \ref{densecm} and Theorem \ref{prop:laplace_backward}. \end{proof} \begin{Cor} For each $\gamma \in [0,{\pi/2}]^{2}$, the operator $\Delta_{\nu, \, \mu}$ with domain $\mathcal{D}_{\gamma}^w(\Delta_{\nu,\mu})$ is densely defined and self-adjoint. \end{Cor} \begin{proof} The denseness follows from \Cref{lem: isomorphism}, Proposition \ref{adjoint} and Theorem \ref{prop:laplace_backward}.
Furthermore, for $f \in \dom(\Delta_{\nu,\mu}^*)$, by Theorem \ref{prop:laplace_backward} and Lemma \ref{identity}, \begin{align*} g\mapsto \left\langle f ,\Delta_{\nu,\mu} g \right\rangle_{L^2(\nu)} =\left\langle f\circ \check{F}^{-1}_{\mu} , \Delta_{\nu \circ F^{-1}_{\mu},\Lambda} (g \circ \check{F}^{-1}_{\mu}) \right\rangle_{L^2(\nu \circ F_{\mu}^{-1})} \end{align*} defines a continuous linear functional on $ \dom(\Delta_{\nu,\mu})$. Combining Proposition \ref{adjoint} and Theorem \ref{prop:laplace_backward} we deduce $ f \circ \check{F}^{-1}_{\mu} \in \dom\big( \Delta_{\nu \circ F^{-1}_{\mu},\Lambda}^{*}\big)= \dom\big( \Delta_{\nu \circ F^{-1}_{\mu},\Lambda}\big)=\mathcal{D}_{\gamma}^w( \Delta_{\nu \circ F^{-1}_{\mu},\Lambda})$ and consequently $f \in \mathcal{D}_{\gamma}^w(\Delta_{\nu,\mu})=\dom(\Delta_{\nu,\mu})$. \end{proof} \begin{Rem} There is an analogous theorem in the general theory of Markov processes, see \cite[Theorem 10.13, p.~325]{DynkinI_II}. Taking into account that $\Delta_{\nu,\mu}$ with domain $\mathcal{D}_{\gamma}^{*}(\Delta_{\nu,\mu})$ has a probabilistic interpretation as the infinitesimal generator of a Markov process, the result above is not surprising. \end{Rem} Corollary \ref{Spectral} implies that $\Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda}$ with domain $\mathcal{D}_{\gamma}^w ( \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda} )$ gives rise to an orthonormal basis of eigenfunctions with non-positive eigenvalues. If $\operatorname{supp}(\nu)$ is infinite, we have eigenvalues $(\lambda_n)_{n \in \mathbb{N}}$ with $\lim_{n \to \infty}-\lambda_n = \infty$; otherwise, there are only finitely many eigenvalues.
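As a simple illustration of the correspondence, consider the case $\nu=\mu$.
\begin{example}
Since $\mu$ is atomless, we have $\mu \circ F_{\mu}^{-1}=\Lambda$, so \Cref{prop:laplace_backward} relates $\Delta_{\mu,\mu}$ to the classical operator $\Delta_{\Lambda,\Lambda}$. In particular, the functions $x \mapsto \cos(n\pi F_{\mu}(x))$, $n \in \mathbb{N}_{0}$, are eigenfunctions of $\Delta_{\mu,\mu}$ with Neumann boundary conditions and eigenvalues $-n^{2}\pi^{2}$, and $\mathds{1}$ together with the normalised functions $\sqrt{2}\cos(n\pi F_{\mu})$, $n \in \mathbb{N}$, forms an orthonormal basis of $L^{2}(\mu)$; this is precisely the situation studied in \cite{KSW16}.
\end{example}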
Using the one-to-one correspondence established in \Cref{prop:laplace_backward} to relate the spectral properties of $\Delta_{\nu,\mu}$ with those of $\Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda}$, we obtain the following. \begin{Cor}\label{thm Spec} For fixed $ \gamma \in [0,{\pi/2}]^{2}$, the operators $\Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda} $ with domain $\mathcal{D}^w_{\gamma}( \Delta_{\nu \circ F_{\mu}^{-1},\Lambda} )$ and $ \Delta_{\nu,\mu} $ with domain $\mathcal{D}^w_{\gamma}( \Delta_{\nu,\mu})$ have the same eigenvalues $(\lambda_n)_{n \in \mathbb{N}}$. Further, if $f$ is an eigenfunction of $ \Delta_{\nu,\mu} $, then $f \circ \check{F}^{-1}_{\mu}$ is an eigenfunction of $ \Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda} $, and if $f$ is an eigenfunction of $\Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda}$, then $f \circ F_{\mu}$ is an eigenfunction of $ \Delta_{\nu,\mu}$. In particular, if $(f_n)_{n \in \mathbb{N}}$ denotes the orthonormal basis consisting of eigenfunctions of $ \Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda}$ then $(f_n \circ F_{\mu})_{n \in \mathbb{N}}$ forms an orthonormal basis consisting of $ \Delta_{\nu,\mu}$-eigenfunctions. \end{Cor} \Cref{thm Spec} can be seen as a generalisation of \cite{KSW16} where the $\Delta_{\mu,\mu}$-Laplacian has been considered. \begin{Cor}\label{Reso} Assume that $\operatorname{supp}(\nu) \subseteq \operatorname{supp}(\mu)$ and $\gamma \in[0,{\pi/2}]^{2}$. For $\lambda>0$, letting $R_{\nu , \mu}^{\lambda}$ denote the resolvent operator of $ \Delta_{\nu, \mu}$ with domain ${ \mathcal{D}^*_{\gamma}(\Delta_{\nu,\mu})}$, for all $f \in \mathcal{S}^*$ \begin{align*} R_{\nu,\mu}^{\lambda}( f ) \circ \check{F}_{\mu}^{-1}=R_{\nu \circ F_{\mu}^{-1},\Lambda}^{\lambda}(f \circ \check{F}_{\mu}^{-1}). 
\end{align*} In particular, we have \begin{align*} \big \Vert R_{\nu \circ F_{\mu}^{-1},\Lambda}^{\lambda} \big\Vert_{L^2(\nu \circ F_{\mu}^{-1} ) }=\big \Vert R_{\nu,\mu}^{\lambda} \big\Vert_{L^2 (\nu) } \end{align*} and \begin{align*} \big \Vert R^{\lambda}_{\nu \circ F_{\mu}^{-1},\Lambda} \big\Vert_{C_{\nu \circ F_{\mu}^{-1},\Lambda}} = \big\Vert R^{\lambda}_{\nu,\mu}\big\Vert_{C_{\nu,\mu}}. \end{align*} \end{Cor} \begin{proof} Note that the resolvent $R^{\lambda}_{\nu,\mu}$ is well defined for all $\lambda>0$ (see \cite{MR2017701}) and for $f \in \mathcal{S}^*$ we have \begin{align*} \Delta_{\nu,\mu} R_{\nu ,\mu}^{\lambda}f =\lambda R_{\nu, \mu}^{\lambda}f-f, \quad R_{\nu ,\mu}^{\lambda}f \in \mathcal{D}^*_{ \gamma}( \Delta_{\nu,\mu} ) . \end{align*} \Cref{prop:laplace_backward} gives \begin{align*} \Delta_{\nu\circ F_{\mu}^{-1},\Lambda} (R_{\nu ,\mu}^{\lambda}f\circ \check{F}^{-1}_{\mu}) \circ {F}_{\mu} =\lambda R_{\nu,\mu}^{\lambda}f-f \end{align*} and hence \begin{align*} \Delta_{\nu\circ F_{\mu}^{-1},\Lambda} (R_{\nu ,\mu}^{\lambda}f\circ \check{F}^{-1}_{\mu}) =\lambda R_{\nu,\mu}^{\lambda}f\circ \check{F}_{\mu}^{-1} -f \circ \check{F}_{\mu}^{-1}, \end{align*} proving the first part. The statement on the norms now follows from \Cref{lem: isomorphism}. \end{proof} \begin{Cor}\label{Feller} Let $\gamma \in[0,{\pi/2}]^{2}$. Then the operator $\Delta_{\nu,\mu}$ with domain $ \mathcal{D}_{\gamma}^s(\Delta_{\nu,\mu})$ is the infinitesimal generator of a strongly continuous contraction semigroup on $C_{\nu,\mu}^{\gamma}$. \end{Cor} \begin{proof} It is well known that this holds true for the classical Kre\u{\i}n-Feller operator \cite[Behauptung 4.1]{MR0314125}. This in tandem with the Hille-Yosida theorem \cite[p.
11, Theorem 1.12]{ma1992introduction} and \Cref{Reso} yields \begin{align*} \big\Vert R^{\lambda}_{\nu \circ F_{\mu}^{-1},\Lambda} \big\Vert_{C^{\gamma}_{\nu \circ F_{\mu}^{-1},\Lambda}} = \big\Vert R^{\lambda}_{\nu,\mu} \big\Vert_{C_{\nu,\mu}^{\gamma}} \leq 1/\lambda , \end{align*} for all $\lambda>0$. Further, $\Delta_{\nu, \mu}$ is a densely defined and closed operator in $C_{\nu,\mu}^{\gamma}$. Applying the Hille-Yosida theorem again gives the statement. \end{proof} \section{Applications} \subsection{Spectral asymptotics} In this section we review the asymptotic spectral properties of $\Delta_{\nu,\mu}$ for certain classes of self-similar measures. We will show how the results in \cite{Fr05} can be deduced from \cite{Fu87} with the help of the above established isomorphism. Let us give general assumptions on the self-similar measures, which are clearly fulfilled under the assumptions (A.1)~--~(A.4) of \cite{Fr05}. \begin{assumptions}\label{StrongAssumption} Let $S_i:[0,1]\to [0,1]$, $i=1,\dots,M$, denote a family of affine contractions fulfilling the open set condition (OSC), that is, $S_i((0,1)) \cap S_j((0,1))= \emptyset$, for all $j \neq i$, and let $\nu$ denote the associated self-similar measure with probability weight vector $(p_{1},\ldots ,p_{M}) \in (0,1)^M$ uniquely determined by \begin{align*} \nu(A)=\sum_{i=1}^{M} p_i \nu(S_i^{-1}(A)) \end{align*} for $A \in \mathfrak{B}([0,1])$. For fixed $(\sigma_{1},\ldots ,\sigma_{M})\in (0,1)^{M}$ with $\sum_{i=1}^M \sigma_i \leq 1$ let $\mu$ denote an atomless probability measure with $\operatorname{supp}(\nu) \subseteq \operatorname{supp}(\mu)$ such that for all $A \in \mathfrak{B}([0,1])$ and $i=1,\dots ,M$, we have $\mu(S_i(A))=\sigma_i \mu(A)$.
\end{assumptions} \begin{Thm}\label{Freiberg} For the decreasing sequence of eigenvalues $( \lambda_n )_{n \in \mathbb{N}}$ of $\Delta_{\nu,\mu}$ with domain $ \mathcal{D}^{w}_{\gamma}(\Delta_{\nu ,\mu})$, $\gamma\in [0,\pi/2]^{2}$, we have \begin{align*} -\lambda_n\asymp n^{1/ u} \quad \text{and} \quad N_{\Delta_{\nu,\mu}}(x)\asymp x^{ u}, \end{align*} where $ u\in (0,1)$ denotes the unique number with $\sum_{i=1}^M ( \sigma_i p_i )^{ u}=1.$ \end{Thm} \begin{proof} We have $\sigma_i F_{\mu}(x)=\sigma_i \mu([0,x])= \mu(S_i( [0,x] ) )=F_{\mu}(S_i(x))- F_{\mu}(S_i(0))$, for all $x \in [0,1]$, $i \in \{1,\dots,M\}$. Setting $\tilde{S}_i(x)\coloneqq\sigma_i x+F_{\mu}(S_i(0))$ we obtain $F_{\mu} \circ S_i= \tilde{S}_i \circ F_{\mu}$, and hence, for all $A \in \mathfrak{B}([0,1])$, \begin{align*} \sum_{i=1}^M p_i\nu( F_{\mu}^{-1} \circ \tilde{S}_i^{-1}(A)) &=\sum_{i=1}^M p_i\nu( S_i^{-1} \circ F_{\mu}^{-1}(A)) =\nu ( F_{\mu}^{-1}(A) ). \end{align*} This shows that $ \nu \circ F_{\mu}^{-1}$ is the unique self-similar measure with contractions $( \tilde{S}_i )_{i=1,\dots, M}$ and weights $(p_i)_{i=1,\dots,M}$. Further, \begin{align*} \tilde{S}_i( [0,1] )=[F_{\mu}(S_i(0)),\sigma_i+F_{\mu}(S_i(0))]=[F_{\mu}(S_i(0)),F_{\mu}(S_i(1))], \end{align*} which shows that the OSC is satisfied for $( \tilde{S}_i \colon i=1,\dots ,M )$ given that $( {S}_i \colon i=1,\dots ,M )$ satisfies the OSC. Therefore, we can apply the classical result of Fujita \cite{Fu87} for the spectral dimension of $\Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda} $ with domain ${\mathcal{D}^{w}_{\gamma} (\Delta_{\nu \circ {F}_{\mu}^{-1}, \, \Lambda} )}$. Combining this with \Cref{thm Spec} completes the proof. \end{proof} \begin{example} Let $S_i(x) \coloneqq s_ix+b_i$ for $x \in [0,1]$ with $s_i,b_i \geq 0$ and $s_i+b_i \leq 1$ for $i=1,\dots,M$ and assume the OSC. 
For $N\leq M$ let $\nu$ denote the self-similar measure with respect to $(S_{i}\colon i=1,\ldots ,N)$ and probability weight vector $(p_{1},\ldots , p_{N})\in (0,1)^{N}$ and support $L=\bigcup_{i=1}^N S_i(L)$, and $\mu$ the self-similar measure with respect to $(S_{i}\colon i=1,\ldots ,M)$ and probability weight vector $(\sigma_{1},\ldots , \sigma_{M})\in (0,1)^{M}$ and support $K=\bigcup_{i=1}^M S_i(K)$. Then we have $L \subset K$ and \begin{align*} \nu \circ F_{\mu}^{-1}(A)= \sum_{i=1}^N p_i \nu(S_i^{-1}\circ F_{\mu}^{-1}(A)). \end{align*} If for $\delta_{K}\coloneqq \dim_{H}(K)$ and $\delta_{L}\coloneqq \dim_{H}(L)$ we choose $\sigma_{m}\coloneqq s_{m}^{\delta_{K}}$, $m =1,\ldots, M$, and $p_{\ell}\coloneqq s_{\ell}^{\delta_{L}}$, $\ell =1,\ldots, N$, then we find the well-known relation $ u=\delta_{K}/(\delta_{K}+\delta_{L})$. \end{example} \subsection{Liouville Brownian motion and walk dimension}\label{sec:LBM} In this section we will construct Liouville Brownian motion via a time change of a fixed Brownian motion. We give a short overview of basic properties of this stochastic process. Afterwards, we compute the walk dimension for special classes of gap diffusions. Additionally, we show that the infinitesimal generator coincides with a generalised Kre\u{\i}n-Feller operator. \begin{assumptions} We fix a tuple $\left(\Omega, \mathcal{F}, ( \mathcal{F}_t)_{t \geq 0}, ( \theta_t)_{t \geq 0}, \mathbb{P}_x\right)$, where $\mathbb{P}_x$ denotes a probability measure, $( \mathcal{F}_t)_{t \geq 0}$ is the right-continuous completed natural filtration and $(\theta_t )_{t \geq 0}$ is the shift operator. The expectation with respect to $\mathbb{P}_x$ is denoted by $\mathbb{E}_x$. We call a stochastic process $(B_t )_{t \geq 0}$ a Brownian motion if the following are satisfied. 
\begin{enumerate} \item[•]$\mathbb{P}_x(B_0=x)=1$. \item[•] For $0 \leq s_0< \dots <s_n$ with $n \in \mathbb N$, the increments $B_{s_1}-B_{s_0}, \dots ,B_{s_n}-B_{s_{n-1}}$ are stochastically independent. \item[•] The increment $B_{s}-B_{t}$, $s>t \geq 0$, follows a Gaussian distribution with mean $0$ and variance $s-t$. \end{enumerate} Let $(L_{x}^t)_{t \geq 0, x \in \mathbb{R}}$ denote the jointly continuous version of the local time of the Brownian motion, see \cite[Chapter VI]{Revuz2013}. \end{assumptions} Let $m$ denote a non-zero Borel measure on $(\mathbb{R},\mathcal{B})$. \begin{Def}[$m$-Liouville Brownian motion] \label{LBM} We define for $ t \geq 0$ the {\em (inverse) time-change function} \begin{align*} \Phi_t \coloneqq \int_{\mathbb{R}} L_x^t \;\mathrm{d}m(x),\;\; \ \hat{\Phi}^{-1}_t \coloneqq \inf\left\{s \geq 0 \colon \Phi_s>t\right \}, \end{align*} where $\hat{\Phi}^{-1}_t$ is called the right-continuous inverse of $\Phi_t$. Now, we define the new process for $x \in \operatorname{supp}(m)$ \begin{align*} \left((X_t)_{ t \geq 0} \coloneqq (B_{\hat{\Phi}^{-1}_t})_{t \geq 0}, (\mathcal{F}_{\hat{\Phi}^{-1}_t})_{t \geq 0}, \mathbb{P}_x\right ), \end{align*} which will be called $m$-Liouville Brownian motion with speed measure $m$ starting in $x$. \end{Def} \begin{Def}[Walk dimension]\label{DefWalkD} Let $(X_t)_{t \geq 0}$ be a Liouville Brownian motion with speed measure $m$. Then the \textit{walk dimension} $d_W(x)$ at $x \in \operatorname{supp}(m)$ is defined by \begin{align}\label{WalkD} d_W(x) \coloneqq \lim_{R \to \infty} \dfrac{\log(\mathbb{E}_x[\tau_{(-R+x,R+x)}] )}{\log(R)}, \end{align} assuming that the limit exists, where $\tau_{(-R+x,R+x)} \coloneqq \inf \{ t \geq 0 \colon X_t \notin (-R+x,R+x)\}$. \end{Def} If the support of $m$ is bounded, then the walk dimension does not exist. 
In this case it can be useful to consider the so-called \textit{local walk dimension} \begin{align*} d_{LW}(x) \coloneqq \lim_{R \downarrow 0} \dfrac{\log(\mathbb{E}_x[\tau_{(-R+x,R+x)}] )}{\log(R)}. \end{align*} There are other definitions of the walk dimension; for example, in \cite{KGOLMANKHANEH2018960} the walk dimension $d_{w}(x)$ is defined by \begin{align*} \mathbb{E}_x [ (X_t-x )^2 ] \asymp t^{2/d_w(x)}. \end{align*} It remains open in which cases this definition coincides with Definition \ref{DefWalkD}. \begin{Prop}[{\cite[Lemma 3.1]{MR3034785}}]\label{SkipfreeP} For all $t \geq 0$, we have $X_t \in \operatorname{supp}(m)$ $\mathbb{P}_x$-almost surely. \end{Prop} \begin{Thm} The tuple $\left((X_t )_{t \geq 0}, (\mathcal{F}_{\hat{\Phi}^{-1}_t})_{t \geq 0}, (\theta_{\hat{\Phi}^{-1}_t} )_{t \geq 0}, ( \mathbb{P}_x )_{x \in \operatorname{supp}(m)}\right)$ defines a Feller process. Namely, it defines a strong Markov process such that, for all $f \in C_b (\operatorname{supp}(m) )$, the function $x \mapsto \mathbb{E}_x[f(X_t) ]$ belongs to $C_b(\operatorname{supp}(m) )$ and $\lim_{ t \downarrow 0} \mathbb{E}_x[f(X_t)] =f(x)$. Here, $C_b (\operatorname{supp}(m) )$ denotes the set of bounded continuous functions with domain $\operatorname{supp}(m)$. \end{Thm} To compute the walk dimension and the infinitesimal generator, we need the following lemma. \begin{Lem}\label{PotentialGapD} Fix $x_0,x_1 \in \operatorname{supp}(m)$ with $x_0<x_1$ and set $\tau_{(x_0,x_1)} \coloneqq \inf\{ t \geq 0 \colon X_t \notin (x_0,x_1)\}$. For $x \in (x_0,x_1) \cap \operatorname{supp}(m)$, the following hold. 
\begin{enumerate} \item[(i)] For all bounded measurable functions $f$ on $\operatorname{supp}(m)$ we have \begin{align*} \qquad\quad\mathbb{E}_x\left[ \int_0^{\tau_{(x_0,x_1)}}f(X_s)\,\mathrm{d}s \right]=\int_{[x_0,x_1]} G_{x_0,x_1}(x,y)f(y)\;\mathrm{d}m(y), \quad \text{where} \quad G_{x_0,x_1}(x,y) \coloneqq 2\dfrac{(x \wedge y-x_0)(x_1-x \vee y)}{x_1-x_0} \end{align*} for $x,y \in (x_0,x_1)$. In particular, we obtain $\mathbb{E}_x [ \tau_{(x_0,x_1)} ] < \infty$. \item[(ii)] We have \begin{align*} \mathbb{P}_x\left(X_{\tau_{(x_0,x_1)}} =x_0\right)=\dfrac{x_1-x}{x_1-x_0} \quad \text{and} \quad \mathbb{P}_x\left(X_{\tau_{(x_0,x_1)}} =x_1\right)=\dfrac{x-x_0}{x_1-x_0}. \end{align*} \item[(iii)] We have for $x<x_1$ with $x,x_1 \in \operatorname{supp}(m)$ \begin{align*} \mathbb{E}_x\left[ \int_0^{\tau_{(-\infty,x_1]}}f(X_s)\,\mathrm{d}s\right]=\int_{(-\infty,x_1]}2(x_1-x \vee y)f(y)\;\mathrm{d}m(y) \end{align*} and for $x_2 \in \operatorname{supp}(m)$ such that $x_2<x$ \begin{align*} \mathbb{E}_x\left[ \int_0^{\tau_{(x_2,\infty)}}f(X_s)\,\mathrm{d}s\right]=\int_{[x_2,\infty)}2(x \wedge y-x_2)f(y)\;\mathrm{d}m(y). \end{align*} \end{enumerate} \end{Lem} \begin{proof} The proofs of (i) and (ii) can be found in \cite[p.~42, Lemma 2.4.5]{Burkhardt1983}. The proof of (iii) follows in a similar way as that of (ii), taking into account that the following hold for $a<x,y<b$: \[ \mathbb{E}_x[ L_y^{\tau_{(-\infty,a)}} ]=2(y \wedge x-a) \quad \text{and} \quad \mathbb{E}_x[ L_y^{\tau_{(b,\infty)}} ]=2(b-y \vee x). \qedhere \] \end{proof} Now, we give the connection between generalised Kre\u{\i}n-Feller operators $D_{\nu}D_{\mu}$ and the infinitesimal generators of transformed gap diffusions. We consider Borel measures $\mu,\nu$ with compact support such that $ \nu( \{0,1\})=0$ and $\operatorname{supp}(\nu) \subseteq \operatorname{supp}(\mu) \subset [0,1]$. For the case that $F_{\mu}$ is strictly increasing, see \cite[pp.~64--65]{Burkhardt1983}. 
The only difference to the calculation in \cite{Burkhardt1983} is the replacement of the inverse with the pseudo-inverse and the fact that $\operatorname{supp}(\mu) \neq [0,1]$ if $F_{\mu}$ is not strictly increasing. Now, let $(X_t)_{ t \geq 0}$ be a Liouville Brownian motion with speed measure $\nu \circ F_{\mu}^{-1}$. We call $Y_t \coloneqq \check{F}_{\mu}^{-1}(X_t)$ the (generalised) $\nu$-$\mu$-Liouville Brownian motion with speed measure $\nu \circ F_{\mu}^{-1}$, which is again a strong Markov process, as it is a deterministic transformation of $(X_t)_{t \geq 0}$. \begin{Def}[Infinitesimal generator] Let $(Z_t)_{t \geq 0}$ denote a Markov process with compact state space $E$ and let $C(E)$ denote the continuous functions on $E$. A function $f \in C(E)$ is said to belong to the domain $\mathcal{D}(A)$ of the infinitesimal generator of $(Z_t)_{t \geq 0}$ if the limit \begin{align*} Af(x)=\lim_{t \downarrow 0} ( \mathbb{E}_x[f(Z_t) ]-f(x))/t \end{align*} exists in $C(E)$ with respect to $\lVert \cdot \rVert_{\infty}$. \end{Def} Now, we compute the infinitesimal generator of $(Y_t)_{t \geq 0}$. \begin{Thm}\label{GapDSec} Let $(X_t)_{ t \geq 0}$ be a $\nu \circ F_{\mu}^{-1}$-Liouville Brownian motion and let $A$ denote the infinitesimal generator of $(Y_t)_{t \geq 0}=(\check{F}^{-1}_{\mu}(X_t))_{t \geq 0}$. For $f \in \mathcal{D}(A)$ there exists a continuous continuation of $f$ in $C_{\nu,\mu}$ (also denoted by $f$) such that \begin{align}\label{SecondOrder} f(x)=f(0) +\int_{0}^x (F_{\mu}(x)-F_{\mu}(y) )2Af(y)\,\mathrm{d}\nu(y), \; \; x \in \mathbb{R}, \end{align} and the Neumann boundary conditions $\nabla_{\mu} f(0)=\nabla_{\mu} f(1)=0$ are satisfied. \\ \begin{Rem} We will present two proofs. The first proof uses \Cref{prop:laplace_backward} and a result of \cite{Burkhardt1983}. The second proof will make use of Lemma \ref{PotentialGapD}. 
\end{Rem} \begin{proof} Let $A_Y$ denote the infinitesimal generator of $(Y_t)_{t \geq 0}$ on $C_{\nu,\mu}$ and $A_X$ the infinitesimal generator of $(X_t)_{t \geq 0}$ on $C_{\nu \circ F_{\mu}^{-1},\Lambda}$. First, note that for $f \circ \check{F}_{\mu}^{-1} \in \mathcal{D}(A_X)$ and $x \in F_{\mu}( \operatorname{supp}(\nu))$ we have \begin{align*} \lim_{t \downarrow 0} ( \mathbb{E}_x[ f( \check{F}_{\mu}^{-1}(X_t))-f(\check{F}^{-1}_{\mu}(x)) ] )/t =\lim_{t \downarrow 0} ( \mathbb{E}_{\check{F}_{\mu}^{-1}(x)}[ f (Y_t) -f(\check{F}^{-1}_{\mu}(x))] )/t =\lim_{t \downarrow 0} ( \mathbb{E}_{\check{F}_{\mu}^{-1}(x)}[ f (Y_t) -f(Y_0)] )/t. \end{align*} From this, it follows that \begin{align*} f \circ \check{F}_{\mu}^{-1} \in \mathcal{D}(A_X) \Rightarrow f \in \mathcal{D}(A_Y) \quad \text{and} \quad A_X(f \circ \check{F}_{\mu}^{-1})=A_Y ( f) \circ \check{F}_{\mu}^{-1}. \end{align*} From \cite[p.\,49]{Burkhardt1983}, it is known that $2A_X(f)=\Delta_{\nu \circ F_{\mu}^{-1}, \Lambda}(f), \ f \in \mathcal{D}(A_X)$ with $\nabla_{\mu} f(0)=\nabla_{\mu} f(1)=0$. This, in tandem with \Cref{prop:laplace_backward} and the fact that $A_Y(f) \in C_{\nu,\mu}$, yields \[ \Delta_{\nu,\mu}(f)= \Delta_{\nu \circ F_{\mu}^{-1}, \Lambda}(f \circ \check{F}_{\mu}^{-1})\circ F_{\mu}=2A_Y(f) \quad \text{and} \quad \nabla_{\mu} f \circ \check{F}_{\mu}^{-1}(0)=\nabla_{\mu} f \circ \check{F}_{\mu}^{-1}(1)=0. \qedhere \] \end{proof} \end{Thm} \begin{proof} First, note that $\operatorname{supp}(\nu \circ F_{\mu}^{-1})=F_{\mu}(\operatorname{supp}(\nu))$. Therefore, for all $t \geq 0$, we have $ \check{F}^{-1}_{\mu}(X_t) \in \check{F}^{-1}_{\mu}(F_{\mu}(\operatorname{supp}(\nu)) ) $ almost surely by \Cref{SkipfreeP}. We will show that \eqref{SecondOrder} holds on $ \check{F}^{-1}_{\mu}(F_{\mu}(\operatorname{supp}(\nu)) )$, which determines $f$ uniquely. 
To see this, note that by \Cref{identity} we have $\nu( \check{F}^{-1}_{\mu}(F_{\mu}(\operatorname{supp}(\nu)) )\cap \operatorname{supp}(\nu))=1$, proving that the set $\check{F}^{-1}_{\mu}(F_{\mu}(\operatorname{supp}(\nu)) )$ intersects $\operatorname{supp}(\nu)$ densely. Let $x_1 \in \check{F}^{-1}_{\mu}(F_{\mu}(\operatorname{supp}(\nu)) )\cap \operatorname{supp}(\nu)$. Clearly, $\check{F}^{-1}_{\mu}(F_{\mu}(x_{1}))=x_{1}$. Applying \Cref{PotentialGapD} (iii) and \Cref{identity} yields \begin{align*} \mathbb{E}_{F_{\mu}(0^+)}\left[ \int_{0}^{\tau_{(-\infty,F_{\mu}(x_1))}} 2Af\circ \check{F}_{\mu}^{-1}(X_s)\;\d s \right] &=\int_{[F_{\mu}(0^+),F_{\mu}(x_1)]}(F_{\mu}({x_1})-y ) 2Af \circ \check{F}_{\mu}^{-1}(y)\;\d \nu \circ F_{\mu}^{-1}(y) \\ &=\int_{[0,x_1]}(F_{\mu}(x_1)-F_{\mu}(y)) 2Af(y)\;\d \nu (y). \end{align*} On the other hand, applying Dynkin's formula (\cite[p.~284, Proposition 1.5]{Revuz2013}) and using the fact that $Af$ is bounded and $\tau_{(-\infty,F_{\mu}(x_1))}$ is integrable yields \begin{align*} \mathbb{E}_{F_{\mu}(0^+)}\left[ \int_{0}^{\tau_{(-\infty,F_{\mu}(x_1))}} 2Af\circ \check{F}_{\mu}^{-1}(X_s) \;\d s \right] =\mathbb{E}_{F_{\mu}(0^+)}[ f (\check{F}_{\mu}^{-1}(X_{\tau_{(-\infty,F_{\mu}(x_1))}}) )]-f( \check{F}_{\mu}^{-1}(F_{\mu}(0^+))) =f(x_1)-f( \check{F}_{\mu}^{-1}(F_{\mu}(0^+))). \end{align*} Therefore, we have \begin{align}\label{Neumann} f(x_1)=f( \check{F}_{\mu}^{-1}(F_{\mu}(0^+)))+\int_{[0,x_1]}(F_{\mu}(x_1)-F_{\mu}(y)) 2Af(y)\;\d \nu (y). \end{align} This, combined with $f(0) = f (\check{F}^{-1}_{\mu} (F_{\mu}(0^+ )))$, shows \eqref{SecondOrder} for all $x\in \operatorname{supp}(\nu)$ and implies $\nabla_{\mu}f(0)=0$. In the same way, applying the second part of Lemma \ref{PotentialGapD} (iii) for $x=1^-$ and $x_2<1$ gives \[ f(x_2)-f(1)=\int_{[x_2,1]} 2 ( F_{\mu}(y)-F_{\mu}(x_2))Af(y) \;\d \nu(y). \] Therefore, it follows that $\nabla_{\mu}f(1)=0$. \end{proof} \begin{example} In the following we consider the case $\mu=\nu$. 
The occupation formula of the local time yields for $ t \geq 0$ \begin{align*} \Phi_t=\int L^{t}_x\;\d \nu \circ F_{\mu}^{-1}(x)= \int L^{t}_x \;\d \Lambda(x) =\int_{0}^t \textbf{1}_{[0,1]}(B_s) \;\d \Lambda(s). \end{align*} Now, for the case that $\mu$ is the $1/3$-Cantor measure, we demonstrate in Fig.\ \ref{fig} a simulation of a $\mu$-$\mu$-Liouville Brownian path. First, we need a simulation of a standard Brownian path $B_{t}$ as depicted in Fig.\ \ref{fig}(\subref{a}), the corresponding time-change function $\Phi_t$ with respect to $\Lambda$, see Fig.\ \ref{fig}(\subref{b}), and the associated $\Lambda$-Liouville Brownian path $B_{\check{\Phi}^{-1}_{t}}$ as shown in Fig.\ \ref{fig}(\subref{c}). Finally, its image under $\check{F}_{\mu}^{-1}$ then shows a realisation of a $\mu$-$\mu$-Liouville Brownian path as shown in Fig.\ \ref{fig}(\subref{d}). \begin{figure}[H] \begin{subfigure}[t]{0.49\textwidth} \input{tik} \vspace*{-1cm} \subcaption{Simulation of a Brownian path $B_{t}$.\hspace*{\fill}\label{a}} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{0.49\textwidth} \input{tik1} \vspace*{-1cm} \subcaption{The time-change function $\Phi_{t}$ with respect to the Brownian path $B_{t}$ and speed measure $\Lambda$.\label{b}} \end{subfigure}\\ \begin{subfigure}[t]{0.49\textwidth} \input{tik3} \vspace*{-1cm} \subcaption{$\Lambda$-Liouville Brownian path $B_{\check{\Phi}^{-1}_{t}}$.\hspace*{\fill}\label{c}} \end{subfigure} \hspace*{\fill} \begin{subfigure}[t]{0.49\textwidth} \input{tik5} \vspace*{-1cm} \subcaption{$\mu$-$\mu$-Liouville Brownian path $\check{F}^{-1}_{\mu}(B_{\check{\Phi}^{-1}_{t}})$.\hspace*{\fill}\label{d}} \end{subfigure} \caption{Simulation of generalized Liouville Brownian motion.\label{fig}} \end{figure} \end{example} The above theorem allows us to consider the set of (continuous) elements $f$ from $\mathcal{D}(2A)$. With this at hand we observe the following. 
\begin{Cor} $\mathcal{D}(A)=\mathcal{D}^s_{\pi/2, \pi/2}( \Delta_{\nu,\mu})$. \end{Cor} \begin{proof} We have $\mathcal{D}(A) \subseteq \mathcal{D}^s_{\pi/2, \pi/2 }( \Delta_{\nu,\mu} )$ and $2Af= \Delta_{\nu,\mu}f$ for all $f \in \mathcal{D}(A)$ by \Cref{GapDSec}. The operator $\Delta_{\nu,\mu}$ restricted to $ \mathcal{D}^s_{\pi/2, \pi/2 }( \Delta_{\nu,\mu} )$ is the infinitesimal generator of a Feller process (\Cref{Feller}). Therefore, \cite[Exercise 1.18]{Revuz2013} shows that the transition functions of both processes are equal. \end{proof} With \Cref{PotentialGapD} we are able to give an easy condition for calculating the walk dimension (under some additional assumptions). \begin{Thm}\label{ThmWalkDim} Let $(X_s)_{s \geq 0}$ be an $m$-Liouville Brownian motion starting in $x \in \operatorname{supp}(m)$. If, for $R(r)\coloneqq \inf\{s\geq r\colon s\in \operatorname{supp}(m)\}$ and $L(r)\coloneqq \inf\{s\geq r\colon -s\in \operatorname{supp}(m)\}$, we have \begin{align*} \lim_{r \to \infty}\frac{\log R(r)}{\log r} =\lim_{r \to \infty}\frac{\log L(r)}{\log r}=1, \end{align*} then, for $t\geq 0$, \begin{align}\label{Condition1WD} d_W(x)=1+t\iff \lim_{r \to \infty} \frac{\log{m((-r,r))} }{\log r}=t. \end{align} In particular, if the walk dimension exists, then it is independent of the starting point in $\operatorname{supp}(m)$. \end{Thm} \begin{proof} Set $S(r)\coloneqq R(r+x)\wedge L(r-x) $ and $T(r)\coloneqq R(r+x)\vee L(r-x)$. Using \Cref{PotentialGapD} and estimating the Green kernel gives \begin{align*} (S(r)/2) m((-S(r)/2,S(r)/2)) \leq \mathbb{E}_x [\tau_{(x-r,x+r)} ] \leq {T(r)} m((-T(r),T(r))). \end{align*} Taking logarithms, using our hypothesis, and dividing by $\log(r)$ then proves the equivalence. \end{proof} These ideas also carry over to the calculation of the local walk dimension. 
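To make the mechanism behind \eqref{Condition1WD} concrete, here is a small numerical sketch (ours, not part of the paper; the function names are our own). For the speed measure $m=\Lambda$, the Green kernel of Lemma \ref{PotentialGapD}(i) with $f\equiv 1$ recovers the classical expected exit time $(x-x_0)(x_1-x)$ of Brownian motion; for $x=0$, $x_0=-R$, $x_1=R$ this grows like $R^2$, i.e. $d_W=2=1+t$ with $t=1$, matching $m((-r,r))=2r$.

```python
# Illustrative check (assumption: speed measure m = Lebesgue measure, f = 1):
# integrate the Green kernel G_{x0,x1}(x,y) = 2 (x∧y - x0)(x1 - x∨y)/(x1 - x0)
# against dy and compare the growth of the expected exit time in R.
import math

def green(x, y, x0, x1):
    # Green kernel of Brownian motion killed on leaving (x0, x1)
    return 2.0 * (min(x, y) - x0) * (x1 - max(x, y)) / (x1 - x0)

def expected_exit_time(x, x0, x1, n=2000):
    # midpoint rule for the integral of y -> G_{x0,x1}(x, y) over (x0, x1)
    h = (x1 - x0) / n
    return sum(green(x, x0 + (k + 0.5) * h, x0, x1) for k in range(n)) * h

for R in (10.0, 100.0, 1000.0):
    tau = expected_exit_time(0.0, -R, R)
    # third column = log E_0[tau_(-R,R)] / log R, which tends to 2 = d_W
    print(R, tau, math.log(tau) / math.log(R))
```

The midpoint rule is exact here because the kernel is piecewise linear in $y$, so the printed exit times equal $R^2$ up to rounding.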
\begin{Cor} Assume $m$ is a finite and non-zero measure on $[0,1)$, let $\tilde{m} = m \star \delta_{\mathbb{Z}}$ and let $(X_t)_{t \geq 0}$ denote a Liouville Brownian motion with speed measure $\tilde{m}$. For all $x \in \operatorname{supp}( \tilde{m})$, we have $d_W(x)=2$. \end{Cor} \begin{proof} Since $ \tilde{m}((-r+x,r+x)) =2r\cdot m([0,1))+O(1)$ and $L(r-x)-r,R(r+x)-r\in [0, 1]$, the claim is a consequence of \Cref{ThmWalkDim} with $t=1$. \end{proof} \begin{Cor} Let $\nu$ be a Borel measure on $\mathbb{R}$ and let $ F_{\mu}$ be a non-decreasing and continuous function on $\mathbb{R}$ with associated Lebesgue-Stieltjes measure $\mu$ such that $\operatorname{supp}(\nu) \subset \operatorname{supp}(\mu)$. Denote by $(X_t)_{t \geq 0}$ the $m$-Liouville Brownian motion with $m \coloneqq \nu \circ F_{\mu}^{-1}$. Let $d_W$ denote the walk dimension of $(X_t)_{t \geq 0}$ and $\check{d}_W$ denote the walk dimension of $(\check{F}_{\mu}^{-1}(X_t))_{t \geq 0}$. For $x \in \operatorname{supp}(m)$, if \begin{align*} \forall r \in \mathbb{R} : F_{\mu}(-r)=-F_{\mu}(r), \ a \coloneqq \lim_{r \rightarrow \infty} \dfrac{\log F_{\mu}(r) }{\log r}>0 \quad \mbox{ and }\quad \lim_{r \rightarrow \infty} \dfrac{\log R(r) }{\log r}=\lim_{r \rightarrow \infty} \dfrac{\log L(r) }{\log r}=1, \end{align*} then \begin{align*} \lim_{r \rightarrow \infty} \frac{\log{m(\left(-r,r\right))} }{\log r}=t \iff 1+t=d_W\left(x\right)= a^{-1} \cdot \check{d}_W(\check{F}_{\mu}^{-1}(x)). \end{align*} \end{Cor} \begin{proof} Let $x \in \check{F}_{\mu}^{-1}\left(\operatorname{supp}(m)\right)$ and, for $r>0$, define \[ R^{\check{F}_{\mu}^{-1}}(r) \coloneqq \inf\{ s \geq r \colon s \in \check{F}_{\mu}^{-1}(\operatorname{supp}(m))\} \quad \text{and} \quad L^{\check{F}_{\mu}^{-1}}(r)\coloneqq \inf\{ s \geq r \colon -s \in \check{F}_{\mu}^{-1}(\operatorname{supp}(m)) \}. 
\] By assumption, $F_{\mu}(-r)=-F_{\mu}(r)$, and so \begin{align*} F_{\mu}(R^{\check{F}_{\mu}^{-1}}(r+x)) = R(F_{\mu}(x+r)), \ F_{\mu}(L^{\check{F}_{\mu}^{-1}}(r-x)) = L(F_{\mu}(r-x)), \end{align*} therefore we have \begin{align*} \inf\{ t \geq 0 \colon \check{F}_{\mu}^{-1}\left(X_t \right) \in \{ R^{\check{F}_{\mu}^{-1}}(r+x),L^{\check{F}_{\mu}^{-1}}(r-x)\}\} &= \inf\{ t \geq 0 \colon X_t \in \{ R(F_{\mu}(r+x)),L(F_{\mu}(r-x))\}\}. \end{align*} Define for $r>0$ large \[ m(r,x)\coloneqq\min \{F_{\mu}(r+x)-F_{\mu}(x),F_{\mu}(r-x)-F_{\mu}(x)\}, \ M(r,x)\coloneqq\max \{F_{\mu}(r+x)-F_{\mu}(x),F_{\mu}(r-x)-F_{\mu}(x)\},\] then \begin{align*} \mathbb{E}_{F_{\mu}(x)}[ \tau_{(F_{\mu}(x)-m(r,x),F_{\mu}(x)+m(r,x) )} ] \leq \mathbb{E}_{F_{\mu}(x)}[ \tau_{(F_{\mu}(-r+x),F_{\mu}(r+x))}] \leq \mathbb{E}_{F_{\mu}(x)}[ \tau_{(F_{\mu}(x)-M(r,x),F_{\mu}(x)+M(r,x) )} ]. \end{align*} The statement now follows with the help of Theorem \ref{ThmWalkDim} and the assumption on $F_{\mu}$. \end{proof}
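As a small numerical companion (ours, not from the paper) to Theorem \ref{Freiberg}: the exponent $u$ is the unique zero in $(0,1)$ of $u \mapsto \sum_i (\sigma_i p_i)^u - 1$, which a plain bisection finds; the Cantor-type data of the example following the theorem recovers $u = \delta_K/(\delta_K+\delta_L)$.

```python
# Illustrative bisection solver for the spectral exponent u of Theorem `Freiberg`
# (function name is our own). u in (0,1) solves sum_i (sigma_i p_i)^u = 1; the
# left-hand side is strictly decreasing in u, equals M > 1 as u -> 0 and
# sum_i sigma_i p_i < 1 at u = 1, so bisection converges.
import math

def spectral_exponent(sigmas, ps, tol=1e-12):
    def g(u):
        return sum((s * p) ** u for s, p in zip(sigmas, ps)) - 1.0
    lo, hi = 1e-12, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Uniform 1/3-Cantor data: s_1 = s_2 = 1/3, delta_K = delta_L = log 2 / log 3,
# hence sigma_i = s_i^{delta_K} = 1/2 and p_i = s_i^{delta_L} = 1/2.
d = math.log(2) / math.log(3)
w = [(1.0 / 3.0) ** d] * 2           # = [0.5, 0.5] up to rounding
print(spectral_exponent(w, w))       # ≈ 0.5 = delta_K / (delta_K + delta_L)
```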
\section*{Introduction} In their famous paper \cite{CG}, Clemens and Griffiths represented the intermediate Jacobian $J_1(X)$ of a smooth cubic threefold $X$ as the Albanese variety of the family of lines on $X$ and parametrized the theta divisor on it by the Abel--Jacobi images of pairs of lines. They used this parametrization to establish the Torelli Theorem for $X$ and the non-rationality of $X$. Similar results were obtained by Tyurin \cite{Tyu}. Later several authors constructed parametrizations of the intermediate Jacobians and, sometimes, of their theta divisors for other Fano threefolds, using families of curves of low degree lying on them, see, for example, \cite{CV}, \cite{De-1}, \cite{De-2}, \cite{I}, \cite{Lo}, \cite{Le}, \cite{T-1}, \cite{T-2}, \cite{V-1}, \cite{V-2}, \cite{We}. However, the following natural question has not been investigated: given a family of connected curves parametrized by some variety $B$, what are the fibers of the Abel--Jacobi map considered on $B$ itself and not on $\operatorname{Alb}\nolimits (B)$? By analogy with the Abel--Jacobi map from divisors of a given degree $d$ on a curve of genus $g$ to its Jacobian, which is smooth with projective spaces as fibers, provided that $d\geq 2g-1$, we might expect that the Abel--Jacobi map in our context behaves well for families of curves of rather big degree. The papers mentioned above search, on the contrary, for the families of the smallest possible degree whose image generates the intermediate Jacobian. For the cubic threefold $X$, the only known facts in this direction concern the map of difference, defined on the pairs of lines in $X$, which is onto the theta divisor and is generically of degree 6 (see \ref{psi}), and the Abel--Jacobi map of the family of rational cubics in $X$, which is also onto the theta divisor and its generic fiber is ${\mathbb P}^2$ (see Proposition \ref{prop41}). 
There are no similar results about the Abel--Jacobi parametrizations of the entire intermediate Jacobian. In the present paper, we study the Abel--Jacobi map on the 10-dimensional component $H$ of the Hilbert scheme whose general point represents an elliptic normal quintic in $X$. We find an open subset ${\mathcal H}\subset H$, on which the map is smooth and all the components of its fibers are isomorphic to ${\mathbb P}^5$ (Theorem \ref{main}). The analogy with the Jacobians of curves is not complete, because the fibers may consist of {\em many} copies of the projective space, and almost nothing is known about the boundary of ${\mathcal H}$ in $H$ (see Proposition \ref{aftermain}). We also give an unexpected interpretation of the variety $M=M_X$ parametrizing the connected components of the fibers of the Abel--Jacobi map $\Phi : {\mathcal H}\lra J_1(X)$: we show that it is isomorphic to an open subset in the component of the moduli space $M_X(2;0,2)$ of stable rank 2 vector bundles on $X$ with Chern numbers $c_1=0, c_2=2$, whose general member is obtained by Serre's construction from elliptic quintics in $X$. Thus, there exists a natural Abel--Jacobi map from this component of the moduli space of stable vector bundles on $X$ to $J_1(X)$, induced by $\Phi$; this map is nothing but the Abel--Jacobi map of the moduli space, defined by the second Chern class of the vector bundle with values in the Chow group of 1-cycles modulo rational equivalence. Moreover, this map is \'etale on the open subset $M$, see Theorem \ref{main}. As far as we know, the above theorem provides the first example of a moduli space of vector bundles which has a dominant map to an abelian variety, different from the Picard and Albanese varieties of the base. It also shows that the moduli spaces of vector bundles sometimes yield more efficient parametrizations of the intermediate Jacobian of a threefold than families of curves. 
This construction can be relativized over the family of nonsingular hyperplane sections of a cubic fourfold $V$. It gives a moduli component of torsion sheaves on $V$ with supports on the hyperplane sections $X=H\cap V$, whose restrictions to $X$ are rank two vector bundles from $M_X$. One can show that the Yoneda pairing induces a symplectic structure on it. This is a new example of a moduli space of sheaves possessing a symplectic structure; the known ones parametrize sheaves on hyperkaehler manifolds (see \cite{Muk} for surfaces, and \cite{K} for higher dimensional varieties). It is a 10-dimensional symplectic variety, covering the one constructed by Donagi and Markman in \cite[Example 8.22]{D-M} (work in process). Now, we will briefly describe the contents of the paper by sections. In Section 1, we recall basic facts about the cubic threefolds, the Abel--Jacobi map and its differential, following essentially \cite{CG} and \cite{We}. Section 2 explains Serre's construction, applied to elliptic normal quintics in $X$. It yields a 5-dimensional component of the moduli space of stable vector bundles of rank 2 on $X$. The instability of vector bundles obtained by Serre's construction from {\em non-normal} elliptic quintics is also proved. Section 3 gives the proof of the irreducibility of the family of rational normal quartics $\Gamma\subset X$. First, the irreducibility of the family of rational twisted cubics lying in nonsingular hyperplane sections of $X$ is proved, using the Abel--Jacobi map of these curves to the theta divisor in $J_1(X)$. Next, the irreducibility of the family of curves of the form $D+l$ is proved, where $D$ is a twisted cubic as above and $l$ a line meeting $D$ transversely at 1 point. We also show that the Hilbert scheme $\HILB{4n+1}$ is smooth at points representing such curves $D+l$, and that they are strongly smoothable into a rational normal quartic (the techniques of \cite{HH} are used). This implies the desired result. 
Section 4 is devoted to the normal elliptic quintics $C\subset{\mathbb P}^4$ contained in a cubic threefold $X$. It is proved that the Hilbert scheme $\Hilb$ is smooth at points representing such curves $C$, and that their family is irreducible for a {\em general} $X$. The proof is reduced to that of the irreducibility of the family of rational normal quartics $\Gamma\subset X$, using the liaison defined by cubic scrolls ${\mathbb F}_1\subset {\mathbb P}^4$: ${\mathbb F}_1\cap X=C\cup\Gamma$. Section 5 contains the main result of the paper. We relativize Serre's construction of Section 2 over some open subset $H_0\subset H$, and construct a morphism $\phi :H_0\lra M_X(2;0,2)$ with fibers ${\mathbb P}^5$. We prove that $\phi$ is, locally in the \'etale topology, the structure projection of a projectivized vector bundle of rank 6. The smoothness of the Abel--Jacobi map $\Phi :{\mathcal H}\lra J_1(X)$ is proved using the technique of the Tangent Bundle Sequence, and this, together with the fact that an Abel--Jacobi map should contract to a point any projective space, implies that the components of fibers of $\Phi$ are exactly the fibers of $\phi$, isomorphic to ${\mathbb P}^5$. We also study the extension $\tilde{\Phi}$ of $\Phi$ to the boundary component of ${\mathcal H}\subset H$, formed by non-normal elliptic quintics, and find that its image coincides with a translate of $F+F\subset \operatorname{Alb}\nolimits (F)$, where $F$ is the Fano surface of $X$, and that the differential of $\tilde{\Phi}$ is degenerate along this component. \smallskip {\em Acknowledgements}. We acknowledge with pleasure the hospitality of the Mathematisches Forschungsinstitut at Oberwolfach, where we made an essential part of the work during our RiP stay. We are grateful to J.~Harris for his idea of the link between elliptic quintics and rational quartics on a cubic threefold, and to A.~Beauville for his remark, cited in \ref{phi}. 
We are also grateful to K.~Hulek, A.~Iliev, E.~Markman, S.~Mukai, C. Peskine, and F.-O.~Schreyer for discussions. \section{Generalities and known results} \subsection{} We will start with a reminder on cubic threefolds. Let $X$ be a smooth cubic hypersurface in ${\mathbb P}^4$. It is a Fano variety, that is, its anticanonical sheaf $\omega_X^{-1}\simeq{\mathcal O}_X(2)$ is ample, and the following properties are obtained by standard techniques for Fano threefolds \cite{Isk}: \begin{equation}\label{generalities-1}\begin{array}{c} h^i({\mathcal O}_X(k))=0\; \mbox{for}\; i=1,2, k\in{\mathbb Z}\; , \\ \; h^{i,0}=h^{0,i}=0 \; \mbox{for}\; i>0, h^{1,2}=h^{2,1}=5\; , \end{array} \end{equation} \begin{equation}\label{generalities-2}\begin{array}{c} \operatorname{Pic}\nolimits (X)=A_2(X)=H^2(X,{\mathbb Z} )={\mathbb Z} \cdot [{\mathcal O}_X(1)]\; , \\ B_1(X)=H^4(X,{\mathbb Z} )={\mathbb Z}\cdot l \; .\end{array} \end{equation} Following \cite[Ch. 19]{Fu}, we denote by $A_i$, resp. $B_i$ the Chow group of algebraic cycles of dimension $i$ modulo rational, resp. algebraic equivalence, and $l$ is the class of a line. \subsection{} The geometry of lines (and, sometimes, of conics) plays a crucial role in the study of a Fano threefold $X$, essentially for the following two reasons. Firstly, they give (a part of) generators of the group of birational automorphisms of $X$ \cite{Isk-2}, and secondly, they are used for the description of the Abel--Jacobi map $AJ:\operatorname{Hom}\nolimits_1X\lra J_1(X)$, where we denote, following again \cite{Fu}, by $\operatorname{Hom}\nolimits_iX$ the subgroup in the group $Z_iX$ of algebraic $i$-cycles, consisting of cycles homologous to 0, and $J_1(X)$ is the intermediate Jacobian of $X$. 
According to \cite{CG}, the lines on a nonsingular cubic threefold $X$ are parametrized by a nonsingular surface $F=F(X)\subset \operatorname{Gr} (2,5)$ with invariants \begin{equation}\label{hijF} h^{1,0}(F)=5\; ,\; h^{2,0}(F)=10, \end{equation} which can be thought of as a component of the Hilbert scheme $\operatorname{Hilb}\nolimits^{n+1}_X$. The smoothness of $\operatorname{Hilb}\nolimits^{n+1}_X$ at the points of $F$ follows from the calculation of the normal bundle of a line $l\subset X$: \begin{equation}\label{NlX} \begin{array}{c} {\mathcal N}_{l/X}\simeq {\mathcal O}\oplus{\mathcal O}\;\; (\mbox{for}\; l\in F\setminus D) \\ {\mathcal N}_{l/X}\simeq {\mathcal O} (-1)\oplus{\mathcal O} (1)\;\; (\mbox{for}\; l\in D), \end{array}\end{equation} where $D\subset F$ is a curve. Formulas (\ref{NlX}) imply that $h^1({\mathcal N}_{l/X})=0$, hence $F$ is smooth by \cite{G}. $F$ is called the {\em Fano surface} of $X$. \subsection{}\label{AJ_B} We will recall some known facts about the Abel--Jacobi map \cite{Gri}, \cite{CG}, \cite{Tyu}, \cite{We}. The intermediate Jacobian of a threefold $X$ is defined by $$ J_1(X)=(F^2H^3(X,{\mathbb C} ))^*/im\: (H_3(X,{\mathbb Z} )), $$ where $F^2=H^{3,0}+H^{2,1}$ is a term of the Hodge filtration, and $H_3(X,{\mathbb Z} )$ is mapped to $(F^2H^3(X,{\mathbb C} ))^*$ by integration over cycles. For a cubic threefold, $H^{3,0}(X)=0$, so, by the Hodge index theorem, the intersection form on $H_3(X)$ defines a principal polarization on $J_1(X)$. The Abel--Jacobi map is defined as follows: let $\gamma\in\operatorname{Hom}\nolimits_1X$. Then there exists $\Gamma\in H_3(X,{\mathbb Z} )$ such that $\gamma = \partial\Gamma$, and $AJ(\gamma )$ is given by $$ \left[ \omega\mapsto\int_\Gamma \omega\right] \mod im\: H_3(X,{\mathbb Z} ). $$ For a family of 1-cycles $\{ Z_b\}$ parametrized by some variety $B$ with a reference point $\beta$, this defines a set-theoretic map $AJ_B:B\lra J_1(X), \; b\mapsto AJ(Z_b-Z_\beta )$. 
By \cite[II]{Gri}, $AJ_B$ is analytic if $B$ is nonsingular. Hence $AJ_B$ factors through the Albanese variety of (the desingularization of) $B$; the resulting morphism $\operatorname{Alb}\nolimits (B)\lra J_1(X)$ is also called the Abel--Jacobi map. Applying this construction to the family of lines on a cubic threefold $X$, Clemens and Griffiths obtain a natural map $\operatorname{Alb}\nolimits (F)\lra J_1(X)$. By (\ref{generalities-1}) and (\ref{hijF}), both abelian varieties are of dimension 5. The following facts are known: \subsection{} The Abel--Jacobi map establishes an isomorphism of abelian varieties $\operatorname{Alb}\nolimits (F)\simeq J_1(X)$ (\cite[(0.9)]{CG}). \subsection{} For any fixed $s\in F$, the map $i_s:F\lra J_1(X)$, $t\mapsto [l_t-l_s]$ is a closed embedding, where $l_t$ denotes the line in $X$ corresponding to a point $t\in F$ (\cite[\S 3]{Tyu}). \subsection{}\label{theta3} The class of $i_s(F)$ in $B_2(J_1(X))$ is $\frac{1}{3!}\theta^3$, where $\theta$ is the class of the theta divisor defining the principal polarization on $J_1(X)$ (\cite[(0.9)]{CG}). \subsection{}\label{psi} The map $\psi :F\times F\lra J_1(X)$, $(s,t)\mapsto [l_t-l_s]$ is generically 6-to-1 over its image, and $[\psi (F\times F)]=\theta$ (ibid, (0.10)). A precise description of the singularities of the map $\psi$ is given in \cite{Tyu}. See also loc. cit., or \cite[Example 4.3.2]{Fu} and references therein for further information on the geometry of the Abel--Jacobi map of lines. \subsection{}\label{phi}\label{beau} We will use in the sequel another map $\tilde{\psi} :F\times F\lra J_1(X)$, $(s,t)\mapsto [l_s+l_t-2l_0]$, where $0\in F$ is some reference point. The same arguments as in \cite{CG} show that $\tilde{\psi} $ is generically finite over its image, and \ref{theta3} implies that its class in $B_2(J_1(X))$ is $m\theta$ for some $m\in{\mathbb Z}, m>0$.
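Let us note that the degree $d$ of $\tilde{\psi}$ over its image and the integer $m$ are subject to the relation $dm=6$. Indeed, since $(-1)^\ast\theta$ is algebraically equivalent to $\theta$, the cycles $\tilde{\psi}_\ast [F\times F]$ and $\psi_\ast [F\times F]$ have the same class in $B_2(J_1(X))$; by \ref{psi}, the latter is $6\theta$, while the former is $dm\theta$, whence
$$
dm\,\theta =6\,\theta\;\;\mbox{in}\;\; B_2(J_1(X))\; ,
$$
in agreement with the values $d=2$, $m=3$ cited below.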
The generic finiteness follows from the tangent bundle theorem for $F$ \cite[12.4]{CG}, which implies that whenever $l_s,l_t$ is a pair of skew lines, the tangent spaces to $i_0(F)$ in $\operatorname{Alb}\nolimits (F)$ at $i_0(s),i_0(t)$ are transversal (when translated to $0\in\operatorname{Alb}\nolimits (F)$), so that both the sum map ($\tilde{\psi} $) and the difference map ($\psi$) have injective differentials at $(s,t)$. Beauville communicated to us that the values of the degree $d$ of the map $\tilde{\psi} :F\times F\lra \tilde{\psi} (F\times F)$ and of $m$ follow from the description of the intermediate Jacobian in terms of Prym varieties \cite{B}: $d=2$ and $m=3$. \subsection{Technique of TBS}\label{TBS} Now we will describe, following \cite[Sect. 2]{We}, the technique of the ``tangent bundle sequence'', which will be applied later to elliptic quintics in a cubic threefold. Let $X\hookrightarrow W$ be an embedding of a smooth projective threefold $X$ into a smooth quasiprojective fourfold $W$, and $\{ Z_b\}_{b\in B}$ a flat family of curves in $X$ parametrized by a nonsingular variety $B$ over ${\mathbb C}$. Then a choice of a base point $\beta\in B$ determines the Abel--Jacobi map $B\lra J_1(X)$. Let $Z=Z_b$ be a scheme-theoretic locally complete intersection fiber of the family. Then the differential of the Abel--Jacobi map at $b$ factors into the composition of the two natural maps: $$ T_{B,b}\lra H^0(Z,{\mathcal N}_{Z/X}) $$ and $$ \psi_Z:H^0(Z,{\mathcal N}_{Z/X})\lra (H^0(X,\Omega^3_X)\oplus H^1(X,\Omega^2_X))^*=T_{J_1(X),0}. $$ The $H^0(X,\Omega^3_X)^*$-component of $\psi_Z$ is always 0, so in the sequel we will consider and denote by the same symbol $\psi_Z$ the map to $H^1(X,\Omega^2_X)^*$. In \cite{We}, this factorization is given for $Z$ smooth with a reference to \cite[II]{Gri}.
This holds also in a more general context \cite{Fl}: if we identify $H^0({\mathcal N}_{Z/X})$ with $\operatorname{Ext}\nolimits^1_X({\mathcal F} ,{\mathcal F})$, where ${\mathcal F} ={\mathcal O}_Z$, then $\psi_Z$ can be represented as the composition $\operatorname{Ext}\nolimits^1_X({\mathcal F} ,{\mathcal F}) \raisebox{0.5 ex}{$\begin{CD} @>{\cup at({\mathcal F} )}>> \end{CD}$} \operatorname{Ext}\nolimits^2_X({\mathcal F} ,{\mathcal F}\otimes\Omega^1 ) \stackrel{\operatorname{Tr}}{\lra} H^2(\Omega^1)$, where $at({\mathcal F} )\in\operatorname{Ext}\nolimits^1_X({\mathcal F} ,{\mathcal F}\otimes\Omega^1 )$ denotes the Atiyah--Illusie class of a coherent sheaf ${\mathcal F}$. It is convenient to describe $\psi_Z$ via its conjugate $\psi_Z^*$, which fits into the following commutative square: \begin{equation}\label{CDWelters} \begin{CD} H^0(X,{\mathcal N}_{X/W}\otimes\omega_X) @>{R}>> H^1(X,\Omega^2_X) \\ @V{r_Z}VV @VV{\psi_Z^*}V \\ H^0(Z,{\mathcal N}_{X/W}\otimes\omega_X|_Z) @>{\beta_Z}>> H^0(Z, {\mathcal N}_{Z/X})^*.\\ \end{CD} \end{equation} Here $r_Z$ is the map of restriction to $Z$, and the whole square (upon natural identifications) is a part of the commutative diagram of long exact cohomology sequences associated to the following commutative diagram of sheaves: \begin{equation}\label{CDsheaves} \begin{array}{ccccccccc} \scriptstyle{ 0} &\scriptstyle{ \rar} &\scriptstyle{ \Omega^2_X\otimes{\mathcal O}_Z} &\scriptstyle{ \rar} & \scriptstyle{ \Omega^3_W\otimes{\mathcal N}_{X/W}\otimes{\mathcal O}_Z }&\scriptstyle{ \rar} & \scriptstyle{ \Omega^3_X\otimes{\mathcal N}_{X/W}\otimes{\mathcal O}_Z }&\scriptstyle{ \rar }&\scriptstyle{ 0 }\\ & & \downarrow & &\downarrow & & \| & & \\ \scriptstyle{ 0} &\scriptstyle{\rar} &\scriptstyle{ \Omega^3_X\otimes {\mathcal N}_{Z/X} }&\scriptstyle{ \rar} &\scriptstyle{ \Omega^3_X\otimes{\mathcal N}_{Z/W}} & \scriptstyle{ \rar} &\scriptstyle{ \Omega^3_X\otimes{\mathcal N}_{X/W}\otimes{\mathcal O}_Z} &\scriptstyle{ \rar} 
&\scriptstyle{ 0} \end{array} \end{equation} See \cite[2.8]{We} for definitions of the maps. \section{Vector bundles from elliptic quintics} \label{sect.5} In the sequel, $X$ will denote a nonsingular cubic threefold in ${\mathbb P}^4$, and $H$ or $H(X)$ the union of components of $\Hilb$ having smooth normal elliptic quintics $C\subset{\mathbb P}^4$ as their generic points. We will see later in Section \ref{section-quintics} that $H(X)$ is nonempty and has dimension 10 for any $X$. We will denote by $[C]_V$ the point representing a subscheme $C\subset V$ in the Hilbert scheme of $V$; the subscript $V$ may be omitted if this causes no confusion. In this section, we will study the vector bundles $\mathcal E$ on $X$ obtained by Serre's construction applied to an elliptic quintic $C$ in $X$. They are defined by the following extension of ${\mathcal O}_X$-modules: \begin{equation}\label{serre} 0\lra {\mathcal O}_X\lra {\mathcal E} (1) \lra {\mathcal I}_C(2) \lra 0\; , \end{equation} where ${\mathcal I}_C={\mathcal I}_{C,X}$ is the ideal sheaf of $C$ in $X$. Since the class of $C$ in $B_1(X)$ is $5l$, the sequence (\ref{serre}) implies that $c_1({\mathcal E} )=0$ and $c_2({\mathcal E} )=2l$ (indeed, $c_1({\mathcal E} (1))=2h$ and $c_2({\mathcal E} (1))=[C]=5l$, where $h$ is the class of a hyperplane section, so $c_2({\mathcal E} )=c_2({\mathcal E} (1))-h\cdot c_1({\mathcal E} (1))+h^2=5l-h^2=2l$, since $h^2=3l$). Moreover, $\det{\mathcal E}$ is trivial, and hence ${\mathcal E}$ is self-dual as soon as it is a vector bundle (that is, ${\mathcal E}^\ast\simeq{\mathcal E}$). We will verify that there exists a unique non-trivial extension (\ref{serre}) for a given $C$ (Corollary \ref{unique-ext}). \begin{lemma}\label{dim=6} For any smooth $C\in H$, we have: a) $\dim\operatorname{Ext}\nolimits^1({\mathcal I}_C(2),{\mathcal O}_X )=1$; b) for $k\geq 2,\; h^0({\mathcal I}_C(k))=\binom{4+k}{4}-\binom{1+k}{4} -5k,\; h^1({\mathcal I}_C(k))=h^2({\mathcal I}_C(k))=0$; $h^0({\mathcal I}_C(1))=h^1({\mathcal I}_C(1))$ is $0$ if $C$ is not contained in a hyperplane, and $1$ otherwise, and $h^2({\mathcal I}_C(1))=0$; c) $h^0({\mathcal I}_C (2))=5,h^0({\mathcal E} (1))=6,h^i({\mathcal E} (1))=0$ for $i>0$.
\end{lemma} \begin{proof} a) Applying $\operatorname{Ext}\nolimits (\cdot ,{\mathcal O}_X)$ to the restriction sequence \begin{equation}\label{restriction} 0\lra{\mathcal I}_C(k)\lra{\mathcal O}_X(k)\lra{\mathcal O}_C(k)\lra 0 \end{equation} for $k=2$, we obtain: \begin{equation}\label{Ext}\begin{array}{r} \operatorname{Ext}\nolimits^1({\mathcal O}_X (2),{\mathcal O}_X)\lra\operatorname{Ext}\nolimits^1({\mathcal I}_C(2),{\mathcal O}_X)\lra \operatorname{Ext}\nolimits^2({\mathcal O}_C (2),{\mathcal O}_X)\\ \lra\operatorname{Ext}\nolimits^2({\mathcal O}_X (2),{\mathcal O}_X) \end{array} \end{equation} By (\ref{generalities-1}), the left- and right-hand terms vanish. So, $\operatorname{Ext}\nolimits^1({\mathcal I}_C(2),{\mathcal O}_X)= \operatorname{Ext}\nolimits^2({\mathcal O}_C (2),{\mathcal O}_X)=H^0(X,{\mathcal Ext}_{{\mathcal O}_X}^2({\mathcal O}_C,\omega_X))= H^0(C,\omega_C)\simeq{\mathbb C}$. b) Looking again at (\ref{restriction}), we see that the assertion for $k\geq 1$ follows from the projective normality of $C$ \cite[Proposition IV.1.2]{Hu} in the case when $C$ does not lie in a hyperplane. Let now $C\subset{\mathbb P}^3$. If $k=1$, the assertions follow easily from (\ref{restriction}). For $k= 2$, it is sufficient to show that $C$ is not contained in a quadric. Assume the contrary. $C$ is not contained in a nonsingular quadric (a smooth curve of bidegree $(a,b)$ with $a+b=5$ has genus $(a-1)(b-1)\neq 1$), nor in the nonsingular locus of a quadric cone, since a curve missing the vertex has even degree. So it lies in a quadric cone $Q\subset{\mathbb P}^3$ and passes through the vertex. By degree considerations, it meets each generator of the cone twice outside the vertex. So, blowing up the vertex, we obtain a nonsingular curve $\tilde{C}$ of genus 1 in the Hirzebruch surface ${\mathbb F}_2$ with class $2s+af$, where $s,f$ are the standard generators of $\operatorname{Pic}\nolimits {\mathbb F}_2$ such that $s^2=-2, f^2=0, (s\cdot f)=1$.
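Recall that $K_{{\mathbb F}_2}=-2s-4f$, so that the adjunction formula for $\tilde{C}\in |2s+af|$ reads
$$
2g(\tilde{C})-2=\tilde{C}\cdot (\tilde{C}+K_{{\mathbb F}_2})=(2s+af)\cdot (a-4)f=2(a-4)\; .
$$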
Computing the canonical class of $\tilde{C}$, we find immediately $a=4$, hence $(s\cdot \tilde{C})=0$, which implies that $C$ does not pass through the vertex. This contradicts our assumptions. Hence $C$ does not lie in any quadric. We have seen that $h^1({\mathcal I}_C (2))=h^2({\mathcal I}_C (1))=0$; obviously, also $h^3({\mathcal I}_C)=0$, so that $C$ is 3-regular in the sense of Castelnuovo--Mumford. Castelnuovo's Proposition in Lecture 14 of \cite{Mum} settles the case $k>2$. c) The first equality follows from b) with $k=2$. The remaining ones are immediate consequences of (\ref{serre}) and of Serre duality. \end{proof} \begin{corollary}\label{unique-ext} For any smooth $C\in H$, there exists a unique (up to isomorphism) extension (\ref{serre}) with ${\mathcal E}$ locally free. \end{corollary} \begin{proof} The uniqueness follows from Lemma \ref{dim=6}, a). The local freeness follows from the sheafified version of (\ref{Ext}), giving ${\mathcal Ext}_{{\mathcal O}_X}^1({\mathcal I}_C(2),{\mathcal O}_X)= {\mathcal Ext}_{{\mathcal O}_X}^2({\mathcal O}_C,\omega_X)=\omega_C$, and from the following lemma due to Serre: \begin{lemma}[{\cite[Ch. I, Lemma 5.1.2]{Okonek}}] Let $X$ be a nonsingular variety, $C$ a locally complete intersection of codimension 2 in $X$, $L$ an invertible sheaf on $X$, and $$ 0\lra {\mathcal O}_X\lra {\mathcal E} \lra {\mathcal I}_C \otimes L \lra 0\ $$ an extension given by a class $e\in \operatorname{Ext}\nolimits^1({\mathcal I}_C\otimes L,{\mathcal O}_X)$. Then ${\mathcal E}$ is locally free of rank 2 if and only if the image of $e$ in $H^0(C,{\mathcal Ext}_{{\mathcal O}_X}^1({\mathcal I}_C\otimes L,{\mathcal O}_X))$ generates the stalk of ${\mathcal Ext}_{{\mathcal O}_X}^1({\mathcal I}_C\otimes L,{\mathcal O}_X)$ at every point of $C$.
\end{lemma} \end{proof} \begin{corollary}\label{P5toH} If $C$ is smooth and is not contained in a hyperplane, then any non-zero section of ${\mathcal E} (1)$ has a locally complete intersection curve with zero canonical class as its zero locus, and this defines a natural morphism ${\mathbb P}^5={\mathbb P} (H^0(X,{\mathcal E} (1)))\lra H$ whose image contains $[C]$. \end{corollary} \begin{proof} The scheme of zeros of a section of a rank 2 vector bundle on a nonsingular variety is a locally complete intersection as soon as it is of codimension 2; the assertion about the canonical class follows by adjunction. So, we have to show that a non-zero section of ${\mathcal E} (1)$ cannot vanish on a surface $S\subset X$. By (\ref{generalities-2}), ${\mathcal O}_X(S)\simeq{\mathcal O}_X(d)$ for some $d>0$, hence it suffices to show that $h^0({\mathcal E} (1-d))= 0$ for all $d> 0$. Twisting (\ref{serre}) by ${\mathcal O}_X(-1)$, we see that if $C$ is not contained in a hyperplane, that is, $h^0({\mathcal I}_C(1))=0$, then $h^0({\mathcal E} )=0$, and a fortiori $h^0({\mathcal E} (1-d))= 0$ for all $d>0$. This ends the proof. \end{proof} For future use in Section \ref{factor}, we will also study the case of non-normal elliptic quintics. \begin{proposition}\label{space-C} If $C$ is a smooth elliptic quintic contained in a hyperplane section $S$ of $X$, then there exists a unique pair of possibly coincident lines $l_1\cup l_2$ in $S$, such that the zero locus of any section of ${\mathcal E} (1)$, if not of codimension 2, is of the form $S'\cup l_1\cup l_2$ for some hyperplane section $S'$ of $X$. In this case $C$ is rationally equivalent to a curve of the form $C^3+l_1+l_2$, where $C^3$ is a plane cubic in $S$. Conversely, any curve of the form $C^3+l_1+l_2$ on a general cubic threefold $X$, where $C^3$ is a plane cubic curve and $l_1,l_2$ a pair of disjoint lines, is rationally equivalent to a smooth elliptic quintic contained in a hyperplane section of $X$.
\end{proposition} \begin{proof} As in the previous corollary, we find a section $s$ of ${\mathcal E} (1)$ vanishing on $S$, and a section $s_0$ of ${\mathcal E}$ such that $s=Ls_0$, where $L$ is the linear form defining $S$ in $X$. As $c_2( {\mathcal E} )=2l$, the scheme of zeros $Z$ of $s_0$ is either a conic or a pair of lines; $Z$ may be non-reduced (a double line), but it has no embedded points, for it is a locally complete intersection. As $h^0({\mathcal E} (1))=6$, we deduce from the exact triple \begin{equation}\label{nonstable} 0\lra {\mathcal O}_X(1)\stackrel{\alpha}{\lra} {\mathcal E} (1)\stackrel{\beta}{\lra} {\mathcal I}_Z(1)\lra 0 \end{equation} that $h^0({\mathcal I}_Z(1))=1$, so that $Z$ is contained in a unique hyperplane. This eliminates the case of the conic, as well as that of a planar double line. Thus, $\chi({\mathcal O}_Z(n))=2n+2$ and $<Z>={\mathbb P}^3$, i.e., either $Z$ is a disjoint union $Z=l_1\bigsqcup l_2$ of two lines, or $Z$ is a double non-planar locally complete intersection structure on a line $l_1=l_2$ (the latter is clearly the limit of the former). We will denote this shortly as $Z=l_1\cup l_2$. Moreover, we see from (\ref{nonstable}) that ${\mathcal E} (1)$ has a 5-dimensional subspace of sections $L's_0$ with $L'\in H^0(X,{\mathcal O}_X(1))$ whose zero locus is of the form $S'\cup l_1\cup l_2$, where $S'=\{ L'=0\}$ is a hyperplane section of $X$. Let $s_1$ be the section of ${\mathcal E} (1)$ vanishing exactly on $C$, and $L'$ a general linear form. Then the pencil of sections $\lambda_0L's_0+\lambda_1s_1$ defines a pencil of curves, degenerating $C$ into $C^3+l_1+l_2$ with $C^3\subset S'$ of degree 3. Now, $s_1$ is not in the image of $\alpha$ in (\ref{nonstable}), and it vanishes on $C$. Hence $\beta (s_1)\neq 0$, and $\beta (s_1)$ vanishes on $C$. Therefore $C$ lies in the unique hyperplane containing $Z=l_1\cup l_2$; this hyperplane is $\{ L=0\}$, and $l_1\cup l_2\subset S$. It remains to see that $C^3\subset S$.
By construction, $C^3\subset S'$ for another hyperplane section $S'$ of $X$. If $C^3$ were not contained in $S\cap S'$, then the general member of the pencil of curves defined by the sections $\lambda_0L's_0+\lambda_1s_1$ would be a smooth elliptic curve not contained in a hyperplane, because $C^3\cup l_1\cup l_2$ is not. This contradicts Corollary \ref{P5toH}. Hence $C^3 \subset S\cap S'$ is a plane cubic curve. The fact that any curve of the form $C^3+l_1+l_2$ with $l_1\cap l_2 =\varnothing$ is rationally equivalent to a smooth quintic reduces to the study of linear systems on cubic surfaces. We can always arrange things so that $C^3,l_1,l_2$ lie in one hyperplane and $C^3$ meets each of the lines $l_1,l_2$ transversely at 1 point, because all the plane sections $C^3$ of $X$ are rationally equivalent to each other, being parametrized by the rational Grassmannian variety $G(3,5)$. We obtain in this way the linear system $|C^3+l_1+l_2|$ on the cubic surface $S=<l_1,l_2>\cap X$. If $S$ is nonsingular, we can represent it as ${\mathbb P}^2$ with 6 points $P_1,\ldots ,P_6$ blown up. Let $e_0$ be the inverse image of a line in ${\mathbb P}^2$, and $e_i$ the exceptional divisors over $P_i$ ($i=1,\ldots ,6$). We can choose this representation in such a way that $l_1=e_1,l_2=e_2$. We have $C^3\in |h|$, where $h=3e_0-e_1-\ldots -e_6$ is the hyperplane section, so that $C^3+l_1+l_2\in |3e_0-e_3-\ldots -e_6|$ is obviously smoothable in its linear system. To treat the case when $S$ is singular, we will use the assumption that $X$ is general. This restricts the possible degenerations of the hyperplane sections $H\cap X$ to those which have codimension $\leq 4$ in the projective space ${\mathbb P}^{19}$ parametrizing cubic surfaces in ${\mathbb P}^3$. Such degenerations $S$ have at most 4 isolated singular points, and the line joining two distinct singular points lies entirely in $S$ (see \cite[Sect. 4.6]{GH}).
Choosing coordinates in such a way that the singular points lie at the vertices of the coordinate octahedron, and writing out the monomials that can be present in the equation of $S$ so that $S$ has isolated singularities at the prescribed vertices, we arrive at the following list of degenerations up to codimension 4: $A_1$, $A_1A_1$, $A_2$, $A_1A_1A_1$, $A_2A_1$, $A_3$, $A_1A_1A_1A_1$, $ A_2A_1A_1$, $A_3A_1$, $A_2A_2$, $D_4$, where $X_i$ (resp. $X_iX_j,\ldots$) stands for a surface with one singular point of type $X_i$ (resp. distinct singular points of types $X_i,X_j,\ldots$), and the singularities $X_i$ are Du Val singularities of type $A_i$ or $D_i$, defined up to analytic equivalence by $x^2+y^2+z^{i+1}=0\; (A_i)$ or $x^2+y^2z+z^{i-1}=0\; (D_i)$. The codimension of such a degeneration in ${\mathbb P}^{19}$ is $i$ (resp. $i+j+\ldots$). The standard technique of projections from two lines (see \cite{M} or \cite{Re}) gives a birational map $S\dasharrow {\mathbb P}^1\times{\mathbb P}^1$. In the above 11 cases, it represents $S$ as ${\mathbb P}^1\times{\mathbb P}^1$ with 5 points $\Psi=\{ Q_1,\ldots, Q_5\}$ blown up and ($-2$)-curves blown down; some of the points $Q_i$ can be infinitesimal. Since the quadric with a point blown up is isomorphic to ${\mathbb P}^2$ with two points blown up, we can replace $({\mathbb P}^1\times{\mathbb P}^1,\Psi )$ by $({\mathbb P}^2,\Phi )$, where $\Phi =\{ P_1, \ldots , P_6\}$ is a set of 6 points, some of which can be infinitesimal. Let $S_\Phi$ denote the blowup of $\Phi$ in ${\mathbb P}^2$. We are going to provide a configuration of $\Phi$ giving rise to each of the 11 degenerations above. In each case, $\Phi$ should be general among the sets satisfying the stated constraints, in the sense that there are no curves $F\simeq{\mathbb P}^1$ with $F^2\leq -2$ among the proper transforms in $S_\Phi$ of lines or conics in ${\mathbb P}^2$, except for the ($-2$)-curves imposed by the constraints.
Here is the list: \begin{itemize} \item[$A_1$:] $P_1,P_2,P_3$ are collinear. \item[$A_1A_1$:] $P_3=P_1P_2\cap P_4P_5$. \item[$A_2$:] $P_1,P_2,P_3$ are collinear, and $P_4,P_5,P_6$ are collinear. \item[$A_1A_1A_1$:] $P_2,P_4,P_6$ lie on the sides of the triangle $P_1P_3P_5$. \item[$A_2A_1$:] The limit of $A_1A_1$ when $P_4\to P_2$ along the line $P_2P_5$ (so that $P_4$ is infinitesimally close to $P_2$). \item[$A_3$:] $P_3=P_1P_2\cap P_4P_5$, and $P_6$ is infinitesimally close to $P_3$. \item[$A_1A_1A_1A_1$:] $\Phi$ is the set of intersection points of 4 lines in general position in~${\mathbb P}^2$. \item[$A_2A_1A_1$:] The specialization of $A_2A_1$ with $P_6\in P_1P_5$. \item[$A_3A_1$:] The limit of $A_2A_1$ when $P_6\to P_1$. \item[$A_2A_2$:] The limit of $A_2A_1$ when $P_6\to P_5$. \item[$D_4$:] $P_1,P_2,P_3$ are collinear, and $P_{3+i}$ is infinitesimally close to $P_i\; (i=1,2,3)$. \end{itemize} A routine verification of the emptiness of the base loci in $S_\Phi$ of the linear systems $|h+l_1+l_2|$ ends the proof. \end{proof} \begin{proposition}\label{stable} Let $C$ be a smooth elliptic quintic. If $C$ is not contained in a hyperplane, then the vector bundle ${\mathcal E}$ obtained from $C$ by Serre's construction is stable. If $C$ is contained in a hyperplane, then ${\mathcal E}$ has a non-trivial global section and is Gieseker unstable. \end{proposition} \begin{proof} {\em Instability part}. Assume that $C$ is contained in a hyperplane. As in the proof of Proposition \ref{space-C}, we obtain an exact triple (\ref{nonstable}). It is obvious that $\chi ({\mathcal O}_X(k))>\chi ({\mathcal I}_Z(k))$ for $k\gg 0$, so ${\mathcal E}$ is Gieseker unstable, though semistable in the sense of Mumford--Takemoto. {\em Stability part}.
Any saturated torsion free rank 1 subsheaf of ${\mathcal E}(1)$ is invertible of the form ${\mathcal O}_X(k)$ and gives an exact triple: \begin{equation}\label{diese} 0\lra {\mathcal O}_X(k)\stackrel{\alpha}{\lra} {\mathcal E} (1)\lra {\mathcal I}_{Z}(2-k) \lra 0 \end{equation} where $Z\subset X$ is a subscheme of codimension 2 (if not empty). Clearly, (\ref{serre}) twisted by ${\mathcal O}_X(-2)$ shows that the case $k\ge2$ here is impossible. On the other hand, if ${\mathcal E}$ is not Gieseker stable, then, computing Hilbert polynomials, one has $k\ge1$ in (\ref{diese}). Hence $k=1$, and we obtain the above case when $C$ lies in a hyperplane. \end{proof} \begin{lemma}\label{h0NCX} Let $C$ be a smooth elliptic quintic in $X$. Let ${\mathcal E}$ be the unique locally free sheaf defined by (\ref{serre}). Then we have: a) $h^i({\mathcal E} (-1))=0\;\; \forall\; i\in{\mathbb Z}$; b) $h^0({\mathcal E}\otimes{\mathcal E} )=1$ if and only if $C$ is not contained in a hyperplane, and in this case $h^1({\mathcal E}\otimes{\mathcal E} )=5, h^2({\mathcal E}\otimes{\mathcal E} )=h^3({\mathcal E}\otimes{\mathcal E} )=0$; c) $h^0({\mathcal N}_{C/X})=10,h^1({\mathcal N}_{C/X})=0$. \end{lemma} \begin{proof} a) From (\ref{restriction}) with $k=0$, we deduce $h^i({\mathcal I}_C)=0,\ i=0,1,3,$ $h^2({\mathcal I}_C)=1$. Then (\ref{serre}), twisted by ${\mathcal O}_X(-2)$, gives $\chi({\mathcal E} (-1))=h^i({\mathcal E} (-1))=0$ for $i=0,1$. Next, by Serre duality, $h^3({\mathcal E} (-1))=h^0({\mathcal E} (-1))=0$; hence also $h^2({\mathcal E} (-1))=0$. b) Tensor (\ref{serre}) by ${\mathcal E} (-1)$: \begin{equation}\label{EE} 0\lra {\mathcal E} (-1)\lra {\mathcal E}\otimes{\mathcal E}\lra {\mathcal I}_C\otimes{\mathcal E} (1)\lra 0 \end{equation} From a) and (\ref{EE}), we deduce that \begin{equation}\label{hi} h^i({\mathcal E}\otimes{\mathcal E})=h^i({\mathcal I}_C\otimes{\mathcal E} (1)) \;\;\forall\; i\in{\mathbb Z} .
\end{equation} If $C$ is not contained in a hyperplane, then ${\mathcal E}$ is stable, hence by \cite[Theorem I.2.9]{Okonek}, ${\mathcal E}$ is simple, that is, $h^0({{\mathcal H}om\:} ({\mathcal E} ,{\mathcal E} ))=h^0({\mathcal E}\otimes{\mathcal E} )=1$. By Serre duality, $h^3({\mathcal E}\otimes{\mathcal E})=h^0({\mathcal E}\otimes{\mathcal E} (-2))=0$. By Riemann--Roch--Hirzebruch, $\chi ({\mathcal E}\otimes{\mathcal E})=-4$ (indeed, $\operatorname{ch}\nolimits ({\mathcal E}\otimes{\mathcal E} )=4-4c_2({\mathcal E} )+2c_3({\mathcal E} )$ with $c_3({\mathcal E} )=0$, ${\mathcal E}$ being self-dual, so $\chi ({\mathcal E}\otimes{\mathcal E} )=4\chi ({\mathcal O}_X)-4\, (c_2({\mathcal E} )\cdot h)=4-8=-4$). It remains to prove that $h^2({\mathcal E}\otimes{\mathcal E})=h^1({\mathcal E}\otimes{\mathcal E} (-2))=0$. Consider the exact triples $$ 0\lra {\mathcal E} (-3)\lra {\mathcal E}\otimes{\mathcal E} (-2)\lra {\mathcal I}_C\otimes{\mathcal E} (-1)\lra 0 $$ $$ 0\lra {\mathcal I}_C\otimes{\mathcal E} (-1)\lra {\mathcal E} (-1)\lra {\mathcal E} (-1)\otimes{\mathcal O}_C\lra 0 $$ We have ${\mathcal E} (-1)\otimes{\mathcal O}_C\simeq{\mathcal N}_{C/X}(-2)$. Consider the natural inclusion ${\mathcal N}_{C/X}(-2)\subset{\mathcal N}_{C/{\mathbb P}^4}(-2)$ and apply \cite[Proposition V.2.1]{Hu}: $h^1({\mathcal N}^\ast_{C/{\mathbb P}^4}(2)\otimes {\mathcal M})=0$ for any invertible ${\mathcal M}$ of degree 0. By Serre duality, $h^0({\mathcal N}_{C/{\mathbb P}^4}(-2))=0$. Hence $h^0({\mathcal N}_{C/X}(-2))=0$. The last exact sequence and a) imply that $h^i({\mathcal I}_C\otimes{\mathcal E} (-1))= h^i({\mathcal E} (-1))=0\;,\; i=0,1$, hence $h^1({\mathcal E}\otimes{\mathcal E} (-2))=h^1({\mathcal E} (-3))$. By Lemma \ref{dim=6}, c), and Serre duality, we have $h^2({\mathcal E} (1)) =h^1({\mathcal E} (-3))=0$, and we are done. If $C$ is contained in a hyperplane, then $h^0({\mathcal E} )\neq 0$ by Proposition \ref{stable}, hence ${\mathcal E}\otimes{\mathcal E}={{\mathcal E}nd\:} ({\mathcal E} )$ has at least two linearly independent sections: the identity endomorphism and $s\otimes s$, where $s$ is a non-trivial section of ${\mathcal E}$. c) Suppose first that $C$ is not contained in a hyperplane.
In the exact triple \begin{equation}\label{timesE1} 0\lra {\mathcal I}_C\otimes{\mathcal E} (1)\lra {\mathcal E} (1)\lra {\mathcal N}_{C/X}\lra 0 \end{equation} we have $h^0( {\mathcal I}_C\otimes{\mathcal E} (1))=1, h^1( {\mathcal I}_C\otimes{\mathcal E} (1))=5$ by b) and (\ref{hi}), and $h^0({\mathcal E}(1))=6,h^1({\mathcal E}(1))=0$ by Lemma \ref{dim=6}, c). Hence $h^0({\mathcal N}_{C/X})=10$. By Riemann--Roch on $C$, we have $h^1({\mathcal N}_{C/X})=0$. In the case when $C\subset{\mathbb P}^3$, we can consider $C$ as a curve on a surface, namely the hyperplane section $S$ of $X$, as in the proof of Proposition \ref{space-C}. For example, assume that the hyperplane section is nonsingular. Then we can represent it as ${\mathbb P}^2$ with 6 points $P_1,\ldots ,P_6$ blown up. Let $e_0$ be the inverse image of a line in ${\mathbb P}^2$, and $e_i$ the exceptional divisors over $P_i$ ($i=1,\ldots ,6$). Then $C\in |3e_0-e_{i_1}-e_{i_2}-e_{i_3}-e_{i_4}|$ for some subset $\{ i_1,i_2,i_3,i_4\}\subset\{1,\ldots ,6\}$. The standard exact sequences $$ 0\lra {\mathcal O}_S\lra {\mathcal O}_S(C)\lra {\mathcal N}_{C/S}\lra 0, $$ $$ 0\lra {\mathcal N}_{C/S}\lra {\mathcal N}_{C/X}\lra {\mathcal O}_C(1)\lra 0 $$ show that $h^0({\mathcal N}_{C/X})=10,h^1({\mathcal N}_{C/X})=0$. \end{proof} \begin{lemma} Let $C$ be a smooth elliptic quintic, not contained in a hyperplane. Then any two non-proportional sections of ${\mathcal E} (1)$ define different curves. Hence the morphism ${\mathbb P}^5\lra H$ of Corollary \ref{P5toH} is injective. \end{lemma} \begin{proof} By Lemma \ref{h0NCX}, b), $h^0({\mathcal E}\otimes{\mathcal E} )=1$. To prove that the section of ${\mathcal E} (1)$ defining $C$ is unique up to a constant factor, it suffices to show that $h^0( {\mathcal I}_C\otimes{\mathcal E} (1))=1$. This follows from (\ref{hi}). \end{proof} \section{Rational normal quartics} In this section, $X$ is a general cubic threefold.
For future use in Section \ref{section-quintics}, we prove the irreducibility of the family of rational normal quartics in $X$. \begin{theorem}\label{quart-irred} There is a unique component of the Hilbert scheme of curves in $X$ whose generic point corresponds to a rational normal quartic. \end{theorem} We start with several auxiliary results. \begin{proposition}\label{prop41} Let $X$ be a nonsingular cubic threefold, and $\stackrel{\circ}{H}_{3,0} =$ $\stackrel{\circ}{H}_{3,0} (X) $ the open subscheme of $\HILB{3n+1}$ parametrizing smooth rational cubic curves lying in nonsingular hyperplane sections of $X$. Then we have: (i) $\stackrel{\circ}{H}_{3,0}$ is smooth and irreducible of dimension 6. (ii) The image of the Abel--Jacobi map $\eta :\stackrel{\circ}{H}_{3,0}\lra J_1(X)$ is an open subset of a translate $\Theta_a$ of the theta divisor of $J_1(X)$. (iii) All the scheme-theoretic fibers $\eta^{-1}\eta (s)$ are open subschemes of ${\mathbb P}^2$; in particular, they are smooth and irreducible of dimension 2. \end{proposition} \begin{proof} See \cite[(6.21)]{We}. We have \begin{equation}\label{Csmooth} \eta^{-1}\eta (s) = \{ C\in |{\mathcal O}_S(C_s)|\; : \; C \;\mbox{is smooth} \}, \end{equation} where $C_s\subset X$ is the curve represented by $s\in\stackrel{\circ}{H}_{3,0}$, $S=<C_s>\cap X$, and $<C_s>\simeq{\mathbb P}^3$ denotes the linear span of $C_s$. Thus, the ${\mathbb P}^2$ from the statement of the proposition is the linear system of $C_s$ on the nonsingular cubic surface $S$. The image of $\eta$ can be identified, up to translation, with the image under $\psi$ (see \ref{psi}) of the subset $U=\{ (t,t')\in F\times F\; |\; l_t\cap l_{t'}=\varnothing ,\ <l_t, l_{t'}>\cap \ X\ \mbox{is nonsingular}\}$. The pairs $(t,t')$ are recovered from $s$ as follows: there are exactly 6 lines $l_t$ in $S$ such that $C\in |{\mathcal O}_S(C_s)|$ degenerates into $C^2+l_t$, where $C^2$ is a conic.
Choose one of them, and let $l_{t'}$ be determined as the residual intersection of the linear span of $C^2$ with $X$: $X\cap <C^2>=C^2\cup l_{t'}$. Then the Abel--Jacobi image of $C_s$ coincides, up to a translation by a constant, with that of $l_t-l_{t'}$. The openness of $\eta (\stackrel{\circ}{H}_{3,0} )$ follows from the finiteness of $\psi$ on $U$ \cite[\S 3]{Tyu}. \end{proof} \begin{proposition}\label{prop42} Let $X$ be a general cubic threefold, $\stackrel{\circ}{H}_{3,0}$ as in Proposition \ref{prop41}, and $F$ the Fano surface of $X$. Let $$ B=B(X)=\{ (s,t)\in\stackrel{\circ}{H}_{3,0}\times F\; |\; C_s \;\mbox{\em meets}\; l_t\; \mbox{\em transversely at 1 point}\} , $$ where $C_s$ (resp. $l_t$) denotes the rational cubic curve represented by $s$ (resp. the line represented by $t$). Then $B$ is irreducible of dimension $7$. \end{proposition} \begin{proof} Let $b:B\lra\Theta_a$ be defined by $b=\eta\circ p_1$, where $\Theta_a, \eta$ are as in Proposition \ref{prop41}, and $p_1$ is the natural projection from $B$ to $\stackrel{\circ}{H}_{3,0}$. The map $b$ is dominant on every component of $B$. To see this, notice first that on a {\em general} cubic $X$, the number of lines passing through any point is finite ($\leq 6$). Indeed, by \cite[\S 1]{Tyu} or \cite[Sect. 8]{CG}, the points $P\in X$ lying on an infinite number of lines are characterized by the property that $T_PX\cap X$ is a cone over a plane cubic curve, and it is easy to check by counting dimensions that there are no such hyperplane sections on a general $X$. Hence {\em all} the fibers $p_1^{-1}(x)$ are of pure dimension 1. Moreover, any $(x_0,t_0)\in p_1^{-1}(x_0)$ can be obtained as a limit of points $(x,t)\in p_1^{-1}(x)$ for some one-parameter family specializing $x$ into $x_0$. Hence $B$ is equidimensional and each one of its components dominates $\stackrel{\circ}{H}_{3,0}$. This implies, by Proposition \ref{prop41}, that each component of $B$ dominates $\Theta_a$. 
Look now at the fibers of $b:B\lra\Theta_a$. Let $v\in\eta (\stackrel{\circ}{H}_{3,0} ) \subset\Theta_a$, $B_v=b^{-1}(v)$, and $f=p_2|_{B_v}:B_v\lra F$, $([C],[l])\mapsto [l]$. By (\ref{Csmooth}), $$\begin{array}{r} B_v=\{ ([C'],[l])\in\stackrel{\circ}{H}_{3,0}\times F\; |\; C'\in|{\mathcal O}_S(C)|, \; l\;\mbox{meets}\; C'\;\mbox{transversely}\phantom{oint}\\ \mbox{in 1 point}\} .\end{array} $$ Hence, if $l\not\subset S$, $$ f^{-1}([l])=\{\mbox{open subset of}\;{\mathbb P}^1=|{\mathcal I}_{x,S}(C)|\} , $$ where $x=l\cap S$. So, all the fibers of $f:B_v\lra F$ are open subsets of ${\mathbb P}^1$, except for 27 fibers $\simeq{\mathbb P}^2$ corresponding to the lines $l_i\subset S$ ($i=1,\ldots ,27$). These exceptional fibers do not give rise to extra irreducible components, as any $l_i\cup C'$ represented by a point of $B_v$ can be deformed into $l\cup C''$ with $l\not\subset S$, $C''\cap l=\{1 \;\mbox{point}\}$, using the fact that the family of rational cubics covers $S$. Hence $B_v$ is irreducible of dimension 3, hence $B$ is irreducible, and we are done. \end{proof} \begin{proposition}\label{prop43} Let $X,B$ be as in Proposition \ref{prop42}. Then $B$ can be naturally identified with a subset of $H_{4,0} =\HILB{4n+1}$. The following assertions are true: (i) $H_{4,0}$ is smooth at any point $[Z]=([C],[l])\in B$, and $$ \dim T_{[Z]}H_{4,0} =h^0({\mathcal N}_{Z/X})=8,\; h^1({\mathcal N}_{Z/X})=0 . $$ (ii) A curve $Z=C\cup l$ represented by a general point of $B$ can be smoothed into a rational normal quartic inside $X$.
\end{proposition} \begin{proof} (i) We have natural exact sequences \begin{equation}\label{seq1} 0\lra{\mathcal N}_{Z/X}\lra {\mathcal N}_{Z/X}|_C\oplus {\mathcal N}_{Z/X}|_l \lra {\mathcal N}_{Z/X}\otimes{\mathbb C} (x)\lra 0, \end{equation} \begin{equation}\label{seq2} 0\lra {\mathcal N}_{C/X}\lra {\mathcal N}_{Z/X}|_C\lra {\mathbb C} (x)\lra 0, \end{equation} \begin{equation}\label{seq3} 0\lra {\mathcal N}_{l/X}\lra {\mathcal N}_{Z/X}|_l\lra {\mathbb C} (x)\lra 0, \end{equation} where $x=C\cap l$. Let $S=<C>\cap X$, then ${\mathcal N}_{C/S}\simeq{\mathcal O}_{\PP^1} (1)$ via an identification $C\simeq{\mathbb P}^1$. Hence the exact sequence $$ 0\lra {\mathcal N}_{C/S}\lra{\mathcal N}_{C/X}\lra {\mathcal N}_{S/X}|_C\lra 0 $$ yields \begin{equation}\label{normcubic} {\mathcal N}_{C/X}\simeq {\mathcal O}_{\PP^1} (2)\oplus{\mathcal O}_{\PP^1} (2)\;\mbox{or} \; {\mathcal O}_{\PP^1} (1)\oplus{\mathcal O}_{\PP^1} (3). \end{equation} By (\ref{NlX}),(\ref{normcubic}), (\ref{seq2}) and (\ref{seq3}), $h^1({\mathcal N}_{C/X})=h^1({\mathcal N}_{l/X})=0$, $h^0({\mathcal N}_{Z/X}|_C)=7$, $h^0({\mathcal N}_{Z/X}|_l)=3$. The map $H^0({\mathcal N}_{Z/X}|_C\oplus {\mathcal N}_{Z/X}|_l) \lra H^0({\mathcal N}_{Z/X}\otimes{\mathbb C} (x))$ arising from (\ref{seq1}) is surjective, because ${\mathcal N}_{Z/X}$ is locally free and ${\mathcal N}_{Z/X}|_C\simeq {\mathcal O}_{\PP^1} (2)\oplus{\mathcal O}_{\PP^1} (3)\;\mbox{or} \; {\mathcal O}_{\PP^1} (1)\oplus{\mathcal O}_{\PP^1} (4)$ is generated by global sections. Hence $h^0({\mathcal N}_{Z/X})=8$, $h^1({\mathcal N}_{Z/X})=0$. (ii) We will use the following particular case of \cite[Theorem 4.1]{HH} (though it is formulated in \cite{HH} for nodal curves in ${\mathbb P}^3$, its proof remains valid with ${\mathbb P}^3$ replaced by any nonsingular projective variety). \begin{lemma} Let $X$ be a nonsingular projective variety, $C_1,C_2$ two smooth curves in $X$ meeting transversely at one point $x$. 
Assume that $H^1(C_i,\operatorname{elm}\nolimits^-_x{\mathcal N}_{C_i/X})=H^1(C_{3-i},{\mathcal N}_{C_{3-i}/X})=0$ for at least one $i=1$ or $2$. Then $H^1({\mathcal N}_{C_1\cup C_2/X})=0$ and $C_1\cup C_2$ is strongly smoothable in $X$. \end{lemma} Here $\operatorname{elm}\nolimits^-_x$ denotes the negative elementary transformation in $x$ (see loc. cit., Sect. 2 for a definition). From (\ref{normcubic}), we obtain $$ \operatorname{elm}\nolimits^-_x{\mathcal N}_{C/X}\simeq{\mathcal O}_{\PP^1} (1)\oplus{\mathcal O}_{\PP^1} (2)\;\mbox{or} \; {\mathcal O}_{\PP^1} \oplus{\mathcal O}_{\PP^1} (3). $$ Hence $h^1(\operatorname{elm}\nolimits^-_x{\mathcal N}_{C/X})=0$. We have already seen that $h^1({\mathcal N}_{l/X})=0$, hence the lemma implies the result. \end{proof} \subsubsection*{Proof of Theorem \ref{quart-irred}} Let $H_{4,0} '(X)$ be the component of $H_{4,0} (X)$ containing $B(X)$, where $B(X)$ is defined in Proposition \ref{prop42}. It is unique, because $H_{4,0} (X)$ is smooth and of dimension 8 at the points of $B(X)$, and thus we have $\dim H_{4,0} '(X)=8$. Let $\stackrel{\circ}{H}_{4,0}\subset\HILBP{4n+1}$ be the subscheme parametrizing rational normal quartics; it is a smooth homogeneous manifold of dimension 21: $\stackrel{\circ}{H}_{4,0}\simeq PGL_5({\mathbb C} )/PGL_2({\mathbb C} )$. Let $$ I=\{ (X,C)\in |{\mathcal O}_{P^4}(3)|\times\stackrel{\circ}{H}_{4,0}\; |\; C\subset X\} $$ be the incidence variety, $p_1,p_2$ its projections to the factors of $|{\mathcal O}_{P^4}(3)|\times\stackrel{\circ}{H}_{4,0}$. Then $p_2^{-1}([C])=|{\mathcal I}_{C,P^4}(3)|\simeq{\mathbb P}^{21}$ for any $C$. Hence $I$ is irreducible of dimension 42. We have seen that there is an irreducible component of a general fiber of $p_1:I\lra{\mathbb P}^{34} =|{\mathcal O}_{P^4}(3)|$ of dimension 8, hence $p_1$ is dominant. 
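In other words, the count reads $$ \dim I=\dim\stackrel{\circ}{H}_{4,0}+\dim p_2^{-1}([C])=21+21=42 ,\qquad 42-\dim |{\mathcal O}_{P^4}(3)|=42-34=8 ; $$ since every irreducible component of a fiber of $p_1$ has dimension at least $\dim I-\dim\overline{p_1(I)}$, the existence of an $8$-dimensional component forces $\dim\overline{p_1(I)}=34$, that is, the dominance of $p_1$. 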
Moreover, the irreducibility of $I$ implies that the action of the monodromy group on the irreducible components of a general fiber is transitive. As we can distinguish one particular component $H_{4,0} '(X)$, invariant under monodromy, the general fiber of $p_1$ is irreducible. \hfill\square \begin{remark} The basic idea of our proof of the irreducibility of the family of rational normal quartics in $X$ is to degenerate a quartic into a reducible curve, which reduces the problem to questions on lower degree curves. This step could be done using the ``bend and break'' technique \cite{Mo}, \cite[II.5]{Ko}, which gives a deformation $\Gamma '$ of $\Gamma$ whose associated 1-cycle is of the form $\sum a_i\Gamma_i$ with $\sum a_i\geq 2$ (see \cite[Corollary II.5.6]{Ko}). In order to fulfil the hypotheses necessary for the application of this technique, one has to verify that there are enough rational quartics in $X$ passing through two general points. Each component of the Hilbert scheme of curves in $X$ whose generic point is a smooth rational normal quartic is of dimension 8. Hence, one can expect that in each component, there is a 4-dimensional family of curves passing through two general points $P_1,P_2\in X$. To verify that this is really the case, the standard argument with incidence varieties can be used (compare to Lemma \ref{lemma-2}). One has to show that there is a component of dimension 4 in the subset of the Hilbert scheme of rational quartics passing through two general points in a {\em particular} cubic threefold. One can easily check that this is so on a cone over a smooth cubic surface $Y\subset{\mathbb P}^3$. \end{remark} \section{Normal elliptic quintics} \label{section-quintics} \subsection{}\label{nota} Let $H_0$ be the component of $\Hilbp$ whose generic point corresponds to a smooth normal elliptic quintic in ${\mathbb P}^4$. 
$H$ or $H(X)$ will denote, as before, the union of components of $\Hilb$ having smooth normal elliptic quintics as their generic points (hence contained in $H_0$). It is well-known that $H_0$ is unique, of dimension 25, and is generically parametrized by 24 parameters corresponding to the action of ${PGL}_5({\mathbb C} )$, and by the modulus of the elliptic curve (see \cite{Hu}, or \cite[Proposition 6.1]{HH} for the irreducibility of Hilbert schemes of curves in a more general context). \begin{lemma}\label{lemma-1} Through any smooth elliptic normal quintic in ${\mathbb P}^4$ passes a $19$-dimensional linear system of cubic hypersurfaces, and its general member is nonsingular. \end{lemma} \begin{proof} The dimension calculation follows from (\ref{restriction}) for $k=3$. By \cite[Theorem IV.1.3]{Hu} or by \cite[Theorem 10]{Mu}, $C$ is a scheme-theoretic intersection of quadrics passing through $C$. Hence this is also true for cubics, and by Bertini's Theorem, the general cubic through $C$ is nonsingular. \end{proof} \begin{lemma}\label{lemma-2} Let $I\subset H_0\times {\mathbb P}^{34}$ be the incidence variety parametrizing the pairs $(C,X)$ consisting of a smooth normal elliptic quintic $C$ and a cubic hypersurface $X$ containing $C$. Then $I$ is irreducible of dimension 44, and its projection to ${\mathbb P}^{34}$ is dominant. The fiber of this projection has dimension 10 and is smooth at any point $[C]$ representing a normal elliptic quintic $C$ in a nonsingular cubic threefold. \end{lemma} \begin{proof} The irreducibility of $I$ follows from \cite[Theorem 1.6.12]{Sh}. Now, take any nonsingular cubic $X_0$ containing a smooth normal elliptic quintic $C_0$, which is possible by Lemma \ref{lemma-1}. Then, by Lemma \ref{h0NCX}, c), $h^0({\mathcal N}_{C_0/X_0})=10,\; h^1({\mathcal N}_{C_0/X_0})=0$, hence $H(X_0)$ is smooth of dimension 10 at $[C_0]$ (see \cite{G}). 
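For the reader's convenience, we record the dimension count behind the statement: by Lemma \ref{lemma-1}, the fiber of the projection $I\lra H_0$ over $[C]$ is the linear system $|{\mathcal I}_{C,{\mathbb P}^4}(3)|\simeq{\mathbb P}^{19}$, so that $$ \dim I=\dim H_0+19=25+19=44 , $$ and a general fiber of the projection $I\lra{\mathbb P}^{34}$ has dimension $44-34=10$, in agreement with $h^0({\mathcal N}_{C_0/X_0})=10$. 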
\end{proof} So far, we have proved: \begin{proposition}\label{H-comps} Let $X$ be a general cubic threefold. Then $H=H(X)$ is the union of finitely many irreducible components of dimension 10. The generic point of any of these components is smooth and corresponds to a smooth normal elliptic quintic of ${\mathbb P}^4$ contained in $X$. For any such quintic $C$, the point $[C]\in H$ is smooth. \end{proposition} \begin{theorem}\label{H-irred} Let $X$ be a general cubic threefold, and $H=H(X)$. Then $H$ is irreducible of dimension 10.\end{theorem} \begin{proof} Choose any component $H'$ of $H$. By Proposition \ref{H-comps}, there is a smooth normal quintic $C\subset{\mathbb P}^4$ corresponding to a point of $H'$. Choose a linear series $g^1_2$ on $C$, and construct the rational cubic scroll $\Sigma$ as the union of lines joining pairs of points in $g^1_2$. One can easily verify that $\Sigma$ is the Hirzebruch surface ${\mathbb F}_1$ embedded by the linear system of degree 3, and its intersection with $X$ is $C$ plus a rational normal quartic $\Gamma$ or, possibly, a degenerate curve of the linear system of rational normal quartics in $\Sigma$. Such curves $\Gamma$ meet the $(-1)$-curve $L$ of $\Sigma$ in two points. Notice that the family of scrolls $\Sigma$ obtained by this construction is connected of dimension 1, parametrized by the points of $\operatorname{Pic}\nolimits^2(C)\simeq C$. As a next step, we want to invert this construction and to recover $C$ from $\Gamma$. It is convenient to reduce the problem to the case when $\Gamma$ is smooth (see Remark \ref{conversely}, which gives an idea of how to avoid this reduction). The singular members of $|\Gamma |$ on $\Sigma$ are of the form $\Gamma'+\sum a_iF_i$, with $1\leq\sum a_i\leq 3$, where $\Gamma'$ (resp. $F_i$) is a section (resp. a fiber) of the scroll $\Sigma$, and $F_i$ are chords of $C$ lying in $X$. 
So, if $C$ has only finitely many chords in $X$, then we can always choose a $g^1_2$ on $C$ in such a way that it contains no chords, and then $\Gamma$ will be nonsingular. The finiteness of the number of chords is guaranteed by the following lemma. \begin{lemma}\label{gen-gen} A general elliptic quintic $C$ in any component of $H(X)$ for a general cubic threefold $X$ has only finitely many chords in $X$. \end{lemma} Postponing the proof of the lemma, let us show now that given any rational normal quartic $\Gamma\subset {\mathbb P}^4$ contained in $X$, we can find a cubic scroll $\Sigma$ such that $\Sigma\cap X=\Gamma\cup C$ for some quintic $C$. It can be constructed geometrically as follows. Take any chord $L$ of $\Gamma$, meeting $\Gamma$ in two distinct points $P_1,P_2$. Choose arbitrarily two points $P_3\in\Gamma$ and $P'_3\in L$, different from $P_1,P_2$. Connect each point $M\in \Gamma$ by a line with $M'\in L$, where $M'$ is defined by the equality of the cross-ratios $[P_1,P_2,P_3,M]=[P_1,P_2,P'_3,M']$. The union of these lines is a cubic scroll $\Sigma$ having $L$ as $(-1)$-curve. In the case when $P_1=P_2=P$, $L$ is tangent to $\Gamma$ at $P$, and the scroll is determined by an isomorphism $\phi :\Gamma\lra L$ such that $\phi (P)=P$, $d_P\phi =\operatorname{id}\nolimits_{T_PL}$ (with a natural identification of $T_PL$ and $T_P\Gamma$). The residual curve in the intersection with $X$ yields a quintic $C$. We see that the family of scrolls $\Sigma$ constructed from a fixed quartic curve is irreducible and has dimension 3. Indeed, it is parametrized by a 3-dimensional variety fibered over $\Sym^2\Gamma$ with fiber over a pair of points $(P_1,P_2)\in\Sym^2\Gamma$ equal to \newcommand\Span{\operatorname{Span}\nolimits} $\Span (P_1,P_2)\smallsetminus\{ P_1,P_2\}$, if $P_1\neq P_2$, and to the curve of isomorphisms $\Gamma\lra L$ fixing $P=\Gamma\cap L$ and $T_PL$, if $P_1=P_2=P$. 
So, our construction yields a 3-dimensional irreducible family of scrolls whose residual intersection with $X$ defines a quintic curve. Taking into account Proposition \ref{H-comps}, we obtain the following statement: \begin{proposition}\label{fam-quart} Let $U$ be the open set of $H$ parametrizing general (in the sense of Lemma \ref{gen-gen}) smooth normal elliptic quintics of ${\mathbb P}^4$ lying on a general cubic threefold $X$, and $V$ the open subset of the Hilbert scheme of quartic curves in $X$ parametrizing all the curves which are residual intersections with $X$ of the cubic scrolls through quintics $C$ with $[C]\in U$. Then the above construction establishes a correspondence $Z\subset V\times U$ with irreducible fibers of dimension 3 over $V$ and of dimension 1 over $U$. In particular, $U$ and $V$ have equal numbers of irreducible components. \end{proposition} The proof of Theorem \ref{H-irred} is completed by applying Theorem \ref{quart-irred}. \end{proof} \subsubsection*{Proof of Lemma \ref{gen-gen}} Take $C$ smooth and projectively normal in any component $H_0'\subset H(X_0)$ of a given nonsingular cubic threefold $X_0$. Let $\pi_P:C\lra C_1$ be a general projection from a point $P\in{\mathbb P}^4\setminus X_0$ to ${\mathbb P}^3\subset{\mathbb P}^4$. Then $C_1$ is an elliptic quintic in ${\mathbb P}^3$. There exists a nonsingular cubic surface $Y$ containing $C_1$; the proof of this assertion is similar to that of Lemma \ref{lemma-1}. Let $X_1$ be the cone over $Y$ with vertex $P$. Then $X_1$ is also a cubic threefold containing $C$. Every chord of $C$ lying in $X_1$ is projected by $\pi_P$ into a chord of $C_1$ lying in $Y$. As the number of lines in a nonsingular cubic surface is equal to 27, the number of chords of $C$ in $X_1$ is finite. Hence it is also finite in a general member $X_\lambda$ of the pencil of cubics $(X_0,X_1)$. 
As we can start with $C$ in any component of $H(X_0)$, we see that a general $C$ in any component has only finitely many chords in $X_t$ for a general deformation $X_t$ of $X_0$, and we are done. \hfill\square \begin{remark}\label{conversely} The construction of the cubic scroll through a rational quartic extends also to the case when $\Gamma$ is a singular member of the corresponding linear system on some scroll $\Sigma\simeq{\mathbb F}_1$. Indeed, this linear system is $|L+3F|$, where $F$ denotes the fiber of the ruled surface, and the only possible degenerate members are of one of the following types: a) $L+F_1+F_2+F_3$, where $F_i$ are lines on $X$ meeting $C$ in two points (chords), or tangent to $C$; b) $C_2+F_1+F_2$, where $C_2$ is a smooth conic; c) $C_3+F_1$, where $C_3$ is a twisted cubic in some ${\mathbb P}^3$. Treat, for example, the case a); the arguments are similar in the remaining cases. Let $P_i=F_i\cap L$, $i=1,2,3$. The three lines, whether they are distinct or not (in the last case we consider them as a non-reduced subscheme of ${\mathbb P}^4$), do not lie in one hyperplane. Otherwise, this hyperplane would have the intersection number at least 6 with $C$, and hence $C$ would lie in the hyperplane, which contradicts our assumptions. So, through three points $Q_i\in F_i$ (possibly infinitesimal, in the non-reduced case), different from $P_i$, passes a unique 2-plane $\Pi$. Choosing in this plane a conic $B$ through the $Q_i$, we can join the points of $L,B$ with the same cross-ratios with respect to $P_i=L\cap F_i , Q_i$. This construction giving a scroll $\Sigma\supset\Gamma$ works in the reduced case, and can be extended to the non-reduced one by an obvious passage to the limit. \end{remark} \begin{remark}\label{h1nonnormal} We saw in the proof of Lemma \ref{lemma-2}, c) that the smooth normal elliptic quintics are represented by smooth points of the Hilbert scheme of curves. 
In fact, smooth non-normal quintics (that is, lying in a hyperplane), are represented also by smooth points in $H$, though they may be rather ``singular'' in other situations: they define unstable vector bundles of rank 2, and their images under the Abel--Jacobi map coincide with those of pairs of lines $l_1+l_2\; (l_i\subset X)$. Indeed, $h^0({\mathcal N}_{C/X})=10,h^1({\mathcal N}_{C/X})=0$ by Lemma \ref{h0NCX}, so $[C]_X$ is a smooth point. It belongs to the component $H$ of $\Hilb$, because the family of elliptic quintics in hyperplane sections of $X$ is 9-dimensional, so that $C$ has deformations to smooth quintics not contained in a hyperplane. \end{remark} \section{Factorization of the Abel--Jacobi map through moduli of vector bundles} \label{factor} Let $X$ be a general cubic threefold. Let $M_X(2;0,2)$ be the Gieseker--Maruyama moduli space \cite{Ma}, \cite{Sim} of semistable (with respect to ${\mathcal O}_X(1)$) rank 2 torsion free sheaves on $X$ with Chern classes $c_1=0$ and $c_2=2[l]$, where $[l]$ is the class of a line modulo numerical equivalence. Define the Zariski open subset $M_0$ in it as follows: \begin{multline} M_0 =\{[{\mathcal E} ]\in M_X(2;0,2) \; |\; (i)\; {\mathcal E}\;\mbox{is {\em stable} and {\em locally free}};\\ (ii) \; h^1({\mathcal E} (-1))= h^1({\mathcal E} (1))=h^2({\mathcal E} (1))=h^2({\mathcal E}\otimes{\mathcal E} )=0 \} . \end{multline} \begin{lemma}\label{prop-E} Let ${\mathcal E}$ be a sheaf on $X$ with $[{\mathcal E} ]\in M_0$. Then we have: a) $h^0({\mathcal E} (1))=6,\; h^i({\mathcal E} (1))=0$ for $i>0$. b) The scheme of zeros $C$ of any section $s\neq 0$ of ${\mathcal E} (1)$ is of pure dimension $1$ and is not contained in a hyperplane. c) $h^i({\mathcal E} (-1))=0\;\; \forall\; i\in{\mathbb Z}$. d) $h^0({\mathcal E}\otimes{\mathcal E} )=1$, $h^1({\mathcal E}\otimes{\mathcal E} )=5, h^2({\mathcal E}\otimes{\mathcal E} )=h^3({\mathcal E}\otimes{\mathcal E} )=0$. 
e) $h^0({\mathcal N}_{C/X})=10,h^1({\mathcal N}_{C/X})=0$. \end{lemma} \begin{proof} This follows immediately from the definition of $M_0$, the Serre duality, exact sequences (\ref{EE}), (\ref{timesE1}), and from Riemann--Roch--Hirzebruch for ${\mathcal E} (1),{\mathcal E}\otimes{\mathcal E}$. \end{proof} Define now the locally closed subscheme ${\mathcal H}_0\subset \Hilb$: \begin{equation}\label{defH0} {\mathcal H}_0=\left\{\begin{minipage}{8.8 truecm} $[C]\in\Hilb\; |\;$ (i) $C$ is a locally complete intersection of pure dimension 1, (ii) $h^1({\mathcal I}_C)=h^1({\mathcal I}_C(1))=0$ (hence $h^0({\mathcal O}_C)=1$), (iii) $h^1({\mathcal I}_C(2))=h^2({\mathcal I}_C(2))=0$ (hence $h^0({\mathcal I}_C(2))=5$), (iv) $\omega_C\simeq {\mathcal O}_C$ \end{minipage}\right\} \end{equation} \begin{lemma}[Relative Serre's construction]\label{lem61} There is a well-defined \linebreak morphism $\phi :{\mathcal H}_0\lra M_0,\; [C]\mapsto [{\mathcal E} ]$, where ${\mathcal E}$ is defined by the exact triple (\ref{serre}), determined by a non-zero extension class in $H^0(\omega_C) =\operatorname{Ext}\nolimits^1({\mathcal I}_C(2),{\mathcal O}_X)$. \end{lemma} \begin{proof} The proof consists in an obvious relativization of Serre's construction of Sect. \ref{sect.5} (see Corollary \ref{unique-ext}). \end{proof} \begin{lemma}\label{lem62} ${\mathcal H}_0$ is isomorphic to the projectivization of a rank 6 vector bundle locally in the \'etale topology over $M_0$. \end{lemma} \begin{proof} According to Simpson \cite[Theorem 1.21]{Sim}, each point of $M_0$ has an \'etale neighborhood $T\lra U\subset M_0$, such that there exists a Poincar\'e vector bundle $\boldsymbol{\mathcal E}$ on $X\times T$. Let $\boldsymbol{\mathcal G} =\operatorname{pr}\nolimits_{2*}(\boldsymbol{\mathcal E}\otimes {\mathcal O}_X(1))$, a locally free sheaf of rank 6 on $T$. 
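The rank here is $h^0({\mathcal E}_t(1))=6$ by Lemma \ref{prop-E}, a); for the reader's convenience, we record the Riemann--Roch--Hirzebruch computation behind this number. Using $H^3=3$, $H^2\equiv 3l$, $H\cdot l=1$ and $c_2(X)=4H^2$ (from $c(T_X)=(1+H)^5/(1+3H)$), one has $c_1({\mathcal E} (1))=2H$, $c_2({\mathcal E} (1))=2l+H^2=5l$, whence $$ \operatorname{ch}\nolimits ({\mathcal E} (1))=2+2H+l-{\rm pt} ,\qquad \operatorname{td}\nolimits (X)=1+H+2l+{\rm pt} , $$ and therefore $$ \chi ({\mathcal E} (1))=2\cdot 1+2H\cdot 2l+l\cdot H-1=2+4+1-1=6 , $$ which, together with the vanishing $h^i({\mathcal E} (1))=0$ for $i>0$ of Lemma \ref{prop-E}, a), gives $h^0({\mathcal E} (1))=6$. 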
Let ${\mathcal C}$ be the universal family of curves over ${\mathcal H}_0$, and ${\mathcal C}_U\lra{\mathcal H}_0^U$ its restriction over $U$. Given a commutative diagram $$ \begin{CD} Y @>{\alpha}>> {\mathcal H}_0^U \\ @V{\beta}VV @VV{\phi}V \\ T @>{\lambda}>> U \end{CD} $$ we will show that there exists a unique morphism $\gamma :Y\lra{\mathbb P} (\boldsymbol{\mathcal G} )$ such that $q\circ\gamma =\alpha ,\; p\circ\gamma =\beta$, where $p:{\mathbb P} (\boldsymbol{\mathcal G} )\lra T$ is the structure morphism and $q:{\mathbb P} (\boldsymbol{\mathcal G} )\lra {\mathcal H}_0$ is the natural map sending the proportionality class of a section $s_t$ of ${\mathcal E}_t(1)$ on $X\times \{ t\}$ to its scheme of zeros $Z_X(s_t)$. Then, by the universal property of the Cartesian product, ${\mathbb P} (\boldsymbol{\mathcal G} )={\mathcal H}_0^U\times_U T$. Lift ${\mathcal C}_U$ to the family $p_2: {\mathcal C}_Y\lra {\mathcal H}_0^Y=Y\times_U{\mathcal H}_0$. By conditions (ii), (iv) of (\ref{defH0}) and Base Change, $p_{2*}\omega_{{\mathcal C}_{Y}/Y}$ is invertible, so that there exists an open covering $Y=\cup_{j\in I}Y_j$ and non-vanishing sections $\xi_j \in\Gamma (Y_j, p_{2*}\omega_{{\mathcal C}_Y/Y})$. They determine extensions $$ 0\lra{\mathcal O}_{X\times Y_j} \stackrel{\mu_j}{\lra}\tilde{\boldsymbol{\mathcal E}}_{j}(1)\lra{\mathcal I}_{{\mathcal C}_j}(2)\lra 0 , $$ where ${\mathcal C}_j={\mathcal C}_Y|_{Y_j}$, such that $[\tilde{\boldsymbol{\mathcal E}}_{j}|_{X\times\{ y\} }]=\lambda\circ\beta (y)\; \forall\; y\in Y_j$. Hence, denoting $\beta_j=\beta|_{Y_j}$, we get by the universal property of the Poincar\'e bundle the isomorphisms $\tilde{\boldsymbol{\mathcal E}}_{j}\simeq (1\times\beta_j)^*\boldsymbol{\mathcal E}\otimes {\mathcal M}_j$ for some invertible sheaves ${\mathcal M}_j$ on $Y_j$. 
Applying $p_{2*}$ to $\mu_j$, we obtain the monomorphism $p_{2*}\mu_j:{\mathcal O}_{Y_j}\hookrightarrow \beta_j^*\boldsymbol{\mathcal G}\otimes {\mathcal M}_j$, or, by duality, an epimorphism $\epsilon_j:\beta_j^*\boldsymbol{\mathcal G}^* \lra{\mathcal M}_j$. By the universal property of the functor {\bf Proj} \cite[Exercise II.7.8]{Ha}, there exists a unique morphism $\gamma_j:Y_j\lra{\mathbb P}(\boldsymbol{\mathcal G} )$ such that $\epsilon_j=\gamma_j^*\epsi$, where $\epsi :p^*\boldsymbol{\mathcal G}^*\lra{\mathcal O}_{{\mathbb P}(\boldsymbol{\mathcal G} )/T} (1)$ is the canonical epimorphism. These morphisms agree on the intersections $Y_j\cap Y_k$ and thus define a morphism $\gamma :Y\lra {\mathbb P}(\boldsymbol{\mathcal G} )$. By construction, $q\circ\gamma =\alpha ,\; p\circ\gamma =\beta$. \end{proof} \begin{corollary}\label{sharp} The morphism $\phi :{\mathcal H}_0\lra M_0$ defined in Lemma \ref{lem61} is smooth, projective, and all its fibers are $5$-dimensional projective spaces. \end{corollary} \begin{proof} According to Lemma \ref{lem62}, $\phi :{\mathcal H}_0\lra M_0$ is identified, locally in the \'etale topology, with the natural projection ${\mathbb P}^5\times U\lra U$. \end{proof} \begin{corollary}\label{flat} ${\mathcal H}_0,M_0$ are smooth of dimensions $10$, resp. $5$; moreover, $\Hilb$, $M_X(2;0,2)$ are smooth at the points of ${\mathcal H}_0$, resp. $M_0$. \end{corollary} \begin{proof} This follows from Lemma \ref{prop-E} d), e). \end{proof} We do not treat the question of irreducibility of ${\mathcal H}_0$ and $M_0$. Let ${\mathcal H}_0'$ be the irreducible component of ${\mathcal H}_0$ containing the open subset $$ {\mathcal H}^*=\{ [C]\in {\mathcal H}_0\; |\; C\;\mbox{is a smooth projectively normal elliptic quintic}\} . $$ ${\mathcal H}^*$ is irreducible by Theorem \ref{H-irred}, and we proved in Section \ref{sect.5} that ${\mathcal H}^*\subset {\mathcal H}_0$. Define also $M=\phi ({\mathcal H}^*),{\mathcal H} =\phi^{-1}(M)\cap{\mathcal H}_0'$. 
In the sequel, we will denote the restriction $\phi |_{{\mathcal H}}$ by the same symbol $\phi$. By Corollaries \ref{sharp}, \ref{flat}, $M,{\mathcal H}$ are open and smooth. \begin{theorem}\label{main} Let ${\mathcal H}^*\subset{\mathcal H}\subset\Hilb , M\subset M_X(2;0,2)$, $\phi :{\mathcal H} \lra M$ be defined as above. The following assertions are true: (i) For every choice of a reference point $[C_0]\in{\mathcal H}$, the Abel--Jacobi map $[C]\mapsto [C-C_0]$ defines a morphism $\Phi :{\mathcal H}\lra J_1(X)$. Fix some $[C_0]$ and the corresponding morphism. (ii) $\Phi$ is smooth, and every fiber of $\Phi$ is a disjoint union of $5$-dimensional projective spaces. (iii) There exists a quasi-finite \'etale morphism $\Psi :M\lra J_1(X)$ such that $\Phi=\Psi\circ\phi$. \end{theorem} \begin{proof} (i) ${\mathcal H}$ is smooth and is a base of a flat family of curves, hence, by \ref{AJ_B}, the Abel--Jacobi map $\Phi :{\mathcal H}\lra J_1(X)$ is well-defined as an analytic map. It is in fact algebraic. Indeed, ${\mathcal H}$ is a subvariety of the Chow variety of $1$-cycles in $X$ (see \cite[Theorem I.6.3]{Ko}), and we can resolve the singularities of the closure of ${\mathcal H}$ in the Chow variety. Then $\Phi$ extends to this resolution as an analytic map. It will be projective by the GAGA principle, as the Chow variety is projective. (ii) First, we will verify that $\Phi$ is smooth at any point of ${\mathcal H}^*$. This follows from the computation of the differential of $\Phi$ by the technique of TBS (see \ref{TBS}). Apply it to $X\subset W={\mathbb P}^4$. The map $R$ of (\ref{CDWelters}) is an isomorphism $H^0(X,{\mathcal O}_X(1)) \stackrel{\textstyle\sim}{\lra}H^1(X,\Omega_X^2)$ by \cite[(12.7)]{CG}. We have to verify that the map $\psi_Z^*$ of (\ref{CDWelters}) is injective for $Z=C$ with $[C]\in {\mathcal H}^*$. It suffices to show the injectivity of $r_C,\beta_C$. 
The kernel of $r_C$ lies in $H^0(X,{\mathcal N}_{X/{\mathbb P}^4}\otimes\omega_X\otimes{\mathcal I}_{C/X})= H^0({\mathcal I}_C(1))=0$ by Lemma \ref{dim=6} b). By (\ref{CDsheaves}), $\beta_C$ is part of the exact sequence \begin{multline*} \ldots\lra H^0(\Omega_X^3\otimes{\mathcal N}_{C/{\mathbb P}^4})\lra H^0(\Omega_X^3\otimes{\mathcal N}_{X/{\mathbb P}^4}|_C)\stackrel{\beta_C}{\lra} \\ H^1(\Omega_X^3\otimes{\mathcal N}_{C/X})=H^0({\mathcal N}_{C/X})^* . \end{multline*} By \cite[V.2.1]{Hu}, $H^0(\Omega_X^3\otimes{\mathcal N}_{C/{\mathbb P}^4})=0$. Hence $\beta_C:H^0({\mathcal O}_C(1))\lra H^0({\mathcal N}_{C/X})^*$ is injective. Hence $\Phi$ is of maximal rank, equal to 5, at $[C]$. Thus, every $z\in {\mathcal H}^*$ is contained in a 5-dimensional component ${\mathcal H}_z$ of the fiber $\Phi^{-1}\Phi (z)$, nonsingular at $z$. We know also that every projective space contained in ${\mathcal H}$ is contracted by $\Phi$, because the Abel--Jacobi map contracts rational equivalence. Hence ${\mathcal H}_z=\phi^{-1}\phi (z)$, the fiber of~$\phi$. More generally, every fiber $\Phi^{-1}(w)$ is the union of projective spaces $\phi^{-1}\phi (z)\simeq{\mathbb P}^5$ over all $z\in\Phi^{-1}(w)$, and for $z\neq z'\in\Phi^{-1}(w)$ we have: either $\phi^{-1}\phi (z)= \phi^{-1}\phi (z')$, or $\phi^{-1}\phi (z)\cap \phi^{-1}\phi (z')=\varnothing$. By construction, every fiber ${\mathbb P}^5$ of $\phi$ contains a point of ${\mathcal H}^*$, hence $\Phi^{-1}(w)$, if non-empty, is a disjoint finite union of irreducible components of the form $\phi^{-1}\phi (z)\simeq{\mathbb P}^5$ with $z\in\Phi^{-1}(w)$, and there are no multiple components among them, since the fiber should be scheme-theoretically smooth at points of ${\mathcal H}^*$. Moreover, the fibers of $\Phi$ are locally complete intersections in ${\mathcal H}$, because they are of pure codimension 5 and $J_1(X)$ is smooth of dimension 5, so they have no embedded points. 
Hence all the components of the fibers of $\Phi$ are smooth subvarieties of ${\mathcal H}$, isomorphic to ${\mathbb P}^5$, with their reduced structure. (iii) By (ii), the map $\Psi$ is well-defined set-theoretically: we saw that $M$ parametrizes the irreducible components (isomorphic to ${\mathbb P}^5$) of the fibers of $\Phi$. By Corollary \ref{sharp}, $\phi_*{\mathcal O}_{\mathcal H} ={\mathcal O}_M$, and $\phi$ is open. For every open $U\subset J_1(X)$, the inverse image $\Psi^{-1}(U)$ under our set-theoretical map is open, because $\Psi^{-1}(U)=\phi(\Phi^{-1}(U))$, and $\Psi^*(g)$ is a regular function on $\Psi^{-1}(U)$ for every $g\in{\mathcal O}_{J_1(X)}(U)$. Thus $\Psi$ is Zariski continuous and lifts regular functions on a Zariski open subset $U\subset J_1(X)$ to those on $\Psi^{-1}(U)$. Hence $\Psi$ is a morphism. We have $d\Phi =d\Psi\circ d\phi$ and $d\Phi$ is surjective. Hence $d\Psi$ is surjective, hence it is an isomorphism and $\Psi$ is \'etale. \end{proof} Leaving aside the hard question on the complete description of the boundary of ${\mathcal H}$ in $\Hilb$ and of its image in $J_1(X)$, we can easily determine the image of one of the components of the boundary, namely, the one whose general point represents a projectively non-normal elliptic quintic. Let $H_\ns$ be the nonsingular locus of $H$, $B^*\subset H$ the subset of classes of smooth elliptic quintics contained in nonsingular hyperplane sections of $X$, and $B$ the closure of $B^*$ in $H$. By Remark \ref{h1nonnormal}, $B^*\subset H_\ns$ and $B$ are of pure dimension 9. According to \ref{AJ_B}, the Abel--Jacobi map $\Phi$ of Theorem \ref{AJ_B} extends to a morphism, well defined on $H_\ns$, hence on $B^*$ and on the desingularization of $B$. \begin{proposition}\label{aftermain} Let $\tilde{\Phi}:H_\ns \lra J_1(X)$ be the Abel--Jacobi map defined above. The following properties are verified: (i) $B$ is an irreducible divisor in $H$. 
(ii) The image $\tilde{\Phi}(B^*)$ of $B^*$ in $J_1(X)$ is dense in the translate $\Delta_a$ by an element $a\in J_1(X)$ of the divisor $\Delta =\tilde{\psi} (F\times F)$ defined in \ref{phi}. (iii) Let $\eta =\Phi |_{B^*}:B^*\lra\Delta_a$ be the restriction of $\tilde{\Phi}$. Then the general fiber of $\eta$ is an open subset of the 5-dimensional projective space ${\mathbb P}^5$, realized as the subset of nonsingular curves in a linear system on a cubic surface. (iv) $\tilde{\Phi}$ is ramified along $B$. More exactly, $\operatorname{corank}\nolimits \: d\tilde{\Phi}\: =\: 1$ at every point of $B^*$. \end{proposition} \begin{proof} By Proposition \ref{space-C}, a curve $C$ with $[C]\in B^*$ determines a unique unordered pair of disjoint lines $l_1,l_2$ on the nonsingular cubic surface $S=X\cap <C>$ in such a way that $C\in |h+l_1+l_2|\simeq{\mathbb P}^5$, where $<C>$ is the linear span of $C$ in ${\mathbb P}^4$, and $h$ is the hyperplane section of $S$. The pairs of disjoint lines are parametrized by an irreducible 4-fold $\Sym^2F\setminus\mbox{(incidence divisor)}$, and the pairs whose span defines a nonsingular hyperplane section form an open subset $U$ in it. Hence $B^*$ is irreducible, and its image in $J_1(X)$ coincides with $\tilde{\psi}(U)$ modulo a translation depending on the choice of reference points for $\tilde{\psi},\Phi$. By Beauville's remark in \ref{beau}, the degree of $\tilde{\psi}$ over its image is 2, so there is a unique unordered pair $l_1,l_2$ over the generic point of $\Delta_a$. This proves (i)--(iii). The assertion (iv) follows from the technique of TBS: as in the proof of Theorem \ref{main}, (ii), for $[C]\in B^*$, we find that $\beta_C$ is injective, but $\ker\: r_C=H^0({\mathcal I}_C(1))$ is of dimension 1, which implies the result. \end{proof}
\section{Introduction} Let $\mathcal{C}$ be a small category and write $\mathbf{Vec}$ for the category of vector spaces over a field $k$. By a \emph{persistence module} (over $\mathcal{C}$) we mean a functor $M\colon \mathcal{C}\to\mathbf{Vec}$. We say that $M$ is \emph{pointwise finite-dimensional} if each $M_x$ is finite-dimensional. The work in this paper is inspired by topological data analysis (TDA). For an introduction to TDA, see e.g. the survey by Carlsson \cite{carlsson2009topology}, or the recent book by Oudot \cite{oudot2015persistence} on quiver representations and TDA. Let $X$ be a topological space, $h\colon X\to \mathbb{R}$ a continuous function, and consider the following functors \begin{align*} \mathcal S^\uparrow(h)\colon& \mathbb{R}\to {\rm Top} \quad & \mathcal S^\uparrow(h)(t) = \{x\in X \mid h(x) \leq t\} \\ \FI{h}\colon& \mathbb{R}^2\to {\rm Top} \quad & \FI{h}(-s,t) = \{x\in X \mid s< h(x) < t\} \end{align*} \emph{Persistent homology} studies the evolution of the homology of the sublevel sets of $h$ and is perhaps the most prominent tool in TDA. Specifically, the \emph{$p$-th sublevel set persistence module associated to $h$} is the functor ${\rm H}_p\mathcal S^\uparrow(h)\colon \mathbb{R}\to \mathbf{Vec}$. Here ${\rm H}_p\colon {\rm Top} \to \mathbf{Vec}$ denotes the $p$-th singular homology functor with coefficients in $k$. Importantly, and as we shall see later in this paper, if ${\rm H}_p\mathcal S^\uparrow(h)$ is pointwise finite-dimensional, then it is completely determined by a collection of intervals called the \emph{barcode} of ${\rm H}_p\mathcal S^\uparrow(h)$. This collection of intervals is then in turn used to extract topological information from the data at hand; a ``long'' interval corresponds to a topological feature which persists over a significant range. 
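As a simple illustration, let $X=S^1\subset\mathbb{R}^2$ be the unit circle and $h(x,y)=y$ the height function. The sublevel set $\mathcal S^\uparrow(h)(t)$ is empty for $t<-1$, a contractible arc for $-1\leq t<1$, and all of $S^1$ for $t\geq 1$. Hence the barcode of ${\rm H}_0\mathcal S^\uparrow(h)$ consists of the single interval $[-1,\infty)$, while the barcode of ${\rm H}_1\mathcal S^\uparrow(h)$ consists of $[1,\infty)$: the loop appears at $t=1$ and persists for all larger $t$. 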
A richer invariant is obtained by considering interlevel sets: define the \emph{$p$-th interlevel set persistence of $h$} to be the functor ${\rm H}_p \FI{h}\colon \mathbb{R}^2\to \mathbf{Vec}$. By a Mayer-Vietoris argument \cite{cochoy2016decomposition} one can show that ${\rm H}_p\FI{h}$ is middle exact (see \cref{sec:upper}) when restricted to the points above the anti-diagonal. Analogously to above, assuming that ${\rm H}_p\FI{h}$ is pointwise finite-dimensional, such a module is completely determined by a collection of simple regions in $\mathbb{R}^2$. These regions in turn give valuable insight into the homological properties of the fibers of the function $h$. We refer the reader to \cite{botnan2016algebraic,cochoy2016decomposition} for an in-depth treatment. We also remark that there are many settings in which it is fruitful to combine a collection of real-valued functions into a single function $g\colon X\to \mathbb{R}^n$ \cite{carlsson2009theory}. By combining them into a single function we not only learn how the data looks from the point of view of each function (i.e. a type of measurement) but also how the different functions (measurements) interact. How to effectively use such persistence modules in data analysis is not yet clear and is currently an area of active research; see e.g. \cite{lesnick2015interactive} and the references therein. \subsection{Contributions} We give a short direct proof of the following result. \begin{thm} \label{t:decomp} Any pointwise finite-dimensional persistence module is a direct sum of indecomposable modules with local endomorphism ring. \end{thm} We remark that this result is already known by the theory of locally finitely presented additive categories. The category $\mathbf{Vec}$ is locally finitely presented, hence so is the category of persistence modules, which is a functor category. 
Now any pointwise finite-dimensional module is a direct sum of indecomposables with local endomorphism ring by the theory of $\Sigma$-pure-injectives, see (3)$\Rightarrow$(4) of \cite[\S3.2 Theorem 2]{CBlfp}. Persistence modules are often considered for partially ordered sets (where $\mathcal{C}$ is the naturally associated category). Using this result, we give a short proof of the following theorem, originally proved in a slightly weaker form in \cite{CBdpf}. \begin{thm} \label{t:totorder} Pointwise finite-dimensional persistence modules over a totally ordered set decompose into interval modules. \end{thm} Note that the advantage of the approach in \cite{CBdpf} is that it produces functors which give the multiplicity of any interval module as a direct summand. Following the ideas of \cite{CBdpf}, \cref{t:totorder} was generalized to exact (middle exact in this paper) bi-modules in \cite{cochoy2016decomposition}. We give a comparatively short proof of a slight generalization of the main theorem of \cite{cochoy2016decomposition}. \begin{thm} Pointwise finite-dimensional middle exact modules over a product of two totally ordered sets decompose into block modules. \label{thm:block} \end{thm} As a corollary to this we obtain a structure theorem for pointwise finite-dimensional persistence modules on \emph{zigzag paths}. This generalizes the structure theorem for \emph{zigzag persistent homology} given in \cite{botnanzigzag}. We refer the reader to \cite{botnanzigzag} and the references therein for a discussion of zigzag persistent homology. In the last part of the paper we apply the structure theorem for persistence modules indexed by zigzag paths to prove a structure theorem for persistence modules that are middle exact (strictly) above the anti-diagonal in $\mathbb{R}^2$. 
\begin{thm} Pointwise finite-dimensional middle exact modules over the (strictly) upper-triangular subset of the plane decompose into block modules.\label{thm:upperT} \end{thm} \begin{rem} We are indebted to D. Vossieck for pointing out the reference \cite[\S 3.6]{gabriel1992representations}, where Theorems \ref{t:decomp} and \ref{t:totorder} are both discussed, with sketch proofs. As this paper is intended to be self-contained and aimed at a broader audience, we include detailed proofs of both theorems. In fact, our proof of \cref{thm:block} depends in part on \cref{t:totorder}, and, as the reader will see, very little work is needed to prove \cref{t:totorder} once the machinery for proving (the complementary parts of) \cref{thm:block} has been introduced. \end{rem} \section{Preliminaries} Let $\mathcal{C}$ be a small category and $M,N\colon \mathcal{C}\to \mathbf{Vec}$. If $x$ is an object in $\mathcal{C}$ we write $M_x$ for the corresponding vector space, and if $\alpha\colon x\to y$ is a morphism, we write $M_\alpha\colon M_x\to M_y$. A morphism $f\colon M\to N$ is an \emph{epimorphism} (\emph{monomorphism}) if $f_x\colon M_x\to N_x$ is surjective (injective) for all $x\in {\rm Ob}(\mathcal{C})$. A morphism is an \emph{isomorphism} if it is both an epimorphism and a monomorphism. A monomorphism $f\colon M\to N$ \emph{splits}, or is a \emph{split monomorphism}, if there exists a $g\colon N\to M$ such that $g\circ f = {\rm id}_M$. We say that $M$ and $N$ are \emph{isomorphic} if there exists an isomorphism $f\colon M\to N$ and denote this by $M\cong N$. The \emph{direct sum} of $M$ and $N$ is the persistence module $M\oplus N\colon \mathcal{C}\to \mathbf{Vec}$ given by $(M\oplus N)_x = M_x\oplus N_x$ and $(M\oplus N)_\alpha = M_\alpha\oplus N_\alpha$ for all $\alpha\colon x\to y$. The persistence module $M'$ is a \emph{submodule} of $M$ if $M'_x \subseteq M_x$ and $M'_\alpha$ is the restriction of $M_\alpha$ to $M'_x$ for all $\alpha\colon x\to y$. 
We write $M'\subseteq M$ if $M'$ is a submodule of $M$. If $M$ has two non-trivial submodules $M'$ and $M''$ such that $M=M'\oplus M''$, then $M$ is \emph{decomposable} and $M'$ and $M''$ are \emph{summands} of $M$. If no such decomposition exists, then $M$ is \emph{indecomposable}. It is an elementary fact that $M'\subseteq M$ is a summand of $M$ if and only if the inclusion $M'\hookrightarrow M$ splits. If every monomorphism with domain $M$ splits, then $M$ is an \emph{injective persistence module}. The endomorphism ring $\End(M):= \Hom(M,M)$ is \emph{local} if $\theta$ or $1-\theta$ is invertible for all $\theta\in \End(M)$. The Krull--Remak--Schmidt--Azumaya theorem \cite{azumaya} asserts that persistence modules which decompose into a direct sum of indecomposables with a local endomorphism ring do so in an essentially unique way (unique up to reordering and isomorphism). If $M$ has a non-trivial decomposition then $\End(M)$ is not local. Dualizing each vector space and each linear map in a persistence module $M\colon \mathcal{C} \to \mathbf{Vec}$ yields a persistence module $DM\colon \mathcal{C}^{\rm op}\to \mathbf{Vec}$. Here $\mathcal{C}^{\rm op}$ denotes the opposite category of $\mathcal{C}$. This dualization procedure is contravariantly functorial, exact and satisfies $D^2M\cong M$ whenever $M$ is pointwise finite-dimensional. \subsection{Posets} Let $P$ be a partially ordered set (poset). Recall that $P$ can be considered as a category with objects the elements of $P$ in a natural way: \[ \Hom(p,q) = \begin{cases} \{ \iota_{qp} \} & (p\le q) \\ \varnothing & (p \not\le q) \end{cases} \] If $Q\subseteq P$ and $M\colon P\to \mathbf{Vec}$, then $M|_Q$ denotes the restriction of $M$ to $Q$. A subset $I\subseteq P$ is \emph{convex} if $p\leq q \leq r$ with $p,r\in I$ implies that $q\in I$. If $I$ satisfies the stronger condition that $q\in I$ whenever $q\leq p$ and $p\in I$, then we say that $I$ is an \emph{ideal}. 
Dually, if $I$ satisfies that $q\in I$ whenever $q\geq p$ and $p\in I$, then we say that $I$ is a \emph{filter}. Furthermore, $I$ is \emph{connected} if for every $p,q\in I$ there exists a sequence $\{r_i\}_{i=0}^u\subseteq I$ such that $r_0 =p$, $r_u = q$, and $r_i\leq r_{i+1}$ or $r_i\geq r_{i+1}$ for all $0\leq i<u$. We define an \emph{interval} to be a non-empty, connected, and convex set. Examples of intervals include $[p,q]$, $(p,q)$, $(p,q]$ and $[p,q)$, where $[p,q] = \{r\in P \mid p\leq r \leq q\}$, and similarly for the other cases. We also have intervals $[p,\infty)= \{r\in P \mid r\geq p\}$ and $(p, \infty) = \{r\in P\mid r>p\}$, and similarly for $(-\infty, p)$ and $(-\infty, p]$. The notation $\langle p,q\rangle$ is used to denote any of the appropriate intervals in $\{(p,q), [p,q), (p,q], [p,q]\}$. E.g., we have $\langle p ,\infty\rangle \in \{ (p,\infty), [p,\infty)\}$. When $P$ is totally ordered the intervals are precisely the non-empty convex sets, and if $P = \mathbb{R}$, they are all of the form $\langle p,q\rangle$. Observe that the subset $\{x \mid x^2<2\}\subseteq \mathbb{Q}$ is an interval which is not of the form $\langle p,q\rangle$. For an interval $I\subseteq P$, we write $k_I$ for the \emph{constant module} which is 1-dimensional at points on $I$, zero at points outside $I$, and with the morphisms $\iota_{yx}$ for $x,y\in I$ sent to the identity map. It follows from $\End(k_I)\cong k$ that $k_I$ is indecomposable \cite[Proposition 2.2]{botnan2016algebraic}. A subset $I\subseteq P$ is \emph{directed} if for every $p,q\in I$ there exists a $c\in I$ satisfying $p,q\leq c$. Dually, $I\subseteq P$ is \emph{codirected} if for every $p,q\in I$ there exists a $c\in I$ satisfying $p,q\geq c$. \begin{lem} Let $I\subseteq P$ be a directed ideal. Then $k_I\colon P\to \mathbf{Vec}$ is an injective persistence module. 
\label{l:injective} \end{lem} \begin{proof} This follows from the fact that $\varinjlim_{p\in I}$ is an exact functor whenever $I$ is directed. Assume that $f\colon k_I \hookrightarrow M$ is a monomorphism and consider its restriction to $I$, $f|_I\colon (k_I)|_I \hookrightarrow M|_I$. By the aforementioned exactness property \[\hat{f}:=\varinjlim_{p\in I} f|_I\colon \varinjlim_{p\in I} (k_I)|_I \hookrightarrow \varinjlim_{p\in I} M|_I\] is an injection. Let $\hat{g}$ be a left inverse to $\hat{f}$ and for $p\in I$ define $g_p: M_p\to (k_I)_p$ as the composition \[M_p \to \varinjlim_{p\in I} M|_I \xrightarrow{\hat{g}} \varinjlim_{p\in I} (k_I)|_I \xrightarrow{\cong} (k_I)_p = k.\] For $p\not\in I$ define $g_p = 0$. It is clear that $g\circ f = {\rm id}_{k_I}$. \end{proof} We remark that the converse statement of the previous lemma is also true \cite[Proposition 1.1]{hoppner1983note}. \begin{lem} \label{lem:codirected} Suppose $P$ is codirected. Let $M$ be a pointwise finite-dimensional persistence module over $P$ with $M_p \neq 0$ for all $p\in P$, and suppose that $M_p \to M_q$ is injective for all $p \leq q$. Then there is a monomorphism $k_P\hookrightarrow M$. In particular, if $P$ is also directed, then $M$ has a copy of $k_P$ as a direct summand. \end{lem} \begin{proof} Let $p$ be a point such that $M_p$ is of minimal dimension, and choose a non-zero element $m_p$ in $M_p$. For any other point $q$ in $P$, there is an element $c$ with $p,q\geq c$. Since $M_c\to M_p$ is injective and $\dim M_p$ is minimal, we have $\dim M_c = \dim M_p$, so the morphism $M_c\to M_p$ is an isomorphism. Thus $m_p$ induces an element $m_q = M_{\iota_{qc}}(M_{\iota_{pc}}^{-1}(m_p))$ in $M_q$. Using the codirectedness property again, it is easy to check that this does not depend on the choice of $c$, and that the elements $m_q$ define a morphism $k_P \to M$. This yields a monomorphism $k_P\hookrightarrow M$. The last part is immediate from \cref{l:injective}. \end{proof} The following dual result will be important. 
\begin{lem} Suppose $P$ is directed and codirected. Let $M$ be a pointwise finite-dimensional persistence module over $P$ with $M_p\neq 0$ for all $p\in P$, and suppose that $M_p \to M_q$ is an epimorphism for all $p\leq q$. Then $M$ has a copy of $k_P$ as a direct summand. \label{l:directecodirected} \end{lem} \begin{proof} Observe that $P^{{\rm op}}$ is both directed and codirected. It follows that $DM\colon P^{{\rm op}}\to \mathbf{Vec}$ satisfies the conditions of \cref{lem:codirected}. Hence $DM$ has a copy of $k_{P^{{\rm op}}}$ as a direct summand. Using that $M$ is pointwise finite-dimensional we get that $M\cong DDM$ has a copy of $D(k_{P^{{\rm op}}}) \cong k_P$ as a direct summand. \end{proof} \section{Decomposition} In this section we prove Theorem~\ref{t:decomp}. Our argument is inspired by Ringel's proof of the corresponding result for covering functors, see \cite{RingelIzmir}. First suppose $M$ is a pointwise finite-dimensional indecomposable module, and let $\theta$ be an endomorphism. If $x$ is an object in $\mathcal{C}$ then $\theta$ induces an endomorphism $\theta_x$ of $M_x$. Since $M_x$ is finite-dimensional, Fitting's lemma gives a decomposition \[ M_x = M'_x \oplus M''_x \] where $M'_x = \Ima(\theta_x^n)$ for $n\gg 0$ and $M''_x = \Ker(\theta_x^n)$ for $n\gg 0$. Moreover $\theta_x$ induces an automorphism of $M'_x$ and a nilpotent endomorphism of $M''_x$. Now if $\alpha:x\to y$ is a morphism in $\mathcal{C}$ then $M_\alpha \theta_x = \theta_y M_\alpha$. Moreover $M_\alpha$ sends $M'_x$ into $M'_y$ and $M''_x$ into $M''_y$. Namely, taking $n$ to be sufficiently large for the decompositions of $M_x$ and $M_y$, we have $M_\alpha \theta_x^n = \theta_y^n M_\alpha$, so if $m\in M''_x = \Ker(\theta_x^n)$, then $\theta_y^n M_\alpha(m)=0$, so $M_\alpha(m)\in \Ker(\theta_y^n) = M''_y$. If $m\in M'_x$ then $m = \theta_x^n(m')$, so $M_\alpha(m) = \theta_y^n M_\alpha(m') \in \Ima(\theta_y^n) = M'_y$. 
It follows that the decomposition $M_x = M'_x\oplus M''_x$ for each object $x$ in $\mathcal{C}$ gives a decomposition $M = M' \oplus M''$ of $M$ as a persistence module. Thus if $M$ is indecomposable, $M = M'$ or $M = M''$. In the first case $\theta_x$ is invertible for all $x$, so $\theta$ is invertible. If $\theta$ is not invertible, then the above decomposition shows that $\theta_x$ is nilpotent for all $x$. Assume that $(1-\theta_x)(m) = 0$ for some $m\in M_x$. Then $\theta_x(m) = m$, and hence $\theta_x^n(m) = m$ for all $n\geq 1$; since $\theta_x$ is nilpotent, this forces $m=0$. Thus $\ker (1-\theta_x) = 0$, and since each $M_x$ is finite-dimensional, $1-\theta_x$ is invertible for all $x$, so $1-\theta$ is invertible. We conclude that $\End(M)$ is local. Now let $M$ be a non-zero pointwise finite-dimensional persistence module, and let $D$ be the set of decompositions of $M$ into a direct sum of non-zero submodules. That is, letting $S$ be the set of non-zero submodules of $M$, $D$ is the set of subsets $I$ of $S$ such that $M = \bigoplus_{N\in I} N$. We consider the relation $\le$ on $D$ with $I \le J$ if $J$ is a refinement of $I$. That is, if each element of $J$ is contained in an element of $I$, or equivalently if each $N\in I$ is a direct sum of a subset of elements of $J$. In this case there is a uniquely determined mapping $f_{IJ}:J\to I$ such that for $N\in I$ we have \[ N = \bigoplus_{L\in f_{IJ}^{-1}(N)} L. \] Moreover $f_{IJ}$ is clearly surjective. It is easy to see that this relation $\le$ defines a partial ordering on $D$. Clearly $D$ is non-empty since it contains the element $\{M\}$ (as the unique minimal element). To prove the theorem, it suffices to prove that $D$ contains a maximal element, for if $I\in D$ and $N\in I$ is decomposable, say $N = N_1\oplus N_2$, then $J = (I \setminus \{N\})\cup \{N_1,N_2\}$ is in $D$, and $I < J$. Thus if $I$ is a maximal element of $D$ then it is a decomposition of $M$ into indecomposable summands. 
By Zorn's lemma, it suffices to prove that any non-empty chain $T$ in $D$ has an upper bound. We consider the inverse limit \[ L = \varprojlim_{I\in T} I \] using the maps $f_{IJ}$. An element $\lambda\in L$ is given by $\lambda_I \in I$ for all $I \in T$, satisfying $f_{IJ}(\lambda_J) = \lambda_I$ for all $I\le J$ in $T$, and we define \[ M[\lambda] = \bigcap_{I\in T} \lambda_I, \] a submodule of $M$. We show that \[ M = \bigoplus_{\lambda\in L} M[\lambda]. \] Suppose $x$ is an object in $\mathcal{C}$ and we have a relation \[ m_1+\dots+m_n = 0 \] with $m_i \in M[\lambda^i]_x$ for distinct $\lambda^i \in L$. For $i\neq j$ we have $\lambda^i \neq \lambda^j$, so $\lambda^i_I \neq \lambda^j_I$ for some $I$. But then also $\lambda^i_J \neq \lambda^j_J$ whenever $I\le J$. Repeating for all pairs $i\neq j$, and using that $T$ is a chain, there is some $J$ with $\lambda^1_J,\dots,\lambda^n_J$ distinct. But then since $M$ is the direct sum of the elements of $J$, and $m_i \in M[\lambda^i]_x \subseteq (\lambda^i_J)_x$, we deduce that $m_i=0$ for all $i$. Suppose that $m\in M_x$ and $m\neq 0$. For any $I\in T$ we can write \[ m = m_1+\dots+m_n \] with $n\ge 1$ and the $m_i$ non-zero and belonging to $(N_i)_x$ for distinct elements $N_i$ of $I$. Moreover \[ n \le \dim \bigoplus_{i=1}^n (N_i)_x \le \dim M_x. \] Choose $I\in T$ such that the decomposition of $m$ has $n$ maximal. For any $J$ in $T$ with $I \le J$, the submodule $N_i$ breaks up as a direct sum of elements of $J$, but the element $m_i$ does not become a non-trivial sum of terms, for otherwise $m$ would decompose into more than $n$ non-zero terms. Thus $m_i$ must belong to one of the submodules in $J$. This defines an element $\lambda^i \in L$ (for $J\le I$ in $T$ one takes $\lambda^i_J = f_{JI}(N_i)$), and $m_i \in M[\lambda^i]_x$. Thus $m\in\sum_{\lambda\in L} M[\lambda]_x$. Thus, as claimed, $M = \bigoplus_{\lambda\in L} M[\lambda]$. We now delete any terms from the sum which are zero. Letting $U = \{ M[\lambda] : \text{$\lambda\in L$ and $M[\lambda]\neq 0$}\}$ we have $M = \bigoplus_{N \in U} N$ and so $U\in D$. 
Clearly $U$ is an upper bound for $T$, as required. \section{Decomposition into interval modules} In this section we prove Theorem~\ref{t:totorder}. Let $M\colon S\to \mathbf{Vec}$ for a totally ordered set $S$. The support of an indecomposable persistence module over a totally ordered set must necessarily be an interval. Hence, it suffices to show that if $M$ is indecomposable with support $I$, then $M$ is isomorphic to $k_I$. Furthermore, we may assume without loss of generality that the support of $M$ is the whole of $S$. We show first that if $S$ has a minimal element $s$, then $M$ is isomorphic to $k_S$. Since $M_s\neq 0$ we can choose $0\neq m\in M_s$. Let $J=\{x\in S\mid M_{\iota_{xs}}(m)\neq 0\}$ and define a monomorphism $k_J\to M$ by sending the canonical basis element of the vector space $(k_J)_x$ to $M_{\iota_{xs}}(m)$. The constant module $k_J$ is injective by \cref{l:injective}, so the morphism is a split monomorphism. Since $M$ is indecomposable, it must be an isomorphism. We conclude that $M\cong k_J=k_S$. Next let $M$ be a pointwise finite-dimensional indecomposable persistence module. We will show that the map $M_{\iota_{yx}}\colon M_x\to M_y$ is surjective for all $x<y$. Consider the restriction $M'$ of $M$ to $S' = \{ s\in S : s\ge x\}$. This is a pointwise finite-dimensional persistence module over $S'$, so it is a direct sum of indecomposables. Take one of the indecomposable summands $N$ of $M'$. If $N_x=0$ then the projection and inclusion maps $M'\to N\to M'$ extend to give maps $M\to N\to M$, so $N$ is a summand of $M$, a contradiction. Thus $N_x\neq 0$, and since the support of $N$ is an interval of $S'$ with minimal element $x$, the first part of the proof shows that $N$ is an interval module. Hence $M'$ is isomorphic to a direct sum of interval modules for intervals with minimal element $x$. This shows that the maps $M_x\to M_y$ are surjective for all $x<y$. The result now follows from \cref{l:directecodirected} and \cref{t:decomp}. \section{Decomposition of Middle Exact Bi-Modules} In this section we prove \cref{thm:block}. 
Let $S$ and $T$ be totally ordered sets and let $P=S\times T$ denote their product. \begin{dfn} A persistence module $M\colon P\to \mathbf{Vec}$ is \emph{middle exact} if \begin{equation} 0\rightarrow M_a \xrightarrow{M_{\iota_{ba}}\oplus M_{\iota_{ca}}} M_b\oplus M_c \xrightarrow{(M_{\iota_{db}}, -M_{\iota_{dc}})} M_d\rightarrow 0 \label{eq:middlex} \end{equation} is middle exact (i.e. exact at the middle term) whenever $a=(x,y)$, $b=(x,y')$, $c=(x',y)$ and $d=(x', y')$ with $x\leq x'$ in $S$ and $y\leq y'$ in $T$. The trivial vector spaces of \cref{eq:middlex} have been included for convenience as we will also consider the case that \cref{eq:middlex} is in fact short exact. \label{def:middleex} \end{dfn} \begin{dfn} A non-empty subset $I\subseteq P$ is a \emph{block} if one of the following holds: \begin{enumerate} \item $I=J_S\times J_T$ for interval ideals $J_S$ and $J_T$, \item $I=J_S\times J_T$ for interval filters $J_S$ and $J_T$, \item $I=J_S \times T$ for an interval $J_S$, \item $I=S\times J_T$ for an interval $J_T$. \end{enumerate} We shall refer to these as blocks of type death ({\textbf{db}}), birth (\textbf{bb}), vertical ({{\textbf{vb}}}) and horizontal (\textbf{hb}), respectively. Observe that one block may be of several types. \end{dfn} We say that $k_I$ is a \emph{block module} whenever $I$ is a block. Observe that if $I$ is of type {{\textbf{db}}}, then $I$ is a directed ideal. Hence, $k_I$ is injective by \cref{l:injective}. For $x\in S$ and $y\in T$ define subposets \begin{align*} (x,y)^{\rightleftarrows}&:= \left(\{x\}\times (-\infty, y]\right)\cup \left((-\infty, x]\times \{y\}\right)\subseteq S\times T\\ (x,y)^{\leftrightarrows}&:= \left(\{x\}\times [y, \infty)\right)\cup \left([x, \infty)\times \{y\}\right)\subseteq S\times T.\end{align*} Recall that a non-empty subset $I$ of $(x,y)^{\rightleftarrows}$ or $(x,y)^{\leftrightarrows}$ is an interval if it is convex and connected. 
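To illustrate the four types of blocks, let $S=T=\mathbb{R}$. The set $(-\infty,0]\times(-\infty,1)$ is a block of type {\textbf{db}}, the set $(0,\infty)\times[1,\infty)$ is a block of type \textbf{bb}, the set $(0,1]\times\mathbb{R}$ is a block of type {{\textbf{vb}}}, and the set $\mathbb{R}\times[0,1]$ is a block of type \textbf{hb}. The whole plane $\mathbb{R}\times\mathbb{R}$ is a block of all four types at once, since $\mathbb{R}$ is simultaneously an interval, an interval ideal and an interval filter in itself. 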
\begin{lem} Let $M\colon (x,y)^\star \to \mathbf{Vec}$ be pointwise finite-dimensional and indecomposable for $\star\in \{\rightleftarrows, \leftrightarrows\}$. Then $M\cong k_I$ for some interval $I$. \label{thm:zz} \end{lem} \begin{proof} The two cases are dual so it suffices to prove it for the case $\star =~\rightleftarrows$. Let $M^\ell$ denote the restriction of $M$ to $(-\infty, x]\times \{y\}$. Assume that $\ker M_\alpha \neq 0$ for some $\alpha\colon (t,y) \to (x,y)$. Then $\ker M^\ell_\alpha \neq 0$, and by \cref{t:totorder}, $M^\ell$ has a summand $k_I$, where $I\subseteq (-\infty, x)\times \{y\}$ is an interval. Since $(x,y)\not\in I$, this shows that $k_I$ extends to a summand of $M$ and thus $M\cong k_I$. The corresponding argument applies if $\ker M_\alpha \neq 0$ for some $\alpha\colon (x,t) \to (x,y)$. To conclude the proof it suffices to consider the case that $M_\alpha$ is injective for all $\alpha\colon p\to (x,y)$. As $\dim M_{(x,y)} < \infty$, we can choose indices \begin{align*} -\infty &= a'_{0} < a'_{1} < \cdots < a'_{{n-1}} < a'_{n}= y\\ -\infty &= a_{0} < a_{1} < \cdots < a_{{n-1}} < a_{n}= x \end{align*} such that $M_{(x,t)} \to M_{(x,t')}$ and $M_{(s,y)} \to M_{(s',y)}$ are isomorphisms whenever $t, t'\in (a'_{i}, a'_{{i+1}})$ and $s,s'\in (a_{i}, a_{i+1})$. Thus, by choosing $b_i\in (a_i, a_{i+1})$ and $b_i'\in (a'_i, a'_{i+1})$, we get that $M$ is completely described by the following persistence module \begin{center} \begin{tikzpicture}[scale=0.5][baseline= (a).base] \node[scale=0.8] (a) at (0,0){ \begin{tikzcd} M_{(b_0,y)}\ar[r]& M_{(a_1,y)}\ar[r] & M_{(b_1,y)}\ar[r]& \cdots \ar[r]& M_{(b_{n-1},y)}\ar[r]&M_{(x,y)}\\ M_{(x,b'_0)}\ar[r] & M_{(x,a'_1)}\ar[r] & M_{(x,b'_1)}\ar[r]& \cdots \ar[r]& M_{(x,a'_{n-1})}\ar[r]& M_{(x, b'_{n-1})}\ar[u] \end{tikzcd}}; \end{tikzpicture} \end{center} A decomposition of this persistence module lifts to a decomposition of $M$. 
It follows from the representation theory of quivers of type $A$ that $M\cong k_I$ for some interval $I$; see for example \cite[Theorem 1.1]{ringdynkin}. \end{proof} For $(s,t)\in S\times T$, let ${\bf v}_s = \{ (s,y) \mid y\in T\}$, ${\bf h}_t = \{ (x,t)\mid x\in S\}$, and let $M^{\bf v_s}$ and $M^{\bf h_t}$ denote the respective restrictions of $M$ to ${\bf v}_s$ and ${\bf h}_t$. \begin{lem} Assume that $M$ is pointwise finite-dimensional and middle exact. Let $s\in S, t\in T$, and let $J_S\subseteq S$ and $J_T\subseteq T$ be intervals. \begin{enumerate} \item Assume that there exists an upper bound for $J_T$ in $T-J_T$. A monomorphism $h\colon k_{\{s\}\times J_T} \hookrightarrow M^{\bf v_s}$ lifts to a monomorphism \[h\colon k_{(-\infty, s]\times J_T} \hookrightarrow M|_{(-\infty, s]\times T}.\] \item Assume that there exists an upper bound for $J_S$ in $S-J_S$. A monomorphism $h\colon k_{J_S\times \{t\}} \hookrightarrow M^{\bf h_t}$ lifts to a monomorphism \[h\colon k_{J_S\times (-\infty, t]} \hookrightarrow M|_{S\times (-\infty, t]}.\] \end{enumerate} \label{lem:lift} \end{lem} \begin{proof} We prove the first case; the second case is symmetrical. For $p=(p_1, p_2)\in (-\infty,s]\times J_T$ let $\pi_{J}(p)$ denote the morphism $p\to (s,p_2)$. Write $\epsilon>J_T$ if $\epsilon\in T-J_T$ and $\epsilon$ is an upper bound for $J_T$. For $\epsilon > J_T$ let $\alpha_p^\epsilon$ denote the morphism $p \to (p_1, \epsilon)$ and define \[E_p^\epsilon = M_{\pi_{J}(p)}^{-1}(\Ima h_{(s,p_2)}) \cap \ker M_{\alpha_p^\epsilon}.\] It follows from the middle exactness condition on $M$ that $E_p^\epsilon \neq 0$, and that $E_q^\epsilon \to E_p^\epsilon$ is a surjection for all $q\leq p$ in $(-\infty,s]\times J_T$. Now consider \[E_p := \bigcap_{\epsilon>J_T} E_p^\epsilon.\] Since $M$ is pointwise finite-dimensional, there exists an $\epsilon_p>J_T$ such that $E_p = E^{\epsilon_p}_p$, and therefore it is also true that $E_p\neq 0$, and that the map $E_q\rightarrow E_p$ is a surjection for all $q\leq p$. 
Since $(-\infty, s]\times J_T$ is a product of totally ordered sets, it is both directed and codirected. Hence it follows from \cref{l:directecodirected} that $E\colon (-\infty, s]\times J_T\to \mathbf{Vec}$ has a copy of $k_{(-\infty, s]\times J_T}$ as a summand. A suitable scalar multiple of the canonical inclusion $k_{(-\infty, s]\times J_T} \hookrightarrow E$ which agrees with $h$ on $\{ s \}\times J_T$ defines a lift of $h$.\end{proof} \begin{lem} Let $M$ be pointwise finite-dimensional, middle exact and indecomposable. If there exist $a,b,c,d$ as in \cref{def:middleex} such that $\ker M_{\iota_{ba}}\cap \ker M_{\iota_{ca}} \neq 0$, then $M\cong k_I$ where $I$ is of type {\textbf{db}}. \label{lem:splitinj} \end{lem} \begin{proof} Write $a=(x,y)$ and $d=(x',y')$ as in \cref{def:middleex}. By assumption and \cref{thm:zz}, the restriction of $M$ to $(x,y)^{\leftrightarrows}$ must contain a summand isomorphic to $k_J$, where $J=\left( \{x\}\times J_T\right) \cup \left( J_S \times \{y\}\right)$ and $J_S$ and $J_T$ are intervals satisfying: \begin{itemize} \item $x\in J_S$ is minimal and $J_S\subseteq [x, x')$, \item $y\in J_T$ is minimal and $J_T\subseteq [y, y')$. \end{itemize} We shall construct a monomorphism $k_I\hookrightarrow M$ where \[I=\left((-\infty, x]\cup J_S\right) \times \left((-\infty,y]\cup J_T\right).\] Since $I$ is of type {\textbf{db}}, the module $k_I$ is injective, and it follows that $M\cong k_I$. Consider the following subsets of $P$: \begin{align*} I_1=J_S\times J_T \qquad I_2 = (-\infty, x] \times J_T \qquad I_3 = \left((-\infty, x]\cup J_S\right) \times (-\infty, y]. \end{align*} Observe that $I = I_1\cup I_2\cup I_3$. The proof proceeds in three steps. \textbf{Step 1: Constructing $k_{I_1}\hookrightarrow M$}. Let $N \subseteq M|_{(x,y)^{\leftrightarrows}}$ be such that $M|_{(x,y)^{\leftrightarrows}}=N\oplus N^\bullet$ and $N\cong k_{J}$, and choose $0\neq m \in N_{(x,y)}\subseteq M_{(x,y)}$. We shall show that $M_\alpha(m)\neq 0$ for all $\alpha\colon (x,y)\to p$ where $p\in J_S\times J_T$. 
Assume for the sake of contradiction that $M_\alpha(m) = 0$ for $\alpha\colon (x,y)\to p=(p_1, p_2)$. By the middle exact sequence \[M_{(x,y)} \to M_{(x,p_2)}\oplus M_{(p_1, y)} \to M_{(p_1, p_2)}\] there exists an element $\hat{m} \in M_{(x,y)}$ such that $M_{\alpha'}(\hat{m}) = M_{\alpha'}(m)$ and $M_{\alpha''}(\hat{m}) = 0$, for $(p_1,y) \xleftarrow{\alpha'} (x,y) \xrightarrow{\alpha''} (x,p_2)$. The first equality, together with the direct sum decomposition of $M|_{(x,y)^{\leftrightarrows}}$ and the injectivity of $N_{\alpha'}$, gives $\hat{m} = m + n^\bullet$ for some $n^\bullet \in N^\bullet_{(x,y)}$. Substituting this into the second equality yields $M_{\alpha''}(m) = -M_{\alpha''}(n^\bullet)$. Since $M_{\alpha''}(m) = N_{\alpha''}(m)$ is a non-zero element of $N_{(x,p_2)}$ (note that $(x,p_2)\in J$), while $-M_{\alpha''}(n^\bullet)$ lies in $N^\bullet_{(x,p_2)}$, this contradicts $M|_{(x,y)^{\leftrightarrows}} = N\oplus N^\bullet$. For any $\alpha\colon (x,y)\to (p_1, p_2)\not\in I_1$, it follows by commutativity that $M_\alpha(m) = 0$. Hence, we have a well-defined monomorphism $h\colon k_{I_1}\hookrightarrow M$ given by $h_{p}(1) = M_{\alpha}(m)$ for $\alpha\colon (x,y)\to p$. \textbf{Step 2: Constructing $k_{I_1\cup I_2}\hookrightarrow M$.} The $h$ of the previous step restricts to a monomorphism $h'\colon k_{\{x\}\times J_T}\hookrightarrow M^{\bf v_x}$. By (1) of \cref{lem:lift} this restriction extends to a monomorphism $h'\colon k_{I_2}\hookrightarrow M|_{(-\infty, x]\times T}$. This defines a lift of $h$ to $h\colon k_{I_1\cup I_2}\hookrightarrow M$. \textbf{Step 3: Constructing $k_{I_1\cup I_2\cup I_3}\hookrightarrow M$.} The $h$ of Step 2 restricts to a monomorphism $h''\colon k_{\left((-\infty, x]\cup J_S\right)\times \{y\}} \hookrightarrow M^{\bf h_y}$. By (2) of \cref{lem:lift} this restriction extends to a monomorphism $h''\colon k_{I_3}\hookrightarrow M|_{S \times (-\infty, y]}$. This defines a lift of $h$ to $h\colon k_{I_1\cup I_2 \cup I_3}\hookrightarrow M$. \end{proof} We also have the following dual lemma. 
\begin{lem} \label{lem:splitproj} Let $M$ be pointwise finite-dimensional, middle exact and indecomposable. If there exist $a,b,c,d$ as in \cref{def:middleex} such that $\Coker ((M_{\iota_{db}}, -M_{\iota_{dc}}))\neq 0$, then $M\cong k_I$ where $I$ is of type \textbf{bb}. \end{lem} \begin{proof} Observe that $DM$ is middle exact whenever $M$ is, and that a block of type \textbf{bb} in $S\times T$ is a block of type {\textbf{db}} in $(S\times T)^{\rm op}$. Since $M\cong D^2M$ we also have that $DM$ is indecomposable, and the assumption on the cokernel translates into the kernel condition of \cref{lem:splitinj} for $DM$. Hence $DM \cong k_I$ for a block $I$ of type {\textbf{db}} in $(S\times T)^{\rm op}$, and thus $M\cong D^2M \cong D(k_I) \cong k_I$, where $I$ is of type \textbf{bb} when viewed in $S\times T$. \end{proof} The previous two lemmas show that it suffices to consider the case where \cref{eq:middlex} is \emph{short exact}. Define persistence modules \[\Ima M^\leftarrow, \Ima M^\downarrow, \ker M^\rightarrow, \ker M^\uparrow\colon P \to \mathbf{Vec}\] by \begin{align*} \Ima M^\leftarrow_{(p_1, p_2)} &= \bigcap_{\alpha: (q, p_2)\to (p_1, p_2)} \Ima M_\alpha, \quad \ker M^\rightarrow_{(p_1, p_2)} = \bigcup_{\alpha: (p_1, p_2)\to (q, p_2)} \ker M_\alpha \\ \Ima M^\downarrow_{(p_1, p_2)} &= \bigcap_{\alpha: (p_1, q)\to (p_1, p_2)} \Ima M_\alpha, \quad \ker M^\uparrow_{(p_1, p_2)} = \bigcup_{\alpha: (p_1, p_2)\to (p_1, q)} \ker M_\alpha \end{align*} It is not hard to see that these are submodules of $M$. By definition, $M_{\alpha}$ maps $\Ima M^\leftarrow_{(p_1, p_2)}$ onto $\Ima M^\leftarrow_{(q, p_2)}$ for any $\alpha\colon (p_1, p_2)\to (q, p_2)$. Let $\alpha\colon (p_1, p_2)\to (p_1,q)$. Since $M$ is pointwise finite-dimensional, there exists $s\in S$ such that $\Ima M^\leftarrow_{(p_1, p_2)} = \Ima M_\beta$ and $\Ima M^\leftarrow_{(p_1, q)} = \Ima M_{\beta'}$ where $\beta\colon (s, p_2)\to (p_1, p_2)$ and $\beta'\colon (s, q)\to (p_1,q)$. This shows that $\Ima M^\leftarrow$ is a submodule of $M$. The other cases are similar. Following the same line of argument we also have the following simple lemma. 
\begin{lem} Let $M$ be pointwise finite-dimensional, middle exact and assume that \cref{eq:middlex} is short exact for all $a,b,c,d$ as in \cref{def:middleex}. Then $\ker M^\rightarrow \cap \ker M^\uparrow =0$ and $M = \Ima M^\leftarrow+ \Ima M^\downarrow$. \label{lem:shortexact} \end{lem} \begin{lem} Let $M$ be as in \cref{lem:shortexact}. If $\Ima M^\leftarrow\cap \ker M^\rightarrow \neq 0$ or $\Ima M^\downarrow\cap \ker M^\uparrow\neq 0$, then $M\cong k_I$ where $I$ is of type {\textbf{db}}. \label{lem:kerker} \end{lem} \begin{proof} We prove it for the first case; the second case is symmetrical. Let $W=\Ima M^\leftarrow\cap \ker M^\rightarrow$ and assume that $W_{(x, y)} \neq 0$. By \cref{t:totorder} and the assumptions on $W$, the restriction $W^{\bf h_{y}}$ decomposes as a direct sum $\oplus_J k_J$ where at least one interval ideal $J$ has an upper bound in ${\bf h_{y}}-J$. Fix such a $J$ and consider the associated monomorphism $h\colon k_{J\times \{y\}} \hookrightarrow W^{\bf h_{y}} \subseteq M^{\bf h_{y}}$. By \cref{lem:shortexact}, $\ker M^\rightarrow_{(s,y)}\cap \ker M^\uparrow_{(s,y)} = 0$, and therefore we must have $M_\alpha(h_{(s,y)}(1)) \neq 0$ for all $\alpha\colon (s,y)\to (s, p_2)$. Hence, $h$ lifts to a monomorphism $k_{J\times [y, \infty)}\hookrightarrow M$. This monomorphism can in turn be lifted to $h: k_{J\times T} \to M$ by means of (2) of \cref{lem:lift}. Since $J\times T$ is of type {{\textbf{db}}}, the module $k_{J\times T}$ is injective by \cref{l:injective} and the result follows. \end{proof} We are now ready to prove the main statement of this section. \begin{proof}[Proof of \cref{thm:block}] By \cref{t:decomp} it suffices to show the result for $M$ indecomposable. If the conditions of \cref{lem:splitinj} or \cref{lem:splitproj} are satisfied, then we are done. Thus, we may assume that \cref{eq:middlex} is short exact. Consider the submodules $\Ima M^\leftarrow$ and $\Ima M^\downarrow$, and an arbitrary $(x,y)\in P$. 
By \cref{lem:kerker} we may assume that $\ker (\Ima M^\leftarrow_\alpha) = 0$ and $\ker (\Ima M^\downarrow_\beta) =0$ for all $\alpha\colon(x,y) \to (x',y)$ and $\beta\colon (x,y)\to (x, y')$. Since these morphisms are surjective by definition, it follows that they are in fact isomorphisms. Hence, if $(\Ima M^\leftarrow)^{\bf v_x} \cong \bigoplus_J k_J$, then $\Ima M^\leftarrow \cong \bigoplus_J k_{S\times J}$, and is therefore block-decomposable. Symmetrically we also get that $\Ima M^\downarrow$ is block-decomposable. By \cref{lem:shortexact} we have that $M=\Ima M^\leftarrow + \Ima M^\downarrow$. Let $W=\Ima M^\leftarrow\cap\Ima M^\downarrow$ and observe that the internal morphisms of $W$ are all isomorphisms. Thus, if $W\neq 0$, then we have a monomorphism $k_{P}\hookrightarrow W \subseteq M$, and therefore $M\cong k_{P}$. If $W=0$, then $M=\Ima M^\leftarrow\oplus \Ima M^\downarrow$, and since $M$ is indecomposable, $M=\Ima M^\leftarrow$ or $M=\Ima M^\downarrow$. \end{proof} \subsection{Decomposition of Infinite Zigzags} \label{sec:zigzag} Define a \emph{zigzag path} $\gamma$ to be a function $\gamma: \mathbb{Z} \to \mathbb{Z}^2$ satisfying \[\gamma(i+1) \in \left\{\gamma(i) + (1,0), \gamma(i) - (0,1)\right\}\] and $\lim_{i\to \pm \infty} \gamma(i) = (\pm \infty, \mp \infty)$. For such a path $\gamma$ let $Z(\gamma)\subseteq \mathbb{R}^2$ be the poset \[Z(\gamma) := \{ (s,t)\in \mathbb{R}^2 \mid \exists i\in \mathbb{Z} \text{ such that } \gamma(i) \leq (s,t) \leq \gamma(i+1)\}. \] Observe that $Z(\gamma)$ separates $\mathbb{R}^2-Z(\gamma)$ into two disjoint subsets \begin{align*} R_U&=\{(s,t) \mid \exists p\in Z(\gamma) \text{ such that } (s,t)\geq p\}-Z(\gamma)\\ R_L&= \{(s,t) \mid \exists p\in Z(\gamma) \text{ such that } (s,t) \leq p\}-Z(\gamma). \end{align*} We say that a non-empty subset $I\subseteq Z(\gamma)$ is an \emph{interval} if it is convex and connected. Observe that a non-trivial intersection of a block and $Z(\gamma)$ is an interval.
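The two defining conditions on a zigzag path are easy to check mechanically. Below is a minimal Python sketch; the staircase path $\gamma$ is a hypothetical example chosen for illustration, not one taken from the text:

```python
# A hypothetical staircase path: gamma(2k) = (k, -k), gamma(2k+1) = (k+1, -k).
# It alternates right-steps and down-steps, so gamma(i) -> (+inf, -inf) as
# i -> +inf and gamma(i) -> (-inf, +inf) as i -> -inf, as the definition requires.
def gamma(i):
    k, r = divmod(i, 2)  # Python's divmod also handles negative i correctly
    return (k + r, -k)

# The defining step condition: gamma(i+1) is gamma(i)+(1,0) or gamma(i)-(0,1).
def is_zigzag_step(p, q):
    return q in {(p[0] + 1, p[1]), (p[0], p[1] - 1)}

assert all(is_zigzag_step(gamma(i), gamma(i + 1)) for i in range(-100, 100))
```

Any function built from such right- and down-steps with the stated limits qualifies; the check above only verifies the step condition on a finite window.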
\begin{cor} Let $\gamma$ be a zigzag path. If $M\colon Z(\gamma)\to \mathbf{Vec}$ is pointwise finite-dimensional, then $M$ decomposes into interval modules. \label{thm:zigzag} \end{cor} To prove this we need the following lemma. \begin{lem} Let $M\colon \mathbb{R}^2\to \mathbf{Vec}$ be such that $M|_{[i, i+1]\times [j, j+1]}$ is middle exact for all $(i,j)\in \mathbb{Z}^2$. Then $M$ is middle exact. \label{lem:subdivide} \end{lem} \begin{proof} Let $a,b,c,d$ be as in \cref{def:middleex} and choose any point $a\leq (s, t)\leq d$. Consider the following commutative diagram \[ \begin{tikzcd} M_b\ar[r] & M_{(s, y')} \ar[r] & M_d\\ M_{(x, t)} \ar[r]\ar[u] & M_{(s,t)} \ar[r]\ar[u] & M_{(x', t)}\ar[u]\\ M_a \ar[u]\ar[r] & M_{(s,y)} \ar[u]\ar[r] & M_{c}\ar[u] \end{tikzcd} \] A simple diagram chase shows that if $M$ satisfies the middle exactness condition on the four minimal rectangles, then it also satisfies it on the larger bounding rectangle. Thus, we may iteratively subdivide the bounding rectangle such that the corner points of any (non-trivial) minimal rectangle all lie in a square $[i, i+1]\times [j, j+1]$ for some $(i,j)$. \end{proof} Let $\lceil t\rceil$ denote the least integer \emph{strictly} greater than $t$, and let $\lfloor t\rfloor$ denote the greatest integer \emph{strictly} less than $t$. We can extend $M$ to a representation $E_\gamma(M)\colon \mathbb{R}^2\to \mathbf{Vec}$ recursively as follows \begin{equation} E_\gamma(M)_{(s,t)} = \begin{cases} M_{(s,t)} &\text{ if } (s,t)\in Z(\gamma)\\ \Ker \left( M_{(s,\lceil t\rceil)}\oplus M_{(\lceil s \rceil ,t)}\to M_{(\lceil s\rceil ,\lceil t \rceil )}\right) &\text{ if } (s,t)\in R_L\\ \Coker \left(M_{(\lfloor s \rfloor, \lfloor t\rfloor)} \to M_{(s,\lfloor t\rfloor )}\oplus M_{(\lfloor s \rfloor ,t)}\right) &\text{ if } (s,t)\in R_U \end{cases} \label{eq:E} \end{equation} where the internal morphisms are given by functoriality of $\Ker$ and $\Coker$.
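Note that $\lceil\,\cdot\,\rceil$ and $\lfloor\,\cdot\,\rfloor$ here are the \emph{strict} versions of the usual ceiling and floor, so they differ from the standard ones exactly at the integers. A small Python sketch of this convention (the function names are our own, not taken from the text):

```python
import math

def strict_ceil(t):
    # least integer strictly greater than t; e.g. strict_ceil(2) == 3
    return math.floor(t) + 1

def strict_floor(t):
    # greatest integer strictly less than t; e.g. strict_floor(2) == 1
    return math.ceil(t) - 1
```

On non-integer arguments these agree with `math.ceil` and `math.floor`; on integer arguments they shift by one, and this strictness is what makes each recursive call in the definition above move strictly toward $Z(\gamma)$.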
The recursive definition is well-defined, as every recursive call terminates after finitely many steps. An equivalent definition of $E_\gamma(M)$ using limits and colimits can be given as follows: for $(s,t)\in \mathbb{R}^2$ let $D(s,t) = \{p\in \mathbb{R}^2\mid p \leq (s,t)\}$ and $U(s,t) = \{p\in \mathbb{R}^2 \mid p\geq (s,t)\}$. Then $E_\gamma(M)$ is the following persistence module \[ E_\gamma(M)_{(s,t)} = \begin{cases} M_{(s,t)} &\text{ if } (s,t)\in Z(\gamma)\\ \varprojlim M|_{Z(\gamma)\cap U(s,t)} &\text{ if } (s,t)\in R_L\\ \varinjlim M|_{Z(\gamma)\cap D(s,t)} &\text{ if } (s,t)\in R_U \end{cases}. \] By \cref{eq:E} we see that $E_\gamma(M)$ is middle exact on every square $[i, i+1]\times [j,j+1]$ and thus middle exact by \cref{lem:subdivide}. As $E_\gamma(M)$ is clearly pointwise finite-dimensional it follows from \cref{thm:block} that $E_\gamma(M)$ is block-decomposable. Therefore \[M=E_\gamma(M)|_{Z(\gamma)} \cong (\oplus_J k_J)|_{Z(\gamma)}\cong \oplus_J k_{J\cap Z(\gamma)}\] where the $J$'s are blocks. This concludes the proof of \cref{thm:zigzag}. \subsection{Upper-triangular support} \label{sec:upper} In this section $T\subseteq \mathbb{R}^2$ denotes the \emph{strictly upper-triangular} subset $\{(x, y)\in \mathbb{R}^2 \mid x+y > 0\}$, and $\overline T\subseteq \mathbb{R}^2$ denotes the \emph{upper-triangular} subset $\{(x, y)\in \mathbb{R}^2 \mid x+y \geq 0\}$. We define a block in $T$ to be a subset of the form $J\cap T$, where $J\subseteq \mathbb{R}^2$ is a block. Furthermore, $M\colon T\to \mathbf{Vec}$ is \emph{middle exact} if \cref{eq:middlex} is middle exact for all such $a,b,c,d\in T$. Blocks and middle exact modules are defined accordingly in the upper-triangular setting. First we prove \cref{thm:upperT} in the strictly upper-triangular setting. Observe that if $I\subseteq \mathbb{R}^2$ is of type {{\textbf{db}}}, then $I\cap T$ is both an ideal and directed. Hence, $k_{I\cap T}\colon T\to \mathbf{Vec}$ is injective by \cref{l:injective}.
\begin{lem} Let $M\colon T\to \mathbf{Vec}$ be pointwise finite-dimensional, middle exact and indecomposable. If there exist $a,b,c,d\in T$ as in \cref{def:middleex} such that \[\ker (M_{\iota_{ba}}) \cap \ker (M_{\iota_{ca}}) \neq 0,\] then $M\cong k_{I\cap T}$ where $I$ is of type {\textbf{db}}. \label{lem:splitinjT} \end{lem} \begin{proof} The restriction of $M$ to $((x-y)/2, \infty)\times ((y-x)/2, \infty)$ is again middle exact and by \cref{lem:splitinj} it must have a summand isomorphic to $k_{R_0}$ where $R_0=((x-y)/2, s''\rangle \times ((y-x)/2, t''\rangle$ for $s'',t''\in \mathbb{R}$. This defines a monomorphism $f_0\colon k_{R_0}\hookrightarrow M$ of persistence modules for $T$. Let $I = (-\infty, s''\rangle\times (-\infty, t''\rangle$, and write $J = I\cap T$ as a disjoint union $\bigcup_{n=0}^\infty R_n$, where (i) each $R_n$ is of the form $(x_n,x_n'\rangle\times (y_n,y_n'\rangle$, and (ii) $J\setminus J_n$ is an ideal in $T$ for all $n$, where $J_n = \bigcup_{i=0}^n R_i$. \[ \setlength{\unitlength}{0.5cm} \begin{picture}(12,12)(-6,-6) \put(-6,6){\line(1,-1){12}} \put(5,5){\line(-1,0){11}} \put(5,5){\line(0,-1){11}} \put(1,-1){\line(1,0){4}} \put(1,-1){\line(0,1){6}} \put(3,-3){\line(1,0){2}} \put(3,-3){\line(0,1){2}} \put(-2,2){\line(1,0){3}} \put(-2,2){\line(0,1){3}} \put(-3.5,3.5){\line(1,0){1.5}} \put(-3.5,3.5){\line(0,1){1.5}} \put(-0.5,0.5){\line(1,0){1.5}} \put(-0.5,0.5){\line(0,1){1.5}} \put(2,-2){\line(1,0){1}} \put(2,-2){\line(0,1){1}} \put(4,-4){\line(1,0){1}} \put(4,-4){\line(0,1){1}} \put(3,2){\makebox(0,0){$R_0$}} \put(-0.5,3.5){\makebox(0,0){$R_1$}} \put(4,-2){\makebox(0,0){$R_2$}} \put(-2.75,4.25){\makebox(0,0){$R_3$}} \put(0.25,1.25){\makebox(0,0){$R_4$}} \put(2.5,-1.55){\makebox(0,0){$R_5$}} \put(4.5,-3.55){\makebox(0,0){$R_6$}} \put(-5,5){\circle*{0.2}} \put(-5,4.8){\makebox(0,0)[tr]{$(-t'',t'')$}} \put(5,-5){\circle*{0.2}} \put(4.8,-5){\makebox(0,0)[tr]{$(s'',-s'')$}} \put(5,5){\circle*{0.2}}
\put(4.8,4.8){\makebox(0,0)[tr]{$(s'',t'')$}} \put(1,-1){\circle*{0.2}} \put(1,-1){\makebox(0,0)[tr]{$\displaystyle (\frac{x-y}{2},\frac{y-x}{2})$}} \end{picture} \] By induction we extend $f_0$ to a monomorphism $f_n\colon k_{J_n}\hookrightarrow M$ for all~$n$. Namely, given $f_{n-1}$, we construct $f_n$. There are two situations we need to consider: (a) where points above and to the right of $R_n$ are in $J_{n-1}$ (for example $R_4$, $R_5$), and (b) where points to the right of $R_n$ are in $J_{n-1}$ and points above $R_n$ are not in $J$ (for example $R_1$, $R_3$), or the dual situation (for example $R_2$, $R_6$). For $p\in R_n$ we construct a set $\emptyset \neq E_p \subseteq M_p$ as follows. For situation (a), let $q\in J_{n-1}$ be a point above $p$ and let $s\in J_{n-1}$ be a point to the right of $p$. We complete them to a rectangle $pqrs$. Then $r\in J_{n-1}$, and $(f_{n-1})_q(1) \in M_q$ and $(f_{n-1})_s(1)\in M_s$ have the same image $(f_{n-1})_r(1)\in M_r$. By middle exactness, the set \[ E_p = \{ m\in M_p : \text{$M_{\iota_{qp}}(m) = (f_{n-1})_q(1)$ and $M_{\iota_{sp}}(m) = (f_{n-1})_s(1)$} \} \] is not empty. For situation (b), let $q\notin J$ be a point above $p$ and let $s\in J_{n-1}$ be a point to the right of $p$. We complete them to a rectangle $pqrs$. Then $r\notin J$ and $0 \in M_q$ and $(f_{n-1})_s(1)\in M_s$ have the same image $0\in M_r$. By middle exactness, the set \[ E_p = \{ m\in M_p : \text{$M_{\iota_{qp}}(m) = 0$ and $M_{\iota_{sp}}(m) = (f_{n-1})_s(1)$} \} \] is not empty. For a different choice of $q',s'$ with $q'<q$ and $s'<s$ in both cases (a) and (b) we obtain a set $E_p' \subseteq E_p$. But the set $E_p$ is a coset of $\Ker M_{\iota_{qp}}\cap \Ker M_{\iota_{sp}}$. Henceforth, in the definition of $E_p$, we choose $q$ and $s$ such that this subspace is of minimal dimension. Thus $E_p' = E_p$ for any choice of $q',s'$ as above.
It follows that for $m\in E_p$ and $t\in J_{n-1}$ with $p<t$, we have $M_{\iota_{tp}}(m) = (f_{n-1})_t(1)$, and for $t\notin J$ with $p<t$ we have $M_{\iota_{tp}}(m) = 0$. Now if $p,p'\in R_n$ and $p'\le p$ then middle exactness ensures that the map $E_{p'}\to E_p$ is surjective. To see this we can reduce to the cases when $p'$ is to the left of, or below $p$. We deal with the first of these. We choose a rectangle $p'pqq'$ where $q'$ is above $p'$ and $q$ is above $p$, both valid for the definition of $E_p$ and $E_{p'}$. The vertical condition for $m\in E_p$ is that $M_{\iota_{qp}}(m)$ is equal to $(f_{n-1})_q(1)$ in case (a) and 0 in case (b). The vertical condition for $m'\in E_{p'}$ is that $M_{\iota_{q'p'}}(m')$ is equal to $(f_{n-1})_{q'}(1)$ in case (a) and 0 in case (b). Middle exactness for the rectangle $p'pqq'$ thus implies that $E_{p'}\to E_p$ is surjective. Choose a sequence $p_1 \ge p_2 \ge \dots$ of elements of $R_n$, such that for any $p\in R_n$, $p_i\le p$ for some $i$. By recursively lifting elements we get that \[ V = \lim_{\substack{\longleftarrow \\ i}} E_{p_i} \] is non-empty. Choose $v\in V$ and define $f\colon k_{R_n} \hookrightarrow M|_{R_n}$ by \[ f_p(1)= \text{$M_{\iota_{p,p_i}}(v_i)$ where $p_i\leq p$.} \] This defines a lift of $f_{n-1}$ to a monomorphism $f_n\colon k_{J_n} \hookrightarrow M$. Combining these maps gives a monomorphism $f\colon k_J\hookrightarrow M$. Since $k_J$ is injective and $M$ is indecomposable, we deduce that $M\cong k_J$, as required. \end{proof} We also have the following result which is a direct consequence of \cref{lem:splitproj}. \begin{lem} \label{lem:splitprojT} Let $M\colon T\to \mathbf{Vec}$ be pointwise finite-dimensional, middle exact and indecomposable. If there exist $a,b,c,d\in T$ as in \cref{def:middleex} such that \[\Coker ((M_{\iota_{db}}, -M_{\iota_{dc}}))\neq 0\] then $M\cong k_{I\cap T}$ where $I$ is of type \textbf{bb}.
\end{lem} \begin{proof}The restriction $M'$ of $M$ to $U(a) = \{p\mid p\geq a\}$ is again middle exact, and by \cref{lem:splitproj} it has a summand isomorphic to $k_I$ where $I$ is a block of type $\textbf{bb}$ contained in the interior of $U(a)$. Since $I$ is contained in the interior of $U(a)$ it follows that the inclusion and projection $k_I\hookrightarrow M'\twoheadrightarrow k_I$ extend to give maps $k_I\hookrightarrow M\twoheadrightarrow k_I$. This shows that $M\cong k_I = k_{I\cap T}$. \end{proof} \begin{proof}[Proof of \cref{thm:upperT} (Strictly upper-triangular) ] By \cref{t:decomp} it suffices to consider the case that $M$ is indecomposable. Furthermore, \cref{lem:splitinjT,lem:splitprojT} allow us to restrict our attention to the case that \cref{eq:middlex} is short exact for all such $a,b,c,d\in T$. In particular, this means that we have the following natural isomorphisms for all such $a,b,c$ and $d$: \begin{align} M_d \cong \Coker(M_a \to M_b\oplus M_c) & & M_a \cong \Ker(M_b\oplus M_c \to M_d). \label{eq:T} \end{align} Consider any zigzag path $\gamma$ satisfying $\Ima \gamma \subset T$. By comparing \cref{eq:T} to \cref{eq:E} we see that $M \cong E_\gamma(M|_{Z(\gamma)})|_T$, and by \cref{thm:zigzag}, \[\ E_\gamma(M|_{Z(\gamma)})|_T\cong E_\gamma\left(\bigoplus_{I} k_I\right)\Big |_T \cong \bigoplus_I E_\gamma(k_I)|_T.\] Since $M$ is assumed to be indecomposable it follows that $M\cong E_\gamma(k_I)|_T$ where $I= J\cap Z(\gamma)$ for a block $J\subseteq \mathbb{R}^2$. It is straightforward to verify that $E_\gamma(k_{J\cap Z(\gamma)})|_T = k_{J\cap T}$ if $J\cap Z(\gamma)\neq \emptyset$. \end{proof} \begin{proof}[Proof of \cref{thm:upperT} (Upper-triangular)] We shall show that any indecomposable, pointwise finite-dimensional and middle exact persistence module $N\colon \overline T\to \mathbf{Vec}$ is a block module. This will be done by first restricting $N$ to $T$, and then extending $N|_T$ to a module over $\overline T$. 
We show that when the restriction $N|_T$ is non-zero, the composition given by first restricting and then extending is an isomorphism. The result now follows from our previous work in the strictly upper-triangular setting. Let $p=(x_0,y_0) \in \overline T \setminus T$, so $x_0+y_0=0$. Given a point $s = (x,y)\in T$ with $x_0<x$ and $y_0<y$, we consider the non-degenerate rectangle $pqrs$ where $q=(x,y_0)$ and $r=(x_0,y)$. Observe that $q,r\in T$. For a persistence module $M\colon T\to \mathbf{Vec}$, we define \[ M_p^s = \Ker \left(M_q \oplus M_r \xrightarrow{ (-M_{\iota_{sq}} \ M_{\iota_{sr}}) } M_s \right). \] If $s'=(x',y')\in T$ with $x_0<x'\le x$ and $y_0<y'\le y$, and $pq'r's'$ is the corresponding rectangle, then the commutative diagram \[ \begin{CD} M_{q'} \oplus M_{r'} @>(-M_{\iota_{s'q'}}\ M_{\iota_{s'r'}})>> M_{s'} \\ @V \theta VV @V M_{\iota_{ss'}} VV \\ M_q \oplus M_r @>(-M_{\iota_{sq}}\ M_{\iota_{sr}})>> M_s \end{CD}, \] where $\theta = (\begin{smallmatrix} M_{\iota_{qq'}} & 0 \\ 0 & M_{\iota_{rr'}} \end{smallmatrix})$, induces a map $M^{ss'}_p\colon M^{s'}_p \to M^s_p.$ We define $\overline M_p = \varprojlim M^s_p$, where the inverse limit is over all $s$ giving non-degenerate rectangles, as above. Clearly there is a natural map $\overline M_p\to M_s$ for any $s=(x,y)$ with $x_0<x$ and $y_0<y$. In addition there are natural maps $\overline M_p\to M_q$ for $q = (x,y_0)$ with $x_0<x$ and $\overline M_p\to M_r$ for $r = (x_0,y)$ with $y_0<y$. Extending $M$ in this way defines a functor from persistence modules over $T$ to persistence modules $\overline M$ over $\overline T$. Observe that if $M \cong k_{J\cap T}$ for a block $J\subseteq \mathbb{R}^2$, then $\overline M \cong k_{J\cap \overline T}$. Moreover, the functor respects direct sum decompositions. Now let $N\colon \overline T\to \mathbf{Vec}$ be a persistence module.
For $p\in \overline T\setminus T$ there is a natural map $\mu_p^s:N_p \to (N|_T)_p^s$ induced by the maps $N_{\iota_{qp}}$ and $N_{\iota_{rp}}$. This induces a natural map $N_p \to (\overline{N|_T})_p$ and thus a morphism $N\to \overline{N|_T}$. If $N$ is a middle exact $\overline T$-module, then the map $\mu_p^s$ is surjective. Now suppose that $N$ is indecomposable, pointwise finite-dimensional, middle exact, and not isomorphic to the interval module $k_{\{p\}}$ for any point $p\in\overline T\setminus T$. Since $k_{\{p\}}$ is injective, it follows that it does not occur as a submodule of $N$. We need to show for any point $p\in \overline T\setminus T$ that $N_p \to (\overline{N|_T})_p$ is an isomorphism. This map is induced by the maps $\mu_p^s: N_p \to ({N|_T})^s_p$ for non-degenerate rectangles $pqrs$ as above. The kernels $\Ker(\mu^s_p)$ are subspaces of $N_p$, and if $s'\le s$, then $\Ker(\mu^{s'}_p) \subseteq \Ker(\mu^s_p)$. Since $N_p$ is finite-dimensional, there is some $s$ with $\Ker(\mu^s_p)$ of minimal dimension, and therefore $\Ker(\mu^s_p)$ must be contained in all other kernels. Now any element $0 \neq \ell \in \Ker(\mu^s_p)$ defines a submodule of $N$ of the form $k_{\{p\}}$, a contradiction. Thus $\Ker(\mu^s_p) = 0$, so also $\Ker(\mu^{s'}_p) = 0$ for any $s'\le s$. Thus $\mu^{s'}_p$ is an isomorphism for such $s'$, and so $N_p \to (\overline{N|_T})_p$ is an isomorphism, as desired. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} 90 years ago, Heisenberg \cite{Heis} introduced (although in an approximate form) the famous ``uncertainty relation'' (UR) \be \Delta x \Delta p \ge \hbar/2, \label{dxp} \ee that was soon proven rigorously in the framework of the wave function description of quantum systems by Kennard \cite{Kennard} and Weyl \cite{Weyl28}. A few years later, Robertson \cite{Robertson30} and Schr\"odinger \cite{Schrod30} proved a more general inequality \be \sigma_A \sigma_B \ge \frac14\left| \langle [\hat{A},\hat{B}]\rangle\right|^2 \label{unc3} \ee for arbitrary Hermitian operators $\hat{A}$ and $\hat{B}$. According to them (and following Heisenberg's idea \cite{Heis}), the ``uncertainty'' of a quantity $A$ was defined as the square root of its variance (or mean squared deviation): \be \Delta A \equiv \sqrt{\sigma_A}, \qquad \sigma_A \equiv \langle \hat{A}^2\rangle - \langle \hat{A}\rangle^2. \label{def-sigA} \ee It is known that inequality (\ref{dxp}) is saturated (becomes an equality) for the wave functions describing shifted ground states of the harmonic oscillator \cite{Schrodpac}, nowadays called ``coherent states'' after Glauber \cite{Glauber}. But what can we say about quantum states for which the left-hand sides of inequalities (\ref{dxp}) or (\ref{unc3}) are strictly bigger than the right-hand sides? One possible answer is that one should perhaps add some extra terms to the right-hand sides of (\ref{dxp}) or (\ref{unc3}) in such cases, taking into account some additional parameters or specific properties of the concrete quantum system under consideration. The first step in this direction was made by Robertson and Schr\"odinger in the same papers \cite{Robertson30,Schrod30}.
Namely, they obtained a more precise version of (\ref{unc3}), taking into account the average value of the anticommutator $\{\hat{A} , \hat{B} \}$: \be \sigma_A \sigma_B \ge \sigma_{AB}^2 + \frac14\left| \left\langle [\hat{A},\hat{B}]\right\rangle\right|^2 \equiv \left|\left\langle (\delta\hat{A})(\delta\hat{B})\right\rangle\right|^2 \equiv G_{AB}^2, \label{unc4} \ee where \be \sigma_{AB} \equiv \frac12 \langle \hat{A}\hat{B} + \hat{B}\hat{A}\rangle - \langle \hat{A}\rangle\langle\hat{B}\rangle \equiv \frac12 \left\langle \Big\{\delta\hat{A} , \delta\hat{B} \Big\}\right\rangle, \qquad \delta\hat{A} \equiv \hat{A} -\langle \hat{A}\rangle. \label{defsigAB} \ee The special case of (\ref{unc4}) is the following generalization of (\ref{dxp}) for the coordinate and momentum operators: \be \sigma_p \sigma_x -\sigma_{xp}^2 \ge \hbar^2/4. \label{unc5} \ee The equality holds for all Gaussian wave functions, as was discovered for the first time by Kennard \cite{Kennard}. Inequality (\ref{unc5}) can be rewritten in the form \cite{DKM80} \be \sigma_p \sigma_x \ge \frac{\hbar^2}{4\left(1-r^2\right)}, \qquad r= \frac{\sigma_{xp}}{\sqrt{\sigma_p \sigma_x}}, \label{unc34} \ee which emphasizes the role of the ``correlation coefficient'' $r$ as an additional parameter, responsible for the increase of the product $\sigma_p \sigma_x$. One could interpret relation (\ref{unc34}) as if an ``effective Planck constant'' $\hbar\left(1-r^2\right)^{-1/2}$ appeared in place of the usual constant $\hbar$. Such an interpretation of inequalities (\ref{unc5}) and (\ref{unc34}) was discussed, e.g., in Refs. \cite{DKM-200,Vysot13}. The explicit form of ``correlated coherent states'', saturating inequality (\ref{unc5}), is as follows \cite{DKM80}, \be \psi(x) =\left(2\pi\sigma_x\right)^{-1/4} \exp\left[-\,\frac{x^2}{4\sigma_x}\left(1-\,\frac{ir}{\sqrt{1-r^2}}\right) +\frac{\alpha x}{\sqrt{\sigma_x}} -\frac12\left(\alpha^2 +|\alpha|^2\right)\right].
\label{psicorr} \ee Inequalities (\ref{unc4}) and (\ref{unc34}) explain the increase of the uncertainty product $\sigma_A \sigma_B $ due to the existence of some ``intrinsic'' restrictions in the quantum system under investigation (the nonzero correlation coefficient). However, this product can also increase due to some ``extrinsic'' constraints, if the system interacts with other systems (``environment''). For example, in the case of an equilibrium state of a harmonic oscillator with frequency $\omega$ at temperature $T$, the uncertainty product equals $\sigma_{pp}\sigma_{xx}=\left[\frac{\hbar}{2} \coth\left(\frac{\hbar\omega}{2k_B T}\right)\right]^2 $ ($k_B$ is the Boltzmann constant). In the high-temperature case $k_B T \gg \hbar\omega$ the right-hand side of this equality is so large that inequality (\ref{dxp}) becomes practically useless. The equilibrium state of a harmonic oscillator is a {\em mixed\/} quantum state, described by means of the statistical operator (density matrix) $\hat\rho$. The degree of mixing is frequently characterized by the difference $1-\mu$, where $\mu\equiv \mbox{Tr}(\hat\rho^2)$ is the ``quantum purity''. It is known that for any quantum state described by means of a {\em Gaussian\/} density matrix or the Wigner function (in particular, for the equilibrium state), the following equality holds for systems with one degree of freedom (see, e.g., \cite{167}): \be \sqrt{\sigma_{pp}\sigma_{xx} - \sigma_{xp}^2} =\frac{\hbar}{2\mu}. \label{SR-mu} \ee The generalized ``purity bounded uncertainty relation'' for mixed quantum states can be written in the form \begin{equation} \sqrt {\sigma_{pp}\sigma_{xx} - \sigma_{xp}^2}\ge\frac {\hbar}2\Phi(\mu), \label{79} \end{equation} where $\Phi (\mu )$ is a monotonous function of $\mu$, satisfying the relations $\Phi (1)=1\le \Phi(\mu)\le \mu^{-1}$ for $0<\mu\le 1$.
Its explicit form turned out to be rather complicated, but it can be described with good accuracy by a simple approximate formula \cite{183,S98} \be \tilde{\Phi }(\mu )=\frac {4+\sqrt {16+9\mu^2}}{9\mu}. \label{Phitil} \ee In particular, the following asymptotic formula holds for $\mu\ll 1$ (its leading term was obtained for the first time by Bastiaans \cite{Bast1}): \begin{equation} \tilde{\Phi }(\mu )=\frac 8{9\mu}\left(1+\frac 9{64} \mu^2+\ldots\right), \label{97} \end{equation} so that $|{\Phi} (\mu )-8/(9\mu)|<0.01$ for $\mu\le 0.25$. Both formulas, (\ref{Phitil}) and (\ref{97}), show that Gaussian states do not minimize the precise uncertainty relation for mixed states. The minimum value is achieved for some diagonal mixtures of finite numbers of the Fock states of the harmonic oscillator. Inequality (\ref{79}) can be considered as a kind of ``coarse-grained'' relation, since it hides all details of the interaction (entanglement) between the system under study and the ``environment''. Our goal is to derive a new inequality, where some of these details appear explicitly. \section{The new inequality and its illustration} \label{sec-2var3} The main idea is to start from some inequality related to three observables and find its consequences with respect to admissible values of the product of two selected variances. A general scheme of obtaining the uncertainty relations for several observables in terms of covariances was given by Robertson in 1934 \cite{Robertson34}. Let us recall it. Consider $N$ arbitrary operators $\hat{z}_1$, $\hat{z}_2$, \ldots, $\hat{z}_N$, and construct the operator $\hat{f} = \sum_{j=1}^N \alpha_j (\hat{z}_j -\langle \hat{z}_j\rangle)$, where $\alpha_j$ are arbitrary complex numbers.
The inequalities, which can be interpreted as generalized uncertainty relations, are consequences of the fundamental inequality $\langle\hat{f}^{\dagger}\hat{f}\rangle\ge 0 $, which must be satisfied for any pure or mixed quantum state (the symbol $\hat{f}^{\dagger}$ denotes the Hermitian conjugate operator). In explicit form, this inequality is the condition of positive semi-definiteness of the quadratic form $\alpha^*_j F_{jm}\alpha_m $, whose coefficients $F_{jm} = \left\langle\Big(\hat{z}_j^{\dagger}- \langle\hat{z}_j\rangle^*\Big) \Big(\hat{z}_m- \langle\hat{z}_m\rangle\Big)\right\rangle$ form the Hermitian matrix $F =\Vert F_{jm}\Vert$. One has only to use the known conditions of positive semi-definiteness of Hermitian quadratic forms to write down the explicit inequalities for the elements of matrix $F$. All such inequalities can be considered as generalizations of inequality (\ref{unc3}) to the case of more than two operators. Many of them can be found in the review \cite{183}. If all operators $\hat{z}_j$ are Hermitian, then it is convenient to split matrix $F$ as $F = X + iY$, where $X$ and $Y$ are real symmetric and antisymmetric matrices, respectively, consisting of the elements \be X_{mn}= \frac12 \left\langle\left\{\left(\hat{z}_m- \langle\hat{z}_m\rangle\right)\,,\, \left(\hat{z}_n- \langle\hat{z}_n\rangle\right)\right\}\right\rangle, \qquad Y_{mn}= \frac1{2i} \left\langle\left[\hat{z}_m\,,\,\hat{z}_n\right] \right\rangle. \label{defXY} \ee The symbols $\{,\}$ and $[\,,\,]$ mean, as usual, the anticommutator and the commutator. A fundamental inequality following from the positive semi-definiteness of matrix $F$ is $ \det F =\det \Vert X + iY \Vert \ge 0$. It suits our purposes quite well, since it contains all elements of matrices $X$ and $Y$.
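Since $\det F$ is a polynomial in the entries of $X$ and $Y$, its explicit form for small $N$ can be checked numerically. The sketch below (Python with numpy; the random matrices are arbitrary test data, not covariances of an actual quantum state) verifies that the three-variable expansion quoted next is exactly $\det(X+iY)$, as a purely algebraic identity:

```python
import numpy as np

# Random symmetric X and antisymmetric Y (hypothetical test matrices).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = (A + A.T) / 2
B = rng.standard_normal((3, 3))
Y = (B - B.T) / 2

# Determinant of the Hermitian matrix F = X + iY (its determinant is real).
det_F = np.linalg.det(X + 1j * Y).real

# The N = 3 expansion with all terms moved to one side
# (indices shifted to 0-based: X[0,1] is X_12, Y[2,0] is Y_31, etc.).
expansion = (X[0, 0] * X[1, 1] * X[2, 2]
             - X[0, 0] * (X[1, 2] ** 2 + Y[1, 2] ** 2)
             - X[1, 1] * (X[0, 2] ** 2 + Y[0, 2] ** 2)
             - X[2, 2] * (X[0, 1] ** 2 + Y[0, 1] ** 2)
             - 2 * (X[0, 1] * Y[1, 2] * Y[2, 0]
                    + X[1, 2] * Y[2, 0] * Y[0, 1]
                    + X[2, 0] * Y[0, 1] * Y[1, 2]
                    - X[0, 1] * X[1, 2] * X[2, 0]))

assert abs(det_F - expansion) < 1e-9
```

For an $F$ built from genuine quantum covariances this quantity is non-negative; for arbitrary $X$, $Y$ as above only the algebraic identity holds, which is all the check asserts.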
Its explicit form in the special case of $N=3$ reads \beqn X_{11}X_{22}X_{33} &\ge & X_{11}\left(X_{23}^2 + Y_{23}^2\right) + X_{22}\left(X_{13}^2 + Y_{13}^2\right) + X_{33}\left(X_{12}^2 + Y_{12}^2\right) \nonumber \\ && + 2\left( X_{12}Y_{23}Y_{31} + X_{23}Y_{31}Y_{12} + X_{31}Y_{12}Y_{23} -X_{12}X_{23}X_{31}\right) . \label{unc17a} \eeqn Formula (\ref{unc17a}) was obtained by Synge \cite{Synge71} (without any reference to Robertson's paper). Recently, it was re-derived in Ref.~\cite{Qin16}. Inequality (\ref{unc17a}) has the form $X_{11}X_{22}X_{33} \ge a X_{11} + b X_{22} + c $, where coefficients $a$, $b$ and $c$ do not contain variances $X_{11}$ and $X_{22}$. Moreover, $a$ and $b$ are non-negative. Due to the standard arithmetic-geometric mean inequality, we have $a X_{11} + b X_{22} \ge 2\sqrt{a b X_{11} X_{22} }$. This means that $X_{33}\xi^2 - 2\sqrt{ab}\,\xi -c \ge 0$, where $\xi = \sqrt{X_{11} X_{22}} \ge 0$. Consequently, $\xi$ must be no less than the larger root of the quadratic polynomial in the left-hand side of this inequality: $X_{33}\xi \ge \sqrt{ab} +\sqrt{ab +cX_{33}}$. Thus we arrive at the inequality \be \Delta z_1 \Delta z_2 \ge \sqrt{G_{12}^2 +\Omega^2 +2\Gamma} + \Omega, \label{2+1} \ee where \be G_{jk}^2 = X_{jk}^2 +Y_{jk}^2, \qquad \Omega =\left|G_{13}G_{23}\right|/X_{33}, \label{def-Om} \ee \be \Gamma = \left[X_{12}\left(Y_{23}Y_{31} -X_{23}X_{31}\right) + Y_{12}\left(X_{23}Y_{31} +Y_{23}X_{31}\right)\right]/X_{33} . \label{def-Gam} \ee If the observables $z_1$ and $z_2$ are totally independent of $z_3$, then $Y_{13}=Y_{23}=X_{13}=X_{23}=0$, and (\ref{2+1}) is reduced to the Schr\"odinger--Robertson inequality (\ref{unc4}). If $\left[\hat{z}_1, \hat{z}_3\right] =\left[\hat{z}_2, \hat{z}_3\right]=0$ (for example, $z_1=x$, $z_2=p_x$ and $z_3=y$), then \be \Delta z_1 \Delta z_2 \ge \sqrt{Y_{12}^2 +\left(X_{12} - X_{13}X_{23}/X_{33}\right)^2 } +\left|X_{13}X_{23}\right|/X_{33}.
\label{2+1comm} \ee The right-hand side of this inequality is bigger than the Robertson bound $|Y_{12}|$ if there exist correlations in the pairs $(z_1,z_3)$ and $(z_2,z_3)$, characterized by nonzero values of the covariances $X_{13}$ and $X_{23}$. To illustrate inequality (\ref{2+1comm}), let us consider a special two-variable pure Gaussian state, described by the wave function \be \psi(x,y)= {\cal N}\exp\left( -\frac{a}{2} x^2 - b xy - \frac{c}{2} y^2\right), \label{psib} \ee where ${\cal N}$ is the normalization factor. To simplify the following formulas, let us assume that coefficients $a$ and $c$ are real (and positive), while $b$ may be an arbitrary complex number, satisfying the restriction $D \equiv ac - [\mbox{Re}(b)]^2 >0$. It is easy to calculate all necessary variances and covariances: \[ X_{11} \equiv \langle x^2\rangle = \frac{c}{2D}, \qquad X_{22} \equiv \langle p_x^2\rangle = \frac{a\hbar^2}{2D} \left(ac - [\mbox{Re}(b)]^2 + [\mbox{Im}(b)]^2\right), \] \[ X_{12} \equiv \frac12\langle \hat{x}\hat{p}_x + \hat{p}_x \hat{x}\rangle = \frac{\hbar}{2D} \mbox{Re}(b) \mbox{Im}(b), \qquad Y_{12} = \frac{\hbar}{2}, \] \[ X_{33} \equiv \langle y^2\rangle = \frac{a}{2D}, \qquad X_{13} \equiv \langle xy\rangle = -\,\frac{\mbox{Re}(b)}{2D}, \quad X_{23} \equiv \langle p_x y\rangle = -\,\frac{a\hbar}{2D}\mbox{Im}(b). \] In this case we have $X_{12} = X_{13}X_{23}/X_{33}$, so that (\ref{2+1comm}) can be written as (here $p \equiv p_x$) $\Delta x \Delta p \ge \hbar/2 +|\sigma_{xp}|$. The right-hand side of this inequality is certainly bigger than the Robertson--Schr\"odinger lower boundary $\left[(\hbar/2)^2 + \sigma_{xp}^2\right]^{1/2}$. This happens because the quantum state describing the $x$-subsystem is {\em mixed}. Its density matrix $\rho(x,x^{\prime})= \int \psi(x,y)\psi^*(x^{\prime},y)dy$ has the purity $\mu =\left\{\left(ac - [\mbox{Re}(b)]^2\right)/\left(ac + [\mbox{Im}(b)]^2\right)\right\}^{1/2}$.
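These closed-form moments make inequality (\ref{2+1comm}) easy to probe numerically. A short Python sketch (we set $\hbar=1$; the particular values of $a$, $c$, $b$ are arbitrary illustrative choices, not taken from the text):

```python
import math

HBAR = 1.0

def moments(a, c, b):
    """Second moments of the Gaussian state (psib) for real a, c > 0 and
    complex b with D = a*c - Re(b)**2 > 0, using the closed-form expressions."""
    rb, ib = b.real, b.imag
    D = a * c - rb ** 2
    X11 = c / (2 * D)                              # <x^2>
    X22 = a * HBAR ** 2 / (2 * D) * (D + ib ** 2)  # <p_x^2>
    X12 = HBAR * rb * ib / (2 * D)                 # symmetrized x, p_x covariance
    X33 = a / (2 * D)                              # <y^2>
    X13 = -rb / (2 * D)                            # <x y>
    X23 = -a * HBAR * ib / (2 * D)                 # <p_x y>
    return X11, X22, X12, X33, X13, X23

# Case |Re b| = |Im b|: the bound Delta x * Delta p >= hbar/2 + |sigma_xp| is saturated.
X11, X22, X12, X33, X13, X23 = moments(2.0, 3.0, 0.5 + 0.5j)
assert abs(X12 - X13 * X23 / X33) < 1e-12           # the identity X12 = X13 X23 / X33
assert abs(math.sqrt(X11 * X22) - (HBAR / 2 + abs(X12))) < 1e-12

# Case |Re b| != |Im b|: the inequality is strict.
Y11, Y22, Y12, _, _, _ = moments(2.0, 3.0, 0.5 + 1.0j)
assert math.sqrt(Y11 * Y22) > HBAR / 2 + abs(Y12)
```

The first assertion confirms the identity $X_{12} = X_{13}X_{23}/X_{33}$, which in fact holds for any admissible complex $b$ in these formulas.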
The equality in (\ref{2+1comm}) is achieved for the states (\ref{psib}) with $|\mbox{Re}(b)| = |\mbox{Im}(b)|$. If $b \neq 0$, then the function (\ref{psib}) cannot be written as a product of some function of $x$ by some function of $y$. In other words, this function describes an {\em entangled\/} state with respect to variables $x$ and $y$. Due to this entanglement, the uncertainty product $\Delta x \Delta p$ turns out bigger than the Robertson--Schr\"odinger boundary. \label{sec-ex} \section{Conclusion} The main results of this paper are inequalities (\ref{2+1}) and (\ref{2+1comm}), which show how the entanglement of the system under study with other degrees of freedom results in the increase of the minimal value of the uncertainty product with respect to the selected system observables. We have also found an example of quantum states which saturate the new inequality (\ref{2+1comm}). The weakness of inequality (\ref{2+1comm}) is that it reduces to the Robertson--Schr\"odinger lower boundary (\ref{unc4}), if one of the covariances, $X_{13}$ or $X_{23}$, equals zero, although the uncertainty product can exceed the boundary (\ref{unc4}) in such cases. Probably, more general and stricter inequalities can be found, if one applies the scheme of section \ref{sec-2var3} to systems of more than three observables. This subject is now under investigation. \section*{Acknowledgments} A partial support of the Brazilian funding agency CNPq is acknowledged.
\section{Automated construction} \input{src/inventory.tex} \input{src/brick_stack_detection.tex} \input{src/brick_loading.tex} \input{src/build_pattern_detection.tex} \input{src/experiments.tex} \input{src/conclusion.tex} \bibliographystyle{IEEEtran} \section{Conclusion} We have presented an autonomous robotic system for localization, grasping, transportation and precise deployment of construction blocks. The system is capable of autonomous operation in challenging lighting conditions on uneven terrain. The positioning and navigation pipeline does not rely on external systems, such as the GPS. The system may therefore be deployed in an indoor environment without any adjustments, and even perform the complicated transition from an outdoor to an indoor environment. The presented computer vision algorithms, which are application independent, offer fast and reliable segmentation of both depth and RGB images. We provide all the software to the community as open-source \cite{github}. The presented system was successfully deployed during the MBZIRC 2020 Brick challenge, which simulated a possible application of mobile manipulators in a construction task. As the winners of the contest, we hope this work will inspire further development and facilitate the mobile manipulation endeavors of other research groups. \vspace{-0.5em} \section{Experiments} Early experimental evaluation was performed at the tennis courts of the New York University campus in Abu Dhabi. In here, the system was repeatably able to locate, load and deploy the bricks in a specified order. The success rate of brick loading was around $75\%$, with the grasping accuracy reaching $\pm3$~cm. In the remaining $25\%$ of cases, the system was not able to fill the cargo bay with all seven bricks. However, in case of a brick loading failure, the system was still able to autonomously interrupt the loading procedure, navigate to the checkered pattern, and unload the contents of the cargo bay. 
During the experiments, we measured that navigating to the stacked bricks takes up to 2 minutes, assuming the stack of four orange bricks is visible from almost any starting position. Precise alignment and loading of all 7 bricks into the cargo bay take nearly 10 minutes. Without the UAV assistance, the building pattern search is by far the most time-consuming operation. To acquire the image from an elevated position, the robot has to stop, extend the arm upward, look around with the camera and then fold the arm back. This procedure alone requires 15 minutes on average, which only leaves 3 minutes for the brick deployment. Increasing the manipulator speed was not possible, as higher accelerations might cause the brick to be dropped. With the time budget in mind, we also prepared an emergency protocol, which focused on reliably placing a single brick on the first run. This would reduce the loading and deployment times significantly, leaving enough margin for difficult arena configurations requiring prolonged exploration. The grasping procedure was extensively tested under various lighting conditions to ensure robust performance at any time of day, as shown in Fig. \ref{fig:grasping_illumination}. This feature greatly enhances the economic feasibility of the proposed system as a robotic construction worker, since night-time construction operations fall into the ``potentially hazardous for humans'' category. \begin{figure}[htbp] \centering \begin{subfigure}{0.32\columnwidth} \includegraphics[width=\textwidth]{img/night_testing.jpg} \vspace{-1.7em} \subcaption{} \end{subfigure} \begin{subfigure}{0.32\columnwidth} \includegraphics[width=\textwidth]{img/daylight_grasping2.jpg} \vspace{-1.7em} \subcaption{} \end{subfigure} \begin{subfigure}{0.32\columnwidth} \includegraphics[width=\textwidth]{img/afternoon_grasping.jpg} \vspace{-1.7em} \subcaption{} \end{subfigure} \vspace{-0.5em} \caption{Testing of the grasping procedure under various lighting conditions.
By using the depth image rather than RGB, the grasping can be performed in a repeatable manner at night (a), in direct sunlight (b) or in strong lateral illumination at sunset (c).} \label{fig:grasping_illumination} \end{figure} The performance of the brick extraction and classification in the LiDAR data was evaluated offline using a dataset obtained during the first competition rehearsal. In Table \ref{tab:iepf_classification}, we show the success rate of brick classification by the IEPF algorithm without any additional filtering or odometry fusion. The blue bricks are the least populous in the pickup area, and are therefore the most susceptible to erroneous detections. The accuracy is greatly improved by filtering out the false-positive measurements and by employing the EM algorithm, which incorporates the a priori knowledge of the brick pile layout. Snapshots of the point cloud processing are shown in Fig. \ref{fig:experiments_pointcloud}. \begin{table}[htbp] \centering \begin{tabular}{l|r|r|r} Brick color & Correct & Incorrect & Success rate [\%]\\ \hline\rule{0pt}{1.1\normalbaselineskip} Red & 174 & 70 & 71 \\ Green & 57 & 19 & 75 \\ Blue & 24 & 63 & 28 \\ Orange & 32 & 11 & 62 \\ \end{tabular} \caption{Accuracy of brick candidate generation by the IEPF algorithm. The evaluation was performed offline using data collected during the first rehearsal in the competition arena.} \label{tab:iepf_classification} \end{table} \begin{figure}[htbp] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/segments.png} \subcaption{Extracted line segments} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/em_algo.png} \subcaption{Red pile center estimation} \end{subfigure} \caption{Detection of stacked bricks in the LiDAR data. The left image showcases the line segments as extracted and classified by the IEPF algorithm. The right image shows the first five iterations of the EM algorithm.
The red points represent the estimated position of the red pile center.} \label{fig:experiments_pointcloud} \end{figure} The final evaluation took place during the MBZIRC 2020 Brick challenge finals. During the contest, the brick pickup area was placed on an inclined ramp, which posed additional challenges for the participants. With no time to reconfigure the system for this new environment, we opted to engage the emergency protocol, which was optimized for reliability and ensured task completion with at least one brick. The successful brick placement during the contest finals is shown in Fig. \ref{fig:promo}. \section{Arm manipulation} The arm manipulation is controlled by the Kinova Kortex API and a ROS driver provided by the manufacturer\footnote{\url{https://github.com/Kinovarobotics/ros_kortex}}. We have designed a custom wrapper for the API called Kortex Control Manager (KCM), which integrates the API into the control pipeline. The KCM includes a library of short pre-programmed actions, which are chained to form more complex tasks, such as brick grasping and loading. In addition to the self-collision avoidance integrated into the API, the KCM enforces an extended no-go zone over the UGV's frame. The no-go zone is further expanded by half a brick size to prevent collisions with the grasped brick. These safety features are necessary, but significantly reduce the operational space of the arm, as shown in Fig. \ref{fig:working_envelope}. The limited reach of the arm therefore needs to be compensated for by the mobile base. The KCM offers position control of the manipulator in joint space, and velocity control of the end-effector in Cartesian space. In the position control mode, we utilize the MoveIt! planning interface \cite{chitta2012moveit} to interpolate the trajectory between the current and desired positions.
The trajectory is sampled with a time step of $1$~ms and the corresponding joint commands are sent to the actuators via the Kortex API. The KCM also monitors the data returned by the integrated sensors, which measure position, velocity and torque in each of the joints at a rate of $100$~Hz. The data is analyzed, and the motion is terminated if excessive torque or a no-go zone violation is detected. \begin{figure}[htbp] \centering \includegraphics[width=0.8\columnwidth]{img/working_envelope.pdf} \caption{Grasping reach of the proposed mobile manipulator with the 7 DOF Kinova Gen3 manipulator. The arm is equipped with the custom end-effector and a magnetic gripper. If a brick center is located within the range highlighted in red, it can be reliably grasped. The dimensions are shown in millimeters unless stated otherwise.} \label{fig:working_envelope} \end{figure} \section{Control architecture} The following sections summarize the key software components used to successfully complete the task in autonomous mode. As mentioned before, the environment layout is unknown prior to the deployment. The robot therefore has to explore the area and localize the objects of interest. Despite using an array of powerful onboard sensors, the object detection range of the robot is still quite limited due to the finite resolution of the sensors and the size of the objects. For this reason, we concluded that a full exploration of the area would not be feasible given the limited time budget. On the other hand, specifying navigational waypoints in an unknown environment is not possible. We approach the problem using a ``guided autonomy'' concept \cite{gray2018architecture,dellin2016guided,hebert2015supervised}, which aims to strike a balance between exhaustive exploration and going directly to a well-defined target. The system is able to investigate coarsely defined areas of interest, which may be provided on-the-fly by a human operator or by other cooperating robots.
Limiting the search space allows the robot to conserve some of the limited time budget, and if no objects of interest are found within the prioritized areas, the full exploration is resumed. \subsection{Brick stack detection} During the exploration phase, the LiDAR is used for detection of the stacked bricks. The Velodyne VLP-16 has a $30^{\circ}$ vertical field of view divided into 16 evenly spaced scan layers. To reliably distinguish a brick in the point cloud, at least two scan layers have to hit it. The number of rays hitting a brick can be determined as \begin{equation} \phantom{,.}N = \frac{\arccos\left(1-\frac{a^2}{2b^2}\right)}{\alpha}\phantom{.}, \end{equation} where $a$ is the brick height, $b$ is the distance between the sensor and the point (ray length) and $\alpha$ is the angular pitch of two neighboring scan layers. The maximal detection distance $d$ is then estimated as \begin{equation} \phantom{..}d = \frac{\frac{a}{4}}{\tan\left(\frac{\alpha}{2}\right)}\phantom{.}. \end{equation} For a single brick with a height of $0.2$~m, the estimated detection range is approximately $3.055$~m. The formula assumes an idealized scenario, in which the rays and one brick side form an isosceles triangle. In practice, the range would be even shorter. Fortunately, the a priori knowledge of the stack layout can be leveraged to increase the range, since all brick types are arranged in at least two layers. The point cloud is processed using the Iterative End Point Fit (IEPF) algorithm \cite{ramer1972iterative}, which extracts line segments from the measurement. All segments of length matching one of the brick classes are considered brick candidates. In the real world, this approach alone is not sufficient, as the structure of the environment causes the generation of false-positive candidates. To filter out the erroneous data, an Expectation Maximization (EM) algorithm is employed.
The algorithm uses the following probabilistic model: \begin{equation} \phantom{,.}P(\vec{x}_m) \sim \mathcal{N}(\vec{x}_m, \, \vec{\mu} + k_m\vec{v}, \, \bm{\Sigma_m})\phantom{.}, \label{eq:prob} \end{equation} where $m$ is the brick class index, $\mathcal{N}$ is the Gaussian distribution, $\vec{\mu}$ is the mean of the model, $\bm{\Sigma_m}$ is the covariance matrix of a brick class, $k_m$ is the scalar multiplier unique for each class and $\vec{v} = (\cos\phi, \sin\phi)^T$ is a direction vector representing orientation. The probability density of individual brick classes is visualized in Fig. \ref{fig:classifier_model}. The model provides the most likely estimate of the parameters $\vec{\mu}$ and $\phi$, which correspond to the position of the brick stack center and the orientation of the stack. The results are then used to generate a navigational waypoint near the red bricks, so that the bricks are on the right-hand side of the robot. \begin{figure}[htbp] \centering \includegraphics[width=0.7\columnwidth]{img/classifier_model.pdf} \caption{Probability density of the brick class position in the pickup area. This model is used by the brick classifier to estimate the position and orientation of the pickup area.} \label{fig:classifier_model} \end{figure} After approaching the red bricks, a line-fitting algorithm based on the Random Sample Consensus (RANSAC \cite{fischler1981random}) is applied to the point cloud. The UGV performs a parallel parking action to align its heading with the major axis of the brick stack, which is estimated by the RANSAC algorithm. After aligning, the manipulator is extended into a predefined position to the right-hand side of the Husky. In this position, the wrist-mounted camera and the gripper point directly towards the ground.
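The scan-layer geometry above can be checked numerically; the minimal sketch below assumes an angular pitch of $30^{\circ}/16 \approx 1.875^{\circ}$ between neighboring layers (an assumption of the sketch), which reproduces the quoted $3.055$~m detection range:

```python
import math

# Numeric sanity check of the scan-layer formulas (sketch only; the
# angular pitch alpha = 30 deg / 16 is an assumption that reproduces
# the quoted 3.055 m detection range for a 0.2 m brick).

def rays_on_brick(a: float, b: float, alpha: float) -> float:
    """Number of scan layers hitting a brick of height a at range b."""
    return math.acos(1.0 - a**2 / (2.0 * b**2)) / alpha

def max_detection_distance(a: float, alpha: float) -> float:
    """Estimated range at which two layers still hit the brick."""
    return (a / 4.0) / math.tan(alpha / 2.0)

alpha = math.radians(30.0 / 16.0)  # assumed pitch between scan layers [rad]
a = 0.2                            # brick height [m]

d = max_detection_distance(a, alpha)
print(f"max detection distance: {d:.3f} m")                          # ~3.055 m
print(f"layers hitting the brick at d: {rays_on_brick(a, d, alpha):.2f}")  # ~2
```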
\subsection{Related work} The deployment of complex cyber-physical systems and machine learning in industry marks a historic shift in design paradigms, which has been dubbed the fourth industrial revolution, or Industry 4.0 \cite{lu2017industry, lasi2014industry}. Commercially available mobile manipulators include the KUKA KMR iiwa\footnote{\url{https://www.kuka.com/en-de/products/mobility/mobile-robots/kmr-iiwa}}, Robotnik RB-1\footnote{\url{https://robotnik.eu/products/mobile-manipulators/rb-1}} or the Fetch Robotics Mobile Manipulator \cite{wise2016fetch}. Despite their availability, mobile manipulators remain predominantly a subject of academic research. A 5 DOF manipulator mounted on an omni-directional mobile platform, proposed in \cite{bischoff2011kuka}, demonstrated the ability to localize and pick up colored objects in a semi-structured environment. A dexterity analysis in \cite{chen2018dexterous,domel2017toward} considers the task of picking up an object from a shelf by a mobile manipulator. In \cite{ohashi2016realization}, manipulation and transportation of a heavy payload is addressed. Finally, in \cite{pavlichenko2018kittingbot}, a robot is shown to fetch various warehouse objects, while also avoiding a human worker who occupies the same area. Mobile manipulators will play a crucial role in the transition to Industry 4.0, as they offer an easy way to reconfigure the assembly line by self-reorganization, and provide more options for cooperation with human workers. In the near future, mobile manipulators may become an integral component of smart factories, workshops and highly modular assembly lines \cite{xu2018industry,petrasch2016process,roblek2016complex,davis2012smart}.\looseness=-1 The field of precision agriculture may also greatly benefit from mobile manipulators. However, robots working outdoors have to overcome changing weather and lighting conditions \cite{duckett2018agricultural}.
In \cite{bac2014harvesting}, more than 50 agricultural mobile manipulation systems were reviewed. The results show a significant lack of reliability when the robots are deployed outside laboratory conditions. However, the challenges of reliable robotic perception in real-world outdoor environments are slowly being overcome, as new sensors and processing methods are developed \cite{ponnambalam2020agri,binch2020context,kusumam20173d}. Novel approaches also allow for long-term deployment of agricultural robotic systems \cite{pretto2020building}. A study in \cite{bogue2020fruit} shows that specializing in one crop type and adjusting the environment for robotic operation significantly improves reliability in real-world conditions. Aiming to push the boundaries of research, the Mohamed Bin Zayed International Robotics Challenge\footnote{\url{http://mbzirc.com}} (MBZIRC) held in 2017 and in 2020 also focused on mobile manipulation tasks. The competition attracted attention and participants from renowned research institutions from all over the world. In 2017, a ground robot was tasked to autonomously locate a valve, pick up a tool of an appropriate size and turn the valve \cite{schwarz2019team}. It also featured the ``Treasure hunt'' challenge, where a team of up to 3 aerial robots was tasked to locate, grasp and transport magnetic discs to a designated drop-off zone \cite{spurny2019cooperative, baca2019autonomous, loianno2018localization}. In 2020, a successor to the Treasure hunt was introduced in the ``Brick building'' challenge. The difficulty was increased significantly by adding objects of varying sizes, replacing the drop-off with precision placement, and requiring both ground and aerial vehicles to be deployed simultaneously \cite{penicka2020mbzirc}.
\section{Robotic platform} \begin{figure}[htbp] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/husky_no_cargo.jpg} \subcaption{Proposed mobile manipulator} \label{fig:folded_arm} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/husky_with_cargo.jpg} \subcaption{Cargo bay attached} \end{subfigure} \caption{An overview of the proposed robotic system. A Gen3 Kinova manipulator arm is attached to a Clearpath Husky wheeled base. A custom-designed cargo bay is attached to the robot to increase payload capacity and minimize the time spent by traversing the construction site.} \label{fig:platform} \end{figure} The proposed system is built using mostly off-the-shelf components, which corresponds well with our modular and open design philosophy. A Clearpath Robotics Husky A200 serves as the mobile base for the system. An Intel NUCi7, running Ubuntu 18.04, is used as the main onboard computer. The Robot Operating System (ROS) serves as a middleware interconnecting the hardware and software parts \cite{quigley2009ros}. For object manipulation, a 7 DOF Kinova Robotics Gen3 arm is employed. The arm is mounted to the mobile base as shown in Fig. \ref{fig:platform}. The arm is equipped with a custom-designed, force-compliant end-effector with a magnetic gripper, as shown in Fig. \ref{fig:gripper_detail}. Its design is based on our previous experience with magnetic object grasping by aerial vehicles \cite{spurny2019cooperative, loianno2018localization}. The gripper features two YJ-40/20 solenoid electromagnets, mounted at a $130$~mm center-to-center pitch. The magnets are rated to operate at $12$~V, with each magnet providing up to $25$~kg of holding force. However, the force is strongly affected by the relative distance and orientation of the contact surfaces. In our setup, the magnets are overcharged with a $24$~V input, which effectively doubles the available grasping force.
On the other hand, it makes the gripper susceptible to overheating when powered on for extended periods of time. Each magnet is fitted with a Hall-effect sensor, which serves as a proximity detector for ferromagnetic objects. The sensor readout and power management of the magnets are handled by an Arduino Nano board, which is connected via USB to the main computer. \begin{figure}[htbp] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/gripper_render.png} \subcaption{CAD design} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/gripper_photo.png} \subcaption{3D printed and mounted} \end{subfigure} \caption{Detail of the custom-designed magnetic gripper for the end-effector. The gripper is fitted with two electromagnets and Hall-effect sensors. The design also incorporates a mounting point for the Realsense D435 camera, which is used for visual servoing.} \label{fig:gripper_detail} \end{figure} To minimize the time spent on transitions between the material loading area and the deployment area, a custom-designed cargo bay is attached to the rear of the robot. The cargo bay allows the robot to store up to 7 bricks (4 red, 2 green and 1 blue), which are sufficient for the construction of an entire section of the structure. Loading a brick into the cargo bay takes around 30 seconds, during which the gripper temperature remains at reasonable levels. For environment sensing, the robot is fitted with a Velodyne Puck VLP-16 LiDAR\footnote{Light Detection and Ranging}, consisting of a rotating infrared laser range-finder. The LiDAR provides a $360^{\circ}$ horizontal and a $30^{\circ}$ vertical field of view, with an effective range of around $50$~m. The sensor is connected to the main computer via Ethernet, and provides measurements at a rate of $10$~Hz in the form of a 3D point cloud. The manipulator comes fitted with an Intel Realsense D410 RGBD camera integrated into the wrist link.
This camera was intended to provide visual feedback during the brick grasping process. However, we experienced a significant delay in the image output, which made it unusable in a feedback loop. Therefore, an additional Intel Realsense D435 RGBD camera was mounted to the gripper and connected directly to the onboard computer. The camera provides an RGB resolution of up to $1920 \times 1080$ pixels, and a depth image resolution of up to $1280 \times 720$ pixels. \subsection{Contribution} We present a self-contained robotic system for autonomous localization, grasping, transportation and precise placement of magnetic blocks. The system design is provided to the community together with the necessary onboard software as open-source \cite{github}. In contrast to the commercially available mobile manipulators, the proposed system is highly modular and dexterous, while maintaining a compact form factor. As was mentioned earlier, these two design requirements often contradict each other. The proposed system greatly benefits from a unique combination of a compact, yet powerful mobile base, and a very lightweight ($8.2$~kg) manipulator arm with 7 DOF, which is able to carry objects weighing up to $3$~kg. Another major advantage is the ability to traverse uneven terrain, which is essential for outdoor operation, but is not present in the aforementioned industrial solutions. We propose a modified variant of the fast segmentation algorithm introduced in \cite{krajnik2014jint} to efficiently extract objects of interest from both depth and RGB image streams. The system is designed to be invariant to external lighting, and was proven to work in darkness as well as in broad daylight. We also show how to enhance the limited perception range of a ground-based robot by employing a cooperating aerial vehicle equipped with an onboard camera to quickly detect visually distinct features of the area.
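The constant-time pixel classification underlying this segmentation approach can be illustrated with a minimal sketch; the grid resolution, class labels and training interface below are illustrative assumptions, not the actual implementation of \cite{krajnik2014jint}:

```python
# Minimal sketch of constant-time pixel classification via a
# precomputed 3D RGB lookup grid. Grid resolution, labels and the
# training interface are illustrative assumptions.

BINS = 32           # assumed grid resolution per color channel
STEP = 256 // BINS  # channel values covered by one grid cell

def make_grid():
    # 0 = unknown, 1 = object, 2 = background
    return [0] * (BINS ** 3)

def index(r, g, b):
    """Map an (r, g, b) pixel to its grid cell."""
    return ((r // STEP) * BINS + (g // STEP)) * BINS + (b // STEP)

def train(grid, samples, label):
    """Mark the grid cells of known (r, g, b) samples with a class label."""
    for r, g, b in samples:
        grid[index(r, g, b)] = label

def classify(grid, r, g, b):
    """Classify a single pixel in O(1) by a grid lookup."""
    return grid[index(r, g, b)]

grid = make_grid()
train(grid, [(255, 0, 255), (250, 10, 240)], 1)  # magenta-ish: object
train(grid, [(90, 90, 90)], 2)                   # gray-ish: background
print(classify(grid, 252, 4, 248))  # -> 1 (object)
print(classify(grid, 0, 200, 0))    # -> 0 (unknown)
```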
Finally, we share the experience gained by our participation in the Brick challenge of the MBZIRC 2020. Out of the 19 teams participating in this challenge, only 2 managed to complete the task with their ground robots in autonomous mode. As the winners of the competition, we believe that presenting the complete system to the community will make a valuable contribution. \section{Problem description} This work tackles a construction task in an outdoor semi-structured environment. The area of operation is rectangular, approximately $50$~m~$\times$~$60$~m in size, and is largely devoid of features and obstacles. One unmanned ground vehicle (UGV) and three unmanned aerial vehicles (UAV) are deployed in the area simultaneously. The following objects of interest are located in the area: stacked bricks for the UGV to pick up, and a colored pattern onto which the bricks are to be placed. The arena also contains stacked bricks and a placement area for the UAVs. These objects are not of interest to the UGV and are therefore treated as obstacles. The objects may be located at arbitrary positions in the arena; their precise positions and orientations are not known in advance. The stacked bricks are arranged in a predefined pattern, as shown in Fig. \ref{fig:objects_of_interest}. There are four classes of bricks, which differ in size, weight and color. Each brick features a white ferrous plate attached to the top side. The placement area consists of two flat segments forming an L-shape. The segments are covered in a high-contrast yellow-magenta checkered pattern, see Fig. \ref{fig:objects_of_interest}; each segment is $4$~m long and $0.4$~m wide. The pattern is laid directly on the ground. The structures related to the UAVs are clearly distinguishable, with the pickup area only consisting of one layer of bricks, and the placement area being located on an elevated platform with a height of $1.7$~m.
The goal is to use the colored bricks to construct a wall in the area marked by the checkered pattern. Each layer of the wall consists of exactly 2 orange, 1 blue, 2 green and 4 red bricks. The ordering of the bricks is specified in advance, but changes with each trial. Each correctly placed brick is rewarded with a score gain, with larger bricks being worth more points. The time limit to complete the challenge is 30 minutes. \begin{figure}[thpb] \centering \begin{subfigure}[b]{.78\columnwidth} \includegraphics[width=1.0\textwidth, clip, trim=1.7cm 0 0 0]{img/brick_pile.png} \label{fig:brick_pile} \end{subfigure} \begin{subfigure}[b]{.2\columnwidth} \includegraphics[width=1.0\textwidth]{img/checkers.png} \label{fig:build_pattern} \end{subfigure} \caption{Objects of interest with which the UGV interacts. The brick pickup area (left) consists of colored bricks stacked in multiple layers. The checkered pattern (right) indicates the deployment zone.} \label{fig:objects_of_interest} \end{figure} \section{Navigation and localization} The UGV uses the default ROS navigation stack, as implemented by the STRANDS project \cite{strands}. The main source of positional data for navigation is the onboard LiDAR. During the development, we discovered that the dynamically changing area of a construction site and a non-flat ground plane introduce a large number of false-positive measurements into the localization stack. Experimental evaluation showed that the localization performance can be significantly improved by focusing on the upper layers of the point cloud. Therefore, the LiDAR scan is separated into two horizontal slices, with the upper slice being used for robot localization, and the lower slice for obstacle avoidance. For this particular application, the slicing threshold was empirically set to $1.5$~m above the sensor. Prior to deployment, a high-resolution scan of the surrounding environment is taken by a Leica BLK360 3D scanner.
The point cloud obtained by the 3D scanner is used to augment the measurements taken by the onboard LiDAR. The augmentation contributes significantly to the quality of localization, especially when the UGV moves near the area center, where the feature density is the lowest. The pose of the UGV is estimated using the Adaptive Monte Carlo Localization (AMCL)\footnote{\url{http://wiki.ros.org/amcl}}. The AMCL essentially attempts to correct the drift in measured wheel odometry by matching onboard laser scans to the area map \cite{fox1999monte}. However, the odometry estimate of the Husky is distorted by shifts of the vehicle's center of mass, which occur with every manipulator movement and with every brick added into the cargo bay. We managed to solve this issue by exhaustively tuning the parameters of the odometry estimator for various vehicle and payload configurations. The ROS move\_base\footnote{\url{http://wiki.ros.org/move_base}} package is employed for path planning. Several adaptations of the code were made to allow the UGV to dynamically change the precision of path following. During the exploration phase, the precision is relaxed, as the focus is on covering a large area in the shortest amount of time. In contrast, while approaching the stacked bricks, high precision is required, since the area reachable by the manipulator is limited. In the high-precision mode, the UGV performs a series of ``parallel parking'' manoeuvres to reach the desired position with very tight margins. Without any additional information, the UGV conducts a full exploration of the area by generating a grid of waypoints to achieve full sensor coverage of the area, see Fig. \ref{fig:exploration}. An early termination of the procedure is triggered if all objects of interest are located. \begin{figure}[thpb] \vspace{0.3em} \centering \includegraphics[width=0.75\columnwidth]{img/exploration.png} \caption{Visualization of the full coverage exploration strategy.
The high-resolution map obtained by the 3D scanner is outlined in black. The yellow frame delimits the operational area, and the navigational waypoints are shown in purple. The green circles centered around each waypoint represent the maximal perception range of the system. The UGV then follows a zig-zag pattern in order to visit all the waypoints.\looseness=-1} \label{fig:exploration} \end{figure} \subsection{Build pattern detection} The checkered building pattern cannot be detected by the LiDAR, as it blends in with the ground plane. Therefore, we use the RGB portion of the data provided by the arm-mounted RGBD camera. For the pattern detection, the fast segmentation algorithm based on \cite{krajnik2014jint,vstvepan2019vision} is used. The algorithm uses a 3D RGB lookup grid to classify individual pixels as background, object or unknown. The method enables pixel-wise classification in constant time, which is achieved by pre-computing the lookup grid during a calibration process. In the case of the high-contrast pattern, the grid is initialized using a Gaussian mixture to model the desired color in the HSV color space. The RGB image is then searched for the desired color. Once a satisfactory pixel is found, the flood-fill algorithm is initiated to extract the segment to which the pixel belongs. In the default state, the algorithm would only find individual squares of the checkered pattern, which would require costly post-processing to merge the objects back together. Instead, we propose a modified flood-fill algorithm capable of bypassing small discontinuities in the segments. The algorithm, as presented in \cite{vstvepan2019vision}, naturally obtains the number of colors surrounding each pixel in the segment, allowing for easy corner detection. For each corner, the searched neighborhood is increased to 5 pixels. If a new pixel of the desired color is found within this neighborhood, it is added to the original segment.
This way, half of the entire checkered pattern may be extracted as a single segment. Comparing the dimensions of the segment with the a priori knowledge of the build area allows for effective filtering of false-positive candidates. During the pattern search, the UGV fully extends the arm, placing the camera into an elevated position, see Fig. \ref{fig:exploarm}. The camera is then rotated to fully scan the surroundings of the robot. This process requires the UGV to stop moving, as the center of mass is greatly offset by the arm. Due to the camera resolution and the density of the pattern, the detection range is limited to approximately $10$~m. Achieving full area coverage would require performing the scan at 20 different locations, see Fig.~\ref{fig:exploration}, which is not feasible within the given time limit. \begin{figure}[htbp] \vspace{0.4em} \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/pattern_search.jpg} \subcaption{Long range pattern search} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/pattern_detect.png} \subcaption{Precise pattern alignment} \end{subfigure} \caption{While searching for the pattern, the UGV extends the arm upwards to give the camera an elevated viewpoint (left). Doing so requires the vehicle to stop moving due to a large shift in the center of gravity. Once the building area is located, the UGV uses visual servoing to precisely align the mobile base with the pattern (right). Green highlights the detected segment edges. The proposed segmentation algorithm enables extraction of all the magenta squares as one object in a single pass, without the need for post-processing.} \label{fig:exploarm} \end{figure} Since the UGV shares the construction area with a team of aerial vehicles, we opted to use a cooperative approach to detect the building pattern.
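The gap-bridging flood fill described above can be sketched on a binary mask as follows; for brevity, this sketch applies the enlarged search neighborhood at every pixel, whereas the actual method enlarges it only at detected corners:

```python
from collections import deque

def flood_fill_bridging(mask, seed, bridge=5):
    """Extract the segment containing `seed` from a binary mask,
    bridging discontinuities smaller than `bridge` pixels.

    Simplified sketch: the enlarged neighborhood is applied at every
    pixel, whereas the described method enlarges it only at detected
    corners of the checkered pattern."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    seen[seed[0]][seed[1]] = True
    q = deque([seed])
    segment = [seed]
    while q:
        y, x = q.popleft()
        for dy in range(-bridge, bridge + 1):
            for dx in range(-bridge, bridge + 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                    seen[ny][nx] = True
                    q.append((ny, nx))
                    segment.append((ny, nx))
    return segment

# Two 3x3 blocks separated by a 2-pixel gap are merged into one segment.
mask = [[1 if (x < 3 or x > 4) else 0 for x in range(8)] for _ in range(3)]
print(len(flood_fill_bridging(mask, (0, 0))))  # -> 18
```

With `bridge=1` the same call returns only the 9 pixels of the first block, since the plain 8-neighborhood cannot cross the gap.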
The UAVs are equipped with a downward-facing RGBD camera, and are able to run the pattern detection pipeline on board. Due to the orientation of the camera, the UAV does not have to stop moving, and the detection may run continuously while in motion. The detections are reported to the UGV over a WiFi network. We employ the NimbroNetwork \cite{nimbro_git}, which allows ROS message sharing between multiple independent robotic systems. Once the approximate position of the build pattern is reached, the arm is folded so that the camera aims forward, as shown in Fig. \ref{fig:folded_arm}. The UGV then drives in an outward spiral until the pattern appears in the image. Finally, the arm is extended to the right-hand side, and the UGV uses a similar visual servoing approach to align itself with the pattern as it does with the brick stack, see Fig.~\ref{fig:exploarm}. The bricks are then deployed in an order determined by the inventory manager. During the MBZIRC 2020 finals, the UAV assistance was not used due to the fear of overloading the communication channel and disrupting the mutual coordination of the UAVs. Instead, the pattern position was estimated from images taken during the previous contest rounds, and the cooperation was only emulated. \subsection{Inventory management} The proposed design features a cargo bay, which can hold up to 7 bricks. However, in order to preserve a compact form factor, the bricks in the cargo bay are stacked in three layers. This renders some bricks unreachable if other bricks are loaded on top of them. We have developed an inventory management system that monitors the reachability of individual bricks. The inventory manager also processes the blueprint of the desired wall assembly, and determines which bricks to pick up and where to place them. This module also allows the UGV to dynamically change the building strategy.
This feature is especially useful when the UGV encounters an unexpected issue, such as an inability to grasp some of the bricks. In such a case, the new strategy is to immediately navigate to the building pattern and unload all the remaining cargo. \section{Introduction} Robotic assembly has become a staple of the manufacturing process over the past decades. For many years, industrial robots have been purpose-built to perform a single task with nearly no versatility or possibility of reconfiguring the assembly process. The status quo is slowly changing with the increasing availability of compact but powerful computational units and new lightweight actuators. The available processing power allows for extensive use of advanced planning and perception methods, as well as deep learning techniques supported by a widespread availability of datasets \cite{reco}. The growth in e-commerce has also led to the deployment of robots in warehouses, where they are used for automated storage or product retrieval \cite{bogue2016growth}. Moreover, robots also see increased prevalence outside of structured environments, e.g., in households and offices \cite{strands,spencer}. The construction task addressed in this paper takes the logistic aspects of warehouse automation and applies them in a less organized, semi-structured environment, as shown in Fig. \ref{fig:promo}. \begin{figure}[ht!] \centering \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/loading.jpg} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \includegraphics[width=\textwidth]{img/placement.jpg} \end{subfigure} \caption{The described system loading and placing building material during the MBZIRC 2020 contest.} \label{fig:promo} \end{figure} Despite the undeniable progress made in recent years, common household robots like intelligent vacuum cleaners are still a long way from versatile robotic assistants.
The challenges of designing such a system include navigation in a cluttered dynamic environment and interaction with a wide variety of household objects. Most importantly, the operation has to be reliable and safe, especially since the robot is expected to share its operational space with humans. Unfortunately, the design requirements often contradict each other. Excellent dexterity can only be achieved with a large robotic arm with many actuators, which in turn results in a need for a large and stable mobile base. However, safe and precise navigation in a cluttered environment is often difficult for bulky robots. Therefore, present-day mobile manipulators appear clumsy and require human supervision while operating, which somewhat negates the purpose of an automated system. This becomes even more pronounced when the operation spans longer time periods \cite{strands,lta}. The need for human supervision also makes the mobile manipulator economically unviable. The economic aspects may be offset by considering environments that are potentially hazardous for humans. These include search and rescue operations in underground environments \cite{rouvcek2019darpa,petrlik2020robust}, clearing areas of toxic or radioactive debris \cite{amjadi2019cooperative,nawaz2009underwater}, high-rise building maintenance \cite{maintenance} or power line inspection \cite{9213924, 9213967}. \subsection{Brick detection, grasping and loading} The brick detection pipeline is built on the fast image segmentation proposed in \cite{krajnik2014jint}. The algorithm has been successfully deployed in the Treasure hunt challenge of MBZIRC 2017 \cite{vstvepan2019vision,spurny2019cooperative, loianno2018localization} for the detection of circular objects. The original algorithm works with data provided by an RGB camera. For the brick detection, we opted to use a depth camera, which proved to be more resilient to illumination changes.
Therefore, the algorithm was adapted for depth image processing, which enabled the robot to operate in direct sunlight and also at night.\looseness=-1 The algorithm factors in the position of the arm to estimate the height of the camera above the ground plane. The depth information contained within each pixel is transformed into height above the estimated ground level. The algorithm begins by discarding all data below the height of $0.1$~m. Then, a flood-fill algorithm described in \cite{krajnik2014jint} is initiated to extract brick candidate segments. In addition to the segments, the algorithm also obtains their centers, sizes, eccentricities and distances from the camera. Finally, the method determines whether all edges of the segment are visible to the camera. The processing pipeline then attempts to assign a brick class to the fully visible segments. It starts by locating the pixel farthest from the segment center, which we denote as $c_0$. Then, a pixel $c_1$ is found by searching for the maximal distance from $c_0$. The third corner $c_2$ is found by searching for a maximal distance from both $c_0$ and $c_1$. The final corner $c_3$ is located by maximizing the sum of distances from the other three corners. Using the coordinates of the corner pixels and the depth information, the segment is transformed from camera coordinates into 3D, and the length and width of the object are calculated. Since the brick classes are clearly distinguishable by their dimensions alone, RGB processing may be omitted. Therefore, the system does not require color calibration to the current lighting conditions.
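The corner search described above can be sketched as follows. This is an illustrative sketch only: the farthest-point criteria follow the text, while the aggregation used for $c_2$ (summing the distances to $c_0$ and $c_1$) is one plausible reading of ``maximal distance from both''.

```python
import numpy as np

def find_corners(pixels):
    """Locate the four corners of a roughly rectangular segment using the
    farthest-point heuristic described in the text. `pixels` is an (M, 2)
    array of (row, col) coordinates belonging to the segment.
    """
    pts = np.asarray(pixels, dtype=float)

    def farthest(refs):
        # pixel maximizing the summed distance to all reference points
        d = sum(np.linalg.norm(pts - r, axis=1) for r in refs)
        return pts[np.argmax(d)]

    c0 = farthest([pts.mean(axis=0)])  # farthest from the segment center
    c1 = farthest([c0])                # farthest from c0
    c2 = farthest([c0, c1])            # farthest from both c0 and c1
    c3 = farthest([c0, c1, c2])        # maximizes the summed distance
    return np.array([c0, c1, c2, c3])
```

Applied to an axis-aligned rectangular segment, the heuristic recovers the four geometric corners without any exhaustive pairwise search.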
\begin{figure}[htbp] \centering \begin{subfigure}{0.32\columnwidth} \includegraphics[width=\textwidth]{img/servoing_rgb.png} \subcaption{RGB image} \end{subfigure} \begin{subfigure}{0.32\columnwidth} \includegraphics[width=\textwidth]{img/servoing_depth_raw.png} \subcaption{Raw depth image} \end{subfigure} \begin{subfigure}{0.32\columnwidth} \includegraphics[width=\textwidth]{img/servoing_segmented_with_features.png} \subcaption{Processed depth} \end{subfigure} \caption{Visual servoing based on the wrist-mounted RGBD camera. The RGB image is only shown for comparison. Due to the tight packing of the bricks, multiple segments are visible at the same time. The segmented depth image is shown on the right, with detected corners marked in red and the current alignment target in yellow. Note the robot wheels at the bottom of the image.} \label{fig:visual_servoing} \end{figure} The processed depth image (see Fig.~\ref{fig:visual_servoing}) is used for visual servoing of both the mobile base and the manipulator. If the image contains the brick class requested by the inventory manager, the segment position is transformed into the coordinate system of the gripper, and the arm attempts to align with its center. If multiple segments fit the request, the system prioritizes the segment closest to the lower right corner of the image. If the segment center is unreachable by the arm, the mobile base performs small turns to adjust its position. The motion stops once the brick center is located directly below the gripper with a $\pm8$~cm tolerance. Afterwards, the gripper is aligned with the brick center (tolerance $\pm2$~cm) and the arm begins to descend.\looseness=-1 During the descent, lateral adjustments to the gripper position are made as long as the brick is fully visible. Then, the magnets are powered on, and the arm descends straight down until the Hall-effect sensors report a successful grasp.
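The alignment logic above can be summarized as a single decision step. The following is a simplified sketch: the dictionary keys and state labels are hypothetical, while the tolerances ($\pm8$~cm for the mobile base, $\pm2$~cm for the gripper) follow the text.

```python
import math

def servo_step(segments, requested_class, img_w, img_h,
               base_tol=0.08, grip_tol=0.02):
    """One decision step of the visual-servoing loop (illustrative).
    Each segment carries its pixel position 'px' and 'offset', the
    offset of its center in metres expressed in the gripper frame.
    """
    candidates = [s for s in segments if s['cls'] == requested_class]
    if not candidates:
        return 'search'
    # prioritize the segment closest to the lower-right image corner
    target = min(candidates,
                 key=lambda s: math.hypot(img_w - s['px'][0],
                                          img_h - s['px'][1]))
    dx, dy = target['offset']
    err = math.hypot(dx, dy)
    if not target.get('reachable', True):
        return 'turn_base'      # small base turns until the arm can reach
    if err > base_tol:
        return 'move_base'      # coarse alignment with the mobile base
    if err > grip_tol:
        return 'align_gripper'  # fine alignment with the arm
    return 'descend'            # within 2 cm of the center: descend
```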
Grasping failure is detected either by reaching lower than expected (brick missed completely) or by detecting an excessive torque on the arm joints (ferromagnetic plate missed). In case of failure, the robot attempts the grasp again. After two failed attempts, the brick is marked invalid, and the UGV moves on to the next one. Once the brick is grasped, the inventory manager assigns it a position in the cargo bay. A series of predefined actions ensures that the arm stores the brick in the assigned inventory slot. Based on the brick class and the current payload status, additional actions are added to the loading sequence to avoid potential collisions.
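The layer-reachability bookkeeping performed by the inventory manager can be sketched as follows. The slot names and the stacking relation are hypothetical (the real layout depends on the cargo bay geometry), and physical support constraints when loading are omitted for brevity.

```python
class CargoBay:
    """Sketch of the inventory manager's reachability bookkeeping.
    The bay holds up to 7 bricks in three layers; a brick is reachable
    only when every slot stacked on top of it is empty.
    """

    # slot -> slots resting directly on top of it (assumed layout)
    ABOVE = {'L1a': ['L2a'], 'L1b': ['L2a', 'L2b'], 'L1c': ['L2b'],
             'L2a': ['L3a'], 'L2b': ['L3b'], 'L3a': [], 'L3b': []}

    def __init__(self):
        self.slots = {s: None for s in self.ABOVE}

    def reachable(self, slot):
        # pickable only if no brick occupies a slot above it
        return all(self.slots[s] is None for s in self.ABOVE[slot])

    def store(self, slot, brick):
        assert self.slots[slot] is None and self.reachable(slot)
        self.slots[slot] = brick

    def retrieve(self, slot):
        assert self.slots[slot] is not None and self.reachable(slot)
        brick, self.slots[slot] = self.slots[slot], None
        return brick
```

Querying `reachable` before committing to a pick-up is what allows the planner to reorder the building sequence, or to fall back to unloading everything that is currently accessible.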
\section*{}} \newcommand{\mathds{1}}{\mathds{1}} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\bm{\mathcal{A}}}{\bm{\mathcal{A}}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\bm{\mathcal{C}}}{\bm{\mathcal{C}}} \newcommand{\bm{z}}{\bm{z}} \newcommand{\bm{I}}{\bm{I}} \newcommand{{\bf P}}{{\bf P}} \newcommand{\bm{\mathcal{G}}}{\bm{\mathcal{G}}} \newcommand{\bm{\xi}}{\bm{\xi}} \newcommand{\bm{v}}{\bm{v}} \newcommand{\bm{x}}{\bm{x}} \newcommand{\bm{w}}{\bm{w}} \newcommand{\bm{m}}{\bm{m}} \newcommand{\bm{\rho}}{\bm{\rho}} \newcommand{\bm{T}}{\bm{T}} \newcommand{\beta}{\bm{e}} \newcommand{\bm{\Sigma}}{\bm{\Sigma}} \newcommand{\bm{K}}{\bm{K}} \newcommand{\bm{\eta}}{\bm{\eta}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\mathcal{J}}{\mathcal{J}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\mathcal{Y}}{\mathcal{Y}} \newcommand{\mathcal{T}}{\mathcal{T}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{Z}}{\mathcal{Z}} \newcommand{\varnothing}{\varnothing} \newcommand{\mathbf{\Sigma}}{\mathbf{\Sigma}} \newcommand{\mathbf{\Pi}}{\mathbf{\Pi}} \newcommand{\mathbf{\Delta}}{\mathbf{\Delta}} \newcommand{F:X\to 2^Y\backslash \left\{ \keno \right\} }{F:X\to 2^Y\backslash \left\{ \varnothing \right\} } \newcommand{\breve{\mathcal{B}}_d}{\breve{\mathcal{B}}_d} \newcommand{\overline{\mathcal{B}}_s}{\overline{\mathcal{B}}_s} \newcommand{D_g^-f}{D_g^-f} \newcommand{D_g^+f}{D_g^+f} \newcommand{\mathrel{\mathop:}=}{\mathrel{\mathop:}=} \newcommand{=\mathrel{\mathop:}}{=\mathrel{\mathop:}} \newcommand{\norm}[1]{\big\|#1\big\|} \newcommand{\smnorm}[1]{\|#1\|} \newcommand{\sum_{j=1}^{\infty}}{\sum_{j=1}^{\infty}} \newcommand{\sum_{k=1}^{\infty}}{\sum_{k=1}^{\infty}} 
\newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\pr}[2]{\big\langle#1,#2\big\rangle} \newcommand{(\raisebox{-.5pt}{\scalebox{1.45}{\Letter}}\kern-1.7pt)}{(\raisebox{-.5pt}{\scalebox{1.45}{\Letter}}\kern-1.7pt)} \newcommand{{\lambda,\delta}}{{\lambda,\delta}} \newcommand{\lambda}{\lambda} \newcommand{{\rm I}}{{\rm I}} \newcommand{\bm{u}}{\bm{u}} \newcommand{\bm{y}}{\bm{y}} \newcommand{\alpha}{\alpha} \newcommand{\beta}{\beta} \newcommand{\bigO}[1]{\mathcal{O}{\left(#1\right)}} \newcommand\smallO[1]{\mathchoice{{\scriptstyle\mathcal{O}}}{{\scriptstyle\mathcal{O}}}{{\scriptscriptstyle\mathcal{O}}}{\scalebox{.7}{$\scriptscriptstyle\mathcal{O}$}}{\left(#1\right)}} \newcommand{\nu_\delta}{\nu_\delta} \newcommand{\nu_{\lambda,\delta}}{\nu_{\lambda,\delta}} \newcommand{\lambda j^{-4\ell+2b}+\delta j^{2a}}{\lambda j^{-4\ell+2b}+\delta j^{2a}} \newcommand{\de}[1]{\delta^{(#1)}} \newcommand{\si}[1]{\lambda} \newcommand{\sum_{j=1}^{N}}{\sum_{j=1}^{N}} \newcommand{\prh}[2]{\big\langle#1,#2\big\rangle_{h}} \newcommand{\lambda}{\lambda} \newcommand{\frac{\delta}{\lambda}}{\frac{\delta}{\lambda}} \newcommand{{\rm {Gamma}}}{{\rm {Gamma}}} \newcommand{\upalpha_1}{\upalpha_1} \newcommand{\upbeta_1}{\upbeta_1} \newcommand{\upalpha_0}{\upalpha_0} \newcommand{\upbeta_0}{\upbeta_0} \newcommand{\mathcal{R}}{\mathcal{R}} \begin{document} \title{\textbf{Analysis of the Gibbs sampler for hierarchical inverse problems}} \author{Sergios Agapiou \\\small Mathematics Institute, University of Warwick\\ \small Coventry CV4 7AL, United Kingdom\\\small [email protected]\\\\ Johnathan M. Bardsley \\\small Department of Mathematical Sciences, University of Montana\\\small Missoula, MT, 59812-0864 USA\\\small [email protected]\\\\Omiros Papaspiliopoulos \\\small Department of Economics, Universitat Pompeu Fabra\\ \small Ramon Trias Fargas 25-27, 08005 Barcelona, Spain\\\small [email protected]\\\\Andrew M.
Stuart\\\small Mathematics Institute, University of Warwick\\\small Coventry CV4 7AL, United Kingdom\\\small [email protected] } \date{} \maketitle {\bf Abstract} Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in $\mathbb{R}^N$, with an understanding that refining the discretization, that is increasing $N$, will often be desirable. In the context of Bayesian inversion this situation suggests the importance of two issues: (i) defining hyper-parameters in such a way that they are interpretable in the continuum limit $N \to \infty$ and so that their values may be compared between different discretization levels; (ii) understanding the efficiency of algorithms for probing the posterior distribution, as a function of large $N.$ Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise within a hierarchical modelling framework based on a Gaussian prior for the unknown field and an inverse-gamma prior for a hyper-parameter, namely the amplitude of the prior variance. The structure of the model is such that the Gibbs sampler can be easily implemented for probing the posterior distribution. Subscribing to the dogma that one should think infinite-dimensionally before implementing in finite dimensions, we present function space intuition and provide rigorous theory showing that as $N$ increases, the component of the Gibbs sampler for sampling the amplitude of the prior variance becomes increasingly slow. We discuss a reparametrization of the prior variance that is robust with respect to the increase in dimension; we give numerical experiments which demonstrate that our reparametrization prevents this slowing down.
Our intuition on the behaviour of the prior hyper-parameter, with and without reparametrization, is sufficiently general to include a broad class of nonlinear inverse problems as well as other families of hyper-priors. \vspace{0.4cm} {\bf Key words.} Gaussian process priors, Markov chain Monte Carlo, inverse covariance operators, hierarchical models, diffusion limit.\\ {\bf 2010 Mathematics Subject Classification.} 62G20, 62C10, 62D05, 45Q05. \vspace{0.4cm} \pagestyle{myheadings} \thispagestyle{plain} \markboth{S. Agapiou, J. M. Bardsley, O. Papaspiliopoulos, A. M. Stuart} {Analysis of Gibbs sampler for inverse problems} \section{Introduction}\label{ch3:sec:int} We consider the possibly nonlinear inverse problem of recovering an unknown parameter $\bm{u}\in\mathcal{X}$ from a noisy indirect observation $\bm{y}\in\mathcal{Y}$. We work in a framework where $\mathcal{X}$ is an infinite-dimensional separable Hilbert space with inner product $\pr{\cdot}{\cdot}$ and norm $\smnorm{\cdot}$, and $\mathcal{Y}$ is also a separable Hilbert space. We will be especially interested in the case $\mathcal{Y}=\mathcal{X}$ or $\mathcal{Y}=\mathbb{R}^M$. The unknown parameter and the observation are related through an additive noise model \begin{equation} \label{ch3:eq:1} \bm{y}=\bm{\mathcal{G}}(\bm{u})+\bm{\eta},\end{equation} where $\bm{\mathcal{G}}:\mathcal{X}\to\mathcal{Y}$ is the forward map which is assumed to be continuous, and $\bm{\eta}$ is Gaussian noise \begin{align}\label{ch3:eq:2}\bm{\eta}\sim\mathcal{N}(0,\lambda^{-1}\bm{\mathcal{C}}_1).\end{align} The linear operator $ \bm{\mathcal{C}}_1:\mathcal{Y}\to\mathcal{Y}$ is bounded and positive definite and $\lambda>0$ models the noise level; we do not enforce that $\bm{\mathcal{C}}_1$ is trace-class, thereby allowing the case of Gaussian white noise where it is the identity. 
We adopt a Bayesian approach with a Gaussian prior on the unknown parameter $\bm{u}$ \begin{align}\label{ch3:eq:3}\bm{u}|\delta\sim\mathcal{N}(0,\delta^{-1}\bm{\mathcal{C}}_0),\end{align} where $\bm{\mathcal{C}}_0:\mathcal{X}\to\mathcal{X}$ is a positive definite and trace-class operator and $\delta>0$ models the amplitude of the prior variance; the unknown $\bm{u}$ is assumed to be independent of the noise $\bm{\eta}$. The trace-class assumption on $\bm{\mathcal{C}}_0$ ensures that draws from the prior on $\bm{u}|\delta$ are in $\mathcal{X}$. For a fixed $\bm{u}$ the likelihood is Gaussian, $\bm{y}|\bm{u},\delta\sim\mathcal{N}(\bm{\mathcal{G}} (\bm{u}),\lambda^{-1}\bm{\mathcal{C}}_1)$. We work under certain regularity conditions on the forward map $\bm{\mathcal{G}}$, which imply that the inverse problem is sufficiently ill-posed; in particular, for the noise model at hand, these conditions imply that the unknown $\bm{u}$ is not perfectly identifiable from a single realization of the data. Under the additional assumption that the prior on $\bm{u}|\delta$ is such that the regularity conditions on $\bm{\mathcal{G}}$ are satisfied in its support, it can be shown that almost surely with respect to the data the posterior on $\bm{u}|\bm{y},\delta$ is well defined, non-degenerate and absolutely continuous with respect to the prior on $\bm{u}|\delta$, \cite{AS10}. In what follows, we consider the hyper-parameter $\delta$ as a part of the inference problem, that is, we endow it with a prior $\mathbb{P}(\delta)$; this leads to a hierarchical Bayesian model. The potential for the use of hierarchical priors in inverse problems has been highlighted in \cite{KS05}, where the authors express the conviction that \emph{if a parameter is not known, it is a part of the inference problem}; see also \cite{CS08, CHPS09} where conditionally Gaussian hierarchical models have been considered in finite dimensional contexts. 
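For later reference, the full hierarchical model assembled in (\ref{ch3:eq:1})--(\ref{ch3:eq:3}), together with the hyper-prior, reads
\begin{align*}
\bm{y}|\bm{u},\delta&\sim\mathcal{N}\big(\bm{\mathcal{G}}(\bm{u}),\lambda^{-1}\bm{\mathcal{C}}_1\big),\\
\bm{u}|\delta&\sim\mathcal{N}(0,\delta^{-1}\bm{\mathcal{C}}_0),\\
\delta&\sim\mathbb{P}(\delta).
\end{align*}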
Returning to our setting, we note that of course in practice other aspects of the model, such as parameters that control the regularity of the draws from the prior, will also be part of the inference problem. Section \ref{ch3:sec:con} discusses how the results of this paper can be extended to such situations, but the focus here is the joint hierarchical inference on $\bm{u}$ and $\delta$. Statistical inference is achieved by Markov chain Monte Carlo sampling from the resulting full posterior on $\bm{u},\delta|\bm{y}$, where by Bayes' rule \[\mathbb{P}(\bm{u},\delta|\bm{y})\propto\mathbb{P}(\bm{y}|\bm{u},\delta)\mathbb{P}(\bm{u}|\delta)\mathbb{P}(\delta)\propto\mathbb{P}(\bm{u}|\bm{y},\delta)\mathbb{P}(\delta|\bm{y}).\] A sufficient condition for this posterior to be well defined is that the prior $\mathbb{P}(\delta)$ is proper. Due to the nature of the pair $(\bm{u},\delta)\in\mathcal{X}\times [0,\infty)$, sampling can be achieved by a two-component Metropolis-within-Gibbs (MwG) algorithm. There is a range of possible parametrizations for this MwG algorithm, perhaps the most natural of which is the so-called \emph{centered algorithm} (CA), \cite{PRS07}. This scheme alternates between simulating from $\bm{u}|\bm{y},\delta$ and $\delta|\bm{y},\bm{u}$ using Metropolis-Hastings steps. Each pair of such simulations constitutes one algorithmic iteration; a prescribed number $k_{max}$ of such iterations is performed. For specific models the simulation from the two conditionals can be done directly, without Metropolis-Hastings, in which case the resultant algorithm is the Gibbs sampler. Note that the model structure implies that $\delta$ and $\bm{y}$ are conditionally independent given $\bm{u}$, that is $\delta|\bm{y},\bm{u}\equiv \delta | \bm{u}$. This is the defining property of the so-called \emph{centered parameterisation} of a hierarchical model, \cite{PRS07}. In practice the inverse problem and the algorithm are discretized and Bayesian inference is implemented in finite dimensions.
We then have two sources of error in the estimated posterior distribution: a) the approximation error due to the discretization of the unknown and the forward problem, that is, the discretization bias, discussed in a general Bayesian (non-hierarchical) inverse problem setting in \cite{CDS10}; b) the Monte Carlo error due to the use of a Markov chain Monte Carlo method to sample the discretized posterior distribution. Assuming that the discretization level of the unknown is $N$, we have that the total error is of the order \begin{align}\label{ch3:eq:toterr}\frac{1}{N^s}+\frac{C(N)}{\sqrt{k_{max}}},\end{align}for some $s>0$ which relates to the quality of approximation of the unknown and forward problem, and $C(N)$ which depends on the mixing properties of the particular algorithm used to probe the posterior. This picture allows the practitioner to get a rough idea of how to distribute the computational budget, by balancing investments in higher discretization levels with investments in longer chains, in order to achieve the desired error level in the estimated posterior distribution. In reality, of course, the constants that multiply these rates will be relevant and hard to determine. There are four principal motivations for formulating the inverse problem and the simulation algorithms in infinite dimensions, while using consistent discretizations (in the sense of numerical analysis, see subsection \ref{ch3:sec:dm}) for the numerical implementation. First, such a formulation is often more faithful to the mathematical model that we wish to learn from the data. Second, it makes the inference comparable across different levels of discretization, so that the estimation of the model with increasing values of $N$ corresponds to a reduction in the discretization bias at the cost of additional computation. Third, the prior distribution on hyperparameters, such as $\delta$, represents the same prior beliefs across different levels of discretization.
On the contrary, when the finite-dimensional model is not a consistent discretization of an infinite-dimensional one, the prior on the hyperparameters might contain an amount of information that depends on the level of discretization chosen; see for example the last paragraph in subsection \ref{fd} below. Finally, practically useful algorithms can be designed for moderate or even small values of $N$ by studying their behaviour at the asymptotic limit $N \to \infty$. In fact, it is usually unrealistic to try to obtain practically useful theoretical results on the convergence of Markov chain Monte Carlo for sampling non-trivial targets, unless such asymptotic regimes are constructed and invoked. This is precisely the case with the Gibbs sampler and related MwG algorithms, which are particularly hard to analyse (see for example \cite{stable}). Similarly, conceiving of Metropolis-Hastings methods in the infinite-dimensional limit leads to algorithms with provably dimension-independent convergence properties, whilst standard methods have convergence properties which degenerate with increased refinement of the discretization; see \cite{CRSW13} and discussion therein. In this paper we investigate theoretically and numerically the performance of MwG algorithms in the asymptotic regime of large $N$. In order to have a mathematically tractable analysis, we focus on linear inverse problems, see subsection \ref{sec:linear}. For these models, and under a commonly adopted prior on $\delta$, the MwG becomes a Gibbs sampler. We establish a result on the mean drift and diffusion of the $\delta$-chain in CA, which has the informal interpretation that $C(N)$ is of the order $N^{1/2}$. 
An immediate consequence of this result is that in order to minimize the total error in (\ref{ch3:eq:toterr}), $k_{max}$ should be scaled like $N^{1+2s}$, whilst for algorithms for which $C(N)$ is uniformly bounded with respect to $N$, the same error level can be achieved by scaling $k_{max}$ like $N^{2s}$; we expect this to be the case for the non-centered algorithm proposed later in this section. We emphasize that although we prove this result for the linear model and for a specific prior on $\delta$, a detailed understanding of the ideas underlying our proofs indicates that most of the details of the model, including linearity, and the prior used on $\delta$, do not really affect the validity of our main finding, that is, that CA deteriorates with $N$. The fundamental reason why this algorithm becomes unusable for large $N$ is an absolute continuity property, a high-level description of which we now provide. Note, however, that proving the result in such generality is definitely beyond the scope of this paper. In the infinite-dimensional limit, $\delta$ is an almost sure property of $\bm{u}|\delta\sim\mathcal{N}(0,\delta^{-1}\bm{\mathcal{C}}_0)$. This means that a single draw of $\bm{u}$ contains infinite information about the value of $\delta$ that generated it. In measure-theoretic terms, it means that the prior measures $\mathbb{P}(\bm{u} | \delta)$ and $\mathbb{P}(\bm{u} | \delta')$ for $\delta \ne \delta'$ are mutually singular, \cite[Remark 2.10]{DP05}. Recalling that we work under assumptions which imply that $\bm{u}|\bm{y},\delta$ is absolutely continuous with respect to $\bm{u}|\delta$, we deduce that $\delta$ is also an almost sure property of $\bm{u}|\bm{y},\delta$. As a result, iterative simulation from the distributions $\bm{u}|\bm{y},\delta$ and $\delta|\bm{y},\bm{u}$ will fail to ever change the initial value of $\delta$.
On the other hand, recall that we also work under assumptions that imply that $\bm{u}$, hence $\delta$, are not perfectly identifiable from the data. Therefore, $\delta|\bm{y}$ is non-degenerate (provided the prior is non-degenerate) and hence any single value of $\delta$ has zero probability under the data. Combining these observations, we have that when iteratively simulating from $\bm{u}|\bm{y},\delta$ and $\delta|\bm{y},\bm{u}$, the values of $\bm{u}$ will be changing along the iterations, but will in fact be sampled from a subspace which has probability zero under $\mathbb{P}(\bm{u} | \bm{y})$. In other words, CA is \emph{reducible} in infinite dimensions and will fail to sample from $\bm{u},\delta | \bm{y}$. Iterative conditional sampling of the finite-dimensional approximation of $\bm{u}, \delta|\bm{y}$ will be able to obtain samples from the (approximated) posterior distribution of $\delta$, but will suffer from increasingly slow \emph{mixing} as the discretization level $N$ increases. In fact, the dependence between the discretized unknown parameter $u$ and $\delta$ increases with $N$, and becomes infinitely strong in the limit $N\to \infty$; it is this dependence that slows down the MwG. In order to alleviate the undesirable effects of the strong dependence between the prior on $\bm{u}$ and $\delta$, using intuition from \cite{PRS07, RS01}, we reparametrize the prior by writing $\bm{u}=\delta^{-\frac12}\bm{v}$ where $\bm{v}\sim\mathcal{N}(0,\bm{\mathcal{C}}_0)$ and $\delta\sim \mathbb{P}(\delta)$. This results in a MwG algorithm which alternates between a step of updating $\bm{v}|\bm{y},\delta$ and a step of updating $\delta|\bm{y},\bm{v}$; this is an example of a \emph{non-centered algorithm} (NCA), \cite{PRS07}. Since $\bm{v}$ and $\delta$ are now a priori independent, and recalling that $\bm{u}$ is not perfectly identified by the data, the dependence of these two parameters is not perfect conditionally on the data.
Thus, the NCA is \emph{irreducible} in infinite dimensions and therefore robust with respect to the discretization level $N$. For NCA we consequently expect that $C(N)$ is uniformly bounded with respect to $N$; we show numerical evidence in support of this statement in section \ref{ch3:sec:sim}. \subsection{The linear case - modelling and notation} \label{sec:linear} We will concentrate on the linear inverse problem case with gamma priors on $\delta$, which has the convenient property of conditional conjugacy. Specifically, we restrict our attention to the case $\bm{\mathcal{G}}=\bm{K}$ where $\bm{K}:\mathcal{X}\to\mathcal{Y}$ is a bounded linear operator. Then, the posterior distribution $\bm{u}|\bm{y},\delta$ is also Gaussian \begin{align*}\bm{u}|\bm{y},\delta\sim\mathcal{N}(\bm{m}_{\lambda,\delta}(\bm{y}),\bm{\mathcal{C}}_{\lambda,\delta});\end{align*} see \cite{AM84,LPS89} where formulae for the posterior mean and covariance operator are provided. When the prior distribution and the noise are specified in terms of precision operators (that is, inverse covariance operators), the following expressions for the posterior mean and precision are known to hold in a range of situations \cite{ALS13, ASZ12}: \begin{align}\bm{\mathcal{C}}_{\lambda,\delta}^{-1}&=\lambda \bm{K}^\ast\bm{\mathcal{C}}_1^{-1}\bm{K}+\delta\bm{\mathcal{C}}_0^{-1},\label{ch3:eq:prec}\\ \bm{\mathcal{C}}_{\lambda,\delta}^{-1}\bm{m}_{\lambda,\delta}(\bm{y})&=\lambda \bm{K}^\ast \bm{\mathcal{C}}_1^{-1}\bm{y}.\label{ch3:eq:mean} \end{align} In order to introduce discretizations and their connection to the continuum limit we need some additional notation; subsection \ref{ch3:sec:dm} gives specific examples of continuum models and their discretizations, where the notation introduced below is put into practice. In order to avoid a notational overload, in the development of the theory we assume that $\mathcal{X}=\mathcal{Y}$ and that the discretization levels of the unknown and the data are the same.
This assumption is not crucial to our results and we refer to the PhD thesis \cite[section 4.5]{SA13} for the more general statements. Furthermore, in section \ref{ch3:sec:sim}, we present numerical examples corresponding to both $\mathcal{Y}=\mathcal{X}$ with an increasing discretization level which is the same for both the unknown and the data, and $\mathcal{Y}=\mathbb{R}^M$ for some fixed $M$, whilst the dimension of the discretization of the unknown is increased. The case $\mathcal{Y}=\mathcal{X}$ arises for example when we observe the whole unknown function subject to blurring and noise, while the case $\mathcal{Y}=\mathbb{R}^M$ can arise when we have available blurred and noisy observations of the unknown at only $M$ spatial locations (see subsection \ref{fd}). The two cases can also arise if we work in the spectral domain, depending on the availability of observations of a full or only a partial spectral expansion of a blurred noisy version of the unknown. We denote by $\pr{\cdot}{\cdot}_{\mathbb{R}^N}$ and $\smnorm{\cdot}_{\mathbb{R}^N}$ the (possibly scaled) Euclidean inner product and norm in $\mathbb{R}^N$ and by $\smnorm{\cdot}_{2,N}$ the induced operator norm for $N\times N$ matrices. Throughout the paper we assume that this norm and inner product on $\mathbb{R}^N$ are scaled so that, formally, the large $N$ limit recovers the norm and inner product on the Hilbert space when, for example, spectral or finite difference approximations are made. Henceforward, we use boldface and regular typeface letters to distinguish between infinite and finite-dimensional objects respectively. 
We assume that we have a way of computing discretizations ${y}\in\mathbb{R}^N$ of the observation $\bm{y}$ and replace the operators $\bm{K}, \bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$ by the $N\times N$ matrices $K, \mathcal{C}_0$ and $\mathcal{C}_1$ respectively, which arise from a consistent, in the sense of numerical analysis, family of approximations of the corresponding operators. In this finite-dimensional setting, the unknown is $u \in \mathbb{R}^N$ and it is assigned a finite-dimensional Gaussian prior, $u|\delta \sim \mathcal{N}(0,\delta^{-1}\mathcal{C}_0)$. The noise distribution has Lebesgue density and the corresponding log-likelihood is quadratic in $u$. Thus, standard Bayesian linear theory (see e.g. \cite{LS72}) implies that the posterior is also Gaussian, $u|y,\delta\sim\mathcal{N}(m_{\lambda,\delta}(y), \mathcal{C}_{\lambda,\delta})$, where $m_{\lambda,\delta}(y)$ and $\mathcal{C}_{\lambda,\delta}^{-1}$ solve equations (\ref{ch3:eq:prec}) and (\ref{ch3:eq:mean}) where the boldface infinite-dimensional quantities are replaced by the corresponding finite-dimensional regular typeface quantities. {Bayesian modelling for finite-dimensional approximations of linear inverse problems using Gaussian priors and noise models was recently carried out in \cite{JB12}. The approach consisted in simultaneous inference for the unknown $u$ and the hyper-parameters $\lambda$ and $\delta$. We will concentrate on simultaneous inference on $u$ and $\delta$ only, since $\lambda$ can be efficiently estimated from a single high dimensional realization of the data, for example using quadratic variation. We again refer the interested reader to the PhD thesis \cite[Chapter 4]{SA13} for theoretical and numerical results on the large $N$ behaviour of $\lambda$ when considered as part of the inference problem; we stress here that for low-dimensional data, the inference on $\lambda$ is non-trivial. 
In \cite{JB12}, a standard conditionally conjugate prior was used for the hyper-parameter, $\delta\sim{\rm {Gamma}}(\upalpha_0,\upbeta_0)$, which in this type of finite-dimensional Gaussian model is known to lead to a gamma conditional posterior distribution, \cite[Chapter 5.2]{BS09}\begin{equation}\label{ch3:eq:int4}\delta|{y},u\sim{\rm {Gamma}}(\upalpha_0+ \frac{N}2, \upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u}_{\mathbb{R}^N}^2).\end{equation}} The inference for this model was carried out using CA, which in this case is a Gibbs sampler (see Algorithm \ref{ch3:algstd} in section \ref{ch3:sec:met} below), since both conditional distributions $u|y,\delta$ and $\delta|y,u$ belong to known parametric families and can be sampled directly. One of the main aims of this paper is to analyze the convergence of this algorithm in the large $N$ limit. We also aim to exhibit, via numerical simulations, the deterioration of the performance of CA in the large $N$ limit, as well as the benefits of reparametrizing the prior and using the corresponding NCA (see Algorithm \ref{ch3:algrep} in section \ref{ch3:sec:met} below). \subsection{Examples of consistent discretizations}\label{ch3:sec:dm} In order to aid the understanding of the paper and in anticipation of the subsequent developments, we briefly describe two methods for passing from the continuum infinite-dimensional model in $\mathcal{X}$ to a discrete model in $\mathbb{R}^N$. Here and elsewhere in the paper, we define a Gaussian white noise in $\mathbb{R}^N$ to be a random variable $\zeta$ given as $\zeta=\sum_{j=1}^{N}\zeta_je_j,$ where $\{e_j\}_{j=1}^N$ is a basis in $\mathbb{R}^N$ which is orthonormal in the possibly scaled Euclidean inner product $\pr{\cdot}{\cdot}_{\mathbb{R}^N}$, and $\{\zeta_j\}_{j=1}^N$ is a sequence of independent standard Gaussian random variables in $\mathbb{R}$.
\subsubsection{Spectral truncation}\label{sp} Let $\{\beta_j\}_{j\in\mathbb{N}}$ be a complete orthonormal basis in $\mathcal{X}$. An element $\bm{w}\in\mathcal{X}$ can be identified with the sequence $\{w_j\}_{j\in\mathbb{N}}$ of coefficients $w_j:=\pr{\bm{w}}{\beta_j}$ and by Parseval's identity the Hilbert space norm of $\bm{w}$ can be replaced by the $\ell_2$-norm of the sequence of coefficients (similarly for the inner product). One can then discretize $\bm{w}$ by replacing it with $w\in{\rm{span}}\{\beta_1,...,\beta_N\}$ which is identified with the truncated sequence of coefficients $\{w_1,...,w_N\}\in\mathbb{R}^N$. The $\ell_2$-norm and inner product are then replaced by the Euclidean norm and inner product. Let $\bm{\Sigma}:\mathcal{X}\to\mathcal{X}$ be a bounded operator which is diagonalizable in $\{\beta_j\}_{j\in\mathbb{N}}$ with eigenvalues $\{\mu_j^{\Sigma}\}_{j\in\mathbb{N}}$. The operator $\bm{\Sigma}$ can be identified with the sequence $\{\mu_j^{\Sigma}\}_{j\in\mathbb{N}}$ and we can discretize $\bm{\Sigma}$ at level $N$ by replacing it with the finite rank operator which is identified with the $N \times N$ diagonal matrix $\Sigma=\rm{diag}(\mu_1^{\Sigma},...,\mu_N^{\Sigma})$. If $\bm{x}\sim\mathcal{N}(0,\bm{\Sigma})$ is a Gaussian random variable in $\mathcal{X}$, we can discretize by replacing $\bm{x}$ with $x\in{\rm{span}}\{\beta_1,...,\beta_N\}$ which is identified with a random variable with distribution $\mathcal{N}(0,\Sigma)$ in $\mathbb{R}^N$. Equivalently, $x$ is identified with $\Sigma^\frac12x_0$ where $x_0$ is a Gaussian white noise in $\mathbb{R}^N$ with respect to the standard orthonormal basis of Euclidean space. For more details see subsection \ref{ch3:ssec:diag}. 
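For illustration, the spectral truncation of a Gaussian draw described above can be sketched as follows; this is a minimal example with a hypothetical eigenvalue decay $\mu_j^\Sigma=j^{-2}$, not tied to any particular model in the paper.

```python
import numpy as np

# Illustrative sketch: discretize N(0, Sigma) at level N by truncating to the
# first N eigenpairs and setting x = Sigma^{1/2} x0, where x0 is a Gaussian
# white noise in R^N with respect to the standard basis.
def spectral_gaussian_draw(mu_sigma, rng):
    x0 = rng.standard_normal(len(mu_sigma))  # white noise coefficients
    return np.sqrt(mu_sigma) * x0            # coefficients of x against {beta_j}

rng = np.random.default_rng(0)
N = 4
mu_sigma = 1.0 / np.arange(1, N + 1) ** 2    # hypothetical decay of eigenvalues
x = spectral_gaussian_draw(mu_sigma, rng)
```

Repeated draws have independent coordinates with variances $\mu_j^\Sigma$, which is exactly the identification of $\mathcal{N}(0,\Sigma)$ with $\Sigma^{\frac12}x_0$ used above.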
\subsubsection{Finite differences approximation}\label{fd} Let $\mathcal{X}=L^2({\rm I}), {\rm I}=(0,1)$, and denote by $\bm{\mathcal{A}}_0$ the negative Laplacian densely defined on $\mathcal{X}$ with domain $H^2({\rm I})\cap H^1_0({\rm I})$, that is with Dirichlet boundary conditions. We discretize the domain ${\rm I}$ using a grid of $N$ equally spaced points $\{\frac{1}{N+1},...,\frac{N}{N+1}\}$; we can restrict our attention to the interior points due to the Dirichlet boundary conditions. We define the inner product and norm in $\mathbb{R}^N$ \begin{align*}\pr{u}{v}_{\mathbb{R}^N}=\frac{1}{N+1}\sum_{j=1}^{N} u_jv_j \quad\mbox{and} \quad\norm{u}_{\mathbb{R}^N}=\bigg(\frac{1}{N+1}\sum_{j=1}^{N} u_j^2\bigg)^\frac12.\end{align*} Note that the natural orthonormal basis on the $N$-dimensional space of grid points with respect to the above norm and inner product is $\{e_j\}_{j=1}^N$, with $e_j=\{\sqrt{N+1}\delta_{ij}\}_{i=1}^N$, where $\delta_{ij}$ is Kronecker's delta. For a function $\bm{u}$ in ${\rm I}$ which vanishes on the boundary, we consider its discretization on the grid, hence $u_j=\bm{u}(\frac{j}{N+1})$. We thus have a discrete approximation of $\mathcal{X}$ with norm and inner product which are the discrete analogues of the $L^2$-norm and inner product. We use finite differences to discretize $\bm{\mathcal{A}}_0$. In particular, we replace $\bm{\mathcal{A}}_0$ by the $N\times N$ matrix \begin{equation*}\mathcal{A}_0=(N+1)^2 \begin{bmatrix} \;\;2 & -1 & \;\;0 & \;\;\hdots & \;\;0 \\ -1& \;\;2 & -1 & \;\;\ddots & \;\;\vdots\\ \;\;0 &\;\;\ddots & \;\;\ddots & \;\;\ddots &\;\;0\\ \;\;\vdots &\;\;\ddots &-1 & \;\;2 &-1 \\ \;\;0&\;\;\hdots&\;\;0&-1&\;\;2 \end{bmatrix}. 
\end{equation*} If $\bm{z}\sim\mathcal{N}(0,\bm{\Sigma})$ is a Gaussian random variable in $\mathcal{X}$ where $\bm{\Sigma}$ is a function of $\bm{\mathcal{A}}_0$ (for example a power), we discretize $\bm{z}$ by considering the $N$-dimensional random variable $z=\Sigma^\frac12z_0$ defined on the grid, where $\Sigma$ is the corresponding function of the matrix $\mathcal{A}_0$ and $z_0$ is a Gaussian white noise with respect to $\{e_j\}_{j=1}^N$. In subsection \ref{ch3:nex2} we consider subsampling at a set of $M$ equally spaced points amongst the $N$ grid points, where $\frac{N+1}{M+1}$ is a nonnegative power of 2. To this end, we define the matrix $P\in\mathbb{R}^{M\times N}$ by \begin{align*}P_{i,j}=\left\{\begin{array}{ll} 1, & \mbox{if} \;\mbox{$j=i\frac{N+1}{M+1}$} \\ 0, &\mbox{otherwise}. \end{array}\right. \end{align*} The matrix $P$ maps the vector of values on the fine grid $\{{\bf u}(\frac{j}{N+1})\}_{j=1}^N$ to the subsampled vector of the values on the coarse grid $\{{\bf u}(\frac{i}{M+1})\}_{i=1}^M$. If we fix $M$ and let $N$ increase, then $P$ corresponds to a discretization of the operator ${\bf P}:C({\rm I})\to\mathbb{R}^M$ defined as $M$ pointwise evaluations at the points $x_i=\frac{i}{M+1}, \;i=1,...,M$, $({\bf Pu})_i={\bf u}(\frac{i}{M+1})$, for any continuous function ${\bf u}$. A formal calculation suggests that the adjoint of the pointwise evaluation operator at $x\in {\rm I}$, is an operator mapping $r\in\mathbb{R}$ to $r\delta_x$, where $\delta_x$ is the Dirac distribution at $x$. This suggests that ${\bf P^\ast}:\mathbb{R}^M\to C({\rm I})$, maps $r\in\mathbb{R}^M$ to the linear combination of Dirac distributions $r_1\delta_{x_1}+...+r_M\delta_{x_M}$. 
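The matrix $\mathcal{A}_0$ and the subsampling matrix $P$ above can be assembled as in the following sketch (small illustrative values $N=7$, $M=3$, so that $\frac{N+1}{M+1}=2$).

```python
import numpy as np

# Sketch of the finite-difference discretization above:
# A0 = (N+1)^2 tridiag(-1, 2, -1), and P picks every (N+1)/(M+1)-th grid value.
def fd_laplacian(N):
    T = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    return (N + 1) ** 2 * T

def subsample_matrix(M, N):
    step = (N + 1) // (M + 1)          # assumed to be a power of 2
    P = np.zeros((M, N))
    for i in range(1, M + 1):
        P[i - 1, i * step - 1] = 1.0   # column j = i(N+1)/(M+1), zero-based j-1
    return P

N, M = 7, 3
A0 = fd_laplacian(N)
P = subsample_matrix(M, N)

x_fine = np.arange(1, N + 1) / (N + 1)
u = x_fine * (1.0 - x_fine)            # u(x) = x(1-x), vanishes on the boundary
```

Applying $P$ to the fine-grid values of $u(x)=x(1-x)$ returns its coarse-grid values, and $\mathcal{A}_0 u$ recovers $-u''=2$ exactly, since the second-difference stencil is exact on quadratics.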
At the same time the matrix $P^T\in\mathbb{R}^{N\times M}$ maps the vector of values on the coarse grid $\{{\bf u}(\frac{i}{M+1})\}_{i=1}^M$ to a vector in $\mathbb{R}^N$ which is zero everywhere except at the $i\frac{N+1}{M+1}$-th components, where it is equal to ${\bf u}(\frac{i}{M+1})$, $i=1,...,M$. Combining, and in order to capture the effect of the Dirac distribution at the locations $\frac{i}{M+1}$, we have that ${\bf P^\ast}$ should be discretized using the matrix $(N+1)P^T$. Note that if $\mathcal{N}(0,\delta^{-1}\mathcal{T}^{-1})$ is used as a prior on $u|\delta$ at level $N$, where $\mathcal{T}$ is the $N\times N$ tridiagonal matrix in the definition of $\mathcal{A}_0$, then this corresponds to having a prior with covariance matrix $(N+1)^2\delta^{-1}\mathcal{A}_0^{-1}$. In particular, if $\delta\sim{\rm {Gamma}}(\upalpha_0,\upbeta_0)$, then we have that $\frac{1}{(N+1)^2}\delta\sim{\rm {Gamma}}(\upalpha_0,(N+1)^2\upbeta_0)$, where in the large $N$ limit the last gamma distribution converges to a point mass at zero, while $\mathcal{A}_0$ approximates $\bm{\mathcal{A}}_0$. This means that as $N\to\infty$ the correlation structure of the prior is described by the limiting $\bm{\mathcal{A}}_0$ but with an amplitude which becomes larger and larger with ever increasing confidence; in other words, as $N$ grows the prior on $u|\delta$ looks increasingly flat. \subsection{Notation} We use subscripts to make explicit the dependence of the $\delta$-chain on the discretization level $N$ and superscripts to denote the iteration number in the Gibbs sampler. For a random variable $x$ which depends on the mutually independent random variables $z_1$ and $z_2$, we use $\mathbb{E}^{z_1}[x]$ to denote the expectation of $x$ with respect to $z_1$ for fixed $z_2$. We use $x_1\stackrel{\mathcal{L}}{=} x_2$ to denote that the random variables $x_1$ and $x_2$ have the same law.
Finally, for two sequences of positive numbers $\{s_j\}$ and $\{t_j\}$, we use the notation $s_j\asymp t_j$ to mean that $s_j/t_j$ is bounded away from zero and infinity uniformly in $j$. \subsection{Paper structure} In the next section we present the centered Gibbs and non-centered MwG algorithms in our assumed linear conjugate setting; we also discuss the option of integrating $u$ out of the data likelihood and the resulting marginal algorithm. In section \ref{ch3:sec:main} we present our main result on the deterioration of the centered Gibbs sampler which holds under certain assumptions made at the discrete level and which are stated explicitly in the same section. Our discrete level assumptions are typically inherited from Assumptions \ref{ch3:infass1} on the underlying infinite-dimensional model also stated in section \ref{ch3:sec:main}, when consistent numerical discretizations are used. In section \ref{ch3:sec:ex} we exhibit three classes of linear inverse problems satisfying our assumptions on the underlying infinite-dimensional model. For the first two of these classes, that is a class of mildly ill-posed and a class of severely ill-posed linear inverse problems both in a simultaneously diagonalizable setting, we also explicitly prove that our discrete level assumptions are inherited from the infinite-dimensional assumptions when discretizing via spectral truncation (see subsections \ref{ch3:ssec:diag} and \ref{ch3:ssec:sev}). In section \ref{ch3:sec:sim} we present numerical evidence supporting our theory and intuition on the deterioration of the centered algorithm and the merits of using the non-centered algorithm, using both spectral truncation (subsection \ref{ch3:nex1}) and discretization via finite differences and subsampling (subsection \ref{ch3:nex2}). 
The main body of the paper ends with concluding remarks in section \ref{ch3:sec:con}, while the Appendix in section \ref{ch3:sec:ap} contains the proof of our main result as well as several technical lemmas. \section{Sampling algorithms}\label{ch3:sec:met} We now present in more detail the different algorithms for sampling $u,\delta|y$ in linear hierarchical inverse problems, and provide a high-level comparison of their relative merits in the asymptotic regime of large $N$. \subsection{Centered Algorithm (CA)}\label{CA} We first provide pseudo-code for the most natural algorithm for sampling $u,\delta|y$ in this linear conjugate setting, that is the centered Gibbs sampler used in \cite{JB12} and discussed in section \ref{ch3:sec:int}. \begin{framed} \begin{algor}\label{ch3:algstd}{\ } \emph{\begin{enumerate} \item[0.] Initialize $\delta^{(0)}$ and set $k=0;$ \item[1.] $u^{(k)}\sim \mathcal{N}\big(m_{\lambda,\delta^{(k)}}({y}),\mathcal{C}_{\lambda,\delta^{(k)}}\big);$ \item[2.] $\delta^{(k+1)}\sim {\rm {Gamma}}(\upalpha_0+\frac{N}2,\upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u^{(k)}}_{\mathbb{R}^N} ^2);$ \item[3.] Set $k=k+1$. If $k<k_{max}$ return to step 1, otherwise stop. \end{enumerate}} \end{algor} \end{framed} \subsection{Non-centered Algorithm (NCA)}\label{NCA} We now formulate in more detail the non-centered algorithm introduced in section \ref{ch3:sec:int}. We define the algorithm in the infinite-dimensional setting, and then discretize it. We reparametrize the prior by writing $\bm{u}=\delta^{-\frac12} \bm{v}$, where now $\bm{v} \sim \mathcal{N}(0,\bm{\mathcal{C}}_0)$, and the observation model becomes \begin{equation}{\bm{y}}=\delta^{-\frac12} \bm{K} \bm{v}+\bm{\eta}\,.\end{equation} The MwG sampler is used to sample $\bm{v},\delta | \bm{y}$ by iteratively sampling from the two conditionals. 
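As a quick sanity check of the reparametrization, the following sketch (with an illustrative $2\times2$ covariance, not taken from the paper) verifies numerically that the non-centered draw $u=\delta^{-\frac12}v$, $v\sim\mathcal{N}(0,\mathcal{C}_0)$, reproduces the centered prior $u|\delta\sim\mathcal{N}(0,\delta^{-1}\mathcal{C}_0)$.

```python
import numpy as np

# Non-centered draws: v ~ N(0, C0) via a Cholesky factor, then u = delta^{-1/2} v.
# The empirical covariance of u should match delta^{-1} C0.
rng = np.random.default_rng(1)
delta = 4.0
C0 = np.array([[2.0, 0.5],
               [0.5, 1.0]])               # illustrative prior covariance
L = np.linalg.cholesky(C0)

v = (L @ rng.standard_normal((2, 50000))).T   # 50000 draws of v ~ N(0, C0)
u = v / np.sqrt(delta)                        # non-centered transformation
```

The same identity underlies the practical implementation of NCA mentioned below: one samples $u|y,\delta$ with existing CA code and sets $v=\delta^{\frac12}u$.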
Recall from the discussion on CA in section \ref{ch3:sec:int}, that $\delta|\bm{y},\bm{u}\equiv \delta|\bm{u}$ and note that $\delta | \bm{y},\bm{v}$, no longer simplifies to $\delta | \bm{v}$, since even conditionally on $\bm{v}$, $\delta$ and $\bm{y}$ are dependent; this is the non-centered property in the hierarchical model, \cite{PRS07}. Additionally, note that a practically useful way to sample from $\bm{v} | \bm{y},\delta$, which recycles available code for CA, is to first sample $\bm{u} | \bm{y},\delta$, as in CA, and then transform $\bm{u}$ to $\bm{v}$ via $\bm{v} = \delta^{\frac12} \bm{u}$. Finally, for reasons of efficiency described below, we prefer to sample $\tau=\delta^{-\frac12}$ instead of $\delta$ directly. In order to obtain the same Bayesian model as the one before the transformation, the prior distribution for $\tau$ should be the one obtained from the prior on $\delta$ after the $1/\sqrt{\delta}$ transformation, that is a square root of an inverse-gamma distribution. Of course, we can deterministically calculate $\delta=1/\tau^2$ after each such update, to get $\delta$-samples and proceed to the next conditional simulation in the algorithm. The finite-dimensional discretization of the algorithm is obtained in the same way as CA. We notice that the log-likelihood is quadratic in $\tau$, for given $v$. We can exploit this property to sample $\tau$ efficiently. The conditional posterior $\tau|{y},v$ is not Gaussian, because the prior on $\tau$ is not Gaussian, hence for our numerical results we replace direct simulation from the conditional with a Metropolis-Hastings step that targets the conditional. Given that the conditional posterior is the product of the prior and the conditional likelihood, and we expect the likelihood to be the dominant term of the two, we use the likelihood, seen as a function of $\tau$, as a proposal density in the Metropolis-Hastings step. 
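A sketch of this Metropolis-Hastings step is given below. The prior density for $\tau$ is the square root of an inverse-gamma distribution, obtained from the gamma prior on $\delta$ by the change of variables $\delta=\tau^{-2}$; the Gaussian proposal parameters $r$ and $q$ are taken as given (computed from the likelihood), and the numerical values used here are hypothetical.

```python
import math
import random

def sqrt_inv_gamma_logpdf(tau, a0, b0):
    """Log-density of tau = delta^{-1/2} when delta ~ Gamma(a0, b0) (rate b0).

    Change of variables: delta = tau^{-2}, |d delta / d tau| = 2 tau^{-3}."""
    if tau <= 0.0:
        return -math.inf
    return (a0 * math.log(b0) - math.lgamma(a0) + math.log(2.0)
            + (-2.0 * a0 - 1.0) * math.log(tau) - b0 / tau ** 2)

def tau_mh_step(tau_curr, r, q, a0, b0, rng):
    """One MH update of tau, proposing from the Gaussian likelihood N(r, q^2)."""
    tau_prop = rng.gauss(r, q)
    if tau_prop <= 0.0:
        return tau_curr                  # negative proposals rejected immediately
    # proposal density = likelihood, so the MH ratio reduces to the prior ratio
    log_a = (sqrt_inv_gamma_logpdf(tau_prop, a0, b0)
             - sqrt_inv_gamma_logpdf(tau_curr, a0, b0))
    if rng.random() < math.exp(min(0.0, log_a)):
        return tau_prop
    return tau_curr

rng = random.Random(0)
a0, b0 = 2.0, 3.0                        # hypothetical hyper-parameters
tau = 1.0
for _ in range(200):
    tau = tau_mh_step(tau, 1.0, 0.2, a0, b0, rng)
delta = 1.0 / tau ** 2                   # recover delta-samples deterministically
```

By construction the acceptance ratio involves only the prior density, matching the description above.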
The likelihood as a function of $\tau$ is Gaussian $\mathcal{N}(r_{\lambda,v},q_{\lambda,v}^2)$, where \begin{equation}\frac{1}{q_{\lambda,v}^2}=\lambda\norm{\mathcal{C}_1^{-\frac12} Kv}_{\mathbb{R}^N}^2, \quad\quad\frac{r_{\lambda,v}}{q_{\lambda,v}^2}=\lambda\pr{ K^\ast\mathcal{C}_1^{-1}{y}}{v}_{\mathbb{R}^N},\end{equation} hence easy to simulate from. Proposals generated in this way are immediately rejected if negative, and if not they are accepted according to the Metropolis-Hastings ratio that by construction only involves the prior density. Note that the same complication would arise had we chosen to work with $\delta$ instead of $\tau$, since $\delta|y,v$ is also not a known distribution. The difference in that case is that there is no apparent good proposal density for the Metropolis-Hastings step, since the likelihood is not a known distribution as a function of $\delta$. We use the following Gibbs sampler, where $p(\cdot)$ denotes the density of the square root of the inverse-gamma distribution with parameters $\upalpha_0, \upbeta_0$: \begin{framed} \begin{algor}\label{ch3:algrep}{\ }\emph{\begin{enumerate} \item[0)]Initialize $\tau^{(0)}$, calculate $\delta^{(0)}=1/(\tau^{(0)})^2$ and set $k=0;$ \item[1)] $u^{(k)}\sim\mathcal{N}\big(m_{\si{k},\delta^{(k)}}({y}),\mathcal{C}_{\si{k},\delta^{(k)}}\big)$;\\ $v^{(k)}=(\de{k})^{\frac12}u^{(k)}$; \item[2)] propose $\tau\sim\mathcal{N}(r_{\si{k},v^{(k)}},q_{\si{k},v^{(k)}}^2)$;\\ if $\tau\leq0$ reject; if $\tau>0$ accept with probability $\frac{p(\tau)}{p(\tau^{(k)})}\wedge1$ otherwise reject;\\ if $\tau$ accepted set $\tau^{(k+1)}=\tau$, otherwise set $\tau^{(k+1)}=\tau^{(k)}$;\\ $\de{k+1}=1/(\tau^{(k+1)})^2$; \item[3)]Set $k=k+1$. If $k<k_{max}$ return to step 1, otherwise stop. 
\end{enumerate}} \end{algor} \end{framed} \subsection{Marginal Algorithm (MA)}\label{MA} Given that $\bm{u}$ (hence $\bm{K}\bm{u}$) and $\bm{\eta}$ are independent Gaussian random variables, the marginal distribution of the data $\bm{y}$ given $\delta$ is also Gaussian, \[\bm{y}|\delta\sim\mathcal{N}(0,\delta^{-1}\bm{K}\bm{\mathcal{C}}_0\bm{K}^\ast+\lambda^{-1}\bm{\mathcal{C}}_1)\,. \] One can then use Bayes' theorem to get that \[\mathbb{P}(\delta|\bm{y})\propto \mathbb{P}(\bm{y}|\delta)\mathbb{P}(\delta).\] This distribution can be sampled using the random walk Metropolis (RWM) algorithm. In order to get samples from $\bm{u},\delta|\bm{y}$, we alternate between drawing $\delta|\bm{y}$ and updating $\bm{u}|\bm{y},\delta$. Furthermore, it is beneficial to the performance of the RWM to sample $\log(\delta)|\bm{y}$ instead of $\delta|\bm{y}$; of course, samples from $\log(\delta)|\bm{y}$ can be deterministically transformed to samples from $\delta|\bm{y}$. The resultant algorithm is what we call the \emph{marginal algorithm} (MA). MA at the discrete level is as follows, where $p(\cdot)$ now denotes the density of the logarithm of a gamma distribution with parameters $\upalpha_0,\upbeta_0$ and $\rho=\log(\delta)$: \begin{framed} \begin{algor}\label{ch3:algmar}{\ }\emph{\begin{enumerate} \item[0)]Initialize $\rho^{(0)}$ and set $k=0;$ \item[1)] $u^{(k)}\sim\mathcal{N}\big(m_{\si{k},\de{k}}({y}),\mathcal{C}_{\si{k},\de{k}}\big);$ \item[2)] propose $\rho\sim\mathcal{N}(\rho^{(k)},s^2)$;\\ accept with probability $\frac{\mathbb{P}(y|\exp({\rho}))p(\rho)}{\mathbb{P}\left(y|\exp(\rho^{(k)})\right)p(\rho^{(k)})}\wedge1$ otherwise reject;\\ if $\rho$ accepted set $\rho^{(k+1)}=\rho$, otherwise set $\rho^{(k+1)}=\rho^{(k)}$;\\ set $\de{k+1}=\exp(\rho^{(k+1)})$; \item[3)]Set $k=k+1$. If $k<k_{max}$ return to step 1, otherwise stop.
\end{enumerate}} \end{algor} \end{framed} We follow the rule-of-thumb proposed in \cite{GGR96} and choose the RWM proposal variance $s^2$ to achieve an acceptance probability around 44\%. \subsection{Contrasting the methods}\label{contr} As discussed in section \ref{ch3:sec:int}, and as is formally shown in section \ref{ch3:sec:main}, CA will deteriorate as the discretization level of the unknown, $N,$ becomes larger. To get a first understanding of this phenomenon in the linear-conjugate setting, note that the ${\rm {Gamma}}(\upalpha_0,\upbeta_0)$ distribution has mean and variance $\upalpha_0\upbeta_0^{-1}$ and $\upalpha_0\upbeta_0^{-2}$ respectively. Hence, for any $\mu>0$, as $N$ grows, a random variable with distribution ${\rm {Gamma}}(\upalpha_0+\frac{N}2,\upbeta_0+\mu\frac{N}2)$ behaves like a Dirac distribution centred on $\mu^{-1}$. Furthermore, we will show that, because of the consistency of the approximation of the operators defining the Bayesian inverse problem, together with scaling of the norms on $\mathbb{R}^N$ to reproduce the Hilbert space norm limit, it is natural to assume that \[\norm{\mathcal{C}_0^{-\frac12}u^{(k)}}^2_{\mathbb{R}^N}\simeq (\de{k})^{-1}N.\] Using the limiting behaviour of the gamma distribution described above, this means that as the dimension $N$ increases, we have $\de{k+1}\simeq \de{k}$, hence the $\delta$-chain makes very small moves and slows down. In contrast, both conditionals $\bm{u}|\bm{y},\delta$ and $\delta|\bm{y},\bm{v}$ sampled in NCA are non-degenerate even in the infinite-dimensional limit. Our numerical results show that this reparametrization is indeed robust with respect to the increase in dimension (see section \ref{ch3:sec:sim}), although establishing formally that a spectral gap exists for NCA in this limit is beyond the scope of this paper. Similarly, both distributions $\bm{u}|\bm{y},\delta$ and $\delta|\bm{y}$ sampled in MA are non-degenerate in the continuum limit, hence MA is robust with respect to $N$.
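The degeneracy of the gamma conditional described above is easy to observe numerically; the following sketch (with hypothetical values of $\upalpha_0$, $\upbeta_0$ and $\mu$) shows the spread of ${\rm {Gamma}}(\upalpha_0+\frac{N}2,\upbeta_0+\mu\frac{N}2)$ draws collapsing onto $\mu^{-1}$ as $N$ grows.

```python
import random
import statistics

# Gamma(alpha0 + N/2, beta0 + mu*N/2) in the rate parametrization; its standard
# deviation is roughly sqrt(2/N)/mu, so the draws concentrate on 1/mu.
def gamma_draws(alpha0, beta0, mu, N, k, rng):
    shape = alpha0 + N / 2.0
    rate = beta0 + mu * N / 2.0
    return [rng.gammavariate(shape, 1.0 / rate) for _ in range(k)]  # scale = 1/rate

rng = random.Random(0)
alpha0, beta0, mu = 1.0, 1.0, 2.0
spread = {N: statistics.stdev(gamma_draws(alpha0, beta0, mu, N, 2000, rng))
          for N in (10, 1000, 100000)}
```

The shrinking spread is precisely why consecutive $\delta$-draws in CA barely move for large $N$.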
Moreover, MA is optimal with respect to the dependence between the two components of the algorithm, since the $\delta$-chain is independent of the $u$-draws; there is a loss of efficiency due to the use of RWM to sample $\delta|y$, but provided the proposal variance is optimally tuned, this will only have a minor effect on the performance of MA. For these reasons, in section \ref{ch3:sec:sim} we use the optimally tuned MA as the gold standard with which we compare the performance of CA and NCA. Nevertheless, we stress here that: \begin{enumerate} \item[i)] MA requires at each iteration the potentially computationally expensive calculation of the square root and the determinant of the precision matrix of $y|\delta$. This makes the implementation of MA in large-scale linear inverse problems less straightforward compared to CA and NCA. \item[ii)] Even though we view MA as a gold, albeit potentially expensive, standard in our linear setting, for nonlinear problems MA is not available. In contrast, CA and NCA are straightforward to extend to the nonlinear case (see Section \ref{ch3:sec:con}); this is one of the principal motivations for studying the optimal parametrization of Gibbs sampling in this context.\end{enumerate} \section{Theory}\label{ch3:sec:main} In this section we present our theory concerning the behaviour of CA as the discretization level increases, in the linear inverse problem setting introduced in subsection \ref{sec:linear}. We first formulate our assumptions on the underlying infinite-dimensional model as well as a corresponding set of discrete-level assumptions, before presenting our main result on the large $N$ behaviour of Algorithm \ref{ch3:algstd}.
\subsection{Assumptions} We work under the following assumptions on the underlying infinite-dimensional linear inverse problem: \begin{assumptions}\label{ch3:infass1}{\ } \begin{enumerate} \item[i)] For any $\lambda,\delta>0$, we have $\bm{m}_{\lambda,\delta}(\bm{y})\in\mathcal{D}(\bm{\mathcal{C}}_0^{-\frac12})$ $\bm{y}$-almost surely; that is, the posterior mean belongs to the Cameron-Martin space of the prior on $\bm{u}| \delta;$ \item[ii)] $\bm{\mathcal{C}}_1^{-\frac12}\bm{K}\bm{\mathcal{C}}_0 \bm{K}^\ast\bm{\mathcal{C}}_1^{-\frac12}$ is trace-class; that is, the prior is sufficiently regularizing. \end{enumerate} \end{assumptions} Assumption \ref{ch3:infass1}(ii) implies the second and third conditions of the Feldman-Hajek theorem \cite[Theorem 2.23]{DZ92}. Together with Assumption \ref{ch3:infass1}(i), they thus imply that $\bm{y}$-almost surely $\bm{u}|\bm{y}, \delta$ is absolutely continuous with respect to $\bm{u}|\delta$ and hence the infinite-dimensional intuition on the behaviour of CA described in section \ref{ch3:sec:int} applies. In the following, we assume that $\mathcal{C}_0$ and $\mathcal{C}_1$ are positive definite $N\times N$ matrices which are the discretizations of the positive definite operators $\bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$ respectively, and the $N\times N$ matrix $K$ is the discretization of the bounded operator $\bm{K}$. 
Our analysis of the $\delta$-chain is valid under the following assumptions at the discrete level: \begin{assumptions}\label{ch3:ass1} {\ } \begin{enumerate} \item[i)] For almost all data $\bm{y}$, for any $\lambda,\delta>0$, there exists a constant $c_1=c_1(\bm{y};\lambda, \delta)\geq0$, independent of $N$, such that \begin{align*}\norm{\mathcal{C}_{0}^{-\frac12}m_{\lambda,\delta} ({y})}_{\mathbb{R}^N}\leq c_1;\end{align*} \item[ii)] there exists a constant $c_2\geq0$, independent of $N$ and $\bm{y}$, such that \begin{align*} {\rm {Tr}}(\mathcal{C}_1^{-\frac12}K\mathcal{C}_{0}K^\ast\mathcal{C}_1^{-\frac12})\leq c_2.\end{align*} \end{enumerate} \end{assumptions} These assumptions are typically inherited from Assumptions \ref{ch3:infass1} when consistent discretizations are used; see subsection \ref{ch3:sec:dm} and section \ref{ch3:sec:ex} for more details and examples. \subsection{Main Result} We now present our main result on the behaviour of Algorithm \ref{ch3:algstd} in the asymptotic regime of large $N$. We start by noting that the two steps of updating $u|{y},\delta$ and $\delta|{y},u$ in Algorithm \ref{ch3:algstd} can be compressed into a single update of $\delta$ which absorbs the noise from the $u$ update. Indeed, we denote by $\de{k+1}_N$ the $\delta$-draw in the $(k+1)$-th iteration of the Gibbs sampler where the problem is discretized in $\mathbb{R}^N$. This draw is made using the previous draw of $u|{y},\delta$, which, assuming that $\de{k}_N=\delta$, is denoted by $u_{\delta}^{(k)}$ and can be written as \begin{equation}\label{ch3:eq:uu} u_{\delta}^{(k)}=m_{\lambda,\delta}({y})+\mathcal{C}_{\lambda,\delta}^{\frac12}\zeta, \end{equation}where $\zeta$ is an $N$-dimensional Gaussian white noise representing the fluctuation in step 1, and $ \mathcal{C}_{\lambda,\delta}, m_{\lambda,\delta}$ are given by the formulae (\ref{ch3:eq:prec}), (\ref{ch3:eq:mean}) respectively.
Hence we have \begin{align}\label{ch3:eq:dd}\delta_N^{(k+1)}\sim{\rm {Gamma}}(\upalpha_0+\frac{N}2,\upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12} u_{\delta}^{(k)}}_{\mathbb{R}^N}^2).\end{align} Assumptions \ref{ch3:ass1} ensure that the squared norm appearing in (\ref{ch3:eq:dd}) behaves like $ \delta^{-1}N$, as assumed in the discrete level intuition discussed in subsection \ref{contr}. This is made precise in the following lemma which forms the backbone of our analysis and is proved in subsection \ref{ch3:ssec:ap1}. \begin{lemma}\label{ch3:lem1} Under Assumptions \ref{ch3:ass1}, for any $\lambda,\delta>0$ we have, \begin{align}\label{ch3:eq:denom} \upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u^{(k)}_{\delta}}^2_{\mathbb{R}^N}=\delta^{-1}\frac{N}{2}+\delta^{-1}\sqrt{\frac{N}{2}} W_{1,N}+F_N(\delta), \end{align} where i) $W_{1,N}$ only depends on the white noise $\zeta$ in (\ref{ch3:eq:uu}), has mean zero and variance one, higher order moments which are bounded uniformly in $N$, and converges weakly to a standard normal random variable as $N\to\infty;$ ii) $F_N(\delta)$ depends on the data ${y}$ and $\bm{y}$-almost surely has finite moments of all positive orders uniformly in $N$ (where the expectation is taken with respect to $\zeta$). \end{lemma} Combining with the scaling property of the gamma distribution as in the intuition described in subsection \ref{contr}, we show that as the dimension increases the $\delta$-chain makes increasingly smaller steps, and quantify the scaling of this slowing down. Indeed, we prove that for large $N$ the $ \delta$-chain makes moves which on average are of order $N^{-1}$ with fluctuations of order $N^{- \frac12}$. As a result, it takes $\mathcal{O}(N)$ steps for the $\delta$-chain to move by $\mathcal{O}(1)$. \begin{theorem}\label{ch3:thm1} Let $\lambda>0$ and consider Algorithm \ref{ch3:algstd} under Assumptions \ref{ch3:ass1}. 
In the limit $N\to\infty$, we have almost surely with respect to $\bm{y}$ and where all the expectations are taken with respect to the randomness in the algorithm:\begin{enumerate}\item[i)]the expected step in the $\delta$-chain scales like $\frac{2}N$, that is, for any $\delta>0,$ \begin{align*}\frac{N}2\mathbb{E}\left[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta\right]=(\upalpha_0+1)\delta-f_N(\delta;{y}) \delta^2+\mathcal{O}(N^{-\frac12}),\end{align*} where $f_N(\delta;{y})$ is bounded uniformly in $N$. In particular, if there exists $f(\delta;\bm{y})\in\mathbb{R}$ such that $f_N(\delta;{y})\to f(\delta;\bm{y})$, then \begin{align*} \frac{N}2\mathbb{E}\left[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta\right]=(\upalpha_0+1)\delta-f(\delta;\bm{y})\delta^2+\smallO{1}; \end{align*} \item[ii)]the variance of the step also scales like $\frac{2}N$ and in particular, for any $\delta>0,$ \begin{align*}\frac{N}2\mbox{Var}\left[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta\right]=2\delta^2+\mathcal{O}(N^{-\frac12}).\end{align*} \end{enumerate} \end{theorem} \begin{remark}\label{ch3:rem1}{\ } \begin{enumerate} \item[i)]The proof of Theorem \ref{ch3:thm1} can be found in subsection \ref{ch3:ssec:dproof} in the Appendix. Equation (\ref{ch3:eq:sp1}) is a key identity, as it very clearly separates the three sources of fluctuation in the draw $\de{k+1}_N$, that is, the fluctuation in the Gaussian-draw $u|y,\delta$, the fluctuation in the gamma-draw $\delta|y,u$ and the fluctuation in the data. \item[ii)]$f_N(\delta;{y}):=\mathbb{E}^\zeta[F_N(\delta;{y})]$, where $F_N$ is defined in the proof of Lemma \ref{ch3:lem1}. The assumption on the convergence of $f_N(\delta;{y})$ is trivially satisfied under Assumptions \ref{ch3:ass1}, if the discretization scheme used is such that if the vector $x\in\mathbb{R}^N$ and the $N\times N$ matrix $T$ are the discretizations at level $N$ of $\bm{x}\in\mathcal{X}$ and the linear operator $\bm{T}$ respectively, then $\norm{T x}_{\mathbb{R}^N}$ is a non-decreasing sequence.
This is the case for example in spectral truncation methods, when $\bm{T}$ is diagonalizable in the orthonormal basis used (see subsection \ref{sp}). \end{enumerate} \end{remark} Theorem \ref{ch3:thm1} suggests that \begin{align}\label{ch3:eq:em} \de{k+1}_N-\de{k}_N\approx \frac2N\Big((\upalpha_0+1)\de{k}_N-f_N(\de{k}_N;y)(\de{k}_N)^2\Big)+\frac{2\de{k}_N}{\sqrt{N}}\Xi, \end{align} where $\Xi$ is a real random variable with mean zero and variance one. In the case where $f_N$ has a limit, the last expression looks like the Euler-Maruyama discretization of the stochastic differential equation \begin{align} \label{ch3:eq:dl}d\delta=\big(\upalpha_0+1-f(\delta;\bm{y})\delta\big)\delta dt+\sqrt{2}\delta dW,\end{align} where $W=W(t)$ is a standard Brownian motion, with time step $\frac2N$. This is another manifestation of the fact that it takes $\mathcal{O}(N)$ steps for the $\delta$-chain to make a move of $\mathcal{O}(1)$ size. Note that \eqref{ch3:eq:em} implies that the \emph{expected square jumping distance} of the Markov chain for $\delta$ generated by CA is $\mathcal{O}(1/N)$. Recall (see for example \cite{SR09} for a recent account) that this distance is defined as $\mathbb{E}[(\de{k+1}_N-\de{k}_N)^2]$, where $\de{k}_N$ is drawn from the stationary distribution. Hence, it is the expected squared step of the chain in stationarity. It is easy to check that it equals $2 \mbox{Var} (\de{k}_N) (1-Corr(\de{k}_N, \de{k+1}_N))$, where again all quantities are computed in stationarity. Although the expected square jumping distance is a sensible and practically useful measure of efficiency of a Markov chain, there is no explicit result that links it to the variance of Monte Carlo averages formed by using the output of the chain. This variance will not only depend on autocorrelation at other lags, but also on the function being averaged. 
Still, it gives a rough idea: if the autocorrelation function associated with the identity function is geometrically decaying, with lag-1 autocorrelation $\rho_N$, then the variance of the sample average of $k_{max}$ values $\de{k}_N$ in stationarity will be $\mbox{Var} (\de{k}_N) (1+\rho_N)/\big((1-\rho_N)k_{max}\big)$. The point here is that $\rho_N$ behaves like $1-c/N$, for some $c$, but $\mbox{Var} (\de{k}_N)$ is $\mathcal{O}(1)$. Hence, the Monte Carlo error associated with $k_{max}$ draws in stationarity is $\mathcal{O}(\sqrt{N / k_{max}})$. \section{Examples satisfying our assumptions}\label{ch3:sec:ex} We now present three families of linear inverse problems satisfying Assumptions \ref{ch3:infass1} on the underlying continuum model: a family of mildly ill-posed inverse problems, where the operators defining the problem are simultaneously diagonalizable, \cite{KVZ12}; a family of severely ill-posed inverse problems again in a diagonal setting, \cite{KVZ13, ASZ12}; and a family of mildly ill-posed inverse problems in a nondiagonal setting, \cite{ALS13}. We expect that Assumptions \ref{ch3:ass1} will be satisfied by consistent discretizations of these models. Indeed, we show that our discrete level assumptions are satisfied if we discretize the two diagonal examples using spectral truncation (see subsection \ref{sp}). Furthermore, in section \ref{ch3:sec:sim} we provide numerical evidence that our ideas also apply in nondiagonal settings and when using other discretization schemes, in particular discretization via finite difference approximations (see subsection \ref{fd}). We do not prove that discretization via finite differences satisfies our discrete level assumptions, as it is beyond the scope of this paper; we expect, however, this to be the case.
\subsection{Linear mildly ill-posed simultaneously diagonalizable inverse problem}\label{ch3:ssec:diag} We consider the linear inverse problem setting of subsection \ref{sec:linear}, where $\bm{K},\bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$ commute with each other and $\bm{K}^\ast \bm{K}, \bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$ are simultaneously diagonalizable with common complete orthonormal eigenbasis $\{\beta_j\}_{j\in\mathbb{N}}$. Note that we do not assume that $\bm{K}$ and $\bm{\mathcal{C}}_1$ are compact, but we do assume that $\bm{K}^\ast \bm{K}$ and $\bm{\mathcal{C}}_1$ are both diagonalizable in $\{\beta_j\}_{j\in\mathbb{N}}$; in particular, we allow for $\bm{K}$ and $\bm{\mathcal{C}}_1$ to be the identity. For any $\bm{w}\in\mathcal{X}$, let $w_j:=\pr{\bm{w}}{\beta_j}$. Let $\bm{\Sigma}$ be a positive definite and trace class operator in $\mathcal{X}$ which is diagonalizable in the orthonormal basis $\{\beta_j\}_{j\in\mathbb{N}}$, with eigenvalues $\{\mu^\Sigma_j\}_{j\in\mathbb{N}}$. Then for any $\bm{\rho}\in\mathcal{X}$, we can write a draw $\bm{x}\sim\mathcal{N}(\bm{\rho},\bm{\Sigma})$ as \begin{equation*} \bm{x}=\bm{\rho}+\sum_{j=1}^{\infty}\sqrt{\mu^\Sigma_j}\gamma_j\beta_j,\end{equation*} where $\gamma_j$ are independent standard normal random variables in $\mathbb{R}$; this is the Karhunen-Loeve expansion \cite[Chapter III.3]{RA90}. In fact, the Karhunen-Loeve expansion makes sense even if $\mu^\Sigma_j$ are not summable, that is if $\bm{\Sigma}$ is not trace class in $\mathcal{X}$; the expansion then defines a Gaussian measure in a bigger space than $\mathcal{X}$ in which $\bm{\Sigma}$ is trace class. This expansion suggests that since we are in a simultaneously diagonalizable setting we can use the Parseval identity and work entirely in the frequency domain as in subsection \ref{sp}.
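In coefficient space the Karhunen-Loeve expansion gives an immediate sampling routine: truncate at $N$ modes and return the coefficients $x_j=\rho_j+\sqrt{\mu^\Sigma_j}\,\gamma_j$. A minimal sketch (function and argument names are illustrative):

```python
import math
import random

def sample_kl(mu, rho=None, seed=0):
    """Draw x ~ N(rho, Sigma) via a truncated Karhunen-Loeve expansion,
    working entirely in coefficient space: x_j = rho_j + sqrt(mu_j)*gamma_j
    with gamma_j i.i.d. standard normal.  `mu` holds the first N eigenvalues
    of Sigma in the basis {beta_j}; the coefficients x_j are returned."""
    rng = random.Random(seed)
    if rho is None:
        rho = [0.0] * len(mu)
    return [r + math.sqrt(m) * rng.gauss(0.0, 1.0) for r, m in zip(rho, mu)]
```

For instance, `sample_kl([j ** -3.0 for j in range(1, N + 1)])` draws the first $N$ coefficients of a mean-zero Gaussian with eigenvalues $j^{-3}$.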
Indeed, we identify an element $\bm{w}\in\mathcal{X}$ with the sequence of coefficients $\{w_j\}_{j\in\mathbb{N}}$, and the norm and inner product in $\mathcal{X}$ with the $\ell^2$-norm and inner product. Furthermore, we identify the operators $\bm{\mathcal{C}}_0, \bm{\mathcal{C}}_1$ and $\bm{K}$ with the sequences of their eigenvalues $\{\mu^{\mathcal{C}_0}_j\}_{j\in\mathbb{N}}, \{\mu^{\mathcal{C}_1}_j\}_{j\in\mathbb{N}}$ and $\{\mu^{K}_j\}_{j\in\mathbb{N}}$ respectively. Algebraic operations on the operators $\bm{\mathcal{C}}_0, \bm{\mathcal{C}}_1, \bm{K}$ are defined through the corresponding operations on the respective sequences. We make the following assumptions on the spectral decay of $\bm{K}, \bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$: \begin{assumptions}\label{ch3:decass} The eigenvalues of $\bm{K}^\ast \bm{K}, \bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$, denoted by $(\mu^{K}_j)^2, \mu^{\mathcal{C}_0}_j, \mu^{\mathcal{C}_1}_j$, respectively, satisfy \addtocounter{footnote}{0}\footnote{$\alpha,\beta$ not to be confused with $\upalpha, \upbeta$ used respectively as shape and rate parameters of the gamma distribution.}\addtocounter{footnote}{-1}\begin{enumerate} \item[-]$(\mu^{K}_j)^2\asymp j^{-4\ell}, \;\ell\geq0;$ \item[-]$\mu^{\mathcal{C}_0}_j\asymp j^{-2\alpha}, \;\alpha>\frac12;$ \item[-]$\mu^{\mathcal{C}_1}_j\asymp j^{-2\beta}, \;\beta\geq0$. \end{enumerate} \end{assumptions} Let $\nu$ be the joint distribution of $\bm{y}$ and $\bm{u}$, where $\bm{u}|\delta\sim\mathcal{N}(0,\delta^{-1}\bm{\mathcal{C}}_0)$ and $\bm{y}|\bm{u},\delta\sim\mathcal{N}(\bm{K} \bm{u},\lambda^{-1}\bm{\mathcal{C}}_1)$.
Then in this diagonal case, it is straightforward to show in the infinite-dimensional setting that the conditional posterior $\bm{u}|\bm{y},\delta$ is $\nu$-almost surely Gaussian, $\mathcal{N}(\bm{m}_{\lambda,\delta}(\bm{y}),\bm{\mathcal{C}}_{\lambda,\delta})$, where $\bm{\mathcal{C}}_{\lambda,\delta}$ and $\bm{m} _{\lambda,\delta}(\bm{y})$ satisfy (\ref{ch3:eq:prec}) and (\ref{ch3:eq:mean}) respectively. We make the following additional assumption: \begin{assumption}\label{ch3:dc:ass} The parameters $\alpha,\beta, \ell$ in Assumptions \ref{ch3:decass} satisfy $2\alpha+4\ell-2\beta>1$. \end{assumption} We show that under Assumptions \ref{ch3:decass} and \ref{ch3:dc:ass}, Assumptions \ref{ch3:infass1} on the underlying infinite-dimensional model are satisfied $\nu$-almost surely. Without loss of generality assume $\delta=\lambda=1$. For Assumption \ref{ch3:infass1}(i), we have using the Karhunen-Loeve expansion and Assumptions \ref{ch3:decass},\begin{align*} \mathbb{E}^\nu\norm{\bm{\mathcal{C}}_0^{-\frac12}\bm{m}(\bm{y})}^2\leq c\mathbb{E}^\nu\sum_{j=1}^{\infty}\frac{j^{2\alpha-4\ell+4\beta}}{(j^{-4\ell+2\beta}+j^{2\alpha})^2}(j^{-2\ell-\alpha}\zeta_j+j^{-\beta}\xi_j)^2, \end{align*} where $\{\zeta_j\}_{j\in\mathbb{N}}, \{\xi_j\}_{j\in\mathbb{N}}$ are two independent sequences of independent standard normal random variables. The assumption $2\alpha+4\ell-2\beta>1$ ensures that the right hand side is finite, hence $\bm{m}(\bm{y})\in\mathcal{D}(\bm{\mathcal{C}}_0^{-\frac12}) \;\nu$-almost surely. For Assumption \ref{ch3:infass1}(ii), the operator $\bm{\mathcal{C}}_1^{-\frac12}\bm{K}\bm{\mathcal{C}}_0 \bm{K}^\ast\bm{\mathcal{C}}_1^{-\frac12}$ has eigenvalues that decay like $j^{-2\alpha-4\ell+2\beta}$ and hence are summable by Assumption \ref{ch3:dc:ass}.
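In this diagonal case, the conditional posterior $\bm{u}|\bm{y},\delta$ can also be computed mode by mode. The sketch below uses the standard coefficient-wise conjugate formulas, which we assume agree with (\ref{ch3:eq:prec}) and (\ref{ch3:eq:mean}); names and defaults are illustrative:

```python
def diagonal_posterior(y, muK, muC0, muC1, lam=1.0, delta=1.0):
    """Coefficient-wise Gaussian conditional posterior u | y, delta in the
    simultaneously diagonalizable setting: for mode j the posterior precision
    is delta/muC0_j + lam*muK_j**2/muC1_j, and the mean weights the data by
    lam*muK_j/muC1_j divided by that precision (standard conjugate formulas,
    assumed to match the referenced precision and mean equations)."""
    mean, var = [], []
    for yj, kj, c0j, c1j in zip(y, muK, muC0, muC1):
        prec = delta / c0j + lam * kj * kj / c1j
        var.append(1.0 / prec)
        mean.append((lam * kj / c1j) * yj / prec)
    return mean, var
```

As sanity checks, an essentially flat prior ($\mu^{\mathcal{C}_0}_j$ huge) reproduces the data in the mean, while $\mu^K_j=0$ returns the prior mean and variance.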
We define the Sobolev-like spaces $\mathcal{H}^t, t\in\mathbb{R}$: for $t\geq0$, define \begin{equation*}\mathcal{H}^t:=\{\bm{u}\in\mathcal{X}: \norm{\bm{u}}^2_{\mathcal{H}^t}:=\sum_{j=1}^{\infty} j^{2t}u_j^2<\infty\},\end{equation*} and for $t<0$, $\mathcal{H}^{t}:=(\mathcal{H}^{-t})^\ast$. We assume the data to be of the following form: \begin{assumption}\label{ch3:ass22} $\bm{y}=\bm{K}{\bm{u}^\dagger}+\lambda^{-\frac12}\bm{\mathcal{C}}_1^{\frac12}\bm{\xi}$, where ${\bm{u}^\dagger}\in \mathcal{H}^{\beta-2\ell}$ is the underlying true solution and $\bm{\xi}$ is a Gaussian white noise, $\bm{\xi}\sim\mathcal{N}(0,\bm{I})$. \end{assumption} Note that under Assumptions \ref{ch3:decass}, \ref{ch3:dc:ass} and \ref{ch3:ass22}, it is straightforward to check that Assumption \ref{ch3:infass1}(i) is also satisfied $\bm{\xi}$-almost surely. Indeed, using the Karhunen-Loeve expansion we have\begin{align*}\mathbb{E}\norm{\bm{\mathcal{C}}_0^{-\frac12}\bm{m}(\bm{y})}^2\leq c\mathbb{E}\sum_{j=1}^{\infty}\frac{j^{2\alpha-4\ell+4\beta}}{(j^{-4\ell+2\beta}+j^{2\alpha})^2}(j^{-2\ell}u^\dagger_j+\lambda^{-\frac12}j^{-\beta}\xi_j)^2,\end{align*} where $\{\xi_j\}_{j\in\mathbb{N}}$ is a sequence of independent standard normal random variables. The assumption $2\alpha+4\ell-2\beta>1$ together with ${\bm{u}^\dagger}\in \mathcal{H}^{\beta-2\ell}$ ensures that the right hand side is finite. Assumption \ref{ch3:infass1}(ii) is independent of $\bm{y}$, hence also holds by our previous considerations. A natural way to discretize random draws in this setup is by truncating the Karhunen-Loeve expansion, which is equivalent to the spectral truncation in subsection \ref{sp}. We assume we have discrete data of the form \begin{equation*}{y}=K{u^\dagger}+\lambda^{-\frac12}\mathcal{C}_1^\frac12\xi,\end{equation*} where $K, \mathcal{C}_1, {u^\dagger}$ and $\xi$ are discretized as in subsection \ref{sp}. The prior is also discretized using spectral truncation, $u\sim\mathcal{N}(0,\mathcal{C}_0)$.
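For concreteness, the discrete data and eigenvalue sequences can be generated directly in coefficient space. A sketch, taking the decay rates of Assumptions \ref{ch3:decass} as exact power laws and an illustrative true solution (for the default parameters the chosen ${u^\dagger}$ lies in $\mathcal{H}^{\beta-2\ell}$; any such coefficient sequence would do):

```python
import math
import random

def spectral_truncation_data(N, alpha=1.5, ell=0.25, beta=0.0, lam=100.0, seed=0):
    """Level-N discretization of the diagonal model by spectral truncation:
    exact power-law eigenvalue sequences (mu_K_j)^2 = j^{-4 ell},
    mu_C0_j = j^{-2 alpha}, mu_C1_j = j^{-2 beta}, an illustrative truth
    u_dagger_j = j^{beta - 2 ell - 1}, and data
    y_j = mu_K_j u_dagger_j + lam^{-1/2} mu_C1_j^{1/2} xi_j."""
    rng = random.Random(seed)
    muK = [j ** (-2.0 * ell) for j in range(1, N + 1)]
    muC0 = [j ** (-2.0 * alpha) for j in range(1, N + 1)]
    muC1 = [j ** (-2.0 * beta) for j in range(1, N + 1)]
    u_dag = [j ** (beta - 2.0 * ell - 1.0) for j in range(1, N + 1)]
    y = [k * u + math.sqrt(c1 / lam) * rng.gauss(0.0, 1.0)
         for k, u, c1 in zip(muK, u_dag, muC1)]
    return y, muK, muC0, muC1, u_dag
```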
We show that Assumptions \ref{ch3:ass1} are satisfied under Assumptions \ref{ch3:decass} and \ref{ch3:dc:ass}, for this data and discretization scheme. By Assumptions \ref{ch3:decass}, there exists a constant $c\geq0$ independent of $N$, such that \begin{align*}\mathbb{E}\norm{\mathcal{C}_0^{-\frac12}m({y})}^2_{\mathbb{R}^N}\leq c\mathbb{E}\sum_{j=1}^{N}\frac{j^{2\alpha-4\ell+4\beta}}{(j^{-4\ell+2\beta}+j^{2\alpha})^2}(j^{-2\ell}{u^\dagger_j}+j^{-\beta}\xi_j)^2, \end{align*} where the right hand side is bounded uniformly in $N$, since we are summing nonnegative numbers and we have seen that under Assumptions \ref{ch3:dc:ass} and \ref{ch3:ass22} the corresponding infinite series is summable. Furthermore, again by Assumptions \ref{ch3:decass}, there exists another constant $c\geq0$ independent of $N$, such that \begin{align*}{\rm {Tr}}(\mathcal{C}_1^{-\frac12}K\mathcal{C}_0K^\ast\mathcal{C}_1^{-\frac12})\leq c\sum_{j=1}^{N} j^{-2\alpha-4\ell+2\beta},\end{align*} where the right hand side is bounded uniformly in $N$, since we have seen that under Assumption \ref{ch3:dc:ass} the corresponding infinite series is summable. \subsection{Linear severely ill-posed simultaneously diagonalizable inverse problem}\label{ch3:ssec:sev} We consider the setting of \cite{KVZ13, ASZ12}, that is, a similar situation to the previous example, where instead of having $(\mu_j^{K})^2\asymp j^{-4\ell}$ we now have $(\mu_j^{K})^2\asymp e^{-2sj^b},$ for $b,s>0$. The proof of the validity of Assumptions \ref{ch3:infass1} $\nu$-almost surely is identical to the proof in the previous example, where we now have the added advantage of the exponential decay of the eigenvalues of $\bm{K}^\ast \bm{K}$. We can also prove that for data of the form $\bm{y}=\bm{K}{\bm{u}^\dagger}+\lambda^{-\frac12}\bm{\mathcal{C}}_1^{\frac12}\bm{\xi}$, where now it suffices to have ${\bm{u}^\dagger}\in\mathcal{X}$, Assumption \ref{ch3:infass1} is satisfied $\bm{\xi}$-almost surely.
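The uniform-in-$N$ boundedness of the trace, and the added advantage of exponential decay in the severely ill-posed case, are easy to see numerically. A sketch comparing partial sums of the trace summands in the two regimes (the power laws and the exponential rate are taken as exact, purely for illustration):

```python
import math

def trace_partial_sums(N, alpha=1.5, ell=0.5, beta=0.0, s=1.0, b=1.0):
    """Partial sums (up to N) of the eigenvalues of C1^{-1/2} K C0 K* C1^{-1/2}
    in the two regimes: mildly ill-posed, (mu_K_j)^2 ~ j^{-4 ell}, giving
    summands j^{-2 alpha - 4 ell + 2 beta}; severely ill-posed,
    (mu_K_j)^2 ~ exp(-2 s j^b), giving summands exp(-2 s j^b) j^{-2 alpha + 2 beta}."""
    mild = sum(j ** (-2.0 * alpha - 4.0 * ell + 2.0 * beta) for j in range(1, N + 1))
    severe = sum(math.exp(-2.0 * s * j ** b) * j ** (-2.0 * alpha + 2.0 * beta)
                 for j in range(1, N + 1))
    return mild, severe
```

With the condition $2\alpha+4\ell-2\beta>1$ violated, the first partial sum grows like $\log N$ instead of stabilizing, while the exponential regime converges after a handful of terms.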
Finally, in a similar way to the previous example, Assumptions \ref{ch3:ass1} are valid if we discretize this setup by spectral truncation (subsection \ref{sp}). \subsection{Nondiagonal linear inverse problem} We consider the setting of \cite{ALS13}, that is, the linear inverse problem setting of subsection \ref{sec:linear}, where $\bm{K}^\ast \bm{K}, \bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$ are not necessarily simultaneously diagonalizable but are related to each other via a range of norm equivalence assumptions expressing that $\bm{K}\simeq \bm{\mathcal{C}}_0^\ell$ and $\bm{\mathcal{C}}_1\simeq \bm{\mathcal{C}}_0^\beta$ for some $\ell, \beta\geq0$ (see \cite[Assumption 3.1]{ALS13}). Here $\simeq$ is used loosely to indicate two operators which induce equivalent norms. As before, let $\nu$ be the joint distribution of $\bm{y}$ and $\bm{u}$, where $\bm{u}|\delta\sim\mathcal{N}(0,\delta^{-1}\bm{\mathcal{C}}_0)$ and $\bm{y}|\bm{u},\delta\sim\mathcal{N}(\bm{K} \bm{u},\lambda^{-1}\bm{\mathcal{C}}_1)$. Then as in the simultaneously diagonalizable case examined above, we have that the conditional posterior $\bm{u}|\bm{y},\delta$ is $\nu$-almost surely $\mathcal{N}(\bm{m}_{\lambda,\delta}(\bm{y}),\bm{\mathcal{C}}_{\lambda,\delta})$, where $\bm{\mathcal{C}}_{\lambda,\delta}$ and $\bm{m} _{\lambda,\delta}(\bm{y})$ satisfy (\ref{ch3:eq:prec}) and (\ref{ch3:eq:mean}) respectively (see \cite[Theorem 2.1]{ALS13}). It is implicit in \cite[Theorem 2.1]{ALS13} that $\bm{m}_{\lambda,\delta}(\bm{y})\in\mathcal{D}(\bm{\mathcal{C}}_0^{-\frac12})$ $\nu$-almost surely, hence Assumption \ref{ch3:infass1}(i) holds $\nu$-almost surely.
Assumption \ref{ch3:infass1}(ii) also holds $\nu$-almost surely since if $\{\phi_j\}_{j\in\mathbb{N}}$ is a complete orthonormal system of eigenfunctions of $\bm{\mathcal{C}}_0$ and $\{\mu^{\mathcal{C}_0}_j\}_{j\in\mathbb{N}}$ the corresponding eigenvalues, then by \cite[Assumption 3.1(3)]{ALS13} we have $\norm{\bm{\mathcal{C}}_1^{-\frac12}\bm{K}\bm{\mathcal{C}}_0^\frac12\phi_j}^2\leq c\norm{\bm{\mathcal{C}}_0^{-\frac{\beta}2+\ell+\frac12}\phi_j}^2=c(\mu^{\mathcal{C}_0}_j)^{-\beta+2\ell+1}$, which is summable by \cite[Assumptions 3.1(1) and 3.1(2)]{ALS13}. Hence, $\bm{\mathcal{C}}_1^{-\frac12}\bm{K}\bm{\mathcal{C}}_0^\frac12$ is Hilbert-Schmidt, and thus $\bm{\mathcal{C}}_1^{-\frac12}\bm{K}\bm{\mathcal{C}}_0\bm{K}^\ast\bm{\mathcal{C}}_1^{-\frac12}$ is trace-class. We believe that Assumptions \ref{ch3:ass1} on the discrete level are also satisfied in this example if consistent discretization methods are used; however, proving this is beyond the scope of the present paper. \section{Numerical Results}\label{ch3:sec:sim} We now present numerical simulations supporting our result in section \ref{ch3:sec:main} on the large $N$ behaviour of CA described in subsection \ref{CA}, and our intuition contained in subsection \ref{contr} on the benefits of the reparametrization described in subsection \ref{NCA}. We consider an instance and a modification of the mildly ill-posed diagonal setting presented in subsection \ref{ch3:ssec:diag}. In subsection \ref{ch3:nex1} we use spectral truncation (see subsections \ref{sp}, \ref{ch3:ssec:diag}) and in subsection \ref{ch3:nex2} we use a finite difference approximation (see subsection \ref{fd}). \subsection{Signal in white noise model using truncated Karhunen-Loeve expansion}\label{ch3:nex1} We consider the simultaneously diagonalizable setup described in subsection \ref{ch3:ssec:diag}, where $\mathcal{X}=L^2({\rm I}), {\rm I}=(0,1)$.
We consider the orthonormal basis $\beta_j(x)=\sqrt{2}\sin(j\pi x), \;x\in {\rm I}$, and define the operators $\bm{K}, \bm{\mathcal{C}}_0$ and $\bm{\mathcal{C}}_1$ directly through their eigenvalues $\mu^K_j=1, \mu^{\mathcal{C}_0}_j=j^{-3}$ and $\mu^{\mathcal{C}_1}_j=1,$ for all $j\in\mathbb{N}$, respectively. In particular, this is the \emph{normal mean model}, in which one assumes observations of the form \[y_j=u_j+\eta_j, \quad j\in\mathbb{N},\] where $\eta_j\sim\mathcal{N}(0,\lambda^{-1})$ and the unknown is $\{u_j\}_{j\in\mathbb{N}}\in\ell^2$. This model is clearly equivalent to the \emph{white noise model}, \begin{align}\label{wn}\bm{y}=\bm{u}+\bm{\eta},\end{align} where $\bm{\eta}=\lambda^{-\frac12}\bm{\xi}$ and $\bm{\xi}$ is an unobserved Gaussian white noise, see subsection \ref{sp}. Note that $\bm{\xi}$, whose covariance function is a Dirac delta function, is not realizable in the basic Hilbert space $\mathcal{X}$ (instead, $\mathcal{X}$ is the corresponding Cameron-Martin space), but can be interpreted in process form as, for example, in \cite{BHMR07, LC08} in the context of inverse problems. Although it can be argued that white noise data models are unrealistic at the very smallest scales, they are a useful idealization of noise which is best thought of as a continuous process with very short correlation lengthscales; in particular, if the correlation lengthscale is much smaller than the grid scale used, then it is reasonable to use a white noise model. The white noise model (\ref{wn}) is an important statistical model which is known to be asymptotically equivalent to several standard statistical models, for example nonparametric regression, \cite{BL96, LZ00}. It is also practically relevant, since it is a nontrivial special case of the deconvolution inverse problem, \cite{JKPR04, KR13}. Finally, it gives rise to Gaussian posterior distributions which are well studied in the sense of posterior consistency, see \cite{KVZ12, ALS13, KR13}.
Defining $\bm{\mathcal{A}}_0$ to be the negative Laplace operator in ${\rm I}$ with Dirichlet boundary conditions, we recognize that we use a Gaussian prior with covariance operator $\bm{\mathcal{C}}_0$ proportional to $\bm{\mathcal{A}}_0^{-\frac{3}2}$. Assumptions \ref{ch3:decass} are satisfied with $\alpha=1.5$ and $\beta=\ell=0$; since $2\alpha+4\ell-2\beta=3>1$, Assumption \ref{ch3:dc:ass} is also satisfied. We assume that we have data produced from the underlying true signal ${\bm{u}^\dagger}(x)=\sum_{j=1}^{\infty} {u^\dagger_j} \sqrt{2}\sin(j\pi x),$ for $x\in{\rm I}$, where ${u^\dagger_j}=j^{-2.25}\sin(10j)$ and $\lambda=200$, and in particular we have that the coefficients of $\bm{y}$ are given as \begin{equation*}y_j={u^\dagger_j}+\lambda^{-\frac12}\xi_j,\end{equation*} where $\xi_j$ are standard normal random variables. It is straightforward to check that ${\bm{u}^\dagger}\in \mathcal{H}^{t}$ for any $t<1.75$, hence Assumption \ref{ch3:ass22} is also satisfied. According to the considerations in subsection \ref{ch3:ssec:diag}, we thus have that Assumptions \ref{ch3:ass1} hold when using the spectral truncation discretization method. This example is studied in \cite{SVZ13}, where the interest is in studying the asymptotic performance of the posterior in the small noise limit (see section \ref{ch3:sec:con}). We use the hierarchical setup presented in subsection \ref{sec:linear} and implement Algorithms \ref{ch3:algstd} (CA), \ref{ch3:algrep} (NCA) and \ref{ch3:algmar} (MA) contained in section \ref{ch3:sec:met} at discretization levels $N=32, 512, 8192$, with hyper-parameters $\upalpha_0=1,\upbeta_0=10^{-4},$ chosen to give uninformative hyper-priors, that is, hyper-priors whose variance is much larger than their mean. Following the discussion in subsection \ref{contr}, we view MA as the gold standard and benchmark CA and NCA against it. We use $10^4$ iterations and choose $\de{0}=1$ in all cases.
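In coefficient space, one sweep of CA for this example reduces to two conjugate draws: a diagonal Gaussian draw of $u|y,\delta$ followed by a gamma draw of $\delta|u$. The sketch below assumes Algorithm \ref{ch3:algstd} takes the standard gamma-conjugate form $\delta|u\sim{\rm Gamma}\big(\upalpha_0+N/2,\,\upbeta_0+\tfrac12\norm{\mathcal{C}_0^{-1/2}u}^2\big)$ in the rate parametrization; names are illustrative:

```python
import math
import random

def ca_gibbs(y, muC0, lam, alpha0=1.0, beta0=1e-4, n_iter=1000, seed=0):
    """Coefficient-space sketch of CA for the signal-in-white-noise model
    (K = C1 = I): alternate the diagonal Gaussian draw of u | y, delta with
    the gamma draw delta | u ~ Gamma(alpha0 + N/2,
    beta0 + ||C0^{-1/2} u||^2 / 2) (rate parametrization), assumed to match
    the centered algorithm.  Returns the delta-chain."""
    rng = random.Random(seed)
    N = len(y)
    delta = 1.0
    chain = []
    for _ in range(n_iter):
        # u | y, delta: independent Gaussians, mode by mode
        u = []
        for yj, c0j in zip(y, muC0):
            prec = delta / c0j + lam
            u.append(lam * yj / prec + rng.gauss(0.0, 1.0) / math.sqrt(prec))
        # delta | u: gamma with shape alpha0 + N/2, rate beta0 + ||C0^{-1/2}u||^2/2
        rate = beta0 + 0.5 * sum(uj * uj / c0j for uj, c0j in zip(u, muC0))
        delta = rng.gammavariate(alpha0 + N / 2.0, 1.0 / rate)  # scale = 1/rate
        chain.append(delta)
    return chain
```

Running this at increasing $N$ reproduces qualitatively the slowing down of the $\delta$-chain discussed above.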
In order to have fair comparisons, we use a fixed burn-in time of $10^3$ iterations. We take the viewpoint that we have a fixed computational budget, hence we choose not to increase the burn-in time as $N$ increases as one can do if infinite resources are available. In Figure \ref{ch3:fig1} we plot the true solution, the noisy data and the sample means and credibility bounds using CA and NCA for $N=8192$. The sample means and credibility bounds at other discretization levels of the unknown are similar and are therefore omitted. \begin{figure}[htp] \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/data8192} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/recostd8192} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/recorep8192} \caption{Left: true solution (dashed black) and noisy data (blue continuous). Middle and right: true solution (dashed black), sample mean (red continuous) and 87.5$\%$ credibility bounds (shaded area) for CA (middle) and NCA (right). Dimension is $N=8192$.} \label{ch3:fig1} \end{figure} \begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsstd32} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsstd512} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsstd8192} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densstdmar32} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densstdmar512} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densstdmar8192}} \caption{CA: $\delta$-chains (top) and kernel density estimates of the posterior on $\delta$ (bottom) for dimensions $N=32, 512$ and $8192$ left to right. 
In dashed red in the density plots is the density estimate using MA, considered as the gold standard.} \label{ch3:fig2} \end{figure} In Figure \ref{ch3:fig2} we see that for CA, in small dimensions the $\delta$-chain mixes well; however, as predicted by Theorem \ref{ch3:thm1}, as $N$ increases it becomes increasingly slow and exhibits diffusive behaviour. This is also reflected in the density plots, where we observe that as $N$ increases, the kernel density estimates computed using CA look less and less like the density estimates computed using MA, which we consider to be optimal in this setting. In Figure \ref{ch3:fig3} we see that for NCA, as expected, the $\delta$-chain appears to be robust with respect to the increase in dimension; this is also reflected in the density estimates using NCA, which now look very close to the ones obtained using MA for all discretization levels. \begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsrep32} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsrep512} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsrep8192} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densrepmar32} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densrepmar512} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densrepmar8192}} \caption{NCA: $\delta$-chains (top) and kernel density estimates of the posterior on $\delta$ (bottom) for dimensions $N=32, 512$ and $8192$ left to right. {In dashed red in the density plots is the density estimate using MA, considered as a gold standard.}} \label{ch3:fig3} \end{figure} Our observations in Figures \ref{ch3:fig2} and \ref{ch3:fig3} are supported by the autocorrelation plots presented in Figure \ref{ch3:fig4}.
The rate of decay of correlations in the $\delta$-chain in CA appears to decrease as the dimension increases, and in particular for $N=8192$ the correlations seem not to decay at all. On the contrary, the rate of decay of correlations in the $\delta$-chain in NCA appears not to be affected by the increase in dimension and is very similar to that in MA. \begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{spplots/autocormar} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{spplots/autocorstd} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{spplots/autocorrep} } \caption{Autocorrelation functions of $\delta$-chain, dimensions 32 (black), 512 (red) and 8192 (blue); {left for MA, middle for CA, right for NCA}.} \label{ch3:fig4} \end{figure} \subsection{Linear Bayesian inverse problem with coarse data using finite difference discretization}\label{ch3:nex2} We consider a modification of the simultaneously diagonalizable setup described in subsection \ref{ch3:ssec:diag}, where $\mathcal{X}=L^2({\rm I}), {\rm I}=(0,1)$, and we allow $\bm{K}$ to map $\mathcal{X}$ into $\mathbb{R}^M$ and hence have data $y\in\mathbb{R}^M$. This setting is not directly covered by the theoretical analysis presented in section \ref{ch3:sec:main}; however, our theory readily generalizes to cover it, and we refer the interested reader to the PhD thesis \cite[section 4.5]{SA13} for more details. The generalized analysis again holds under Assumptions \ref{ch3:ass1} on the discrete level, which are based on intuition valid for problems satisfying Assumptions \ref{ch3:infass1} on the underlying continuum model for the unknown $\bm{u}$. In particular, we consider the problem of recovering a true signal ${\bm{u}^\dagger}$, by observing a blurred version of it at $M$ equally spaced points $\{\frac{1}{M+1},...,\frac{M}{M+1}\}$, polluted by additive independent Gaussian noise of constant variance $\lambda^{-1}$.
We define $\bm{\mathcal{A}}_0$ to be the negative Laplacian with Dirichlet boundary conditions in ${\rm I}$. We let ${\bf P}$ be defined as in subsection \ref{fd} and define $\tilde{\bm{K}}=(\bm{I}+\frac{1}{100\pi^2} \bm{\mathcal{A}}_0)^{-1}$, and consider the case $\bm{K}={\bf P} \tilde{\bm{K}}$, $\bm{\mathcal{C}}_0=\bm{\mathcal{A}}_0^{-1}$ and $\mathcal{C}_1=I_M$ in the setting of subsection \ref{sec:linear}, where $I_M$ is the $M\times M$ identity matrix. Notice that due to the smoothing effect of $\tilde{\bm{K}}$, the operator $\bm{K}$ is bounded in $\mathcal{X}$. However, due to the presence of ${\bf P}$, $\bm{K}$ is not simultaneously diagonalizable with $\bm{\mathcal{C}}_0$. We now check that this problem satisfies Assumptions \ref{ch3:infass1}. Indeed, assuming without loss of generality that $\lambda=\delta=1$, by \cite[Example 6.23]{AS10} we have that the posterior covariance and mean satisfy (\ref{ch3:eq:prec}) and (\ref{ch3:eq:mean}), hence $\bm{\mathcal{C}}_0^{-\frac12}\bm{m}(y)=\bm{\mathcal{C}}_0^{-\frac12}(\bm{\mathcal{C}}_0^{-1}+\bm{K}^\ast \bm{K})^{-1}\bm{K}^\ast y=(I+\bm{\mathcal{C}}_0^\frac12\bm{K}^\ast \bm{K}\bm{\mathcal{C}}_0^\frac12)^{-1}\bm{\mathcal{C}}_0^\frac12\bm{K}^\ast y$, where $\bm{\mathcal{C}}_0^\frac12 \bm{K}^\ast y\in \mathcal{X},$ and $(I+\bm{\mathcal{C}}_0^\frac12\bm{K}^\ast \bm{K}\bm{\mathcal{C}}_0^\frac12)^{-1}$ is bounded in $\mathcal{X}$ by the nonnegativity of $\bm{\mathcal{C}}_0^\frac12\bm{K}^\ast \bm{K}\bm{\mathcal{C}}_0^\frac12$. Furthermore, we have that ${\rm {Tr}}(\mathcal{C}_1^{-\frac12}\bm{K}\bm{\mathcal{C}}_0 \bm{K}^\ast \mathcal{C}_1^{-\frac12})={\rm {Tr}}(\bm{K} \bm{\mathcal{C}}_0 \bm{K}^\ast)$, which is finite since $\bm{K} \bm{\mathcal{C}}_0 \bm{K}^\ast$ is an $M\times M$ matrix. We discretize this setup at level $N$, using the finite differences approximation as explained in subsection \ref{fd}.
In particular, we discretize $\bm{\mathcal{A}}_0, {\bf P}$ and ${\bf P}^\ast$ by replacing them with the matrices $\mathcal{A}_0, P$ and $(N+1)P^T$ respectively as in subsection \ref{fd}; this induces a discretization of the operators $\bm{K}$ and $\bm{\mathcal{C}}_0$ by replacing them with the corresponding matrices $K$ and $\mathcal{C}_0$ calculated through the appropriate functions of $\mathcal{A}_0$ and $P$. In defining $K$, we also replace the identity operator by the $N\times N$ identity matrix. We do not prove that this discretization scheme satisfies Assumptions \ref{ch3:ass1}; however, we expect this to be the case. We assume that we have data produced from the underlying true signal ${\bm{u}^\dagger}(x)=0.75\cdot\mathds{1}_{[0.1,0.25]}(x)+0.25\cdot\mathds{1}_{[0.35,0.38]}(x)+\sin^4(2\pi x)\cdot\mathds{1}_{[0.5,1]}(x), \;x\in{\rm I}.$ In particular, we construct data of the form \[y=\bm{K}{\bm{u}^\dagger}+\lambda^{-\frac12}\mathcal{C}_1^\frac12\xi,\] where $\lambda=100$, using a discretization level $N_c=8192$ for the unknown; we treat this discretization level as the continuum limit. We implement Algorithms \ref{ch3:algstd} (CA), \ref{ch3:algrep} (NCA) and \ref{ch3:algmar} (MA) for a constant number of observation points $M=15$, and for discretization levels of the unknown $N=15, 127, 1023$, with hyper-parameters $\upalpha_0=1,\upbeta_0=10^{-4},$ chosen to give uninformative hyper-priors, that is, hyper-priors whose variance is much larger than their mean. Following the discussion in subsection \ref{contr}, we view MA as the gold standard and benchmark CA and NCA against it. We use $10^4$ iterations and choose $\de{0}=1$ in all cases. We again use a constant burn-in time of $10^3$ iterations. In Figure \ref{ch3:fig6} we plot the true solution, the noisy data and the sample means and credibility bounds using CA and NCA for $N=1023$. The sample means and credibility bounds at other discretization levels of the unknown are similar and are therefore omitted.
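The matrices just described can be assembled explicitly. The sketch below assumes the standard second-order stencil for $\mathcal{A}_0$ and pointwise evaluation for $P$ (we do not reproduce subsection \ref{fd} here, so both are assumptions), and applies $K=P(I+\frac{1}{100\pi^2}\mathcal{A}_0)^{-1}$ to a grid function via a tridiagonal (Thomas) solve:

```python
import math

def laplacian_fd(N):
    """Tridiagonal finite-difference negative Laplacian (N+1)^2*tridiag(-1,2,-1)
    on (0,1) with Dirichlet conditions, at interior grid points i/(N+1);
    returns the sub-, main and super-diagonals (assumed form of A_0)."""
    h2 = float((N + 1) ** 2)
    return [-h2] * N, [2.0 * h2] * N, [-h2] * N

def apply_K(u, M):
    """Apply K = P (I + A_0 / (100 pi^2))^{-1} to a grid function u:
    a tridiagonal (Thomas) solve for (I + c A_0) w = u, followed by pointwise
    observation of w at the M equally spaced points k/(M+1) (our assumed P)."""
    N = len(u)
    sub, diag, sup = laplacian_fd(N)
    c = 1.0 / (100.0 * math.pi ** 2)
    a = [c * v for v in sub]            # entries multiplying w[i-1]
    b = [1.0 + c * v for v in diag]
    d = [c * v for v in sup]            # entries multiplying w[i+1]
    cp, dp = [0.0] * N, [0.0] * N       # Thomas forward sweep
    cp[0], dp[0] = d[0] / b[0], u[0] / b[0]
    for i in range(1, N):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = d[i] / m
        dp[i] = (u[i] - a[i] * dp[i - 1]) / m
    w = [0.0] * N                       # back substitution
    w[N - 1] = dp[N - 1]
    for i in range(N - 2, -1, -1):
        w[i] = dp[i] - cp[i] * w[i + 1]
    return [w[round((k + 1) * (N + 1) / (M + 1)) - 1] for k in range(M)]
```

Because $\sin(\pi x)$ is an exact eigenvector of the discrete Laplacian, the output at the observation points is $\sin(\pi x)/(1+c\lambda_{\rm d})$ with $\lambda_{\rm d}\approx\pi^2$, which gives a quick correctness check.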
\begin{figure}[htp] \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/data1024} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/recostd1024} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/recorep1024} \caption{Left: true solution (dashed black) and discrete blurred noisy data (blue asterisks). Middle and right: true solution (dashed black), sample mean (red continuous) and 87.5$\%$ credibility bounds (shaded area) for CA (middle) and NCA (right). Dimensions of true solution and observed data are $N=1023$ and $M=15$ respectively.} \label{ch3:fig6} \end{figure} In Figure \ref{ch3:fig7} we see that for CA, in small dimensions the $\delta$-chain mixes well; however, as predicted by our theory, as $N$ increases it becomes increasingly slow and exhibits diffusive behaviour. This is also reflected in the density plots, where we observe that as $N$ increases, the kernel density estimates computed using CA look less and less like the density estimates computed using MA, which we consider to be optimal in this setting. In Figure \ref{ch3:fig8} we see that for NCA the $\delta$-chain appears to be robust with respect to the increase in dimension; this is also reflected in the density estimates using NCA, which now look very close to the ones obtained using MA for all discretization levels.
\begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/sampsstd16} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/sampsstd128} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/sampsstd1024} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/densstdmar16} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/densstdmar128} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/densstdmar1024}} \caption{CA: $\delta$-chains (top) and kernel density estimates of the posterior on $\delta$ (bottom) for dimensions $N=15, 127$ and $1023$ left to right. In dashed red in the density plots is the density estimate using MA, considered as a gold standard.} \label{ch3:fig7} \end{figure} \begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/sampsrep16} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/sampsrep128} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/sampsrep1024} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/densrepmar16} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/densrepmar128} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{dfdplots/densrepmar1024}} \caption{NCA: $\delta$-chains (top) and kernel density estimates of the posterior on $\delta$ (bottom) for dimensions $N=15, 127$ and $1023$ left to right. {In dashed red in the density plots is the density estimate using MA, considered as a gold standard.}} \label{ch3:fig8} \end{figure} Our observations in Figures \ref{ch3:fig7} and \ref{ch3:fig8} are supported by the autocorrelation plots presented in Figure \ref{ch3:fig9}. 
The rate of decay of correlations in the $\delta$-chain in CA appears to decrease as the dimension increases, and in particular for large $N$ the correlations seem to decay very slowly. On the contrary, the rate of decay of correlations in the $\delta$-chain in NCA appears not to be affected by the increase in dimension and is relatively close to the one in MA. \begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{dfdplots/autocormar} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{dfdplots/autocorstd} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{dfdplots/autocorrep} } \caption{Autocorrelation functions of $\delta$-chain, dimensions 15 (black), 127 (red) and 1023 (blue); {left for MA, middle for CA, right for NCA}.} \label{ch3:fig9} \end{figure} \section{Conclusions}\label{ch3:sec:con} We considered a hierarchical Bayesian approach to the function-space general inverse problem (\ref{ch3:eq:1}), with Gaussian priors on the unknown function $\bm{u}$ which depend on a variance-scaling parameter $\delta$ also endowed with a prior. We studied the finite-dimensional implementation of this setup and in particular, examined the mixing properties of MwG algorithms for sampling the posterior, as the discretization level $N$ of the unknown increases. We provided measure-theoretic intuition suggesting that under natural assumptions on the underlying function space model, as $N$ increases, CA, which is the most natural algorithm in this setting, deteriorates (see section \ref{ch3:sec:int}). We then used this intuition to propose a reparametrization of the prior for which the resultant algorithm, NCA, is expected to be robust with respect to $N$. In the linear-conjugate setting we formulated rigorous theory which quantifies the deterioration of CA in the asymptotic regime of large $N$ (see section \ref{ch3:sec:main}). 
This theory holds under assumptions on the discrete level (Assumptions \ref{ch3:ass1}) which we expect to be inherited from our assumptions on the function-space model (Assumptions \ref{ch3:infass1}) when consistent discretizations are used. Indeed, we provided three families of linear inverse problems satisfying our assumptions on the underlying infinite-dimensional model (section \ref{ch3:sec:ex}), and for two of them, which are families of mildly and severely ill-posed problems in a simultaneously diagonal setting, we also showed that a spectral truncation method based on the common eigenbasis satisfies our discrete level assumptions (subsections \ref{ch3:ssec:diag} and \ref{ch3:ssec:sev}). It would be interesting to show that discretization via finite differences of these examples also satisfies our discrete assumptions. Our numerical results confirmed our theory on the deterioration of CA as well as our intuition about the robustness of NCA in the large $N$ limit. However, for NCA the $\delta$-chain slows down in the small noise limit. This is because even though $v$ and $\delta$ are a priori independent, they both need to explain the data, and this creates an increasingly severe constraint as $\lambda$ becomes large. Hence, $\delta$ and $v$ concentrate near a lower dimensional manifold, where $\delta^{-\frac12} K v \approx y$, and the Gibbs sampler mixes poorly (see Figure \ref{ch3:fig5} for a numerical illustration of this effect in the example of subsection \ref{ch3:nex1}). Although MA is robust in both the large $N$ and the small noise limit, it can be prohibitively expensive for large scale inverse problems; new work is required to produce effective hierarchical algorithms in this small noise limit, when $N$ is large.
We have considered the interweaving method of \cite{YM11}, which combines in each iteration centered and non-centered draws of $\delta$, and the partially non-centered parametrizations of \cite{PRS03}, in which the prior is reparametrized as $u=\delta^{-\frac{t}2}v_t$ where $v_t\sim\mathcal{N}(0,\delta^{t-1}\bm{\mathcal{C}}_0)$, for some $t\in[0,1]$. Our numerical experimentation did not suggest significant benefits from their use, hence we do not report them here, but further investigation of these issues would be of interest. \begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/sampsrep512sm} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.32\columnwidth]{spplots/densrepmar512sm}} \caption{Signal in white noise model - NCA for small noise, $\lambda=200^2$, and dimension $N=512$: $\delta$-chain (left) and kernel density estimate of posterior on $\delta$ (right, black). In dashed red in right plot is the density estimate using MA, considered as a gold standard.} \label{ch3:fig5} \end{figure} In addition to \cite{JB12}, a similar hierarchical setup has been considered in \cite{SVZ13} in the \emph{signal in white noise} model (see subsection \ref{ch3:nex1}). The authors of \cite{SVZ13} study a different aspect of the problem, namely the asymptotic performance of the posterior distribution in the small noise limit. This is motivated by results on posterior consistency suggesting that the optimal rates of contraction are achieved by rescaling the prior depending on the size of the noise \cite{KVZ12, ALS13, KVZ13, ASZ12}. They also study an empirical Bayes method for estimating the value of the prior scaling from the data and show that both methods achieve optimal posterior contraction rates over a range of regularity classes of the true solution. However, we have seen in this paper that the implementation of the hierarchical Bayesian method in the large dimensional limit is problematic. 
On the other hand, while the empirical Bayes method is appealing because of the lack of mixing issues, it involves solving an optimization problem which in more complicated models can be computationally demanding, and it does not provide uncertainty quantification of the prior scaling, which may be desirable. Again we highlight the need for more research and new ideas in the small noise limit, when $N$ is large. An asymptotic regime which we have not yet investigated is the case where we have a sequence of $N$-dimensional linear inverse problems, with the relevant matrices being consistent discretizations of linear operators and where the size of the noise decreases as $N$ grows larger, that is $\lambda=\lambda(N)\to\infty$ as $N\to\infty$. This is the limit of an infinite dimensional unknown which is also identifiable from the data. Since in this regime, as $N$ grows larger the supports of both $\delta|y,u$ and $\delta|y$ shrink to zero, we expect that there will be an optimal relationship between $\lambda$ and $N$, for which CA will not deteriorate for large $N$. Our theory on the slowing down of the $\delta$-chain can be extended to cover nonlinear Gaussian-conjugate Bayesian inverse problems and in particular the nonparametric drift estimation in the SDE setting considered in \cite{ PPRS12, PSZ13, MSZ13}; see \cite[Chapter 4.5]{SA13}. Again the main result holds under assumptions on the discrete level which we expect to be inherited by consistent discretizations from natural assumptions on the underlying infinite-dimensional model, which express that the posterior is absolutely continuous with respect to the prior. Furthermore, our infinite-dimensional intuition extends to hierarchical setups for inference on other hyper-parameters, for instance the prior regularity parameter $\alpha$, where $\mathcal{C}_0=\bm{\mathcal{A}}_0^{-\alpha}$, as studied in \cite{KSVZ12}.
In Figure \ref{ch3:fig11} we plot autocorrelation functions for the centered MwG algorithm used in this setting and the corresponding version of the non-centered algorithm; as before we also implemented the corresponding marginal algorithm and used it as the gold standard. The underlying truth, the noise distribution and the discretization method are the same as in subsection \ref{ch3:nex1} and we use an exponential hyper-prior on $\alpha$. The idea is the same as the intuition presented in section \ref{ch3:sec:int}, since in infinite dimensions two Gaussian measures $\mathcal{N}(0,\Sigma_1)$ and $\mathcal{N}(0,\Sigma_2),$ where $\Sigma_1$ and $\Sigma_2$ are simultaneously diagonalizable with eigenvalues $\{j^{-{\alpha_1}}\}_{j\in\mathbb{N}}$ and $\{j^{-{\alpha_2}}\}_{j\in\mathbb{N}}$ respectively, are mutually singular unless $\alpha_1=\alpha_2$. Indeed, our numerical simulations confirm again the deterioration of the centered algorithm and the robustness of the non-centered algorithm, in the large $N$ limit. More generally, as suggested in section \ref{ch3:sec:int}, our intuition applies to inference with any prior on $\bm{u}$ which depends on a hyper-parameter $\theta$, provided that $\bm{u}|\bm{y},\theta$ is absolutely continuous with respect to $\bm{u}|\theta$ almost surely with respect to the data, while $\bm{u}|\theta$ and $\bm{u}|\theta'$ are mutually singular when $\theta\neq\theta'$.
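The mutual singularity claim can be checked quantitatively: by the Feldman--Hajek theorem, two centered Gaussians that are simultaneously diagonal with variance ratios $\lambda_{1,j}/\lambda_{2,j}$ can be equivalent only if $\sum_j(\lambda_{1,j}/\lambda_{2,j}-1)^2<\infty$. A small sketch of the partial sums for the eigenvalues $j^{-\alpha}$ used here (the truncation level `J` is an arbitrary illustrative choice):

```python
def fh_partial_sum(alpha1, alpha2, J):
    """Partial sum of sum_j (lam1_j/lam2_j - 1)**2 for simultaneously
    diagonal covariances with eigenvalues lam_i,j = j**(-alpha_i).
    The Feldman-Hajek condition requires the full series to be finite;
    it is identically zero iff alpha1 == alpha2 and diverges otherwise."""
    return sum((j ** (alpha2 - alpha1) - 1.0) ** 2 for j in range(1, J + 1))
```

The partial sums keep growing whenever $\alpha_1\neq\alpha_2$, consistent with the mutual singularity of the two priors.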
\begin{figure}[htp] \center{ \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{spregplots/autocormar} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{spregplots/autocorstd} \includegraphics[type=pdf, ext=.pdf, read=.pdf, width=0.3\columnwidth]{spregplots/autocorrep}} \caption{Autocorrelation functions of $\alpha$-chain, dimensions 32 (black), 512 (red) and 8192 (blue); left for marginal, middle for centered, right for non-centered.} \label{ch3:fig11} \end{figure} Returning to the general nonlinear setting discussed in section \ref{ch3:sec:int}, we note that both Algorithms \ref{ch3:algstd} and \ref{ch3:algrep} are straightforward to generalize, albeit with some loss of efficiency compared to the linear-conjugate setting. The distribution of $\bm{u}|\bm{y},\delta$ no longer belongs to a known parametric family of distributions, and thus has to be sampled using a Metropolis-Hastings step (for example, one based on a Langevin diffusion). Moreover, for nonlinear inverse problems there is no longer an easy way of finding the marginal distribution $\bm{y}|\delta$, hence MA will not be an option. The so-called \emph{pseudo-marginal algorithm} \cite{AR09} might be an alternative for non-linear problems, and has recently been employed to perform Bayesian inference using Gaussian process priors in \cite{FG13}. An interesting research direction is the comparison of the performance of the two MwG algorithms with the pseudo-marginal algorithm in both the large $N$ and the small noise limits. Finally, our research agenda includes extending to the hierarchical setting of the present paper the analysis contained in \cite{CDS10} of the bias in the estimated posterior distribution due to the discretization of the unknown and forward problem. \section{Appendix}\label{ch3:sec:ap} In this section we present the proof of Theorem \ref{ch3:thm1}, as well as several technical results and lemmas.
Subsection \ref{ch3:ssec:dproof} contains the proof of Theorem \ref{ch3:thm1}, the backbone of which is Lemma \ref{ch3:lem1} proved in subsection \ref{ch3:ssec:ap1}. In subsection \ref{ch3:ssec:ap3} we state and prove a lemma on the negative moments of the rate parameter in the $\delta$ draw (\ref{ch3:eq:dd}), which allows us to control the lower order terms arising in the proof of Theorem \ref{ch3:thm1}. Finally, in subsection \ref{ch3:ssec:ap4}, we prove several probability and linear algebra lemmas, which are useful in our analysis. \subsection{Proof of Theorem \ref{ch3:thm1}}\label{ch3:ssec:dproof} We now prove Theorem \ref{ch3:thm1} under Assumptions \ref{ch3:ass1}. Using the scaling property of the gamma distribution, ${\rm {Gamma}}(\upalpha,\upbeta)\stackrel{\mathcal{L}}{=} \upbeta^{-1}{\rm {Gamma}}(\upalpha,1)$, and multiplying and dividing by $\frac{2}N\delta$, we can write the $\de{k+1} _N$ draw in (\ref{ch3:eq:dd}) as \begin{align}\label{ch3:eq:dd1}\de{k+1}_N&\stackrel{\mathcal{L}}{=}\delta \frac{\Gamma_{0,N}}{\frac2N\delta(\upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u_{\delta}^{(k)}}^2_{\mathbb{R}^N})} \end{align} where $\Gamma_{0,N}\sim{\rm {Gamma}}(\upalpha_0+\frac{N}2,\frac{N}2)$ is independent of ${y}$ and $u_{\delta}^{(k)}$. Defining $ W_{2,N}:=\frac{\Gamma_{0,N}-1-\frac{2\upalpha_0}N}{\sqrt{\frac{2}N+\frac{4\upalpha_0}{N^2}}}$, we have \begin{align*}\Gamma_{0,N}=1+\frac{2\upalpha_0}N+\sqrt{\frac{2}N+\frac{4\upalpha_0}{N^2}}W_{2,N},\end{align*} where for every $N$, the random variable $W_{2,N}$ has mean zero and variance one, third and fourth moments bounded uniformly in $N$ (see Lemma \ref{ch3:lemgam}), and is independent of the data ${y} $ and $\zeta$, the Gaussian white noise expressing the fluctuation in $u^{(k)}_{\delta}$. 
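The centering $1+\frac{2\upalpha_0}N$ and scaling $\frac2N+\frac{4\upalpha_0}{N^2}$ in the definition of $W_{2,N}$ are just the mean and variance of $\Gamma_{0,N}$ in the shape--rate convention; a quick numerical check of this bookkeeping (the values of $\upalpha_0$ and $N$ in the test are illustrative):

```python
def gamma_moments(alpha0, N):
    """Mean and variance of Gamma(alpha0 + N/2, rate N/2), which give the
    centering and scaling used in the definition of W_{2,N}:
    mean = 1 + 2*alpha0/N and variance = 2/N + 4*alpha0/N**2."""
    shape, rate = alpha0 + N / 2.0, N / 2.0
    return shape / rate, shape / rate ** 2
```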
Concatenating we get \begin{align}\label{ch3:eq:sp1}\de{k+1}_N\stackrel{\mathcal{L}}{=} \delta\frac{1+\frac{2\upalpha_0}N+\sqrt{\frac{2}N+\frac{4\upalpha_0}{N^2}}W_{2,N}}{1+\sqrt{\frac2N}W_{1,N}+\frac{2} NF_N(\delta)\delta},\end{align} and we are now ready to prove Theorem \ref{ch3:thm1}: \begin{proof} By the independence of $W_{2,N}$ and $\zeta$ and since $\mathbb{E}[W_{2,N}]=0$, we have \begin{align*}\mathbb{E}[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta]&=\delta\mathbb{E}\left[\frac{1+\frac{2\upalpha_0}{N}+\sqrt{\frac2N+ \frac{4\upalpha_0}{N^2}}W_{2,N}}{1+\sqrt{\frac2N}W_{1,N}+\frac{2F_N\delta}N}-1\right]\\ &=\delta\mathbb{E}^\zeta\left[\frac{\frac{2\upalpha_0}N-\sqrt{\frac2N}W_{1,N}-\frac{2F_N\delta}N}{1+\sqrt{\frac2N}W_{1,N}+ \frac{2F_N\delta}N}\right]. \end{align*} Using the identity $\frac{1}{1+x}=1-x+\frac{x^2}{1+x}$ we get \begin{align*}\mathbb{E}&[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta]\\=&\delta\mathbb{E}^\zeta\left[\left(\frac{2(\upalpha_0-F_N\delta)}N- \sqrt{\frac2N}W_{1,N}\right)\left(1-\sqrt{\frac2N}W_{1,N}-\frac{2F_N\delta}N\right)\right]+\mathbb{E}^\zeta[e_{1,N}], \end{align*} where \begin{align*}e_{1,N}=\delta\frac{\left(\frac{2(\upalpha_0-F_N\delta)}N-\sqrt{\frac2N}W_{1,N}\right) \left(\frac{2W_{1,N}^2}N+\frac{4F_N^2\delta^2}{N^2}+\frac{4\sqrt{2}F_NW_{1,N}\delta}{N^\frac32}\right)} {1+\sqrt{\frac2N}W_{1,N}+\frac{2F_N\delta}N}.\end{align*} Using H\"older's inequality and the fact that $F_N$ and $W_{1,N}$ have moments of all positive orders which are bounded uniformly in $N$, we get \begin{align*}\mathbb{E}[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta]&=\frac{2}N\left((\upalpha_0+1)\delta-\mathbb{E}^\zeta[F_N] \delta^2\right)+\mathcal{O}(N^{-\frac32})+\mathbb{E}^\zeta[e_{1,N}], \end{align*} almost surely with respect to $\bm{y}$. 
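The expansion above rests on the exact identity $\frac1{1+x}=1-x+\frac{x^2}{1+x}$, and the variance computation later in the proof uses the second-order analogue $\frac1{(1+x)^2}=1-2x+\frac{3x^2+2x^3}{(1+x)^2}$. Both are pure algebra, checked here in exact rational arithmetic:

```python
from fractions import Fraction

def expand_inv(x):
    """1/(1+x) = 1 - x + x**2/(1+x): first-order expansion with remainder."""
    return 1 - x + x * x / (1 + x)

def expand_inv_sq(x):
    """1/(1+x)**2 = 1 - 2x + (3x**2 + 2x**3)/(1+x)**2: the second-order
    analogue used for the variance of the step."""
    return 1 - 2 * x + (3 * x ** 2 + 2 * x ** 3) / (1 + x) ** 2
```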
For the residual $e_{1,N}$, by the Cauchy-Schwarz inequality and (\ref{ch3:eq:denom}), we have \begin{align*} \mathbb{E}^\zeta&[e_{1,N}]=\mathbb{E}^\zeta\bigg[\frac{\left(\frac{2(\upalpha_0-F_N\delta)}N-\sqrt{\frac2N}W_{1,N}\right) \left(W_{1,N}^2+\frac2{N}F_N^2\delta^2+\frac{2\sqrt{2}}{N^\frac12}F_NW_{1,N}\delta\right)}{\frac{N}{2\delta}(1+\sqrt{\frac2N}W_{1,N}+\frac{2F_N\delta}N)}\bigg]\\ &\leq\bigg(\mathbb{E}\Big[\left(\frac{2(\upalpha_0-F_N\delta)}N-\sqrt{\frac2N}W_{1,N}\right)^2\left(W_{1,N}^2+ \frac{2F_N^2\delta^2}N+\frac{2\sqrt{2}F_NW_{1,N}\delta}{N^\frac12}\right)^2\Big]\bigg)^\frac12\\ &\quad\;\cdot\bigg(\mathbb{E}\Big[(\upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u^{(k)}_\delta}^2_{\mathbb{R}^N})^{-2}\Big]\bigg)^\frac12. \end{align*} The square root of the first expectation on the right hand side of the inequality is of order $N^{-\frac12}$, while by Lemma \ref{ch3:lemdres} the square root of the second expectation is of order $N^{-1}$ for almost all $\bm{y}$. Combining, we get that $\mathbb{E}^\zeta[e_{1,N}]=\mathcal{O}(N^{-\frac32})$, almost surely with respect to $\bm{y},$ hence \begin{align*}\mathbb{E}[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta]&=\frac2N\left((1+\upalpha_0)\delta-\mathbb{E}^\zeta[F_N] \delta^2\right)+\mathcal{O}(N^{-\frac32}), \end{align*}$\bm{y}$-almost surely. For the variance of the step, we have \begin{align*}\mbox{Var}\left[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta\right]=&\mathbb{E}\left[(\de{k+1}_N-\de{k}_N)^2|\de{k}_N=\delta\right]-\mathbb{E}\left[\de{k+1}_N-\de{k}_N|\de{k}_N=\delta \right]^2, \end{align*} where by the first part of the proof the second term is $\mathcal{O}(N^{-2})$. Thus, we need only consider the first term, which will be shown to be $\mathcal{O}(N^{-1})$.
By equation (\ref{ch3:eq:sp1}) we have \begin{align*}\mathbb{E}\left[(\de{k+1}_N-\de{k}_N)^2|\de{k}_N=\delta\right]&=\delta^2\mathbb{E}\left[\left(\frac{\frac{2\upalpha_0} N+\sqrt{\frac2N+\frac{4\upalpha_0}{N^2}}W_{2,N}-\sqrt{\frac2N}W_{1,N}-\frac{2F_N\delta}N}{1+\sqrt{\frac2N} W_{1,N}+\frac{2F_N\delta}N}\right)^2\right]\\ &=\delta^2\mathbb{E}\left[\frac{\frac{2W_{2,N}^2}N+\frac{2W_{1,N}^2}N+\frac{V_N}{N^\frac32}}{\left(1+\sqrt{\frac2N} W_{1,N}+\frac{2F_N\delta}N\right)^2}\right], \end{align*} where the random variable $V_N$ depends only on $W_{1,N}$ and $F_N$ and has higher order moments which are bounded uniformly in $N$, $\bm{y}$-almost surely (the dependence on $W_{2,N}$ disappears by the independence of $W_{2,N}$ and $\zeta$ and the fact that $W_{2,N}$ has mean zero and variance one). Using the identity $\frac{1}{(1+x)^2}=1-2x+ \frac{3x^2+2x^3}{(1+x)^2}$, we get \begin{align*}\mathbb{E}&\left[(\de{k+1}_N-\de{k}_N)^2|\de{k}_N=\delta\right]\\=&\delta^2\mathbb{E}\left[\left(\frac{2W_{2,N} ^2}N+\frac{2W_{1,N}^2}N+\frac{V_N}{N^\frac32}\right)\left(1-2\sqrt{\frac2N}W_{1,N}-\frac4NF_N\delta\right) \right]+\mathbb{E}[e_{2,N}], \end{align*} where \begin{align*}e_{2,N}&=\delta^2\left(\frac{2W_{2,N}^2}N+\frac{2W_{1,N}^2}N+\frac{V_N}{N^ \frac32}\right)\frac{3\left(\sqrt{\frac2N}W_{1,N}+\frac{2F_N\delta}N\right)^2+2\left(\sqrt{\frac2N}W_{1,N}+ \frac{2F_N\delta}N\right)^3}{\left(1+\sqrt{\frac2N}W_{1,N}+\frac{2F_N\delta}N\right)^2}\\ &:=\frac{E_N\delta^2}{\left(1+\sqrt{\frac2N}W_{1,N}+\frac{2F_N\delta}N\right)^2}.\end{align*} Using the fact that $\bm{y}$-almost surely $W_{1,N}$, $F_N$ and $V_N$ have moments of all positive orders which are bounded uniformly in $N$, by H\"older's inequality (we do not need to consider higher order moments for $W_{2,N}$ here, because it is independent of $W_{1,N}$ and $F_N$, hence bounding terms involving $W_{2,N}$ does not require the use of H\"older's inequality, which needs higher moments), we get that \begin{align*}
\mathbb{E}[(\de{k+1}_N-\de{k}_N)^2|\de{k}_N=\delta]&=\frac{2\delta^2}{N}\left(\mathbb{E}[W_{2,N}^2]+\mathbb{E}[W_{1,N}^2]\right)+ \mathcal{O}(N^{-\frac32})+\mathbb{E}[e_{2,N}], \end{align*} $\bm{y}$-almost surely. For the residual $e_{2,N}$, as before using Cauchy-Schwarz inequality and (\ref{ch3:eq:denom}), \begin{align*} \mathbb{E}[e_{2,N}]&\leq\frac{N^{2}}4\big(\mathbb{E}[E_N^2]\big)^\frac12\bigg(\mathbb{E}[(\upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u^{(k)}_ \delta}^2_{\mathbb{R}^N})^{-4}]\bigg)^\frac12.\end{align*} Since by Lemma \ref{ch3:lemgam} the first four moments of $W_{2,N}$ are also bounded uniformly in $N$, the square root of the first expectation on the right hand side is of order $N^{-2}$, while by Lemma \ref{ch3:lemdres} the square root of the second expectation is of order $N^{-2}$, for almost all $\bm{y}$. Combining we get $\mathbb{E}^\zeta[e_{2,N}]=\mathcal{O}(N^{-2})$, almost surely with respect to $\bm{y}$, hence since $\mathbb{E}[W_{1,N}^2]=\mathbb{E}[W_{2,N}^2]=1$, \begin{align*} \mathbb{E}[(\de{k+1}_N-\de{k}_N)^2|\de{k}_N=\delta] &=\frac{4\delta^2}{N}+\mathcal{O}(N^{-\frac32}), \end{align*}$\bm{y}$-almost surely. Concatenating, we get the result. \end{proof} \subsection{Proof of Lemma \ref{ch3:lem1}}\label{ch3:ssec:ap1} \begin{proof} Let $\{e_j\}_{j=1}^N$ be any orthonormal basis of $\mathbb{R}^N$ (with respect to the possibly scaled norm $\smnorm{\cdot}_{\mathbb{R}^N}$) and for any $w\in\mathbb{R}^N$ write $w_j:=\pr{w}{e_j}_{\mathbb{R}^N}$. We then have that $\zeta=\sum_{j=1}^{N}\zeta_je_j$ where $\{\zeta_j\}_{j=1}^N$ is a sequence of independent standard normal random variables. 
Using (\ref{ch3:eq:uu}) we have \begin{align*}\norm{\mathcal{C}_0^{-\frac12}u^{(k)}_\delta}_{\mathbb{R}^N}^2&=\norm{\mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y})}_{\mathbb{R}^N}^2+\norm{\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}^2_{\mathbb{R}^N}+2\pr{\mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y})}{\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}_{\mathbb{R}^N}\\ &:=A_N+B_N+C_N.\end{align*} Under Assumptions \ref{ch3:ass1}, we can analyze each term as follows: \begin{enumerate} \item[A)] by Assumption \ref{ch3:ass1}(i), for almost all data $\bm{y}$, this term and all its positive integer powers are bounded uniformly in $N$. \item[B)] the second term can be written as\begin{align*}\norm{\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}_{\mathbb{R}^N}^2&=\pr{\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}{\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}_{\mathbb{R}^N}=\pr{\mathcal{C}_{\lambda,\delta}^\frac12\mathcal{C}_0^{-1}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}{\zeta}_{\mathbb{R}^N}\\&=\delta^{-1}\pr{\mathcal{C}_{\lambda,\delta}^\frac12(\mathcal{C}_{\lambda,\delta}^{-1}-\lambda K^\ast \mathcal{C}_1^{-1} K)\mathcal{C}_{\lambda,\delta}^\frac12\zeta}{\zeta}_{\mathbb{R}^N} =\delta^{-1}\norm{\zeta}_{\mathbb{R}^N}^2-\delta^{-1}\lambda\norm{\mathcal{C}_1^{-\frac12}K\mathcal{C}_{\lambda,\delta}^\frac12\zeta}^2_{\mathbb{R}^N}\\& :=b_{1,N}-b_{2,N}, \end{align*} where \begin{enumerate} \item[b1)] $b_{1,N}=\delta^{-1}\norm{\zeta}^2_{\mathbb{R}^N}=\frac{N}{\delta}+\frac{1}{\delta}\sum_{j=1}^{N}(\zeta_j^2-1):=\frac{N}\delta+\frac{\sqrt{2N}}{\delta}W_{1,N},$ where as $N\to\infty$, $W_{1,N}=\frac1{\sqrt{2N}}\sum_{j=1}^{N}(\zeta_j^2-1)$ converges weakly to a standard normal random variable by the Central Limit Theorem and by Lemma \ref{ch3:lemmom} has all positive integer moments bounded uniformly in $N$; \item[b2)] for $b_{2,N}$ we have by Lemma \ref{ch3:asslem1}(ii) that
$\mathbb{E}^{\zeta}[b_{2,N}]$ is uniformly bounded in $N$. In fact using Lemma \ref{ch3:kollem} together with Lemma \ref{ch3:asslem1}(ii), we get that for any $p\in\mathbb{N}$, $\mathbb{E}^\zeta[b_{2,N}^p]$ is bounded independently of $N$. \end{enumerate} \item[C)] for the third term we have \begin{align*} \pr{\mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y})}{\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12\zeta}_{\mathbb{R}^N}&=\pr{(\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12)^\ast\mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y})}{\zeta}_{\mathbb{R}^N}=\sum_{j=1}^{N}((\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12)^\ast\mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y}))_j\zeta_j.\end{align*} It holds that \begin{align*} \sum_{j=1}^{N}((\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12)^\ast \mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y}))_j^2=\norm{(\mathcal{C}_0^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12)^\ast \mathcal{C}_0^{-\frac12}m_{\lambda,\delta}({y})}^2_{\mathbb{R}^N}, \end{align*} and we claim that the norm on the right hand side is uniformly bounded in $N$ $\bm{y}$-almost surely. Indeed, by (\ref{ch3:eq:prec}), the Cauchy-Schwarz inequality and the non-negative definiteness of the matrix $\mathcal{C}_{0}^{\frac12}K^\ast \mathcal{C}_1^{-1} K\mathcal{C}_{0}^\frac12$, we have\begin{align*} \norm{(\mathcal{C}_{0}^{-\frac12}\mathcal{C}_{\lambda,\delta}^\frac12)^\ast u}^2_{\mathbb{R}^N}&=\pr{\mathcal{C}_{0}^{-\frac12}\mathcal{C}_{\lambda,\delta}\mathcal{C}_{0}^{-\frac12}u}{u}_{\mathbb{R}^N}=\pr{\delta^{-1}(I+\frac{\lambda}{\delta}\mathcal{C}_{0}^{\frac12}K^\ast \mathcal{C}_1^{-1} K\mathcal{C}_{0}^\frac12)^{-1}u}{u}_{\mathbb{R}^N}\\ &\leq\norm{\delta^{-1}(I+\frac{\lambda}{\delta}\mathcal{C}_{0}^{\frac12}K^\ast \mathcal{C}_1^{-1} K\mathcal{C}_{0}^\frac12)^{-1}u}_{\mathbb{R}^N}\norm{u}_{\mathbb{R}^N}\leq\delta^{-1}\norm{u}^2_{\mathbb{R}^N}. 
\end{align*} Combining with Assumption \ref{ch3:ass1}(i) we get the claim and therefore by Lemma \ref{ch3:sumlem} below we get that the third term has $\bm{y}$-almost surely all even moments uniformly bounded in $N$. \end{enumerate} We define $F_N=\upbeta_0+\frac{A_N-b_{2,N}+C_N}2$ and observe that since all terms have bounded moments of every order uniformly in $N$ $\bm{y}$-almost surely, H\"older's inequality secures that $F_N$ also has bounded moments of every order uniformly in $N$ almost surely with respect to $\bm{y}$. \end{proof} \subsection{Negative moments of the rate parameter in the $\delta$ draw}\label{ch3:ssec:ap3}{\ } \begin{lemma}\label{ch3:lemdres} Let $u^{(k)}_\delta$ be as in (\ref{ch3:eq:uu}), for any $\delta, \lambda>0$. Under Assumptions \ref{ch3:ass1}, we have \begin{align*} \mathbb{E}^\zeta\bigg[(\upbeta_0+\frac12\norm{\mathcal{C}_0^{-\frac12}u_\delta^{(k)}}_{\mathbb{R}^N}^2)^{-2i}\bigg]=\mathcal{O}(N^{-2i}), \end{align*} as $N\to\infty$, almost surely with respect to $\bm{y}$, for $i=1,2$. \end{lemma} \begin{proof} Without loss of generality we consider the case $\delta=\lambda=1$ and drop the $\lambda$ and $\delta$ dependence in $u, m$ and $\mathcal{C}$. To de-clutter our notation we also drop the dependence of $m$ on the data. Since $\upbeta_0\geq0$ it suffices to show it for $\upbeta_0=0$. Formally, the random variable $\norm{\mathcal{C}_0^{-\frac12}u^{(k)}}_{\mathbb{R}^N}^2$ behaves like a chi-squared random variable with $N$ degrees of freedom. We estimate the squared norm by a random variable $Y_N$ of known moment generating function $M_{Y_N}(t)$, and use the following formula from \cite{CDF81} for the calculation of negative moments of nonnegative random variables \begin{align}\label{ch3:eq:nm} \mathbb{E}[Y_N^{-l}]=\Gamma(l)^{-1}\int_0^\infty t^{l-1}M_{Y_N}(-t)dt, \; l\in\mathbb{N}. 
\end{align} We begin by showing that there exists a constant $c>0$ independent of $N$ such that $\norm{\mathcal{C}^{-\frac12}\mathcal{C}_0^\frac12v}_{\mathbb{R}^N}\leq c\norm{v}_{\mathbb{R}^N}$ for any $v\in\mathbb{R}^N$. We have, \begin{align*} \norm{\mathcal{C}^{-\frac12}\mathcal{C}_0^\frac12v}_{\mathbb{R}^N}^2&=\pr{\mathcal{C}_0^\frac12\mathcal{C}^{-1}\mathcal{C}_0^\frac12v}{v}_{\mathbb{R}^N}=\pr{(I+\mathcal{C}_0^\frac12K^\ast \mathcal{C}_1^{-1} K\mathcal{C}_0^\frac12)v}{v}_{\mathbb{R}^N}\\&=\norm{v}_{\mathbb{R}^N}^2+\norm{\mathcal{C}_1^{-\frac12}K\mathcal{C}_0^\frac12v}^2_{\mathbb{R}^N}\leq(1+c_2)\norm{v}_{\mathbb{R}^N}^2, \end{align*} by Lemma \ref{ch3:asslem1}(iii). The proved claim gives the estimate \begin{align*} \norm{\mathcal{C}_0^{-\frac12}u^{(k)}}^2_{\mathbb{R}^N}&=\norm{\mathcal{C}_0^{-\frac12}(m+\mathcal{C}^\frac12\zeta)}^2_{\mathbb{R}^N}=\norm{\mathcal{C}_0^{-\frac12}\mathcal{C}^\frac12(\mathcal{C}^{-\frac12}m+\zeta)}_{\mathbb{R}^N}^2\geq c^{-1}\norm{\mathcal{C}^{-\frac12}m+\zeta}^2_{\mathbb{R}^N}, \end{align*} hence it suffices to show that almost surely with respect to $\bm{y}$ we have $\mathbb{E}^\zeta[Y_N^{-2i}]=\mathcal{O}(N^{-2i})$, for $Y_N:=\norm{\mathcal{C}^{-\frac12}m+\zeta}^2_{\mathbb{R}^N}$. Indeed, let $\{e_j\}_{j=1}^N$ be any orthonormal basis of $\mathbb{R}^N$ (with respect to the possibly scaled norm $\smnorm{\cdot}_{\mathbb{R}^N}$), and define $w_j:=\pr{w}{e_j}$ for any $w\in\mathbb{R}^N$. Then we have \begin{align*} Y_N=\sum_{j=1}^{N}((\mathcal{C}^{-\frac12}m)_j+\zeta_j)^2, \end{align*} where $\zeta_j\sim\mathcal{N}(0,1)$ are the mutually independent components of the white noise $\zeta$ and $(\mathcal{C}^{-\frac12}m)_j$ are independent of $\zeta$, therefore $Y_N$ is a non-central chi-squared random variable with $N$ degrees of freedom and non-centrality parameter $p_N:=\sum_{j=1}^{N} (\mathcal{C}^{-\frac12}m)_j^2\geq0$. 
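In the central case $p_N=0$, where $Y_N\sim\chi^2_N$ with $M_{Y_N}(t)=(1-2t)^{-\frac{N}2}$ and the negative moments are known in closed form ($\mathbb{E}[Y_N^{-1}]=\frac1{N-2}$, $\mathbb{E}[Y_N^{-2}]=\frac1{(N-2)(N-4)}$), formula (\ref{ch3:eq:nm}) can be sanity-checked numerically. The sketch below is ours: it substitutes $s=1/(1+2t)$ to obtain a finite integral (this requires $N>2l+2$) and evaluates it with composite Simpson quadrature.

```python
from math import gamma as gamma_fn

def neg_moment_chi2(N, l, n=1000):
    """E[Y**(-l)] for a central chi-squared Y with N degrees of freedom via
    E[Y**(-l)] = Gamma(l)**(-1) * int_0^inf t**(l-1) * (1+2t)**(-N/2) dt.
    The substitution s = 1/(1+2t) turns this into the finite integral
    2**(-l) * int_0^1 (1-s)**(l-1) * s**(N/2-l-1) ds  (needs N > 2l+2),
    evaluated by composite Simpson's rule with n (even) panels."""
    f = lambda s: (1 - s) ** (l - 1) * s ** (N / 2 - l - 1)
    h = 1.0 / n
    acc = f(0.0) + f(1.0)
    acc += 4 * sum(f((2 * i - 1) * h) for i in range(1, n // 2 + 1))
    acc += 2 * sum(f(2 * i * h) for i in range(1, n // 2))
    return (h / 3) * acc / (2 ** l * gamma_fn(l))
```

For $N=10$ the transformed integrands are polynomials of degree at most three, so Simpson's rule reproduces $1/8$ and $1/48$ essentially to machine precision.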
The definition and properties of the non-central chi-squared distribution can be found in \cite{JKB95}, where, in particular, we find the moment generating function of $Y_N$ \begin{align*} M_{Y_N}(t)=(1-2t)^{-\frac{N}2}\exp\big(\frac{p_Nt}{1-2t}\big), \end{align*} hence using (\ref{ch3:eq:nm}) we have for $i=1,2$, \begin{align*} \mathbb{E}^\zeta[Y_N^{-2i}]&=\Gamma(2i)^{-1}\int_0^\infty t^{2i-1}(1+2t)^{-\frac{N}2}\exp\big(\frac{-p_Nt}{1+2t}\big)dt\\ &\leq c\int_0^\infty t^{2i-1}(1+2t)^{-\frac{N}2}dt=\mathcal{O}(N^{-2i}), \end{align*}provided $N>4i$, where the last integral can be calculated by integration by parts. \end{proof} \subsection{Technical lemmas}\label{ch3:ssec:ap4}{\ } \begin{lemma}\label{ch3:sumlem} Let $\{X_j\}$ be a sequence of random variables, such that $X_j=c_jY_j$, where the $Y_j, \;j\in\mathbb{N}$ are independent and identically distributed random variables with finite even moments up to order $2r\in\mathbb{N}$ and zero odd moments, and the $c_j, \;j\in\mathbb{N}$ are deterministic real numbers. Then for any $N\in\mathbb{N}$, \begin{align*}\mathbb{E}[(\sum_{j=1}^{N} X_j)^{2r}]\leq \kappa(\sum_{j=1}^{N} c_j^2)^r,\end{align*}where $\kappa=\mathbb{E}[Y_1^{2r}]>0$ is independent of $N$.
\end{lemma} \begin{proof} Denote by $m_n$ the $2n$-th moment of $Y_1$, $m_n=\mathbb{E}[Y_1^{2n}].$ Observe that since by H\"older's inequality for $0<s\leq t$, $\mathbb{E}[|Y_1|^s]^\frac1s\leq \mathbb{E}[|Y_1|^t]^\frac1t$, we have that for $n_1,...,n_q>0$ such that $n_1+...+n_q=r$ \begin{align*}m_{n_1}...m_{n_q}\leq\mathbb{E}[Y_1^{2r}]^\frac{n_1+...+n_q}{r}=\mathbb{E}[Y_1^{2r}].\end{align*} Combining with the fact that the random variables $Y_j$ are independent with zero odd moments, \begin{align*} \mathbb{E}[(\sum_{j=1}^{N} X_j)^{2r}]&=\sum_{j=1}^{N} c_j^{2r}m_r+\sum_{j_1\neq j_2}^Nc_{j_1}^{2(r-1)}m_{r-1}c_{j_2}^2m_1+\sum_{j_1\neq j_2}^Nc_{j_1}^{2(r-2)}m_{r-2}c_{j_2}^4m_2\\&+...+\sum_{j_1\neq j_2\neq...\neq j_r}^Nc_{j_1}^2c_{j_2}^2...c_{j_r}^2m_1^r\leq m_r(\sum_{j=1}^{N} c_j^2)^r. \end{align*}\end{proof} \begin{lemma}\label{ch3:kollem} For any $p\in\mathbb{N}$, there exists a constant $c=c(p)\geq0$, independent of $N$ such that for any centered Gaussian random variable $x_N$ in $\mathbb{R}^N$, it holds \begin{equation*}\mathbb{E}[\norm{x_N}^{2p}_{\mathbb{R}^N}]\leq c(p)(\mathbb{E}[\norm{x_N}^2_{\mathbb{R}^N}])^p.\end{equation*} \end{lemma} \begin{proof} Direct consequence of \cite[Corollary 2.17]{DZ92}. \end{proof} \begin{lemma}\label{ch3:lemmom} Let $(\gamma_j)_{j\in\mathbb{N}}$ be a sequence of independent standard normal random variables and define $G_N:=\frac{1}{\sqrt{2N}}\sum_{j=1}^{N} (\gamma_j^2-1).$ Then all the positive integer moments of $G_N$ are bounded uniformly in $N$. \end{lemma} \begin{proof} For $k\in\mathbb{N}$, we have $\mathbb{E}[G_{N}^{k}]=\frac{1}{(2N)^{\frac{k}2}}\sum_{j_1,...,j_{k}}^N\mathbb{E}[(\gamma_{j_1}^2-1)...(\gamma_{j_{k}}^2-1)]. $ Since $\gamma_{j}^2-1$ are independent and identically distributed with finite moments of every order, the sum on the right hand side has a dependence on $N$ determined by the total number of non zero terms in the summation. 
By independence and the fact that $\mathbb{E}[\gamma_j^2-1]=0$, all the terms in the sum which contain a term with an index $j_i$ which occurs only once in the product are equal to zero. We thus have that if $k$ is even the sum on the right hand side is of order $N^{\frac{k}2}$, while if $k$ is odd it is of order $N^{\frac{k-1}2}$. In both cases the $k$-th moment of $G_{N}$ is bounded uniformly in $N$. \end{proof} \begin{lemma}\label{ch3:lemgam} Let $\Gamma_N\sim{\rm {Gamma}}(\upalpha+\frac{N}2,\frac{N}2)$, for $\upalpha>0$, and define \begin{align*}\Theta_{N}:=\frac{\Gamma_N-1-\frac{2\upalpha}N}{\sqrt{\frac{2}N+\frac{4\upalpha}{N^2}}}.\end{align*} Then the first four moments of $\Theta_N$ are bounded uniformly in $N$. \end{lemma} \begin{proof} The random variable ${\rm {Gamma}}(a,1)$ has mean and variance $a$ and third and fourth central moments $2a$ and $3a^2+6a$ respectively, \cite{JKB94}. Hence by the scaling property of the gamma distribution, $\Gamma_N\stackrel{\mathcal{L}}{=}\frac{2}N{\rm {Gamma}}(\upalpha+\frac{N}2,1)$ has mean $1+\frac{2\upalpha}N$, variance $\frac{2}N+\frac{4\upalpha}{N^2}$, and third and fourth central moments which are both of order $N^{-2}$. It is thus straightforward to see that $\Theta_{N}$ has mean zero, variance equal to one, and since the denominator in $\Theta_{N}$ is of order $N^{-\frac12}$ it has third and fourth moments which are $\mathcal{O}(N^{-\frac12})$ and $\mathcal{O}(1)$ respectively. 
\end{proof} \begin{lemma}\label{ch3:asslem1} Under Assumptions \ref{ch3:ass1}, we have that for any $\lambda,\delta>0$, \begin{enumerate} \item[i)] ${\rm {Tr}}(\mathcal{C}_1^{-\frac12}K\mathcal{C}_{\lambda,\delta}K^\ast \mathcal{C}_1^{-\frac12})\leq c_2\delta^{-1};$ \item[ii)] $\mathbb{E}^{\theta}\norm{\mathcal{C}_1^{-\frac12}K\mathcal{C}_{\lambda,\delta}^\frac12\theta}_{\mathbb{R}^N}^2\leq c_2\delta^{-1},$ where $\theta$ is a Gaussian white noise in $\mathbb{R}^N$; \item[iii)]$\norm{\mathcal{C}_1^{-\frac12}K\mathcal{C}_0^\frac12}_{2,N}\leq \sqrt{c_2};$ \end{enumerate} where $c_2$ is defined in Assumption \ref{ch3:ass1}(ii). \end{lemma} \begin{proof}{\ } \begin{enumerate} \item[i)]By (\ref{ch3:eq:prec}), we have \begin{align*}\mathcal{C}_1^{-\frac12}K\mathcal{C}_{\lambda,\delta}K^\ast \mathcal{C}_1^{-\frac12}=\delta^{-1}\mathcal{C}_1^{-\frac12}K\mathcal{C}_0^\frac12(I+\frac{\lambda}{\delta}\mathcal{C}_0^\frac12K^\ast\mathcal{C}_1^{-1}K\mathcal{C}_0^\frac12)^{-1}\mathcal{C}_0^\frac12K^\ast\mathcal{C}_1^{-\frac12},\end{align*} hence the fact that for any matrix $A\in\mathbb{R}^{N\times N}$ it holds ${\rm {Tr}}(A(I+cA^\ast A)A^\ast)\leq {\rm {Tr}}(AA^\ast)$ for any $c>0$, together with Assumption \ref{ch3:ass1}(ii) give the claim. \item[ii)]It is well known that for $x\sim\mathcal{N}(0,\Sigma)$, $\mathbb{E}\norm{x}^2_{\mathbb{R}^N}={\rm {Tr}}(\Sigma)$. Since for $\theta\sim\mathcal{N}(0,I)$ we have $\mathcal{C}_1^{-\frac12}K\mathcal{C}_{\lambda,\delta}^\frac12\theta\sim\mathcal{N}(0,\mathcal{C}_1^{-\frac12}K\mathcal{C}_{\lambda,\delta}K^\ast\mathcal{C}_1^{-\frac12})$, the claim follows from part (i). \item[iii)]It is well known that for any matrix $A\in\mathbb{R}^{N\times N}$, the Euclidean norm satisfies $\norm{A}_{2,N}=\norm{A^\ast}_{2,N}=\sqrt{\rho(A^\ast A)}\leq \sqrt{{\rm {Tr}}(A^\ast A)}$ where $\rho(B)$ is the spectral radius of the matrix $B$. 
Hence we have $\norm{\mathcal{C}_1^{-\frac12}K\mathcal{C}_0^\frac12}_{2,N}\leq \sqrt{{\rm {Tr}}(\mathcal{C}_1^{-\frac12}K\mathcal{C}_0 K^\ast\mathcal{C}_1^{-\frac12})}\leq\sqrt{c_2},$ by Assumption \ref{ch3:ass1}(ii).\end{enumerate} \end{proof} \bibliographystyle{plain}
\section{Introduction} \label{Section_Introduction} Recently, data from the XENON1T experiment concerning electronic recoil events in the energy region between 1 and 210 keV have been reported \cite{apr}. An excess of electronic recoil events has been observed in these data in the energy region between 1 and 7 keV. There is by now a growing list of papers proposing explanations for the excess (e.g. \cite{bell,choi,lind,ge,amin,ch,ar,sen,moh,far}). In the present paper we suggest an interpretation of the observed effect in the framework of the neutrino model with three active and three sterile neutrinos \cite{khfo}, in which two of the sterile neutrinos decay. It is known that oscillations of solar, atmospheric, reactor and accelerator active neutrinos can be attributed to the mixing of three neutrino mass states, described by the Pontecorvo--Maki--Nakagawa--Sakata matrix $U_{\rm PMNS}\equiv U = V\!P$, so that $\psi_a^L=\sum_iU_{ai}\psi_i^L$, where $\psi_{a,i}^L$ are left chiral fields with flavor $a$ or mass $m_i$, $a=\{e,\mu,\tau\}$ and $i=\{1,2,3\}$. The matrix $V$ is expressed in the standard parametrization \cite{PDG} for three active neutrinos via the mixing angles $\theta_{ij}$ and the CP phase $\delta\equiv\delta_{\rm CP}$ associated with CP violation in the lepton sector for Dirac or Majorana neutrinos, while $P={\rm diag}\{1,e^{i\alpha},e^{i\beta}\}$, where $\alpha\equiv\alpha_{\rm CP}$ and $\beta\equiv\beta_{\rm CP}$ are phases associated with CP violation only for Majorana neutrinos.
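For reference, in this standard parametrization the matrix $V$ takes the well-known explicit form (with $c_{ij}\equiv\cos\theta_{ij}$ and $s_{ij}\equiv\sin\theta_{ij}$; this is the generic form given in \cite{PDG}, not specific to the model considered here):
\begin{equation*}
V=\begin{pmatrix}
c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta}\\
-s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13}\\
s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13}
\end{pmatrix}.
\end{equation*}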
With the help of high-precision experimental data, the values of the mixing angles $\theta_{ij}$ and of the neutrino mass-squared differences $\Delta m_{21}^2$ and $\Delta m_{31}^2$ have been found \cite{PDG,salas} (where $\Delta m_{ij}^2=m_i^2-m_j^2$). However, only the absolute value of $\Delta m_{31}^2$ is known, so the absolute values of the neutrino masses can be ordered in two ways, namely, as $m_1<m_2<m_3$ or $m_3<m_1<m_2$, which are called normal neutrino mass ordering (NO) and inverted neutrino mass ordering (IO), respectively. Including nonzero neutrino masses results in the Modified Standard Model (MSM) instead of the Standard Model (SM). If we take into account the data of the T2K experiment \cite{Kabe} and the limitations on the sum of the neutrino masses from cosmological observations \cite{Wang}, then the NO-case of the neutrino mass spectrum turns out to be preferable (see also \cite{salas}), although an estimate of the CP-phase $\delta_{\rm CP}$ \cite{salas} and the possibility of realizing the IO-case \cite{kelly} have also been obtained. In what follows we restrict ourselves to the NO-case only, assuming $\delta_{\rm CP}=1.2\pi$. At the same time, there are indications of anomalies in neutrino fluxes for some processes that cannot be explained using oscillation parameters of the three active neutrinos alone. These anomalies include the LSND (or accelerator) anomaly \cite{Atha1996,Agu2001,Agu2013,Agu2018}, the reactor antineutrino anomaly \cite{Mu2011,Me2011,Hu2011,Ko,ale18,ser18} and the gallium (or calibration) anomaly \cite{Abdu2009,Kae2010,Giunti2013}. The anomalies manifest themselves at short distances (more precisely, at distances $L$ such that the parameter $\Delta m^2 L/E$, where $E$ is the neutrino energy, is of order unity).
In the LSND anomaly, an excess of electron antineutrinos in beams of muon antineutrinos, in comparison with the value expected according to the MSM, is observed. Similar results were observed in the MiniBooNE experiments for electron neutrinos and antineutrinos \cite{Agu2013,Agu2018}. The deficit of reactor electron antineutrinos at short distances is called the reactor antineutrino anomaly, while the deficit of electron neutrinos from a radioactive source, observed during calibration of the detectors for the SAGE and GALLEX experiments, is commonly called the gallium or calibration anomaly. In other words, data on the neutrino anomalies refer both to the appearance of an excess of electron neutrinos or antineutrinos in beams of muon neutrinos or antineutrinos, respectively, and to a deficit of electron neutrinos or antineutrinos. These three types of short-baseline (SBL) neutrino anomalies, for which there are indications at present, are attributed to the presence of one or two new neutrinos that do not interact directly with the gauge bosons of the MSM, that is, sterile neutrinos. The characteristic mass scale of the sterile neutrinos used for explanation of the SBL anomalies is about $1$~eV. In principle, the number of additional neutrinos can be arbitrary (see, for example, Refs.~\citen{Bilenky1977,Abazajian2012,Bilenky}). Phenomenological models with sterile neutrinos are usually denoted as (3+$N$) models or, in detail, as ($k$+3+$n$+$m$) models, where $k$ is the number of new neutrinos with masses less than the masses of the active neutrinos, and $n$ and $m$ are the numbers of new neutrinos with masses higher and considerably higher, respectively, than the masses of the active neutrinos. In Section~\ref{Section_OscillationModel}, the main concepts of the (3+3) model (to be exact, the (3+1+2) model) are given, which are based on the results reported in Ref.~\citen{khfo}.
In Section~\ref{xenon_ex}, we present a short description of the data relevant to the electronic recoil events excess in the XENON1T experiment and their interpretation in the context of the (3+1+2) model. In the final Section~\ref{Section_Conclusion} we note that the results of the present paper can help to explain the available XENON1T experimental data, as well as to interpret both the data of SBL experiments on the search for sterile neutrinos and some astrophysical and cosmological data. \section{Basic propositions of the phenomenological (3+1+2) model} \label{Section_OscillationModel} The (0+3+$N$) or (0+3+$m$+$n$) phenomenological neutrino models can be used to describe the SBL anomalies, as well as some astrophysical data, where $N=m+n$ is the number of additional neutrinos. It is desirable that the number of new neutrinos be minimal, so the most common are the (3+1) and (3+2) models \cite{Kopp2013} ((3+1) is used instead of (0+3+1) for short). However, if we apply the principle of extended symmetry of weak interactions, then, for example, for the left-right symmetry it is necessary to consider (3+3) models \cite{Conrad2013,Zysina2014,KhruFom2016}. So, below we use the (3+1+2) model to account for the effects of light and heavy sterile neutrinos. This model includes three active neutrinos $\nu_a$ ($a=e,\mu,\tau$) and three new neutrinos: a sterile neutrino $\nu_s$, a hidden neutrino $\nu_h$ and a dark neutrino $\nu_d$. Thus six neutrino flavour states and six neutrino mass states are present in the (3+1+2) model \cite{khfo}. Hence below we consider the $6\!\times\!6$ mixing matrix, which can be called the generalized mixing matrix $U_{\rm mix}$, or the generalized Pontecorvo--Maki--Nakagawa--Sakata matrix $U_{\rm GPMNS}\equiv U_{\rm mix}$. This matrix can be represented as the matrix product $V\!P$, where $P$ is a diagonal matrix with Majorana CP-phases $\phi_i$, $i=1,\dots,5$, namely, $P={\rm diag}\{1,e^{i\phi_1},\dots,e^{i\phi_5}\}$.
We deal only with a particular type of matrix $V$. To keep continuity of notation, we denote Dirac CP-phases as $\delta_i$ and $\kappa_j$, and mixing angles as $\theta_i$ and $\eta_j$, with $\delta_1\equiv\delta_{\rm CP}$, $\theta_1\equiv\theta_{12}$, $\theta_2\equiv\theta_{23}$ and $\theta_3\equiv\theta_{13}$. For compactness of the formulas, we introduce the symbols $h_s$ and $h_{i'}$ for left flavor fields and left mass fields, respectively, where the index $s$ runs over labels distinguishing the $\nu_s$, $\nu_h$ and $\nu_d$ fields among the $h_s$, and the index $i'$ runs over $4$, $5$ and $6$. The common $6\!\times\!6$ mixing matrix $U_{\rm mix}$ can then be expressed through $3\!\times\!3$ matrices $R$, $T$, $V$ and $W$ as follows \begin{equation} \left(\begin{array}{c}\nu_a\\ h_s \end{array}\right)= U_{\rm mix}\left(\begin{array}{c}\nu_i\\ h_{i'}\end{array}\right)\equiv \left(\begin{array}{cc}R&T\\ V&W\end{array}\right) \left(\begin{array}{c}\nu_i\\ h_{i'}\end{array}\right). \label{eq_Umix} \end{equation} We represent the matrix $R$ in the form $R=\varkappa U_{\rm PMNS}$, where $\varkappa=1-\epsilon$ and $\epsilon$ is a small quantity, while the matrix $T$ in equation~(\ref{eq_Umix}) should also be small as compared with the known unitary $3\!\times\!3$ mixing matrix of active neutrinos $U_{\rm PMNS}$ ($U_{\rm PMNS}U_{\rm PMNS}^+=I$). Thus, when choosing the appropriate normalization, the active neutrinos mix, as they should in the MSM, according to the Pontecorvo--Maki--Nakagawa--Sakata matrix $U_{\rm PMNS}$. Below we use the notation $U_{\rm PMNS}\equiv U$. At the current stage of the study, it is quite reasonable to restrict our consideration to the minimal number of mixing matrix parameters able to explain the available (still rather scattered) experimental data attributed to the SBL anomalies. The transition to the full matrix with all parameters can be made in the future, when sufficiently reliable experimental results have been obtained.
So, we will consider only some particular cases, rather than the most general form of the matrix $U_{\rm mix}$. Bearing in mind that, in accordance with the available astrophysical and laboratory data, the mixing between active and new neutrinos is small, we choose the matrix $T$ as $T=\sqrt{1-\varkappa^2}\,a$, where $a$ is an arbitrary unitary $3\!\times\!3$ matrix, that is, $aa^+=I$. The matrix $U_{\rm mix}$ can now be written in the form \begin{equation} U_{\rm mix}=\left(\begin{array}{cc}R&T\\ V&W\end{array}\right)\equiv \left(\begin{array}{cc}\varkappa U&\sqrt{1-\varkappa^2}\,a\\ \sqrt{1-\varkappa^2}\,bU&\varkappa c \end{array}\right), \label{eq_Utilde} \end{equation} where $b$ is also an arbitrary unitary $3\!\times\!3$ matrix ($bb^+=I$), and $c=-ba$. With these conditions, the matrix $U_{\rm mix}$ is unitary ($U_{\rm mix}U_{\rm mix}^+=I$). In particular, we will use the following matrices $a$ and $b$: \begin{equation} a=\left(\begin{array}{lcr}\,\,\,\,\,\cos\eta_2 & \sin\eta_2 & 0\\ -\sin\eta_2 & \cos\eta_2 & 0\\ \qquad 0 & 0 & e^{-i\kappa_2}\end{array}\right),\quad b=-\left(\begin{array}{lcr}\,\,\,\,\,\cos\eta_1 & \sin\eta_1 & 0\\ -\sin\eta_1 & \cos\eta_1 & 0\\ \qquad 0 & 0 & e^{-i\kappa_1}\end{array}\right), \label{eq_matricesab} \end{equation} where $\kappa_1$ and $\kappa_2$ are mixing phases between active and sterile neutrinos, whereas $\eta_1$ and $\eta_2$ are mixing angles between them. The matrix $a$ in the form of equation~(\ref{eq_matricesab}) was proposed in Ref.~\citen{KhruFom2016}. To make our calculations more specific, we will use the following sample values for the new mixing parameters: \begin{equation} \kappa_1=\kappa_2=-\pi/2,\quad \eta_1=5^{\circ},\quad \eta_2=\pm 30^{\circ}, \label{eq_etakappa} \end{equation} and assume that the small parameter $\epsilon$ satisfies at least the condition $\epsilon\lesssim 0.03$. The neutrino masses will be given by a normally ordered set of values $\{m\}=\{m_i,m_{i'}\}$.
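The unitarity claim for this block construction can be verified numerically. The sketch below (assuming numpy; the identity matrix stands in for the unitary $U$, which does not affect the check) builds $U_{\rm mix}$ with $c=-ba$ and the sample values $\kappa_1=\kappa_2=-\pi/2$, $\eta_1=5^{\circ}$, $\eta_2=30^{\circ}$:

```python
import numpy as np

def block_2x2_phase(eta, kappa):
    # 3x3 block of the form used for the matrices a and b in the text.
    c, s = np.cos(eta), np.sin(eta)
    return np.array([[c,    s,   0.0],
                     [-s,   c,   0.0],
                     [0.0, 0.0, np.exp(-1j * kappa)]])

def u_mix(U, eps, eta1, eta2, kappa1, kappa2):
    kap = 1.0 - eps                       # varkappa = 1 - epsilon
    a = block_2x2_phase(eta2, kappa2)
    b = -block_2x2_phase(eta1, kappa1)
    c = -b @ a
    t = np.sqrt(1.0 - kap**2)
    return np.block([[kap * U,      t * a],
                     [t * (b @ U),  kap * c]])

# Sample values from the text; any unitary U works, identity keeps it simple.
Umix = u_mix(np.eye(3), eps=0.03,
             eta1=np.deg2rad(5.0), eta2=np.deg2rad(30.0),
             kappa1=-np.pi / 2, kappa2=-np.pi / 2)
unitarity_err = float(np.max(np.abs(Umix @ Umix.conj().T - np.eye(6))))
```

The off-diagonal blocks of $U_{\rm mix}U_{\rm mix}^+$ cancel precisely because $c=-ba$, so the result is unitary for any unitary $U$, $a$ and $b$.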
For the active neutrinos we will use the neutrino mass estimates that were proposed in Refs.~\citen{Zysina2014,KhruFom2016,PAZH2016} for the NO-case (in units of eV) and that do not contradict the known experimental data up to now: \begin{equation} m_1\approx 0.0016, \quad m_2\approx 0.0088, \quad m_3\approx 0.0497\,. \label{eq_activmasses} \end{equation} The values of the mixing angles $\theta_{ij}$ of active neutrinos that determine the Pontecorvo--Maki--Nakagawa--Sakata mixing matrix will be taken from the relations $\sin^2\theta_{12}\approx 0.318$, $\sin^2\theta_{23}\approx 0.566$ and $\sin^2\theta_{13}\approx 0.0222$, which are obtained from the processing of experimental data for NO and given in Ref.~\citen{salas}. In Ref.~\citen{khfo} the Light Mass Option version (LMO1 version) of the (3+1+2) model has been considered, with the $m_4$, $m_5$, and $m_6$ mass values: \begin{equation} \{m\}_{\rm LMO1}=\{1.1,\,1.5\!\times\!10^3,\,7.5\!\times\!10^3 \}. \label{eq_LMO1} \end{equation} In order to reproduce in more detail the electron energy spectrum observed in the XENON1T experiment, in what follows we choose a comparatively larger mass $m_5$ than the corresponding value given in Ref.~\citen{khfo} (see (\ref{eq_LMO1})). The $m_4$ value and, practically, the $m_6$ value are unchanged; furthermore, the $m_4$ value meets currently available constraints~\cite{archi,vag}. Thus, below we will use the following $m_4$, $m_5$, and $m_6$ mass values for the sterile mass states: \begin{equation} \{m\}_{\rm LMO}=\{1.1,\,3.4\!\times\!10^3,\,7.6\!\times\!10^3 \}. \label{eq_LMO} \end{equation} With the LMO set of mass values above it remains possible to explain the appearance of the anomalies at short distances in neutrino data \cite{Gariazzo2017}.
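As a quick consistency check (a sketch of our own, taking the commonly quoted global-fit values $\Delta m_{21}^2\approx 7.4\times 10^{-5}$~eV$^2$ and $\Delta m_{31}^2\approx 2.5\times 10^{-3}$~eV$^2$ as reference), the active-neutrino mass estimates above reproduce the measured mass-squared differences:

```python
# Active-neutrino mass estimates from the text (NO-case, in eV).
m1, m2, m3 = 0.0016, 0.0088, 0.0497

# Mass-squared differences Delta m_ij^2 = m_i^2 - m_j^2.
dm21_sq = m2**2 - m1**2   # ~7.5e-5 eV^2
dm31_sq = m3**2 - m1**2   # ~2.5e-3 eV^2
```

Both values land within the experimentally allowed ranges, consistent with the claim that these estimates do not contradict known data.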
Note that sterile neutrinos with masses of several keV are also used for the interpretation of some astrophysical data \cite{asch18}, which adds considerable support for our choice of the $m_5$ mass value as $3.4$~keV and the $m_6$ mass value as $7.6$~keV. \section{Data relevant to the electronic recoil events excess in the XENON1T experiment and their interpretation in the context of the (3+1+2) model} \label{xenon_ex} Recently the XENON1T experiment data have been reported on the observation of an excess of electronic recoil events in the energy region between 1 and 7 keV \cite{apr}. The XENON1T experiment operated underground at the INFN Laboratori Nazionali del Gran Sasso. This experiment, employing a liquid-xenon time projection chamber with a 2.0-tonne active target, was primarily designed to detect Weakly Interacting Massive Particle (WIMP) dark matter. A particle interaction within the detector produces both prompt scintillation and delayed electroluminescence signals. These light signals are detected by arrays of photomultiplier tubes on the top and bottom of the active volume, and are used to determine the deposited energy and interaction position of an event. The ratio between the delayed electroluminescence signals and the prompt scintillation signals is used to distinguish electronic recoils, produced by, e.g., gamma rays or beta electrons, from nuclear recoils, produced by, e.g., neutrons or WIMPs, allowing for a degree of particle identification. In what follows we focus on the possibility of describing, in the framework of the (3+1+2) model considered above, the excess of electronic recoil events observed in the XENON1T experiment. We suggest that this excess can be naturally attributed to the interaction of electrons with dark bosons arising for the most part in decay processes of hidden and dark neutrinos. Note that these processes can also produce photons, though only to a small extent.
A plausible mechanism for photon appearance can be a kinetic mixing, only to a small extent, between a photon and a dark boson \cite{hold}. It is assumed that hidden and dark neutrinos originally possess nonrelativistic velocities and that the dark boson has a very small mass. So dark bosons and photons can be emitted in transitions among the mass components of dark, hidden and sterile neutrinos, assuming that the sterile neutrino, which is mainly the $m_4$ mass state, is practically stable. Thus, using this approach, we predict three peaks in the 1 -- 7 keV energy region of electronic recoil events, at energies of about $1.7$ keV, $3$ keV and $3.8$ keV. This prediction can be tested both in the XENON1T experiment, once a high-statistics data set becomes available, and in future experiments of this kind. Note that the LMO variant of the (3+1+2) neutrino model used above, with the decaying heavy neutrinos and the light stable sterile neutrino, remains operable for the description of the SBL neutrino anomalies (see, e.g., \cite{khfo,mona,dego,abdu}). \section{Discussion and conclusions} \label{Section_Conclusion} In this paper, we use the phenomenological (3+1+2) neutrino model with three active and three sterile neutrinos to describe the excess of electronic recoil events in the 1 -- 7 keV energy region found in the data of the XENON1T experiment \cite{apr}. This excess can be naturally attributed to the interaction of electrons with dark bosons and photons emitted in decays of the sterile neutrino mass states with the masses $m_5=3.4$ keV and $m_6=7.6$ keV, while the sterile neutrino mass state with the mass $m_4=1.1$ eV is practically stable. In the context of this approach, three peaks in the 1 -- 7 keV energy region of electronic recoil events are predicted at energies of about $1.7$ keV, $3$ keV and $3.8$ keV. These predictions will be tested both in the XENON1T experiment and in future experiments, such as the upcoming PandaX-4T \cite{panda}, LZ \cite{lz} and XENONnT \cite{apr} experiments.
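The quoted peak positions are consistent with simple two-body decay kinematics: for a parent neutrino of mass $M$ at rest emitting a (nearly) massless boson and a daughter of mass $m$, the boson energy is $E=(M^2-m^2)/(2M)$. The sketch below is our own illustrative reconstruction (not taken from the paper) for the transitions $m_5\to m_4$, $m_6\to m_5$ and $m_6\to m_4$:

```python
# Two-body decay kinematics: parent of mass M at rest emits a (nearly)
# massless boson plus a daughter of mass m; boson energy E = (M^2 - m^2)/(2M).
def boson_energy(M, m):
    return (M**2 - m**2) / (2.0 * M)

m4, m5, m6 = 1.1e-3, 3.4, 7.6   # masses in keV; m4 ~ 1.1 eV is negligible here

E_54 = boson_energy(m5, m4)     # m5 -> m4 transition, ~1.7 keV
E_65 = boson_energy(m6, m5)     # m6 -> m5 transition, ~3.0 keV
E_64 = boson_energy(m6, m4)     # m6 -> m4 transition, ~3.8 keV
```

The three energies come out near 1.7, 3.0 and 3.8 keV, matching the peaks quoted in the text.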
The possible existence of the three massive sterile neutrinos may have a perceptible influence on some phenomena in neutrino physics, astrophysics and cosmology. By way of illustration, we refer to the possibility of interpreting the SBL anomalies data in the framework of the (3+1+2) model with sterile neutrinos \cite{khfo}. Moreover, the incorporation of two decaying sterile neutrinos with $3.4$~keV and $7.6$~keV masses allows us to predict the amplification or appearance of lines in the range of several keV in the gamma spectra of some astrophysical sources. The presence of a stable sterile neutrino mass state with a mass of about $1$~eV will affect the value of the important cosmological parameter $\Delta N_{\rm eff}$; besides, it is possible that to some extent this can matter for the resolution of the issue concerning the $H_0$ tension \cite{archi,vag}.
\section{INTRODUCTION} Ordinary Differential Equations (ODEs) provide a universal language to describe deterministic systems via equations that determine how variables change in time as a function of other variables. They provide an immensely popular and highly successful modelling framework, with applications in many diverse disciplines, such as physics, chemistry, biology, and economics. They are \emph{causal} in the sense that at least in principle they allow us to reason about interventions: any external intervention in a system---e.g., moving an object by applying a force---can be modelled using modified differential equations by, for instance, including suitable forcing terms. In practice, of course, this may be arbitrarily difficult. Structural Causal Models (SCMs, also known as Structural Equation Models) are another language capable of describing causal relations and interventions and have been widely applied in the social sciences, economics, genetics and neuroscience \citep{Pearl2009, bollen2014structural}. One of the successes of SCMs over other causal frameworks such as causal Bayesian networks, for instance, has been their ability to express cyclic causal models \citep{spirtes1995directed,mooij2011causal,hyttinen2012learning,voortman2012learning,lacerda2012discovering,Bongers++_1611.06221v2}. We view SCMs as an intermediate level of description between the highly expressive differential equation models and the probabilistic, non-causal models typically used in machine learning and statistics. This intermediate level of description ideally retains the benefits of a data-driven statistical approach while still allowing a limited set of causal statements about the effect of interventions. While it is well understood how an SCM induces a statistical model \citep{Bongers++_1611.06221v2}, much less is known about how a differential equation model---our most fundamental level of modelling---can imply an SCM in the first place.
This is an important question because if we are to have models of a system on different levels of complexity, we should understand how they relate and the conditions under which they are consistent with one another. Indeed, recent work has begun to address the question of how SCMs arise naturally from more fundamental models by showing how, under strong assumptions, SCMs can be derived from an underlying discrete time difference equation or continuous time ODE \citep{iwasaki1994causality,dash2005restructuring,lacerda2012discovering,voortman2012learning,MooJanSch13,SokolHansen2014}. With the exception of \citep{voortman2012learning} and \citep{SokolHansen2014}, each of these methods assumes that the dynamical system comes to a static equilibrium that is independent of initial conditions, with the derived SCM describing how this equilibrium changes under intervention. More recently, the more general case in which the equilibrium state may depend on the initial conditions has been addressed \citep{BongersMooij_1803.08784,BlomMooij_1805.06539}. If the assumption that the system reaches a static equilibrium is reasonable for a particular system under study, the SCM framework can be useful. Although the derived SCM then lacks information about the (possibly rich) transient dynamics of the system, if the system equilibrates quickly then the description of the system as an SCM may be a more convenient and compact representation of the causal structure of interest. By making assumptions on the dynamical system and the interventions being made, the SCM effectively allows us to reason about a `higher level' qualitative description of the dynamics---in this case, the equilibrium states. There are, however, two major limitations that stem from the equilibrium assumption.
First, for many dynamical systems the assumption that the system settles to a unique equilibrium, either in its observational state or under intervention, may be a bad approximation of the actual system dynamics. Second, this framework is only capable of modelling interventions in which a subset of variables are clamped to fixed values (\emph{constant} interventions). Even for rather simple physical systems such as a forced damped simple harmonic oscillator, these assumptions are violated. Motivated by these observations, the work presented in this paper tries to answer the following questions: (i) Can the SCM framework be extended to model systems that do not converge to an equilibrium? (ii) If so, what assumptions need to be made on the ODE and interventions so that this is possible? Since SCMs are used in a variety of situations in which the equilibrium assumption does not necessarily hold, we view these questions as important in order to understand when they are indeed theoretically grounded as modelling tools. The main contribution of this paper is to show that the answer to the first question is `Yes' and to provide sufficient conditions for the second. We do this by extending the SCM framework to encompass time-dependent dynamics and interventions and studying how such objects can arise from ODEs. We refer to this as a \emph{Dynamic SCM (DSCM)} to distinguish it from the static equilibrium case for the purpose of exposition, but note that this is conceptually the same as an SCM on a fundamental level. Our construction draws inspiration from the approach of \cite{MooJanSch13}, that was recently generalized to also incorporate the stochastic setting \citep{BongersMooij_1803.08784}. Here, we adapt the approach by replacing the static equilibrium states by continuous-time \emph{trajectories}, considering two trajectories as equivalent if they do not differ asymptotically. 
Note that whilst this paper applies a causal perspective to the study of dynamical systems, the goal of this paper is not to derive a learning algorithm which can be applied to time series data. In this sense, we view our main results as `orthogonal' to methods such as Granger causality \citep{granger1969investigating} and difference-in-differences \citep{card1993minimum} which aim to infer causal effects given time-series observations of a system. We envision that DSCMs may be used for causal analysis of dynamical systems that undergo periodic motion. Although these systems have been mostly ignored so far in the field of causal discovery, they have been studied extensively in the field of control theory. Some examples of systems that naturally exhibit oscillatory stationary states and where our framework may be applicable are EEG signals, circadian signals, seasonal influences, chemical oscillations, electric circuits, aerospace vehicles, and satellite control. We refer the reader to \citep{bittanti2009periodic} for more details on these application areas from the perspective of periodic control theory. Since the DSCM derived for a simple harmonic oscillator (see Example \ref{example:dscm}) is already quite complex, we leave the task of deriving methods that estimate the parameters from data for future work. Rather, our current work presents a first necessary theoretical step that needs to be done before applications of this theory can be developed, enabling the development of data-driven causal discovery and prediction methods for oscillatory systems, and possibly even more general systems, down the road. The remainder of this paper is organised as follows. In Section~\ref{section:ode}, we introduce notation to describe ODEs. In Section~\ref{section:interventions}, we describe how to apply the notion of an intervention on an ODE to the dynamic case. 
In Section~\ref{section:dynamic-stability}, we define regularity conditions on the asymptotic behaviour of an ODE under a set of interventions. In Section~\ref{section:dscm}, we present our main result: subject to conditions on the dynamical system and interventions being modelled, a \emph{Dynamic SCM} can be derived that allows one to reason about how the asymptotic dynamics change under interventions on variables in the system. We conclude in Section~\ref{section:discussion}. \vspace{-0.15cm} \section{ORDINARY DIFFERENTIAL EQUATIONS}\label{section:ode} \vspace{-0.1cm} Let ${\mathcal{I}= \{1,\ldots,D\}}$ be a set of variable labels. Consider time-indexed variables ${X_i(t) \in \mathcal{R}_i}$ for ${i \in \mathcal{I}}$, where ${\mathcal{R}_i \subseteq \mathbb{R}}$ and ${t\in\mathbb{R}_{\geq 0} = [0,\infty)}$. For ${I \subseteq \mathcal{I}}$, we write ${\mathbf{X}_I(t) \in \prod_{i\in I} \mathcal{R}_i}$ for the tuple of variables ${(X_i(t))_{i\in I}}$. By an ODE ${\mathcal{D}}$, we mean a collection of $D$ coupled ordinary differential equations with initial conditions $\mathbf{X}^{(k)}_0$: \begin{align*} \mathcal{D}: \: \left\lbrace \begin{array}{ll} f_i(X_i,\mathbf{X}_{\mathtt{pa}(i)})(t) = 0, \quad X_i^{(k)}(0) = (\mathbf{X}^{(k)}_0)_i, \\ \hfill 0\leq k \leq n_i-1, \quad i \in \mathcal{I}, \end{array} \right. \end{align*} where the $i$th differential equation determines the evolution of the variable $X_i$ in terms of $\mathbf{X}_{\mathtt{pa}(i)}$, where $\mathtt{pa}(i) \subseteq \mathcal{I}$ are the \emph{parents of $i$}, and $X_i$ itself, and where $n_i$ is the order of the highest derivative $X^{(k)}_i$ of $X_i$ that appears in equation $i$. Here, $f_i$ is a functional that can include time-derivatives of its arguments. We think of the $i$th differential equation as modelling the \emph{causal mechanism} that determines the dynamics of the effect $X_i$ in terms of its direct causes $\mathbf{X}_{\mathtt{pa}(i)}$. 
One possible way to write down an ODE is to canonically decompose it into a collection of first order differential equations, such as is done in \cite{MooJanSch13}. We choose to present our ODEs as ``one equation per variable'' rather than splitting up the equations due to complications that would otherwise occur when considering time-dependent interventions (cf.\ Section~\ref{section:ode_interventions}). \begin{figure*} \centering \begin{subfigure}{0.5\textwidth} \begin{tikzpicture}[every node/.style={draw,outer sep=0pt,thick}] \tikzstyle{mass}=[circle,fill=black,minimum width=0.1cm] \tikzstyle{spring}=[thick,decorate,decoration={zigzag,pre length=0.3cm,post length=0.3cm,segment length=6}] \tikzstyle{ground}=[fill,pattern=north east lines,draw=none,minimum width=0.75cm,minimum height=0.3cm] \node (leftwall) [ground, rotate=-90, minimum width=1.5cm,yshift=-2cm,label=left:{\small $X_0=0$}] {}; \draw (leftwall.north east) -- (leftwall.north west); \node (M1) [mass,xshift=0cm,label={$X_1$}] {}; \node (M2) [mass,xshift=2cm,label={$X_2$}] {}; \draw [spring] (leftwall) -- (M1.west); \draw [spring] (M1.east) -- (M2.west); \node (k0) [draw=none, yshift=-0.4cm] at ($(leftwall)!0.5!(M1)$){\small $k_0$}; \node (k1) [draw=none, yshift=-0.4cm] at ($(M1)!0.5!(M2)$){\small $k_1$}; \node (rightwall) [ground, rotate=90, minimum width=1.5cm, yshift = -4cm, label=right:{\small $X_3 = L$}]{}; \draw (rightwall.north west) -- (rightwall.north east); \draw [spring] (M2.east) -- (rightwall); \node (k2) [draw=none, yshift=-0.4cm] at ($(M2)!0.5!(rightwall)$){\small $k_2$}; \end{tikzpicture} \caption{Mass-spring system\label{fig:mass-spring}} \end{subfigure} \begin{subfigure}{0.2\textwidth} \centering \begin{tikzpicture}[->,>=stealth',auto,node distance=2.5cm, thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}] \node[main node] (1) {$X _1$}; \node[main node] (2) [right of=1] {$X_2$}; \draw [->] (1) to [out=30,in=150] (2); \draw [->] (1) edge [loop above] (1); \draw [->] (2) 
to [out=210,in=-30] (1); \draw [->] (2) edge [loop above] (2); \end{tikzpicture} \caption{$\mathcal{D}$\label{fig:subfig:graphical_model_observational}} \end{subfigure} \hfill \begin{subfigure}{0.2\textwidth} \centering \begin{tikzpicture}[->,>=stealth',auto,node distance=2.5cm, thick,main node/.style={circle,draw,font=\sffamily\Large\bfseries}] \node[main node, fill,pattern=north west lines, pattern color = lightgray] (1) {$X_1$}; \node[main node] (2) [right of=1] {$X_2$}; \draw [->] (1) to [out=0,in=180] (2); \draw [->] (2) edge [loop above] (2); \end{tikzpicture} \caption{$\mathcal{D}_{\mathtt{do}(X_1 = \zeta_1)}$\label{fig:subfig:graphical_model_intervened}} \end{subfigure} \caption{(a) The mass-spring system of Example \ref{example:3-mass-spring} with $D=2$; (b--c) graphs representing the causal structure of the mass-spring system for (b) the observational system, (c) after the intervention on variable $X_1$ described in Example \ref{example:3-mass-spring-intervened}. As a result of the intervention, $X_1$ is not causally influenced by any variable, while the causal mechanism of $X_2$ remains unchanged. \label{fig:graphical_models_mass_spring}} \end{figure*} \begin{example}\label{example:3-mass-spring} Consider a one-dimensional system of $D$ particles of mass ${m_i \: (i=1,\ldots,D)}$ with positions $X_i$ coupled by springs with natural lengths $l_i$ and spring constants $k_i$, where the $i$th spring connects the $i$th and $(i+1)$th masses and the outermost springs have fixed ends (see Figure \ref{fig:mass-spring}). Assume further that the $i$th mass undergoes linear damping with coefficient $b_i$. 
Denoting by $\dot{X}_i$ and $\ddot{X}_i$ the first and second time derivatives of $X_i$ respectively, the equation of motion for the $i$th variable is given by \begin{align*} m_i \ddot{X}_i(t) = &k_i[X_{i+1}(t) - X_i(t) - l_i] \\ &- k_{i-1}[X_i(t) - X_{i-1}(t) - l_{i-1}] - b_i \dot{X}_i(t) \end{align*} where we take ${X_0 = 0}$ and ${X_D=L}$ to be the fixed positions of the end springs. For the case that ${D=2}$, we can write the system of equations as: \begin{align*} \mathcal{D}: \left\lbrace \begin{array}{lll} 0 = m_1 \ddot{X}_1(t) + b_1 \dot{X}_1(t) + (k_1 + k_0) X_{1}(t) \\ \hspace{1cm} - k_1X_{2}(t) - k_{0} l_{0}+ k_1 l_1 \,, \quad \\ \\ 0 = m_2 \ddot{X}_2(t) + b_2 \dot{X}_2(t) + (k_2 + k_{1}) X_{2}(t) \\ \hspace{1cm} - k_2L - k_{1} X_{1}(t) - k_{1} l_1 + k_2 l_2 \,, \\ \\ X_i^{(k)}(0) = (\mathbf{X}^{(k)}_0)_i \quad k \in \{0,1\}, \: i \in \{1,2\}\,.\\ \end{array} \right. \end{align*} \end{example} We can represent the functional dependence structure between variables implied by the functions $f_i$ with a graph, in which variables are nodes and arrows point ${X_j \longrightarrow X_i}$ if ${j \in \mathtt{pa}(i)}$. Self loops ${X_i \longrightarrow X_i}$ exist if $X_i^{(k)}$ appears in the expression of $f_i$ for more than one value of $k$. This is illustrated for the system described in Example \ref{example:3-mass-spring} in Figure \ref{fig:subfig:graphical_model_observational}. \vspace{-0.15cm} \section{INTERVENTIONS ON ODES}\label{section:interventions} We interpret ODEs as \emph{causal} models. In particular, we consider the graph expressing the functional dependence structure to be the causal graph of the system, with an edge between $X_i$ and $X_j$ iff $X_i$ is a direct cause of $X_j$ (in the context of all variables $\mathbf{X}_{\mathcal{I}}$). In this section, we will formalize this causal interpretation by studying interventions on the system.
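To make the equilibration behaviour discussed in the introduction concrete for the mass-spring example above, the following sketch (assuming numpy; parameter values are illustrative, not from the paper) integrates the damped $D=2$ system with a hand-rolled classical RK4 step and checks that it converges to the static equilibrium obtained by dropping the time derivatives:

```python
import numpy as np

# Illustrative parameters (not from the paper): unit masses, moderate damping,
# natural lengths that happen to sum to the wall separation L.
m1 = m2 = 1.0
b1 = b2 = 0.5
k0, k1, k2 = 1.0, 2.0, 1.0
l0, l1, l2 = 1.0, 1.0, 1.0
L = 3.0

def rhs(y):
    # State y = (X1, X1dot, X2, X2dot); equations of motion from the example.
    x1, v1, x2, v2 = y
    a1 = (k1 * (x2 - x1 - l1) - k0 * (x1 - l0) - b1 * v1) / m1
    a2 = (k2 * (L - x2 - l2) - k1 * (x2 - x1 - l1) - b2 * v2) / m2
    return np.array([v1, a1, v2, a2])

# Classical RK4 integration until the transient has died out (up to t = 200).
y, h = np.array([0.3, 0.0, 2.5, 0.0]), 0.01
for _ in range(20_000):
    s1 = rhs(y)
    s2 = rhs(y + 0.5 * h * s1)
    s3 = rhs(y + 0.5 * h * s2)
    s4 = rhs(y + h * s3)
    y = y + (h / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)

# Static equilibrium: drop the time derivatives and solve the linear system.
Kmat = np.array([[k0 + k1, -k1], [-k1, k1 + k2]])
x_eq = np.linalg.solve(Kmat, np.array([k0 * l0 - k1 * l1,
                                       k2 * L - k2 * l2 + k1 * l1]))
```

With these particular natural lengths the equilibrium is $(X_1, X_2) = (1, 2)$, i.e. all springs at their natural length, and the damped trajectories settle there regardless of the (moderate) initial conditions.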
\vspace{-0.15cm} \subsection{TIME-DEPENDENT PERFECT INTERVENTIONS} Usually in the causality literature, by a \emph{perfect intervention} it is meant that a variable is clamped to take a specific given value. The natural analogue of this in the time-dependent case is a perfect intervention that forces a variable to take a particular \emph{trajectory}. That is, given a subset ${I \subseteq \mathcal{I}}$ and a function ${\pmb{\zeta}_I : \mathbb{R}_{\geq0} \longrightarrow \prod_{i \in I}\mathcal{R}_i}$, we can intervene on the subset of variables $\mathbf{X}_I$ by forcing ${\mathbf{X}_I(t) = \pmb{\zeta}_I(t) \: \forall t \in \mathbb{R}_{\geq0}}$. Using Pearl's do-calculus notation \citep{Pearl2009} and for brevity omitting the $t$, we write ${\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ for this intervention. Such interventions are more general objects than those of the equilibrium or time-independent case, but in the specific case that we restrict ourselves to constant trajectories the two notions coincide. \vspace{-0.15cm} \subsection{SETS OF INTERVENTIONS}\label{sec:sets_of_interventions} Recall that when modelling equilibrating dynamical systems under constant interventions, the set of interventions modelled coincides with the asymptotic behaviour of the system. We will generalise this relation to non-equilibrating behaviour. The Dynamic SCMs that we will derive will describe the asymptotic dynamics of the ODE and how they change under different interventions. If we want to model `all possible interventions', then the resulting asymptotic dynamics that can occur are arbitrarily complicated. The idea is to fix a simpler set of interventions and derive an SCM that models only these interventions, resulting in a model that is simpler than the original ODE but still allows us to reason about interventions we are interested in. 
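As an illustration of such a time-dependent perfect intervention on the mass-spring example, the sketch below (assuming numpy; parameter values are illustrative, not from the paper) clamps $X_1$ to a sinusoidal trajectory $\zeta_1$ and integrates only the causal mechanism of $X_2$; after the transient, $X_2$ settles into a periodic response with the driving period rather than a static equilibrium:

```python
import numpy as np

# do(X1 = zeta_1): clamp X1 to a prescribed trajectory and integrate only
# X2's causal mechanism (illustrative parameters).
m2_, b2_, k1_, k2_ = 1.0, 0.5, 2.0, 1.0
l1_, l2_, L_ = 1.0, 1.0, 3.0
omega = np.pi / 2          # driving frequency, period T = 4

def zeta1(t):
    # The intervened (forced) trajectory of X1.
    return 1.0 + 0.2 * np.sin(omega * t)

def rhs2(t, y):
    x2, v2 = y
    a2 = (k2_ * (L_ - x2 - l2_) - k1_ * (x2 - zeta1(t) - l1_) - b2_ * v2) / m2_
    return np.array([v2, a2])

# RK4 in time; record X2 to inspect the asymptotic (periodic) response.
y, h, t = np.array([2.0, 0.0]), 0.005, 0.0
x2_trace = []
for _ in range(60_000):    # integrate up to t = 300
    s1 = rhs2(t, y)
    s2 = rhs2(t + h / 2, y + h / 2 * s1)
    s3 = rhs2(t + h / 2, y + h / 2 * s2)
    s4 = rhs2(t + h, y + h * s3)
    y = y + (h / 6.0) * (s1 + 2 * s2 + 2 * s3 + s4)
    t += h
    x2_trace.append(y[0])

# After the transient, X2(t + T) ~ X2(t) with the driving period T = 4.
steps_per_period = int(round(2 * np.pi / omega / h))   # = 800 steps
periodic_err = abs(x2_trace[-1] - x2_trace[-1 - steps_per_period])
amplitude = 0.5 * (max(x2_trace[-steps_per_period:]) -
                   min(x2_trace[-steps_per_period:]))
```

The asymptotic trajectory of $X_2$ is a genuine oscillation (nonzero amplitude) that repeats with the driving period, which is exactly the kind of non-equilibrating behaviour the dynamic framework is meant to capture.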
In the examples in this paper, we restrict ourselves to periodic or quasi-periodic interventions, but the results hold for more general sets of interventions that satisfy the stability definitions presented later. We need to define some notation to express the sets of interventions and the set of system responses to these interventions that we will model. Since interventions correspond to forcing variables to take some trajectory, we introduce notation for defining sets of trajectories: for ${I\subseteq \mathcal{I}}$, let $\mathtt{Dyn}_I$ be a set of trajectories in ${\prod_{i\in I} \mathcal{R}_i}$. Let ${\mathtt{Dyn} = \bigcup_{I \in \mathcal{P}(\mathcal{I})}\mathtt{Dyn}_I}$ (where $\mathcal{P}(\mathcal{I})$ is the power set of $\mathcal{I}$, i.e., the set of all subsets of $\mathcal{I}$). Thus, an element ${\pmb{\zeta}_I \in \mathtt{Dyn}_I}$ is a function ${\mathbb{R}_{\geq 0} \longrightarrow \prod_{i\in I} \mathcal{R}_i}$, and $\mathtt{Dyn}$ consists of such functions for different $I \subseteq \mathcal{I}$. The main idea is that we want both the interventions and the system responses to be elements of $\mathtt{Dyn}$; in other words, the set of possible system responses should be large enough to contain all interventions that we would like to model and, in addition, all responses of the system to those interventions. The reader might wonder why we do not simply take the set of \emph{all} possible trajectories, but that set would be so large that it would not be practical for modeling purposes.\footnote{For example, one might want to parameterize the set of trajectories in order to learn the model from data. Without any restriction on the smoothness of the trajectories, the problem of estimating a trajectory from data becomes ill-posed.
Secondly, since we would like to identify trajectories that are asymptotically identical in order to focus the modeling efforts on the \emph{asymptotic} behaviour of the system, we will only put a single trajectory into $\mathtt{Dyn}$ to represent all trajectories that are asymptotically identical to that trajectory, but whose transient dynamics may differ.} Since our goal will be to derive a causal model that describes the relations between components (variables) of the system, we will need the following definition in Section \ref{section:dscm}. \begin{definition} A set of trajectories $\mathtt{Dyn}$ is \textbf{modular} if, for any ${\{i_1, \ldots, i_n\} = I \subseteq \mathcal{I}}$, \[ \pmb{\zeta}_I \in \mathtt{Dyn} \iff \ \zeta_{i_k} \in \mathtt{Dyn} \quad \forall k \in \{1,\ldots,n\}.\] \end{definition} This should be interpreted as saying that admitted trajectories of single variables can be combined arbitrarily into admitted trajectories of the whole system (and \emph{vice versa}, admitted system trajectories can be decomposed into trajectories of individual variables), and in addition, that interventions on each variable can be made independently and combined in any way.\footnote{This is related to notions that have been discussed in the literature under various headings, for instance autonomy and invariance \citep{Pearl2009}.} This is not to say that all such interventions must be physically possible to implement in practice. Rather, this means that the mathematical model we derive should allow one to \emph{reason} about all such interventions. Not all sets of trajectories $\mathtt{Dyn}$ are modular; in the following sections, we will assume that the sets of trajectories under consideration \textit{are} modular for the purposes of constructing the Dynamic SCMs.
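The definition can be made concrete with a toy encoding (our own illustrative choice, not from the text): a single-variable trajectory is represented by a parameter tuple for $A\cos(\omega t + \phi)$, and a joint trajectory by a dictionary of such tuples. Modularity is then exactly the statement that joint membership reduces to componentwise membership.

```python
import math

# Toy encoding of a modular trajectory set (all names hypothetical):
# a trajectory in Dyn_i is a tuple (A, omega, phi) standing for the
# function t -> A*cos(omega*t + phi); a joint trajectory zeta_I is a
# dict {i: (A, omega, phi)}.

def in_dyn_i(params):
    """Membership in Dyn_i: any amplitude and frequency,
    phase normalised to [0, 2*pi)."""
    A, omega, phi = params
    return 0.0 <= phi < 2 * math.pi

def in_dyn(zeta_I):
    """Modularity in action: a joint trajectory is admitted iff every
    single-variable component is admitted; any sub-collection of
    admitted components is therefore admitted as well."""
    return all(in_dyn_i(p) for p in zeta_I.values())

zeta = {1: (2.0, 3.0, 0.5), 2: (1.0, 3.0, 1.0)}
print(in_dyn(zeta),                  # components admitted => joint admitted
      in_dyn({1: zeta[1]}),          # decomposition is admitted too
      in_dyn({2: (1.0, 1.0, -0.5)})) # phase outside [0, 2*pi): rejected
```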
Some examples of trivially modular sets of trajectories are: (i) all static (i.e., time-independent) trajectories, corresponding to \citep{MooJanSch13}; (ii) all continuously-differentiable trajectories that differ asymptotically; (iii) all periodic motions. The latter is the running example in this paper. \vspace{-0.15cm} \subsection{DESCRIBING INTERVENTIONS ON ODEs}\label{section:ode_interventions} We can realise a perfect intervention by replacing the equations of the intervened variables with new equations that fix them to take the specified trajectories:\footnote{Note that in the intervened ODE, the initial conditions of the intervened variables do not need to be specified explicitly as for the other variables, since they are implied by considering $t=0$.} \begin{align*} &\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}: \\ &\left\lbrace \begin{array}{ll} f_i(X_i,\mathbf{X}_{\mathtt{pa}(i)})(t) = 0 \,, & X_i^{(k)}(0) = (\mathbf{X}^{(k)}_0)_i \,, \quad \\ 0\leq k \leq n_i-1 \,, \quad &i \in \mathcal{I}\setminus I \,, \\ \\ X_i(t) - \zeta_i(t) = 0\,, & i \in I \,. \end{array} \right. \end{align*} This procedure is analogous to the notion of intervention in an SCM. Conceptually, this corresponds to decoupling the intervened variables from their usual causal mechanism by forcing them to take a particular trajectory, while leaving the non-intervened variables' causal mechanisms unaffected. Perfect interventions will not generally be realisable in the real world. In practice, an intervention on a variable would correspond to altering the differential equation governing its evolution by adding extra forcing terms; perfect interventions could be realised by adding forcing terms that push the variable towards its target value at each instant in time, and considering the limit as these forcing terms become infinitely strong so as to dominate the usual causal mechanism determining the evolution of the variable.
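This limiting construction can be illustrated numerically. In the sketch below (a single damped mass; the parameter values, target trajectory, and forcing gain $\gamma$ are our own illustrative choices, not from the text), a term $-\gamma\,(X_1 - \zeta(t))$ pushes $X_1$ towards the target trajectory $\zeta$, and the tracking error shrinks as $\gamma$ grows:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for a single damped mass on a spring.
m, b, k, l = 1.0, 0.5, 1.0, 1.0
zeta = lambda t: l + 0.5 * np.cos(2.0 * t)  # target trajectory for do(X1 = zeta)

def rhs(gamma):
    """Usual mechanism plus a forcing term pushing X1 towards zeta(t);
    the perfect intervention is the limit gamma -> infinity."""
    def f(t, y):
        X, V = y
        return [V, (-b * V - k * (X - l) - gamma * (X - zeta(t))) / m]
    return f

errors = []
for gamma in (1e2, 1e4):
    sol = solve_ivp(rhs(gamma), (0.0, 20.0), [zeta(0.0), 0.0],
                    t_eval=np.linspace(10.0, 20.0, 200), rtol=1e-6, atol=1e-9)
    # maximum deviation from the target after the transient
    errors.append(np.max(np.abs(sol.y[0] - zeta(sol.t))))
print(errors)  # tracking error decreases as the forcing dominates
```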
\begin{example}[continued]\label{example:3-mass-spring-intervened} Consider the mass-spring system described in Example \ref{example:3-mass-spring}. If we were to intervene on the system to force the mass $X_1$ to undergo simple harmonic motion, we could express this as a change to the system of differential equations as: \begin{align*} &\mathcal{D}_{\mathtt{do}(X_1(t) = l_1 + A\cos(\omega t))}: \\ &\left\lbrace \begin{array}{l} 0 = X_1(t)- l_1 - A \cos(\omega t) \,, \\ \\ 0 = m_2\ddot{X}_2(t) + b_2\dot{X}_2(t) + (k_2 + k_{1}) X_{2}(t) \\ \hfill - k_2L - k_{1} X_{1}(t) - k_{1} l_1 + k_2 l_2 \,, \\ \\ X_2^{(k)}(0) = (\mathbf{X}^{(k)}_0)_2 \quad k \in \{0,1\}.\\ \end{array} \right. \end{align*} \end{example} This induces a change to the graphical description of the causal relationships between the variables. We break any incoming arrows to any intervened variable, including self loops, as the intervened variables are no longer causally influenced by any other variable in the system. See Figure \ref{fig:subfig:graphical_model_intervened} for the graph corresponding to the intervened ODE in Example \ref{example:3-mass-spring-intervened}. \vspace{-0.15cm} \section{DYNAMIC STABILITY}\label{section:dynamic-stability} A crucial assumption of \cite{MooJanSch13} was that the systems considered were \emph{stable} in the sense that they would converge to unique stable equilibria (if necessary, also after performing a constant intervention). This made them amenable to study by considering the ${t \longrightarrow \infty}$ limit in which any complex but transient dynamical behaviour would have decayed. The SCMs derived would allow one to reason about the asymptotic equilibrium states of the systems after interventions. Since we want to consider non-constant asymptotic dynamics, this is not a notion of stability that is fit for our purposes. Instead, we define our stability with reference to a set of trajectories. We will use $\mathtt{Dyn}_\mathcal{I}$ for this purpose.
Recall that elements of $\mathtt{Dyn}_\mathcal{I}$ are trajectories for all variables in the system. To be totally explicit, we can think of an element ${\pmb{\eta} \in \mathtt{Dyn}_\mathcal{I}}$ as a function \begin{align*} \pmb{\eta}: \quad \mathbb{R}_{\geq0} & \longrightarrow \mathcal{R}_\mathcal{I} \\ t & \mapsto (\eta_1(t),\eta_2(t),\ldots,\eta_D(t)) \end{align*} where $\eta_i(t) \in \mathcal{R}_i$ is the state of the $i$th variable $X_i$ at time $t$. Note that $\mathtt{Dyn}_\mathcal{I}$ is not a single fixed set, independent of the situation we are considering. We can choose $\mathtt{Dyn}_\mathcal{I}$ depending on the ODE $\mathcal{D}$ under consideration, and the interventions that we may wish to make on it. Informally, stability in this paper means that the asymptotic dynamics of the dynamical system converge to a unique element of $\mathtt{Dyn}_\mathcal{I}$, independent of the initial condition. If $\mathtt{Dyn}_\mathcal{I}$ is in some sense simple, this yields a simple characterisation of the asymptotic dynamics of the system under study. The following definitions of stability extend those of \cite{MooJanSch13} to allow for non-constant trajectories in $\mathtt{Dyn}_{\mathcal{I}}$, and coincide with them in the case that $\mathtt{Dyn}_{\mathcal{I}}$ consists of all constant trajectories in $\mathcal{R}_\mathcal{I}$.
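This informal notion of stability can be checked numerically. The sketch below (a forced, underdamped oscillator with our own illustrative parameter values) integrates the same system from two very different initial conditions and verifies that, once the transient has decayed, both solutions follow the same asymptotic trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forced, underdamped oscillator; illustrative parameter values.
m, b, k, l, F, omega = 1.0, 0.5, 1.0, 2.0, 2.0, 3.0

def rhs(t, y):
    X, V = y
    return [V, (F * np.cos(omega * t) - b * V - k * (X - l)) / m]

t_late = np.linspace(60.0, 80.0, 400)   # window after the transient has died out
sol_a = solve_ivp(rhs, (0.0, 80.0), [0.0, 0.0], t_eval=t_late, rtol=1e-8, atol=1e-8)
sol_b = solve_ivp(rhs, (0.0, 80.0), [5.0, -3.0], t_eval=t_late, rtol=1e-8, atol=1e-8)
gap = np.max(np.abs(sol_a.y[0] - sol_b.y[0]))
print(gap)  # both solutions have converged to the same asymptotic trajectory
```

The late-time oscillation amplitude also matches the closed-form driven-oscillator response $F/\sqrt{(k-m\omega^2)^2 + (b\omega)^2}$, independent of the initial condition.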
\begin{definition}\label{def:single_stable} The ODE $\mathcal{D}$ is \textbf{dynamically stable with reference to} $\mathtt{Dyn}_\mathcal{I}$ if there exists a unique $\pmb{\eta}_\emptyset \in \mathtt{Dyn}_\mathcal{I}$ such that ${\mathbf{X}_{\mathcal{I}}(t) = \pmb{\eta}_\emptyset(t) \: \forall t}$ is a solution to $\mathcal{D}$ and that for any initial condition, the solution ${\mathbf{X}_\mathcal{I}(t) \rightarrow \pmb{\eta}_\emptyset(t)}$ as $t \rightarrow \infty$.\footnote{The convergence we refer to here is the usual asymptotic convergence of real-valued functions, i.e., for $f : [0,\infty) \to \mathbb{R}^d$, $g : [0,\infty) \to \mathbb{R}^d$ we have that $f \to g$ iff for every $\epsilon > 0$ there is a $T \in [0,\infty)$ such that $|f(t) - g(t)| < \epsilon$ for all $t \in [T,\infty)$.} \end{definition} We use a subscript $\emptyset$ to emphasise that $\pmb{\eta}_\emptyset$ describes the asymptotic dynamics of $\mathcal{D}$ without any intervention. Observe that $\mathtt{Dyn}_\mathcal{I}$ could consist of the single element $\pmb{\eta}_\emptyset$ in this case. The requirement that this hold for all initial conditions can be relaxed to hold for all initial conditions except on a set of measure zero, but that would mean that the proofs later on would require some more technical details. For the purpose of exposition, we stick to this simpler case. \begin{example}\label{example:trivial-non-constant} Consider a single mass on a spring that is undergoing simple periodic forcing and is underdamped. Such a system could be expressed as a single (parent-less) variable with ODE description: \begin{align*} \mathcal{D}: \left\lbrace \begin{array}{ll} m \ddot{X}_1(t) + b \dot{X}_1(t) + k(X_1(t)-l) \\ \hfill= F \cos(\omega t + \phi) \,, \\ \\ \hfill X_1^{(k)}(0) = (X^{(k)}_0) \quad k \in \{0,1\} \,. \end{array} \right.
\end{align*} The solution to this differential equation is \begin{equation}\label{eqn:mass-spring-solution} X_1(t) = r(t) + l + A \cos(\omega t + \phi') \end{equation} where $r(t)$ decays exponentially quickly (and is dependent on the initial conditions) and $A$ and $\phi'$ depend on the parameters of the equation of motion (but not on the initial conditions). Therefore such a system would be dynamically stable with reference to (for example) \[\mathtt{Dyn}_\mathcal{I} = \{l + A\cos(\omega t + \phi') : A\in \mathbb{R}, \: \phi' \in [0,2\pi) \}. \] \end{example} \begin{remark} Definition \ref{def:single_stable} applies equally to an intervened ODE; in that case we use a subscript $\pmb{\zeta}_I$ to emphasise that $\pmb{\eta}_{\pmb{\zeta}_I}$ describes the asymptotic dynamics of $\mathcal{D}$ after performing the intervention $\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)$. Observe that $\mathtt{Dyn}_{\mathcal{I}}$ could consist only of the single element $\pmb{\eta}_{\pmb{\zeta}_{I}}$ and the above definition would be satisfied. But then the original ODE would not be dynamically stable with reference to $\mathtt{Dyn}_{\mathcal{I}}$, nor would other intervened versions of $\mathcal{D}$. This motivates the following definition, extending dynamic stability to sets of intervened systems. \end{remark} \begin{definition}\label{def:intset_stable} Let $\mathtt{Traj}$ be a set of trajectories. We say that the pair $(\mathcal{D}, \mathtt{Traj})$ is \textbf{dynamically stable with reference to} $\mathtt{Dyn}_\mathcal{I}$ if, for any $\pmb{\zeta}_I\in \mathtt{Traj}$\,, $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I=\pmb{\zeta}_I)} $ is dynamically stable with reference to $\mathtt{Dyn}_\mathcal{I}$. \end{definition} \begin{contexample} Suppose we are interested in modelling the effect of changing the forcing term, either in amplitude, phase or frequency.
We introduce a second variable $X_2$ to model the forcing term: \begin{align*} \mathcal{D}: \left\lbrace \begin{array}{lll} 0 &= f_1(X_1,X_2)(t) \\ & = m \ddot{X}_1(t) + b \dot{X}_1(t) + k(X_1(t)-l) - X_2(t) \,, \\ \\ 0 &= f_2(X_2) (t) \\ & = X_2(t) - F_0 \cos(\omega_0 t + \phi_0) \,, \\ \\ &X_1^{(k)}(0) = (\mathbf{X}^{(k)}_0)_1\,, \quad k \in \{0,1\}\, . \end{array} \right. \end{align*} If we want to change the forcing term that we apply to the mass, we can interpret this as performing an intervention on $X_2$. We could represent this using the notation we have developed as \begin{align*} \mathtt{Dyn}_{\{2\}} = \{ \zeta_2(t) = F_2 \cos (\omega t + \phi_2) : \\ \: F_2, \omega \in \mathbb{R}, \: \phi_2 \in [0,2\pi) \}. \end{align*} For any intervention $\zeta_2 \in \mathtt{Dyn}_{\{2\}}$, the dynamics of $X_1$ in $\mathcal{D}_{\mathtt{do}(X_2 = \zeta_2)}$ will be of the form (\ref{eqn:mass-spring-solution}). Therefore $(\mathcal{D}, \mathtt{Dyn}_{\{2\}})$ will be dynamically stable with reference to \begin{align*} \mathtt{Dyn}_\mathcal{I} = \Big{\{} \pmb{\zeta}(t) = (l + F_1 \cos (\omega t + \phi_1), F_2 \cos (\omega t + \phi_2)) \\ : \: F_1,F_2, \omega \in \mathbb{R}, \: \phi_1,\phi_2 \in [0,2\pi) \Big{\}}. \end{align*} \end{contexample} The independence of initial conditions for Example \ref{example:trivial-non-constant} is illustrated in Figure \ref{fig:decay_shm}. \begin{figure*} \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[scale=0.35]{./decay_shm_slow1.pdf} \caption{\label{subfig:decay_shm_1}} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[scale=0.35]{./decay_shm_slow2.pdf} \caption{\label{subfig:decay_shm_2}} \end{subfigure} \caption{Simulations from the forced simple harmonic oscillator in Example \ref{example:trivial-non-constant} showing the evolution of $X_1$ with different initial conditions for different forcing terms (interventions on $X_2$). 
The parameters used were $m=1, k=1, l=2, F=2 , b=0.1$, with (a) $\omega = 3$ and (b) $\omega=2$. Dynamic stability means that asymptotic dynamics are independent of initial conditions, and the purpose of the DSCM is to quantify how the asymptotic dynamics change under intervention.\label{fig:decay_shm}} \end{figure*} Note that if ${(\mathcal{D},\mathtt{Traj})}$ is dynamically stable with reference to ${\mathtt{Dyn}_\mathcal{I}}$, and ${\mathtt{Dyn}_\mathcal{I}' \supseteq \mathtt{Dyn}_\mathcal{I}}$ is a larger set of trajectories that still satisfies the uniqueness condition in the definition of dynamic stability,\footnote{Namely: $ \forall \pmb{\zeta}_I \in \mathtt{Traj},\: \exists!\, \pmb{\eta}_{\pmb{\zeta}_I}\in \mathtt{Dyn}_\mathcal{I}' $ such that under $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I=\pmb{\zeta}_I)}$ and for any initial condition, $X_\mathcal{I}(t) \rightarrow \pmb{\eta}_{\pmb{\zeta}_I}(t)$ as $t\rightarrow \infty$. Assuming that $(\mathcal{D},\mathtt{Traj})$ is dynamically stable with reference to $\mathtt{Dyn}_\mathcal{I}$, a sufficient condition for this is that none of the elements in $\mathtt{Dyn}_\mathcal{I}'\setminus\mathtt{Dyn}_\mathcal{I}$ are asymptotically equal to any of the elements of $\mathtt{Dyn}_\mathcal{I}$. That is: $\forall \pmb{\zeta} \in \mathtt{Dyn}_\mathcal{I},\, \forall\pmb{\zeta}' \in \mathtt{Dyn}_{\mathcal{I}}'\setminus \mathtt{Dyn}_\mathcal{I} $, ${\pmb{\zeta}(t) \nrightarrow \pmb{\zeta}'(t)}$ as ${t \rightarrow \infty}$\,.} then ${(\mathcal{D},\mathtt{Traj})}$ is dynamically stable with reference to $\mathtt{Dyn}_\mathcal{I}'$. \vspace{-0.15cm} \section{DYNAMIC STRUCTURAL CAUSAL MODELS}\label{section:dscm} A deterministic SCM $\mathcal{M}$ is a collection of structural equations, the $i$th of which defines the value of variable $X_i$ in terms of its parents. We extend this to the case that our variables do not take fixed values but rather represent entire trajectories. 
\begin{definition} Let $\mathtt{Dyn}=\bigcup_{I\subseteq\mathcal{I}} \mathtt{Dyn}_I$ be a modular set of trajectories, where $\mathtt{Dyn}_I\subseteq \mathcal{R}_I^{\mathbb{R}_{\geq 0}}$. A deterministic Dynamic Structural Causal Model (DSCM) on the time-indexed variables $\mathbf{X}_\mathcal{I}$ taking values in $\mathtt{Dyn}$ is a collection of \emph{structural equations} \begin{align*} \mathcal{M}: \left\lbrace \begin{array}{ll} X_i = F_i(\mathbf{X}_{\mathtt{pa}(i)}) & i \in \mathcal{I} \,, \end{array} \right. \end{align*} where ${\mathtt{pa}(i) \subseteq \mathcal{I}\setminus \{i\}}$ and each $F_i$ is a map ${\mathtt{Dyn}_{\mathtt{pa}(i)}\longrightarrow \mathtt{Dyn}_i}$ that gives the trajectory of an effect variable in terms of the trajectories of its direct causes. \end{definition} The point of this paper is to show that, subject to restrictions on $\mathcal{D}$ and $\mathtt{Dyn}$, we can derive a DSCM that allows us to reason about the effect on the asymptotic dynamics of interventions using trajectories in $\mathtt{Dyn}$. `Traditional' deterministic SCMs arise as a special case, where all trajectories are constant over time. In an ODE, the equations $f_i$ determine the causal relationship between the variable $X_i(t)$ and its parents $\mathbf{X}_{\mathtt{pa}(i)}(t)$ \emph{at each instant} in time. In contrast, we think of the function $F_i$ of the DSCM as a causal mechanism that determines the entire trajectory of $X_i$ in terms of the trajectories of the variables $\mathbf{X}_{\mathtt{pa}(i)}$, integrating over the instantaneous causal effects over all time. In the case that $\mathtt{Dyn}$ consists of constant trajectories (and thus the instantaneous causal effects are constant over time), a DSCM reduces to a traditional deterministic SCM. The rest of this section is laid out as follows. In Section~\ref{section:scm_interventions} we define what it means to make an intervention in a DSCM. 
In Section~\ref{section:struc_eqns_and_scm} we show how, subject to certain conditions, a DSCM can be derived from a pair ${(\mathcal{D},\mathtt{Dyn})}$. The procedure for doing this relies on intervening on all but one variable at a time. In Section~\ref{section:solutions-to-dscm}, Theorem~\ref{theorem:same-solutions} states that the DSCM thus derived is capable of modelling the effect of intervening on arbitrary subsets of variables, even though it was constructed by considering interventions on exactly ${D-1}$ variables. Theorem~\ref{theorem:commuting-diagram} and Corollary~\ref{corr:double-commuting-diagram} in Section~\ref{section:causal-reasoning-preserved} prove that the notions of intervention in the ODE and the derived DSCM coincide. Collectively, these theorems tell us that we can derive a DSCM that allows us to reason about the effects of interventions on the asymptotic dynamics of the ODE. Proofs of these theorems are provided in Section~\ref{supp:proofs} of the Supplementary Material. \vspace{-0.15cm} \subsection{INTERVENTIONS IN A DSCM}\label{section:scm_interventions} Interventions in (D)SCMs are realized by replacing the structural equations of the intervened variables. Given $\pmb{\zeta}_I \in\mathtt{Dyn}_I$ for some $I \subseteq \mathcal{I}$, the intervened DSCM $\mathcal{M}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ can be written: \begin{align*} \mathcal{M}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}: \left\lbrace \begin{array}{lll} X_i &= F_i(\mathbf{X}_{\mathtt{pa}(i)}) & i \in \mathcal{I}\setminus I \,,\\ X_i &= \zeta_i & i \in I \,.\\ \end{array} \right. \end{align*} The causal mechanisms determining the non-intervened variables are unaffected, so their structural equations remain the same. The intervened variables are decoupled from their usual causal mechanisms and are forced to take the specified trajectory.
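As a deliberately minimal illustration of a DSCM and of this notion of intervention, the sketch below encodes each trajectory by a parameter tuple $(c, A, \omega, \phi)$ standing for $c + A\cos(\omega t + \phi)$; the particular structural equation (a driven damped-oscillator response), the variable names, and the parameter values are our own illustrative choices rather than anything defined in the text:

```python
import math

# Toy DSCM over sinusoid-valued variables. A trajectory in Dyn_i is a
# tuple (c, A, omega, phi) standing for t -> c + A*cos(omega*t + phi).
# Illustrative oscillator parameters for the mechanism F1.
m, b, k, l = 1.0, 0.5, 1.0, 2.0

def F1(x2):
    """Asymptotic response of X1 to the forcing trajectory X2
    (standard driven damped-oscillator gain and phase lag)."""
    c2, A2, om, ph = x2
    gain = math.hypot(k - m * om ** 2, b * om)
    lag = math.atan2(b * om, k - m * om ** 2)
    return (l + c2 / k, A2 / gain, om, ph - lag)

model = {
    "X2": lambda v: (0.0, 2.0, 3.0, 0.0),  # exogenous forcing term
    "X1": lambda v: F1(v["X2"]),
}

def solve(scm):
    """Evaluate the structural equations in topological order."""
    v = {}
    for name in ("X2", "X1"):
        v[name] = scm[name](v)
    return v

obs = solve(model)
# Intervention: replace X1's structural equation, do(X1 = l + 0.5*cos(2t)).
post = solve(dict(model, X1=lambda v: (l, 0.5, 2.0, 0.0)))
print(obs["X1"], post["X1"])
```

The non-intervened variable keeps its structural equation, while the intervened one is simply forced to the specified trajectory, exactly as in the display above.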
\vspace{-0.15cm} \subsection{DERIVING DSCMs FROM ODEs}\label{section:struc_eqns_and_scm} In order to derive a DSCM from an ODE, we require the following consistency property between the asymptotic dynamics of the ODE and the set of interventions. \begin{definition}[Structural dynamic stability] Let $\mathtt{Dyn}$ be modular. The pair $(\mathcal{D},\mathtt{Dyn})$ is \textbf{structurally dynamically stable} if $(\mathcal{D},\mathtt{Dyn}_{\mathcal{I}\setminus \{ i \} })$ is dynamically stable with reference to $\mathtt{Dyn}_{\mathcal{I}}$ for all ${i \in \mathcal{I}}$. \end{definition} This means that for any intervention trajectory ${\pmb{\zeta}_{\mathcal{I}\setminus \{ i \} } \in \mathtt{Dyn}_{\mathcal{I}\setminus \{ i \} }}$, the asymptotic dynamics of the intervened ODE ${\mathcal{D}_{\mathtt{do}(\mathbf{X}_{\mathcal{I}\setminus \{ i \} } = \pmb{\zeta}_{\mathcal{I}\setminus \{ i \} })}}$ are expressible uniquely as an element of $\mathtt{Dyn}_\mathcal{I}$. Since $\mathtt{Dyn}$ is modular, the asymptotic dynamics of the non-intervened variable can be realised as a trajectory $\zeta_{i} \in \mathtt{Dyn}_{i}$, and thus $\mathtt{Dyn}$ is rich enough to allow us to make an intervention which forces the non-intervened variable to take this trajectory. This is a crucial property that allows the construction of the structural equations. In the particular case that $\mathtt{Dyn}$ consists of all constant trajectories, structural dynamic stability means that after any intervention on all-but-one-variable, the non-intervened variable settles to a unique equilibrium. In the language of \cite{MooJanSch13}, this would imply that the ODE is \emph{structurally stable}. It should be noted that $(\mathcal{D},\mathtt{Dyn})$ being structurally dynamically stable is a strong assumption in general.
If $\mathtt{Dyn}$ is too small,\footnote{For example, if $\mathtt{Dyn}$ is not modular or represents interventions on only a subset of the variables.} then it may be possible to find a larger set $\mathtt{Dyn}' \supset \mathtt{Dyn}$ such that $(\mathcal{D},\mathtt{Dyn}')$ \emph{is} structurally dynamically stable. The procedure described in this section then yields a DSCM capable of modelling all interventions in $\mathtt{Dyn}'$, which can thus also be used to model interventions in $\mathtt{Dyn}$. Henceforth, we use the notation $I_i = \mathcal{I}\setminus \{i\}$ for brevity. Suppose that $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable. We can {\bf derive structural equations} ${F_i : \mathtt{Dyn}_{\mathtt{pa}(i)} \longrightarrow \mathtt{Dyn}_i }$ to describe the asymptotic dynamics of child variables as functions of their parents as follows. Pick $i\in \mathcal{I}$. The variable $X_i$ has parents $\mathbf{X}_{\mathtt{pa}(i)}$. Since $\mathtt{Dyn}$ is modular, for any configuration of parent dynamics $\pmb{\eta}_{\mathtt{pa}(i)} \in \mathtt{Dyn}_{\mathtt{pa}(i)}$ there exists $\pmb{\zeta}_{I_i} \in \mathtt{Dyn}_{I_i}$ such that $(\pmb{\zeta}_{I_i})_{\mathtt{pa}(i)} = \pmb{\eta}_{\mathtt{pa}(i)}$. By structural dynamic stability, the system $\mathcal{D}_{\mathtt{do}(\mathbf{X}_{I_i} = \pmb{\zeta}_{I_i})}$ has asymptotic dynamics specified by a unique element $\pmb{\eta} \in \mathtt{Dyn}_\mathcal{I}$, which in turn defines a unique element $\eta_i \in \mathtt{Dyn}_i$ specifying the asymptotic dynamics of variable $X_i$ since $\mathtt{Dyn}$ is modular. \begin{theorem}\label{theorem:structural-equations-well-defined} Suppose that $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable. Then the functions \[ F_i : \mathtt{Dyn}_{\mathtt{pa}(i)} \to \mathtt{Dyn}_i :\pmb{\eta}_{\mathtt{pa}(i)} \mapsto \eta_i \] constructed as above are well-defined.
\end{theorem} Given the structurally dynamically stable pair $(\mathcal{D},\mathtt{Dyn})$ we define the derived DSCM \begin{align*} \mathcal{M}_\mathcal{D}: \left\lbrace \begin{array}{ll} X_i = F_i(\mathbf{X}_{\mathtt{pa}(i)}) & i \in \mathcal{I} \,, \end{array} \right. \end{align*} where the $F_i: \mathtt{Dyn}_{\mathtt{pa}(i)} \to \mathtt{Dyn}_i $ are defined as above. Note that structural dynamic stability was a crucial property that ensured $F_i(\mathtt{Dyn}_{\mathtt{pa}(i)}) \subseteq \mathtt{Dyn}_i$. If $(\mathcal{D},\mathtt{Dyn})$ is not structurally dynamically stable, we cannot build structural equations in this way. \begin{figure*}[h!] \centering \begin{tikzpicture}[->,>=stealth',auto,node distance=2.5cm, thin,main node/.style={rectangle,draw,minimum width=3.3cm, minimum height=1.3cm}] \node[main node,align=center] (1) {ODE \\ $\mathcal{D}$}; \node[main node,align=center] (2) [right of=1, xshift=2.8cm] {Intervened ODE \\ $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$}; \node[main node,align=center] (3) [below of=1, yshift=-0.5cm] {DSCM \\ $\mathcal{M}_\mathcal{D}$}; \node[main node,align=center] (4) [below of=2, yshift=-0.5cm] {Intervened DSCM \\ $\mathcal{M}_{\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}}$}; \node[main node,align=center] (5) [right of=2, xshift=2.8cm] {Intervened ODE \\ $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I, \mathbf{X}_J = \pmb{\zeta}_J)}$}; \node[main node,align=center] (6) [below of=5, yshift=-0.5cm] {Intervened DSCM \\ $\mathcal{M}_{\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I, \mathbf{X}_J = \pmb{\zeta}_J)}}$}; \draw [|->, shorten <=2pt, shorten >=2pt] (1.south) to (3.north) ; \draw [|->, shorten <=2pt, shorten >=2pt] (1.east) to (2.west); \draw [|->, shorten <=2pt, shorten >=2pt] (2.south) to (4.north); \draw [|->, shorten <=2pt, shorten >=2pt] (3.east) to (4.west); \draw [|->, shorten <=2pt, shorten >=2pt] (5.south) to (6.north) ; \draw [|->, shorten <=2pt, shorten >=2pt] (2.east) to 
(5.west); \draw [|->, shorten <=2pt, shorten >=2pt] (4.east) to (6.west); \node [draw=none, above=0.2cm] (a) at ($(1)!0.5!(2)$) {Sec.~\ref{section:ode_interventions}}; \node [draw=none, above=0.2cm] (a) at ($(3)!0.5!(4)$) {Sec.~\ref{section:scm_interventions}}; \node [draw=none, right=0.2cm] (a) at ($(1)!0.5!(3)$) {Sec.~\ref{section:struc_eqns_and_scm}}; \node [draw=none, right=0.2cm] (a) at ($(2)!0.5!(4)$) {Sec.~\ref{section:struc_eqns_and_scm}}; \node [draw=none, above=0.2cm] (a) at ($(2)!0.5!(5)$) {Sec.~\ref{section:ode_interventions}}; \node [draw=none, above=0.2cm] (a) at ($(4)!0.5!(6)$) {Sec.~\ref{section:scm_interventions}}; \node [draw=none, right=0.2cm] (a) at ($(5)!0.5!(6)$) {Sec.~\ref{section:struc_eqns_and_scm}}; \end{tikzpicture} \caption{Top-to-bottom arrows: Theorems \ref{theorem:structural-equations-well-defined} and \ref{theorem:same-solutions} together state that if $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable then we can construct a DSCM to describe the asymptotic behaviour of $\mathcal{D}$ under different interventions in the set $\mathtt{Dyn}$. Left-to-right arrows: Both ODEs and DSCMs are equipped with notions of intervention. Theorem \ref{theorem:commuting-diagram} and Corollary~\ref{corr:double-commuting-diagram} say that these two notions of intervention coincide, and thus the diagram commutes. \label{fig:commuting_diagram}} \end{figure*} We provide next an example of a DSCM for the mass-spring system of Example \ref{example:3-mass-spring} with $D=2$. The derivation of this for the general case of arbitrarily many masses is included in the Supplementary Material. \begin{example}\label{example:dscm} Consider the system $\mathcal{D}$ governed by the differential equation of Example \ref{example:3-mass-spring} with $D=2$. 
Let $\mathtt{Dyn}_{\{1,2\}}$ be the modular set of trajectories with \begin{align*} \mathtt{Dyn}_{\{i\}} = \Bigg\lbrace & \sum_{j=1}^\infty A_i^j \cos(\omega_i^j t + \phi_i^j) \: : \\ & \omega_i^j, \phi_i^j, A_i^j \in \mathbb{R}, \sum_{j=1}^\infty |A_i^j| < \infty\Bigg\rbrace \end{align*} for $i=1,2$; the condition $\sum_{j=1}^\infty |A_i^j| < \infty$ ensures that each series is absolutely convergent. Then $(\mathcal{D}, \mathtt{Dyn}_{\{1,2\}})$ is structurally dynamically stable and admits the following DSCM. \begin{align*} \mathcal{M}: \left\lbrace \begin{array}{lll} X_1 &= F_1(X_2) \\ X_2 &= F_2(X_1)\\ \end{array} \right. \end{align*} where, writing $C_1^j = k_0 + k_{1} - m_1(\omega_{2}^j)^2$ and $C_2^j = k_1 + k_{2} - m_2(\omega_{1}^j)^2$, the functionals $F_1$ and $F_2$ are given by Equations \ref{eqn:structural_equations_1} and \ref{eqn:structural_equations_2} overleaf. \end{example} \begin{figure*} \begin{align} \resizebox{0.85\textwidth}{!}{\parbox{\textwidth}{$$ F_1 \left(\sum_{j=1}^\infty A_{2}^j \cos(\omega_{2}^j t + \phi_{2}^j) \right) =\frac{k_0 l_0 - k_{1}l_1}{k_0 + k_{1}} +\sum_{j=1}^\infty \frac{k_{1}A_{2}^j}{\sqrt{(C_1^j)^2 + b_1^2(\omega_{2}^j)^2}} \cos\left(\omega_{2}^j t + \phi_{2}^j - \arctan\left[\frac{b_1\omega_{2}^j }{C_1^j}\right]\right) $$ }} \label{eqn:structural_equations_1}\\ \resizebox{0.85\textwidth}{!}{\parbox{\textwidth}{$$ F_2 \left(\sum_{j=1}^\infty A_{1}^j \cos(\omega_{1}^j t + \phi_{1}^j) \right) =\frac{k_2 L + k_{1}l_1 - k_2l_2}{k_1 + k_{2}} +\sum_{j=1}^\infty \frac{k_{1}A_{1}^j}{\sqrt{(C_2^j)^2 + b_2^2(\omega_{1}^j)^2}} \cos\left(\omega_{1}^j t + \phi_{1}^j - \arctan\left[\frac{b_2\omega_{1}^j }{C_2^j}\right]\right) $$ }}\label{eqn:structural_equations_2} \end{align} \caption{Equations giving the structural equations for the DSCM describing the mass-spring system of Example \ref{example:dscm}} \end{figure*} \vspace{-0.15cm} \subsection{SOLUTIONS OF A DSCM}\label{section:solutions-to-dscm} \vspace{-0.15cm}
Theorem \ref{theorem:structural-equations-well-defined} states that we can construct a DSCM by the described procedure. We constructed each equation by intervening on $D-1$ variables at a time. The result of this section states that the DSCM can be used to correctly model interventions on \emph{arbitrary} subsets of variables. We say that $\pmb{\eta}_\mathcal{I} \in \mathtt{Dyn}_{\mathcal{I}}$ is a \emph{solution} of $\mathcal{M}$ if $\eta_i = F_i(\pmb{\eta}_{\mathtt{pa}(i)}) \: \forall i \in \mathcal{I}$. \begin{theorem}\label{theorem:same-solutions} Suppose that $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable. Let $I \subseteq \mathcal{I}$, and let $\pmb{\zeta}_I \in \mathtt{Dyn}_I$. Then $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ is dynamically stable with reference to $\mathtt{Dyn}_\mathcal{I}$ if and only if the intervened DSCM $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})}$ has a unique solution. If there is a unique solution, it coincides with the element of $\mathtt{Dyn}_\mathcal{I}$ describing the asymptotic dynamics of $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$. \end{theorem} \begin{remark} We could also take $I = \emptyset$, in which case the above theorem applies to just $\mathcal{D}$. \end{remark} \vspace{-0.15cm} \subsection{CAUSAL REASONING IS PRESERVED}\label{section:causal-reasoning-preserved} \vspace{-0.15cm} We have defined ways to model interventions in both ODEs and DSCMs. The following theorem and its immediate corollary prove that these notions of intervention coincide, and hence that DSCMs provide a representation with which to reason about the asymptotic behaviour of the ODE under interventions in $\mathtt{Dyn}$. A consequence of these results is that the diagram in Figure \ref{fig:commuting_diagram} commutes. \begin{theorem}\label{theorem:commuting-diagram} Suppose that $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable. Let $I \subseteq \mathcal{I}$ and let $\pmb{\zeta}_I \in \mathtt{Dyn}_{I}$.
Then $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})} = (\mathcal{M}_{\mathcal{D}})_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$. \end{theorem} \begin{corollary}\label{corr:double-commuting-diagram} Suppose additionally that $J \subseteq \mathcal{I}\setminus I$ and let ${\pmb{\zeta}_J \in \mathtt{Dyn}_{J}}$. Then \[{\left(\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})}\right)_{\mathtt{do}(\mathbf{X}_J = \pmb{\zeta}_J)} = (\mathcal{M}_{\mathcal{D}})_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I, \mathbf{X}_J = \pmb{\zeta}_J)}}\,.\] \end{corollary} To summarise, Theorems \ref{theorem:structural-equations-well-defined}--\ref{theorem:commuting-diagram} and Corollary \ref{corr:double-commuting-diagram} collectively state that if $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable then it is possible to derive a DSCM that allows us to reason about the asymptotic dynamics of the ODE under any possible intervention in $\mathtt{Dyn}$. \vspace{-0.15cm} \subsection{RELATION TO ODEs AND DYNAMIC BAYESIAN NETWORKS} \vspace{-0.15cm} An ODE is capable of modelling arbitrary interventions on the system it describes. At the cost of only modelling a restricted set of interventions, a DSCM can be derived which describes the asymptotic behaviour of the system under these interventions. This may be desirable in cases for which transient behaviour is not important. We now compare DSCMs to Dynamic Bayesian Networks (DBNs), an existing popular method for causal modelling of dynamical systems \citep{PGM2009}. DBNs are essentially Markov chains, and thus are appropriate for discrete-time systems. When the discrete-time Markov assumption holds, DBNs are a powerful tool capable of modelling arbitrary interventions. However, approximations must be made whenever these assumptions do not hold. In particular, a continuous system must be approximately discretised in order to be modelled by a DBN \citep{SokolHansen2014}.
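To make such a discretisation concrete, the sketch below applies a forward-Euler scheme to a single forced, damped oscillator (a one-mass analogue of the running example; the step sizes, horizon, and parameter values are our own illustrative choices) and shows the approximation improving as the step $\Delta$ shrinks:

```python
import numpy as np

# Forward-Euler discretisation of a single forced, damped oscillator.
# The continuous state (X, V) becomes a discrete-time Markov update
# with time step Delta; illustrative parameter values.
m, b, k, l, F, omega = 1.0, 0.5, 1.0, 2.0, 2.0, 3.0

def euler_step(X, V, t, Delta):
    a = (F * np.cos(omega * t) - b * V - k * (X - l)) / m
    return X + Delta * V, V + Delta * a

def simulate(Delta, T=40.0):
    """Integrate with step Delta, sampling X once per unit of time."""
    X, V, t, out, next_sample = 0.0, 0.0, 0.0, [], 0.0
    while t < T - 1e-9:
        if t >= next_sample - 1e-9:
            out.append(X)
            next_sample += 1.0
        X, V = euler_step(X, V, t, Delta)
        t += Delta
    return np.array(out)

ref = simulate(1e-4)  # proxy for the continuous (Delta -> 0) limit
errs = [np.max(np.abs(simulate(d) - ref)) for d in (0.1, 0.01)]
print(errs)  # the approximation error shrinks as Delta decreases
```

This also illustrates the trade-off discussed in the text: the finer discretisation is more accurate, but takes ten times as many update steps.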
By using the Euler method for numerically solving ODEs, we can make such an approximation to derive a DBN describing the system in Example~\ref{example:3-mass-spring}, leading to the discrete-time equation given as \eref{eq:DBN} in the Supplementary Material. For DBNs, the main choice to be made is how fine the temporal discretisation should be. The smaller the value of $\Delta$, the better the discrete approximation will be. Even if there is a natural time-scale on which measurements can be made, choosing a finer discretisation than this will provide a better approximation to the behaviour of the true system. The choice of $\Delta$ should also reflect the natural timescales of the interventions to be considered; for example, it is not clear how one would model the intervention $\mathtt{do}\left(X_1(t) = \cos\left(\frac{2\pi t}{\Delta}\right)\right)$ with a discretisation length $\Delta$. Another notable disadvantage of DBNs is that the computational cost of learning and inference increases for smaller $\Delta$, with the computational cost diverging in the limit $\Delta \to 0$. In contrast, the starting point for DSCMs is to fix a convenient set of interventions we are interested in modelling. If a DSCM containing these interventions exists, it will model the asymptotic behaviour of the system under each of these interventions \emph{exactly}, rather than approximately modelling the transient and asymptotic behaviour as in the case of a DBN. Computational cost does not relate inversely to accuracy as for DBNs, but depends on the chosen representation of the set of admitted interventions. \vspace{-0.15cm} \section{DISCUSSION AND FUTURE WORK}\label{section:discussion} \vspace{-0.15cm} The main contribution of this paper is to show that the SCM framework can be applied to reason about time-dependent interventions on an ODE in a dynamic setting.
In particular, we showed that if an ODE is sufficiently well-behaved under a set of interventions, a DSCM can be derived that captures how the asymptotic dynamics change under these interventions. This is in contrast to previous approaches to connecting the language of ODEs with the SCM framework, which used SCMs to describe the stable (constant-in-time) equilibria of the ODE and how they change under intervention. We identify three possible directions in which to extend this work in the future. The first is to properly understand how learning DSCMs from data could be performed. This is important if DSCMs are to be used in practical applications. Challenges to be addressed include finding practical parameterizations of DSCMs, the presence of measurement noise in the data, and the fact that time-series data are usually sampled at a finite number of points in time. The second is to relax the assumption that the asymptotic dynamics are \emph{independent of initial conditions}, as was done recently for the static equilibrium scenario by \citet{BlomMooij_1805.06539}. The third extension is to move away from deterministic systems and consider Random Differential Equations \citep{BongersMooij_1803.08784}, thereby making it possible to take model uncertainty into account and to include systems that may be inherently stochastic. \vspace{-0.2cm} \subsubsection*{ACKNOWLEDGEMENTS} \vspace{-0.2cm} Stephan Bongers was supported by NWO, the Netherlands Organization for Scientific Research (VIDI grant 639.072.410). This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n$^{\mathrm{o}}$ 639466).
\newpage \input{dynamic_scm.bbl} \newpage \onecolumn \section*{\centerline{SUPPLEMENTARY MATERIAL}} \renewcommand{\thesection}{\Alph{section}} \setcounter{section}{0} \section{PROOFS}\label{supp:proofs} \subsection{PROOF OF THEOREM 1}\label{supp:theorem1proof} \begin{proof} We need to show that if $\pmb{\zeta}_{I_i}$ and $\pmb{\zeta}'_{I_i}$ are such that $(\pmb{\zeta}_{I_i})_{\mathtt{pa}(i)} = (\pmb{\zeta}'_{I_i})_{\mathtt{pa}(i)} = \pmb{\eta}_{\mathtt{pa}(i)}$, then $\eta_i = \eta'_i$. To see that this is the case, observe that the system of equations for $\mathcal{D}_{\mathtt{do}(\mathbf{X}_{I_i} = \pmb{\zeta}_{I_i})}$ is given by: \begin{align*} \mathcal{D}_{\mathtt{do}(\mathbf{X}_{I_i} = \pmb{\zeta}_{I_i})}: \left\lbrace \begin{array}{ll} X_j(t) = \zeta_j(t) & j \in \mathcal{I}\setminus (\mathtt{pa}(i) \cup \{i\}) \,, \\ X_j(t) = \eta_j(t) & j \in \mathtt{pa}(i) \,, \\ f_i(X_i,\mathbf{X}_{\mathtt{pa}(i)})(t) = 0 \quad & X_i^{(k)}(0) = (\mathbf{X}_0^{(k)})_i, \: 0\leq k \leq n_i - 1 \,. \\ \end{array} \right. \end{align*} The equations for $\mathcal{D}_{\mathtt{do}(\mathbf{X}_{I_i} = \pmb{\zeta}'_{I_i})}$ are similar, except with $X_j(t) = \zeta'_j(t)$ for $j \in \mathcal{I}\setminus (\mathtt{pa}(i) \cup \{i\})$. In both cases, the equations for all variables except $X_i$ are already solved. The equation for $X_i$ in both cases reduces, after substituting in the values of the parents, to the same equation, namely \[ f_i(X_i,\pmb{\eta}_{\mathtt{pa}(i)})(t) = 0 \,. \] The solution to this equation in $\mathtt{Dyn}_i$ must be unique and independent of initial conditions, for otherwise the dynamic stability of the intervened systems $\mathcal{D}_{\mathtt{do}(\mathbf{X}_{I_i} = \pmb{\zeta}_{I_i})}$ and $\mathcal{D}_{\mathtt{do}(\mathbf{X}_{I_i} = \pmb{\zeta}'_{I_i})}$ would not hold, contradicting the dynamic structural stability of $(\mathcal{D},\mathtt{Dyn})$. It follows that $\eta_i = \eta'_i$.
\end{proof} \subsection{PROOF OF THEOREM 2}\label{supp:theorem2proof} \begin{proof} By construction of the SCM, $\pmb{\eta} \in \mathtt{Dyn}_\mathcal{I}$ is a solution of $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})}$ if and only if the following two conditions hold: \begin{compactitem} \item for $i \in \mathcal{I}\setminus I$, $X_i(t) = \eta_i(t) \; \forall t$ is a solution to the differential equation $f_i(X_i, \pmb{\eta}_{\mathtt{pa}(i)})(t) = 0$; \item for $i \in I$, $\eta_i(t) = \zeta_i(t)$ for all $t$. \end{compactitem} These two conditions hold if and only if $\mathbf{X} = \pmb{\eta}$ is a solution to $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ in $\mathtt{Dyn}_\mathcal{I}$. Thus, by definition of dynamic stability, $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ is dynamically stable with asymptotic dynamics describable by $\pmb{\eta} \in \mathtt{Dyn}_\mathcal{I}$ if and only if $\mathbf{X} = \pmb{\eta}$ uniquely solves $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})}$. \end{proof} \subsection{PROOF OF THEOREM 3}\label{supp:theorem3proof} \begin{proof} We need to show that the structural equations of $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})}$ and $(\mathcal{M}_{\mathcal{D}})_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ are equal. Observe that the equations for $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$ are given by: \begin{align*} \mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)} :\left\lbrace \begin{array}{ll} X_i = \zeta_i, \quad & i \in I \,,\\ f_i(X_i, \mathbf{X}_{\mathtt{pa}(i)}) = 0, X_i^{(k)}(0) = (\mathbf{X}_0^{(k)})_i, \: 0\leq k \leq n_i - 1, \quad & i \in \mathcal{I} \setminus I \,. \end{array} \right.
\end{align*} Therefore, when we perform the procedure to derive the structural equations for $\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$, we see that: \begin{compactitem} \item if $i \in I$, the $i$th structural equation will simply be $X_i = \zeta_i$ since intervening on $I_i$ does not affect variable $X_i$. \item if $i \in \mathcal{I}\setminus I$, the $i$th structural equation will be the same as for $\mathcal{M}_\mathcal{D}$, since the dependence of $X_i$ on the other variables is unchanged. \end{compactitem} Hence the structural equations for $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})}$ are given by: \begin{align*} \mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})} :\left\lbrace \begin{array}{ll} X_i = \zeta_i, \quad & i \in I \,,\\ X_i = F_i(\mathbf{X}_{\mathtt{pa}(i)}), \quad & i \in \mathcal{I} \setminus I \,. \\ \end{array} \right. \end{align*} and therefore $\mathcal{M}_{(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)})} = (\mathcal{M}_{\mathcal{D}})_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}$. \end{proof} \subsection{PROOF OF COROLLARY 1}\label{supp:cor1proof} \begin{proof} Corollary 1 follows immediately from the observation that if $(\mathcal{D},\mathtt{Dyn})$ is structurally dynamically stable then so is $(\mathcal{D}_{\mathtt{do}(\mathbf{X}_I = \pmb{\zeta}_I)}, \mathtt{Dyn}_{\mathcal{I}\setminus I} )$. The result then follows by application of Theorem 3. \end{proof} \section{DERIVING THE DSCM FOR THE MASS-SPRING SYSTEM} Consider the mass-spring system of Example \ref{example:3-mass-spring}, but with $D\geq 1$ an arbitrary integer. We repeat the setup: we have $D$ masses attached together by springs. The location of the $i$th mass at time $t$ is $X_i(t)$, and its mass is $m_i$. For notational ease, we denote by $X_0=0$ and $X_{D+1} = L$ the locations where the ends of the springs attached to the edge masses meet the walls to which they are affixed.
$X_0$ and $X_{D+1}$ are constant. The natural length and spring constant of the spring connecting masses $i$ and $i+1$ are $l_i$ and $k_i$ respectively. The $i$th mass undergoes linear damping with coefficient $b_i$, where $b_i$ is small to ensure that the system is underdamped. The equation of motion for the $i$th mass ($1\leq i \leq D$) is given by: \begin{align*} m_i\ddot{X}_i(t) = k_i[X_{i+1}(t) - X_i(t) - l_i] - k_{i-1}[X_i(t) - X_{i-1}(t) - l_{i-1}] - b_i \dot{X}_i(t) \end{align*} so, defining \[ f_i(X_i,X_{i-1},X_{i+1})(t) = m_i\ddot{X}_i(t) - k_i[X_{i+1}(t) - X_i(t) - l_i] + k_{i-1}[X_i(t) - X_{i-1}(t) - l_{i-1}] + b_i \dot{X}_i(t) \] we can write the system of equations $\mathcal{D}$ for our mass-spring system as \begin{align*} \mathcal{D} :\left\lbrace \begin{array}{ll} f_i(X_i,X_{i-1},X_{i+1})(t) = 0 \quad & i \in \mathcal{I} \,. \\ \end{array} \right. \end{align*} In the rest of this section we will explicitly calculate the structural equations for the DSCM derived from $\mathcal{D}$ with two different sets of interventions. First, we will derive the structural equations for the case that $\mathtt{Dyn}$ consists of all constant trajectories, corresponding to constant interventions that fix variables to constant values for all time. This illustrates the correspondence between the theory in this paper and that of \cite{MooJanSch13}. Next, we will derive the structural equations for the case that $\mathtt{Dyn}$ consists of interventions corresponding to sums of periodic forcing terms. \begin{subsection}{MASS-SPRING WITH CONSTANT INTERVENTIONS}\label{supp:mass-spring-constant-scm} In order to derive the structural equations we only need to consider, for each variable, the influence of its parents on it. (Formally, this is because of Theorem \ref{theorem:structural-equations-well-defined}). Consider variable $i$. 
If we intervene to fix its parents to have locations $X_{i-1}(t) = \eta_{i-1}$ and $X_{i+1}(t) = \eta_{i+1}$ for all $t$, then the equation of motion for variable $i$ is given by \begin{align*} m_i\ddot{X}_i(t) + b_i \dot{X}_i(t) + (k_i+k_{i-1})X_{i}(t)= k_i[\eta_{i+1} - l_i] + k_{i-1}[ \eta_{i-1} + l_{i-1}] \,. \end{align*} There may be some complicated transient dynamics that depend on the initial conditions $X_i(0)$ and $\dot{X}_i(0)$, but provided that $b_i > 0$, we know that $X_i(t)$ will converge to a constant, and therefore the asymptotic solution to this equation can be found by setting $\ddot{X}_i$ and $\dot{X}_i$ to zero. Note that in general, we could explicitly find the solution to this differential equation (and indeed, in the next example we will), but for now there is a shortcut to deriving the structural equations.\footnote{This is analogous to the approach taken in \cite{MooJanSch13} in which the authors first define the Labelled Equilibrium Equations and from these derive the SCM.} The asymptotic solution is: \begin{align*} X_{i} = \frac{k_i[\eta_{i+1} - l_i] + k_{i-1}[ \eta_{i-1} + l_{i-1}] }{k_i+k_{i-1}}. \end{align*} Therefore the $i$th structural equation is: \begin{align*} F_i(X_{i-1},X_{i+1}) = \frac{k_i[X_{i+1} - l_i] + k_{i-1}[ X_{i-1} + l_{i-1}] }{k_i+k_{i-1}}. \end{align*} Hence the SCM for $(\mathcal{D},\mathtt{Dyn}_c)$ is: \begin{align*} \mathcal{M}_{\mathcal{D}} :\left\lbrace \begin{array}{ll} X_i = \displaystyle\frac{k_i[X_{i+1} - l_i] + k_{i-1}[ X_{i-1} + l_{i-1}] }{k_i+k_{i-1}} \quad & i \in \mathcal{I} \,. \\ \end{array} \right. \end{align*} We can thus use this model to reason about the effect of constant interventions on the asymptotic equilibrium states of the system.
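The equilibrium value above can be sanity-checked numerically. The following sketch is purely illustrative and not part of the paper: it integrates the damped oscillator with constantly forced parents using a crude Euler scheme (all parameter values are arbitrary) and verifies that the state converges to the closed-form equilibrium used for the structural equation.

```python
# Illustrative check (values arbitrary): simulate
#   m*x'' + b*x' + (k_i + k_im1)*x = k_i*(eta_ip1 - l_i) + k_im1*(eta_im1 + l_im1)
# and compare its long-time state with the equilibrium rhs / (k_i + k_im1).
m, b = 1.0, 0.5
k_im1, k_i = 2.0, 3.0
l_im1, l_i = 1.0, 1.0
eta_im1, eta_ip1 = 0.5, 3.5

rhs = k_i * (eta_ip1 - l_i) + k_im1 * (eta_im1 + l_im1)
equilibrium = rhs / (k_i + k_im1)   # the claimed asymptotic value

# crude forward Euler integration from arbitrary initial conditions
x, v, dt = 0.0, 0.0, 1e-3
for _ in range(200_000):            # integrate to t = 200, long after transients die out
    a = (rhs - b * v - (k_i + k_im1) * x) / m
    x, v = x + dt * v, v + dt * a

assert abs(x - equilibrium) < 1e-6
```

The simulated trajectory forgets its (arbitrary) initial conditions and settles on the equilibrium, mirroring the independence of initial conditions required for dynamic stability.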
\end{subsection} \begin{subsection}{SUMS OF PERIODIC INTERVENTIONS}\label{supp:mass-spring-periodic-scm} Suppose now we want to be able to make interventions of the form: \begin{equation}\label{eqn:periodic-intervention} \mathtt{do}\big( X_i(t) = A \cos(\omega t + \phi) \big) \,. \end{equation} Such interventions cannot be described by the DSCM derived in Section \ref{supp:mass-spring-constant-scm}. In this section we will explicitly derive a DSCM capable of reasoning about the effects of such interventions. It will also illustrate why we need dynamic structural stability. By Theorem \ref{theorem:structural-equations-well-defined}, to derive the structural equation for each variable we only need to consider the effect on the child of intervening on the parents according to interventions of the form \eref{eqn:periodic-intervention}. Consider the following linear differential equation: \begin{align}\label{equation:forced-de} m\ddot{X}(t) + b \dot{X}(t) + kX(t)= g(t)\,. \end{align} In general, the solution to this equation will consist of two parts---the \emph{homogeneous} solution and the \emph{particular} solution. The homogeneous solution is one of a family of solutions to the equation \begin{align} m\ddot{X}(t) + b \dot{X}(t) + kX(t)= 0 \end{align} and this family of solutions is parametrised by the initial conditions. If $b>0$ then all of the homogeneous solutions decay to zero as $t\longrightarrow\infty$. The particular solution is any single solution to the original equation; since the homogeneous part decays, it captures the asymptotic dynamics due to the forcing term $g$. Equation \ref{equation:forced-de} is a linear differential equation. This means that if $X=X_1$ is a particular solution for $g = g_1$ and $X=X_2$ is a particular solution for $g = g_2$, then $X=X_1+X_2$ is a particular solution for $g=g_1+g_2$.
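The decay of the homogeneous part can be checked numerically. The sketch below is illustrative only (arbitrary parameter values, not part of the paper): it integrates the forced equation $m\ddot{X} + b\dot{X} + kX = A\cos(\omega t + \phi)$ from rest and compares the long-run trajectory with the textbook underdamped steady state $A'\cos(\omega t + \phi')$, where $A' = A/\sqrt{(k - m\omega^2)^2 + (b\omega)^2}$ and $\phi' = \phi - \arctan[b\omega/(k - m\omega^2)]$.

```python
import math

# Arbitrary illustrative parameters (underdamped: b**2 < 4*m*k, and here k > m*w**2)
m, b, k = 1.0, 0.3, 4.0
A, w, phi = 2.0, 1.5, 0.4

# textbook steady-state amplitude and phase of the forced response
Ap = A / math.sqrt((k - m * w**2) ** 2 + (b * w) ** 2)
phip = phi - math.atan2(b * w, k - m * w**2)

# forward Euler integration from rest; by t = 100 the transients have decayed
x, v, t, dt = 0.0, 0.0, 0.0, 2e-4
for _ in range(500_000):
    a = (A * math.cos(w * t + phi) - b * v - k * x) / m
    x, v = x + dt * v, v + dt * a
    t += dt

assert abs(x - Ap * math.cos(w * t + phip)) < 1e-2
```

The asymptotic trajectory matches the predicted cosine regardless of the (arbitrary) initial conditions, which is exactly the behaviour exploited in the derivation that follows.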
In order to derive the structural equations, the final ingredient we need is an explicit representation for a particular solution to \eref{equation:forced-de} in the case that $g(t) = A\cos(\omega t + \phi)$. We state the solution for the case that the system is underdamped---this is a standard result and can be verified by checking that the following satisfies \eref{equation:forced-de}: \[ X(t) = A'\cos(\omega t + \phi') \] where \begin{align}\label{eqn:motion-params-transform} A' = \frac{A}{\sqrt{[k - m\omega^2]^2 + b^2\omega^2}}\,, && \phi' = \phi - \arctan\left[\frac{b\omega}{k - m\omega^2}\right]\,. \end{align} Therefore if we go back to our original equation of motion for variable $X_i$ \begin{align*} m_i\ddot{X}_i(t) + b_i \dot{X}_i(t) + (k_i+k_{i-1})X_{i}(t)= k_i[X_{i+1}(t) - l_i] + k_{i-1}[ X_{i-1}(t) + l_{i-1}] \end{align*} and perform the intervention \[ \mathtt{do}(X_{i-1}(t) = A_{i-1} \cos(\omega_{i-1} t + \phi_{i-1}), X_{i+1}(t) = A_{i+1} \cos(\omega_{i+1} t + \phi_{i+1})) \] we see that we can write the RHS of the above equation as the sum of the three terms \begin{align*} g_1(t) &= k_{i-1}l_{i-1} - k_{i}l_i \,,\\ g_2(t) &= k_{i-1}A_{i-1} \cos(\omega_{i-1} t + \phi_{i-1}) \,, \\ g_3(t) &= k_{i}A_{i+1} \cos(\omega_{i+1} t + \phi_{i+1})\,. \end{align*} Using the fact that linear differential equations have superposable solutions and \eref{eqn:motion-params-transform}, we can write down the resulting asymptotic dynamics of $X_i$: \begin{align*} X_i(t) ={}& \frac{k_{i-1}l_{i-1} - k_{i}l_i}{k_i + k_{i-1}} \\ &+ \frac{k_{i-1}A_{i-1}}{\sqrt{[k_i + k_{i-1} - m_i\omega_{i-1}^2]^2 + b_i^2\omega_{i-1}^2}} \cos\left(\omega_{i-1} t + \phi_{i-1} - \arctan\left[\frac{b_i\omega_{i-1}}{k_i + k_{i-1} - m_i\omega_{i-1}^2}\right]\right) \\ &+ \frac{k_{i}A_{i+1}}{\sqrt{[k_i + k_{i-1} - m_i\omega_{i+1}^2]^2 + b_i^2\omega_{i+1}^2}} \cos\left(\omega_{i+1} t + \phi_{i+1} - \arctan\left[\frac{b_i\omega_{i+1}}{k_i + k_{i-1} - m_i\omega_{i+1}^2}\right]\right)\,.
\end{align*} However, if $\mathtt{Dyn}$ consisted only of interventions of the form \eref{eqn:periodic-intervention}, then we have just shown that the mass-spring system would not be structurally dynamically stable with respect to this $\mathtt{Dyn}$, since two periodic terms and a constant term are needed to describe the motion of a child under legal interventions on the parents. This illustrates the fact that we may sometimes only be interested in a particular set of interventions that does not itself satisfy structural dynamic stability, and that in this case we must consider a larger set of interventions that \emph{does}. In this case, we can consider the modular set of trajectories generated by trajectories of the following form for each variable: \begin{align*} X_i(t) = \sum_{j=1}^\infty A_i^j \cos(\omega_i^j t + \phi_i^j) \end{align*} where for each $i$ it holds that $\sum_{j=1}^\infty |A_i^j| < \infty$ (so that the series is absolutely convergent and thus does not depend on the ordering of the terms in the sum). Call this set $\mathtt{Dyn}_{qp}$ (``quasi-periodic''). By equation \eref{eqn:motion-params-transform}, we can write down the structural equations \begin{align*} F_i & \left(\sum_{j=1}^\infty A_{i-1}^j \cos(\omega_{i-1}^j t + \phi_{i-1}^j), \sum_{j=1}^\infty A_{i+1}^j \cos(\omega_{i+1}^j t + \phi_{i+1}^j) \right) \\ =&\frac{k_{i-1}l_{i-1} - k_{i}l_i}{k_i + k_{i-1}} \\ &+ \sum_{j=1}^\infty \frac{k_{i-1}A_{i-1}^j}{\sqrt{[k_i + k_{i-1} - m_i(\omega_{i-1}^j)^2]^2 + b_i^2(\omega_{i-1}^j)^2}} \cos\left(\omega_{i-1}^j t + \phi_{i-1}^j - \arctan\left[\frac{b_i\omega_{i-1}^j }{k_i + k_{i-1} - m_i(\omega_{i-1}^j)^2}\right]\right) \\ &+\sum_{j=1}^\infty \frac{k_{i}A_{i+1}^j}{\sqrt{[k_i + k_{i-1} - m_i(\omega_{i+1}^j)^2]^2 + b_i^2(\omega_{i+1}^j)^2}} \cos\left(\omega_{i+1}^j t + \phi_{i+1}^j - \arctan\left[\frac{b_i\omega_{i+1}^j }{k_i + k_{i-1} - m_i(\omega_{i+1}^j)^2}\right]\right) \,.
\end{align*} Since this is also a member of $\mathtt{Dyn}_{qp}$, the mass-spring system is dynamically structurally stable with respect to $\mathtt{Dyn}_{qp}$ and so the equations $F_i$ define the Dynamic Structural Causal Model for asymptotic dynamics. \end{subsection} \section{DYNAMIC BAYESIAN NETWORK REPRESENTATION} By using Euler's method, we can obtain a (deterministic) Dynamic Bayesian Network representation of the mass-spring system. For $D=2$, this yields \begin{align}\label{eq:DBN} \mathrm{DBN}: \left\lbrace \begin{array}{lll} X_1\bigl((t+1)\Delta\bigr) = X_1(t\Delta) + \Delta \dot{X}_1(t\Delta) \\ \dot{X}_1\bigl((t+1)\Delta\bigr) = \dot{X}_1(t\Delta) + \frac{\Delta}{m_1}\Big[k_1X_2(t\Delta) - b_1 \dot{X}_1(t\Delta) - (k_0 + k_1) X_1(t\Delta) + k_0l_0 - k_1l_1 \Big]\\\\ X_2\bigl((t+1)\Delta\bigr) = X_2(t\Delta) + \Delta \dot{X}_2(t\Delta) \\ \dot{X}_2\bigl((t+1)\Delta\bigr) = \dot{X}_2(t\Delta) + \frac{\Delta}{m_2}\Big[k_1X_1(t\Delta) - b_2 \dot{X}_2(t\Delta) - (k_1 + k_2) X_2(t\Delta) + k_1l_1 - k_2l_2 + k_2L \Big]\\ \\ X_i^{(k)}(0) = (\mathbf{X}^{(k)}_0)_i \quad k \in \{0,1\}, \: i \in \{1,2\}\,.\\ \end{array} \right. \end{align} \end{document}
\section{Introduction} Let $\Omega$ be a bounded, open and Lipschitz set and let $u \in W^{1,p}(\Omega)$, for some $p \geq 1$, be a non-negative function. In this paper, we deal with the problem of comparing a function $u \in W^{1,p}(\Omega)$ with a radial function whose gradient is equi-rearranged with the gradient of $u$. Hence, we aim to extend the results contained in Giarrusso and Nunziante \cite{GN} to a more general setting. If $A$ is a bounded and open set with the same measure as $\Omega$, we say that a function $f^\star \in L^p(A)$ is equi-rearranged to $f \in L^p(\Omega)$ if they have the same distribution function, i.e. \begin{definizione} Let $f: \Omega \to \R$ be a measurable function; the \emph{distribution function} of $f$ is the function $\mu_f : [0,+\infty[\, \to [0, +\infty[$ defined by \[ \mu_f(t)= \abs{\Set{x \in \Omega \, :\, \abs{f(x)} > t}}, \] where $\abs{\cdot}$ denotes the $n$-dimensional Lebesgue measure of a measurable set. \end{definizione} In order to state our results, we recall some definitions: \begin{definizione} \label{rearrangements} Let $f: \Omega \to \R$ be a measurable function: \begin{itemize} \item the \emph{decreasing rearrangement} of $f$, denoted by $f^\ast$, is the distribution function of $\mu_f$. Moreover, we can write \[ f^\ast(s)= \inf \{ t \geq 0 \, |\, \mu_f(t) < s\}; \] \item the \emph{increasing rearrangement} of $f$ is defined as \[ f_\ast(s)= f^\ast(\abs{\Omega}-s); \] \item the \emph{spherically symmetric decreasing rearrangement} of $f$, defined in $\Omega^\sharp$, i.e. the ball centered at the origin with the same measure as $\Omega$, is the function \[ f^\sharp(x) = f^\ast(\omega_n \abs{x}^n), \] where $\omega_n$ is the measure of the $n$-dimensional unit-ball of $\R^n$; \item the \emph{spherically symmetric increasing rearrangement} of $f$, defined in $\Omega^\sharp$, is \[ f_\sharp(x) = f_\ast(\omega_n \abs{x}^n).
\] \end{itemize} \end{definizione} Clearly, we can construct several rearrangements of a given function $f$, but the one we will refer to is the spherically symmetric increasing rearrangement defined in $\Omega^\sharp$. The starting point of our work, and of many others, is \cite[Theorem 2.2]{GN}. \begin{teorema} \label{Giarrusso_Nunziante} Let $p \geq 1$, let $f \colon \Omega \to \R$ and $H \colon \R^n \to \R$ be measurable non-negative functions, and let $K \colon [0,+\infty) \to [0,+\infty)$ be a strictly increasing real-valued function such that \[ 0 \leq K(\abs{y}) \leq H(y) \qquad \forall y \in \R^n \qquad \text{ and } \qquad K^{-1}(f) \in L^p(\Omega). \] Let $v \in W_0^{1,p}(\Omega)$ be a function that satisfies \[ \begin{cases} H(\nabla v) = f(x) &\text{a.e. in }\Omega \\ v = 0 &\text{on } \partial \Omega \end{cases} \] Then, denoting by $\overline{v}$ the unique decreasing spherically symmetric solution to \[ \begin{cases} K(\abs{\nabla \overline{v}}) = f_{\sharp}(x) & \text{a.e. in } \Omega^{\sharp} \\ \overline{v} = 0 & \text{on } \partial \Omega^{\sharp} \end{cases} \] it holds \begin{equation} \label{eq_Giarrusso_Nunziante} \norma{v}_{L^1(\Omega)} \leq \norma{\overline{v}}_{L^1(\Omega^{\sharp})}. \end{equation} \end{teorema} They also give a similar result for the spherically symmetric decreasing rearrangement of the gradient, with an $L^\infty$ comparison. In recent decades, many authors have studied this kind of problem; in particular, in \cite{ALT} Alvino, Lions and Trombetti proved the existence of a spherically symmetric rearrangement of the gradient of $v$ which gives an $L^q$ comparison as in \eqref{eq_Giarrusso_Nunziante} for a fixed $q$. Moreover, Cianchi in \cite{Cia} gives a characterization of such a rearrangement; clearly, the rearrangement found by Cianchi is different both from the spherically symmetric increasing and decreasing rearrangement if $q \in (1, \infty)$.
Incidentally, let us mention that the case where the $L^{q, 1}$ Lorentz norm, see Section \ref{Section_2} for its definition, takes the place of the $L^q$ norm in \eqref{eq_Giarrusso_Nunziante} has been studied in \cite{Ta6}. In particular, the author stated the following. \begin{teorema} Let $u$ be a real-valued function defined in $\R^n$. Suppose $u$ is nice enough---e.g.\ Lipschitz continuous---and the support of $u$ has finite measure. Let $M$ and $V$ denote the distribution function of $\abs{\nabla u}$ and the measure of the support of $u$, respectively. Let $v$ be the real-valued function defined in $\R^n$ that satisfies the following conditions: \begin{enumerate} \item $\abs{\nabla v}$ is a rearrangement of $\abs{\nabla u}$; \item the support of $v$ has the same measure as the support of $u$; \item $v$ is radially decreasing and $\abs{\nabla v}$ is radially increasing. \end{enumerate} Then \[ \lnorma{u}_{L^{p,1}(\Omega)} \leq \lnorma{v}_{L^{p,1}(\Omega^{\sharp})} \quad \text{ if } n=1 \text{ or } 0 < p \leq \frac{n}{n-1}, \] furthermore \[ \lnorma{v}_{L^{p,1}(\Omega^{\sharp})} = \frac{p^2}{ \omega_n^{\frac{1}{n}}(n+p)} \int_0^\infty \left[V^{\frac{1}{p} + \frac{1}{n}} - (V-M(t))^{\frac{1}{p} + \frac{1}{n}}\right]\, dt. \] \end{teorema} As we already said, we focus on the case in which the functions do not vanish on the boundary. Our main theorem is the following: \begin{teorema} \label{Teorema_che_scriveremo} Let $\Omega \subset \R^n$ be a bounded, open and Lipschitz set and let $u \in W^{1,p}(\Omega)$ be a non-negative function. If we denote by $\Omega^{\sharp}$ the ball centered at the origin with the same measure as $\Omega$, then there exists a non-negative function $u^{\star} \in W^{1,p}(\Omega^{\sharp})$ that satisfies \begin{equation} \label{eq_che_risolve_u_picche} \begin{cases} \lvert \nabla u^{\star} \rvert = \abs{\nabla u}_{\sharp}(x) & \text{a.e.
in }\Omega^{\sharp} \\[1ex] u^\star = \cfrac{\displaystyle{ \int_{\partial \Omega} \abs{u} \, d \mathcal{H}^{n-1}} }{\displaystyle{ \lvert \partial \Omega^{\sharp} \rvert }} &\text{ on } \partial \Omega^{\sharp}. \end{cases} \end{equation} and such that \begin{align} \label{norma_L1} \norma{u}_{L^1(\Omega)} &\leq \norma{u^{\star}}_{L^1(\Omega^{\sharp})}, \\ \label{norma_traccia_Lp} \lvert \partial \Omega^{\sharp} \rvert^{p-1} \int_{\partial \Omega^{\sharp}} (u^{\star})^p \, d\mathcal{H}^{n-1} &\leq \abs{\partial \Omega}^{p-1}\int_{\partial \Omega} u^p \, d\mathcal{H}^{n-1} \qquad \forall p \geq 1. \end{align} \end{teorema} This result allows us to compare solutions to PDEs with Robin boundary conditions with solutions to the corresponding symmetrized problems. Precisely, we are able to compare the solution to \begin{equation} \label{eq_soluzione_debole_Omega_intro} \begin{cases} -\Delta u = 1 &\text{in } \Omega \\[1ex] \displaystyle{\parzder{u}{\nu} + \beta \abs{\partial \Omega } \, u = 0} &\text{on } \partial \Omega \end{cases} \end{equation} with the solution to \begin{equation} \label{eq_soluzione_debole_Omega_sharp_intro} \begin{cases} -\Delta v = 1 &\text{in } \Omega^{\sharp} \\[1ex] \displaystyle{\parzder{v}{\nu} + \beta \lvert \partial \Omega^{\sharp} \rvert \, v = 0} &\text{on } \partial \Omega^{\sharp} \end{cases} \end{equation} In particular we get \begin{corollario} \label{corollario_torsione_pesata} Let $\beta>0$ and let $\Omega \subset \R^n$ be a bounded, open and Lipschitz set.
If we denote by $\Omega^{\sharp}$ the ball centered at the origin with the same measure as $\Omega$, it holds \begin{equation} T(\Omega,\beta) \geq T(\Omega^{\sharp},\beta), \end{equation} \end{corollario} where \begin{equation} T(\Omega,\beta) = \inf_{w \in W^{1,2}(\Omega)} \cfrac{ \displaystyle{ \int_{\Omega} \abs{\nabla w}^2 \, dx + \beta \abs{\partial \Omega} \, \int_{\partial \Omega} w^2 \, d\mathcal{H}^{n-1} } }{\displaystyle{ \biggl(\int_{\Omega} w \, dx \biggr)^2 }}. \end{equation} The paper is organized as follows. In Section \ref{Section_2} we recall some basic notions, definitions and classical results, and we prove Theorem \ref{Teorema_che_scriveremo}. Finally, Section \ref{Section_4} is dedicated to the application to the Robin torsional rigidity, and in Section \ref{Section_5} we obtain a comparison between the Lorentz norms of $u$ and $u^{\star}$. \section{Notations, Preliminaries and proof of the main result} \label{Section_2} Observe that obviously, for every $p \geq 1$, \[ \displaystyle{\norma{f}_{L^p(\Omega)}=\norma{f^*}_{L^p([0, \abs{\Omega}])}=\lVert{f^\sharp}\rVert_{L^p(\Omega^\sharp)}=\norma{f_*}_{L^p([0, \abs{\Omega}])}=\lVert{f_\sharp}\rVert_{L^p(\Omega^\sharp)}}; \] moreover, the Hardy-Littlewood inequalities hold true: \begin{equation*} \int_{\Omega} \abs{f(x)g(x)} \, dx \le \int_{0}^{\abs{\Omega}} f^*(s) g^*(s) \, ds= \int_{\Omega^\sharp} f^\sharp(x) g^\sharp(x) \, dx, \end{equation*} \begin{equation*} \int_{\Omega^\sharp} f^\sharp(x) g_\sharp(x) \, dx=\int_{0}^{\abs{\Omega}} f^*(s) g_*(s) \, ds \leq \int_{\Omega} \abs{f(x)g(x)} \, dx . \end{equation*} Finally, the operator which assigns to a function its symmetric decreasing rearrangement is a contraction in $L^p$, see \cite{CP}, i.e.
\begin{equation} \label{eq_riarr_diminuiscono_distanza_Lp} \norma{f^*-g^*}_{L^p([0,\abs{\Omega}])} \leq \norma{f-g}_{L^p(\Omega)}. \end{equation} One can find more results and details about rearrangements, for instance, in \cite{HLP} and in \cite{Ta6}. Other powerful tools are pseudo-rearrangements. Let $u \in W^{1,p}(\Omega)$ and let $f \in L^1(\Omega)$; as in \cite{AT}, for every $s \in [0,\abs{\Omega}]$ there exists a subset $D(s) \subseteq \Omega$ such that \begin{enumerate} \item $\abs{D(s)}=s$; \item $D(s_1) \subseteq D(s_2)$ if $s_1<s_2$; \item $D(s) = \Set{x \in \Omega \, | \, \abs{u(x)}>t}$ if $s=\mu_u(t)$. \end{enumerate} Then the function \[ s \mapsto \int_{D(s)}f(x) \, dx \] is absolutely continuous, and therefore there exists a function $F$ such that \[ \int_0^s F(t) \, dt = \int_{D(s)} f(x) \, dx. \] We will use the following property (\cite[Lemma 2.2]{AT}). \begin{lemma} \label{lemma_Alvino_Trombetti} Let $f \in L^p$ for $p>1$. Then there exists a sequence $\Set{ F_k }$ such that $F_k$ has the same rearrangement as $f$ and \[ F_k \rightharpoonup F \qquad \text{in } L^p([0,\abs{\Omega}]). \] If $f \in L^1$ it follows that \[ \lim_k \int_0^{\abs{\Omega}} F_k(s) g(s) \, ds = \int_0^{\abs{\Omega}} F(s) g(s) \, ds \] for each function $g \in BV([0,\abs{\Omega}])$. \end{lemma} Moreover, for the sake of completeness, we recall the definition of the Lorentz norm. \begin{definizione} \label{def_spazi_Lorentz} Let $\Omega \subseteq \R^n$ be a measurable set, $0<p<+\infty$ and $0<q<+\infty$. Then a function $g$ belongs to the Lorentz space $L^{p,q}(\Omega)$ if \begin{equation} \label{eq_norma_Lorentz} \lnorma{g}_{L^{p,q}(\Omega)} = \biggl( \int_0^{+\infty} \bigl[t^{\frac{1}{p}} g^*(t) \bigr]^q \frac{dt}{t} \biggr)^{\frac{1}{q}} < + \infty. \end{equation} \end{definizione} Let us notice that for $p=q$ the Lorentz space $L^{p,p}(\Omega)$ coincides with the Lebesgue space $L^p(\Omega)$ by Cavalieri's principle. Let us now prove the main theorem.
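As an aside, on discrete data the decreasing rearrangement is simply sorting, so the norm identity, the Hardy-Littlewood inequalities and the contraction property \eqref{eq_riarr_diminuiscono_distanza_Lp} recalled above are easy to check numerically. The following sketch is illustrative only and not part of the paper (the sample size and value range are arbitrary).

```python
import random

# Discrete analogue: vectors play the role of functions, sorting gives f^*, f_*.
random.seed(0)
n = 1000
f = [random.uniform(0, 5) for _ in range(n)]
g = [random.uniform(0, 5) for _ in range(n)]

f_dec, g_dec = sorted(f, reverse=True), sorted(g, reverse=True)  # decreasing rearrangements f^*, g^*
g_inc = sorted(g)                                                # increasing rearrangement g_*

p = 3
# equimeasurability: rearranging preserves every L^p norm (up to float round-off)
assert abs(sum(x**p for x in f) - sum(x**p for x in f_dec)) < 1e-6

# Hardy-Littlewood: similarly ordered factors maximise, oppositely ordered minimise
mixed = sum(x * y for x, y in zip(f, g))
assert sum(x * y for x, y in zip(f_dec, g_inc)) <= mixed <= sum(x * y for x, y in zip(f_dec, g_dec))

# contraction: ||f^* - g^*||_p^p <= ||f - g||_p^p
assert sum(abs(x - y)**p for x, y in zip(f_dec, g_dec)) <= sum(abs(x - y)**p for x, y in zip(f, g))
```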
\begin{proof}[Proof of Theorem \ref{Teorema_che_scriveremo}] Let us consider the sets \begin{equation} \label{def_Omega_eps} \begin{aligned} \Omega_{\varepsilon} &= \Set{x \in \R^n | d(x, \Omega) < \varepsilon} \qquad & \Sigma_\varepsilon &= \Omega_\varepsilon \setminus \Omega,\\ \Omega^{\sharp}_\varepsilon & = \Set{x \in \R^n | d(x, \Omega^{\sharp}) < \delta} \qquad & \Sigma^\sharp_\varepsilon &= \Omega^\sharp_\varepsilon \setminus \Omega^\sharp,\\ \abs{\Omega_\varepsilon} &= \lvert \Omega^{\sharp}_\varepsilon \rvert \qquad & \abs{\Sigma_{\varepsilon}} &= \lvert \Sigma_{\varepsilon}^{\sharp} \rvert \end{aligned} \end{equation} where $\delta = \delta(\varepsilon)$ is chosen so that the equalities of measures in the last line hold; note that then \[ \lim_{\varepsilon \to 0} \, \frac{\delta}{\varepsilon} = \frac{\abs{\partial \Omega}}{\abs{\partial \Omega^\sharp}}, \] and $d(\cdot, \Omega)$ is defined by $$d(x,\Omega):=\inf_{y\in \Omega}\abs{x-y}.$$ Then we divide the proof into four steps. \begin{enumerate} \item[\textbf{Step 1}] First of all, we assume that $\Omega$ has $C^{1,\alpha}$ boundary, that $u \in W^{1,\infty}(\Omega)$ and that $u \geq \sigma >0$ in $\Omega$. We can then consider the following ``linear'' extension $u_\varepsilon$ of $u$ to $\Omega_\varepsilon$: \[ u_{\varepsilon}(x) = u \bigl( p(x) \bigr) \biggl( 1-\frac{d(x,\partial \Omega)}{\varepsilon} \biggr) \qquad \forall x \in \Omega_\varepsilon\setminus\Omega, \] where $p(x)$ is the projection of $x$ on $\partial\Omega$ (for $\varepsilon$ sufficiently small, this definition is well posed since $\Omega$ is smooth, see \cite{GT}).
The function $u_\varepsilon$ has the following properties: \begin{enumerate} \item $\displaystyle{u_\varepsilon \rvert_{\Omega} = u}$, \item $\displaystyle{u_\varepsilon=0}$ on $\partial \Omega_\varepsilon$, \item\label{pro-gra_1.1} $\displaystyle{\norma{\nabla u_\varepsilon}_{L^{\infty}(\Omega)} \leq \abs{\nabla u_\varepsilon}(y)} $ $\forall y \in \Sigma_\varepsilon$ for $\varepsilon$ sufficiently small, \item $\displaystyle{ \lim_{\varepsilon \to 0^+} \int_{\Sigma_\varepsilon} \abs{\nabla u_\varepsilon}\, dx = \int_{\partial\Omega} u \, d\mathcal{H}^{n-1}.}$ \end{enumerate} Properties $(a)$ and $(b)$ follow immediately from the definition of $u_{\varepsilon}$, while $(c)$ is a consequence of the regularity of $u$. Property $(d)$ can be obtained by an easy calculation: indeed \[ \nabla u_{\varepsilon} (x) = \nabla \bigl( u(p(x)) \bigr) \biggl[ 1-\frac{d(x,\partial \Omega)}{\varepsilon} \biggr] - u \bigl( p(x) \bigr) \frac{\nabla d(x,\partial \Omega)}{\varepsilon}. \] For the first term, we can notice that \[ \int_{\Sigma_{\varepsilon}} \bigl \lvert \nabla \bigl(u (p(x)) \bigr) \bigr \rvert \biggl[ 1-\frac{d(x,\partial \Omega)}{\varepsilon} \biggr] \, dx \leq L \int_{\Sigma_{\varepsilon}} \, dx = L \abs{\Sigma_{\varepsilon}}, \] where $L$ is the $L^{\infty}$ norm of $\nabla u(p(x))$. Now we deal with the second term and, keeping in mind that $\abs{\nabla d} = 1$ and using the coarea formula, we have \begin{align*} \lim_{\varepsilon \to 0^+} \int_{\Sigma_{\varepsilon}} \abs{\nabla u_{\varepsilon}} \, dx & = \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon} \int_{\Sigma_{\varepsilon}} u(p(x)) \, dx = \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon} \int_0^{\varepsilon} \, dt \int_{\Gamma_t} (u\circ p) \, d \mathcal{H}^{n-1} \end{align*} where $\Gamma_t = \Set{x \in \Sigma_{\varepsilon} \, | \, d(x,\partial \Omega) = t}$.
By continuity of $u$ and the Lebesgue differentiation theorem we get \[ \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon} \int_0^{\varepsilon} \, dt \int_{\Gamma_t} u\circ p \, d \mathcal{H}^{n-1} = \int_{\Gamma_0} (u \circ p) \, d\mathcal{H}^{n-1} = \int_{\partial \Omega} u \, d \mathcal{H}^{n-1}, \] which proves property $(d)$. For every $\varepsilon>0$, we consider the following problem \begin{equation} \label{eps_gn} \begin{cases} \abs{\nabla v_\varepsilon} (x) = \abs{\nabla u_\varepsilon}_{\sharp} (x) &\text{ in } \Omega_\varepsilon^{\sharp} \\ v_\varepsilon = 0 &\text{ on } \partial \Omega_\varepsilon^{\sharp} \end{cases} \end{equation} and by Theorem \ref{Giarrusso_Nunziante} it holds that \begin{equation} \label{Giarrusso_Nunziante_con_u_eps} \norma{u_\varepsilon}_{L^1(\Omega_\varepsilon)} \leq \norma{v_\varepsilon}_{L^1(\Omega^\sharp_\varepsilon)}. \end{equation} Moreover there exists $\overline{\varepsilon}$ such that for every $\varepsilon \leq \overline{\varepsilon}$ \begin{equation} \label{gradienti_uguali_dentro} \abs{\nabla v_\varepsilon} (x) = \abs{\nabla u_{\varepsilon}}_{\sharp}(x) = \abs{\nabla u}_{\sharp} (x) \qquad \forall x \in \Omega^\sharp. \end{equation} We can see $u_\varepsilon$ as a $W^{1,1}(\Omega_{\overline{\varepsilon}})$ function and we have \begin{equation} \label{gradiente_limitato} \begin{split} \int_{\Omega_{\overline{\varepsilon}}^\sharp}\abs{\nabla v_\varepsilon} =\int_{\Omega_{\overline{\varepsilon}}}\abs{\nabla u_\varepsilon} &= \int_{\Omega}\abs{\nabla u} + \int_{\Sigma_\varepsilon}\abs{\nabla u_\varepsilon} \leq \norma{\nabla u}_{L^1(\Omega)} + 2 \norma{u}_{L^1(\partial \Omega)} \end{split} \end{equation} by property $(d)$.
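As a sanity check, property $(d)$ can be verified exactly in one space dimension, where the projection $p(x)$ is locally constant, so the tangential term in $\nabla u_\varepsilon$ vanishes and $\int_{\Sigma_\varepsilon}\abs{\nabla u_\varepsilon}\,dx$ equals the sum of the boundary values for every $\varepsilon$. The following Python sketch (purely illustrative; the domain $(0,1)$ and the boundary values are hypothetical sample data) checks this numerically:

```python
import numpy as np

# Omega = (0, 1): the boundary integral of u is u(0) + u(1); boundary
# values below are hypothetical sample data
u0, u1 = 0.7, 1.3
for eps in (0.5, 0.1, 0.01):
    # on each side of the collar, u_eps(x) = u(p(x)) (1 - d(x, dOmega)/eps)
    # with p(x) constant, so |grad u_eps| = u(p(x)) / eps there
    m = 10_000
    h = eps / m                         # Riemann sum step on the collar
    right = np.sum(np.full(m, u1 / eps)) * h   # integral over (1, 1 + eps)
    left = np.sum(np.full(m, u0 / eps)) * h    # integral over (-eps, 0)
    total = left + right
    assert abs(total - (u0 + u1)) < 1e-9
```

Here the limit in $(d)$ is attained exactly for each $\varepsilon$, because the first term in the expression of $\nabla u_\varepsilon$ is identically zero in dimension one.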
\noindent Finally, by the Poincaré inequality and \eqref{gradiente_limitato}, there exists a constant $0<C=C(n,\Omega)$ such that \begin{equation*} \norma{v_\varepsilon}_{W^{1,1}(\Omega_{\overline{\varepsilon}}^\sharp)} \leq C \norma{\nabla v_\varepsilon}_{L^{1}(\Omega_{\overline{\varepsilon}}^\sharp)} \leq C(n,\Omega) \norma{u}_{W^{1,1}(\Omega)}. \end{equation*} Therefore, up to a subsequence, there exists a limit function $u^\star \in BV(\Omega_{\overline{\varepsilon}}^\sharp)$ such that (\cite[Proposition 3.13]{AFP}) \begin{equation*} v_\varepsilon \to u^\star \text{ in } L^1(\Omega_{\overline{\varepsilon}}^\sharp) \qquad \nabla v_{\varepsilon} \overset{*}{\rightharpoonup} \nabla u^{\star} \text{ in } \Omega_{\overline{\varepsilon}}^{\sharp}, \end{equation*} namely \[ \lim_{\varepsilon \to 0} \int_{\Omega_{\overline{\varepsilon}}^{\sharp}} \varphi \, d \nabla v_{\varepsilon} = \int_{\Omega_{\overline{\varepsilon}}^{\sharp}} \varphi \, d \nabla u^{\star} \qquad \forall \varphi \in C_0(\Omega_{\overline{\varepsilon}}^{\sharp},\R^n). \] Our aim is to show that $u^\star$ satisfies properties \eqref{eq_che_risolve_u_picche}, \eqref{norma_L1} and \eqref{norma_traccia_Lp}. \begin{description} \item[Inside] Obvious by \eqref{gradienti_uguali_dentro}. \item[Boundary] By \eqref{eps_gn} and \eqref{gradienti_uguali_dentro}, we have \begin{equation*} \int_{\Sigma_{\varepsilon}} \abs{\nabla u_\varepsilon} = \int_{\Sigma^\sharp_{\varepsilon}} \abs{\nabla v_\varepsilon}. \end{equation*} Now, for $t>0$, setting $\Gamma_t= \Set{d(x, \Omega) = t}$, $\Gamma_t^{\sharp} = \{d(x, \Omega^{\sharp}) = t \}$, $r=\displaystyle{\biggl( \frac{\abs{\Omega}}{\omega_n} \biggr)^{\frac{1}{n}}}$ and recalling that $v_{\varepsilon}$ is radially symmetric, we have \begin{gather*} \int_{\Sigma^\sharp_{\varepsilon}} \abs{\nabla v_\varepsilon} = \int_r^{r+\delta} \int_{\Gamma^\sharp_t} \abs{\nabla v_\varepsilon} \, d \mathcal{H}^{n-1} \, dt = \int_r^{r+\delta} \bigl( - v'_\varepsilon(t) \bigr) \lvert \Gamma^\sharp_t \rvert \, dt.
\end{gather*} Therefore by monotonicity of $\lvert \Gamma_t^{\sharp} \rvert$ we have \[ \lvert \Gamma^\sharp_r \rvert v_{\varepsilon}(r) \leq \int_r^{r+\delta} \bigl( -v_{\varepsilon}'(t) \lvert \Gamma_t^{\sharp} \rvert \bigr) \, dt \leq \lvert \Gamma_{r+\delta}^{\sharp} \rvert v_{\varepsilon}(r) \] and since \[ \lvert \Gamma^\sharp_r \rvert v_{\varepsilon}(r) = \int_{\partial \Omega^{\sharp}} v_{\varepsilon} \, d\mathcal{H}^{n-1}, \] using the fact that $v_{\varepsilon} \to u^{\star}$ in $L^{1}(\Omega^{\sharp})$, \eqref{gradienti_uguali_dentro} and the continuity of the trace embedding of $W^{1,1}(\Omega^{\sharp})$ into $L^1(\partial \Omega^{\sharp})$, in the end we have \[ \int_{\Sigma^\sharp_{\varepsilon}} \abs{\nabla v_\varepsilon} \to \int_{\partial \Omega^{\sharp}} \abs{u^{\star}} \, d\mathcal{H}^{n-1}. \] Using property $(d)$ we obtain \begin{equation*} \int_{\partial \Omega} \abs{u} \, d \mathcal{H}^{n-1}= \int_{\partial \Omega^\sharp} \lvert u^\star \rvert \, d \mathcal{H}^{n-1}. \end{equation*} In the end, $u^\star$ satisfies \begin{equation} \begin{cases} \abs{\nabla u^\star} = \abs{\nabla u}_{\sharp} &\text{ in } \Omega^{\sharp} \\ u^\star = \cfrac{\displaystyle{ \int_{\partial \Omega} \abs{u} \, d \mathcal{H}^{n-1}}}{\displaystyle{\lvert \partial \Omega^{\sharp} \rvert}} &\text{ on } \partial \Omega^{\sharp}, \end{cases} \end{equation} which proves \eqref{eq_che_risolve_u_picche}.
\item[$L^p$ trace estimate] Now we compare the $L^p$ traces of $u^{\star}$ and $u$: by H\"older's inequality \begin{align*} \int_{\partial \Omega^{\sharp}} (u^{\star})^p \, d \mathcal{H}^{n-1} & = \frac{1}{\abs{\partial \Omega^{\sharp} }^p } \int_{\partial \Omega^{\sharp}} \biggl(\int_{\partial \Omega} u \, d \mathcal{H}^{n-1} \biggr)^p \\ & = \frac{\displaystyle{ \biggl( \int_{\partial \Omega} u \, d \mathcal{H}^{n-1} \biggr)^p } }{ \abs{\partial \Omega^{\sharp} }^{p-1} } \\ & \leq \cfrac{\displaystyle{ \biggl( \int_{\partial \Omega} u^p \, d \mathcal{H}^{n-1} \biggr) \abs{\partial \Omega}^{p-1} } }{ \abs{\partial \Omega^{\sharp}}^{p-1} }. \end{align*} Therefore \begin{equation} \lvert \partial \Omega^{\sharp} \rvert^{p-1} \int_{\partial \Omega^{\sharp}} (u^{\star})^p \, d \mathcal{H}^{n-1} \leq \abs{\partial \Omega}^{p-1} \int_{\partial \Omega} u^p \, d \mathcal{H}^{n-1}, \end{equation} which proves \eqref{norma_traccia_Lp}. \end{description} Furthermore, since \begin{equation*} \norma{u_\varepsilon}_{L^1(\Omega_{\overline{\varepsilon}})} \to \norma{u}_{L^1(\Omega_{\overline{\varepsilon}})} \quad \text{ and } \quad \norma{v_\varepsilon}_{L^1(\Omega_{\overline{\varepsilon}}^\sharp)} \to \lVert{u^\star}\rVert_{L^1(\Omega_{\overline{\varepsilon}}^\sharp)}, \end{equation*} we can pass to the limit $\varepsilon \to 0$ in \eqref{Giarrusso_Nunziante_con_u_eps} and we get \begin{equation*} \norma{u}_{L^1(\Omega)} \leq \lVert{u^\star}\rVert_{L^1(\Omega^\sharp)}, \end{equation*} which proves \eqref{norma_L1}. \item[\textbf{Step 2}] Now we remove the extra assumption $u \geq \sigma >0$ by defining \begin{equation*} u_\sigma := u+ \sigma. \end{equation*} Then $u_{\sigma}$ is strictly positive in $\Omega$ and we can apply the previous result: there exists a function $v_\sigma$ in $\Omega^\sharp$ such that \begin{equation*} \begin{cases} \abs{\nabla v_\sigma} = \abs{\nabla u_\sigma}_{\sharp}=\abs{\nabla u}_{\sharp} &\text{ a.e.
in } \Omega^{\sharp} \\ v_\sigma = \cfrac{\displaystyle{ \int_{\partial \Omega} \abs{u_\sigma} \, d \mathcal{H}^{n-1}}}{\displaystyle{\lvert \partial \Omega^{\sharp} \rvert }} = \cfrac{\displaystyle{ \int_{\partial \Omega} \abs{u} \, d \mathcal{H}^{n-1}}}{\displaystyle{ \lvert \partial \Omega^{\sharp} \rvert }} + \sigma \frac{\displaystyle{\abs{\partial \Omega}}}{\displaystyle{\lvert \partial \Omega^\sharp \rvert}} &\text{ on } \partial \Omega^{\sharp}, \end{cases} \end{equation*} and \begin{equation} \label{eq_per_v_sigma_e_u_sigma} \norma{u_\sigma}_{L^1(\Omega)} \leq \lVert{v_\sigma}\rVert_{L^1(\Omega^\sharp)}, \qquad \lvert \partial \Omega^{\sharp} \rvert^{p-1} \int_{\partial \Omega^{\sharp}} v_{\sigma}^p \, d\mathcal{H}^{n-1} \leq \abs{\partial \Omega}^{p-1} \int_{\partial \Omega} u_{\sigma}^p \, d\mathcal{H}^{n-1}. \end{equation} If we define \begin{equation*} u^\star:= v_\sigma - \sigma \frac{\displaystyle{\abs{\partial \Omega}}}{\displaystyle{\lvert \partial \Omega^\sharp \rvert}}, \end{equation*} then $u^{\star}$ solves \begin{equation} \begin{cases} \abs{\nabla u^\star} = \abs{\nabla u}_{\sharp} &\text{ in } \Omega^{\sharp} \\ u^\star = \cfrac{\displaystyle{ \int_{\partial \Omega} \abs{u} \, d \mathcal{H}^{n-1}}}{\displaystyle{\lvert \partial \Omega^{\sharp} \rvert }} &\text{ on } \partial \Omega^{\sharp}. \end{cases} \end{equation} Sending $\sigma \to 0$ in \eqref{eq_per_v_sigma_e_u_sigma} we have \begin{align*} \norma{u}_{L^1(\Omega)} &\leq \lVert{u^\star}\rVert_{L^1(\Omega^\sharp)} \\ \lvert \partial \Omega^{\sharp} \rvert^{p-1}\int_{\partial \Omega^{\sharp}} (u^{\star})^p \, d\mathcal{H}^{n-1} & \leq \abs{\partial \Omega}^{p-1} \int_{\partial \Omega} u^p \, d\mathcal{H}^{n-1}. \end{align*} \item[\textbf{Step 3}] Now we remove the assumption on the regularity of $\Omega$. Let $\Omega$ be a bounded, open and Lipschitz set, and $u \in W^{1,\infty}(\Omega)$.
Then there exists a sequence $\Set{\Omega_k} \subset \R^n$ of open sets with $C^{2}$ boundary such that $\Omega \subset \Omega_k, \; \forall k \in \mathbb{N}$ (for instance, one can mollify $\chi_{\Omega}$ and take a suitable superlevel set) and \[ \abs{\Omega_k \, \triangle \, \Omega} \to 0 \qquad \mathcal{H}^{n-1}(\partial \Omega_k) \to \mathcal{H}^{n-1}(\partial \Omega) \qquad \text{ for } k \to +\infty . \] Let $\tilde{u}$ be an extension of $u$ to $\R^n$ such that \[ \tilde{u} \rvert_{\Omega} \equiv u, \qquad \norma{\tilde{u}}_{W^{1,\infty}(\R^n)} \leq C \norma{u}_{W^{1,\infty}(\Omega)}. \] \noindent We define \[ u_k = \tilde{u} \chi_{\Omega_k}, \] and clearly $u_k = u$ in $\Omega$. By the previous step, we can construct $u_{k}^{\star} \in W^{1,\infty}(\Omega_k^{\sharp})$ such that it is radial, $\abs{\nabla u_k}_* = \abs{\nabla u_k^{\star}}_*$ and \begin{align} \label{confronto_norme_L^1_Lipschitz} \norma{u_k}_{L^1(\Omega_k)} &\leq \lVert u_k^{\star} \rVert_{L^1(\Omega_k^{\sharp})} \\ \label{tracce_uguali_Lipschitz} \int_{\partial \Omega_k} u_k \, d\mathcal{H}^{n-1} & =\int_{\partial \Omega_k^{\sharp}} u_k^{\star} \, d\mathcal{H}^{n-1}\\ \label{confronto_tracce_Lipschitz} \lvert \partial \Omega_k^{\sharp} \rvert^{p-1} \int_{\partial \Omega_k^{\sharp}} (u_k^{\star})^p \, d \mathcal{H}^{n-1} &\leq \abs{\partial \Omega_k}^{p-1} \int_{\partial \Omega_k} u_k^p \, d \mathcal{H}^{n-1} \end{align} Therefore, since $\lVert u_k \rVert_{W^{1,p}(\Omega_k)} \leq M$ for all $p$, the sequence $\Set{u_k^{\star}}$ is equibounded in $W^{1,p}(\Omega^\sharp)$ and it has a subsequence which converges strongly in $L^p$ and weakly in $W^{1,p}$ to a function $w$. Let us prove that $\abs{\nabla u}$ and $\abs{\nabla w}$ have the same rearrangement.
Indeed, \[ \limsup_k \, \bigl \lVert \abs{\nabla u_k^{\star}} - \abs{\nabla u}_{\sharp} \bigr \rVert_{L^p(\Omega^{\sharp})} \leq \lim_k \, \bigl \lVert (f_k)_{\sharp} - f_{\sharp} \bigr \rVert_{L^p(\R^n)} \] where \[ f (x) = \begin{cases} \abs{\nabla \tilde{u}} & \text{in }\Omega \\ \lVert \nabla \tilde{u} \rVert_{L^{\infty}(\R^n)} & \text{in } \R^n \setminus \Omega \end{cases} \qquad \text{ and } f_k = \begin{cases} \abs{\nabla u_k} &\text{in } \Omega_k \\ \lVert \nabla \tilde{u} \rVert_{L^{\infty}(\R^n)} & \text{in }\R^n \setminus \Omega_k \end{cases} \] So using \eqref{eq_riarr_diminuiscono_distanza_Lp} we have \[ \bigl \lVert (f_k)_{\sharp} - f_{\sharp} \bigr \rVert_{L^p(\R^n)} \leq \lVert f_k - f \rVert_{L^p(\R^n)} = \lVert f_k - f \rVert_{L^p(\Omega_k \setminus \Omega)} \leq 2 \lVert \nabla \tilde{u} \rVert_{L^{\infty}(\R^n)} \abs{\Omega_k \setminus \Omega}^{\frac{1}{p}}, \] which tends to $0$ as $k \to +\infty$ by the fact that $\abs{\Omega_k \triangle \Omega} \to 0$. \noindent Hence, the functions $\nabla w$ and $\nabla u$ have the same rearrangement, by the uniqueness of the weak limit in $\Omega^{\sharp}$. In the end, passing to the limit $k \to +\infty$ in \eqref{confronto_norme_L^1_Lipschitz}, \eqref{tracce_uguali_Lipschitz} and \eqref{confronto_tracce_Lipschitz}, we have \begin{align*} \lVert u \rVert_{L^1(\Omega)} &\leq \lVert w \rVert_{L^1(\Omega^{\sharp})} \\ \int_{\partial \Omega} u \, d \mathcal{H}^{n-1} &= \int_{\partial \Omega^{\sharp}} w \, d\mathcal{H}^{n-1} \\ \lvert \partial \Omega^{\sharp} \rvert^{p-1} \int_{\partial \Omega^{\sharp}} (w)^p \, d\mathcal{H}^{n-1} &\leq \lvert \partial \Omega \rvert^{p-1} \int_{\partial \Omega} u^p \, d\mathcal{H}^{n-1} \end{align*} Hence $w= u^\star$. \item[\textbf{Step 4}] Finally, we proceed by removing the assumption $u \in W^{1,\infty}(\Omega)$. If $u \in W^{1,p}(\Omega)$, by the Meyers-Serrin theorem, there exists a sequence $\{ u_k \} \subset C^{\infty}(\Omega) \cap W^{1,p}(\Omega)$ such that $u_k \to u$ in $W^{1,p}(\Omega)$.
We can apply the previous step to obtain $u_k^{\star} \in W^{1,\infty}(\Omega^{\sharp})$ such that $\abs{\nabla u_k}$ and $\abs{\nabla u_k^{\star}}$ are equally distributed and \begin{align} \label{confronto_norme_u_k} \norma{u_k}_{L^1(\Omega)} &\leq \norma{u_k^{\star}}_{L^1(\Omega^{\sharp})} & & \forall k \in \mathbb{N} \\ \label{uguaglianza_tracce_u_k} \int_{\partial \Omega} u_k \, d\mathcal{H}^{n-1} & = \int_{\partial \Omega^{\sharp}} u_k^{\star} \, d\mathcal{H}^{n-1} & & \forall k \in \mathbb{N} \\ \label{disug_tracce_u_k} \lvert \partial \Omega^{\sharp} \rvert^{p-1} \int_{\partial \Omega^{\sharp}} (u_k^{\star})^p \, d\mathcal{H}^{n-1} & \leq \abs{\partial \Omega}^{p-1} \int_{\partial \Omega} (u_k)^p \, d\mathcal{H}^{n-1} && \forall k \in \mathbb{N} \end{align} Using the same reasoning as in the previous step, there exists a function $w$ such that, up to a subsequence, \[ u_k^{\star} \to w \text{ in } L^p(\Omega^{\sharp}) \qquad \nabla u_k^{\star} \rightharpoonup \nabla w \text{ in } L^p(\Omega^{\sharp}; \R^n) \] and $\abs{\nabla w}$ has the same rearrangement as $\abs{\nabla u}$.
Finally, sending $k \to +\infty$ in \eqref{confronto_norme_u_k}, \eqref{uguaglianza_tracce_u_k} and \eqref{disug_tracce_u_k}, we have \begin{align*} \norma{u}_{L^1(\Omega)} &\leq \lVert w \rVert_{L^1(\Omega^{\sharp})} \\ \int_{\partial \Omega} u \, d\mathcal{H}^{n-1} & = \int_{\partial \Omega^{\sharp}} w \, d\mathcal{H}^{n-1} \\ \lvert \partial \Omega^{\sharp} \rvert^{p-1} \int_{\partial \Omega^{\sharp}} (w)^p \, d\mathcal{H}^{n-1} &\leq \lvert \partial \Omega \rvert^{p-1} \int_{\partial \Omega} u^p \, d\mathcal{H}^{n-1} \end{align*} Hence $w= u^\star$.\qedhere \end{enumerate} \end{proof} \section{An application to torsional rigidity} \label{Section_4} Let $\beta >0$, let $\Omega \subset \R^n$ be a bounded and open set with Lipschitz boundary and let us consider the functional \begin{equation} \mathcal{F}_{\beta}(\Omega, w) = \cfrac{ \displaystyle{ \int_{\Omega} \abs{\nabla w}^2 \, dx + \beta \abs{\partial \Omega} \, \int_{\partial \Omega} w^2 \, d\mathcal{H}^{n-1} } }{\displaystyle{ \biggl(\int_{\Omega} w \, dx \biggr)^2 }} \qquad w \in W^{1,2}(\Omega) \end{equation} and the associated minimum problem \begin{equation} T(\Omega,\beta) = \min_{ w \in W^{1,2}(\Omega) } \mathcal{F}_{\beta}(\Omega, w). \end{equation} The minimizer $u$ is a weak solution to \begin{equation} \label{eq_soluzione_debole_Omega} \begin{cases} -\Delta u = 1 &\text{in } \Omega \\[1ex] \displaystyle{\parzder{u}{\nu} + \beta \abs{\partial \Omega } \, u = 0} &\text{on } \partial \Omega \end{cases} \end{equation} Our aim is to compare $T(\Omega, \beta)$ with \[ T(\Omega^{\sharp},\beta) : = \min_{v \in W^{1,2}(\Omega^{\sharp})} \mathcal{F}_{\beta}(\Omega^{\sharp}, v) = \min_{v \in W^{1,2}(\Omega^{\sharp})} \cfrac{ \displaystyle{ \int_{\Omega^\sharp} \abs{\nabla v}^2 \, dx +\beta \lvert \partial \Omega^{\sharp} \rvert \, \int_{\partial \Omega^\sharp} v^2 \, d\mathcal{H}^{n-1} } }{\displaystyle{ \biggl( \int_{\Omega^\sharp} v \, dx \biggr)^2 }} \] where the minimizer is a weak solution to \begin{equation}
\label{eq_soluzione_debole_Omega_sharp} \begin{cases} -\Delta z = 1 &\text{in } \Omega^{\sharp} \\[1ex] \displaystyle{\parzder{z}{\nu} + \beta \lvert \partial \Omega^{\sharp} \rvert \, z = 0} &\text{on } \partial \Omega^{\sharp} \end{cases} \end{equation} \begin{proof}[Proof of Corollary \ref{corollario_torsione_pesata}] Let $w \in W^{1,2}(\Omega)$; by Theorem \ref{Teorema_che_scriveremo} there exists a radial $w^{\star} \in W^{1,\infty}(\Omega^{\sharp})$ such that \[ \int_{\Omega} \abs{\nabla w}^2 \, dx = \int_{\Omega^{\sharp}} \lvert \nabla w^{\star} \rvert^2 \, dx \qquad \int_{\Omega} \abs{w} \, dx \leq \int_{\Omega^{\sharp}} \lvert w^{\star} \rvert \, dx \qquad \lvert \partial \Omega^{\sharp} \rvert \, \int_{\partial \Omega^{\sharp}} (w^\star)^2 \leq \abs{ \partial \Omega } \, \int_{\partial \Omega} w^2 \] Therefore \[ \mathcal{F}_{\beta}(\Omega, w) \geq \mathcal{F}_{\beta}(\Omega^{\sharp}, w^{\star}). \] Passing to the infimum first on the right-hand side and then on the left-hand side, we obtain \[ T(\Omega, \beta) \geq T(\Omega^{\sharp},\beta). \] \end{proof} \begin{oss} We highlight that all the arguments also work in the non-linear case, where the functional \begin{equation} \mathcal{F}_{\beta,p}(w) = \cfrac{ \displaystyle{ \int_{\Omega} \abs{\nabla w}^p \, dx + \beta \abs{\partial \Omega}^{p-1} \, \int_{\partial \Omega} w^p \, d\mathcal{H}^{n-1} } }{\displaystyle{ \biggl(\int_{\Omega} w \, dx \biggr)^p }} \qquad \text{for } w \in W^{1,p}(\Omega) \end{equation} is considered. \end{oss} \vspace{1 em} \section{\texorpdfstring{A weighted $L^1$ comparison}{}} \label{Section_5} Let us now show how to extend the result of \cite{Ta6} to the case of functions which do not vanish on the boundary. \begin{teorema} \label{Teorema_con_f} Let $\Omega \subset \R^n$ be a bounded, open and Lipschitz set. Let $f \in L^{\infty}(\Omega)$ be a function such that \begin{equation} \label{condizione_per_g} f^*(t) \geq \biggl( 1-\frac{1}{n} \biggr) \frac{1}{t} \int_0^t f^*(s) \, ds \qquad \forall t \in [0, \abs{\Omega}].
\end{equation} If $u \in W^{1,p}(\Omega)$ and $u^{\star}$ is the function given by Theorem \ref{Teorema_che_scriveremo}, then \begin{equation} \label{f-giiann} \int_{\Omega} f(x) u (x) \, dx \leq \int_{\Omega^{\sharp}} f^{\sharp} (x) u^{\star} (x) \, dx. \end{equation} \end{teorema} \begin{proof} If $u \in W_0^{1,p}(\Omega)$, the result is contained in \cite{Ta6}. We recall it for the sake of completeness. By \cite[eq. 2.7]{GN} it is known that \begin{equation} \label{Giarrusso_Nunziante_puntuale} u^*(s) \leq \frac{1}{n \omega_n^{\frac{1}{n}}} \int_s^{\abs{\Omega}} \frac{F(t)}{t^{1-\frac{1}{n}}} \, dt \end{equation} where $F$ is a function such that \[ \int_0^s F(t) \, dt = \int_{D(s)} \abs{\nabla u(x)} \, dx \] with $D(s)$ defined in Section \ref{Section_2}. \noindent Multiplying both sides of \eqref{Giarrusso_Nunziante_puntuale} by $f^*(s)$, integrating from $0$ to $\abs{\Omega}$ and using Fubini's theorem, we get \begin{equation} \label{integrale_f_u} \int_0^{\abs{\Omega}} f^*(s) u^*(s) \, ds \leq \frac{1}{n\omega_n^{\frac{1}{n}}} \int_0^{\abs{\Omega}} f^*(s) \biggl( \int_s^{\abs{\Omega}} \frac{F(t)}{t^{1-\frac{1}{n}}} \, dt \biggr) \, ds = \frac{1}{n\omega_n^{\frac{1}{n}}} \int_0^{\abs{\Omega}} F(t) \underbrace{\biggl( \frac{1}{t^{1-\frac{1}{n}}} \int_0^t f^*(s) \, ds \biggr)}_{:=g(t)} \, dt \end{equation} Let us suppose that $g(t)$ is non-decreasing, so that $g_*(s) = g(s)$; by Lemma \ref{lemma_Alvino_Trombetti} there exists a sequence $\{F_k \}$ such that $F_k$ has the same rearrangement as $\abs{\nabla u}$ and $F_k \rightharpoonup F$ in $L^p([0,\abs{\Omega}])$.
Therefore \[ \int_0^{\abs{\Omega}} F(t) g(t) \, dt = \lim_k \int_0^{\abs{\Omega}} F_k(t) g(t) \, dt. \] Using the Hardy-Littlewood inequality we have \[ \lim_k \int_0^{\abs{\Omega}} F_k(t) g(t) \, dt \leq \int_0^{\abs{\Omega}} \abs{\nabla u}_*(t) g_*(t) \, dt = \int_0^{\abs{\Omega}} \abs{\nabla u}_*(t) g(t) \, dt. \] Hence, by \eqref{integrale_f_u} and Fubini's theorem, we obtain \begin{align*} \int_0^{\abs{\Omega}} f^*(t) u^*(t) \, dt &\leq \frac{1}{n\omega_n^{\frac{1}{n}}} \int_0^{\abs{\Omega}} \abs{\nabla u}_*(t) \, g(t) \, dt \\ & = \frac{1}{n\omega_n^{\frac{1}{n}}} \int_0^{\abs{\Omega}} \abs{\nabla u}_*(t) \Biggl( \frac{1}{t^{1-\frac{1}{n}}} \int_0^t f^*(s) \, ds \Biggr) \, dt \\ & = \int_0^{\abs{\Omega}} f^*(s) \biggl( \frac{1}{n\omega_n^{\frac{1}{n}}} \int_s^{\abs{\Omega}} \frac{\abs{\nabla u}_*(t)}{t^{1-\frac{1}{n}}} \, dt \biggr) \, ds \\ & = \int_0^{\abs{\Omega}} f^*(s) (u^{\star})^*(s) \, ds \end{align*} Therefore, by the Hardy-Littlewood inequality, we have \begin{equation} \label{vafammoc} \int_{\Omega} f(x)u(x) \, dx \leq \int_0^{\abs{\Omega}} f^*(t) u^*(t) \, dt \leq \int_0^{\abs{\Omega}} f^*(s) (u^{\star})^*(s) \, ds = \int_{\Omega^{\sharp}} f^{\sharp}(x) \, u^{\star}(x) \, dx \end{equation} It remains to deal with the assumption that $g$ is non-decreasing, that is \begin{equation*} g'(t) \geq 0 \iff \frac{d}{dt} \biggl( \frac{1}{t^{1-\frac{1}{n}}} \int_0^t f^*(s) \, ds \biggr) = - \frac{n-1}{n} \frac{1}{t^{2-\frac{1}{n}}} \biggl( \int_0^t f^*(s) \, ds \biggr) + \frac{1}{t^{1-\frac{1}{n}}}f^*(t) \geq 0, \end{equation*} that is, if and only if \begin{equation*} f^*(t) \geq \biggl( 1-\frac{1}{n} \biggr) \frac{1}{t} \int_0^t f^*(s) \, ds. \end{equation*} \noindent Now let us deal with $u \notin W^{1,p}_0(\Omega)$. Suppose that $u \in C^2(\Omega)$ is a non-negative function, that $\Omega$ has $C^2$ boundary and that $f$ satisfies \eqref{condizione_per_g}.
Proceeding as in Step 1 of the proof of Theorem \ref{Teorema_che_scriveremo}, for every $\varepsilon>0$ we can construct $u_{\varepsilon}$ which coincides with $u$ in $\Omega$ and is zero on $\partial \Omega_{\varepsilon}$. Moreover we can extend $f$ to $\Omega_{\varepsilon}$ simply by defining \[ f_{\varepsilon} (x) = \begin{cases} f(x) &\text{in } \Omega \\ f^*(\abs{\Omega}) &\text{in } \Omega_{\varepsilon} \setminus \Omega \end{cases} \] The rearrangement, for every $\varepsilon>0$, is \[ f_{\varepsilon}^*(t) = \begin{cases} f^*(t) &\text{in } \bigl[ 0,\abs{\Omega} \bigr ] \\[1ex] f^*(\abs{\Omega}) &\text{in } \bigl[ \abs{\Omega}, \abs{\Omega_{\varepsilon}} \bigr ], \end{cases} \] so we just have to check \eqref{condizione_per_g} for $t \in \bigl[ \abs{\Omega}, \abs{\Omega_{\varepsilon}} \bigr]$, namely \begin{equation} \label{27_fuori} f_{\varepsilon}^*(t) \geq \biggl( \frac{n-1}{n} \biggr) \frac{1}{t} \int_0^{t} f_{\varepsilon}^*(s) \, ds. \end{equation} Keeping in mind that $f$ verifies \eqref{condizione_per_g}, we have \[ f_{\varepsilon}^*(t)= f^{\ast}(\abs{\Omega}) \geq\biggl( \frac{n-1}{n} \biggr) \frac{1}{\abs{\Omega}} \int_0^{\abs{\Omega}} f^*(s) \, ds. \] If we show that \[ \frac{1}{\abs{\Omega}} \int_0^{\abs{\Omega}} f^*(s) \, ds \geq\left[ \frac{1}{t} \int_0^{\abs{\Omega}} f^*(s) \, ds + \frac{t-\abs{\Omega}}{t} f^*(\abs{\Omega}) \right] =\frac{1}{t} \int_0^{t} f_\varepsilon^*(s) \, ds, \] then \eqref{27_fuori} is true. By direct calculations, \[ \frac{t-\abs{\Omega}}{t \abs{\Omega}} \int_0^{\abs{\Omega}} f^*(s) \, ds \geq \frac{t-\abs{\Omega}}{t} f^*(\abs{\Omega}) \iff \frac{1}{\abs{\Omega}} \int_0^{\abs{\Omega}} f^*(s) \, ds \geq f^*(\abs{\Omega}), \] which is true since $f^*$ is decreasing.
So, for every $\varepsilon >0$ we can apply the first part of the theorem, obtaining \[ \int_{\Omega_{\varepsilon}} u_{\varepsilon} f_{\varepsilon} \, dx \leq \int_{\Omega_{\varepsilon}^{\sharp}} v_{\varepsilon} f_{\varepsilon}^{\sharp} \, dx. \] Sending $\varepsilon \to 0$ we get \[ \int_{\Omega} u f \, dx \leq \int_{\Omega^{\sharp}} u^{\star} f^{\sharp} \, dx. \] An argument similar to Steps 3 and 4 of the proof of Theorem \ref{Teorema_che_scriveremo} leads to the general statement. \end{proof} \begin{oss} We observe that a necessary condition is $f >0 $ in $\Omega$, since otherwise we would have \[ 0 = f^*(\abs{\Omega}) \geq \biggl( 1-\frac{1}{n} \biggr) \frac{1}{\abs{\Omega}} \int_0^\abs{\Omega} f^*(s) \, ds > 0. \] We remark that a sufficient condition is that the essential oscillation of $f$ is bounded. More precisely \begin{equation} \label{condizione_per_g_crescente} \essosc \abs{f} := \frac{\displaystyle{\esssup_{x \in \Omega} \abs{f(x)}}}{\displaystyle{\essinf_{x \in \Omega} \abs{f(x)}}} \leq \frac{n}{n-1}, \end{equation} indeed \[ f^*(t)\geq f^*(\abs{\Omega}) = \frac{f^*(\abs{\Omega})}{f^*(0)}f^*(0)\geq \biggl( 1-\frac{1}{n} \biggr) \frac{1}{t} \int_0^t f^*(0) \, ds \geq \biggl( 1-\frac{1}{n} \biggr) \frac{1}{t} \int_0^t f^*(s) \, ds \qquad \forall t \in [0,\abs{\Omega}]. \] Clearly, condition \eqref{condizione_per_g_crescente} is satisfied whenever $f$ is constant.
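The sufficient condition \eqref{condizione_per_g_crescente} can also be tested numerically: for randomly generated decreasing profiles $f^*$ whose oscillation stays strictly below $n/(n-1)$, inequality \eqref{condizione_per_g} holds with a uniform margin. A Python sketch (illustrative only; the profiles and parameters are hypothetical sample data, with $\abs{\Omega}=1$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
ratio_bound = n / (n - 1)
t = np.linspace(1e-3, 1.0, 2000)

for trial in range(20):
    # random decreasing profile f* with oscillation f*(0)/f*(|Omega|)
    # strictly below n/(n-1)
    lo = 1.0
    hi = lo * (1 + 0.95 * rng.random() * (ratio_bound - 1))
    fstar = np.sort(lo + (hi - lo) * rng.random(t.size))[::-1]
    # running average (1/t) int_0^t f*(s) ds via a cumulative trapezoid rule,
    # completing the initial piece [0, t_0] with the first sample
    cum = np.concatenate(
        [[0.0], np.cumsum((fstar[1:] + fstar[:-1]) / 2 * np.diff(t))])
    avg = (cum + fstar[0] * t[0]) / t
    assert np.all(fstar >= (1 - 1 / n) * avg - 1e-9)
```

The margin in the last assertion reflects the chain of inequalities above: the running average never exceeds $f^*(0)$, while $f^*(t)$ never drops below $(1-\frac1n)f^*(0)$ when \eqref{condizione_per_g_crescente} holds.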
\end{oss} Theorem \ref{Teorema_con_f} allows us to compare the minimum of \[ T_{\beta, f}(\Omega) : = \min_{w \in W^{1,2}(\Omega)}\left\{\frac{1}{2} \int_{\Omega} \abs{\nabla w}^2 \, dx + \frac{\beta \abs{\partial \Omega}}{2} \, \int_{\partial \Omega} w^2 \, d\mathcal{H}^{n-1} - \int_{\Omega} wf \, dx\right\} \] with the one of \[ T_{\beta, f}(\Omega^{\sharp}) : =\min_{v \in W^{1,2}(\Omega^{\sharp})} \left\{\frac{1}{2} \int_{\Omega^{\sharp}} \abs{\nabla v}^2 \, dx + \frac{\beta \lvert \partial \Omega^{\sharp} \rvert }{2} \, \int_{\partial \Omega^{\sharp}} v^2 \, d\mathcal{H}^{n-1} - \int_{\Omega^{\sharp}} vf^{\sharp} \, dx \right\}. \] \begin{corollario} Let $\beta>0$ and let $\Omega \subset \R^n$ be a bounded, open and Lipschitz set. If $f$ satisfies \eqref{condizione_per_g}, then, denoting with $\Omega^{\sharp}$ the ball centered at the origin with the same measure as $\Omega$, it holds that \[ T_{\beta,f}(\Omega) \geq T_{\beta, f}(\Omega^{\sharp}). \] \end{corollario} Moreover we can use Theorem \ref{Teorema_con_f} to get a comparison between the Lorentz norms of $u$ and $u^{\star}$. \begin{corollario} Let $1 \leq p \leq \frac{n}{n-1}$; under the assumptions of Theorem \ref{Teorema_che_scriveremo} it holds that \begin{equation} \label{eq_norme_Lorentz_L_p1} \lnorma{u}_{L^{p,1}(\Omega)} \leq \lnorma{u^{\star}}_{L^{p,1}(\Omega^{\sharp})} \end{equation} where $u^{\star}$ is the function given by Theorem \ref{Teorema_che_scriveremo}. \end{corollario} \begin{proof} Let us write the $L^{p,1}$ norm of $u$ explicitly: \[ \lnorma{u}_{L^{p,1} (\Omega)} = \int_0^{+\infty} t^{\frac{1}{p}-1} u^*(t) \, dt = \int_0^{+\infty} t^{-\frac{1}{p'}} u^*(t) \, dt. \] Hence, by Theorem \ref{Teorema_con_f}, it is sufficient to check that \begin{equation} \label{condiz_per_norma_lorentz} t^{-\frac{1}{p'}} - \frac{n-1}{n} \frac{1}{t} \int_0^t s^{-\frac{1}{p'}} \, ds \geq 0.
\end{equation} If we compute \[ \frac{1}{t} \int_0^t s^{-\frac{1}{p'}} \, ds = \frac{1}{t} p \, t^{-\frac{1}{p'}+1} = p \, t^{-\frac{1}{p'}}, \] then we have \[ t^{-\frac{1}{p'}} - \frac{n-1}{n} \frac{1}{t} \int_0^t s^{-\frac{1}{p'}} \, ds =t^{-\frac{1}{p'}} \biggl( 1-\frac{n-1}{n}p \biggr) \geq 0 \iff p \leq \frac{n}{n-1}, \] so \eqref{condiz_per_norma_lorentz} is true and we can apply Theorem \ref{Teorema_con_f}, obtaining \[ \int_0^{+\infty} t^{-\frac{1}{p'}} u^*(t) \, dt \leq \int_0^{+\infty} t^{-\frac{1}{p'}} (u^{\star})^*(t) \, dt, \] which is \eqref{eq_norme_Lorentz_L_p1}. \end{proof} \begin{oss} We emphasize that the bound $p \leq \frac{n}{n-1}$ is the best we can hope for the Lorentz norm $L^{p,1}$. Indeed, if by contradiction \eqref{eq_norme_Lorentz_L_p1} held for some $p>\frac{n}{n-1}$, then by the embedding of Lorentz spaces, $L^{p,1}(\Omega) \subseteq L^{p,p}(\Omega) = L^p(\Omega)$, we would obtain a contradiction. \end{oss} \vspace{1em} \begin{open} We conclude with some open problems: \begin{enumerate} \item Is it possible to obtain an $L^\infty$ comparison in the non-zero trace setting? \item One may investigate whether a comparison of this kind holds for other $L^p$ norms, possibly changing the type of rearrangement of the gradient, in the same spirit as \cite{ALT} and \cite{Cia}. \end{enumerate} \end{open} \printbibliography[heading=bibintoc, title={References}] \renewcommand{\abstractname}{} \begin{abstract} \noindent \textsc{Dipartimento di Matematica e Applicazioni "R. Caccioppoli", Università degli Studi di Napoli "Federico II", Complesso Universitario Monte S. Angelo, via Cintia - 80126 Napoli, Italy.} \textsf{e-mail: [email protected]} \vspace{0.5cm} \noindent \textsc{Mathematical and Physical Sciences for Advanced Materials and Technologies, Scuola Superiore Meridionale, Largo San Marcellino 10, 80126 Napoli, Italy.} \textsf{e-mail: [email protected]} \end{abstract} \end{document}
\section{Introduction} \noindent Let $\Delta=\partial_\theta^2$ be the Laplacian operator on the unit circle ${\T}$. The maps $$T^r_t=e^{-t(-\Delta)^{\frac r2}}: e^{i2\pi k\theta}\rightarrow e^{-4\pi^2t|k|^r}e^{i2\pi k\theta}$$ define a semigroup of uniformly bounded operators on $L^\infty(\T)$ for any $0<r<\infty$. Moreover, for $0<r\leq 2$, the $T_t^r$ are positivity preserving contractions, which can be easily seen from their integral representations. For $r=1,2$, the semigroups $T_t^r$ are called the Poisson semigroup and the heat semigroup, respectively. They both play important roles, and very often complementary roles, in harmonic analysis. For some problems it is easier to work with heat semigroups because of the general Gaussian upper estimate of the heat kernels. Let ${\mathbb F}_n$, $2\leq n\leq\infty$, be the free group on $n$ generators. Let $\lambda_g$ be the left regular representation of $g\in {\mathbb F}_n$. One may consider the analogue of the classical Poisson or heat semigroups on the free group von Neumann algebra ${\mathcal L}({\mathbb F}_n)$, $$S_t^r: \lambda_g\rightarrow e^{-t|g|^r}\lambda_g, $$ with $|g|$ the reduced word length of $g$. U. Haagerup proved (see \cite{H79}) that for $r=1$, $(S_t^1)_{t\geq 0}$ is a semigroup of completely positive operators on ${\mathcal L}({\mathbb F}_n)$. For $0<r\leq 1$, the maps $S_t^r$ are therefore still unital completely positive (u.c.p.) by the theory of subordinated semigroups (see e.g. \cite{Y80}). For $r>1$, $S_t^r$ cannot be completely positive for all $t$, as the function $\phi_t(g)=e^{-t|g|^r}$ is in $\ell_1({\mathbb F}_2)$ for each $t$, and the positive definiteness of $\phi_t$ would imply the amenability of the free group ${\mathbb F}_2$ (see the proof of \cite[Theorem 2.6.8]{BN08}). This discussion settles the question of the complete positivity of the semigroup $S_t^r$. How about the complete boundedness (c.b.) of $S_t^r$? For $r\leq1$, $S_t^r$ have of course completely bounded norm $1$, since u.c.p. implies c.b.
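Haagerup's theorem that $g \mapsto e^{-t|g|}$ is positive definite on ${\mathbb F}_n$ can be illustrated numerically on a finite ball of ${\mathbb F}_2$: the kernel matrix $\bigl(e^{-t|g^{-1}h|}\bigr)_{g,h}$ must be positive semidefinite. The following Python sketch (an illustration only; the word-length cutoff and the value of $t$ are arbitrary choices) performs this check on all reduced words of length at most $2$:

```python
import itertools
import numpy as np

# generators a, b and their inverses A, B; free reduction cancels
# adjacent pairs x x^{-1}
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(w):
    out = []
    for ch in w:
        if out and out[-1] == INV[ch]:
            out.pop()
        else:
            out.append(ch)
    return "".join(out)

def dist(g, h):
    # word distance d(g, h) = |g^{-1} h|
    ginv = "".join(INV[ch] for ch in reversed(g))
    return len(reduce_word(ginv + h))

# all reduced words of length <= 2 in F_2 (1 + 4 + 12 = 17 words)
words = [""]
for L in (1, 2):
    for tup in itertools.product("aAbB", repeat=L):
        w = "".join(tup)
        if reduce_word(w) == w:
            words.append(w)

t = 0.7
K = np.array([[np.exp(-t * dist(g, h)) for h in words] for g in words])
# Haagerup: |.| is conditionally negative definite, so e^{-t|.|} is
# positive definite and the kernel matrix K is positive semidefinite
min_eig = np.min(np.linalg.eigvalsh(K))
assert min_eig > -1e-10
```

Of course this finite check proves nothing by itself; it merely makes Haagerup's positivity statement concrete, and the same computation with $|g|^r$, $r>1$, on larger balls is how one can experiment with the failure of complete positivity discussed above.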
with norm one on $C^*$-algebras. For $r>1$, it is not hard to see that $S_t^r$ are bounded with an upper bound $1+c t^{-\frac 1{ r}}$ for each $t>0$, by the fact that the projection on words of length at most $k$ has c.b. norm $\simeq k$. The question is then whether $S_t^r$ are (completely) bounded uniformly in $t$ for any (all) $r>1$. By duality, the principle of uniform boundedness and the pointwise convergence of $S_t^r$, asking whether $\sup_t \|S_t^r\| <\infty$ is equivalent to asking whether $S_t^r$, as an operator on the predual $L_1(\hat{\mathbb F}_n)$ of $\mathcal L({\mathbb F}_n)$, converges to the identity in the strong (or weak) operator topology when $t\rightarrow 0$. Similarly, $\sup_t \|S_t^r\|_{cb} <\infty$ if and only if $S_t^r$ converges to the identity in the stable point-norm (or stable point-weak) topology. A characterization of the completely bounded radial multipliers on free groups was given by Haagerup and Szwarc in terms of a certain Hankel matrix being of trace class (this was published in \cite{HSS10}, see also \cite{W95}). On the other hand, a famous result of V. V. Peller (\cite{P80}) states that a Hankel matrix belongs to the trace class iff the symbol function belongs to the Besov space $B_1^1$ (see Section 3). These results together provide a precise method for estimating the completely bounded norms of radial maps on free groups. However, the answer to the uniform complete boundedness of $S_t^r$ (for $r>1$) remained open, to the best knowledge of the authors, before the writing of this article. Besides the possible technicality of the corresponding estimation, another reason may be the general belief in a negative answer to the question. For example, Knudby proved in \cite[Theorem 1.6]{K14} that the symbol of a completely contractive semigroup of radial multipliers on $\mathbb{F}_n$ ($n \geq 2$) grows linearly.
Applied to the function $n\mapsto n^r -C$, which does not grow linearly if $r>1$, this implies that for each $r>1$ there does not exist a constant $C$ such that $\|S_t^r\|_{cb} \leq e^{t C}$ for all $t>0$. In the present work we record a proof that $S_t^r$ is a weak-* continuous analytic semigroup of completely bounded maps on the free group von Neumann algebras for any $r>0$, by a careful estimation of the corresponding Besov norm. We remark that by \cite{W95,HM12,M14,D13}, this also proves that the analogous semigroups on free products of groups, on free products of operator algebras, and on amalgamated free products of finite von Neumann algebras are completely bounded with a bound independent of $t$. We do not elaborate on this and refer to \cite{W95,HM12,M14,D13} for the definitions of radial multipliers on free products of groups and on (amalgamated) free products of operator algebras, and for details. The authors hope that the study of $S_t^r$ will benefit the recent research in harmonic analysis on noncommutative $L_p$ spaces (see \cite{JMX06,JM12,JMP}), as the classical case suggests. The main ingredient that we introduce is a result on the trace class norm of Hankel matrices with smooth symbol. A particular case of our result is the following (see Theorem \ref{R+} for a more precise statement). \begin{thm}\label{thm=R+_intro} Let $f\colon [0,\infty) \to \R$ be a bounded continuous function of class $C^2$ on $(0,\infty)$, and let $ \frac12\geq\alpha> 0$. Then the trace class norm of the matrix $(f(j+k) - f(j+k+1))_{j,k \geq 0}$ is less than $\frac{C}{\sqrt {\alpha}}\sqrt{\| x^{\frac 3 2 - \alpha} f''\|_{L^2(\R_+)} \| x^{\frac 3 2 + \alpha} f''\|_{L^2(\R_+)}}$ for some universal constant $C$. \end{thm} After we communicated the aforementioned result to him, Narutaka Ozawa asked us whether the same holds for other length functions on ${\mathbb F}_n$, or more generally for an arbitrary finitely generated hyperbolic group. To our surprise, the answer is yes.
Indeed we can extend to hyperbolic graphs the sufficient condition from \cite{W95,HSS10} for a radial multiplier to be bounded. This is a consequence of \cite{O08}. \begin{thm}\label{hyperbolicgraph} Let $\Gamma$ be a hyperbolic graph with bounded degree. Then there is a constant $C$ with the following property: for every $\dot{\phi}\colon\N \to \C$ such that the matrix \[ H=(\dot{\phi}(j+k)-\dot{\phi}(j+k+1))_{j,k \geq 0}\] belongs to the trace class $S^1$, the limit $\lim_n \dot{\phi}(n)$ exists and the map \[ A= (a_{x,y}) \in B(\ell^2 (\Gamma)) \mapsto (\dot{\phi}(d(x,y))a_{x,y}) \in B(\ell^2 (\Gamma))\] is bounded with norm less than $C \|H\|_{S^1} + \lim_n |\dot\phi(n)|$. \end{thm} In view of this theorem it is natural to wonder whether there are non-hyperbolic groups also satisfying the same criterion. It turns out that there are, namely the groups $\Z^d$ ($d \geq 1$) for their standard generating sets. This follows from some delicate estimates of $L^1(\R^d/\Z^d)$-norms that we prove in Section~\ref{section=Zd}. The main result of this paper can be summarized as follows. \begin{thm}\label{main-result} Let $\Gamma$ be a finitely generated hyperbolic group, and $|\cdot|$ the word length associated to a fixed finite generating set. Then there is a constant $C$ such that for every $r>0$ and every $z \in \C$ with positive real part, the multiplier $S^r_z\colon\lambda_g \mapsto e^{-z |g|^r} \lambda_g$ is completely bounded on $\mathcal L(\Gamma)$ with norm less than \[C(1+|\tan(\arg z)|)^{3/2} (1+r).\] \end{thm} The dependence of the constant on the argument of $z$ is probably not optimal, but the order $r$ is sharp as $r \to \infty$ (see Example \ref{ex=heat_sgp_real}). Our method applies to other radial multipliers.
In Example \ref{Riesz} we show that the Bochner-Riesz multipliers $\lambda_g \mapsto (1-\frac{|g|^2}{N^2})^z\chi_{\{|g|\leq N\}} \lambda_g$ are completely bounded on the noncommutative $L^p$ spaces associated with ${\cal L}(\Gamma)$ uniformly in $N$ if $|\frac 2p-1|< \mathrm{Re}(z)$, for any $1\leq p\leq \infty$ and any finitely generated hyperbolic group $\Gamma$. The same result holds for the Fej\'er-type multipliers $\lambda_g \mapsto (1-\frac{|g|}{N})^z\chi_{\{|g|\leq N\}} \lambda_g$. This article is organized as follows. In Section \ref{section=multipliers} we recall facts on Schur multipliers and prove Theorem \ref{hyperbolicgraph}. Section \ref{section=trace_class_estimates} contains estimates for the Schatten $1$-norm of Hankel matrices and the proof of Theorem \ref{main-result}. Section \ref{section=Zd} contains a proof that $\Z^d$ satisfies the conclusion of Theorem \ref{hyperbolicgraph}. In Section \ref{section=motivation}, we explain a motivation for studying $S_t^r$ and prove an end-point result on $H_\Sigma^\infty$-calculus. {\bf Notation:} We denote by $\N$ the set of nonnegative integers: $\N =\{0,1,2,\dots\}$. We denote by $S^1$ the Banach space of trace class operators on $\ell^2(\N)$. For a discrete group $\Gamma$ we denote by $L^p(\hat{\Gamma})$ the noncommutative $L^p$ space on the von Neumann algebra of $\Gamma$. \section{Radial multipliers on hyperbolic graphs}\label{section=multipliers} \subsection{Reminders on Schur and Fourier multipliers} We start with some reminders. If $X$ is a set, a function $\varphi\colon X \times X \to \C$ is a \emph{Schur multiplier} if the map $(a_{s,t})_{s,t \in X} \in B(\ell^2(X)) \mapsto (\varphi(s,t) a_{s,t})_{s,t \in X}$, denoted $M_\varphi$, is bounded on $B(\ell^2(X))$. The following proposition, which is essentially due to Grothendieck (\cite[Theorem 5.1]{Pis01}), characterizes the Schur multipliers.
\begin{prop}\label{Schur} Let $X$ be a nonempty set and assume that $\varphi: X\times X\to \C$ and $C\geq0$ are given. Then the following are equivalent: (i) $\varphi$ extends to a Schur multiplier on $B(\ell^2(X))$ with norm $\leq C$. (ii) There exist a Hilbert space $H$ and two bounded maps $P,Q:X\to H$ such that $$\varphi(x,y)=\langle P(x),Q(y)\rangle$$ and $\|P\|_\infty\|Q\|_\infty\leq C$, where $$\|P\|_\infty=\sup_{x\in X}\|P(x)\|, \qquad \|Q\|_\infty=\sup_{x\in X}\|Q(x)\|.$$ \end{prop} Let $\Gamma$ be a discrete group and $\phi \colon \Gamma \to \C$. The \emph{Fourier multiplier} $\lambda_g\mapsto \phi(g)\lambda_g$ is completely bounded on the group von Neumann algebra $\mathcal L(\Gamma)$ if and only if the associated function $\varphi(g,h)=\phi(h^{-1}g)$ is a Schur multiplier. In this case the c.b. norm of the Fourier multiplier is equal to the norm (and also the c.b. norm) of the Schur multiplier (see \cite{BF84}). Let us state an immediate consequence of Proposition \ref{Schur} that will be used later (see \cite[Theorem 6.1]{Pis01} for a complete characterization of Hankelian Schur multipliers). \begin{lemma}\label{lemma=HankelSchur} Let $(a_n)_{n \in \Z}$ be a finitely supported sequence of complex numbers. Then for each matrix $B = (b_{j,k})_{j,k \in \N} \in S^1$, \[ \|(a_{j+k} b_{j,k})_{j,k \in \N}\|_{S^1} \leq \left(\int_0^1 \Big|\sum_{n \in \Z} a_n e^{2i\pi n \theta}\Big| d\theta \right)\|B\|_{S^1}.\] \end{lemma} \begin{proof} Write $f(\theta) = \sum_{n \in \Z} a_n e^{2i\pi n \theta}$ and decompose $f= gh$ as a product of two $L^2$ functions with $\|g\|_{L^2} \|h\|_{L^2} = \|f\|_{L^1}$. We can write $a_{j+k} = \int f(\theta) e^{-2i\pi(j+k)\theta} d\theta = \int g(\theta) e^{-2i\pi j\theta} h(\theta) e^{-2i\pi k\theta} d\theta = \langle g e^{-2i\pi j\theta}, \overline h e^{2i\pi k\theta} \rangle$. Proposition \ref{Schur} implies that $(j,k) \mapsto a_{j+k}$ is a Schur multiplier with norm less than $\|g\|_{L^2}\|h\|_{L^2}=\|f\|_{L^1}$ on $B(\ell^2)$.
By duality $M_{a_{j+k}}$ is bounded on $S^1$ with norm $\leq \|f\|_{L^1}$, which is the content of the lemma. \end{proof} \subsection{Radial multipliers on trees} Let $\Gamma$ be a group with a fixed finite generating set, and denote by $d$ the associated left-invariant distance on $\Gamma$, or equivalently the distance on the Cayley graph of $\Gamma$. Recall that $d(x,y) = |x^{-1} y|$ where $|\cdot|$ is the word length with respect to the fixed generating set of $\Gamma$. Following \cite{HSS10}, we say that a Schur multiplier $\varphi$ on $ \Gamma\times \Gamma$ is {\it radial} if $\varphi(x,y)=\dot{\phi}(d(x,y))$ for some function $\dot{\phi}:\N \to \C$. We say that a Fourier multiplier $\lambda_x \mapsto {\phi}(x)\lambda_x$ is a {\it radial Fourier multiplier} if $\phi(x)=\dot{\phi}(|x|)$ for some $\dot{\phi}\colon \N \to \C$. By \cite{BF84}, the completely bounded norm of the radial Fourier multiplier associated with $\phi$ equals the norm of the Schur multiplier $\dot{\phi}(d(x,y))$. An exact formula for the norm of radial Schur multipliers on free groups was given by Haagerup and Szwarc in 1987. The result was published in \cite{HSS10} with Steenstrup, where they extended the study to homogeneous trees. A similar characterization for radial Fourier multipliers with respect to the block length on free products of groups was proved by Wysocza\'nski in 1995 (\cite{W95}). These results were recently extended to (amalgamated) free products of operator algebras in \cite{HM12,M14,D13}. We state a special version of the results from \cite{HSS10} below. \begin{thm}\label{HSS}(Haagerup, Steenstrup, Szwarc) Let $X$ be a homogeneous tree with degree $\geq3$ and $\dot{\phi}:\N\rightarrow \C$. Then the function $\varphi(x,y) = \dot{\phi}(d(x,y))$ is a Schur multiplier if and only if the matrix \[ H=(\dot{\phi}(j+k)-\dot{\phi}(j+k+2))_{j,k \geq 0}\] belongs to the trace class.
In that case $\lim_{n} \dot{\phi}(2n)$ and $\lim_{n} \dot{\phi}(2n+1)$ exist, and $$\|M_{\dot{\phi}}\| \leq \lim_{n\rightarrow\infty}(| \dot{\phi}(2n)|+|\dot{\phi}(2n+1)|)+\|H\|_{S^1}.$$ \end{thm} In this section, we remark that a result similar to the ``if'' part of Theorem \ref{HSS} holds on hyperbolic graphs. We first recall a proof of the ``if'' part of Theorem \ref{HSS}, valid for all trees, which we will later adapt to general hyperbolic graphs. Fix an infinite geodesic path $p$. For every $x \in X$, there is a unique infinite geodesic path $p_x$ which starts at $x$ and eventually flows into $p$. Remark that for every $x,y \in X$ the geodesics $p_x$ and $p_y$ first meet at a point of the geodesic segment between $x$ and $y$, and then coincide forever. In formulas, the set $\{(i,j) \in \N \times \N, p_x(i) = p_y(j)\}$ is of the form $\{(i_0+k,j_0+k), k \in \N\}$, for $i_0,j_0$ satisfying $i_0+j_0 = d(x,y)$. Since $H=(\dot{\phi}(i+j)-\dot{\phi}(i+j+2))_{0\leq i,j<\infty}$ belongs to the trace class, we can write $H=A^* B$ where $A,B$ are Hilbert-Schmidt operators satisfying $\|H\|_{S^1}= \|A\|_{S_2} \|B\|_{S_2}$. For $x,y \in X$, set \[P(x) = \sum_{i \geq 0} B(e_i) \otimes \delta_{p_x(i)} \in \ell^2(\N) \otimes \ell^2(X)\] and \[Q(y) = \sum_{j \geq 0} A(e_j) \otimes \delta_{p_y(j)} \in \ell^2(\N) \otimes \ell^2(X),\] where $(e_i)_{i \geq 0}$ and $(\delta_x)_{x \in X}$ are the coordinate orthonormal bases of $\ell^2(\N)$ and $\ell^2({X})$.
We see that $ \|P(x)\|^2 = \sum_{i \geq 0} \| B(e_i) \|^2=\|B\|_{S_2}^2$ and $ \|Q(y)\|^2 = \|A\|_{S_2}^2.$ Using that $\langle B e_i,A e_j\rangle = \dot{\phi}(i+j) - \dot{\phi}(i+j+2)$ we can write \begin{multline*} \langle P(x),Q(y) \rangle= \sum_{i,j,\, p_x(i)=p_y(j)} \dot{\phi}(i+j) - \dot{\phi}(i+j+2)\\ =\sum_{k=0}^\infty\dot{\phi}(d(x,y)+2k) - \dot{\phi}(d(x,y)+2k+2), \end{multline*} which equals $\dot{\phi}(d(x,y))-\lim_{n\rightarrow\infty}\dot{\phi}(2n)$ for $d(x,y)$ even and $\dot{\phi}(d(x,y))-\lim_{n\rightarrow\infty}\dot{\phi}(2n+1)$ for $d(x,y)$ odd. Therefore, $$\dot{\phi}(d(x,y))=\langle P(x),Q(y) \rangle+\frac{1+(-1)^{d(x,y)}}2\lim_{n}\dot{\phi}(2n)+\frac{1-(-1)^{d(x,y)}}2\lim_{n}\dot{\phi}(2n+1) .$$ Fix a distinguished point $e\in X$. Note that $(-1)^{d(x,y)}=(-1)^{d(x,e)}(-1)^{d(y,e)}$ since $d(x,e)+d(y,e)-d(x,y)$ is even. So $1\pm (-1)^{d(x,y)}$ is a Schur multiplier with norm $\leq 2$. We then obtain by Proposition \ref{Schur} that $$\|M_{\dot{\phi}}\|\leq \lim_{n\rightarrow\infty}(| \dot{\phi}(2n)|+|\dot{\phi}(2n+1)|)+\|H\|_{S^1}.$$ \subsection{Generalization to hyperbolic graphs} Identify a connected graph $\Gamma$ with its vertex set and equip it with the graph distance $d$. A geodesic path $p$ is a finite or infinite sequence of points $p(0),p(1),\dots\in\Gamma$ such that $d(p(m), p(n)) = |m-n|$ for every $m, n$. A connected graph $\Gamma$ is {\it hyperbolic} if there exists a constant $\delta \geq 0$ such that for every geodesic triangle each edge is contained in the $\delta$-neighborhood of the union of the other two. A finitely generated group $\Gamma$ is {\it hyperbolic} if its Cayley graph is hyperbolic; the hyperbolicity property of a group is independent of the choice of the finite generating set. A tree is a hyperbolic graph with $\delta=0$. See e.g. \cite{BN08}, Section 5.3, for more information on hyperbolic groups. To obtain an extension to general hyperbolic graphs, we need the following result of Ozawa \cite[Proposition 10]{O08}.
\begin{prop}[Ozawa]\label{ozawa} Let $\Gamma$ be a hyperbolic graph with bounded degree. There are a constant $C_0 \in \R$, a Hilbert space $\mathcal H$ and maps $\eta_i^{\pm}\colon \Gamma \to \mathcal H$ (for $i \in \N$) such that \begin{enumerate} \item\label{orthogonality} $\eta_i^\pm(x) \perp \eta_{j}^\pm(x)$ for all $x \in \Gamma$ and $|i-j|\geq 2$, \item\label{bound} $\|\eta_i^\pm(x)\| \leq \sqrt{C_0}$ for all $i \in \N$ and $x \in \Gamma$, \item\label{scalprod} $\sum_{i+j=n} \langle \eta_i^+(x), \eta_j^-(y)\rangle = \left\{ \begin{array}{cc} 1 &\textrm{if }d(x,y)\leq n\\ 0 & \textrm{otherwise}\end{array}\right.$. \end{enumerate} \end{prop} The $\eta_i^{\pm}(x)$ provided by this proposition will play the role of the vectors $\delta_{p_x(i)} \in \ell^2(X)$ in the preceding proof. Assume $H=(\dot{\phi}(i+j)-\dot{\phi}(i+j+1))_{0\leq i,j<\infty}\in S^1$. Then the diagonal and the superdiagonal of $H$ belong to $\ell^1$, and hence $\lim_n \dot\phi(n)$ exists. We proceed similarly. Write $H=A^* B$ with $\|H\|_{S^1}= \|A\|_{S_2} \|B\|_{S_2}$. Set \[P(x) = \sum_{i \geq 0} B(e_i) \otimes \eta_i^+(x) \in \ell^2 \otimes \mathcal H\] and \[Q(y) = \sum_{j \geq 0} A(e_j) \otimes \eta_j^-(y) \in \ell^2 \otimes \mathcal H,\]for $x,y \in \Gamma$. From conditions \ref{orthogonality} and \ref{bound} in Proposition \ref{ozawa} we see that $|\langle B(e_i) \otimes \eta_i^+(x),B(e_j) \otimes \eta_j^+(x)\rangle|$ is zero if $|i-j|\geq 2$, and always less than $C_0\frac{\|B(e_i)\|^2 + \|B(e_j)\|^2}{2}$. Hence \[ \|P(x)\|^2 = \sum_{i,j \geq 0} \langle B(e_i) \otimes \eta_i^+(x),B(e_j) \otimes \eta_j^+(x)\rangle \leq 3 C_0 \|B\|_{S_2}^2.\] Similarly, $\sup_y \|Q(y)\|^2 \leq 3 C_0 \|A\|_{S_2}^2$. We claim that $\langle P(x),Q(y) \rangle= \dot{\phi}(d(x,y)) - \lim_n \dot\phi(n)$ for all $x,y \in \Gamma$.
Indeed, \[ \langle P(x),Q(y) \rangle = \sum_{i,j} \langle B e_i \otimes \eta_i^+(x) , A e_j \otimes \eta_j^-(y)\rangle.\] Using that $\langle B e_i,A e_j\rangle = \dot{\phi}(i+j) - \dot{\phi}(i+j+1)$ we can write \[ \langle P(x),Q(y)\rangle = \sum_{n \geq 0} \left((\dot{\phi}(n) - \dot{\phi}(n+1)) \sum_{i+j=n} \langle \eta_i^+(x), \eta_j^-(y)\rangle\right),\] which equals $\dot{\phi}(d(x,y)) - \lim_n \dot{\phi}(n)$ by assumption \ref{scalprod} in Proposition \ref{ozawa}. By Proposition \ref{Schur}, we obtain Theorem \ref{hyperbolicgraph}. \begin{rem} In Theorem \ref{hyperbolicgraph}, we can take $C=1$ if $\Gamma$ is a tree. Otherwise, $C$ may depend on $\Gamma$. \end{rem} \begin{rem}\label{rem=difference-1-2} As the example of $\dot \phi(k) = (-1)^k$ shows, the condition that $(\dot{\phi}(j+k)-\dot{\phi}(j+k+1))_{j,k \geq 0}$ is of trace class is stronger than the condition in Theorem \ref{HSS} for $(\dot{\phi}(j+k)-\dot{\phi}(j+k+2))_{j,k \geq 0}$, but it is necessary for general hyperbolic graphs. Indeed, if $\Gamma_0$ is the Cayley graph of $(\Z/3\Z) \ast (\Z/3\Z) \ast (\Z/3\Z)$ with generating set the union of the $3$ copies of $\Z/3\Z$, then by \cite[Theorem 6.1]{W95} the Schur multiplier with symbol $\dot \phi(d(x,y))$ is bounded on $B(\ell^2(\Gamma_0))$ if and only if $(\dot{\phi}(j+k)-\dot{\phi}(j+k+1))_{j,k \geq 0}$ belongs to the trace class. Theorem \ref{hyperbolicgraph} can therefore be read as ``a function $\phi\colon \N \to \C$ defines a bounded radial multiplier on every hyperbolic graph $\Gamma$ with bounded degree if and only if it defines a bounded radial multiplier on $\Gamma_0$''. \end{rem} \begin{rem} The fact that it is the matrix $(\dot{\phi}(j+k)-\dot{\phi}(j+k+2))_{j,k \geq 0}$ that appears in Theorem \ref{HSS} is related to the fact that trees are bipartite (a graph is bipartite if it does not contain any odd-length cycle).
A modification (left to the reader) of the proofs in \cite{O08} actually shows that when the hyperbolic graph $\Gamma$ is bipartite, Theorem \ref{hyperbolicgraph} also holds with $H$ replaced by $(\dot{\phi}(j+k)-\dot{\phi}(j+k+2))_{j,k \geq 0}$. In that case the statement becomes ``a function $\phi\colon \N \to \C$ defines a bounded radial multiplier on every bipartite hyperbolic graph $\Gamma$ if and only if it defines a bounded radial multiplier on ${\mathbb F}_2$ with its standard generating set''. \end{rem} In particular, if we apply the preceding Theorem \ref{hyperbolicgraph} to a Cayley graph of a finitely generated hyperbolic group $\Gamma$ and recall that the word length of $y^{-1}x$ equals $d(y^{-1}x,e)=d(x,y)$ on its Cayley graph, we get \begin{corollary}\label{hyperbolicgroup} Let $\Gamma$ be a finitely generated hyperbolic group, and $|\cdot|$ the length function on $\Gamma$ associated to a finite generating set of $\Gamma$. Then there is a constant $C \in \R$ such that if $\dot{\phi}\colon \N \to \C$ is a function such that the infinite Hankel matrix $$H=(\dot{\phi}(k+j)-\dot{\phi}(k+j+1))_{0\leq k,j<\infty}$$ belongs to the trace class $S^1$, then $ \lambda_g\mapsto \dot{\phi}(|g|)\lambda_g$ extends to a completely bounded map on ${\mathcal L}(\Gamma)$ with norm $\leq C \|H\|_{S^1} + \lim_{n\rightarrow \infty}|\dot{\phi}(n)|$. \end{corollary} \subsection{Weighted length functions on ${\mathbb F}_n$} Corollary \ref{hyperbolicgroup} in particular applies to ${\mathbb F}_n$ equipped with the length function associated to finite generating sets other than the standard one. One may also consider weighted lengths on a free group ${\mathbb F}_n$, $n \in \N \cup \{\infty\}$. Denote the free generators by $g_1,g_2,\dots,g_k,\dots$.
Fix a sequence of positive real numbers $a=(a_k)_k$. For $g=g_{i_1}^{k_1}g_{i_2}^{k_2}\cdots g_{i_m}^{k_m}\in {\mathbb F}_n$ with $i_j\neq i_{j+1}$, let $$|g|_a=\sum_{j=1}^ma_{i_j}|k_j|.$$ Theorem \ref{main-result} also extends to the weighted lengths $|g|_a$ on the free groups ${\mathbb F}_n$, $2\leq n\leq \infty$, because of the following proposition. For $\phi:{\R}_+\to {\C}$, we write $\phi_t(\cdot)=\phi(t\cdot)$ and denote by $m_{\phi^a}$ the Fourier multiplier sending $\lambda_g$ to $\phi(|g|_a)\lambda_g$. We omit the ``$a$'' when $a_k=1$ for all $k$. \begin{prop}\label{dilate} Given a continuous function $\phi$, suppose that $\|m_{\phi_t}\|_{c.b}<C$ on ${\mathbb F}_\infty$ for all $0<t<1$. Then $\|m_{\phi_t^a}\|_{c.b}<C$ for the same constant $C$, for any sequence $a$ and any $t>0$. \end{prop} \begin{proof} Assume first that $a_k\in {\N^*=\{1,2,\dots\}}$. Let $T_a$ be the trace preserving *-homomorphism sending $\lambda_{g_j}$ to $\lambda_{g_j^{a_j}}$. Then $$T_a \circ m_{\phi_t^a} =m_{\phi_t} \circ T_a,$$ which shows that $\|m_{\phi_t^a}\|_{c.b}\leq \|m_{\phi_t}\|_{c.b}$ since $T_a$ is completely isometric. Next we assume $a_k\in {\Q}_+$ with a common denominator $N$, which we may choose so that $N>t$. Then $$ T_{Na} \circ m_{\phi_t^a} = m_{\phi_{\frac tN}} \circ T_{Na}.$$ Therefore, $m_{\phi_t^a}$ is completely bounded with upper bound $\sup_{0<t<1}\|m_{\phi_t}\|_{c.b}$. The general case follows by approximation. \end{proof} \section{Complete boundedness of $S_t^r$}\label{section=trace_class_estimates} Theorem \ref{HSS} (respectively Corollary \ref{hyperbolicgroup}) states that the completely bounded norm of the map $S_t^r$ on $\mathcal L({\mathbb F}_n)$ (respectively $\mathcal L(\Gamma)$ for a hyperbolic group $\Gamma$) is equivalent to (respectively dominated by) the trace class norm of the corresponding Hankel matrix. In this section we give an upper bound on the trace class norm of Hankel matrices with smooth symbol, which we then apply to several explicit examples.
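As a quick numerical illustration of the kind of uniform bound this section establishes, one can truncate the Hankel matrices with heat-type symbol $f(x)=e^{-x^r}$ and check that their trace-class norms stay of moderate size as $t$ varies over several orders of magnitude. This is only a sanity check and not part of any proof; the choice $r=2$, the truncation size and the grid of values of $t$ below are ours.

```python
# Sanity check (illustration only): for f(x) = exp(-x^r) with r = 2, the
# trace-class norms of the truncated Hankel matrices
#   (exp(-t(j+k)^r) - exp(-t(j+k+1)^r))_{j,k}
# should stay bounded uniformly in t, as the estimates of this section predict.
import numpy as np

def hankel_trace_norm(t, r, size=300):
    """Nuclear norm (sum of singular values) of the truncated Hankel matrix."""
    n = np.arange(2 * size)
    a = np.exp(-t * n**r) - np.exp(-t * (n + 1)**r)
    j, k = np.indices((size, size))
    return np.linalg.norm(a[j + k], ord='nuc')

norms = [hankel_trace_norm(t, r=2) for t in (0.01, 0.1, 1.0, 10.0)]
print(norms)
```

The truncation is harmless here because the entries decay superexponentially along antidiagonals; the computed norms remain within a small constant factor of each other, in line with the $t$-independent bound $c(1+r)$ proved in Example \ref{ex=heat_sgp_real}.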
In particular Theorem \ref{main-result} is an immediate consequence of Corollary \ref{hyperbolicgroup} and Examples \ref{ex=heat_sgp_complex} and \ref{ex=heat_sgp_real}. We start by stating a more precise version of Theorem \ref{thm=R+_intro}. \begin{thm}\label{R+} Let $f\colon [0,\infty) \to \R$ be a bounded continuous function of class $C^2$ on $(0,\infty)$, and $ \frac12\geq\alpha> 0$. Then, for any $t>0$, the trace class norm of the matrix $\left(f(t(j+k)) - f(t(j+k+1))\right)_{j,k\geq 0}$ satisfies the inequality \begin{eqnarray}\label{tinv} \left\|\left(f(t(j+k))-f(t(j+k+1))\right)_{j,k \geq 0}\right\|_{S^1} \leq \frac{C}{\sqrt {\alpha}} \sqrt{ A B} \leq \frac{2C}{\sqrt {\alpha}} B \end{eqnarray} for some universal constant $C$, where \[ A = \sqrt{\| x^{\frac 1 2 - \alpha} f'\|_{L^2(\R_+)} \| x^{\frac 1 2 + \alpha} f'\|_{L^2(\R_+)}},\] \[ B = \sqrt{\| x^{\frac 3 2 - \alpha} f''\|_{L^2(\R_+)} \| x^{\frac 3 2 + \alpha} f''\|_{L^2(\R_+)}}.\] \end{thm} We postpone the proof of Theorem \ref{R+} to the end of this section. \subsection{Examples of applications} We give several applications of Theorem \ref{R+}. \begin{example}[The semigroup $\lambda_g \mapsto (1+|g|)^{-z} \lambda_g$ on free groups] For every $z \in \C$ with positive real part, the formula \[ (1+n)^{-z} = \frac{1}{\Gamma(z)} \int_0^\infty t^{z-1} e^{-t} e^{-tn} dt,\] (where $\Gamma(z) = \int_0^\infty t^{z-1} e^{-t} dt$ is the Gamma function), together with the fact that $S_t^1\colon \lambda_g \mapsto e^{-t|g|} \lambda_g$ is unital completely positive for every $t>0$, implies that $\lambda_g \mapsto (1+|g|)^{-z} \lambda_g$ is unital completely positive if $z \in \R_+$, and unital and completely bounded with completely bounded norm less than $\Gamma(\mathrm{Re}(z))/|\Gamma(z)|$ otherwise. This estimate is far from optimal: if $\mathrm{Re}(z) \geq 3$, the rapid decay property implies that this cb norm is less than $\sum_{n \geq 0} (1+n)^{-2}$, whereas $\Gamma(\mathrm{Re}(z))/|\Gamma(z)|$ is unbounded.
Theorem \ref{R+} gives complementary estimates for this norm in the regime $\mathrm{Re}(z) \leq 3$. Namely, let $z=a+ib$ with $0<a\leq 3$. Taking $f(x) = (1+x)^{-z}$ and $\alpha = \min(1,a)/2$, we get \[ \|x^{\frac 1 2 \pm \alpha}f'\|_{L^2(0,\infty)} \leq \|(1+x)^{\frac 1 2 \pm \alpha}f'\|_{L^2(0,\infty)} = \frac{|z|}{\sqrt{2a \mp 2\alpha}} \leq \frac{|z|}{\sqrt a} \] and similarly \[ \|x^{\frac 3 2 \pm \alpha}f''\|_{L^2(0,\infty)} \leq \frac{|z(1+z)|}{\sqrt{2a \mp 2\alpha}} \leq \frac{|z(1+z)|}{\sqrt{a}} .\] Hence Theorem \ref{HSS} and Theorem \ref{R+} imply that the completely bounded norm of $\lambda_g \mapsto (1+|g|)^{-z} \lambda_g$ is less than $C\frac{|z|}{\mathrm{Re}(z)} \sqrt{|1+z|}$. This implies that for $\omega \in [0,\frac \pi 2)$, the cb norm of $\lambda_g \mapsto (1+|g|)^{-z} \lambda_g$ is bounded by $C(1+\tan \omega)^{\frac 3 2}$ on $\{z \in \C, |\arg z| \leq \omega\}$. The same estimates also hold for the multiplier $\lambda_g \mapsto \max(1,|g|)^{-z}\lambda_g$. This follows from Theorem \ref{HSS} and from the following general inequality applied to $a_n = \max(1,n)^{-z} - (n+1)^{-z}$: for any sequence $(a_n)$ of complex numbers, \[ \|(a_{j+k+1})_{j,k \geq 0}\|_{S^1} \leq \|(a_{j+k})_{j,k \geq 0}\|_{S^1} \leq |a_0|+2 \|(a_{j+k+1})_{j,k \geq 0}\|_{S^1}.\] The first inequality is obvious, whereas the second is the triangle inequality for the trace norm $\| \cdot\|_{S^1}$ applied to the decomposition \[a_{j+k} = a_{0} 1_{j=k=0} + a_{k}1_{j=0,k>0} + a_{j+k}1_{j>0}.\] Note also that by Corollary \ref{hyperbolicgroup}, the same results hold for the semigroups $\lambda_g \mapsto (1+|g|)^{-z} \lambda_g$ and $\lambda_g \mapsto \max(1,|g|)^{-z}\lambda_g$ on every hyperbolic group, up to some multiplicative constant depending on the group. \end{example} \begin{example}[Fej\'er Kernel]\label{Fejer} Given $N\in {\N}$, let $F_N(k)=(1-\frac {k}{N})\chi_{[0,N]}(k)$.
Define the Fej\'er multiplier as $$m_{F_N}: \lambda_ g\mapsto F_N(|g|)\lambda_g.$$ Then $\|m_{F_N}\|_{cb}\simeq \log N$ on free group von Neumann algebras. In fact, \begin{equation} \label{eq=Fejer_unbounded} \|(F_N(k+j)-F_N(k+j+1))_{k,j}\|_{S^1}=\|(\frac {1}{N})_{k+j\leq N-1}\|_{S^1}\simeq \log N.\end{equation} If one applies Theorem \ref{R+} to the function $f(x) = (1-x)^\delta\chi_{[0,1]}(x)$, we see that $$\sup_N \|(F_N(k+j)^\delta-F_N(k+j+2)^\delta)_{k,j}\|_{S^1} <\infty$$ if $\delta> \frac 3 2$, because $f'' \in L^2$ if and only if $\delta>\frac 3 2$. We will see in Example \ref{Riesz} that the previous inequality actually holds for all $\delta>1$. \end{example} \begin{example}[Bochner-Riesz Mean]\label{Riesz} The Bochner-Riesz mean is a ``smoothed'' Fej\'er multiplier in modern harmonic analysis. Given $N\in {\N}$, let $B^\delta_N(k)=(1-\frac {k^2}{N^2})^\delta\chi_{[0,N]}(k)$ for $\delta\in {\C}.$ Define the Bochner-Riesz multiplier as $$m^\delta_{B_N}: \lambda_g\mapsto B^\delta_N(|g|)\lambda_g.$$ As for the Fej\'er kernel, a direct application of Theorem \ref{R+} would yield that the Bochner-Riesz kernels are completely bounded on von Neumann algebras of hyperbolic groups if $\mathrm{Re}(\delta)>\frac 3 2$. However, using the known boundedness properties of Bochner-Riesz multipliers on $\mathcal L(\Z)$, one can decrease this to $\mathrm{Re}(\delta)>1$.
Before that, we observe that, as in $\mathcal L(\Z^n)$, the problems of complete boundedness of the Bochner-Riesz and of the Fej\'er multipliers on hyperbolic groups are equivalent, in the sense that for all $\mathrm{Re}(\delta)\geq 0$ and all hyperbolic groups, there is a constant $C$ such that for all $N$, \begin{equation}\label{eq:equivalence_Fejer_BochnerRiesz} \frac{1}{C} \leq \frac{\|m^\delta_{B_N}\|_{cb(\mathcal L\Gamma)}}{\|m^\delta_{F_N}\|_{cb(\mathcal L\Gamma)}} \leq C.\end{equation} Indeed, given $\mathrm{Re}(\delta)\geq 0$, let $f_\delta$ and $g_\delta$ be compactly supported $C^2$ functions on $[0,\infty)$ satisfying $f_\delta(x) = 1/g_\delta(x) = (1+x)^\delta$ for all $x \in [0,1]$, so that $B^\delta_N (k) = f_\delta(k/N) F^\delta_N(k)$ and $F^\delta_N (k) = g_\delta(k/N) B^\delta_N(k)$. By Theorem \ref{R+}, $\sup_N \| (f_\delta(\frac{j+k}{N}) - f_\delta(\frac{j+k+1}{N}) )_{j,k}\|_{S^1}<\infty$, and the same holds for $g_\delta$. By Corollary \ref{hyperbolicgroup}, the multipliers corresponding to $f_\delta(|g|/N)$ and $g_\delta(|g|/N)$ are therefore bounded uniformly in $N$ on every hyperbolic group. This proves \eqref{eq:equivalence_Fejer_BochnerRiesz}. We can now prove that the Bochner-Riesz multipliers (and hence the Fej\'er multipliers, by \eqref{eq:equivalence_Fejer_BochnerRiesz}) are completely bounded on every hyperbolic group, and in particular on all free groups, if $\mathrm{Re}(\delta)>1$. This follows from Corollary \ref{hyperbolicgroup} and the estimate \begin{equation}\label{eq:Bochner_Riesz_delta>1} \forall \mathrm{Re}(\delta)>1, \quad \sup_N \| ( B^\delta_N(j+k) - B^\delta_N(j+k+1) )_{j,k \geq 0}\|_{S^1} <\infty. \end{equation} Let us prove \eqref{eq:Bochner_Riesz_delta>1}.
By differentiating we can write \[ B^\delta_N(k) - B^\delta_N(k+1) = \int_0^1 2 \delta (1-\frac{(k+t)^2}{N^2})^{\delta-1} \chi_{[0,N]}(k+t) \frac{k+t}{N^2} dt.\] Denote by $B_{N,t}^{\delta-1}(k) = (1-\frac{(k+t)^2}{N^2})^{\delta-1} \chi_{[-N,N]}(k+t)$; for $t=0$ and $k \geq 0$ this is $B_N^{\delta -1}$. Let $f_1$ be a compactly supported $C^2$ function on $[0,\infty)$ such that $f_1(x) = -x^2/2$ on $[0,1]$. The previous equality becomes \[ B^\delta_N(k) - B^\delta_N(k+1) = \int_0^1 2 \delta B_{N,t}^{\delta-1}(k) ( \frac{t-1/2}{N^2} + f_1(\frac{k}{N}) - f_1(\frac{k+1}{N}) ) dt.\] The trace norm of the matrix $(B_{N,t}^{\delta-1}(j+k) \frac{t-1/2}{N^2} )_{j,k \geq 0}$ is less than the sum of the absolute values of its entries, which is less than $\frac 1 2$. Moreover, by Theorem \ref{R+} the trace norm of the matrix $(f_1(\frac{j+k}{N}) - f_1(\frac{j+k+1}{N}))_{j,k \geq 0}$ is bounded uniformly in $N$, by some constant $C$. Therefore Lemma \ref{lemma=HankelSchur} and the previous equality imply that the trace norm of $( B^\delta_N(j+k) - B^\delta_N(j+k+1) )_{j,k \geq 0}$ is less than \begin{eqnarray*} &&|\delta| + 2 C |\delta| \sup_{t\in [0,1]} \| \sum_{n \in \Z} B_{N,t}^{\delta - 1}(n) e^{2i\pi n \theta}\|_{L^1(\R/\Z)}\\ &\leq& |\delta| + 2 C |\delta| \|m_{B_N}^{\delta-1}\|_{cb(L^\infty(\R))}\leq |\delta| + 2 C |\delta| e^{C|\mathrm{Im}\,\delta|^2}. \end{eqnarray*} The first inequality follows by embedding $L^\infty(\R/\Z)$ into $L^\infty(\R)$ via $ \sum a_ne^{2i\pi n \theta}\mapsto \sum a_ne^{2i\pi n (x-t)}$. The second inequality is quoted from \cite[Prop. 10.2.2]{G14}, and the constant $C$ there depends only on $\mathrm{Re}(\delta)$. This proves \eqref{eq:Bochner_Riesz_delta>1} and \begin{eqnarray}\label{ConstantC} \|m_{B_N}^{\delta}\|_{cb({\cal L}(\Gamma))}\leq e^{C+C|\mathrm{Im}\,\delta|^2} \end{eqnarray} for $\mathrm{Re}(\delta)>1$, with $C$ depending only on $\mathrm{Re}(\delta)$ and $\Gamma$. We can observe that the assumption $\mathrm{Re}(\delta)>1$ is needed.
Indeed, for $\delta=1$, \eqref{eq:Bochner_Riesz_delta>1} does not hold because of \eqref{eq=Fejer_unbounded} and \eqref{eq:equivalence_Fejer_BochnerRiesz}, so $m_{B_N}^1$ is not c.b. on ${\cal L}({\mathbb F}_2)$ uniformly in $N$ by Theorem \ref{HSS}. Fix $0<\varepsilon<1$ and let $C$ be the constant in \eqref{ConstantC} such that the multiplier $F(z)=m_{B_N}^z e^{Cz^2-5C}$ is c.b. on ${\cal L}(\Gamma)$ uniformly in $N$ on the complex line $\{z; \mathrm{Re}(z)=1+\varepsilon\}$. Note that $F(z)$ is c.b. on $L^2(\hat\Gamma)$ uniformly in $N$ on the imaginary line $\{z; \mathrm{Re}(z)=0\}$. By complex interpolation and duality, we get that $ m_{B_N}^\delta$ is c.b. on $L^p(\hat \Gamma)$ uniformly in $N$ for any $|\frac2p-1|< \mathrm{Re}(\delta)$, $1\leq p\leq\infty$. The same holds for $ m_{F_N}^\delta$ because of \eqref{eq:equivalence_Fejer_BochnerRiesz}. \end{example} \begin{example}\label{ex=heat_sgp_real} Given $r>0$, let $\alpha=\frac {\min\{r, 1\}}2$ and $f(x) = e^{-x^r}$. We then have $$\|x^{\frac 1 2\pm\alpha} f'\|_{L^2(0,\infty)}\leq c \sqrt r, \qquad \|x^{\frac 3 2\pm\alpha}f''\|_{L^2(0,\infty)}\leq c(1+ r)\sqrt r.$$ Applying Theorem \ref{R+}, we get \[ \sup_{t\geq 0} \|(e^{-t(j+k)^r} - e^{-t(j+k+1)^r})_{j,k\geq 0}\|_{S^1}\leq c(1+r).\] Moreover the order $r$ as $r$ goes to $\infty$ is optimal. Indeed, if $n$ is the integer part of $r$ and $t=n^{-r}$, then using the inequality $\|A\|_{S^1} \geq \sum_{j=0}^n |A_{j,n-j}|$ we have \[ \|(e^{-t(j+k)^r} - e^{-t(j+k+1)^r})_{j,k\geq 0}\|_{S^1} \geq (n+1)\left(e^{-1} - e^{-(1+1/n)^r}\right) \sim r(e^{-1} - e^{-e})\] as $r \to \infty$. \end{example} \begin{example}\label{ex=heat_sgp_complex} For every $z=a+bi\in \C$ with $|\arg z| \leq \omega<\frac \pi2$, let $f(x)=e^{-zx^r}$. Denote $K=(1+\tan^2 \omega)$. Then $|z|^2\leq Ka^2$ and \begin{eqnarray*} |f'|^2&=&|zrx^{r-1}e^{-zx^r}|^2\leq Ka^2r^2x^{2r-2}e^{-2ax^r} \\ |f''|^2&=&|-z^2r^2x^{2r-2}e^{-zx^r}+zr(r-1)x^{r-2}e^{-zx^r}|^2\\ &\leq& 2K^2 |a^2r^2x^{2r-2}e^{-ax^r}|^2+2K|a r(r-1)x^{ r-2}e^{-ax^r}|^2.
\end{eqnarray*} Setting $\alpha=\frac{\min\{r,1\}}2$, we then have, using the change of variable $s= a x^r$, \begin{eqnarray*} \|x^{\frac12\pm\alpha}f'\|_{L_2(\R_+)}^2&\leq& K\int_{\R_+}a^2r^2x^{2r\pm 2\alpha-1}e^{-2ax^r}dx\\ &=&K\int_{\R_+}r a^{\mp\frac {2\alpha} r}s^{1 \pm \frac{2\alpha}{r}}e^{-2s}ds\simeq K a^{\mp\frac {2\alpha} r}r.\\ \|x^{\frac32\pm\alpha}f''\|_{L_2(\R_+)}^2&\leq&2K^2\int_{\R_+} a^4r^4x^{4r\pm 2\alpha-1}e^{-2ax^r}dx\\ &&\hskip .5cm +2K\int_{\R_+} a^2r^2(r-1)^2x^{2r\pm 2\alpha-1}e^{-2ax^r}dx\\ & = &2K^2 \int_{\R_+} a^{\mp\frac {2\alpha} r}r^3s^{3 \pm \frac{2\alpha}{r}}e^{-2s}ds\\ &&\hskip .5cm+2K\int_{\R_+} a^{\mp\frac {2\alpha} r }r(r-1)^2 s^{1 \pm \frac{2\alpha}{r}}e^{-2s}ds\\ &\simeq&2K^2a^{\mp\frac {2\alpha} r}r^3+2K a^{\mp\frac {2\alpha} r }r(r-1)^2. \end{eqnarray*} Theorem \ref{R+} yields that \[ \sup_{t\geq 0} \|(e^{-zt(j+k)^r} - e^{-zt(j+k+1)^r})_{j,k\geq 0}\|_{S^1}<c(1+(\tan \omega)^{\frac 3 2}) (1+ r).\] \end{example} \subsection{The proof}\label{subsection=proof} In the sequel we consider the unit circle $\T = \{e^{2i\pi t}, t \in \R/\Z\}$ equipped with the Lebesgue probability measure, and the unit disk $\mathbf D = \{z \in \C, |z|<1\}$ equipped with the Lebesgue probability measure $\frac{dz}{\pi}$. We now turn to the proof of Theorem \ref{R+}. The proof relies on Peller's characterization of trace class Hankel matrices \cite{P80}, which we now recall.
With the formulation given in \cite[Theorem 3.1]{HSS10} (which gives very good constants), Peller's theorem states that a Hankel matrix $(a_{j+k})_{j,k \geq 0}$ belongs to the trace class if and only if the function $g(z) = \sum_{n \geq 0} (n+1)(n+2) a_n z^n$ belongs to $L^1(\mathbf{D},\frac{dz}{\pi})$, and \[ \frac \pi 8 \| g\|_{L^1(\mathbf{D},\frac{dz}{\pi})} \leq \|(a_{j+k})_{j,k\geq 0}\|_{S^1} \leq \|g\|_{L^1(\mathbf{D},\frac{dz}{\pi})}.\] The condition $\sum_{n \geq 0} (n+1)(n+2) a_n z^n \in L^1(\mathbf{D})$ is one of the equivalent conditions for the series $\sum_{n \geq 0} a_n z^n$ to belong to the Besov space of analytic functions $B_1^1$. In the sequel we will work with another classical condition, which is more suited for our proof. Consider the classical de la Vall\'ee Poussin kernels $(W_n)_{n \geq 0}$. The $W_n$'s are functions on $\T$ given by their Fourier coefficients: $W_0(z) = 1+z$, and for $n>0$ \[\widehat{W_n}(k)= \left\{ \begin{array}{ll} 2^{-n+1}(k-2^{n-1}) & \textrm{if }2^{n-1} \leq k \leq 2^n\\ 2^{-n}(2^{n+1} - k) & \textrm{if }2^{n} \leq k \leq 2^{n+1}\\ 0 & \textrm{otherwise.} \end{array}\right.\] The Besov space $B_1^1$ of analytic functions is the Banach space of series $\varphi(z) = \sum_{n \geq 0} a_n z^n$ with $a_n \in \C$ such that \begin{equation}\label{eq:def_B11} \|\varphi\|_{B_1^1} = \sum_{n \geq 0} 2^n \|W_n \ast \varphi\|_{L^1(\T)} < \infty.\end{equation} We refer to \cite{P03} for the equivalence of these definitions of $B_1^1$, or for the following formulation of Peller's theorem~: there is a constant $C>0$ such that for every Hankel matrix $A=(a_{j+k})_{j,k\geq 0}$, \begin{equation}\label{eq=peller} C^{-1} \| \sum_{n \geq 0} a_n z^n\|_{B_1^1} \leq \|A\|_{S^1} \leq C \| \sum_{n \geq 0} a_n z^n\|_{B_1^1}.\end{equation} For a function $f\colon [0,\infty) \to \R$ and a subinterval $I$ of $[0,\infty)$ we adopt the following notation \begin{equation}\label{eq=def_l2} \|f \|_{L^2(I)} = \left(\int_I |f(x)|^2 dx\right)^{\frac 1 2},
\|f \|_{\ell^2(I)} = \left(\sum_{k \in I \cap \mathbf N} |f(k)|^2\right)^{\frac 1 2}.\end{equation} We will prove the following upper estimate on the $B_1^1$-norm of a function with smooth symbol. \begin{prop}\label{prop=Hankel_with_C^1_symbol} Let $f\colon [0,\infty) \to \R$ be a continuous function of class $C^1$ on $(1,\infty)$, and let $0<\alpha\leq\frac12$. Then \[ \| \sum_{n \geq 0} f(n) z^n\|_{B_1^1} \leq C\left(|f(0)|+\frac {1}{\sqrt{ \alpha}} (\widetilde A+\sqrt{\widetilde A \widetilde B})\right),\] for a universal constant $C$, where \[\widetilde A = \sqrt{\| x^{\frac 1 2 - \alpha} f\|_{\ell^2([1,\infty))} \| x^{\frac 1 2 + \alpha} f\|_{\ell^2([1,\infty))}},\] \[\widetilde B = \sqrt{\| x^{\frac 3 2 - \alpha} f'\|_{L^2(1,\infty)} \| x^{\frac 3 2 + \alpha} f'\|_{L^2(1,\infty)}}.\] \end{prop} Before we prove the Proposition, we explain how it implies Theorem \ref{R+}. \begin{proof}[Proof of Theorem \ref{R+}] Let $f$ be as in Theorem \ref{R+}. Note that both $A$ and $B$ are unchanged if the function $f$ is replaced by $x\mapsto f(t x)$. We can therefore restrict ourselves to the case $t=1$. We first prove the inequalities \begin{equation}\label{eq=discrete_continuous} \left( \sum_{n\geq1} n^{2\beta} |f(n+1)- f(n)|^2\right)^{\frac 1 2} \leq \|x^\beta f'\|_{L^2(1,\infty)}\end{equation} for $\beta = \frac 1 2+\alpha$ and $\beta = \frac 1 2-\alpha$, and \begin{eqnarray}\label{eq=continuous} \|x^\beta (f'(x+1)-f'(x))\|_{L^2(1,\infty)} &\leq \|x^\beta f''\|_{L^2(1,\infty)} \end{eqnarray} for $\beta = \frac 3 2+\alpha$ and $\beta = \frac 3 2-\alpha$. Together with \eqref{eq=peller} and Proposition \ref{prop=Hankel_with_C^1_symbol}, they will imply that \begin{equation}\label{eq=intermediate_ineq} \left\|\left(f(j+k)-f(j+k+1)\right)_{j,k \geq 0}\right\|_{S^1} \leq C\left( |f(0)-f( 1)|+\frac {1 }{\sqrt{ \alpha}}( A + \sqrt{ A B} )\right). \end{equation} For \eqref{eq=discrete_continuous}, note that $n^\beta \leq x^\beta$ for every integer $n$ and $x \in [n,n+1]$.
By the Cauchy-Schwarz inequality we have \[ n^{\beta} |f(n+1)- f(n)| \leq n^\beta \|f'\|_{L^2(n,n+1)} \leq \|x^\beta f'\|_{L^2(n,n+1)}.\] By taking the square and summing over $n \geq 1$ we get \eqref{eq=discrete_continuous}. For \eqref{eq=continuous} write $f'(x+1)-f'(x) = \int_{0}^1 f''(x+s) ds$ and use the triangle inequality to get \[ \|x^\beta (f'(x+1)-f'(x))\|_{L^2(1,\infty)} \leq \int_0^1 \|x^\beta f''(x+s)\|_{L^2(1,\infty)}ds,\] from which \eqref{eq=continuous} follows because $\|x^\beta f''(x+s)\|_{L^2(1,\infty)} \leq \|x^\beta f''(x)\|_{L^2(1,\infty)}$ for all $0<s<1$. We now prove the theorem. If $ B = \infty$ there is nothing to prove. So let us assume that $t=1$ and $ B<\infty$. Theorem \ref{R+} follows from \eqref{eq=intermediate_ineq} and the inequalities \[ |f(0) - f(1)| \leq \|f'\|_{L^1(\R_+)} \leq \frac{\sqrt 2}{\sqrt\alpha} A\] and \[ A \leq \frac{1}{\sqrt{1-\alpha^2}} B.\] To prove the first inequality, decompose the integral and use the Cauchy-Schwarz inequality \begin{eqnarray*} \|f'\|_{L^1} &= &\int_0^s \frac{ |x^{\frac 1 2 - \alpha} f'(x)|}{x^{\frac 1 2 - \alpha}} dx + \int_s^\infty \frac{|x^{\frac 1 2 + \alpha} f'(x)|}{x^{\frac 1 2 + \alpha}} dx \\ & \leq& \frac{ s^\alpha}{\sqrt{2\alpha}} \| x^{\frac 1 2 - \alpha} f'\|_{L^2(\R_+)} + \frac{ s^{-\alpha}}{\sqrt{2\alpha}} \| x^{\frac 1 2 + \alpha} f'\|_{L^2(\R_+)}.\end{eqnarray*} Taking the infimum over $s>0$ we get $\|f'\|_{L^1(\R_+)} \leq \frac{\sqrt 2}{\sqrt\alpha} A$ as claimed. Let us move to the inequality $ A \leq \frac{1}{\sqrt{1-\alpha^2}} B$. By the assumption $ B<\infty$, we have that $f'' \in L^1([1,\infty))$ and hence $\lim_{x \to \infty} f'(x)$ exists. Since $f$ is bounded, this limit is $0$, and we can write $f'(x) = -\int_1^\infty x g_r(x) dr$ where $g_r(x) = f''(rx)$.
By the triangle inequality \[ \| x^{\frac 1 2 \pm \alpha} f'\|_{L^2(\R_+)} \leq \int_1^\infty \| x^{\frac 3 2 \pm \alpha} g_r \|_{L^2(\R_+)} dr.\] By a change of variable \[ \| x^{\frac 3 2 \pm \alpha} g_r \|_{L^2(\R_+)} = \frac{1}{r^{2\pm \alpha}} \| x^{\frac 3 2 \pm \alpha} f'' \|_{L^2(\R_+)},\] and hence, using that $\int_1^\infty \frac{dr}{r^{2\pm \alpha}} = \frac{1}{1\pm \alpha}$, we get \[ \| x^{\frac 1 2 \pm \alpha} f'\|_{L^2(\R_+)} \leq \frac{1}{1\pm \alpha} \| x^{\frac 3 2 \pm \alpha} f'' \|_{L^2(\R_+)}.\] The inequality $ A \leq \frac{1}{\sqrt{1-\alpha^2}} B$ follows. \end{proof} We now give the proof of Proposition \ref{prop=Hankel_with_C^1_symbol}. We start with a classical elementary lemma. \begin{lemma} \label{lemma=elementary} If $\varphi \in L^2(\T)$ then \[ \|\varphi\|_{L^1(\T)} \leq \frac{2}{\sqrt\pi} \sqrt{\|\varphi \|_{L^2(\T)} \| (1-z)\varphi\|_{L^2(\T)}}.\] \end{lemma} \begin{proof} Denote $g(z) =(1-z)\varphi(z)$. For any $0<s<1/2$: \begin{eqnarray*} \|\varphi\|_{L^1} & = & \int_{0}^{1} |\varphi(e^{2i\pi t})| d t\\ & =&\int_{-s}^s |\varphi(e^{2i\pi t})| d t + \int_{s}^{1-s} \frac{1}{|1-e^{2i\pi t}|} |(1-e^{2i\pi t})\varphi(e^{2i\pi t})| d t\\ & \leq & \sqrt{2s} \|\varphi\|_2 + \sqrt{\int_s^{1-s} \frac{1}{|1-e^{2i\pi t}|^2} d t} \|g\|_2 \end{eqnarray*} by the Cauchy-Schwarz inequality. The remaining integral can be computed: \begin{eqnarray*} \int_s^{1-s} \frac{1}{|1-e^{2i\pi t}|^2} d t & = &2 \int_s^{1/2} \frac{1}{4\sin^2(\pi t)} dt\\ & = & \frac{1}{2}\left[\frac{-\cos(\pi t)}{\pi \sin(\pi t)}\right]_s^{1/2} = \frac{1}{2 \pi \tan(\pi s)} \leq \frac{1}{2\pi^2 s} \end{eqnarray*} where we used that $\tan x \geq x$ for all $0\leq x < \frac \pi 2$. Taking $s= \|g\|_2/(2\pi\|\varphi\|_2) \leq 1/2$ we get the desired inequality. \end{proof} Let $f$ be as in Proposition \ref{prop=Hankel_with_C^1_symbol}. Recall the notation introduced in \eqref{eq=def_l2}. We prove the following.
\begin{lemma}\label{lemma=domination_L1norm} Let $I_n = (2^{n-1},2^{n+1}]$. Denote $\varphi(z) = \sum_{n\geq 0} f(n) z^n$. Then for $n \geq 1$ \[ 2^n\|W_n \ast \varphi\|_{L^1(\T)} \leq \frac{4}{\sqrt \pi}(\|x^{\frac 1 2} f\|_{\ell^2(I_n)} + \sqrt{\|x^{\frac 3 2} f'\|_{L^2(I_n)} \|x^{\frac 1 2} f\|_{\ell^2(I_n)}}).\] \end{lemma} \begin{proof} The inequality $\| W_n \ast \varphi\|_{L^2(\T)} \leq \|f\|_{\ell^2(I_n)}$ is clear. Writing $(1-z)(W_n \ast \varphi)(z)$ as \[ \sum_{k=2^{n-1}+1}^{2^{n+1}} (\widehat W_n(k)-\widehat W_n(k-1)) f(k) z^k + \widehat W_n(k-1)( f(k)-f(k-1)) z^k,\] and noting that $|\widehat W_n(k)-\widehat W_n(k-1)| \leq 2^{1-n}$ and $\widehat W_n(k-1) |f(k)-f(k-1)| \leq \|f'\|_{L^2(k-1,k)}$ for $k\in I_n$, we get \[\|(1-z) (W_n \ast \varphi)\|_{L^2(\T)} \leq 2^{1-n} \|f\|_{\ell^2(I_n)} + \|f'\|_{L^2(I_n)}.\] By Lemma \ref{lemma=elementary} and the inequality $\sqrt{a + b} \leq \sqrt a + \sqrt b$ we get \[ \|W_n \ast \varphi\|_{L^1(\T)} \leq \frac{2^{\frac{3-n}{2}}}{\sqrt \pi} \|f\|_{\ell^2(I_n)} + \frac{2}{\sqrt \pi} \sqrt{\|f\|_{\ell^2(I_n)} \|f'\|_{L^2(I_n)}}.\] Multiplying by $2^n$ and using that $x \geq 2^{n-1}$ on $I_n$ we get \[ 2^n \|W_n \ast \varphi\|_{L^1(\T)} \leq \frac{4}{\sqrt \pi} \left(\| x^{\frac 1 2} f\|_{\ell^2(I_n)} + \sqrt{ \|x^{\frac 1 2} f\|_{\ell^2(I_n)} \|x^{\frac 3 2} f'\|_{L^2(I_n)}}\right),\] which concludes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{prop=Hankel_with_C^1_symbol}] By Lemma \ref{lemma=domination_L1norm} and Peller's characterization, there is a universal constant $C$ such that we have \[ \|A\|_1 \leq C \left( |f(0)|+|f(1)|+\sum_{n \geq 1} \|x^{\frac 1 2} f\|_{\ell^2(I_n)} + \sqrt{ \|x^{\frac 1 2} f\|_{\ell^2(I_n)} \|x^{\frac 3 2} f'\|_{L^2(I_n)}}\right).\] Denote here $I_0=\{1\}$, so that $|f(1)| = \|f\|_{\ell^2(I_0)}$.
Then by Cauchy-Schwarz inequality the previous inequality becomes \begin{multline}\label{eq=after_Peller} \|A\|_1 \leq C \left(|f(0)|+\sum_{n \geq 0} \|x^{\frac 1 2} f\|_{\ell^2(I_n)} \right. \\ \left.+ \sqrt{ \sum_{n \geq 1} \|x^{\frac 1 2} f\|_{\ell^2(I_n)}}\sqrt{\sum_{n \geq 1} \|x^{\frac 3 2} f'\|_{L^2(I_n)}}\right).\end{multline} Let $N \geq 1$. If $n \leq N$, $x^{1/2}$ is dominated by $x^{\frac 1 2 -\alpha} 2^{\alpha(n+1)}$ on $I_n$, and hence \begin{eqnarray*} \sum_{n = 0}^N \|x^{\frac 1 2} f\|_{\ell^2(I_n)} &\leq& \left(\sum_{n = 0}^N 2^{2\alpha(n+1)}\right)^{\frac 1 2} \left( \sum_{n=0}^N \| x^{\frac 1 2 - \alpha}f\|_{\ell^2(I_n)}^2\right)^{\frac 1 2}\\ & \leq & \frac{2^{\alpha(N+2)}}{\sqrt{2^{2\alpha} - 1}} \sqrt 2 \| x^{\frac 1 2 - \alpha} f\|_{\ell^2([1,2^{N+1}])} \\ & \leq& \frac{C}{\sqrt{\alpha}} 2^{\alpha N} \| x^{\frac 1 2 - \alpha} f\|_{\ell^2([1,\infty))}. \end{eqnarray*} The $\sqrt 2$ is because every point in $[1,\infty)$ belongs to at most $2$ intervals $I_n$ for $n \in [1,N]$. For $n > N$ use that $x \geq 2^{n-1}$ on $I_n$ to dominate \begin{eqnarray*} \sum_{n > N} \|x^{\frac 1 2} f\|_{\ell^2(I_n)} &\leq& \sum_{n > N} 2^{-\alpha(n-1)} \| x^{\frac 1 2 + \alpha} f\|_{\ell^2(I_n)} \\ & \leq &\left(\sum_{n > N} 2^{-2\alpha(n-1)} \right)^{\frac 1 2} \sqrt{2} \|x^{\frac 1 2 + \alpha} f\|_{\ell^2([1,\infty))}\\ & \leq & \frac{C}{\sqrt{\alpha}} 2^{-\alpha N} \|x^{\frac 1 2+\alpha}f\|_{\ell^2([1,\infty))}. \end{eqnarray*} Let $a = \| x^{\frac 1 2 - \alpha}f\|_{\ell^2([1,\infty))}$ and $b = \|x^{\frac 1 2+\alpha}f\|_{\ell^2([1,\infty))}$. Since $a \leq b$ we have $\inf_{N \geq 1} 2^{\alpha N} a + 2^{-\alpha N} b \leq 2^{1+\alpha} \sqrt{ab}$. 
This implies that there is a constant $C$ such that \[ \sum_{n \geq 0} \|x^{\frac 1 2} f\|_{\ell^2(I_n)} \leq \frac{C}{\sqrt \alpha} \sqrt{\| x^{\frac 1 2 - \alpha}f\|_{\ell^2([1,\infty))} \|x^{\frac 1 2 + \alpha}f\|_{\ell^2([1,\infty))}}.\] The same argument implies a similar inequality with $f$ replaced by $x f'$ and the norm $\ell^2$ replaced by the norm $L^2$~: \[ \sum_{n \geq 1} \|x^{\frac 3 2} f'\|_{L^2(I_n)} \leq \frac{C}{\sqrt \alpha} \sqrt{\| x^{\frac 3 2 - \alpha}f'\|_{L^2(1,\infty)} \|x^{\frac 3 2 + \alpha}f'\|_{L^2(1,\infty)}}.\] Combining this with \eqref{eq=after_Peller} we get the inequality in the Proposition, which concludes the proof.\end{proof} \section{The case of $\Z^d$}\label{section=Zd} In this section we prove that $\Z^d$ equipped with its standard generating set satisfies the conclusion of Corollary \ref{hyperbolicgroup} for all $d\geq 1$. Actually we prove a stronger result~: the ``if-part'' of Theorem \ref{HSS} holds for $\Z^d$ (see Remark \ref{rem=difference-1-2}). For the standard generating set, the word-length of $n =(n_1,\dots,n_d) \in \Z^d$ is the $\ell^1$-length $|n|=|n_1|+\dots+|n_d|$. \begin{thm}\label{Zd} Let $d \geq 1$. There exists $C_d \in \R_+$ such that for every function $\dot \phi \colon \N \to \C$ such that the matrix $H = \left(\dot \phi(j+k) - \dot \phi(j+k+2)\right)$ is trace class, the map \[ \sum_{n \in \Z^d} c_n e^{i n \cdot x} \mapsto \sum_{n \in \Z^d} c_n \dot \phi(|n|) e^{in \cdot x}\] is bounded on $L^\infty(\T^d)$ with norm less than $ \lim_{n\rightarrow\infty}(| \dot{\phi}(2n)|+|\dot{\phi}(2n+1)|)+C_d\|H\|_{S^1}$. \end{thm} \begin{rem} This is indeed an analogue of Corollary \ref{hyperbolicgroup} because, as $\Z^d$ is commutative, the von Neumann algebra of $\Z^d$ is $L^\infty(\R^d/\Z^d)$, and the norm and cb norm of a Fourier multiplier on $L^\infty(\R^d/\Z^d)$ coincide. \end{rem} \begin{rem} Theorem \ref{Zd} together with Theorem \ref{HSS} tell us that the Banach space of c.b.
radial multipliers on $\mathbb{F}_d$ embeds naturally into the Banach space of c.b. radial multipliers on $\Z^d$. It is tempting to expect a direct proof of this. We were only able to find a proof that relies on Theorem \ref{HSS} and on estimates for the norm in $L^1(\R^d/\Z^d)$ of functions with radial Fourier transform. \end{rem} For the proof of Theorem \ref{Zd}, we can restrict ourselves to the case when $\dot \phi$ has finite support. For a function $\phi \colon \Z^d \to \C$ with finite support, the Fourier multiplier $m_\phi$ with symbol $\phi$ is the convolution by the function $x \in \R^d/\Z^d \mapsto \sum_{n \in \Z^d} \phi(n) e^{2i\pi n\cdot x}$ on $L^\infty(\R^d/\Z^d)$. The norm and cb norm of $m_\phi$ both coincide with the $L^1$-norm of the function $\sum_{n \in \Z^d} \phi(n) e^{2i\pi n\cdot x}$. Taking into account Peller's Theorem \cite{P80} (see \S \ref{subsection=proof}), we see that Theorem \ref{Zd} is equivalent to the existence of $C_d$ such that \begin{equation}\label{eq=validity_Zd} \| \sum_{n \in \Z^d} a_{|n|} e^{ i n \cdot x}\|_{L^1([-\pi,\pi]^d)} \leq C_d \| \sum_{m \geq 0} (a_{m} - a_{m+2}) e^{i m\theta}\|_{B_1^1}\end{equation} for all finitely supported sequences $(a_m)_{m \geq 0}$. But it is easy to see from \eqref{eq:def_B11} that for $\varphi(e^{i\theta}) = \sum_{m \geq 0} b_m e^{i m\theta}$, \[ C^{-1} \|\varphi\|_{B_1^1} \leq \sum_{n \geq 0} \|W_n \ast \psi\|_{L^1(\T)} \leq C \|\varphi\|_{B_1^1}\] for some universal constant $C$, where $\psi(e^{i\theta}) = \sum_{m \geq 0} (m+1) b_m e^{i m\theta}$. The inequality \eqref{eq=validity_Zd} therefore follows from \begin{prop}\label{prop=inequZd} Let $d$ be an integer.
There is a constant $C_d$ such that for every finitely supported sequence $(a_n)_{n\geq 0}$, \begin{equation}\label{eq=L1_Dirichlet} \| \sum_{n \in \Z^d} a_{|n|} e^{ i n \cdot x}\|_{L^1([-\pi,\pi]^d)} \leq C_d \| \sum_{m \geq 0} (m+1) (a_{m} - a_{m+2}) e^{i m\theta}\|_{L^1([-\pi,\pi])}.\end{equation} \end{prop} \begin{proof} We can assume that $d$ is even, because \eqref{eq=L1_Dirichlet} for $d$ implies \eqref{eq=L1_Dirichlet} for $d-1$ by taking the average with respect to $x_d$. We can rewrite \[ \sum_{n \in \Z^d} a_{|n|} e^{i n \cdot x} = \sum_{m\geq 0} (a_{m} - a_{m+1}) D_m(x)\] where $D_m(x) = \sum_{|n|\leq m} e^{i n \cdot x}$. The exact value of $D_m(x)$ was computed in \cite[Theorem 4.2.3]{X95} and is equal to \[ D_m(x_1,\dots,x_d) = [\cos x_1,\dots,\cos x_d]G_m\] where $G_m\colon [-1,1] \to \R$ is given by \[ G_m(\cos \theta) = (-1)^{\frac d 2 - 1} (\sin \theta)^{d-2} (\cos(m\theta) + \cos((m+1)\theta))\] and for a function $f \colon [-1,1] \to \C$ and $d$ distinct numbers $t_1,\dots,t_d \in [-1,1]$ we use the following divided-difference notation \[ [t_1,\dots,t_d]f = \sum_{j=1}^d \frac{f(t_j)}{\prod_{k \neq j} (t_j - t_k)}.\] Here we use that $d$ is even; otherwise, in the formula for $G_m(\cos \theta)$ the terms $\cos$ have to be replaced by $\sin$.
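As a quick numerical sanity check of the divided-difference formula for $D_m$ (our own illustration, not part of the proof; the function names below are ours), one can compare both sides directly for small $m$ and even $d$:

```python
import numpy as np
from itertools import product

def dirichlet(m, x):
    # D_m(x) = sum of e^{i n.x} over n in Z^d with |n|_1 <= m; the sum is
    # real because the terms for n and -n are complex conjugates.
    d = len(x)
    return sum(np.cos(np.dot(n, x))
               for n in product(range(-m, m + 1), repeat=d)
               if sum(abs(k) for k in n) <= m)

def G(m, theta, d):
    # G_m evaluated at cos(theta), for even d.
    return (-1) ** (d // 2 - 1) * np.sin(theta) ** (d - 2) \
        * (np.cos(m * theta) + np.cos((m + 1) * theta))

def divided_difference(ts, vals):
    # [t_1,...,t_d]f = sum_j f(t_j) / prod_{k != j} (t_j - t_k)
    d = len(ts)
    return sum(vals[j] / np.prod([ts[j] - ts[k] for k in range(d) if k != j])
               for j in range(d))

for d in (2, 4):
    x = np.linspace(0.4, 2.6, d)   # distinct points, so the cos-values are distinct
    for m in range(5):
        lhs = dirichlet(m, x)
        rhs = divided_difference(np.cos(x), [G(m, xi, d) for xi in x])
        assert abs(lhs - rhs) < 1e-8, (d, m)
```

For $d=2$ the formula reduces to $D_m(x_1,x_2)=\big(G_m(\cos x_1)-G_m(\cos x_2)\big)/(\cos x_1-\cos x_2)$ with $G_m(\cos\theta)=\cos(m\theta)+\cos((m+1)\theta)$.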
For a function $f \colon [0,\pi] \to \C$ we define $H_f \colon [-1,1] \to \C$ by \[H_f(\cos \theta) = (\sin\theta)^{d-2} f(\theta) \textrm{ for }\theta \in [0,\pi].\] Then we have the identity \[ \sum_{m \geq 0} (a_m - a_{m+1}) G_m(\cos \theta) = \frac{(-1)^{\frac d 2 - 1}}{2} H_{f_1+f_2}(\cos \theta),\] for all $\theta\in [-\pi,\pi]$, where $f_2(\theta)=f_1(-\theta)$ and \[f_1(\theta) = \sum_{m \geq 0} (a_m - a_{m+1})(e^{im\theta} + e^{i(m+1)\theta}) = (a_0-a_1) + \sum_{m\geq 0}(a_m - a_{m+2}) e^{i(m+1)\theta}.\] By the preceding we can therefore write \[ \sum_{n \in \Z^d} a_{|n|} e^{i n \cdot x} =\frac{(-1)^{\frac d 2 - 1}}{2} [\cos x_1,\dots,\cos x_d]H_{f_1+f_2}.\] Using that $H_1(t) = (1-t^2)^{\frac d 2 - 1}$ is a polynomial of degree $d-2$ ($d$ is even) and that $[t_1,\dots,t_d]f=0$ whenever $f$ is a polynomial of degree at most $d-2$, we observe for further use that \begin{equation}\label{eq=divided_difference_vanish} [\cos x_1,\dots,\cos x_d]H_1 = 0.\end{equation} We claim that \begin{equation}\label{eq=divided_difference_ineq} \sup_{0\leq t\leq\pi} \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[t,\pi]}} \|_{L^1([-\pi,\pi]^d)} <\infty. \end{equation} This will imply the proposition. Indeed, if $K$ is the $\sup$ in the previous inequality, and $f \colon [0,\pi] \to \C$ is any $C^1$ function, writing $f(\theta) = f(0) + \int_{0}^{\pi} f'(t) \chi_{[t,\pi]}(\theta) dt$ for all $\theta \in [0,\pi]$ and using \eqref{eq=divided_difference_vanish}, we get \begin{multline*} \| [\cos x_1,\dots,\cos x_d]H_{f} \|_{L^1([-\pi,\pi]^d)} \leq \\ \int_{0}^\pi |f'(t)| \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[t,\pi]}} \|_{L^1([-\pi,\pi]^d)} dt \leq K \|f'\|_{L^1([0,\pi])}.\end{multline*} Applying this inequality to $f=f_1+f_2$ and noticing that \begin{eqnarray}\label{f1'} f_1'&=&i\sum_{m\geq 0} (m+1)(a_{m} - a_{m+2}) e^{i(m+1)\theta}\end{eqnarray} we get \eqref{eq=L1_Dirichlet}. We now move to \eqref{eq=divided_difference_ineq}.
Since $H_{\chi_{[t,\pi]}}+H_{\chi_{[0,t]}}=H_1$ and $[\cos x_1,\dots,\cos x_d]H_1 = 0$ by \eqref{eq=divided_difference_vanish}, we have \[ \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[t,\pi]}} \|_{L^1([-\pi,\pi]^d)} = \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[0,t]}} \|_{L^1([-\pi,\pi]^d)}.\] Also by symmetry we have \[ \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[t,\pi]}} \|_{L^1([-\pi,\pi]^d)} = 2^d \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[t,\pi]}} \|_{L^1([0,\pi]^d)}.\] Finally, by the change of variables $x_i \mapsto \pi - x_i$, \[ \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[t,\pi]}} \|_{L^1([0,\pi]^d)} = \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[0,\pi-t]}} \|_{L^1([0,\pi]^d)},\] so we are left to prove \[ \sup_{0\leq t \leq \pi/2} \| [\cos x_1,\dots,\cos x_d]H_{\chi_{[0,t]}} \|_{L^1([0,\pi]^d)} <\infty.\] This will follow from the estimate in Lemma \ref{lemma=free_diff_lemma} below. If $d=2$ this is exactly the Lemma for $s=0$. If $d >2$, fix $0<t\leq \frac \pi 2$, and for $\theta \in [0,\pi]$ write \[ H_{\chi_{[0,t]}}(\cos \theta) = \int_0^t (d-2) (\sin u)^{d-3} \cos u \chi_{[u,t]}(\theta) du.\] With the notation of Lemma \ref{lemma=free_diff_lemma} we have for all $t \in (0,\pi/2)$ \[ \|[\cos x_1,\dots,\cos x_d]H_{\chi_{[0,t]}} \|_{L^1([0,\pi]^d)} \leq \int_0^t (d-2) (\sin u)^{d-3} \cos u \|A_{u,t}^d\|_{L^1} du,\] which, by Lemma \ref{lemma=free_diff_lemma}, is less than \[ C_d t^{2-d} \int_0^t (d-2) (\sin u)^{d-3} \cos u\, du = C_d \left(\frac{\sin t}{t}\right)^{d-2} \leq C_d.\] This concludes the proof of \eqref{eq=divided_difference_ineq} and of the proposition. \end{proof} The previous proof used the following lemma. \begin{lemma}\label{lemma=free_diff_lemma} Let $d \geq 1$ be an integer. For every $0\leq s<t\leq \pi$, define a function $A^d_{s,t} \colon [0,\pi]^d \to \R$ by \begin{eqnarray*} A^d_{s,t}(x_1,\dots,x_d) &=& [\cos x_1,\dots,\cos x_d] (\cos \theta \mapsto \chi_{[s,t]}(\theta) )\\ & = & \sum_{i=1}^d \frac{\chi_{[s,t]}(x_i)}{\prod_{j \neq i} (\cos x_i - \cos x_j)}.
\end{eqnarray*} Then there is a constant $C_d$ such that for all $0\leq s <t \leq \frac \pi 2$, \[ \|A_{s,t}^d\|_{L^1([0,\pi]^d)} \leq C_d t^{2-d}.\] \end{lemma} \begin{proof} We prove by induction on $d$ that a stronger inequality holds. Namely, for all $0<s<t \leq \frac \pi 2$, \begin{equation}\label{eq=induction_hypothesis} \|A_{s,t}^d\|_{L^1([0,\pi]^d)} \leq C_d (t-s) \left( \frac 1 s |\log(1-\frac s t)| \right)^{d-1}. \end{equation} It is easy to see that $(t-s) \left( \frac 1 s |\log(1-\frac s t)| \right)^{d-1} \leq C'_d t^{2-d}$ for some constant $C'_d$ and all $0<s<t\leq \pi$, so that \eqref{eq=induction_hypothesis} is indeed stronger than the lemma. This holds a priori only for $s>0$, but the case $s=0$ follows by letting $s \to 0$. The case $d=1$ is obvious because \[ \| A^1_{s,t}\|_{L^1([0,\pi])} = \int_0^\pi \chi_{[s,t]}(\theta) d\theta = t-s.\] Assume that \eqref{eq=induction_hypothesis} holds for $d \geq 1$, and let $0 < s < t \leq \frac \pi 2$. Throughout the proof we will write $ X \lesssim Y$ when we mean $X \leq C Y$ for some constant allowed to depend on $d$ but not on $s,t$. If $x_1,\dots,x_{d+1} \in [s,t]$ then \[ A^{d+1}_{s,t}(x_1,\dots,x_{d+1}) = [\cos x_1,\dots,\cos x_{d+1}]1=0.\] By symmetry we therefore have \[ \|A^{d+1}_{s,t}\|_{L^1([0,\pi]^{d+1})} \leq (d+1) \left(\int_0^s + \int_t^\pi\right) \|A^{d+1}_{s,t}(\cdot,\beta)\|_{L^1([0,\pi]^{d})} d\beta.\] If $\beta \notin [s,t]$ we can write \begin{eqnarray*} A_{s,t}^{d+1}(x_1,\dots,x_d,\beta) &=& \sum_{i=1}^d \frac{\chi_{[s,t]}(x_i)}{(\cos x_i - \cos \beta) \prod_{j \neq i} (\cos x_i - \cos x_j)}\\ & = &[\cos x_1,\dots,\cos x_d] \left(\cos \theta \mapsto h_\beta(\theta)\chi_{[s,t]}(\theta)\right),\end{eqnarray*} where we denote $h_\beta(\theta) = \frac{1}{\cos \theta - \cos \beta}$. At this point we have to distinguish the cases $\beta<s$ and $\beta>t$. Let us first consider the case $0 \leq \beta<s$.
Then for $\theta \in [s,t]$ we write $h_\beta(\theta) = h_\beta(t) - \int_s^t h_\beta'(u) \chi_{[s,u]}(\theta) du$, so that by the triangle inequality we get \[ |A_{s,t}^{d+1}(x_1,\dots,x_d,\beta)| \leq |h_\beta(t) A^d_{s,t}(x_1,\dots,x_d)| + \int_s^t |h_\beta'(u) A_{s,u}^d(x_1,\dots,x_d) | du.\] Integrating with respect to $x_1,\dots,x_d$ we obtain \[ \|A^{d+1}_{s,t}(\cdot,\beta)\|_{L^1([0,\pi]^{d})} \leq |h_\beta(t)| \|A^{d}_{s,t}\|_{L^1} + \int_s^t |h_\beta'(u)| \|A^{d}_{s,u}\|_{L^1} du.\] If we use the elementary inequalities $|h_\beta(\theta)| \lesssim \frac{1}{\theta(\theta-\beta)}$ and $|h'_\beta(\theta)| \lesssim \frac{\theta}{\theta^2(\theta-\beta)^2}$ valid for all $0 \leq \beta <\theta \leq \frac \pi 2$, we have $\int_0^s |h_\beta(t)| d\beta \lesssim \frac{1}{t}|\log(1-\frac s t)|$ and $\int_0^s |h'_\beta(u)| d\beta \lesssim \frac{s}{u^2(u-s)}$ and the previous inequality together with the induction hypothesis yields after integration \begin{multline*} \int_0^s \|A^{d+1}_{s,t}(\cdot,\beta)\|_{L^1([0,\pi]^{d})} d\beta \\ \lesssim (t-s) \frac{ |\log(1-\frac s t)|^{d}}{s^{d-1}t} + \int_s^t \frac{s^{2-d}}{u^2} |\log(1-\frac s u)|^{d-1} du. \end{multline*} With the change of variable $v=1-\frac s u$ the last integral becomes \[s^{1-d} \int_0^{1-\frac s t} |\log v|^{d-1} dv.\] One can check that this integral is less than $C (t-s) \left( \frac 1 s |\log(1-\frac s t)| \right)^{d}$. The inequality \[ \int_0^s \|A^{d+1}_{s,t}(\cdot,\beta)\|_{L^1([0,\pi]^{d})} d\beta \lesssim (t-s) \left( \frac 1 s |\log(1-\frac s t)|\right)^d \] follows. 
When $\beta\geq t$, we write $h_\beta(\theta) = h_\beta(s) + \int_s^t h_\beta'(u) \chi_{[u,t]}(\theta) du$ and by the inequalities $|h_\beta(\theta)| \lesssim \frac{1}{\beta(\beta-\theta)}$ and $|h_\beta'(u)| \lesssim \frac{u}{t^2(\beta-u)^2}$ valid for $u,\theta \leq t$, we get similarly \[ \int_t^\pi \|A^{d+1}_{s,t}(\cdot,\beta)\|_{L^1([0,\pi]^{d})} d\beta \lesssim \frac{ |\log(1-\frac s t)|}{s} \|A^d_{s,t}\|_{L^1}+ \int_s^t \frac{u}{t^2(t-u)} \|A^d_{u,t}\|_{L^1} du.\] By the induction hypothesis the first term is $\lesssim (t-s)(\frac 1 s |\log(1-s/t)|)^d$, and the second one is less than \begin{eqnarray*} \int_s^t \frac{u^{2-d}}{t^2} |\log(1-u/t)|^{d-1}du& \leq & \frac t{s^d}\int_s^t |\log(1-u/t)|^{d-1}du/t\\ &=&\frac t{s^d}\int_{0}^{1 - \frac s t} |\log v|^{d-1} dv\\ &\lesssim &(t-s)\left( \frac 1 s |\log(1-\frac s t)|\right)^d. \end{eqnarray*} Therefore, \[ \int_t^\pi \|A^{d+1}_{s,t}(\cdot,\beta)\|_{L^1([0,\pi]^{d})} d\beta \lesssim (t-s)\left( \frac 1 s |\log(1-\frac s t)|\right)^d .\] This completes the proof of \eqref{eq=induction_hypothesis} for $d+1$. The lemma is proved. \end{proof} \begin{rem} If we consider $f_1'+f_2'$ in \eqref{f1'}, we notice that \begin{eqnarray*} f_1'+f_2'&=&i\sum_{m> 0} m(a_{m-1} - a_{m+1}) e^{im\theta}-i\sum_{m> 0} m(a_{m-1} - a_{m+1}) e^{-im\theta}\\ &=&i\sum_{m\in {\Z}} m(a_{|m|-1} - a_{|m|+1}) e^{im\theta}. \end{eqnarray*} We then get \begin{equation}\label{fm1tod} \| \sum_{n \in \Z^d} a_{|n|} e^{ i n \cdot x}\|_{L^1([-\pi,\pi]^d)} \leq C_d \| \sum_{m\in {\Z}} m(a_{|m|-1} - a_{|m|+1}) e^{im\theta}\|_{L^1([-\pi,\pi])}\end{equation} for finitely supported $a$, which says that the Fourier multiplier $e^{in\cdot x}\mapsto a_{|n|}e^{in\cdot x}$, $n\in {\Z}^d$, is bounded on $L^\infty([-\pi,\pi]^d)$ for all $d\in {\N}$ provided $\lim_{k\rightarrow \infty} |a_{2k}|+|a_{2k+1}|<\infty$ and the Fourier multiplier $e^{im\theta}\mapsto b_me^{im\theta}$ with $b_m=m(a_{|m|-1} - a_{|m|+1})$, $m\in{\Z}$, is bounded on $L^\infty([-\pi,\pi])$.
\end{rem} \section{BMO and $H^\infty$ Calculus}\label{section=motivation} A motivation for studying $S_t^r$ comes from harmonic analysis on free groups. We will briefly explain it in this section. We will also show a related result on bounded $H^\infty$-calculus. Following \cite{M08} and \cite{JM12}, we may consider BMO spaces associated with the semigroups $S_t^r$ on the free group von Neumann algebras. For $f\in L^2(\hat {\mathbb F}_n)$, let $$\|f\|_{BMO^r}=\sup_{t\geq 0} \|S_t^r |f-S_t^r f|^2\|^\frac12.$$ Set $$BMO^r(\hat{\mathbb F}_n)=\{f\in L^2, \max\{\|f\|_{BMO^r},\|f^*\|_{BMO^r}\} <\infty\}.$$ Theorem 5.2 of \cite{JM12} says that the complex interpolation space between BMO$^r$ and $L^2(\hat {\mathbb F}_n)$ is $L^p(\hat {\mathbb F}_n)$, for all $2<p<\infty$ and $0<r\leq1$. Thus, for any $0<r\leq 1$, BMO$^r$ serves as an endpoint of the scale $L^p(\hat {\mathbb F}_n)$ corresponding to $p=\infty$. What would be an endpoint space substituting for $L^1(\hat {\mathbb F}_n)$? A natural candidate would be the $H^1$ space defined by the Littlewood-Paley G-function $$G(f)=(\int_0^\infty |\partial_t S^1_t f|^2tdt)^\frac12$$ for $f\in L^1$, with the norm $$ \|f\|_{H^1}=\tau G(f)<\infty.$$ In fact, for $n=1$, we have Fefferman--Stein's famous duality $(H^1)^*=BMO^1$ and the corresponding interpolation result. There has not been a satisfactory $H^1$-BMO duality theory associated with semigroups on free group von Neumann algebras for $n>1$. A main obstacle is the lack of geometric/metric tools in the noncommutative setting. For example, when $n=1$, all the $H^1$-BMO duality arguments (to the best of the authors' knowledge) rely on an equivalent characterization of $H^1$ by the Lusin area-function, whose definition is similar to that of the Littlewood-Paley G-function but uses an integration on cones instead of the radial integration. The concept of ``cones'' on $\hat{\mathbb F}_n$ is a big mystery for $n>1$.
However, there is a semigroup representation of the Lusin area-integration as follows: $$Af=(\int_0^\infty S_{t^2}^2|\partial_t S^1_t f|^2tdt)^\frac12,$$ which uses the semigroup $S_t^2$ to compensate for the ``integration on cones'', and $$\|f\|_{H^1}\simeq \| A(f)\|_{L^1}$$ for $n=1$ (see \cite{M08} for an explanation). We should point out that the equivalence $\|f\|_{H^1}\simeq \| A(f)\|_{L^1}$ fails if we replace the extra $S_t^2$ in the definition of $Af$ by $S_t^1$. The complete boundedness of $S_t^r$, especially for $r=2$, then draws our attention and is proved in Section 3. We still do not know whether a semigroup $H^1$-BMO duality holds on ${\cal L}({\mathbb F}_n)$ for $n>1$ and leave the question for later. Junge--Le Merdy--Xu (\cite{JMX06}) studied $H^\infty$-calculus in the noncommutative setting (see \cite{CDMY96}). In particular, they obtained a bounded $H^\infty$-calculus property of ${\cal L}^r:\lambda_g\mapsto |g|^r\lambda_g$ on $L^p(\hat {\mathbb F}_n)$ and consequently a Littlewood-Paley theory for the corresponding semigroup $S_t^r$ for all $1<p<\infty$, $0<r\leq 1$. The endpoint cases ($p=1,\infty$) are more subtle, and ${\cal L}^r$ has no bounded $H^\infty$-calculus on the group von Neumann algebra ${\cal L}({\mathbb F}_n)$. In the rest of this section, we will show that $S_t^r$, $0<r<1$, has a bounded $H^\infty$-calculus on $BMO^\frac12({\mathbb F}_n)$. \begin{prop}\label{Mc} Suppose $T$ is a sectorial operator on a Banach space $X$. Assume $\int_0^\infty Te^{-tT}a(t)dt$ is bounded on $X$ with norm smaller than $C$ for any choice $a(t)=\pm1$. Then $T$ has a bounded $H^\infty(S_\eta^0)$ calculus for any $\eta>\pi/2$. \end{prop} \begin{proof} This is a consequence of Example 4.8 of \cite{CDMY96} by setting $a(t)$ to be the sign of $ \langle Te^{-tT}u, v\rangle$ for any pair $(u,v)\in X\times X^*$.
\end{proof} \begin{prop}\label{JM12} Suppose $a(t)$ is a function on $(0,\infty)$ satisfying \begin{eqnarray}\label{correction} s\int_s^\infty\frac{ |a(t-s)|^2}{t^2}dt\leq c_a^2\end{eqnarray} for any $s>0$. Then $\int_0^\infty {\cal L}^\frac12 e^{-t{\cal L}^\frac12}a(t)dt$ is completely bounded on $BMO^\frac12({\mathbb F}_n)$ with upper bound $\lesssim c_a$. \end{prop} \begin{proof} We apply Corollary 3.4 of \cite{JM12} to $S_t^1$. Note that the subordinated Poisson semigroup of $S_t^1$ is $S_t^{\frac12}$. So the space $BMO({\cal P})$ associated with $S_t^1$ as defined in \cite{JM12} is the space $BMO^\frac12$ defined in this section. Corollary 3.4 of \cite{JM12} then implies the proposition. Indeed, the required $\Gamma^2\geq 0$ condition associated with $S_t^1$ is actually the positive definiteness of the kernel $K(g,h)=(\frac {|g|+|h|-|g^{-1}h|}2)^2$ on ${\mathbb F}_n\times {\mathbb F}_n$, which follows easily from the negative definiteness of the length function $|\cdot|$. \end{proof} \begin{rem} There are a few misprints in \cite{JM12}. The condition on $a(t)$ on page 710 is misstated; the correct one is \eqref{correction} in this article. In Theorem 3.3 of \cite{JM12}, the integer $n$ must be strictly positive. \end{rem} \begin{thm} For $0<r<1$, ${\cal L}^r:\lambda_g\mapsto |g|^r\lambda_g$ has a bounded $H^\infty(S_\eta^0)$ calculus on $BMO^\frac12(\hat{\mathbb F}_n)$ for any $\eta>r \pi$. \end{thm} \begin{proof} It is easy to see that $S_t^1$ is a bounded semigroup on $BMO^\frac12$. So ${\cal L}^r$ is a sectorial operator on $BMO^\frac12$ of type $\frac{r\pi}2$ for $0<r<1$. Applying Proposition \ref{JM12} with $|a(t)|=1$ and Proposition \ref{Mc} with $T={\cal L}^{\frac12}$, we conclude that ${\cal L}^\frac12$ has a bounded $H^\infty(S_\eta^0)$ calculus on $BMO^\frac12$ for any $\eta>\pi/2$. Therefore, ${\cal L}^r$ has a bounded $H^\infty(S_\eta^0)$ calculus on $BMO^\frac12$ for any $\eta>r\pi$.
\end{proof} \bigskip {\bf Acknowledgement.} The authors thank Narutaka Ozawa for helpful comments. \bibliographystyle{amsplain}
\section{Introduction} \noindent In order to compactify proper metric spaces, Gromov \cite[\S 1.2]{gr} introduced the notion of horofunction compactifications. Motivated by applications to $C^*$-algebras, Rieffel \cite{ri} studied Busemann points, which are limits of almost-geodesics and form a subset of the horofunctions in the boundary of the horofunction compactification. For finite-dimensional normed spaces they were described explicitly by Walsh, see \cite{wa2}. In the diploma thesis of the second author, this description was used to characterize the converging sequences in the horofunction compactification of finite-dimensional vector spaces with polyhedral norms. See Theorem \ref{thm:characterization} later. Also Karlsson, Metz and Noskov \cite{kmn} describe the horofunction boundary for a polyhedral norm. Recently, Kapovich and Leeb \cite{kl} studied the polyhedral horofunction compactification of finite-dimensional vector spaces in order to understand the Satake compactifications of symmetric spaces of non-compact type. Specifically, they raised the following question: \begin{ques}\cite[Quest. 6.18]{kl} Suppose that $\lVert \cdot \rVert$ is a polyhedral norm on a finite-dimensional real vector space $V$. Is it true that the horoclosure $\overline{V}$ of $V$ with respect to this norm, with its natural stratification, is homeomorphic to the closed unit ball for the dual norm? \end{ques} The main purpose of this paper is to give a positive answer to this question and to give an explicit formula for the homeomorphism (see Theorem \ref{thm:homeo} later). This explicit map is a generalization of the moment map known from toric varieties. See \cite[p. 82]{fu} for a definition of the moment map and a similar statement to ours about the map. The basic construction is a bijective map $m^C$ from the vector space $X$ into an $m$-dimensional convex polytope $C$ with vertices $\{c_1, \ldots, c_r\}$. 
It is defined by \begin{align*} m^C: X &\longrightarrow \inte(C),\\ x &\longmapsto \sum_{i = 1}^r \frac{e^{-\langle c_i | x \rangle}}{\sum_{k =1 }^r e^{-\langle c_k | x \rangle}} c_i. \end{align*} \noindent Note that this map is open and surjective onto the interior of $C$. More about it can be found in section \ref{sec:m^c} of this paper. The horofunction compactification can be defined for any proper metric space $X$, see Section \ref{sec:horofunction} below for the general definition. In this paper we focus on the case where $X$ is a finite-dimensional normed space with a polyhedral norm, that is, the unit ball of the norm is a convex polytope containing the origin in its interior. We identify $X$ with $\mathbb R^m$ to use the Euclidean inner product to define orthogonal projections in $X$. Following Walsh \cite[Thm. 1.1 and 1.2]{wa2} we describe the horoboundary as a set of real-valued functions $h_{E,p}$ on the dual space $X^*$ that are parametrized by proper faces $E$ of the dual unit ball $B^{\circ}$ and certain points $p$ in a subspace of $X$. An explicit description can be found in Section \ref{sec:h_ep}. These maps $h_{E,p}$ are used to define the homeomorphism between the horofunction compactification $\overline{X}^{hor}$ of $X$ and the dual unit ball $B^{\circ}$: \begin{thm}\label{thm:homeo} Let $(X, \lVert \cdot \rVert)$ be a finite-dimensional normed space with a polyhedral norm. Let $B \subset X$ be the unit ball associated to $\lVert \cdot \rVert$ and $B^{\circ} \subset X^*$ its dual polytope. Then the horofunction compactification $\overline{X}^{hor}$ of $X$ with respect to the norm $\lVert \cdot \rVert$ is homeomorphic to $B^{\circ}$ via the map \begin{align*} m: \overline{X}^{hor} &\longrightarrow B^{\circ}, \\ X \ni x &\longmapsto m^{B^{\circ}}(x), \\ \partial_{hor} X \ni h_{E,p} &\longmapsto m^E(p). \end{align*} \end{thm} The map $m$ is a combination of several maps $m^C$ for different convex subsets $C \subseteq B^{\circ}$. 
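To make the moment-type map concrete, here is a small numerical sketch of $m^C$ (the function and variable names are ours, not the paper's): a softmax-weighted convex combination of the vertices, evaluated on the square with vertices $(\pm 1, \pm 1)$, which is the dual unit ball of the $L^1$-norm discussed later.

```python
import numpy as np

def m_C(x, vertices):
    """Sketch of the map m^C: a softmax-weighted convex combination
    of the polytope vertices c_1, ..., c_r (names are ours)."""
    v = np.asarray(vertices, dtype=float)
    w = np.exp(-v @ np.asarray(x, dtype=float))  # e^{-<c_i | x>}
    w /= w.sum()                                 # weights sum to 1
    return w @ v                                 # lies in inte(C)

# C = square with vertices (+-1, +-1), e.g. the dual L^1 unit ball.
square = [(1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]

print(m_C([0.0, 0.0], square))   # equal weights: the centroid [0. 0.]
print(m_C([10.0, 0.0], square))  # pushed towards the face {x = -1} of C
```

Since the weights are strictly positive and sum to $1$, the image is always a point of the open polytope, and sending $x$ far in a fixed direction pushes the image towards the face of $C$ on which $\langle \cdot | x \rangle$ is minimal, in line with the boundary behaviour of the map $m$ in the theorem.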
The space $X$ itself is mapped onto the interior of the dual unit ball $B^{\circ}$. Horofunctions associated to the face $E \subset B^{\circ}$ are mapped into the face $E$, where $p \in X$ denotes the position of the image within $E$. Note that $p$ lies in a subspace of $X$ of the same dimension as the convex polytope $E$. The proof of this theorem is based on a result in the diploma thesis of the second author, which gives a characterization of sequences converging to the horoboundary, see Theorem \ref{thm:characterization} below. After some preliminaries we prove this characterization and give some examples to visualize the strong dependence of the direction and shape of the sequence on the faces of the unit ball and its dual. By combining this characterization and the above explicit map, we prove Theorem \ref{thm:homeo} in the last section. \subsubsection*{Acknowledgement} The first author acknowledges support from the NSF grants DMS 1107452, 1107263, 1107367 ``GEometric structures And Representation varieties'' (the GEAR Network) and partial support from the Simons Fellowship (grant \#305526) and the Simons grant \#353785. The second author was supported by the European Research Council under ERC-consolidator grant 614733. \section{Preliminaries} \subsection{Notations} In the following, $(X,\lVert \cdot \rVert)$ always denotes an $m$-dimensional normed space with a polyhedral unit ball $B$ associated to the norm $\lVert \cdot \rVert$. That means $B$ is an $m$-dimensional polytope containing the origin in its interior. Let $B^{\circ}$ denote the dual unit ball of $B$ in the dual space $X^*$. It is also an $m$-dimensional polytope, see Definition \ref{defi:dual} below. $\langle \cdot | \cdot \rangle$ denotes the dual pairing of $X^*$ and $X$.
For any subset $F \subset X$ let $V(F) \subset X$ be the subspace generated by $F$, that is, the smallest subspace containing $F$, and $V(F)^\bot$ its orthogonal complement with respect to the Euclidean inner product obtained by identifying $X$ with $\mathbb R^m$. The projection of an element $x \in X$ to these two subspaces will be written as $x_F$ for the projection to $V(F)$ and $x^F$ for the projection to $V(F)^\bot$. Whenever a convex set $C$ is given as the convex hull of a set of points, $C = \conv\{c_1, \ldots, c_k\}$, we want this set of points to be minimal, that is, $\conv\{c_1, \ldots, c_k\} \neq \conv\{c_1, \ldots, c_{j-1}, c_{j+1}, \ldots, c_k\}$ for all $j = 1, \ldots, k$. This means that each point $c_j$ is a proper vertex of $C$. \begin{rem} We could also have taken the quotient $X/V(F)$ instead of $V(F)^\bot$, but since the orthogonal complement is more geometric, we use the complement $V(F)^\bot$. \end{rem} \subsection{Some convex analysis} \begin{defi}\label{defi:dual} Let $B$ be the unit ball of our norm $\lVert \cdot \rVert$. Then the \emph{dual unit ball} $B^{\circ}$ is defined as the polar of $B$: \[ B^{\circ} \mathrel{\mathop:}= \{ y \in X^* \ | \langle y | x \rangle \geq -1 \ \forall x \in B \}. \] The \emph{dual norm} $\lVert \cdot \rVert^\circ$ is the norm which has $B^{\circ}$ as its unit ball. \end{defi} \begin{rem} Every $m$-dimensional polytope $C \subset X$ containing the origin in its interior defines a norm $\lVert \cdot \rVert_C$ on $X$ by \begin{align*} \lVert x \rVert_C \mathrel{\mathop:}= \inf\{\alpha > 0 | x \in \alpha C\} \end{align*} for all $x \in X$. \end{rem} There are two ways to describe a bounded polytope, either as the convex hull of a finite set of points or as the intersection of finitely many half-spaces. For more details on polars and polyhedral convex sets see for example \cite{be} or \cite[\S 19]{ro}. 
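As a quick numerical sanity check of the polarity with the sign convention $\langle y | c_i \rangle \geq -1$ used here, the gauge of the polar $C^\circ$ can be read off the vertices of $C$: it is the smallest $\alpha > 0$ with $\langle y | c_i \rangle \geq -\alpha$ for all $i$, i.e. $\max_i(-\langle y | c_i \rangle)$. A minimal sketch (names are ours, not the paper's), using the $L^1$ unit ball of the example discussed next:

```python
import numpy as np

# Vertices of the L^1 unit ball B in R^2.
B_vertices = np.array([(1, 0), (0, 1), (-1, 0), (0, -1)], dtype=float)

def dual_norm(y, vertices):
    """Gauge of the polar C° = {y : <y|c_i> >= -1 for all i}:
    max_i(-<y|c_i>). For a symmetric polytope C (as here) this is
    nonnegative and equals the usual dual norm."""
    return float(np.max(-vertices @ np.asarray(y, dtype=float)))

# The dual of the L^1-norm is the L^infty-norm:
print(dual_norm((3.0, -2.0), B_vertices))  # -> 3.0 = max(|3|, |-2|)
```

This recovers the fact from the example below that the dual of the $L^1$-norm is the $L^\infty$-norm.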
This leads to the following description of a polytope $C$ around the origin and its dual polytope $C^\circ$: Let $C = \conv\{c_1, \ldots, c_r\}$ be an $m$-dimensional polytope around the origin which is given as the convex hull of a finite set of points. Then each point $c_i$ defines a hyperplane $H_i \subset X^*$ such that $\langle H_i | c_i \rangle = -1$, that is, $\langle h_i | c_i \rangle = -1$ for all $h_i \in H_i$. Let $V_i \subset X^*$ be the halfspace bounded by $H_i$ which contains the origin. Then \begin{align*} C^\circ &= \bigcap_{i = 1, \ldots, r} V_i\\ &= \{ y \in X^* \ | \ \langle y | c_i \rangle \geq -1 \ \forall i= 1, \ldots, r\}. \end{align*} As $C$ is convex and contains the origin, we have $(C^\circ)^\circ = C$. It is therefore also easy to describe $C^\circ$ as a convex hull of a finite set of points when $C$ is given as the intersection of certain halfspaces $V_i$. \begin{defi} A \emph{$k$-face} of a polytope $C = \bigcap_{i = 1, \ldots, r} V_i \subset X$ is a $k$-dimensional subset of $X$ which is the intersection of $C$ with one or more of the hyperplanes $H_i$ bounding the halfspaces $V_i$. An $(m-1)$-dimensional face is also called a \emph{facet}. \end{defi} \begin{ex} \label{ex:dualL1} We consider $\mathbb R^2$ equipped with the $L^1$-norm. Then $B$ is a square with vertices \[ c_1 = (1,0), \ c_2 = (0,1), \ c_3 = (-1,0), \ c_4 = (0,-1). \] Each of them defines a hyperplane, for example \[ H_1 = \left\{ \left . (-1, y) \right| \ y \in \mathbb R \right\}, \] for which obviously $\langle H_1 | c_1 \rangle = -1$. We obtain similar sets for the other three vertices. \begin{figure}[h!] \includegraphics{Horofunction_submit_picture_B_Bdual_2.pdf} \caption{The unit ball $B$ and its dual $B^{\circ}$ as in Example \ref{ex:dualL1} }\label{fig: example_B_Bo} \end{figure} From Figure \ref{fig: example_B_Bo} it is clear that the dual unit ball is a square corresponding to the $L^\infty$-norm with vertices \[ u_1 = (1,1), \ u_2 = (-1,1), \ u_3 = (-1, -1), \ u_4 = (1,-1).
\] \end{ex} \begin{rem} There is a one-to-one correspondence between the faces of $B$ and those of $B^{\circ}$. Indeed, let $F \subset B$ be a $k$-face of the polyhedral unit ball $B$. Then there is a unique ($m-1-k$)-face $E \subset B^{\circ}$ of the dual unit ball defined by the equation $\langle E | F \rangle = -1$, that is, $\langle e | f \rangle = -1$ for all $e \in E$ and $f \in F$. This face is called the \emph{dual face} of $F$ and often denoted by $F^\circ$. Note that \begin{align*} \label{dimensionformula} \dim F + \dim F^\circ = m-1. \end{align*} \end{rem} \begin{lem}\label{lem:samepairing} Let $E \subset B^{\circ}$ be a face and $F \subset B$ its dual. Then there is a $t \in X^*$ such that \[ \langle e | q \rangle = \langle t | q \rangle \] for all $e \in E $ and $q \in V(F)$. \end{lem} \begin{proof} The statement follows from the fact that $E \subset (V(F)^\bot)^* + t$ for some $t \in V(F)^*$. That is, for all $e \in E$ there is an $f^\bot \in (V(F)^\bot)^*$ such that $e = f^\bot + t$. \end{proof} \begin{notation} Let $C = \conv\{c_1, \ldots, c_r\}$ be a convex polytope. Then the faces of $C$ are also convex polytopes. For any face $F \subset C$ let $S_F \subset \{1, \ldots, r\}$ denote those indices of vertices belonging to $F$. \end{notation} \begin{defi} Let $R \subset X$ be a convex set of arbitrary dimension. Then the \emph{cone} $K_R$ over $R$ is the convex set \[ K_R \mathrel{\mathop:}= \{x \in X \ | \ \exists \ \alpha > 0, r \in R \text{ such that } x = \alpha r\}. \] \end{defi} \begin{lem} \label{lem:pairinginfty} Let $C = \conv\{c_1, \ldots, c_r\} \subset X$ be an $m$-dimensional convex polytope around the origin with faces $\{F_1, \ldots, F_k\}$. Fix one face $F = F_j \in \{F_1, \ldots, F_k\}$ and denote by $E = F^\circ \subset C^\circ$ its dual face. Let $(x_n)_{n \in \mathbb N}$ be an unbounded sequence such that for $n$ large enough $x_{n,F} \in K_F$. 
Then for any vertex $c_E$ of $E$ and any vertex $c_j$ of $C$ we have, as $n \longrightarrow \infty$: \begin{align*} \langle c_E - c_j | x_{n,F} \rangle \longrightarrow \left \{ \begin{array}{ll} 0 & \text{ if } j \in S_E, \\ -\infty & \text{ if } j \notin S_E . \end{array} \right. \end{align*} \end{lem} \begin{proof} As $x_{n,F} \in K_F$ for $n$ large enough, there is an $f_n \in F$ for each $n$ such that $x_{n,F} = \lVert x_{n,F} \rVert_C \cdot f_n$. Let $j \in S_E$, then \[ \langle c_E - c_j | x_{n,F} \rangle = \lVert x_{n,F} \rVert_C \Big ( \langle c_E - c_j | f_n \rangle \Big ) = 0, \] because both $c_E$ and $c_j$ are vertices of $E$ and $f_n \in F = E^\circ$. \\ If $j \notin S_E$, then $\langle c_j | f_n \rangle > -1$ while $\langle c_E | f_n \rangle = -1$. Therefore, as $\lVert x_{n,F} \rVert_C \rightarrow \infty$, we have \[ \langle c_E - c_j | x_{n,F} \rangle = \lVert x_{n,F} \rVert_C \Big ( \underbrace{\langle c_E - c_j | f_n \rangle }_{< 0} \Big ) \longrightarrow - \infty. \qedhere \] \end{proof} \begin{defi} The \emph{affine hull} $\aff(A)$ of a set $A \subset X$ is defined to be the smallest affine set in $X$ containing $A$. The \emph{relative interior} $\ri(A)$ of $A$ is the interior of $A$ within $\aff(A)$. Similarly, we define the \emph{relative boundary} of $A$ as $\partial_{rel}A \mathrel{\mathop:}= (\cl A)\setminus (\ri A)$. \end{defi} \subsection{The ``pseudo-norm'' $\lvert \cdot \rvert_R$} Before we can introduce the maps defining horofunctions in the next section, we first need to define a ``pseudo-norm'': \begin{defi} Let $R \subset X^*$ be a convex set. For every $x \in X$ define \begin{align*}\label{pseudonorm} |x|_R := - \inf_{q \in R}\langle q|x\rangle. \end{align*} \end{defi} In general, this is not a norm. But by the polarity of the unit balls $B$ and $B^{\circ}$, $\lvert \cdot \rvert_{B^{\circ}}$ is a norm, since \[ \lvert \cdot \rvert_{B^{\circ}} = -\inf_{q \in B^{\circ}}\langle q|\cdot\rangle = \lVert \cdot\rVert.
\] In the following we state some technical lemmata about the relation between the pseudo-norm $\lvert \cdot \rvert_R$ and the norm $\lVert \cdot \rVert$. We will use them later in the proof of Theorem \ref{thm:characterization}. \begin{lem} \label{lem:minionbdry} If $R = \conv\{r_1, \ldots, r_k\}$ is a convex polytope, then \begin{align*} \lvert x \rvert_R = - \inf_{i = 1,\ldots, k} \langle r_i | x \rangle. \end{align*} \end{lem} \begin{proof} Define a function $f: R \longrightarrow \mathbb R$ via $f(q) = \langle q|x\rangle$. As $R$ is compact and $f$ is continuous and affine, $f$ attains its minimum and its maximum on the boundary of $R$. Indeed, if an extremum were attained only in the interior of $R$, the derivative of $f$ would vanish there; as $f$ is affine, $f$ would then be constant, contradicting the assumption that the extremum is attained only in the interior. As the boundary of $R$ is a finite union of polyhedral convex sets, we can conclude in the same way that $f$ attains its minimum and maximum at the vertices $r_1, \ldots, r_k$. \end{proof} \begin{lem} \label{||E||=Ei} Let $F$ be a non-empty proper face of $B$ and $E := F^\circ$ its dual convex face with vertices $e_1, \ldots, e_k$. Their dual facets are denoted by $F_i = \{e_i\}^\circ \subset B$ for all $i = 1, \ldots, k$. If $F \subset B$ is not a facet, the $F_i$ contain $F$ in their relative boundary. If $F$ is already a facet, take $F_i = F$ for all $i$. Let $x \in X$ be in the interior of $K_F$ and $p \in X$ be small enough such that $p + x \in K_{F_j}$ for some (not necessarily unique) $j \in \{1, \ldots, k\}$. Then \[ |x + p|_E = \lVert x + p \rVert. \] \end{lem} \begin{proof} Because of the duality $F_j = \{e_j\}^\circ$ and as $\frac{x+p}{\lVert x+p \rVert} \in F_j$, we know that \[ \left \langle e_j \left |\frac{x+p}{\lVert x+p \rVert}\right. \right \rangle = -1, \hspace{1cm} \left \langle e_i \left |\frac{x+p}{\lVert x+p \rVert}\right. \right \rangle \geq -1 \hspace{3mm} \text{ for all } i \neq j.
\] Together with Lemma \ref{lem:minionbdry} we compute \begin{align*} |x+p|_E &= -\inf_{q \in E}\langle q|x+p\rangle \\ &= - \inf_{i = 1, \ldots, k}\langle e_i|x+p\rangle \\ &= - \langle e_j|x+p\rangle \\ &= \lVert x + p \rVert. \qedhere \end{align*} \end{proof} \begin{lem} \label{||E=||B+||E} Let $F$ be a proper face of $B$ and $x \in X$ such that $x \in \inte(K_F)$ and $E = F^\circ$ its dual face. Then for all $p \in X$, \[ |x + p|_E = \lVert x \rVert + |p|_E. \] \end{lem} \begin{proof} As $\frac{x}{\lVert x \rVert} \in F$, we know that $\langle q|\frac{x}{\lVert x \rVert}\rangle = -1$ and therefore $\langle q|x\rangle = - \lVert x \rVert$ for all $ q \in E$. With this we obtain \begin{align*} |x + p|_E &= -\inf_{q \in E}\langle q|x+p\rangle \\ &= - \inf_{q \in E}[\langle q|x\rangle + \langle q|p\rangle ]\\ &= \lVert x \rVert - \inf_{q \in E}\langle q|p\rangle \\ &= \lVert x \rVert + |p|_E. \qedhere \end{align*} \end{proof} \subsection{The maps $h_{E,p}$} \label{sec:h_ep} In this section we introduce real-valued functions on $X$ which will later turn out to be the horofunctions of $X$ with respect to our norm $\lVert \cdot \rVert$. For every proper face $E \subset B^{\circ}$ and every $p \in V(E^\circ)^\bot$ we define the function \begin{align*} h_{E,p}: X &\longrightarrow \mathbb R, \\ y &\longmapsto \lvert p - y \rvert_E - \lvert p \rvert_E. \end{align*} We could also take $p \in X$ to define $h_{E,p}$. But the following lemma shows that only the part in $V(F)^\bot$ makes a contribution to $h_{E,p}$. \begin{lem} \label{lem:p^Fonly} Let $E \subset B^{\circ}$ be a face and $F \subset B$ its dual. Then for all $p, y \in X$ \[ \lvert p - y \rvert_E - \lvert p \rvert_E = \lvert p^F - y \rvert_E - \lvert p^F \rvert_E, \] where as usual $p^F$ denotes the projection of $p$ to $V(F)^\bot$. \end{lem} \begin{proof} Let $\{e_1, \ldots, e_k\}$ be the vertices of $E$ and $\{f_1, \ldots, f_l\}$ those of $F$. 
Then by Lemma \ref{lem:samepairing} there is a $t \in X^*$ such that $\langle e_i | q \rangle = \langle t | q \rangle$ for all $q \in V(F)$ and all $i \in \{1, \ldots, k \}$. So we obtain \begin{align*} \lvert p - y \rvert_E - \lvert p \rvert_E &= -\inf_{i} \langle e_i | p - y \rangle + \inf_{i}\langle e_i | p \rangle \\ &= -\inf_{i} \left[\langle e_i | p^F - y \rangle + \langle e_i | p_F \rangle \right] + \inf_{i} \left[\langle e_i | p^F \rangle + \langle e_i | p_F \rangle \right] \\ &= - \inf_{i} \langle e_i | p^F - y \rangle + \inf_{i} \langle e_i | p^F \rangle \\ &= \lvert p^F - y \rvert_E - \lvert p^F \rvert_E, \end{align*} where the infimum is always taken over $i \in \{1, \ldots, k\}$. \end{proof} \begin{lem} \label{lem:pseudonormshift} Let $E \subset B^{\circ}$ be a face and as usual $F \subset B$ its dual face. Let $E_t = E + t$ be the convex set obtained by shifting $E$ by some $t \in X^*$. Then for $p \in V(F)^\bot$ and any $y \in X$, we have the equality \[ h_{E_t, p}(y) = h_{E,p}(y) + \langle t | y \rangle. \] \end{lem} \begin{proof} Let $e_1, \ldots, e_k$ be the vertices of $E$. Then the vertices of $E_t$ are \mbox{$\{v_j = e_j + t | j = 1, \ldots, k\}$}, and \begin{align*} h_{E_t, p}(y) &= \lvert p - y \rvert _{E_t} - \lvert p \rvert_{E_t} = -\inf_{j = 1, \ldots, k}\langle v_j | p - y \rangle + \inf_{j = 1, \ldots, k}\langle v_j | p \rangle \\ &= -\inf_j \big [ \langle e_j | p - y \rangle + \langle t | p - y \rangle \big ] + \inf_j \big[ \langle e_j | p \rangle + \langle t | p \rangle \big ] \\ &= -\inf_{j }\langle e_j | p - y \rangle + \inf_{j }\langle e_j | p \rangle - \langle t | p - y \rangle + \langle t | p \rangle \\ &= h_{E,p}(y) + \langle t | y \rangle. \qedhere \end{align*} \end{proof} \begin{lem} \label{lem:hE1_neq_hE2} Let $E_1 \neq E_2$ be two faces of $B^{\circ}$ with dual faces $F_1, F_2 \subset B$, respectively. Then there are no points $p_1, p_2 \in X$ such that $h_{E_1, p_1} = h_{E_2, p_2}$.
\end{lem} \begin{proof} Without loss of generality let $\dim E_1 \geq \dim E_2$. By $u_j$, $j = 1, \ldots, r$, we denote the vertices of $B^{\circ}$. Let $u \in E_1 \setminus E_2$ be a vertex of $E_1$ and $F = \{u\}^\circ \subset B$ its dual facet. Now assume there are $p_1, p_2 \in X$ such that $h_{E_1, p_1} = h_{E_2, p_2}$. Then, as $h_{E_1, p_1}(p_1) = \lvert p_1 - p_1 \rvert_{E_1} - \lvert p_1 \rvert_{E_1} = - \lvert p_1 \rvert_{E_1}$, it follows that \begin{align*} h_{E_2, p_2}(p_1) = \lvert p_2 - p_1 \rvert_{E_2} - \lvert p_2 \rvert_{E_2} = - \lvert p_1 \rvert_{E_1}. \end{align*} Now take $y \in X$ such that $p_1- y, p_2 - y \in K_F$. As $F$ is a facet of $B$, we can always find a $y$ big enough such that this condition is satisfied. Then, as $\frac{p_1 - y}{\lVert p_1 - y \rVert} \in F$, the dual pairing $\langle u | \frac{p_1 - y}{\lVert p_1 - y \rVert} \rangle = -1$ is minimal and the pairing with any other vertex of $B^{\circ}$ is strictly greater than $-1$. So as $u$ is a vertex of $E_1$ but not of $E_2$, we obtain \begin{align*} h_{E_1, p_1}(y) &= -\inf_{j \in S_{E_1}} \langle u_j | p_1 - y \rangle - \lvert p_1 \rvert_{E_1} \\ &= - \lVert p_1 - y \rVert \inf_{j \in S_{E_1}} \left \langle u_j \left| \frac{p_1 - y}{\lVert p_1 - y \rVert} \right. \right \rangle - \lvert p_1 \rvert_{E_1} \\ &= \lVert p_1 - y \rVert - \lvert p_1 \rvert_{E_1} \end{align*} and \begin{align*} h_{E_2, p_2}(y) &= -\inf_{i \in S_{E_2}} \langle u_i | p_2 - y \rangle - \lvert p_2 \rvert_{E_2} \\ &= -\inf_{i \in S_{E_2}} \big [ \langle u_i | p_2 - p_1 \rangle + \langle u_i | p_1 - y \rangle \big] - \lvert p_2 \rvert_{E_2} \\ &< \lVert p_1 - y \rVert + \lvert p_2 - p_1 \rvert_{E_2} - \lvert p_2 \rvert_{E_2} \\ &= \lVert p_1 - y \rVert - \lvert p_1 \rvert_{E_1} = h_{E_1, p_1}(y). \end{align*} So for every pair $p_1, p_2 \in X$ we have found a point where $h_{E_1, p_1}$ and $h_{E_2, p_2}$ do not coincide.
\end{proof} \section{The horofunction compactification} \label{sec:horofunction} \subsection{Introduction to horofunctions} For this general introduction let $(X,d)$ be a not necessarily symmetric metric space, that is, $d(x,y) \neq d(y,x)$ for $x, y \in X$ is possible. Assume the topology to be induced by the symmetrized distance \[ d_{sym}(x,y) \mathrel{\mathop:}= d(x,y) + d(y,x) \] for all $x,y \in X$. Let $p_0$ be a basepoint and let $C(X)$, the space of continuous real-valued functions on $X$, be endowed with the topology of uniform convergence on bounded subsets. We denote its quotient by constant functions by $\widetilde{C}(X)$. The horofunction compactification of $X$ is an embedding of $X$ into $\widetilde{C}(X)$. To obtain this embedding we define \begin{equation*} \begin{aligned} \psi: X &\longrightarrow \widetilde{C}(X) \nonumber \\ z &\longmapsto \psi_z = d(\cdot, z) - d(p_0,z). \end{aligned} \end{equation*} By using the triangle inequality it can be shown that this map is injective and continuous. But it is not always an embedding. To ensure this, some more assumptions are required, as the following lemma shows (see \cite[p.4 and Thm. 2.2]{wa2} for a proof). \begin{lem}\label{lem:condi} ~ \begin{enumerate} \item If $d_{sym}$ is proper, i.e. every closed ball is compact, then the closure of the set $\{\psi_z \ | z \in X\}$ in $\widetilde{C}(X)$ is compact. \item Let additionally $X$ be geodesic, i.e., every two points are connected by a geodesic, and let $d$ be symmetric with respect to convergence, that is, for a sequence $(x_n)_{n \in \mathbb N}$ in $X$ and some $x \in X$ the following condition holds: \[ d(x_n, x) \longrightarrow 0 \ \text{ iff } \ d(x, x_n) \longrightarrow 0. \] Then $\psi$ is an embedding of $X$ into $\widetilde{C}(X)$. 
\end{enumerate} \end{lem} \begin{defi} The horofunction boundary of a metric space $X$ is the boundary of the closure of the image $\psi(X)$ in $\widetilde{C}(X)$: \[ \partial_{hor} (X) \mathrel{\mathop:}= \Big(\cl \psi(X) \Big) \setminus \psi(X). \] Its elements are called \emph{horofunctions}. If the closure $\overline{X}^{hor} \mathrel{\mathop:}= X \cup \partial_{hor} X$ is compact, it is called the \emph{horofunction compactification} of $X$. \end{defi} \begin{rem} ~ \begin{enumerate} \item The choice of an alternative basepoint $p_0'$ leads to a homeomorphic boundary and compactification. For a reference see \cite{wa1}. \item All elements of $\cl \psi(X)$ are 1-Lipschitz with respect to $d_{sym}$. \end{enumerate} \end{rem} From now on we assume all conditions of Lemma \ref{lem:condi} to be satisfied and identify $X$ with $\psi(X)$. Then a sequence $(z_n)_n \subset X$ converges to a horofunction $\xi \in \partial_{hor} (X)$ if the sequence of the associated maps converges uniformly on compact subsets: $\psi_{z_n} \longrightarrow \xi$. Rieffel \cite[Thm. 4.5]{ri} showed that there are special sequences that always converge to a horofunction $\xi \in \partial_{hor}X$, namely those along so-called almost-geodesics. \begin{defi} A continuous map $\gamma: \mathbb R \longrightarrow X$ is called an \emph{almost-geodesic} if for all $\varepsilon > 0$ there is an $N \in \mathbb N$ such that for all $ t \geq s \geq N$ \[ \lvert d\big(\gamma(0), \gamma(s)\big) + d\big(\gamma(s), \gamma(t)\big) - t \rvert < \varepsilon. \] \end{defi} \begin{defi} A horofunction which is the limit of an almost-geodesic is called a \emph{Busemann point}. \end{defi} In general, not all horofunctions have to be Busemann points, and it is an interesting question when this actually happens. In the case of a finite-dimensional vector space with a polyhedral norm we know by Walsh \cite[Thm. 1.2]{wa2} that every horofunction is a Busemann point.
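To build some intuition for these definitions, the embedding $\psi$ and the convergence $\psi_{z_n} \longrightarrow \xi$ can be observed numerically. The following minimal sketch (our own code and names, with basepoint $p_0 = 0$ and $d(x,y) = \lVert y - x \rVert_1$ on $\mathbb R^2$) follows the geodesic ray $z_t = (t, 0)$, along which the values $\psi_{z_t}(y)$ stabilize pointwise, so the limit is a Busemann point:

```python
def psi(z, y):
    """psi_z(y) = d(y, z) - d(p0, z) with basepoint p0 = 0 and the
    L^1-distance d(x, y) = ||y - x||_1 (a sketch; names are ours)."""
    l1 = lambda v: sum(abs(t) for t in v)
    return l1([yi - zi for zi, yi in zip(z, y)]) - l1(z)

# Along the ray z_t = (t, 0) the function values stabilize pointwise:
for t in (1.0, 10.0, 1000.0):
    print(psi((t, 0.0), (5.0, 0.0)))   # -> 3.0, -5.0, -5.0
```

Once $t$ passes the first coordinate of $y$, the value $\psi_{z_t}(y)$ no longer changes; the limit function here is $y \mapsto -y_1 + \lvert y_2 \rvert$, a horofunction of the $L^1$-plane.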
In this situation with a polyhedral unit ball $B$, he also gives a criterion to calculate all Busemann points explicitly. To do this, he describes the horofunctions as the Legendre-Fenchel transforms $f_{E,p}^*$ of certain functions depending on proper faces $E \subset B^{\circ}$ and points $p \in X$: \begin{align} \label{legendre} f_{E,p}: \ X^* &\longrightarrow [0, \infty], \nonumber \\ q &\longmapsto f_{E,p}(q) \mathrel{\mathop:}= I_E(q) + \langle q | p \rangle - \inf_{y \in E} \langle y | p \rangle, \end{align} where the indicator function $I_E(q)$ is $0$ for $q \in E$ and $\infty$ elsewhere. Recall that the Legendre-Fenchel transform $f^*$ of a function $f:X \rightarrow \mathbb R \cup \{\infty\}$ is given by \begin{align*} f^*: X^* &\longrightarrow \mathbb R \cup \{\infty\},\\ w &\longmapsto \sup_{x \in X} \big(\langle w | x \rangle - f(x) \big). \end{align*} More about it can be found for example in \cite[\S 7.2]{be}. The result of Walsh can be stated as follows. \begin{thm}\cite[Thm. 1.1.]{wa2} Let $(X, \lVert \cdot \rVert)$ be a finite-dimensional vector space with polyhedral norm and the notations be as above. Then the set of Busemann points is the set \[ \{f_{E,p}^* \ | E \subset B^{\circ} \text{ is a (proper) face, } p \in X\}. \] \end{thm} We show now that our previously defined maps $h_{E,p}$ are exactly these Busemann points. \begin{lem} \label{lem:fep-pseudonorm} Let $E$ be a face of $B^{\circ}$ and $p \in X$. Then \[ f_{E,p}^*(\cdot) = h_{E,p}(\cdot) = |p - \cdot|_E - |p|_E. \] \label{fep*} \end{lem} \begin{proof} By definition, we obtain for all $y \in X$: \begin{align*} f_{E,p}^*(y) &= \sup_{x \in X^*}[\langle x|y\rangle - f_{E,p}(x)]\\ &= \sup_{x \in X^*}[\langle x|y\rangle - I_E(x) - \langle x|p\rangle + \inf_{q \in E}\langle q|p\rangle]\\ &= \sup_{x \in E}[\langle x|y-p\rangle] + \inf_{q \in E}\langle q|p\rangle\\ &= -\inf_{x \in E}(\langle x|p-y\rangle) + \inf_{q \in E}\langle q|p\rangle\\ &= |p-y|_E - |p|_E.
\qedhere \end{align*} \end{proof} \begin{cor} With the notations as in the previous lemma, it holds \[ f_{E,p}^* = f_{E,p^F}^*. \] \end{cor} \begin{proof} The statement follows directly by Lemma \ref{lem:p^Fonly}. \end{proof} \noindent In summary we can describe the set of horofunctions easily as \[ \partial_{hor}X = \{h_{E,p} \ | E \subset B^{\circ} \text{ is a (proper) face, } p \in V(E^\circ)^\bot \}. \] \noindent To describe the topology of $\overline{X}^{hor}$ we characterize converging sequences in the following section. \subsection{The characterization theorem} The main theorem of this section characterizes all sequences converging to a horofunction. It shows the strong dependence of the horofunctions on the shape of the dual unit ball, which is the underlying principle of the homeomorphism in Theorem \ref{thm:homeo}. This result is also used in \cite{js} to establish a geometric 1-1 correspondence between the nonnegative part of $n$-dimensional projective toric varieties and horofunction compactifications of $\mathbb R^n$ with respect to rational polyhedral norms. Before we state the theorem to characterize converging sequences, we show a lemma which already contains the main idea of the characterization. \begin{lem} \label{lem:subsequence} Let $(x_n)_{n \in \mathbb N}$ be a sequence in $(X, \lVert \cdot \rVert)$ with $\lVert x_n \rVert \longrightarrow \infty$ as $n \longrightarrow \infty$. Let $B$ be the polyhedral unit ball associated to $\lVert \cdot \rVert$. Then $(x_n)_n$ has a subsequence $(x_{n_j})_j$ which satisfies the following conditions:\\ There is a proper face $F \subset B$ and a point $p \in V(F)^\bot$ such that: \begin{itemize} \item[(i)] the projection $x_{n_j, F}$ lies in $K_{F}$ for all $j \in \mathbb N$. \item[(ii)] $d \left(x_{n_j,F} ,\partial_{rel} K_F \right ) \longrightarrow \infty$ as $n_j \longrightarrow \infty$. 
\item[(iii)] the orthogonal projection converges: $\lVert x_{n_j}^F - p \rVert \longrightarrow 0$ as $n_j \longrightarrow \infty$. \end{itemize} \end{lem} \begin{proof} To find a face of $B$ satisfying all three conditions, we start with the facets, the faces of maximal dimension. As $B$ is a polyhedral unit ball, it has only finitely many of them and their cones cover the whole vector space $X$. Therefore we find a facet $F$ and a subsequence, also denoted by $(x_n)_{n \in \mathbb N}$, such that $x_n \in K_F$ for all $n$. As $V(F) = X$, the projection is the identity and the first and the third condition are satisfied. If also the second condition is fulfilled, we are done. Otherwise take the intersection of all faces in the relative boundary of $K_F$ to which $x_n$ has bounded distance. As the relative boundary of the cone is the union of parts of subspaces all intersecting in the origin, they have unbounded increasing distance to each other if they do not have a common subspace. Therefore this intersection is not trivial and it is again a cone generated by a face $F_1$ of $B$ of dimension lower than $\dim(F)$. Consider the projection to the subspace $V(F_1)$ and its orthogonal complement. As the distance to $V(F_1)$ is bounded, there is a subsequence $(x_{n_j})_{j \in \mathbb N} \subset (x_n)$ such that the orthogonal part converges, $x_{n_j}^{F_1} \longrightarrow p_1 \in V(F_1)^\bot$. This satisfies the third condition with respect to $F_1$. By taking the intersection, we guarantee that the distance of the projected sequence $x_{n_j, F_1}$ to the relative boundary of $K_{F_1}$ is unbounded, which gives us the second condition. If the projection $x_{n_j, F_1}$ lay outside of $K_{F_1}$, it would have unbounded distance to the relative boundary of $K_{F_1}$, as just seen. But this would contradict $x_{n_j} \in K_F$, as the following argument shows.
The relative boundary of $K_F$ is a union of cones lying in hyperplanes and $x_n^{F_1}$ is bounded, $\lVert x_n^{F_1} \rVert \leq b$ for some $b \in \mathbb R$. Therefore, if we project the set $M \mathrel{\mathop:}= \{y \in K_F | \lVert y^{F_1} \rVert \leq b \}$ to $V(F_1)$, it covers $K_{F_1}$ and remains within finite distance to its relative boundary outside of $K_{F_1}$. See Figure \ref{pic:proof_characterizationthm} for an idea. \begin{figure}[h!] \includegraphics{Horofunction_submit_picture_proof_charact_thm_2.pdf} \caption{View from above onto $V(F_1)$ (left) and from the origin into $K_F$ (right).} \label{pic:proof_characterizationthm} \end{figure} As $(x_{n_j}) \in M$, this is a contradiction to the unbounded distance of $x_{n_j, F_1}$ to the relative boundary of $K_{F_1}$. \end{proof} We are now prepared to state and prove the main theorem of this section. \begin{thm} \label{thm:characterization} Let $B \subset (X, \lVert \cdot \rVert)$ be a convex polyhedral unit ball associated to $\lVert \cdot \rVert$ and $B^{\circ} \subset X^*$ its dual. For a sequence $(z_n)_{n \in \mathbb N} \subset X$ the associated sequence $\psi_{z_n} (\cdot) = \lVert z_n - \cdot \rVert - \lVert z_n \rVert$ converges to a horofunction $h_{E,p}$ with $p \in V(E^\circ)^\bot$ and $E$ a proper face of $B^{\circ}$ if and only if the following conditions are satisfied for the proper face $F = E^\circ\subset B$ and $p \in V(F)^\bot$ as above: \begin{itemize} \item[(0)] The sequence is unbounded: $\lVert z_n \rVert \longrightarrow \infty$ as $n \longrightarrow \infty$. \item[(i)] The projection to $V(F)$ lies in the cone of $F$: $ (z_n)_F \in K_{F}$ for $n$ big enough. \item[(ii)] The distance of the projection to the relative boundary of the cone is unbounded: $d \big(z_{n,F}, \partial_{\text{rel}}K_F \big) \longrightarrow \infty$ as $n \longrightarrow \infty$. 
\item[(iii)] The orthogonal projection is bounded and converges to $p$: \\ $\lVert z_n^F - p \rVert \longrightarrow 0$ as $n \longrightarrow \infty$. \end{itemize} \end{thm} \begin{proof} We first show that $\psi_{z_n}$ converges to $h_{E,p}$ if all conditions are satisfied. Let $(z_n)_{n \in \mathbb N}$ be a sequence satisfying all conditions for some face $F$ of $B$ and $p \in V(F)^\bot$. For any $y \in X$, let $n$ be large enough such that there are two facets $F_i, F_j \subset B$ (i.e. $\dim F_i = \dim F_j = m-1$) having $F$ in their relative boundary and satisfying % \begin{align} \label{bed.inFi} \frac{z_{n,F} + p - y}{\lVert z_{n,F} + p - y \rVert} \in F_i; \hspace{1cm} \frac{z_{n,F} + p}{\lVert z_{n,F} + p \rVert} \in F_j. \end{align} % If $F$ is a facet itself, take $F_i = F_j = F$. As $B$ is polyhedral, each face lies in the relative boundary of a facet of $B$. By condition $(ii)$, the distance to the relative boundary of $K_F$ goes to infinity and therefore, as $p$ and $y$ are constant, the above condition can always be satisfied for $n$ large enough. If $(ii)$ were not satisfied, we could land in facets that do not have $F$ in their relative boundary. Then we have \begin{align*} (\psi_{z_n} &- h_{E,p})(y) = \lVert z_n - y \rVert - \lVert z_n \rVert - h_{E,p}(y)\\ &= \lVert z_n^F - p + z_{n,F} + p - y \rVert - \lVert z_n^F - p + z_{n,F} + p \rVert - h_{E,p}(y) \\ &\leq \lVert z_n^F - p \rVert + \lVert z_{n,F} + p - y \rVert + \lVert z_n^F - p \rVert - \lVert z_{n,F} + p \rVert - h_{E,p}(y) \\ &\overset{1}{=} |z_{n,F} + p - y|_E - |z_{n,F} + p|_E - h_{E,p}(y) + 2 \lVert z_n^F - p \rVert \\ &\overset{2}{=} \lVert z_{n,F} \rVert + |p - y|_E - \lVert z_{n,F} \rVert - |p|_E - h_{E,p}(y) + 2 \lVert z_n^F - p \rVert \\ &= |p-y|_E - |p|_E - h_{E,p}(y) + 2 \lVert z_n^F - p \rVert \\ &\longrightarrow 0 \end{align*} by the usual and the reverse triangle inequality. Step $1$ follows by Lemma \ref{||E||=Ei} and with Equation (\ref{bed.inFi}) above.
The second step is a consequence of Lemma \ref{||E=||B+||E}. The sets $F_i, F_j$ are chosen precisely such that all these lemmata can be applied. \\ Similarly we get % \begin{align*} (\psi_{z_n} &- h_{E,p})(y) = \lVert z_n^F - p + z_{n,F} + p - y \rVert - \lVert z_n^F - p + z_{n,F} + p \rVert - h_{E,p}(y) \\ &\geq -\lVert z_n^F - p \rVert + \lVert z_{n,F} + p - y \rVert - \lVert z_n^F - p \rVert - \lVert z_{n,F} + p \rVert - h_{E,p}(y) \\ &\longrightarrow 0. \end{align*} So we have shown that $\psi_{z_n}(y) \longrightarrow h_{E,p}(y)$ for all $y \in X$ pointwise. As we assume $d_{sym}$ to be proper and because all elements of $\cl\{\psi_z \mid z \in X\}$ are 1-Lipschitz with respect to $d_{sym}$, pointwise convergence of $\psi_{z_n}$ is equivalent to uniform convergence on bounded sets, which again is equivalent to uniform convergence on compact sets in $C(X)$. See for example \cite{wa2} for a reference. Therefore $\psi_{z_n} \longrightarrow h_{E,p}$. For the other direction we have to show that every converging sequence $(z_n)_{n \in \mathbb N} \subset X$ with $\psi_{z_n} \longrightarrow h_{E,p}$ for some proper face $E \subset B^{\circ}$ and $p \in V(E^\circ)^\bot$ satisfies the conditions of the theorem. The proof is based on Lemma \ref{lem:subsequence}, where we have shown that every sequence ``converging'' to infinity has a subsequence fulfilling conditions $(i)$--$(iii)$ for some $F \in \mathcal F$. \\ So let $(z_n)_n$ be a sequence with $\psi_{z_n} \longrightarrow h_{E,p}$ and let $F:= E^\circ$ be the dual face. If the sequence were bounded, $\psi_{z_n}$ would stay in the interior of $\psi(X)$ and not converge to a Busemann point in the boundary. Thus by Lemma \ref{lem:subsequence}, $(z_n)_n$ has a subsequence $(z_{n_j})_j$ satisfying all conditions with respect to $F$. We have to show that this subsequence is the whole sequence. If it were not, we could find another subsequence $(z_{n_k})_k$ satisfying the conditions for some face $F_1 \neq F$.
Then by the first part of the proof we would have $\psi_{z_{n_k}} \longrightarrow h_{E_1,p} \neq h_{E,p}$ as $E_1 \neq E$ (see Lemma \ref{lem:hE1_neq_hE2}), which is a contradiction. The same argument works if we had a subsequence fulfilling the conditions for some $p_1 \neq p$ with $p_1 - p \notin V(F)$. \end{proof} \subsection{Examples} In this section we want to give some examples to illustrate the conditions of the theorem and to give the reader some intuition for how sequences converge. In all examples below we consider $\mathbb R^2$ equipped with the $L^1$-norm. Its dual is the $L^\infty$-norm, as seen in Example \ref{ex:dualL1} before. The unit ball $B$ and its dual $B^{\circ}$ as well as the notation of faces are shown in Figure \ref{fig:B_Bo_examples}. \begin{figure}[h!] \includegraphics{Horofunction_submit_picture_unit_ball_2.pdf} \caption{The unit ball $B$ and its dual $B^{\circ}$ with some faces. The face $F_1$ is collapsed to the point $E_1$ while $F_2$ is blown-up to the face $E_2$.}\label{fig:B_Bo_examples} \end{figure} We consider sequences of the form $z_n = (n, f(n)) \in \mathbb R^2$ for various functions $f: \mathbb R \rightarrow \mathbb R$. These functions are shown in Figure \ref{fig:function_examples}. \begin{ex} \label{ex:1} For some constant $c \in \mathbb R$ and $n \in \mathbb N$ consider the sequence \[ z_n = (n, c) \in \mathbb R^2. \] The sequence runs along a line parallel to the x-axis shifted by $c$. For the face $F_2 \subset B$ the cone $K_{F_2}$ is the non-negative x-axis with the origin as its relative boundary. $V(F_2)^\bot$ then is the y-axis, isomorphic to $\mathbb R$. It is easy to see that all conditions of the theorem are satisfied with $F = F_2$ and $p = c$. As the sequence is parallel to the x-axis, it is not possible to choose $F_1$ as the face here because the distance to the relative boundary is constant. As $F_2^\circ = E_2$ we conclude that $\psi_{z_n} \longrightarrow h_{E_2, c}$.
\end{ex} \begin{ex} \label{ex:2} Next we consider sequences $(z_n)_{n \in \mathbb N}$ of the type \[ z_n = (n, sn) \in \mathbb R^2 \] with $s > 0$. Here we can choose $F_1$ as our face and because $s \neq 0$, the distance to the relative boundary is unbounded now. The dual $E_1$ of $F_1$ is just a point $E_1 = \{e_1\}$, and by equation (\ref{legendre}) on page \pageref{legendre} it is clear that $h_{E_1,p}(x) = \langle e_1 | x \rangle$ is independent of $p$ for all $x \in \mathbb R^2$. This fits with our conditions above as $V(F_1) = \mathbb R^2$ and so $V(F_1)^\bot = \{0\}$. The convergence of $z_n$ is independent of the value of $s$: all sequences of this type converge to the same horofunction $h_{E_1}$. \end{ex} \begin{ex} \label{ex:3} We now take a sequence that lies completely in $K_{F_1}$ but converges to the horofunction associated with the face $F_2$. For $n \in \mathbb N$ let \[ z_n = \left (n, \frac{1}{n} \right) \in \mathbb R^2 \] be our sequence. Then, as $z_n$ approaches the $x$-axis, the boundary condition is not satisfied for $F = F_1$. If we take $F = F_2$ instead, this is not a problem any more. As not the whole sequence but only its projection to the subspace has to be inside the cone, it is easy to check that all requirements are fulfilled for $F_2$ with $p = 0$. This is the same limit as for the sequence in the first example with $c = 0$. \end{ex} \begin{figure}[h!] \includegraphics{Horofunction_submit_picture_examples_2.pdf} \caption{The functions for Examples \ref{ex:1} and \ref{ex:2} (left) and \ref{ex:3} to \ref{ex:5} (right)}\label{fig:function_examples} \end{figure} \begin{ex} \label{ex:4} The next sequence $(z_n)_{n \in \mathbb N}$ we consider is given by \[ z_n = \left(n, \frac{1}{2} \sin(5n) + 1 \right) \in \mathbb R^2. \] It lies completely in $K_{F_1}$, but as the $y$-value is bounded, it violates the boundary condition for this face.
If we choose $F_2$ instead, this condition is satisfied, but we are not able to find an appropriate $p \in V(F_2)^\bot \simeq \mathbb R$ to fulfill the last one. As these two faces are the only reasonable choices, we conclude that in this case $\psi_{z_n}$ does not converge at all. This is confirmed by doing the calculation directly. \end{ex} One could guess that an easier condition for finding the appropriate face $F$ is to look at the limiting direction $\frac{z_n}{\lVert z_n \rVert}$ and to require it to be in $F$. The next example shows that this is an oversimplification. \begin{ex} \label{ex:5} For $n \in \mathbb N$ take the sequence with \[ z_n = \big(n, \log(2n)\big) \in \mathbb R^2. \] Then $\frac{z_n}{\lVert z_n \rVert} \longrightarrow (1,0) = F_2$, so a reasonable choice seems to be $F = F_2$. But then it is not possible to find a $p \in \mathbb R$ satisfying the last condition because $z_{n, F_2} = (n,0) \in \mathbb R \times \{0\}$ but the sequence $z_n^{F_2} = (0, \log(2n)) \in \{0\} \times \mathbb R$ does not converge. If we take $F = F_1$ instead, this guarantees unbounded distance to the relative boundary, and because the projection is just the identity, all other requirements are also fulfilled, so $\psi_{z_n} \longrightarrow h_{E_1}$, independent of $p$ as explained before. \end{ex} \begin{rem} We have seen in the examples that it is not enough to consider the direction of a sequence to determine the right face associated to the horofunction. The easiest examples are sequences following straight lines, and they already exhibit the general behavior of the sequences. All sequences in a regular direction, that is, within the interior of the cone of a facet, collapse and converge to the horofunction associated to the dual vertex, independent of any translation or direction.
For a sequence in a singular direction associated to a lower dimensional face $F$, we have the same collapsing behavior for the $z_{n,F}$-part and a blowing-up in the orthogonal direction $V(F)^\bot$, which is encoded by the point $p \in V(F)^\bot$ in the definition of $h_{E,p}$. \end{rem} \section{Proof of Theorem \ref{thm:homeo}} In this section we prove Theorem \ref{thm:homeo}. To do so, we need the map $m^C$, which maps a finite-dimensional vector space to the interior of a convex polytope $C$ of the same dimension. The structure of the map is motivated by the moment map known from the theory of toric varieties. See for example \cite[\S 4.2]{fu} for a description of it. Although we do not have a Lie group acting on a toric variety here, the map is the same. Up to some signs which come from the definition of the dual unit ball, the same result as Theorem \ref{thm:mc} can be found in \cite[p. 82]{fu}, but with a different proof. The moment map was also used to realize the closure of a flat in the Satake compactifications as bounded polytopes in \cite{ji}. \subsection{The map $m^C$} \label{sec:m^c} Let $C \subset (\mathbb R^{m})^*$ be an $m$-dimensional polytope with vertices $\{c_1, \ldots, c_r\}$. We define the map $m^C$ on the real vector space $\mathbb R^m$ and its dual $(\mathbb R^{m})^*$ in order to be able to use methods from analysis for the proof. Nevertheless, Theorem \ref{thm:mc} also holds for $m^C$ defined on $X$ and $C \subset X^*$. \begin{equation*} \label{eq:mc} \begin{aligned} m^C: \mathbb R^m &\longrightarrow \inte(C),\\ x &\longmapsto \sum_{i = 1}^r \frac{e^{-\langle c_i | x \rangle}}{\sum_{k =1 }^r e^{-\langle c_k | x \rangle}} c_i. \end{aligned} \end{equation*} Later we want to use $C^\circ$ as the unit ball of our vector space.
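As an aside not contained in the original text, the defining formula of $m^C$ is straightforward to evaluate numerically; the square polytope chosen below is an arbitrary illustrative choice. A minimal Python sketch:

```python
import numpy as np

def m_C(vertices, x):
    """Evaluate m^C(x) = sum_i softmax_i(-<c_i | x>) c_i for C = conv(vertices)."""
    w = np.exp(-vertices @ x)   # weights e^{-<c_i | x>}
    w /= w.sum()                # normalize: a proper convex combination
    return w @ vertices

# Illustrative polytope: the square C = conv{(+-1, +-1)} in (R^2)^*
C = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

# At x = 0 all weights are equal, so m^C(0) is the barycenter of C
print(m_C(C, np.zeros(2)))            # [0. 0.]

# For any x the weights stay strictly positive, so the value is a proper
# convex combination of the vertices and lies in the interior of C
y = m_C(C, np.array([2.0, -0.5]))
print(np.all(np.abs(y) < 1.0))        # True
```

At $x = 0$ all weights coincide and the value is the barycenter of $C$; for every $x$ the value stays strictly inside $C$, matching the claim that $m^C$ maps into $\inte(C)$.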
As $C$ does not necessarily contain the origin, $(C^\circ)^\circ$ need not be $C$ any more; to ensure that $C^\circ$ can serve as a unit ball, we consider a translate of $C$ containing the origin in its interior and then take the dual of this set. The map $m^C$ behaves well under shifting, as the following lemma shows. \begin{lem} \label{lem:mcshift} Let $C_s = C + s$ be the convex polyhedral set obtained by shifting $C$ by an element $s \in (\mathbb R^{m})^*$. Then for all $x \in \mathbb R^m$ \begin{align*} m^{C_s}(x) = m^C(x) + s. \end{align*} \end{lem} \begin{proof} Let $x \in \mathbb R^m$ be arbitrary. Then, as $C_s = \conv\{c_j + s \mid j = 1, \ldots, r \}$, we have \begin{align*} m^{C_s}(x) &= \sum_{i = 1}^r \frac{e^{-\langle c_i + s | x \rangle}}{\sum_{k =1 }^r e^{-\langle c_k + s | x \rangle}} (c_i + s) \\ &= \sum_{i = 1}^r \frac{e^{-\langle c_i | x \rangle - \langle s | x \rangle}}{\sum_{k =1 }^r e^{-\langle c_k | x \rangle - \langle s | x \rangle}} (c_i + s) \\ &= \sum_{i = 1}^r \frac{e^{-\langle c_i | x \rangle}}{\sum_{k =1 }^r e^{-\langle c_k | x \rangle}} c_i + \sum_{i = 1}^r \frac{e^{-\langle c_i | x \rangle}}{\sum_{k =1 }^r e^{-\langle c_k | x \rangle}} s\\ &= m^C(x) + s. \qedhere \end{align*} \end{proof} \begin{thm} \label{thm:mc} The map $m^C$ defined above is continuous and bijective. \end{thm} \begin{proof} Let $x \in \mathbb R^m$ be an arbitrary point. As no summand $e^{-\langle c_i | x \rangle}$ in the numerator can be zero, $m^C(x)$ is a proper convex combination of all vertices of $C$ and therefore lies in its interior. It is obvious that the map is continuous. To show injectivity, define the function \begin{align*} f: \mathbb R^m &\longrightarrow \mathbb R, \\ x &\longmapsto -\ln\left(\sum_{i = 1}^r e^{-\langle c_i | x \rangle}\right). \end{align*} Then $m^C= \nabla f$ is the gradient of $f$. To prove injectivity of $m^C$ on $\mathbb R^m$, we show that $f$ is strictly concave and then use a characterization of strict concavity in terms of the derivative.
We define the function \begin{align*} g: \mathbb R^m &\longrightarrow \mathbb R, \\ x &\longmapsto \sum_{i = 1}^r e^{-\langle c_i | x \rangle}, \end{align*} such that $f(x) = -\ln \big( g(x)\big)$. Consider H\"older's inequality \begin{align} \label{eq:hoelder} \sum_{i = 1}^n \lvert a_i b_i \rvert \leq \left(\sum_{i=1}^n \lvert a_i \rvert^p \right)^{\frac{1}{p}} \left(\sum_{i=1}^n \lvert b_i \rvert^q \right)^{\frac{1}{q}} \end{align} for all $a_1, \ldots, a_n, b_1, \ldots, b_n \in \mathbb R$ and $\frac{1}{p} + \frac{1}{q} = 1$ with $p,q > 1$. Take $p = \frac{1}{\lambda}$ and $q = \frac{1}{1-\lambda}$ for some $\lambda \in (0,1)$; then we have for all $x \neq y \in \mathbb R^m$: \begin{align*} g\left(\lambda x + (1-\lambda) y\right) &= \sum_{i = 1}^r e^{-\langle c_i | \lambda x + (1-\lambda)y \rangle}\\ &= \sum_{i=1}^r e^{-\langle c_i | \lambda x \rangle} e^{-\langle c_i | (1-\lambda) y \rangle} \\ &\leq \left(\sum_{i = 1}^r \left[ e^{-\langle c_i | \lambda x \rangle}\right]^{p} \right)^{\frac{1}{p}} \left( \sum_{j = 1}^r \left[e^{-\langle c_j|(1-\lambda) y \rangle} \right]^{q} \right)^{\frac{1}{q}} \\ &= \left(\sum_{i = 1}^r \left[ e^{-\langle c_i | x \rangle}\right] \right)^{\lambda} \left( \sum_{j = 1}^r \left[e^{-\langle c_j| y \rangle} \right] \right)^{1-\lambda} \\ &= g(x)^\lambda g(y)^{1-\lambda}. \end{align*} Therefore \begin{align*} f\left (\lambda x + (1-\lambda) y \right) &= -\ln\big ( g(\lambda x + (1-\lambda) y) \big) \\ &\geq -\ln \big( g(x)^\lambda g(y)^{1-\lambda}\big) \\ &= \lambda f(x) + (1-\lambda) f(y) \end{align*} by the monotonicity of the logarithm. So $f$ is concave. It is actually strictly concave, as the following argument shows. As our summands are positive, there is equality in H\"older's inequality (\ref{eq:hoelder}) if and only if $a_i^p = \alpha b_i^q$ for all $i \in \{1, \ldots, n\}$ and some $\alpha > 0$.
In other words, $f\left (\lambda x + (1-\lambda) y \right) = \lambda f(x) + (1-\lambda) f(y)$ if and only if $e^{-\langle c_i | x \rangle} = \alpha e^{-\langle c_i | y \rangle}$ for all $i \in \{1, \ldots, r\}$, which is equivalent to $-\langle c_i | x-y \rangle = \ln(\alpha)$ for all $i$. As all $c_i$ together span an $m$-dimensional convex subset of $\mathbb R^m$, this can only be satisfied if $x = y$, which is a contradiction to our assumption. So we never have equality in H\"older's inequality, which means that $f$ is strictly concave. For a function $s: D \longrightarrow \mathbb R$ with $D \subset \mathbb R^m$ convex, strict concavity is equivalent to the generalized monotonicity condition \[ s(y) < s(x) + \langle \nabla s(x) | (y-x)\rangle \] for all $x, y \in D$ with $x \neq y$. So let $x,y \in \mathbb R^m$ with $x \neq y$, then \begin{align*} f(x) &< f(y) + \langle \nabla f(y) | (x - y) \rangle \\ &< f(x) + \langle \nabla f(x) | (y - x) \rangle + \langle \nabla f(y) | (x - y) \rangle \end{align*} and consequently, as $\nabla f(x) = m^C(x)$, we have $\langle m^C(x) - m^C(y) | y - x \rangle > 0$. Therefore $m^C(x) \neq m^C(y)$ for all $x \neq y \in \mathbb R^m$ and injectivity is shown. We show that $m^C$ is onto by showing that the derivative of $m^C$ is a negative definite matrix and therefore invertible. Using the Inverse Function Theorem we then prove surjectivity. Recall that \[ m^C(x) = \frac{\sum_{i = 1}^r e^{-\langle c_i | x \rangle} c_i}{\sum_{k = 1}^r e^{-\langle c_k | x \rangle}} \in \mathbb R^m \] for all $x \in \mathbb R^m$. We use the notation that an upper index denotes the corresponding component of a vector.
Then \begin{align*} \frac{\partial (m^C)^\alpha}{\partial x^\beta} (x) &= \frac{1}{(\sum_{j = 1}^r e^{-\langle c_j | x \rangle})^2} \left [ \sum_{i,k= 1}^r c_i^\alpha c_k^\beta e^{-\langle c_i + c_k|x \rangle} - \sum_{i,k = 1}^r c_i^\alpha c_i^\beta e^{-\langle c_i + c_k|x \rangle} \right ]\\ &= \frac{1}{(\sum_{j = 1}^r e^{-\langle c_j | x \rangle})^2} \sum_{i < k } e^{-\langle c_i + c_k | x \rangle} ( c_i^\alpha c_k^\beta + c_k^\alpha c_i^\beta - c_i^\alpha c_i^\beta - c_k^\alpha c_k^\beta) \\ &= \frac{-1}{(\sum_{j = 1}^r e^{-\langle c_j | x \rangle})^2} \sum_{i < k } e^{-\langle c_i + c_k | x \rangle} ( c_i^\alpha - c_k^\alpha)(c_i^\beta - c_k^\beta). \end{align*} Let $a = (a^1 \ldots a^m)^t \in \mathbb R^m$ be an arbitrary nonzero vector. Then the quadratic form defined by the derivative of $m^C$ is negative definite. Indeed, \begin{align*} (a^1,\ldots, a^m)& \begin{pmatrix} \frac{\partial (m^C)^1}{\partial x^1} (x) & \cdots & \frac{\partial (m^C)^1}{\partial x^m} (x) \\ \vdots & \ddots & \vdots \\ \frac{\partial (m^C)^m}{\partial x^1} (x) & \cdots & \frac{\partial (m^C)^m}{\partial x^m} (x) \end{pmatrix} \begin{pmatrix} a^1 \\ \vdots \\ a^m \end{pmatrix} = \sum_{\alpha, \beta = 1}^m a^\alpha a^\beta \frac{\partial (m^C)^\alpha}{\partial x^\beta} (x)\\ &= \frac{-1}{(\sum_j e^{-\langle c_j | x \rangle})^2} \sum_{\alpha, \beta} a^\alpha a^\beta \left [ \sum_{i <k} e^{-\langle c_i + c_k | x \rangle} (c_i^\alpha - c_k^\alpha)(c_i^\beta - c_k^\beta) \right] \\ &= \frac{-1}{(\sum_j e^{-\langle c_j | x \rangle})^2} \sum_{i <k} e^{-\langle c_i + c_k | x \rangle} \left[ \sum_{\alpha, \beta} a^\alpha (c_i^\alpha - c_k^\alpha) a^\beta (c_i^\beta - c_k^\beta) \right] \\ &= \frac{-1}{(\sum_j e^{-\langle c_j | x \rangle})^2} \sum_{i <k} e^{-\langle c_i + c_k | x \rangle} \left (\sum_{\alpha} a^\alpha (c_i^\alpha - c_k^\alpha) \right)^2 \\ &= \frac{-1}{(\sum_j e^{-\langle c_j | x \rangle})^2} \sum_{i <k} e^{-\langle c_i + c_k | x \rangle} \left ( \langle a | c_i - c_k \rangle \right)^2 < 0, \end{align*} where strict negativity holds because the differences $c_i - c_k$ span $\mathbb R^m$, so not all $\langle a | c_i - c_k \rangle$ can vanish for $a \neq 0$. By the Inverse Function Theorem we know that $m^C$ is a local diffeomorphism and that its image is open in $\inte(C)$. It remains to show that the image is also closed in $\inte(C)$. Assume that the image is open but not closed and take a point on the boundary of it which lies in the interior of $C$, say $y \in \partial m^C(\mathbb R^m) \cap \inte(C)$. Then we can find a sequence $(y_n)_{n \in \mathbb N} \subset m^C(\mathbb R^m)$ converging to $y$. Let $(x_n)_{n \in \mathbb N} \subset \mathbb R^m$ be the corresponding sequence of preimages. Then there are two cases to distinguish. \\ If $\lVert x_n \rVert \longrightarrow \infty$, we can find a subsequence, also denoted by $x_n$, which fulfills all conditions of Lemma \ref{lem:subsequence} with respect to $C^\circ$. As we are only interested in limits, we can assume by Lemma \ref{lem:mcshift} that $C$ contains the origin. Let $F \subset C^\circ$ be the corresponding face and $E \subset C$ its dual. Then by Lemma \ref{lem:pairinginfty} and the third condition of Lemma \ref{lem:subsequence} we see that for an arbitrary vertex $c_E$ of $E$ we have \begin{align} \label{eq:SEremains_1} \langle c_E - c_j | x_{n,F} \rangle &\longrightarrow \left \{ \begin{array}{lcl} 0 &\text{ if } &j \in S_E\\ -\infty &\text{ if } &j \notin S_E \end{array} \right. \\ \langle c_E - c_j | x_n^F \rangle &\text{ is bounded.} \nonumber \end{align} Then in the limit, only those summands of \begin{align} \label{eq:SEremains_2} m^C(x_n) = \sum_{i = 1}^r \frac{e^{-\langle c_i | x_n \rangle}}{\sum_{k =1 }^r e^{-\langle c_k | x_n \rangle}} c_i = \sum_{i = 1}^r \frac{e^{\langle c_E - c_i | x_n \rangle}}{\sum_{k =1 }^r e^{\langle c_E - c_k | x_n \rangle}} c_i \end{align} with $i \in S_E$ remain. Therefore, the limit $\lim_{n \rightarrow \infty} m^C(x_n)$ of the subsequence is a convex combination of the vertices spanning the face $E$ and lies in $E$. As $E$ is contained in the boundary of $C$, this contradicts $y \in \inte(C)$.
It remains to treat the case where $(x_n)$ is contained in a compact set. Then again we can find a subsequence $(x_{n_k})$ converging to some point $x \in \mathbb R^m$. By continuity of $m^C$ and uniqueness of limits we conclude that $y = m^C(x)$ lies in the image of $m^C$. As $y$ was an arbitrary boundary point in $\inte(C)$, the image of $m^C$ is also closed in $\inte(C)$; being nonempty, open and closed in the connected set $\inte(C)$, it is all of $\inte(C)$. \end{proof} \subsection{The actual proof of Theorem \ref{thm:homeo}} \begin{thmn} Let $(X, \lVert \cdot \rVert)$ be a finite-dimensional normed space with polyhedral norm. Let $B \subset X$ be the unit ball associated to $\lVert \cdot \rVert$ and $B^{\circ} = \conv\{ u_1, \ldots, u_r\} \subset X^*$ its dual. Then the horofunction compactification $\overline{X}^{hor}$ is homeomorphic to $B^{\circ}$ via the map \begin{align*} m: \overline{X}^{hor} &\longrightarrow B^{\circ}, \\ x \in X &\longmapsto m^{B^{\circ}}(x),\\ h_{E,p} \in \partial_{hor}X &\longmapsto m^E(p). \end{align*} \end{thmn} \begin{proof} The proof is structured as follows. After showing that the map is well-defined, we prove continuity and finally bijectivity. As both spaces involved are compact and Hausdorff, this is enough to conclude that the map is a homeomorphism. Let $E$ be a face of $B^{\circ}$, $F = E^\circ \subset B$ its dual face and denote by $E^F = \conv\{u_j^F \mid j \in S_{E^F} = S_E\}$ the orthogonal projection of $E$ to $(V(F)^\bot)^*$. By the construction of the dual unit ball, $E^F$ is a maximal dimensional convex polytope in the vector space $(V(F)^\bot)^*$. Then there is a $t \in V(F)^* \subset X^*$ such that $E = E^F + t$ is a maximal dimensional convex polytope in the affine space $(V(F)^\bot)^* + t$. As for a horofunction $h_{E,p}$ we have by definition that $p \in V(F)^\bot$, we can apply Theorem \ref{thm:mc} to obtain a continuous and bijective map $m^{E^F}$ from $V(F)^\bot$ to $\inte(E^F)$. By Lemma \ref{lem:mcshift} we conclude that the map $m^E$ also has these properties.
Indeed, let $y \in E$, $y^F \in E^F$ and $x \in V(F)^\bot$ be the preimage of $y^F$. Then \begin{align*} m^E(x) &= m^{E^F}(x) + t \\ &= y^F + t = y, \end{align*} which concludes this part of the proof. As $B^{\circ}$ is the finite union of the relative interiors of the convex sets $E_j$, it remains to show that $m$ is continuous on the boundary of the faces. For continuity from the interior of $B^{\circ}$ to the boundary, we first take a sequence $(z_n)_{n \in \mathbb N} \subset X$ that converges to a horofunction $h_{E,p}$. Then by the third condition of the characterization of sequences in Theorem \ref{thm:characterization}, we know that $z_n^F \rightarrow p$. Let $u_E$ be an arbitrary vertex of $E$. Then by the same calculation as in equations (\ref{eq:SEremains_1}) and (\ref{eq:SEremains_2}) we conclude that \begin{align*} \label{eq:mu(z_n)} m(z_n) \longrightarrow \frac{\sum_{j \in S_E} e^{-\langle u_j | p \rangle} u_j}{\sum_{k \in S_E} e^{- \langle u_k | p \rangle}} = m(h_{E,p}) \end{align*} as $n \longrightarrow \infty$. For the continuity within the boundary, the argument is similar. The basic idea is to use the already shown continuity on a lower dimensional subspace, where the unit ball is given by the dual of a projected and translated face of $B^{\circ}$. Let $h_{E_n, p_n} \longrightarrow h_{E',p'}$ be a sequence of converging horofunctions. As there are only finitely many faces $E_j$ of $B^{\circ}$, we can take a subsequence $h_{E, p_n}$ with a fixed face $E$ of $B^{\circ}$. Let $F$ be the corresponding dual face of $B$. Let again $E^F = E - t$ denote the projection of $E$ to $(V(F)^\bot)^*$, $t \in V(F)^*$. If $E^F$ does not contain the origin in its interior, let $E^F_0 = E^F - s$ for some $s \in (V(F)^\bot)^*$ be the shifted set containing the origin in its interior. Then altogether we have \[ E = E^F + t = E^F_0 + t + s.
\] We take \[ B_{E}^\circ \mathrel{\mathop:}= E^F_0 \] as dual unit ball and consequently its dual $B_{E}$ as unit ball in the linear subspace $V(F)^\bot$. Similarly we have \[ E' = E'^F_0 + t' + s \] with the translated and projected (for some $t' \in V(F)^*$) convex set $E'^F_0 \subset (V(F)^\bot)^*$. Note that the translation is by the same element $s$ as for $E$, while the projection parameter might be different. Then by Lemma \ref{lem:pseudonormshift} we conclude for any $y \in X$: \begin{align*} h_{E, p_n}(y) &= h_{E^F_0, p_n} (y^F) + \langle s | y^F \rangle + \langle t | y_F \rangle, \\ h_{E', p'}(y) &= h_{E'^F_0, p'} (y^F) + \langle s | y^F \rangle + \langle t' | y_F \rangle. \end{align*} For the restriction to $(V(F)^\bot)^*$ we obtain \begin{align} \label{eq:restrictedconvergence} h_{E^F_0, p_n}|_{V(F)^\bot} \longrightarrow h_{E'^F_0, p'}|_{V(F)^\bot}. \end{align} Therefore $\psi_{p_n}(y) \longrightarrow \lvert p' - y \rvert_{E'^F_0} - \lvert p' \rvert_{E'^F_0}$ with respect to the norm $B_{E}$. This has two consequences. First, we conclude from this that $E'^F_0 \subset E^F_0$ is a face, and from equation (\ref{eq:restrictedconvergence}) it follows that $t' = t$ and so also $E' \subset E$ is a face. Second, we conclude by the already shown continuity in the interior of a vector space, here $V(F)^\bot$ with norm $B_{E}$ and $(p_n)$ as sequence, that \[ m^{E^F_0}(p_n) \longrightarrow m^{E'^F_0}(p'). \] By Lemma \ref{lem:mcshift} this is equivalent to the convergence \[ m^{E}(p_n) \longrightarrow m^{E'}(p') \] which we wanted to show. \end{proof}
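To complement the proofs above (this numerical check is ours and not part of the original argument), both the strict monotonicity of $m^C$ underlying injectivity and the collapse of $m^C$ onto a face of $C$ along unbounded sequences can be observed directly; the square polytope below is again an arbitrary illustrative choice:

```python
import numpy as np

def m_C(vertices, x):
    """m^C(x) = sum_i softmax_i(-<c_i | x>) c_i for C = conv(vertices)."""
    w = np.exp(-vertices @ x)
    w /= w.sum()
    return w @ vertices

C = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

# Strict monotonicity <m^C(x) - m^C(y) | y - x> > 0, which gave injectivity
rng = np.random.default_rng(0)
mono = []
for _ in range(100):
    x, y = rng.normal(size=2), rng.normal(size=2)
    mono.append(np.dot(m_C(C, x) - m_C(C, y), y - x) > 0.0)
print(all(mono))                      # True

# Along x_n = n*(1, 0) the weights concentrate on the vertices minimizing
# <c_i | (1, 0)>, i.e. (-1, +-1): m^C(x_n) collapses onto the midpoint of
# the face {-1} x [-1, 1] of C
print(np.allclose(m_C(C, np.array([50.0, 0.0])), [-1.0, 0.0]))  # True
```

The second check mirrors the limiting behavior in equations (\ref{eq:SEremains_1}) and (\ref{eq:SEremains_2}): only the weights of the vertices spanning the limiting face survive.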
\section{Introduction} Quantum-dot systems have been well studied both experimentally and theoretically for over thirty years. Their optical properties, namely the quantum size effect, make them useful for commercial applications within liquid crystal displays \cite{Nanotech_2014}. The tunability of their electrical properties allows the control of single electrons \cite{Splettstoesser_2017} and gives rise to a number of effects such as Coulomb blockade, the Kondo effect, tunnel magnetoresistance or Andreev bound states \cite{Pekola_2013}, on which versatile electronic and spintronic quantum-dot devices such as a single-electron transistor \cite{Wang_2010, Devoret_2000}, a quantum-dot spin valve \cite{Sahoo_2005, Crisan_2016} or a Cooper-pair splitter \cite{Schindele_2014, Tan_2015, Borzenets_2016} are based. In order to study resonant transport in interacting quantum-dot setups, a numerically exact method called ``Iterative summation of path integrals'' (ISPI) was developed \cite{Weiss_2008, Weiss_2013, Mundinar_2019}. ISPI is based on the truncation of correlations that decay exponentially in time. The method is well suited to study quantum-dot systems at finite temperature, including both equilibrium and nonequilibrium, in the regime in which various energy scales, e.g., associated with Coulomb interaction, temperature, or transport voltage, are of the same order of magnitude and, therefore, lack a clear separation. The ISPI scheme was first introduced to discuss nonequilibrium transport through the Anderson model \cite{Weiss_2008, Weiss_2013}. Applying the method to the Anderson-Holstein model, where the quantum dot is coupled to a phonon mode, demonstrated the impact on the Franck-Condon blockade when entering the quantum-coherent regime \cite{Huetzen_2012}.
Recently, the ISPI method was applied to quantum-dot spin valves, demonstrating the importance of resonant effects in the tunnel magnetoresistance as well as unveiling interaction-induced current asymmetries caused by an interaction-induced exchange field \cite{Mundinar_2019, Mundinar_2020}. The purpose of this work is to develop the ISPI scheme further. We show that the necessary truncation of correlations motivates a mapping of ISPI to a transfer-matrix approach, which by construction is formulated in the stationary limit, such that extrapolation of finite-time results is no longer needed. We develop the theoretical cornerstones of this new method, referred to as ``Transfer-matrix summation of path integrals'' (TraSPI). In order to keep the discussion transparent, we exemplify the method for a system with relatively few degrees of freedom, namely the Anderson model describing a single-level quantum dot coupled to two normal metal leads. We, however, emphasize that all new concepts discussed in this work are not limited to this simple model, but can easily be transferred to other, more intricate setups. Single-electron transistors that utilize a quantum dot as an island have gathered a lot of attention throughout the years and are still under heavy investigation, both experimentally and theoretically. To mention just a few recent examples, a single-electron transistor consisting of a quantum dot and normal metal leads was realized experimentally to demonstrate that shot noise in a single-electron transistor can be reduced significantly via feedback control, which should allow the construction of efficient, nanoscale thermoelectric devices \cite{Wagner_2017}. Furthermore, if the quantum dot is periodically driven via a gate voltage, it is possible to accurately control the dot's emission time statistics \cite{Brange_2021}.
For a system in which the quantum dot is coupled to a single lead only, the quantum dot can be driven out of equilibrium via a plunger gate voltage, which allows the measurement of the free energy of a confined electron in order to study thermodynamics on the microscopic level \cite{Hofmann_2016}. For a superconducting single-electron transistor, an attractive interaction was found that survives even far beyond the superconducting regime \cite{Guenevere_2017, Cheng_2015}. Employing the method of full-counting statistics for a negative-$U$ Anderson model, it was shown that this phenomenon is robust, even for fast spin relaxation \cite{Kleinherbers_2018}. On a theoretical basis, different approaches are used and actively developed to study different parameter regimes of quantum-dot systems. The method of Dynamical Mean Field Theory has been advanced and combined with other methods, such as functional renormalization group theory, to increase the predictive power of the method, even for strong and nonlocal electron correlations \cite{Rohringer_2018}. Perturbation theory in the tunnel-coupling strength $\Gamma$ within a master-equation approach has proven highly useful in the description of quantum-dot systems. This method has been developed further in different directions, e.g., by introducing SU($N$)-invariant kinetic equations to effectively study multilevel quantum dots \cite{Maurer_2020} or by improving on the commonly used rotating-wave approximation, leading to a so-called coherent approximation \cite{Kleinherbers_2020}. While perturbative methods often allow for at least a qualitative description, nonperturbative effects are, by construction, beyond their scope. To cover them, numerically exact methods are in high demand. Several approaches are known to tackle this problem. Quantum Monte Carlo simulations were advanced to reach the stationary regime for systems in nonequilibrium \cite{Profumo_2015, Bertrand_2019}.
Different flavors of renormalization group theory (RG) have been applied to quantum-dot systems almost since the inception of their theoretical discussion \cite{Anderson_1970}. Since then, significant advances have been made to the formalism: A combination of numerical RG and time-dependent density-matrix RG makes it possible to discuss the nonequilibrium steady-state transport properties of quantum-dot systems \cite{Schwarz_2018}, while within functional RG it was possible to approximate the flow of the Luttinger-Ward functional while maintaining conservation laws \cite{Rentrop_2016}. Finally, it was shown that density functional theory is able to describe out-of-equilibrium transport, even in strongly correlated systems such as the Anderson model \cite{Kurth_2016}, while a quasiparticle Fermi-liquid theory can be used to work within the low-energy limit of such systems \cite{Mora_2015}. The article is structured as follows. In Sec.~\ref{sec:Model} we introduce the Anderson model's Hamiltonian and derive the path-integral formulation of the generating function. We demonstrate how interactions are decoupled via a discrete Hubbard-Stratonovich (HS) transformation and then solve the path integral. In Sec.~\ref{sec:Method} we discuss the main ideas behind the ISPI and TraSPI schemes, namely the truncation of exponentially decaying correlations after a memory time $t_K$, as well as the truncation of the correlations induced by the HS transformation. We give a short overview of the ISPI formulation that was introduced in earlier works in Sec.~\ref{sec:FTImplementation}. We then demonstrate how it can be mapped to a transfer-matrix approach, and discuss the main benefits of this new formulation in Sec.~\ref{sec:TMImplementation}. We finish this section with a description of the convergence procedure that ensures that the results obtained via the TraSPI scheme are numerically exact. In Sec.~\ref{sec:Results} we discuss the results for the Anderson model, obtained via TraSPI.
First, we discuss current-based observables, like the current and the conductance, and after that the dot's occupation number. Finally, we conclude in Sec.~\ref{sec:Conclusion}. \section{Model} \label{sec:Model} We write the well-known Hamiltonian for an interacting, single-level quantum dot that is tunnel-coupled to two metallic leads in the form \cite{Anderson_1961, Weiss_2008} (we set $\hbar=1$ throughout this work) \begin{align}\label{eq:Hamiltonian} \mathcal{H} = & \sum_\sigma \epsilon_{0,\sigma} \hat n_\sigma - \frac{U}{2} \left( \hat n_\uparrow - \hat n_\downarrow \right)^2 \nonumber\\ & + \sum_{\alpha \vb{k} \sigma} \epsilon_{\alpha \vb{k}} \hat c^\dagger_{\alpha\vb{k}\sigma} \hat c_{\alpha\vb{k}\sigma} + \sum_{\alpha \vb{k} \sigma} \big(t_\alpha \hat c^\dagger_{\alpha\vb{k} \sigma} \hat d_{\sigma} + \text{h.c.}\big). \end{align} The on-site occupation-number operator is given by $\hat n_\sigma = \hat d^\dagger_\sigma \hat d_\sigma$, where $\hat d^\dagger_\sigma$ and $\hat d_\sigma$ create or annihilate an electron on the quantum dot with spin $\sigma = \,\uparrow,\downarrow$, respectively. The Coulomb interaction strength is given by $U$. In Eq.~\eqref{eq:Hamiltonian} we made use of the operator identity $\hat n_\uparrow \hat n_\downarrow = \frac{1}{2} (\hat n_\uparrow + \hat n_\downarrow) - \frac{1}{2} (\hat n_\uparrow - \hat n_\downarrow)^2$ and incorporated the terms linear in $\hat n_\sigma$ by shifting the energy of the quantum dot's level, such that $\epsilon_{0,\sigma} = \epsilon_0 + \sigma B/2$ with $\epsilon_0 = E_0 + U/2$, where the bare energy level $E_0$ in the absence of magnetic field and Coulomb interaction can be tuned via a gate voltage. Writing the interaction in terms of $\hat n_\uparrow - \hat n_\downarrow$ will later turn out to be advantageous for the discrete Hubbard-Stratonovich transformation.
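Since $\hat n_\sigma^2 = \hat n_\sigma$, the operator identity used above can be checked on the level of the occupation eigenvalues $n_\sigma \in \{0,1\}$. A minimal illustrative sketch (not part of the formalism itself):

```python
# Verify n_up * n_dn = (n_up + n_dn)/2 - (n_up - n_dn)^2 / 2
# on all four joint occupation eigenstates, where n_sigma is 0 or 1.
for n_up in (0, 1):
    for n_dn in (0, 1):
        lhs = n_up * n_dn
        rhs = 0.5 * (n_up + n_dn) - 0.5 * (n_up - n_dn) ** 2
        assert lhs == rhs  # holds on all four basis states
```

Because both sides are diagonal in the occupation basis, agreement on these four states establishes the operator identity.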
An electron with energy $\epsilon_{\alpha \vb{k}} = \epsilon_{\vb{k}} - \mu_\alpha$ in lead $\alpha \in \{\mathrm L,\mathrm R\}$ and with momentum $\vb{k}$ is created or annihilated by the operators $\hat c^\dagger_{\alpha\vb{k}\sigma}$ and $\hat c_{\alpha\vb{k}\sigma}$, respectively. Finally, $t_\alpha$ denotes the tunnel coupling between lead $\alpha$ and the quantum dot. The tunnel-coupling strength between quantum dot and lead $\alpha$ is given by $\Gamma_\alpha = 2\pi \abs{t_\alpha}^2 \rho(\epsilon^{\mathrm F}_{\alpha\vb{k}})$, where $\rho(\epsilon^{\mathrm F}_{\alpha\vb{k}})$ denotes the density of states of lead $\alpha$ at the Fermi level. We work in the wide-band limit, which usually is a good approximation in the stationary regime \cite{Covito_2018}. We also assume a symmetric setup, where $\Gamma = \Gamma_\mathrm L = \Gamma_\mathrm R$ and where the chemical potentials of the leads are set by the bias voltage, $\mu_\alpha = \pm eV/2$ for the left and right lead, respectively. \subsection{Path-integral formulation} \begin{figure}[b] \centering \includegraphics[width=\columnwidth]{KeldyshC.pdf} \caption{The Keldysh contour $C$ with the measurement time ${\color{red}t_\mathrm m}$ at its center. The contour gets discretized into a total of $2N$ time steps of size $\delta_t$. } \label{fig:contour} \end{figure} We are interested in discussing time-local observables, namely the current, the quantum dot's occupation number, and its spin-projector expectation value in the stationary limit. The latter is achieved for $\tb \to \infty$, where $\tb={\color{red}t_\mathrm m}-t_1$ (\textit{before}) is the time interval between the initialization and the measurement time ${\color{red}t_\mathrm m}$, while $t_\mathrm a=t_N-{\color{red}t_\mathrm m}$ (\textit{after}) denotes the time interval between ${\color{red}t_\mathrm m}$ and the Keldysh return time $t_N$.
Nonequilibrium properties are taken into account within a functional integral formulation on the Keldysh contour $C$ (see Fig.~\ref{fig:contour}) \cite{Kamenev, Negele_Orland, Mundinar_2019}. To take the forward and backward branch of the Keldysh contour into account, it is useful to define $\tau = (t,\nu)$, with physical time $t$ and Keldysh branch index $\nu=\pm$, with $+$ and $-$ representing the upper and lower Keldysh contour, respectively. For any time-local observable $\hat{O}$ at measurement time ${\color{red}t_\mathrm m}$, we introduce a source term $\eta O({\color{red}t_\mathrm m})$, with $\eta \in \mathds{R}$, in order to break time-translation symmetry on the level of the system's action $\mathcal{S}$ in a specific way. This allows us to calculate the expectation value of said observable via a derivative of the system's Keldysh generating functional, \begin{align} \label{eq:GenExpVal} \expval*{\hat{O}} = \pdv{\eta} \ln Z[\eta] \bigg|_{\eta=0}. \end{align} The Keldysh generating functional for the Hamiltonian given in Eq.~\eqref{eq:Hamiltonian} takes the form \begin{align}\label{eq:GenFuncDef} Z[\eta] = \int \mathcal{D}[d, c] \, \mathrm{e}^{\mathrm{i} \mathcal S + \eta O({\color{red}t_\mathrm m})}, \end{align} and fulfills $Z[0]=1$ by construction. The functional integral is built in the basis of fermionic coherent states $\ket{\Psi(\tau)}$, being defined via the eigenvalue equations of the fermionic annihilation operators \begin{subequations}\begin{align} \hat d_\sigma \ket{\Psi(\tau)} &= d_{\sigma} (\tau) \ket{\Psi(\tau)}, \\ \hat c_{\alpha \vb{k} \sigma} \ket{\Psi(\tau)} &= c_{\alpha \vb{k} \sigma} (\tau) \ket{\Psi(\tau)}. 
\end{align}\end{subequations} As a result, the action $\mathcal S$ and the source term $\eta O({\color{red}t_\mathrm m})$ in Eq.~\eqref{eq:GenFuncDef} are functions of the Grassmann fields $\bar{d}_{\sigma} (\tau),d_{\sigma} (\tau), \bar{c}_{\alpha \vb{k} \sigma} (\tau), c_{\alpha \vb{k} \sigma} (\tau)$, and the functional integral runs over all of these degrees of freedom \cite{Kamenev, Mundinar_2019, Weiss_2013}. Dropping the explicit $\tau$-dependence, the action takes the form \begin{align} \mathcal{S} = & \int_C \dd{t} \left[ \sum_\sigma \bar{d}_\sigma (\mathrm{i} \partial_t - \epsilon_{0,\sigma}) d_\sigma - \frac{U}{2} \left( n_\uparrow - n_\downarrow\right)^2 \right. \\ & \left. + \sum_{\alpha \vb{k} \sigma} \bar{c}_{\alpha \vb{k} \sigma} ( \mathrm{i} \partial_t - \epsilon_{\alpha \vb{k}}) c_{\alpha \vb{k} \sigma} + \sum_{\alpha \vb{k} \sigma} \big( t_\alpha \bar{c}_{\alpha \vb{k} \sigma} d_\sigma + \text{h.c.}\big)\right].\nonumber \end{align} Performing the integral in Eq.~\eqref{eq:GenFuncDef} poses no difficulty for the terms contributed by the leads and the tunneling Hamiltonian, since they are quadratic in the Grassmann fields. However, the quartic on-site interaction term has to be tackled via a discrete Hubbard-Stratonovich transformation \cite{Hubbard_1959, Stratonovich_1958, Hirsch_1983}. For this, we first discretize the Keldysh contour into $2N$ time slices of length $\delta_t$ (see Fig.~\ref{fig:contour}), and then perform an HS transformation, \begin{align}\label{eq:HStrafo} \mathrm{e}^{-\frac{1}{2} \mathrm{i} \nu \delta_t U \left( n_\uparrow-n_\downarrow \right)^2} = \frac{1}{2} \sum_{s=\pm 1} \mathrm{e}^{-s\zeta_\nu (n_\uparrow-n_\downarrow)}, \end{align} on each of these $2N$ slices, keeping in mind that $n_\sigma$ can only assume the values $0$ and $1$. This decouples the interaction term, at the cost of introducing one Ising-like degree of freedom, $s=\pm 1$, per time slice.
The HS parameter $\zeta_\nu$ is determined uniquely for $0\leq \delta_t U < \pi$ via \cite{Mundinar_2019, Weiss_2013} \begin{align} \cosh \zeta_\nu = \mathrm{e}^{-\frac 1 2 \mathrm{i} \nu \delta_t U}. \end{align} As a result, we have introduced $2N$ new HS spins but are now able to integrate over the dot degrees of freedom as well, solving the functional integral in Eq.~\eqref{eq:GenFuncDef}. We find \begin{align} \label{eq:modGenFunc} \check{Z}[\eta] = \sum_{\vec{s}} \det D[\eta,\vec{s}\,], \end{align} with the discretized generating functional $\check{Z}[\eta]\propto Z[\eta]$, and with the matrix \cite{Weiss_2013,Mundinar_2019}% \begin{subequations}\label{eq:modDressedGF} \begin{align} \label{eq:Ddef} D[\eta,\vec{s}\,] & = S \left[ \Delta^{-1} - S \Sigma^C + \eta \Sigma^O\right] \Delta \\ & = S - \Sigma^C \Delta +\eta S \Sigma^O \Delta \\ & = S - \tilde\Sigma^{C} + \eta S \tilde\Sigma^{O}. \end{align} \end{subequations} For this we used the HS spin vector \begin{align}\label{eq:vs} \vec{s} = (s^+_1, s^-_1, s^+_2, s^-_2,\ldots, s^+_N, s^-_N), \end{align} such that the sum includes all $2^{2N}$ possible spin configurations along the discretized Keldysh contour. In addition, we introduced several new matrices: We identify the inverse time-discrete Green's function $\Delta^{-1} = \Delta_0^{-1} - \Sigma^{\text{T}}$ of the non-interacting setup, where $\Delta_0$ is the free dot's Green function, and $\Sigma^\text{T}=\sum_\alpha \Sigma^{\text{T},\alpha}$ is the tunneling self energy. Furthermore, we introduced the charging self energy $S \Sigma^C$ with the diagonal spin matrix $S = \diag(\vec{s}\,) \otimes \sigma_z$, which is the only part depending on the HS spins $\vec{s}$, as well as the source self energy $\Sigma^O$, which is included to account for the source term. We also made use of the short-hands $\tilde\Sigma^{C} = \Sigma^C \Delta$ and $\tilde\Sigma^{O} = \Sigma^O \Delta$ for later convenience.
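The discrete HS identity \eqref{eq:HStrafo}, together with this choice of $\zeta_\nu$, can be verified directly for the three possible eigenvalues of $n_\uparrow - n_\downarrow$. A quick numerical sketch (the value of $\delta_t U$ is an arbitrary choice within $[0,\pi)$):

```python
import cmath

dt_U = 0.3  # delta_t * U; any value in [0, pi) works
for nu in (+1, -1):
    # HS parameter from cosh(zeta_nu) = exp(-i nu delta_t U / 2)
    zeta = cmath.acosh(cmath.exp(-0.5j * nu * dt_U))
    for x in (-1, 0, 1):  # possible eigenvalues of n_up - n_dn
        lhs = cmath.exp(-0.5j * nu * dt_U * x**2)
        rhs = 0.5 * sum(cmath.exp(-s * zeta * x) for s in (+1, -1))
        assert abs(lhs - rhs) < 1e-12
```

The right-hand side is $\cosh(\zeta_\nu x)$, so the identity reduces to the defining relation for $\zeta_\nu$ when $x=\pm 1$ and is trivial for $x=0$.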
All of these matrices have dimensions $4N\times 4N$ due to the Trotter slicing (see Fig.~\ref{fig:contour}) and the spin degree of freedom. Note that we modified the generating functional by absorbing the factor $\frac{1}{2}$ per spin from Eq.~\eqref{eq:HStrafo}. Additionally, we have multiplied by $S$ from the left and by $\Delta$ from the right in Eq.~\eqref{eq:modDressedGF}. These changes do not affect the expectation value of the observable, since $\det S=1$ and $\det \Delta=\mathit{const}$, which cancels due to the logarithmic derivative in Eq.~\eqref{eq:GenExpVal}. However, multiplying by $\Delta$ ensures that $D[\eta,\vec{s}\,]$ decays exponentially, while the multiplication by $S$ ensures that the HS spins appear only on the diagonal of $D[\eta,\vec{s}\,]$ and in the parts affected by the source self energy. The first property will be crucial when implementing the ISPI scheme, while the second is useful for an efficient implementation of the transfer-matrix formulation. \subsection{Form of the matrices} To specify the elements of the matrices in Eq.~\eqref{eq:Ddef} we employ the basis $(n,\nu,\sigma)$, where the Trotter index $n = 1,\ldots,N$ labels the time slice, the Keldysh index $\nu = \pm$ distinguishes the upper from the lower Keldysh contour, and $\sigma = \,\uparrow,\downarrow$ denotes the spin.
In this basis, the Green's function of the non-interacting quantum dot in the presence of leads is given as \footnote{We use the notation $A=[a_{nn'}]_{nn'}$ to build the matrix $A$ from its elements $a$.} \begin{align}\label{eq:gom} \Delta &= \bigg[ \int \frac{\dd{\omega}}{2\pi} \mathrm{e}^{-\mathrm{i}\omega (n-n')\delta_t} \times \\ &\quad\times\Big\{\sigma_z \otimes \left[\left(\omega - \epsilon_0\right)\sigma_0 - \tfrac{B}{2} \sigma_z\right] - \gamma_+(\omega)\otimes \sigma_0 \Big\}^{-1} \bigg]_{n n'},\nonumber \end{align} with $\gamma_\pm(\omega)=\gamma_\mathrm L(\omega)\pm\gamma_\mathrm R(\omega)$, where $\gamma_\alpha(\omega)$ denotes the $2\times 2$ Keldysh matrix \begin{align} \gamma_\alpha(\omega) =\frac{\mathrm{i}}{2} \Gamma_\alpha \begin{pmatrix} 2f_\alpha(\omega)-1 & -2f_\alpha(\omega) \\ -2f_\alpha(\omega)+2 & 2f_\alpha(\omega)-1 \end{pmatrix}, \end{align} and $\sigma_z$ and $\sigma_0$ are the Pauli matrices, acting on either Keldysh or spin space if they appear on the left or on the right of the tensor product, respectively. The Fermi function $f_\alpha(\omega)=[\exp(\beta(\omega-\mu_\alpha))+1]^{-1}$ describes the equilibrium occupation distribution of lead $\alpha$. Note that due to the symmetric discretization of the derivative $\partial_t \mapsto \omega^{-1}$ in frequency space, the discretized advanced and retarded Green's functions have the diagonal $|\Delta^{\mathrm{a,r}}|_{nn}=\frac{1}{2}$ instead of $1$, such that $\det(2\Delta)=1$. The charging self energy $S \Sigma^C$ is time-local and therefore a diagonal matrix, with \begin{align}\label{eq:ChargingSE} \Sigma^C & = \diag\left[\mathrm{i} \begin{pmatrix} \zeta_+ & 0 \\ 0 & \zeta_- \end{pmatrix} \otimes \sigma_0 \right]_{n}. \end{align} The form of the source self energy $\Sigma^O$ is based on the observable of interest $\hat{O}$, since it is derived from its source term $\eta O({\color{red}t_\mathrm m})$. 
The source self energies for the occupation number, the spin projection in $z$ direction, and for the current were derived in earlier works \cite{Mundinar_2020, Weiss_2008, Weiss_2013, Mundinar_2019}. We therefore only quote the results here, and refer to the aforementioned references for a more detailed derivation. Assuming we include the measurement on the Trotter slice $m={\color{red}t_\mathrm m}/\delta_t$, we find for the occupation number $\hat N = \sum_\sigma \hat n_\sigma$ and for the spin projection $\hat S_z = \frac{1}{2} (\hat n_\uparrow - \hat n_\downarrow)$ \cite{Mundinar_2020} \begin{subequations}\label{eq:SourceSE} \begin{align}\label{eq:SourceLoc} \Sigma^{N} &= \left[\,\delta_{n m} \delta_{n'm} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \otimes \sigma_0\,\right]_{n n'}, \\ \Sigma^{S_z} &= \left[\,\delta_{n m} \delta_{n' m} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \otimes \frac{\sigma_z}{2}\,\right]_{n n'}. \end{align} These are sparse matrices, containing only two nonzero elements. Finally, the current operator is given by $\hat I = -\mathrm{i} e/2 \sum_{\alpha \vb{k} \sigma} \alpha t_\alpha (\hat c^\dagger_{\alpha \vb{k} \sigma} \hat d_\sigma - \hat d^\dagger_\sigma \hat c_{\alpha \vb{k} \sigma})$, with its source term given by \cite{Weiss_2008, Weiss_2013, Mundinar_2019} \begin{align}\label{eq:SourceI} \Sigma^{I} = \frac{e}{2} \Re\! \left[\mathrm{i} \delta_{n m} \int \frac{\dd \omega}{2\pi} \frac{\left[\sigma_z\gamma_-(\omega)\right] \otimes \sigma_0}{\mathrm{e}^{\mathrm{i} \omega(n-n') \delta_t}}\right]_{n n'}. \end{align}\end{subequations} Note that the source self energy for the current operator is a sparse matrix, too, with only one row $m$ having non-vanishing elements.
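As a small consistency check on the lead matrices entering Eq.~\eqref{eq:gom}: the four Keldysh components of $\gamma_\alpha(\omega)$ given above sum to zero at every frequency, as expected for a self energy in the $\pm$ basis. A sketch with illustrative parameter values (the function name and defaults are ours, not from the paper):

```python
import numpy as np

def gamma_alpha(omega, Gamma=1.0, mu=0.0, beta=2.0):
    """Keldysh 2x2 matrix of lead alpha; parameter values are illustrative."""
    f = 1.0 / (np.exp(beta * (omega - mu)) + 1.0)  # Fermi function f_alpha(omega)
    return 0.5j * Gamma * np.array([[2*f - 1, -2*f],
                                    [-2*f + 2, 2*f - 1]])

# the four Keldysh components obey
# gamma^{++} + gamma^{--} + gamma^{+-} + gamma^{-+} = 0
for omega in (-1.5, 0.0, 0.7):
    g = gamma_alpha(omega)
    assert abs(g.sum()) < 1e-12
```

This sum rule follows directly from the displayed matrix: $(2f-1)+(2f-1)-2f+(-2f+2)=0$ for any $f$.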
\section{Method} \label{sec:Method} \subsection{Truncation of matrix \textit D} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{GFdecay.pdf} \caption{The system's dressed, non-interacting Green's function $\Delta$ as a function of time $(t-t')$ for parameters $k_\mathrm{B} T=0.2\,\Gamma$, $\epsilon_0=0$, $eV=\Gamma$, $B=0$. The dotted line is an exponential fit for the envelope function according to $|\Delta|\propto \exp(-|t-t'|/\xi_0)$. Inset: Correlation time $\xi_0$ of the envelope function for the non-interacting system as a function of temperature $k_\mathrm{B} T$. Other parameters are the same as in the main panel.} \label{fig:GFdecay} \end{figure} The sum in Eq.~\eqref{eq:modGenFunc} runs over all configurations of the $2N$ HS spins, with $N$ usually being of the order of several hundred. Summing over these $2^{2N}$ configurations is an insurmountable task, and approximations are in order. The approximation for the ISPI scheme is based on the fact that lead-induced correlations decay exponentially with time at finite temperatures \cite{Weiss_2008, Weiss_2013}. As a consequence, the system's non-interacting Green's function $\Delta$ -- and with it $D[\eta,\vec{s}\,]$ -- also decays exponentially with time $|t-t'|$. This is shown in the main panel of Fig.~\ref{fig:GFdecay}, where the absolute value of $\Delta^{+-}$ is plotted against $|t-t'|$ for the parameter set $k_\mathrm{B} T = 0.2\,\Gamma$, $\epsilon_0=0$, $eV=\Gamma$, $B=0$ and $\delta_t \to 0$. While for low temperatures one finds large oscillations as seen in the figure, the enveloping function still decays exponentially, as long as $k_\mathrm{B} T >0$. We demonstrate this temperature dependence of the correlations' decay in the inset of Fig.~\ref{fig:GFdecay}, where the correlation time $\xi_0$ of the enveloping function is plotted as a function of temperature.
This motivates us to truncate the interacting Green's function Eq.~\eqref{eq:modDressedGF}, such that \begin{align}\label{eq:tridiagD} D[\eta,\vec{s}\,] = \begin{bmatrix} D_{1}(\vec{s}_1) & D_{1}^+ & & \\ D_{2}^- & D_{2}(\vec{s}_2) & D_{2}^+ & \phantom{\ddots} \\ & D_{3}^- & \ddots & \ddots \\ & & \ddots & D_{L}(\vec{s}_{L}) \end{bmatrix} \end{align} is a block-tridiagonal matrix with $4K \times 4K$ blocks $D_{\ell}^{(\pm)}$, where $\ell=1,\ldots,L$ and $L = N/K$. The number of Trotter slices within one block, $K = t_K/\delta_t$, depends on a chosen memory time $t_K$ and the Trotter step size $\delta_t$. Since only row $m$ in Eq.~\eqref{eq:modDressedGF} is affected by the source self energy, see Eqs.~\eqref{eq:SourceSE}, this also holds true for the block matrix, where only the block row $\tilde m$ containing row $m$ is affected by the source self energy. We also introduced the block spins $\vec{s}_\ell$, each consisting of a set of $2K$ HS spins, such that $\vec{s}=(\vec{s}_1,\ldots,\vec{s}_L)$, see Eq.~\eqref{eq:vs}. Using this truncation we are now able to formulate the ISPI scheme, which was first applied to the Anderson model in Ref.~\onlinecite{Weiss_2008} and later employed to study the Anderson-Holstein model \cite{Huetzen_2012}, and the quantum-dot spin valve \cite{Mundinar_2019, Mundinar_2020}. We introduce the main ideas in the next section, and refer to any of the aforementioned references for a more complete discussion. After that we introduce the mapping of the ISPI method to a transfer-matrix method, which by construction directly addresses the stationary limit and drastically increases the computational performance. \subsection{ISPI formulation} \label{sec:FTImplementation} As can be seen in Eq.~\eqref{eq:modGenFunc} one has to calculate the determinant of the block-tridiagonal matrix from Eq.~\eqref{eq:tridiagD}. 
According to Ref.~\onlinecite{Salkuyeh_2006} such a determinant can be evaluated iteratively via \begin{align}\label{eq:GenFuncDcheck} \check{Z}_{L}[\eta] = \sum_{\vec{s}} \prod_{\ell=1}^{L} \det \check{C}_{\ell}(\vec{s}_{1:\ell}), \end{align} where $\vec{s}_{k:\ell}=\{\vec{s}_{k},\ldots,\vec{s}_\ell\}$, and \begin{align}\label{eq:Dcheck} \check{C}_{\ell}(\vec{s}_{1:\ell}) &= D_{\ell}(\vec{s}_\ell) - D_{\ell}^- \, \check{C}_{\ell-1}^{-1}(\vec{s}_{1:\ell-1}) \, D_{\ell-1}^+ \end{align} denotes the Schur complement in the $\ell$-th step, with $\check{C}_{1}(\vec{s}_{1:1}) = D_{1}(\vec{s}_1)$ and $\ell = 2,\ldots, L$. Therefore, each $\check{C}_{\ell}$ depends on all previous $\check{C}_{k}$, with $k<\ell$, and since the spins are distributed diagonally, Eq.~\eqref{eq:Dcheck} connects $\vec{s}_\ell$ with all previous $\vec{s}_k$. To remain consistent with the idea of truncating correlations after the memory time $t_K$, we approximate $\check{C}_{\ell}(\vec{s}_{1:\ell})$ by \cite{Weiss_2008, Mundinar_2019} \begin{align}\label{eq:Dtilde} C_{\ell}(\vec{s}_{\ell-1},\vec{s}_{\ell}) &= D_{\ell}(\vec{s}_\ell) - D_{\ell}^- \, D_{\ell-1}^{-1}(\vec{s}_{\ell-1}) \, D_{\ell-1}^+, \end{align} that is, in Eq.~\eqref{eq:Dcheck} we replace $\check{C}_{\ell-1}^{-1}(\vec{s}_{1:\ell-1})$ with $D_{\ell-1}^{-1}(\vec{s}_{\ell-1})$, which only depends on $\vec{s}_{\ell-1}$. Therefore, $C_{\ell}$ only connects $\vec{s}_{\ell-1}$ and $\vec{s}_{\ell}$, effectively truncating interaction-induced correlations after the memory time $t_K$. As a result, we are able to rewrite Eq.~\eqref{eq:GenFuncDcheck}, finding \begin{align}\label{eq:GenFuncDtilde} \check{Z}_{L}[\eta] &= \sum_{\vec{s}_{1}} \det D_{1}(\vec{s}_1) \sum_{\vec{s}_{2}} \det C_{2}(\vec{s}_{1},\vec{s}_{2}) \times \cdots \nonumber \\ & \quad \times \sum_{\vec{s}_{L}} \det C_{L}(\vec{s}_{L-1},\vec{s}_{L}).
\end{align} Note that we used the fact that each $C_{\ell}$ depends only on the two block spins $\vec{s}_{\ell-1}$ and $\vec{s}_{\ell}$, allowing us to evaluate $L$ sums over $M=2^{2K}$ spin configurations, instead of the much larger sum over $2^{2N}$ configurations. Since we can choose $K\ll N$, this is a huge reduction of complexity, allowing us to evaluate the generating functional $\check{Z}[\eta]$ for much larger values of $N$. ISPI can be characterized as a finite-time implementation due to the finite length of the considered Keldysh contour as depicted in Fig.~\ref{fig:contour}. In order to reach the stationary limit, the time $\tb$ before the measurement has to be chosen large enough such that the system has relaxed from any arbitrary initial state to its stationary state. This can be ensured by choosing $\Gamma \tb \gg 10$, depending on the system under consideration. For the Anderson model, we find that a time interval of $\Gamma \tb = 15$ is sufficient. The finite-time implementation has the disadvantage that, first, one needs to carefully choose for each calculation the proper time interval to make sure that the stationary limit has been achieved. Second, increasing the time interval (due to larger relaxation times) increases the computational cost. The TraSPI formulation, put forward in this paper, is motivated by the desire to directly perform the limit $\tb\to\infty$ analytically by using transfer matrices. This strategy not only strongly decreases the computational cost but also has other benefits, as described in the next section.
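The iterative Schur-complement evaluation of a block-tridiagonal determinant underlying Eqs.~\eqref{eq:GenFuncDcheck} and \eqref{eq:Dcheck} can be illustrated with random blocks; a minimal numerical sketch (the block size and values are arbitrary stand-ins, not the physical $4K\times 4K$ blocks):

```python
import numpy as np

rng = np.random.default_rng(7)
L, b = 5, 4  # number of blocks and (illustrative) block size

# random stand-ins for the blocks D_ell, D_ell^+, D_ell^-
Dd = [np.eye(b) + 0.1 * rng.standard_normal((b, b)) for _ in range(L)]
Dp = [0.1 * rng.standard_normal((b, b)) for _ in range(L - 1)]
Dm = [0.1 * rng.standard_normal((b, b)) for _ in range(L - 1)]

# assemble the full block-tridiagonal matrix D
D = np.zeros((L * b, L * b))
for l in range(L):
    D[l*b:(l+1)*b, l*b:(l+1)*b] = Dd[l]
for l in range(L - 1):
    D[l*b:(l+1)*b, (l+1)*b:(l+2)*b] = Dp[l]  # superdiagonal blocks D_ell^+
    D[(l+1)*b:(l+2)*b, l*b:(l+1)*b] = Dm[l]  # subdiagonal blocks D_ell^-

# iterate the Schur complements: C_1 = D_1,
# C_ell = D_ell - D_ell^- C_{ell-1}^{-1} D_{ell-1}^+
C = Dd[0]
det_prod = np.linalg.det(C)
for l in range(1, L):
    C = Dd[l] - Dm[l-1] @ np.linalg.inv(C) @ Dp[l-1]
    det_prod *= np.linalg.det(C)

assert np.isclose(det_prod, np.linalg.det(D))
```

The product of the determinants of the exact Schur complements reproduces the full determinant; the ISPI approximation \eqref{eq:Dtilde} then replaces $\check C_{\ell-1}^{-1}$ by $D_{\ell-1}^{-1}$ inside this recursion.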
\subsection{TraSPI formulation} \label{sec:TMImplementation} The term ``transfer matrix'' is reminiscent of the transfer-matrix method known from statistical physics, which was first introduced to solve the 1-dimensional Ising model \cite{Kramers_1941, Wannier_1941} and later used by Onsager as the basis for his well-known exact solution of the two-dimensional Ising model \cite{Onsager_1944, Kaufman49}, for a recent application see, e.g., \cite{Hucht16a,Hucht21a}. In fact, Eq.~\eqref{eq:GenFuncDtilde} is the basis for a mapping of the ISPI scheme to a transfer-matrix formulation, which we derive here. \begin{figure}[t!] \centering \includegraphics[width=0.8 \columnwidth]{TMSketch.pdf} \caption{Color sketch of the elements of $D[\eta,\vec{s}\,]$ after the truncation for the case $L=K=5$. Small boxes represent a $4\times 4$ block, while saturation depicts the absolute value of the elements. Red boxes carry HS spins $s^\nu_n$, blue boxes are affected by the source self energy and carry HS spins $s^\nu_{m}$, grey boxes are neither affected by the source term nor by the HS spins. The blue solid borders represent elements included in the TM $\mat U_{\ell-1,\ell}$. The green dashed borders represent elements included in the TM $\mat V_{\!\ell}$. For these, lighter colors denote TMs affected by the source term. (a) and (b) are graphical representations of Eqs.~\eqref{eq:AsymTransGenFunc} and \eqref{eq:SymTransGenFunc}, respectively. The TM $\mat T$, shown as orange arrows, first propagates one step of size $t_K$ backwards in time, followed by two steps in forward direction. 
The TM $\mat U$ on the other hand is represented by blue arrows, and propagates two steps of size $t_K$ forward in time, while $\mat V$ or $\mat V^{-1}$ are shown by green lines, propagating one step forward or backward, respectively.} \label{fig:TMImplement} \end{figure} First, we reiterate that each $C_{\ell}$ depends on two block spins $\vec{s}_{\ell-1}$ and $\vec{s}_{\ell}$, while $D_{1}$ depends only on $\vec{s}_1$. Therefore, the shape of Eq.~\eqref{eq:GenFuncDtilde} suggests rewriting it into a matrix product in the space of HS-spin configurations. For this, we enumerate the $M$ different configurations of the block spins $\vec{s}_\ell$ by $\mu=0,\ldots,M{-}1$, such that, e.g., $\mu=0$ corresponds to $2K$ HS spins pointing up, $\vec{s}_\ell=(1,\ldots,1)$. In addition, we write $f_\ell(\mu)$ instead of $f_\ell(\vec{s}_\ell)$ for any function that depends on the HS spins, in order to keep the notation compact. With this, we define the $M\times M$ transfer matrix (TM) \begin{align}\label{eq:LambdaDef} \mat T_{\ell-1,\ell} &= \big[\det C_{\ell}(\mu,\mu') \,\big]_{\mu\mu'}. \end{align} Thus, each row corresponds to one of the $M$ configurations $\mu$ of the block spin $\vec{s}_{\ell-1}$ and each column to one of the $M$ configurations $\mu'$ of $\vec{s}_{\ell}$. If we additionally define the two vectors \begin{subequations}\begin{align} \bra{v} &= \big[ \det D_{1}(\mu) \,\big]_\mu,\\ \ket{1} &= \big[ 1 \big]_\mu, \end{align}\end{subequations} then the Keldysh generating functional \eqref{eq:GenFuncDtilde} takes the simple matrix-product form \begin{align}\label{eq:AsymTransGenFunc} \check{Z}_{L}[\eta] = \mel*{v}{\mat T_{1,2}\mat T_{2,3}\cdots\mat T_{L-1,L}}{1}, \end{align} and each multiplication with $\mat T_{\ell-1,\ell}$ corresponds to one term $\sum_{\vec{s}_{\ell}} \det C_{\ell}(\vec{s}_{\ell-1},\vec{s}_{\ell})$ in Eq.~\eqref{eq:GenFuncDtilde}.
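The equivalence of the nested sums \eqref{eq:GenFuncDtilde} and the matrix-product form \eqref{eq:AsymTransGenFunc} can be checked by brute force for small $M$ and $L$; a sketch with random weights standing in for the determinants:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
M, L = 4, 6  # configurations per block spin, number of blocks

v = rng.random(M)       # stands in for det D_1(mu)
T = rng.random((M, M))  # stands in for det C_ell(mu, mu'), taken ell-independent

# brute-force sum over all M**L block-spin chains
brute = sum(
    v[chain[0]] * np.prod([T[a, b] for a, b in zip(chain, chain[1:])])
    for chain in product(range(M), repeat=L)
)

# transfer-matrix form: <v| T^(L-1) |1>
tm = v @ np.linalg.matrix_power(T, L - 1) @ np.ones(M)

assert np.isclose(brute, tm)
```

The brute-force sum scales as $M^L$, while the transfer-matrix product only needs $L-1$ matrix multiplications; this is the reduction that the TraSPI formulation exploits.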
In the next step, we symmetrize Eq.~\eqref{eq:AsymTransGenFunc} by introducing two new transfer matrices $\mat U_{\ell-1,\ell}$ and $\mat V_{\!\ell}$. For this, we reinspect the definition \eqref{eq:Dtilde} of $C_{\ell}$, whose determinant provides the elements of $\mat T_{\ell-1,\ell}$, noting that it has the well-known form of the Schur complement of a part of the tridiagonal block matrix $D[\eta,\vec{s}\,]$, given by \begin{align}\label{eq:Dbar} D_{\ell-1,\ell}(\mu,\mu') = \begin{bmatrix} D_{\ell-1}(\mu) & D_{\ell-1}^+ \\ D_{\ell}^- & D_{\ell}(\mu') \end{bmatrix}. \end{align} This is a $2\times 2$ block matrix, affected by the block spins $\vec{s}_{\ell-1}$ and $\vec{s}_{\ell}$. Thus, we are able to rewrite $\det C_{\ell}$ as the quotient of two determinants, \begin{align}\label{eq:DtildeSchur} \det C_{\ell}(\mu,\mu') = \frac{\det D_{\ell-1,\ell}(\mu,\mu')} {\det D_{\ell-1}(\mu)}. \end{align} Returning to the transfer-matrix formalism, this corresponds to the matrix product \begin{align} \mat T_{\ell-1,\ell} = \mat V^{-1}_{\!\ell-1} \mat U^{\vphantom{1}}_{\ell-1,\ell} \, , \end{align} where $\mat U_{\ell-1,\ell}$ is a dense $M\times M$ transfer matrix and $\mat V_{\!\ell}$ is an $M\times M$ diagonal matrix, \begin{subequations}\label{eq:UVDef} \begin{align} \mat U_{\ell-1,\ell} &= \big[ \det D_{\ell-1,\ell}(\mu,\mu') \,\big]_{\mu\mu'} \\ \mat V_{\!\ell} &= \diag\!\big[ \det D_{\ell}(\mu) \,\big]_{\mu}. \end{align} \end{subequations} Building the generating functional with these, we get a symmetrized version of Eq.~\eqref{eq:AsymTransGenFunc}, \begin{align}\label{eq:SymTransGenFunc} \check{Z}_{L}[\eta] = \mel**{1}{\mat U^{\vphantom{1}}_{1,2} \mat V^{-1}_{\!2} \mat U^{\vphantom{1}}_{2,3} \cdots \mat V^{-1}_{\!L-1} \mat U^{\vphantom{1}}_{L-1,L}}{1}, \end{align} where we used the fact that $\bra{v}= \bra{1} \mat V_{\!1}$.
We emphasize that up to this point Eqs.~\eqref{eq:GenFuncDtilde}, \eqref{eq:AsymTransGenFunc}, and \eqref{eq:SymTransGenFunc} are equivalent, and no additional assumptions have been made. In Fig.~\ref{fig:TMImplement} we show a sketch of the elements of $D[\eta,\vec{s}\,]$ that are taken into account for the case $L=K=5$. Each of the small $1\times 1$ boxes represents a single time slice of $D[\eta,\vec{s}\,]$ (thus a $4\times 4$ block, taking spin and Keldysh structure into account). The number of matrices entering Eq.~\eqref{eq:SymTransGenFunc} increases linearly with $L$. To achieve the stationary limit, we have to choose $L$ large. We now improve upon this: (i) we perform the stationary limit analytically, (ii) we optimize the position of the measurement, (iii) we calculate the derivative $\pdv{\eta}\check{Z}_{L}[\eta]$ at $\eta=0$ analytically, and (iv) we implement a differential measurement method. \subsubsection{Stationary limit} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{LambdaU.pdf} \caption{(a) The absolute value of the five leading eigenvalues of the TM $\mat T$ as a function of the interaction strength $U$ for $\Gamma t_K = 1.5$ and $K=5$. (b) The eigenvalue $\lambda_0$ as a function of $U$ for $\Gamma t_K = 1.5$ and for different $K$ (and therefore $\delta_t$). Dotted lines show the approximation \eqref{eq:lnlambda}. Shown is the parameter set given by $k_\mathrm{B} T=0.5\,\Gamma$, $\epsilon_0=0$, $eV=0.1\,\Gamma$, $B=0$. Different parameters produce similar pictures.} \label{fig:EigenvalueDevelopment} \end{figure} From this point on we are interested in the case that the system reached the stationary limit, i.e., we perform the limit $\tb\to \infty$.
In order to perform this limit, we start with an index shift by $\tilde m$ in the block indices $(\ell,\tilde m)_\text{old}\mapsto (\ell,\tilde m)_\text{new}=(\ell-\tilde m,0)$, such that the measurement block becomes $\tilde m=0$ and $\ell=-L_\mathrm b-1,\ldots,L_\mathrm a+1$, with $L_\mathrm b+L_\mathrm a=L-3$. Note that $L_\mathrm b$ and $L_\mathrm a$ denote the number of TMs $\mat T$ \textit{before} and \textit{after} the measurement, respectively. In the next step, we make use of the fact that the system is symmetric under time-translation as long as the source term is not included, i.e., as long as $\eta = 0$. As a consequence, the non-interacting Green's function $\Delta$ obeys $\Delta_{n,n'} = \Delta_{n+1,n'+1}$. Thus, for the tridiagonal matrix $D[0, \vec{s}\,]$ we equally find that $D_\ell(\mu)$ and $D_\ell^\pm$ are independent of $\ell$. With this and Eqs.~\eqref{eq:Dtilde}, \eqref{eq:Dbar}, we find that due to time-translation symmetry the transfer matrices fulfill $\mat T_{\ell-1,\ell} = \mat T$, $\mat U_{\ell-1,\ell} = \mat U$, and $\mat V_{\!\ell} = \mat V$ for all $\ell\neq 0, 1$. However, the source term was implemented to specifically break this time-translation symmetry, and therefore whenever an element of $D[\eta,\vec{s}\,]$ is affected by the source self energy $\Sigma^O$, the above relations no longer hold. We mentioned before that for time-local observables, the source self energy affects only the matrices $D_{0}^{\pm}$ and $D_{0}(\vec{s}_{0})$ at most. Consequently, only the transfer matrices acting at position $\tilde m=0$, i.\,e., $\mat T_{-1,0}$, $\mat T_{0,1}$, $\mat U_{-1,0}$, $\mat U_{0,1}$ and $\mat V_{\!0}$, are affected by the source term and are therefore different from the others, see Fig.~\ref{fig:TMImplement}. We find \begin{align} \label{eq:LambdaGenFunc} \check{Z}_{L}[\eta] &= \mel*{v}{\mat T^{L_\mathrm b} \mat T_{-1,0} \mat T_{0,1} \mat T^{L_\mathrm a}}{1}.
\end{align} To reach the stationary limit $\tb \to \infty$, we now can take the limit $L\to \infty$, meaning we let both $L_\mathrm b\to \infty$ and $L_\mathrm a\to \infty$. For the high powers of $\mat T$ we use the identity \begin{align}\label{eq:MatAsEigvec} \lim_{n\to\infty} \frac{\mat T^n}{\lambda_0^n} = \lim_{n\to\infty} \sum_{k\geq 0} \frac{\lambda_k^n}{\lambda_0^n} \dyad{\lambda_k} = \dyad{\lambda_0}, \end{align} where $\lambda_k$ are the eigenvalues of $\mat T=\mat V^{-1}\mat U$, with $\abs{\lambda_0} > \abs{\lambda_1} \geq \ldots$, while $\bra{\lambda_k}$ and $\ket{\lambda_k}$ are the respective left and right eigenvectors, fulfilling $\braket{\lambda_k}{\lambda_{k'}}=\delta_{kk'}$. Likewise, we define $\bra*{\tilde\lambda_k}=\bra*{\lambda_k}\mat V^{-1}$ and $\ket*{\tilde\lambda_k}=\mat V\ket*{\lambda_k}$ as corresponding left and right eigenvectors of $\mat U \mat V^{-1}$ (with the same eigenvalues, $\tilde\lambda_k=\lambda_k$). Plugging this back into Eq.~\eqref{eq:LambdaGenFunc} and performing the stationary limit, we find \begin{subequations}\label{eq:LambdaGenFunc2} \begin{align} \frac{Z_{\infty}[\eta]}{Z_{\infty}[0]} & = \lambda_0^{-2} \mel**{\lambda_0}{\mat T_{-1,0} \mat T_{0,1}}{\lambda_0} \\ \label{eq:UVGenFunc2} &= \lambda_0^{-2} \mel*{\tilde\lambda_0}{\mat U_{-1,0} \mat V^{-1}_{\!0} \mat U_{0,1}}{\lambda_0}. \end{align} \end{subequations} In Fig.~\ref{fig:EigenvalueDevelopment}(a) we plot the absolute value of the largest five eigenvalues of the TM $\mat T$ as a function of the Coulomb interaction strength $U$ for the parameter set $k_\mathrm{B} T=0.5\,\Gamma$, $\epsilon_0=0$, $eV=0.1\,\Gamma$, $B=0$, memory time $\Gamma t_K = 1.5$, and $K=5$. We find that the largest eigenvalue is $\lambda_0=M$ at $U=0$, as would be expected, since in the noninteracting limit the transfer matrix $\mat T$ is just an $M \times M$ matrix in which each element is equal to 1.
For finite Coulomb interaction it scales with $\delta_u = \delta_t U$ as \begin{align}\label{eq:lnlambda} \ln\frac{\lambda_0}{M} = \frac{3(K-1)}{32} \delta_u^2 + \mathcal{O}(\delta_u^3) \end{align} for small $\delta_u$. However, once $U$ becomes large, we occasionally find peaks where one of the lower eigenvalues diverges. These divergences are not physical but a consequence of the discretization and truncation of the Green's function. We can track them down to vanishing elements of the diagonal TM $\mat V$, resulting in divergences in $\mat V^{-1}$, and hence in $\mat T$. Nonetheless, the physically correct eigenvalue $\lambda_0$ is still present, and we choose that one (instead of the peaks) to calculate the correct generating functional. To choose the correct eigenvalue $\lambda_0$ we make use of the fact that the corresponding right eigenvector is $\ket{\lambda_0} = \ket{1} + \mathcal{O}(\delta_u^3)$ (see below). Thus, we identify the correct eigenvalue by maximizing the overlap of the corresponding right eigenvector with $\ket{1}$. In Fig.~\ref{fig:EigenvalueDevelopment}(b) we plot this physically correct eigenvalue $\lambda_0$ as a function of $U$ for different $K$. As can be seen, the data only become noisy for $U \gtrsim 3\,\Gamma$, and increasing $K$ allows for the calculation of larger $U$. The TraSPI formulation represents a significant reduction in complexity compared to the traditional, i.e., finite-time, ISPI implementation: Instead of $L-1$ dense transfer matrices, we only have to evaluate the three dense TMs $\mat U$, $\mat U_{-1,0}$ and $\mat U_{0,1}$, as well as the two diagonal TMs $\mat V$ and $\mat V_{\!0}$. An additional benefit of the $\mat U\mat V$-decomposition \eqref{eq:UVDef} is that it allows for an analytic evaluation of the derivative with respect to $\eta$.
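The eigenvalue-selection rule described above can be sketched as follows. The matrix used here is purely illustrative (the $U=0$ all-ones matrix plus a weak perturbation, standing in for an actual TraSPI TM); the selection relies only on the fact that the physical right eigenvector stays close to $\ket{1}$:

```python
import numpy as np

def physical_eigenpair(T):
    """Pick the eigenvalue whose right eigenvector has maximal overlap with |1>.

    Mirrors the selection rule in the text: near U = 0 the physical right
    eigenvector is |1> up to O(delta_u^3), so spurious divergent eigenvalues
    caused by vanishing elements of V are filtered out.
    """
    lam, R = np.linalg.eig(T)
    ones = np.ones(T.shape[0])
    # Overlap |<1|lambda_k>| with each normalized right eigenvector.
    overlaps = np.abs(ones @ R) / np.linalg.norm(R, axis=0)
    k = int(np.argmax(overlaps))
    return lam[k], R[:, k] / np.linalg.norm(R[:, k])

# Toy example: the matrix of ones (the U = 0 limit, lambda_0 = M) plus noise.
M = 6
T = np.ones((M, M)) + 1e-3 * np.random.default_rng(1).standard_normal((M, M))
lam0, v0 = physical_eigenpair(T)
print(lam0)  # close to M = 6
```

For the unperturbed ones matrix the selected eigenvalue is exactly $M$, in line with the $U=0$ limit discussed above.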
Before doing so, however, we demonstrate that we are able to shift the measurement time to the end of the Keldysh contour in order to reduce the number of necessary transfer matrices even further. \subsubsection{Position of the measurement} In Eqs.~\eqref{eq:LambdaGenFunc2}, we assumed that the measurement is placed somewhere in the middle of the Keldysh contour (see also Fig.~\ref{fig:contour}). However, based on causality, whatever happens at physical times after the measurement must not have an impact on the outcome of the measurement itself. In other words, the time propagation $t_\mathrm a$ along the Keldysh contour from ${\color{red}t_\mathrm m}$ to $t_N$ and back to ${\color{red}t_\mathrm m}$ is unitary and should therefore cancel out, see Fig.~\ref{fig:contour}. As a consequence, it should not be detrimental to shift the measurement time forward along the Keldysh contour until it is located at the rightmost point. In terms of the TM formulation of Eq.~\eqref{eq:LambdaGenFunc}, it should be sufficient to let $L_\mathrm b\to\infty$ and set $L_\mathrm a=0$; or, in view of the results of Eqs.~\eqref{eq:LambdaGenFunc2}, the vector $\ket{1}$ should be a right eigenvector of the TM $\mat T$. However, this exact unitarity, present in the continuum limit \eqref{eq:GenFuncDef}, is violated by the Trotter discretization. Nevertheless, the error in the right eigenvector is quite small, $\ket{\lambda_0}=\ket{1}+\mathcal{O}(\delta_u^3)$, and can be safely neglected, while the other eigenvectors show a stronger dependence on $\delta_u$. This means that we are able to rewrite Eqs.~\eqref{eq:LambdaGenFunc2} as \begin{subequations} \begin{align}\label{eq:LambdaGenFuncEnd} \frac{Z_{\infty}[\eta]}{Z_{\infty}[0]} & = \lambda_0^{-2} \mel**{\lambda_0}{\mat T_{-1,0} \mat T_{0,1}}{1} \\ &= \lambda_0^{-2} \mel*{\tilde\lambda_0}{\mat U_{-1,0} \mat V^{-1}_{\!0} \mat U_{0,1}}{1}.
\end{align} \end{subequations} Note that with this equation, the measurement takes place in the second to last $4K\times 4K$ block. If we actually measure on the last possible Trotter slice, ${\color{red}t_\mathrm m} = t_N$, only a single TM remains that is affected by the source term, resulting in the even simpler expression \begin{subequations}\label{eq:LambdaGenFuncEnd2} \begin{align} \frac{Z_{\infty}[\eta]}{Z_{\infty}[0]} & = \lambda_0^{-1} \mel**{\lambda_0}{\mat T_{-1,0}}{1} \\ \label{eq:UVGenFuncEnd2} & = \lambda_0^{-1} \mel*{\tilde\lambda_0}{\mat U_{-1,0}}{1}. \end{align} \end{subequations} This means that we are able to further reduce the number of necessary TMs to the two fully occupied TMs $\mat U$ and $\mat U_{-1,0}$, and one diagonal TM $\mat V$. We now turn to deriving the analytic derivative with respect to $\eta$, necessary to calculate observables from Eqs.~\eqref{eq:LambdaGenFuncEnd2}. \subsubsection{Analytic derivative} To calculate expectation values of observables via Eq.~\eqref{eq:UVGenFuncEnd2}, we again make use of Eq.~\eqref{eq:GenExpVal}. We explicitly calculate the derivative with respect to $\eta$ here. If for some reason we do not wish to position the measurement at the end of the Keldysh contour, meaning we use Eq.~\eqref{eq:UVGenFunc2} instead of Eq.~\eqref{eq:UVGenFuncEnd2}, the calculations for the derivatives of the two remaining TMs work analogously to the one presented here. Performing the derivative yields \begin{align}\label{eq:AnaDerivU} \expval*{\hat{O}} & = \frac{Z_\infty'[0]}{Z_\infty[0]} = \lambda_0^{-1} \mel*{\tilde\lambda_0}{\mat U_{-1,0}'}{\lambda_0}, \end{align} where we wrote $A'$ for $\pdv{\eta}A|_{\eta=0}$. The derivative of a matrix acts on its elements, which in this case are themselves determinants of $D_{\ell-1,\ell}$.
Therefore, we make use of the identity $(\ln \det A)' = (\det A)'/\det A = {\tr}(A^{-1} A')$, and extend the notation introduced for $D_{\ell-1,\ell}$, Eq.~\eqref{eq:Dbar}, to other matrices, meaning $A_{\ell-1,\ell}$ denotes a $2\times 2$ part of an $L\times L$ block matrix $A$. We further simplify the notation by writing $A_{[2]}=A_{\ell-1,\ell}$ if $\ell \neq 0,1$, which are $2\times 2$ block matrices not affected by the source term. With this we find for the derivative \footnote{Note that we can safely write $\tr(A/B)$ even if $[A,B]\neq 0$, as $\tr(A B^{-1})=\tr(B^{-1} A)$.} \begin{subequations}\begin{align} \mat U'_{-1,0} &= \left[ \det D_{[2]}(\mu,\mu') \tr\frac{ D_{-1,0}'(\mu,\mu')}{D_{[2]}(\mu,\mu')} \right]_{\mu\mu'} \\ &= \left[ \tr \frac{\det\big(1 - S_{[2]} (\mu,\mu') \tilde\Sigma^{C}_{[2]}\big) \, \tilde\Sigma^{O}_{-1,0}}{1 - S_{[2]}(\mu,\mu') \tilde\Sigma^{C}_{[2]}} \right]_{\mu\mu'}, \label{eq:Um}\end{align}\end{subequations} where we plugged in $D_{[2]}(\mu,\mu')$ and $D_{-1,0}'(\mu,\mu')$ from \eqref{eq:modDressedGF}, and pulled the determinant inside the trace. Inserting Eq.~\eqref{eq:Um} back into Eq.~\eqref{eq:AnaDerivU} allows for an analytic expression of the derivative of the generating functional, and with it for the expectation value. When placing the measurement somewhere in the center of the Keldysh contour, cf.\ Eq.~\eqref{eq:UVGenFunc2}, one would find that \begin{align} \label{eq:AnaDerivFull} \expval*{\hat{O}} &= \expval*{\hat{O}}_{-1,0}+\expval*{\hat{O}}_{0,1}-\expval*{\hat{O}}_{0}, \end{align} where $\expval*{\hat{O}}_{-1,0}$ is given by Eq.~\eqref{eq:AnaDerivU}, while $\expval*{\hat{O}}_{0,1} = \lambda_0^{-1} \mel*{\tilde\lambda_0}{\mat U_{0,1}'}{\lambda_0}$ and $\expval*{\hat{O}}_{0} = \mel*{\tilde\lambda_0}{\mat V_{\!0}'}{\lambda_0}$. The respective derivatives are then calculated in analogy to Eq.~\eqref{eq:Um}.
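The determinant-derivative identity used above, $(\ln\det A)' = \tr(A^{-1}A')$ (Jacobi's formula), is easily verified numerically. The matrices below are generic stand-ins, with $A'$ playing the role of the $\eta$-derivative of a TM element:

```python
import numpy as np

# Numerical check of Jacobi's formula (ln det A)' = tr(A^{-1} A').
rng = np.random.default_rng(2)
A0 = rng.random((5, 5)) + 5 * np.eye(5)   # well-conditioned base matrix
dA = rng.random((5, 5))                   # derivative direction A'

def lndet(eta):
    return np.log(np.linalg.det(A0 + eta * dA))

h = 1e-6
numeric = (lndet(h) - lndet(-h)) / (2 * h)      # central difference in eta
analytic = np.trace(np.linalg.solve(A0, dA))    # tr(A^{-1} A')
print(abs(numeric - analytic))                  # ~ O(h^2)
```

The same check applies verbatim to the $2\times 2$ blocks $D_{[2]}$ entering Eq.~\eqref{eq:Um}.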
Using the analytic derivative instead of a numeric derivative reduces numerical errors and allows for a more straightforward implementation. We now introduce a differential measurement to further decrease the impact of numerical errors. \subsubsection{Differential measurement} In a final step, we minimize discretization errors further by only calculating a differential form of the observable of interest. This means that instead of the $U$-dependent observable itself, we only calculate the difference between the interacting case and the noninteracting limit numerically, and obtain an improved estimate \begin{align} \expval*{\hat{O}} = O^{(0)} + \expval*{\hat{O}(U) - \hat{O}(U=0)} \end{align} for the considered observable. The analytic expectation values $O^{(0)}$ for $U=0$ are calculated by employing the Meir-Wingreen formula \cite{Meir_1992,Jauho_1994} for the current and the dot's lesser Green's function to account for the occupation number and the $z$ projection of the spin, leading to \begin{subequations}\label{eqs:AnaObserv} \begin{align} I^{(0)} & = 2\,\Gamma^2\int_{-\infty}^\infty \frac{\dd{\omega}}{2\pi} \frac{p(\omega)\left[ f_\mathrm L(\omega) - f_\mathrm R(\omega) \right]}{[p(\omega)^2-B^2] (\omega - \epsilon_0)}, \\ N^{(0)} & = 2\,\Gamma\int_{-\infty}^\infty \frac{\dd{\omega}}{2\pi} \frac{p(\omega)\left[ f_\mathrm L(\omega) + f_\mathrm R(\omega) \right]}{[p(\omega)^2-B^2](\omega - \epsilon_0)}, \\ S_{z}^{(0)} & = B\Gamma \int_{-\infty}^\infty \frac{\dd{\omega}}{2\pi} \frac{f_\mathrm L(\omega) + f_\mathrm R(\omega)}{[p(\omega)^2-B^2](\omega - \epsilon_0)}, \end{align}\end{subequations} where we defined $p(\omega)=[\Gamma^2 + (\omega - \epsilon_0)^2]/(\omega - \epsilon_0)$. Since the numerical calculation of the observable at $U=0$ is expected to have similar errors to the interacting case, the differential measurement cancels these errors. This leads to a significant reduction of numerical errors.
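The analytic $U=0$ current $I^{(0)}$ above can be evaluated by straightforward quadrature. The sketch below assumes, beyond what is stated in the text, a symmetric bias split $\mu_{\mathrm L/\mathrm R}=\pm eV/2$, $B=0$, and units with $\hbar=e=1$ (energies in units of $\Gamma$):

```python
import numpy as np

# Quadrature evaluation of the analytic U = 0 current I^(0).
# Assumptions (not fixed by the text): symmetric bias split mu_{L/R} = +/- eV/2,
# B = 0, and units hbar = e = 1.
def fermi(w, mu, kT):
    return 1.0 / (np.exp((w - mu) / kT) + 1.0)

def I0(eps0=0.0, eV=0.1, kT=0.5, B=0.0, Gamma=1.0):
    w = eps0 + np.linspace(-60.0, 60.0, 200000)  # grid avoids w = eps0 exactly
    p = (Gamma**2 + (w - eps0)**2) / (w - eps0)
    g = p * (fermi(w, +eV / 2, kT) - fermi(w, -eV / 2, kT)) \
        / ((p**2 - B**2) * (w - eps0))
    # trapezoidal rule, written out to stay compatible across numpy versions
    integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(w))
    return 2 * Gamma**2 * integral / (2 * np.pi)

print(I0(eV=0.1))               # linear response: I^(0) is proportional to V
print(I0(eV=0.2) / I0(eV=0.1))  # approximately 2 for small bias
```

For $B=0$ the integrand reduces to $[f_\mathrm L - f_\mathrm R]/[\Gamma^2+(\omega-\epsilon_0)^2]$, so the integral is manifestly regular at $\omega=\epsilon_0$.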
The remaining errors are then effectively eliminated during a convergence procedure, which is discussed in the next section. \subsection{Convergence}\label{ssec:convergence} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{plot_regression.pdf} \caption{The convergence procedure used to eliminate systematic errors for the example of the current as an observable. (a) Regression of $\delta_t^2 \to 0$ to eliminate the Trotter error, for different $t_K$ with $K=3, \ldots,8$ and $U=1.5\,\Gamma$. The squares at $\delta^2_t=0$ are the resulting values of this procedure. They can be found again in the ``original'' data set in (b), where the current is shown as a function of $\Gamma t_K$. For this data we employ the Aitken extrapolation \eqref{eq:Aitken} twice, shown as ``extrapolation 1'' and ``extrapolation 2''. From the mean of the data of iteration 2 the limiting value for $t_K \to \infty$ is obtained (shown as a dashed line; the error estimate is of the order of the line width). } \label{fig:Converge} \end{figure} Throughout the derivation of Eq.~\eqref{eq:AnaDerivU}, we introduced two systematic errors: the Trotter error, caused by the finite discretization length $\delta_t$ \cite{Fye_1986}, and the error introduced by the truncation at the memory time $t_K$. However, both can be eliminated using a convergence procedure which allows us to provide numerically exact data. In earlier works \cite{Weiss_2008, Mundinar_2020}, different approaches were used for this convergence procedure, the most common being a two-step regression, first for $\delta_t \to 0$, and then with the resulting values for $1/t_K\to 0$. We refer to the aforementioned sources for a detailed discussion of these procedures. The main problem of these procedures is that the second regression $1/t_K \to 0$ using a power series ansatz is difficult to motivate.
Therefore, we employ in this work a more sophisticated convergence procedure that also starts with a power series regression of $\delta_t \to 0$ but then uses Aitken extrapolations \cite{Aitken_1927}. The complete process is shown in Fig.~\ref{fig:Converge} for the example of the current as an observable and for the parameter set $k_\mathrm{B} T = 0.5\,\Gamma$, $\epsilon_0 = \Gamma$, $eV = 0.5\,\Gamma$, and $U = 1.5\,\Gamma$. First, we calculate the expectation value $O$ of the observable $\hat{O}$ for a fixed memory time $t_K$ and varying step size $\delta_t$. The result is a set of different realizations of the same observable for this specific parameter set and memory time, $O(t_K, \delta_{t} = t_K/K)$ with $K=3,\ldots,8$. It is well known that Trotter errors are of the order $\delta_t^2$ \cite{Fye_1986}. Consequently, we fit $O(t_K, \delta_t)$ against a polynomial expression \begin{align} O(t_K, \delta_{t}) = \sum_{j=0}^n c_j \delta_t^{2j}, \end{align} such that the observable with eliminated Trotter error, $O(t_K) = \lim_{\delta_t \to 0} O(t_K, \delta_t)$, is given by the constant $c_0$ (see Fig.~\ref{fig:Converge}(a)). Note that in the equation above it is sufficient to stop at $n=2$, thus only taking terms up to second order in $\delta^2_t$ into account \cite{Weiss_2008, Mundinar_2020}. Having eliminated the Trotter error, we now turn to eliminating the truncation error. For this, we repeat the first step for different values of the memory time $t_K$, with $1 \leq \Gamma t_K \leq 2$. This leads to a set of realizations of the desired observable $O(t_K)$. As can be seen in Fig.~\ref{fig:Converge}(b), the observable as a function of $\Gamma t_K$ converges exponentially to a limiting value for $t_K\to \infty$, which is the numerically exact result of the observable for one specific parameter set. This behavior can be understood from the exponential decay of the Green's function $\Delta$, cf.~Fig.~\ref{fig:GFdecay}.
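The first step of this procedure, the regression in $\delta_t^2$, can be sketched as follows. The data here are synthetic (a hypothetical observable with a purely quadratic-plus-quartic Trotter error), standing in for actual TraSPI output:

```python
import numpy as np

# Sketch of the Trotter regression: fit O(t_K, delta_t) against a polynomial
# in delta_t^2 and read off the Trotter-free value c_0.
t_K = 1.5
Ks = np.arange(3, 9)                           # K = 3, ..., 8
dt = t_K / Ks                                  # delta_t = t_K / K
O_true = 0.42                                  # hypothetical converged value
O_data = O_true + 0.3 * dt**2 - 0.05 * dt**4   # synthetic Trotter error

x = dt**2
coeffs = np.polyfit(x, O_data, 2)   # n = 2: fit up to (delta_t^2)^2
O_extrap = coeffs[-1]               # constant term c_0 = O(t_K, delta_t -> 0)
print(O_extrap)                     # recovers 0.42 for this noise-free data
```

Repeating this fit for several $t_K$ yields the sequence $O(t_K)$ that is then fed into the Aitken acceleration.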
For such exponentially converging sequences the Aitken extrapolation works exceptionally well, accelerating the convergence, which eventually leads to an approximately constant sequence at the limiting value. The Aitken extrapolation for a sequence $f_n$ is given by \cite{Aitken_1927} \begin{align}\label{eq:Aitken} (\mathcal A f)_{n+1} = f_n - \frac{(\Delta_n f_n)^2}{\Delta_n^2 f_n}, \end{align} with forward differences \cite{KoenigHucht21} $\Delta_n f_n=f_{n+1}-f_n$. For $f_n$ we use the set of realizations of the desired observable $O(t_K)$ and find that after two Aitken extrapolations the data is approximately constant around the numerically exact result for sufficiently large values of $t_K$. We use the mean value of this sequence as the final result and its standard deviation as an error estimate. \section{Results} \label{sec:Results} Having introduced and extensively discussed the TraSPI method, we now use it to calculate current-based and occupation number-based observables of a single-level, interacting quantum dot coupled to two normal leads. To make interaction-induced effects more visible, we show derivatives of the current and the occupation number with respect to the bias voltage $eV$. These derivatives are calculated numerically via central differences, \begin{align} \dv{O(V)}{V} = \frac{O(V+\delta_V) - O(V-\delta_V)}{2 \delta_V}+\mathcal{O}(\delta_V^2), \end{align} where we choose $e\delta_V \leq 0.01\,\Gamma$. For all data sets we took memory sizes $1.0 \leq \Gamma t_K \leq 2.5$ and $K = 3,\ldots,7$ into account. Whenever this was not sufficient to reach convergence, we included $K=8$. \subsection{Differential conductance} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{Geps0DiffU.pdf} \caption{The differential conductance as a function of the gate voltage $\epsilon_0$ for different values of the Coulomb interaction strength $U$. Shown is the linear-response regime, probed by a small bias voltage $eV = 0.1\,\Gamma$.
Other parameters are $k_\mathrm{B} T=0.2\,\Gamma$ and $B=0$; shaded areas are error estimates.} \label{fig:Ieps0DiffU} \end{figure} We start the discussion of the conductance in the linear-response regime with Fig.~\ref{fig:Ieps0DiffU}. There, $\dd I/\dd V$ is shown as a function of the gate voltage $\epsilon_0$ for a small bias voltage $eV=0.1\,\Gamma$. The other parameters are given by $k_\mathrm{B} T = 0.2\,\Gamma$ and $B=0$. Each curve represents a different value of the Coulomb interaction strength $U$, starting at $U=0$, which is calculated analytically, see Eqs.~\eqref{eqs:AnaObserv}. For such small bias voltages, we are able to reach Coulomb interaction strengths of $U \leq 2.5\,\Gamma$, with the data sets still converging. For larger $U$ it would be necessary to take memory times $\Gamma t_K > 2.5$ and thus larger $K$ into account. For the noninteracting case, we find a peak at $\epsilon_0=0$, where the single level is within the transport window. The peak height for the chosen parameters is $\dd I/\dd V \approx 1.81 \,e^2/h$. The deviation from the maximum possible value of $2 \,e^2/h$ is due to finite temperature. As $\epsilon_0$ moves away from $0$, the dot's energy level is pushed out of the transport window and the conductance drops significantly. When increasing the interaction strength $U$, one would expect a level splitting for single and double occupation of the quantum dot, and as a result two peaks at $\epsilon_0 = \pm U/2$ in the $\dd I/\dd V$ curves. We do not reach high enough values of $U$ to clearly resolve this peak splitting, but we see that the central peak becomes significantly broader. In addition, at $\epsilon_0 = 0$ the conductance is suppressed with increasing $U$, with the maximum value for $U=2.5\,\Gamma$ being $\dd I/\dd V = 1.623 \pm 0.002 \,e^2/h$.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{dIdVeps0DiffU.pdf} \caption{The differential conductance as a function of the gate voltage $\epsilon_0$ for different values of the Coulomb interaction strength $U$. Parameters are $k_\mathrm{B} T=0.2\,\Gamma$, $eV=3\,\Gamma$, and $B=0$.} \label{fig:dIdVeps0DiffU} \end{figure} Next, we turn to the nonlinear-response regime. In Fig.~\ref{fig:dIdVeps0DiffU}, we show the differential conductance for a large bias voltage of $eV = 3\,\Gamma$. The other parameters are as in Fig.~\ref{fig:Ieps0DiffU}. Due to the large bias voltage, the $\epsilon_0$ dependence of the conductance is more complex. For $U=0$, there are two peaks at $\epsilon_0 \approx \pm eV/2 = \pm 1.5\,\Gamma $, reflecting the two resonance conditions of the dot level matching the Fermi level of the left and right electrode, respectively. The peaks are not perfectly centered around $\pm eV/2$ due to the finite width of the peaks. Since the two peaks overlap, their maxima get pulled towards each other. At finite Coulomb interaction, there are, in principle, four resonance conditions, given by $\epsilon_0 \approx \pm eV/2 \pm U/2$. This explains why, with increasing Coulomb interaction $U$, the two peaks are pushed away from each other. Simultaneously, the peak heights decrease significantly. This is a consequence of the reduced overlap of the peaks as they move away from each other. The most remarkable feature, however, is the appearance of a third peak around $\epsilon_0=0$ for $U > 2\,\Gamma$. This is due to the fact that here $U \approx eV$, and therefore at $\epsilon_0 = 0$ the singly occupied state is in resonance with the right lead and the doubly occupied state is in resonance with the left lead. Since a noninteracting-electron picture only predicts two peaks, the appearance of additional peaks is a clear indication of Coulomb interaction. However, a third (and ultimately a fourth) peak can only be resolved for sufficiently large values of $U$.
This could not be achieved with the finite-time implementation of ISPI, but the TraSPI formulation now allows us to enter this regime. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{dIdVeVDiffU.pdf} \caption{The differential conductance as a function of the bias voltage $eV$ for different values of the Coulomb interaction strength. Parameters are $k_\mathrm{B} T=0.2\,\Gamma$, $\epsilon_0=\Gamma$, $B=0$.} \label{fig:dIdVeVDiffU} \end{figure} To address the crossover from the linear- to the nonlinear-response regime, we discuss in Fig.~\ref{fig:dIdVeVDiffU} the differential conductance as a function of the bias voltage at $\epsilon_0 = \Gamma$. The temperature is again $k_\mathrm{B} T = 0.2\,\Gamma$ and $B=0$. For vanishing Coulomb interaction, we find, as expected, two peaks located at $eV \approx \pm \epsilon_0 =\pm \Gamma$. With increasing Coulomb interaction $U$, the differential conductance increases in the linear-response regime but decreases in the nonlinear-response regime, in accordance with our findings in Figs.~\ref{fig:Ieps0DiffU} and \ref{fig:dIdVeps0DiffU}. We emphasize that the system under consideration is particle-hole symmetric, such that the differential conductance $\dd I/\dd V$ is an even function of both $\epsilon_0$ and $eV$. Both symmetries are fulfilled perfectly by the TraSPI formalism up to numerical accuracy. \subsection{Occupation number} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{dNdVeps0eVhigh.pdf} \caption{The derivative of the occupation number with respect to bias voltage in the nonlinear-response regime, $eV=3\,\Gamma$, as a function of gate voltage $\epsilon_0$. Shown are different values of the Coulomb interaction strength $U$. Parameters are $k_\mathrm{B} T=0.2\,\Gamma$, $B=0$.} \label{fig:dNdVeps0higheV} \end{figure} We now address the average number of electrons on the quantum dot. For low-lying $\epsilon_0$, the quantum dot is occupied with two electrons.
With increasing $\epsilon_0$, the occupation number is reduced until the dot ultimately becomes empty. In Fig.~\ref{fig:dNdVeps0higheV}, we show the derivative of the occupation number as a function of gate voltage in the nonlinear-response regime. We choose the same parameters as in Fig.~\ref{fig:dIdVeps0DiffU}. Just as the differential conductance is better suited to resolving detailed structures, the derivative of the occupation number shows more structure than the occupation number itself. In that respect, Fig.~\ref{fig:dIdVeps0DiffU} for the current and Fig.~\ref{fig:dNdVeps0higheV} for the occupation number are analogous to each other. The peaks of $\dd N/\dd V$ are at the same positions as the peaks of $\dd I/\dd V$, indicating that a large change of the current is accompanied by a large change of the occupation number. With increasing $U$, the peaks move away from each other, and the TraSPI formalism is able to accurately describe the full region in between. The numerical calculations of the conductance and the occupation number are independent of each other. The information contained in these two quantities is partially the same but differs in detail in most transport regimes. There is, however, one limit in which they carry identical information. This occurs at zero temperature, $T=0$, and vanishing bias voltage $V=0$. In this case, the electrons traversing the quantum dot scatter only elastically, such that Friedel's sum rule can be applied, which leads to the Langreth formula \cite{Langreth_1966, Ng_1988} \begin{align} \label{eq:sumrule} \frac{\dd I}{\dd V}\bigg|_{T,V=0}=\frac{2e^2}{h} \frac{4\,\Gamma_\mathrm L \Gamma_\mathrm R}{(\Gamma_\mathrm L+\Gamma_\mathrm R)^2} \sin^2 \left( \frac{\pi}{2} \big\langle N\big\rangle \right). \end{align} This remarkable result is valid for any value of the interaction strength $U$, covering all regimes from noninteracting electrons to strong correlations, e.g., in the Kondo regime.
The Langreth formula is, in general, violated for any approximation scheme that only includes a certain class of transport contributions. However, for a numerically exact treatment, which includes TraSPI, the Langreth formula serves as a consistency check to assess the quality of the method. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{OccGeps0Comp.pdf} \caption{The differential conductance for low bias voltage $eV=0.1\,\Gamma$, as a function of the gate voltage $\epsilon_0$ for different values of the Coulomb interaction strength $U$. Comparison between the direct result obtained via the TraSPI scheme (direct) and the result obtained from Langreth's formula \eqref{eq:sumrule}. Parameters are $k_\mathrm{B} T=0.2\,\Gamma$, $B=0$.} \label{fig:GOccComparison} \end{figure} In Fig.~\ref{fig:GOccComparison}, we compare the linear conductance calculated in two different ways: once directly and once via the calculation of the occupation number and Langreth's formula, Eq.~\eqref{eq:sumrule}. As stated above, Langreth's formula holds at zero temperature. This regime can be accessed by TraSPI only away from resonance, whereas close to the resonance condition $\epsilon_0=0$, numerical convergence is too slow. Therefore, we perform our calculations at finite temperature, choosing $k_\mathrm{B} T = 0.2\,\Gamma$, in accordance with the previous figures. We find very good agreement away from resonance, where the influence of finite temperature on the conductance is small. The deviation of the Langreth result, Eq.~\eqref{eq:sumrule}, from the direct calculation at the resonance $\epsilon_0=0$ is fully understood as a finite-temperature effect. In conclusion, Fig.~\ref{fig:GOccComparison} gives us strong confidence in the quality of TraSPI as a numerically exact method.
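As a simple numerical companion to Eq.~\eqref{eq:sumrule}, the following sketch evaluates Langreth's formula; for the symmetric couplings $\Gamma_\mathrm L = \Gamma_\mathrm R$ assumed here the prefactor $4\Gamma_\mathrm L\Gamma_\mathrm R/(\Gamma_\mathrm L+\Gamma_\mathrm R)^2$ equals 1, and the conductance is returned in units of $2e^2/h$:

```python
import numpy as np

# Langreth's formula: linear conductance (units of 2e^2/h) from the
# dot occupation <N>, here for symmetric couplings Gamma_L = Gamma_R.
def langreth_conductance(N, Gamma_L=0.5, Gamma_R=0.5):
    prefactor = 4 * Gamma_L * Gamma_R / (Gamma_L + Gamma_R) ** 2
    return prefactor * np.sin(np.pi * N / 2) ** 2

print(langreth_conductance(1.0))  # unitary limit at half filling: 1
print(langreth_conductance(0.0))  # empty dot: 0
```

Feeding the numerically obtained occupation $\langle N\rangle$ into such a function is all that is needed for the consistency check shown in Fig.~\ref{fig:GOccComparison}.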
\section{Conclusions} \label{sec:Conclusion} \enlargethispage{3ex} In the literature, there is a plethora of methods to describe quantum transport through nanostructures, which all have their advantages and disadvantages in different regimes. Some are restricted to the linear-response regime while others can cover strong nonequilibrium situations. In scenarios with a clear hierarchy of the involved parameters, a perturbation expansion in one of them can be used. However, in real experiments very often many of the characterizing parameters, e.g., temperature, tunnel-coupling strength, and Coulomb interaction, as well as gate and bias voltage, are of the same order of magnitude. Then, numerically exact methods are desirable. In this paper, we presented TraSPI as such a numerically exact method. It is based on an iterative summation of path integrals, referred to as ISPI. The virtue of ISPI (and thus also TraSPI) is that it naturally takes into account all orders in tunneling of electrons between quantum dot and leads, allows for arbitrary bias voltages that drive the system out of equilibrium, is not restricted to either low or high temperature, and is able to include finite Coulomb interaction. While the method is numerically exact, stronger correlations increase the convergence time, such that ISPI (and TraSPI) are best suited for small to intermediate strengths of the Coulomb interaction. In previous applications of the ISPI method \cite{Mundinar_2019,Mundinar_2020}, we concentrated on spin-dependent phenomena, which show an interaction dependence already for moderate Coulomb interaction strengths. To increase the range of applicability towards stronger Coulomb interaction strengths, the efficiency of the method needs to be increased. This we do in the present paper by mapping the ISPI scheme to a transfer-matrix approach, which results in TraSPI. The major virtue of involving transfer matrices is that the stationary limit is implemented by construction.
This avoids the numerically costly extrapolation of the results of the finite-time formulation as done in ISPI. In addition, the use of transfer matrices allows for further improvements that enhance the efficiency of the method. Numerical effort is reduced by optimally choosing the position of the measurement in time. Furthermore, to minimize numerical errors, it is advantageous to analytically implement derivatives, e.g., of the current with respect to bias voltage to get the differential conductance, instead of performing a numerical derivative of the current. And finally, we reduce numerical errors by making use of the possibility to calculate the noninteracting limit analytically and to numerically calculate only the difference between the interacting and the noninteracting case. To illustrate the performance of the TraSPI method, we analyzed the differential conductance through a single-level quantum dot in both the linear and the nonlinear regime. We were able to reach values of the Coulomb-interaction strength that are sufficient to resolve, in the nonlinear-response regime, a third conductance peak in addition to the two peaks expected for noninteracting electrons. Finally, we were able to perform a quality check of TraSPI by demonstrating that Langreth's formula, which connects the zero-temperature linear conductance with the dot's occupation, is fulfilled within its range of applicability. Therefore, we are confident that the TraSPI formulation enables us to address systems, transport regimes, and effects that could not be covered by previous methods. \section{Acknowledgements} We thank S.~Weiss for fruitful discussions on the ISPI scheme. Financial funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under project 278162697 - SFB 1242 is acknowledged.
\section{Introduction} \label{intro} A rich literature has been produced on the quantum state of atoms and molecules confined into potential wells with a specified symmetry, not only as basic quantum systems but also as models for extreme high-pressure states of matter, spectroscopically active defects in solid lattices, and chemical species in molecular cages (an updated review of the state of the art is found e.g. in \cite{leykoo2014}, while applications are discussed e.g. in \cite{Raman}). As expected, much attention has been devoted to H$_2^+$, the simplest molecular system \cite{cruz2009,colin2011,leykoo1981,sarsa2012,mateos2002,gorecki1988,singh1964,micca20151,micca20152,micca20153,segal2006}. This particular case, apart from its value as a fundamental issue, may also find several applications. For example, it was studied as a model system for a molecular qubit in a solid \cite{kang2008} and as a reaction intermediate of electrochemical reactions involving molecular hydrogen on Pt electrodes \cite{juodkazis2011,juodkazyte2014}. Confined H$_2^+$ also represents the simplest model for a F$_2^+$ center in crystals \cite{henderson2006}, and these ions are actually purposely implanted into solids (e.g. KBr \cite{KBr}). Confined H$_2^+$ is also of interest for materials science and microelectronics \cite{micca20153}, being the probable outcome of H$_2^+$ ion impact on solid substrates, which are charged at a high negative voltage with respect to the plasma potential in a radio-frequency or microwave reactor with H$_2$ as the main component of the feed: these devices have attracted much attention in the last few years also for their use in materials processing \cite{hydro1,hydro2,hydro3, hydro4,diomede2012,diomede2014,diomede2017}.
Although previous studies have shown that H$_3^+$ is the main ion present in the plasma phase under typical pressure conditions for these devices, the triatomic ion dissociates above 4.37 eV and is not found inside solid lattices after exposure to plasma \cite{otte1998}. These results are also potentially relevant for studies of fusion plasma devices and future fusion reactor walls, which constantly incorporate high-energy D$^+$ and T$^+$ ions \cite{iter1,iter2}. Since very little information is available on the physical state of H$_2^+$ inside real solid lattices, simple geometric models of confinement are very appealing as a framework, or physical environment, to standardize preliminary calculations and compare results. The confined species was found to possess different properties with respect to the free one. It has been shown that confined H$_2^+$ in its most stable electronic state ($^2\Sigma_g^+$) has a shorter bond length due to electron cloud compression and that the related Potential Energy Surface (PES) has a different shape. The compressed states corresponding to the unbound $^2\Sigma_u^+$ and $^2\Pi_g$ present deep minima in their potential curves \cite{micca20151,micca20153}. Some results have been reported for non-Born-Oppenheimer calculations of confined hydrogen systems \cite{skouteris2010,sarsa2012}. However, the availability of PESs for the classical motion of the nuclei of H$_2$/H$_2^+$ in potential wells opens the wide field of vibrational level calculations and the prediction of Raman spectra to be compared to experiments and {\it ab initio} calculations. In spite of the very wide perspective for further studies and the very high rate of results production in recent years, as mentioned, previous works have, as a rule, considered only spherical and prolate spheroidal confinement. Cylindrical confinement has been considered for H$_2$ \cite{lo2005}, cylindrical and cubic confinement for the H atom \cite{micca20152}.
In this work, we report results for H$_2^+$ confined in an octahedral well, using a slightly modified version of the Monte Carlo code used by the same authors in the past. \section{Calculation method}\label{method} The method used for these calculations has already been discussed in previous works by the same authors \cite{micca20151,micca20152,micca20153}. It can be described as a diffusion Monte Carlo (DMC) method, with some differences with respect to the standard one in order to achieve adaptability to different confining polyhedra. Basically, DMC uses a diffusion process in a phase space to project an initial guess for the wave function onto the lowest energy state with a given symmetry. During the propagation along imaginary time, the energy guess is adapted. Although the DMC method is usually not considered the best choice for a few-particle problem, we have shown in our previous papers that it leads to a straightforward formulation of numerical codes for any shape of the well. Furthermore, Cartesian coordinates can be used for any new case, without the necessity of matching the geometry of the well \cite{micca20152}. Algorithms produced this way have been validated by comparison with known results \cite{micca20152}. Concerning numerical and algorithm implementation issues, they have been described in much detail elsewhere \cite{micca20153}. Accordingly, only the additional implementation issues specific to the application of the method to an octahedral well are explained in the next section. \section{The octahedral well}\label{OCt} The most relevant symmetry coordinates for a diatom inside an octahedral well are [100] (using Miller's notation), corresponding to a diatom on a C$_4$ axis, and [111], corresponding to a diatom on a C$_3$ axis (Figure \ref{fig:1}). The two cases possess a reduced symmetry with respect to the initial D$_{\infty h}$ symmetry of the isolated molecule and the O$_{h}$ symmetry of the empty potential well, being respectively D$_{4h}$ for the [100] case and D$_{3d}$ for [111]. 
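Before specializing to the octahedral geometry, the projection principle of the DMC scheme recalled in Sec.\,\ref{method} can be illustrated by a minimal one-dimensional sketch: walkers diffusing in imaginary time inside a hard-wall box, whose population decays as $\exp(-E_0\tau)$ once the excited states have died out. The box, time step and walker number below are illustrative choices, unrelated to the production code.

```python
import numpy as np

# Minimal 1D illustration of the DMC projection idea (not the production code):
# walkers diffuse in imaginary time inside an infinite square well of width
# 1 a.u.; walkers crossing the hard walls are removed, so the population
# decays as exp(-E0 * tau) once excited-state contributions have died out.
rng = np.random.default_rng(0)
dt = 2e-4                                  # imaginary-time step (a.u.)
walkers = rng.uniform(0.2, 0.8, 100_000)   # initial guess for |psi|

counts = {}
n_steps = 2000                             # total imaginary time tau = 0.4 a.u.
for step in range(1, n_steps + 1):
    walkers = walkers + rng.normal(0.0, np.sqrt(dt), walkers.size)  # diffusion
    walkers = walkers[(walkers > 0.0) & (walkers < 1.0)]            # hard walls
    if step in (n_steps // 2, n_steps):
        counts[step] = walkers.size

# after transients decay, N(tau) ~ exp(-E0 * tau): extract E0 from the slope
tau = (n_steps - n_steps // 2) * dt
E0 = -np.log(counts[n_steps] / counts[n_steps // 2]) / tau
```

The estimate approaches the exact ground-state energy of the box, $\pi^2/2 \simeq 4.93$ a.u., up to a small time-step bias from the sharp-wall killing; the same decay-rate idea, combined with population control and branching, is what full DMC implementations build on.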
\begin{figure} \centering \resizebox{0.5\textwidth}{!}{ \includegraphics{box.pdf} } \caption{Octahedral confinement of H$_2^+$. The plot displays the nuclear and barrier locations used in the calculation of the electronic eigenfunction, in the two cases considered in the paper. In the plot the confining walls are related to a cubic lattice, in order to serve as an analogy of an octahedral hole. The related Miller indices used in the calculations are shown. The two cases, as shown, are [100], where the two nuclei are placed on a C$_4$ axis of the O$_h$ well giving a D$_{4h}$ symmetry, and [111], where they are placed on a C$_3$ axis giving a D$_{3d}$ symmetry.} \label{fig:1} \end{figure} These directions are along $O_h$ symmetry axes of the barrier. We assume the following Hamiltonian: \begin{equation}\label{1} \hat{H}=-\frac{1}{2}\nabla^2+V(\textbf{r})+\frac{1}{d}+E_H \end{equation} which includes the repulsion energy of the two nuclei, which is classical in nature. The addition of $E_H$, the ionization energy of the $1s$ state of H, fixes $E_T=0$ as the asymptotic value for a free (confinement dimension $r_0 \rightarrow \infty $) ion at large $d$ (dissociation limit) in the ground electronic state. $V({\bf r})$ is the potential energy of the electron due to the nuclei and the barrier, i.e. \begin{equation}\label{2} V({\bf r}) = -\frac{1}{|{\bf r}-{\bf r}_a|}-\frac{1}{|{\bf r}-{\bf r}_b|}+V_B({\bf r}) \end{equation} In this expression ${\bf r}_a$ and ${\bf r}_b$ are the position vectors of the two nuclei, which are expressed as a function of $d$ once the Miller indices $\left [ ijk \right ]$ of the internuclear axis are fixed. $V_B$ is the barrier contribution, which equals zero inside the polyhedral well and becomes indefinitely high outside. 
With the present choice of the axis, the position of the first nucleus is given as a function of $\left [ ijk \right ]$ and $d$ by \begin{align}\label{3} x_a = d \frac{i}{2\sqrt{(i^2+j^2+k^2)}}\\ y_a = d \frac{j}{2\sqrt{(i^2+j^2+k^2)}}\\ z_a = d \frac{k}{2\sqrt{(i^2+j^2+k^2)}} \end{align} The second nucleus is then at ${\bf r}_b=-{\bf r}_a$. Octahedral confinement is obtained by using the following expression for $V_B$: \begin{equation}\label{4} V_B({\bf r}) = \begin{cases} 0 & \quad \text{if } |x|+|y|+|z|<c\\ \infty & \quad \text{if } |x|+|y|+|z|>c\\ \end{cases} \end{equation} where $c$ is the distance between the center and any vertex of the octahedral hole. In a cubic lattice $2c \sim a$, where $a$ is the lattice constant. However, the exact relation between $c$ and $a$ can be established only on the basis of a fitting, which requires a spectroscopic quantity to be determined and compared to experiments for H$_2^+$ in a given material, e.g. a Raman spectrum. \section{Results}\label{Results} For both cases, we give here only the PES for the lowest a$_{1g}$ orbital, although excited orbitals can easily be calculated as shown in \cite{micca20151,micca20152,micca20153}. Results are given here for three values of the parameter $c$, namely 4, 6 and 10 a.u. The first two are of the right order of magnitude to be relevant for applications, while the largest one is considered in order to discuss the limiting behavior for large wells. Results are reported in tables \ref{Tab1} and \ref{Tab2} and in figures \ref{fig:2} and \ref{fig:3}. These results show that the physical state of the ion in a well of the right size to mimic an octahedral hole is very different from that in the plasma phase. For example, the compression in a well with $c = 4$ a.u. drastically changes the energy of the minimum of the PES, its curvature close to the minimum and the equilibrium distance between the two nuclei (Table \ref{Tab1}). 
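For concreteness, the geometric ingredients above (nuclear positions from the Miller indices, Eq.\,\eqref{3}, and the octahedral barrier, Eq.\,\eqref{4}) can be transcribed as follows; this is an illustrative sketch, not the production code.

```python
import numpy as np

def nucleus_positions(i, j, k, d):
    """Positions r_a, r_b of the two nuclei for internuclear axis [ijk]
    and separation d (Eq. (3); the second nucleus is r_b = -r_a)."""
    n = np.array([i, j, k], dtype=float)
    r_a = 0.5 * d * n / np.linalg.norm(n)
    return r_a, -r_a

def inside_octahedron(r, c):
    """Barrier indicator, Eq. (4): True where V_B = 0, i.e. |x|+|y|+|z| < c."""
    return np.abs(r).sum() < c

def potential(r, r_a, r_b):
    """Electron potential V(r) inside the well, Eq. (2); the infinite
    barrier is handled by rejecting points outside the octahedron."""
    return -1.0 / np.linalg.norm(r - r_a) - 1.0 / np.linalg.norm(r - r_b)
```

With this parametrization the nuclei reach the barrier at $d=2c$ for [100] and $d=2c/\sqrt{3}\simeq 1.155\,c$ for [111], consistent with the largest $d$ values reported in Table \ref{Tab2}.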
Also, in agreement with previous results for confined H$_2^+$, including those by the present authors \cite{micca20151,micca20153}, it is observed that the confinement produces a strong increase of the energy for values of $d$ larger than the equilibrium one, leading to a peculiar shape of the PES, corresponding to a strongly bound state with a high vibrational frequency. An issue to be investigated here, since it was not relevant in previous calculations, is the effect of the orientation of the internuclear axis relative to the barrier. It has been found in the past that the confinement of a molecule in a well with dimensions comparable to the molecular radius increases the vibrational frequencies and leads to ``bound-like'' PESs also for originally non-bound states. The effect is due to a mechanism which can be interpreted as the ``compression'' of the electron ``cloud'', increasing the negative potential terms when the nuclei are displaced \cite{micca20151}. A different mechanism, which occurs in cases where $d$ is much larger than the molecular radius, discussed in \cite{micca20153}, is of much interest here. It can be seen that for the case $c = 10$ a.u. the molecule has enough room to practically dissociate, and the PES gets close to zero at intermediate $d$ values; at larger $d$, however, the energy increases and reaches a maximum when the nuclei are in contact with the barrier (full lines in Figs. \ref{fig:2} and \ref{fig:3}). The basis of this mechanism is that the barrier surface is a nodal surface. Its effect can be discussed in terms of symmetry. For example, in a hydrogen atom close to a dielectric surface, the $1s$ atomic state is symmetry correlated to the $2p$ state, which has the same number of radial nodes, when the nucleus is in contact with the surface. This leads to a theoretical energy of $10.2$ eV. Indeed a close energy is obtained, as expected, for the D$_{3d}$ case of figure \ref{fig:2} when the well is large enough (see also Table \ref{Tab2}). 
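The quoted value of $10.2$ eV follows directly from the hydrogen level spacing, since the $2p$ level lies at $E_H(1-1/n^2)$ above the $1s$ one:

\[
\Delta E_{1s\rightarrow 2p} = E_H\left(1-\frac{1}{2^2}\right) = 13.6\ \text{eV}\times\frac{3}{4} \simeq 10.2\ \text{eV}.
\]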
\begin{figure} \centering \resizebox{0.5\textwidth}{!}{\includegraphics{H2piu_ottaedro_facce.pdf}} \caption{Octahedral confinement: [111] direction (D$_{3d}$). H$_2^+$ potential energy as a function of the internuclear distance $d$.} \label{fig:2} \end{figure} \begin{figure} \centering \resizebox{0.5\textwidth}{!}{\includegraphics{H2piu_ottaedro_vertici.pdf}} \caption{Octahedral confinement: [100] direction (D$_{4h}$). H$_2^+$ potential energy as a function of the internuclear distance $d$.} \label{fig:3} \end{figure} Instead, in figure \ref{fig:3} it is possible to note that the PES follows a much different trend with $d$ when the nuclei approach the vertex, and it reaches a limit energy not far from the H ionization one (Table \ref{Tab2}). This region corresponds to a limit hydrogen atom inside the octahedral vertex, where four octahedral faces join, so the electron cloud is strongly compressed. The limiting atom of the H$_2^+$ dissociation is therefore in a highly excited state, close to ionization. \begin{table} \begin{tabular}{|c|c|c|c|} \hline & c (a.u.) & d$_{eq}$ (a.u.) & E$_{eq}$ (eV)\\ \hline [100] & 6 & 1.9 & -2.60\\ [111] & 6 & 1.9 & -2.59\\ \hline [100] & 4 & 1.6 & -0.29\\ [111] & 4 & 1.6 & -0.27\\ \hline \end{tabular} \caption{Energy values at the equilibrium distances, for the [100] and [111] directions.} \label{Tab1} \end{table} \begin{table} \begin{tabular}{|c|c|c|c|} \hline & c (a.u.) & d (a.u.) & E (eV)\\ \hline [100] & 10 & 20 & 12.61\\ [111] & 10 & 11.55 & 9.22\\ \hline [100] & 6 & 12 & 15.46\\ [111] & 6 & 6.93 & 10.22\\ \hline [100] & 4 & 8 & 22.81\\ [111] & 4 & 4.62 & 15.21\\ \hline \end{tabular} \caption{Energy values at the nuclei-barrier contact distance $d$, for the [100] and [111] directions.} \label{Tab2} \end{table} \section{Conclusions}\label{conc} In this paper, the confinement of a H$_2^+$ ion in an octahedral well is considered for the first time. 
From the point of view of plasma devices and materials science, this system represents a model for H$_2^+$ in an octahedral hole of a solid lattice, where the real hole is replaced by an infinite well with the same symmetry, behaving like a three-dimensional box for the electron. Orbitals are determined by solving the Schr\"odinger equation using a version of the DMC method already applied by the authors to confined H in various geometries and to spherically confined H$_2^+$. The PESs as a function of $d$ for two different orientations, [100] and [111], are reported for a few values of the well width representative of solids, in particular Pt, for which confined H$_2^+$ has been considered in the past as a reaction intermediate. The results confirm the known phenomenology of confined H$_2^+$, but allow a further discussion of the role played by the molecular orientation relative to the well shape. In perspective, ideal octahedral confinement by rigid walls, analogous to the much-studied spherical, ellipsoidal and cylindrical cases, may provide a very useful reference model for initial guesses of spectroscopic predictions to be compared with e.g. Raman or optical absorption measurements. This would provide useful information on the physical state of molecular species retained in solid lattices, with account of the real lattice symmetry; in particular, of species produced by the penetration of high-energy particles from a hydrogen plasma into reactor wall materials and substrates. In this first paper on the topic, we have not addressed the issue of excited orbitals with known nodal surfaces, which are promptly accessible using our method. Future studies could also consider less restricted geometries, where the molecular center of mass is not at the center of the well. Such an extension presents, even prior to actual calculations, interesting symmetry aspects from which to start future work. 
\section*{Acknowledgment} This research activity has been supported by the General Studies Programme of the European Space Agency under grant 4200021790CCN3. S.L. acknowledges useful e-mail discussions with C. Le Sech. \section*{References}
\section{Introduction} \label{sec:introduction} Excess point defects (PDs) are massively generated in materials driven away from equilibrium by external forces such as mechanical loading and irradiation~\cite{Martin1996,Gary2007}. This phenomenon induces fluxes of PDs toward the microstructural features (e.g., grain boundaries, dislocation lines, etc.) acting as PD sinks. The solute-PD interaction in alloys leads to a coupling between the PD and solute atom fluxes to the sinks. This flux coupling is the main kinetic process controlling the redistribution of solute atoms in alloys driven by an excess of PDs~\cite{Anthony1968,Anthony1969,Okamoto1974,Barbu1975,Okamoto1979,Kato1992,Bruemmer1999,Nastar2012,Ardell2016}. {Additionally, the external forces may introduce new mechanisms that affect the state of the system at the atomic scale. Examples of these mechanisms include the collective motion of atoms and atomic mixing. For instance, the latter mechanisms occur under irradiation producing displacement cascades~\cite{Haff1977,Motta1992,Averback1998} or during severe plastic deformation such as shearing~\cite{Vo2013,Ashkenazy2017}, torsion~\cite{Pouryazdan2012,Beach2017} and ball milling~\cite{Pochet1995,Klassen1997,Suryanarayana2004}. Unlike the thermally activated mechanisms leading the system toward equilibrium, these additional mechanisms are mostly athermal. They do not obey the microscopic detailed balance and lead to the disordering of the atomic configurations~\cite{Martin1996}. Note that both the thermally activated and the externally forced mechanisms contribute to the mass transport in driven alloys, such that the system is driven to a non-equilibrium steady state (NESS)~\cite{Martin1996}. This situation prevents the use of standard methods to compute the thermodynamic and kinetic properties. In order to understand and predict the effect of external driving forces, it is crucial to be able to model the interplay between the two above-mentioned mechanisms. 
However, there is no model of flux coupling simultaneously accounting for these two competing mechanisms at the atomic scale.} {Radiation damage is a good test case for the modeling of driven alloy thermodynamics and kinetics, because there exist microscopic models that effectively reproduce the atomic mixing caused by the external force~\cite{Martin1984,Averback1986}. Under irradiation, atoms are regularly hit by incident particles. The collision transfers kinetic energy from the incident particle to the primary knock-on atom (PKA). If this energy is below the displacement threshold energy (DTE), the PKA will only vibrate around its position, unless a PD is located nearby, in which case the atom may exchange its position with the PD~\cite{Roussel2002}. For a recoil energy well above the DTE (e.g., typically 1\,keV in metals), the PKA will move away from its original site, thereby creating Frenkel pairs and transferring kinetic energy to neighbouring atoms, which will themselves move away from their positions, and so on. Locally, this displacement cascade process produces a large excess number of PDs as well as atomic mixing, since most of the atoms are displaced~\cite{Littmark1980}. The climax of this process is called the heat spike, where the material is locally liquid-like and atoms are able to rapidly diffuse in this region~\cite{Vineyard1976}. The excess energy eventually dissipates, leading to a quench-like process in which the crystalline structure is recovered and only a small excess number of PDs remains~\cite{Brinkman1954, Benedek1987, Nordlund1999}. The whole phenomenon can be effectively modeled by (1) creating PDs and PD clusters, and (2) shuffling atomic positions. The latter process is often considered random~\cite{English1981,Pramanik1986}, but there is some evidence that it is in fact affected by thermodynamics~\cite{Workman1987}. 
Besides, recent investigations on concentrated alloys~\cite{Terentyev2006,Aidhy2015,Zhang2017} and high entropy alloys~\cite{Do2018} have shown that, during the quenching stage, the spatial distribution of the PDs with respect to the solute atoms is partially driven by the thermodynamic short range order (i.e. binding interactions). For more details concerning displacement cascades, please refer to Ref.\,\cite{Nordlund2018}.} {The mixing of atomic positions in the displacement cascade was previously modeled effectively by forced atomic relocations (FARs), which consist in forced exchanges of positions between an atom and a nearest-neighbour species, such as another atom or a PD~\cite{Martin1984,Enrique2000,Lear2017,Soisson2018}. However, details of the cascade, such as the spatial correlation between solute atoms and PDs due to the thermodynamic interactions, were neglected. Moreover, the thermal mechanism and FAR were considered separately at the macroscopic scale. For instance, the tracer diffusion coefficient of the solute atom was written as the sum of two diffusion coefficients respectively related to the two mechanisms~\cite{Gary2007}. In this case, the interplay between these two mechanisms was neglected. An improved version of the model was proposed in Ref.\,\cite{Roussel2002}, where the five-frequency model~\cite{Lidiard1955,Lidiard1960} for face-centered cubic (fcc) systems was generalized to account for both types of kinetic mechanisms simultaneously. However, solute diffusivity is the only quantity accessible from this model, so it does not provide any information on solute-PD flux coupling. Moreover, long-range FAR cannot be accounted for, since the five-frequency model is limited to first nearest neighbour (1NN) interactions.} Thermodynamic and kinetic properties such as flux coupling coefficients and tracer diffusion coefficients of an alloy can be calculated from the Onsager matrix of the transport coefficients. 
Whenever the diffusion mechanism satisfies the microscopic detailed balance, Onsager has demonstrated that this matrix is symmetric~\cite{Onsager1931-1,Onsager1931-2}. We may calculate it either from the equilibrium atomic displacement fluctuations using the Allnatt formulae~\cite{Allnatt1965,Allnatt1982}, or from the flow of matter resulting from an applied external force. However, for driven alloys including athermal mechanisms not obeying the microscopic detailed balance, we cannot compute the transport coefficients by means of a Monte Carlo numerical approach based on the Allnatt formulae. Yet, recent statistical theories have shown that it is possible to derive an effective Onsager matrix from the fluctuation theorem~\cite{Evans1993,Gallavotti1996}, though the resulting matrix is non-symmetric. These theories go beyond the linear response theory. They provide a methodology for the investigation of far-from-equilibrium kinetics. Such an approach has been applied to the study of a molecular motor driven by forced chemical reactions~\cite{Lau2007,Lacoste2008}. However, it is not directly applicable to properly model systems with FAR, because this model includes no notion of alloying effects or kinetic correlations. In the context of research on diffusion in alloys, one knows how to deal with the complexity of calculating a sequence of PD jumps, when the frequency of each jump depends on the local environment of the defect, as long as the diffusion mechanism satisfies the microscopic detailed balance~\cite{Allnatt1993,Nastar2005,Garnier2013,Garnier2014,Messina2014,Messina2016,Schuler2016,Abhinav2019}. To obtain the transport coefficients at NESS, we start from the standard self-consistent mean field (SCMF) theory~\cite{Nastar2000,Nastar2005}, which was so far applied to the calculation of transport coefficients only in systems near equilibrium, with jump mechanisms obeying the microscopic detailed balance. 
{In this paper, we generalize the SCMF theory by including the athermal mechanism of FAR. The generalized SCMF theory is implemented in the KineCluE code~\cite{Schuler2020} in order to perform automated calculations of transport coefficients. Different models of FAR are tested, and the impact of each model parameter is systematically investigated. This study allows us to understand the conditions in which FAR significantly affects the material thermodynamic and kinetic properties.} The structure of the paper is as follows. In Sec.\,\ref{Sec_Models} we introduce the model of irradiation damage and thermal diffusion. Then, we introduce the mean field kinetic model used to estimate the PD concentration under irradiation. Sec.\,\ref{sec_SCMF} is devoted to the calculation of the transport coefficients within the SCMF framework. In Sec.\,\ref{sec:results}, we focus on the comparison between the results given by different FAR models in various representative model alloys. The impact of the different model parameters is studied in Sec.\,\ref{sec:sensitivity_relocation_model}. A discussion of the results, including the limitations and possible improvements, is presented in Sec.\,\ref{Sec_Discussions}. \section{Modeling of diffusion mechanisms under irradiation} \label{Sec_Models} \subsection{Thermally activated jump frequencies}\label{subsec:thermal_jump} We use the transition state theory to model thermally activated diffusion~\cite{Vineyard1957}. 
We introduce the thermal jump frequency $\omega_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{\alpha V}$ associated with the thermally activated exchange of atom $\alpha$ and vacancy V, which brings the system from configuration $\textbf{n}$ to $\widetilde{\textbf{n}}$: \begin{equation} \omega_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{\alpha V} = \nu\,\text{exp}\left(-\frac{E^{\text{mig}}_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}}{k_B T}\right), \end{equation} where $\nu$ is the attempt frequency, $k_B$ is the Boltzmann constant, $T$ is the temperature and $E^{\text{mig}}_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}$ is the migration barrier from configuration $\textbf{n}$ to $\widetilde{\textbf{n}}$, which can be computed by means of ab initio calculations~\cite{Tucker2010,Messina2014}. This mechanism is mediated by PDs and the jump rate depends on the temperature as well as on the initial and saddle-point configurations. Note that it satisfies the principle of the microscopic detailed balance: \begin{equation} P_{\textbf{n}}\,\omega_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{\alpha V} = P_{\widetilde{\textbf{n}}}\,\omega_{\widetilde{\textbf{n}}\rightarrow \textbf{n}}^{\alpha V}, \end{equation} where $P_{\textbf{n}}$ is the probability of configuration $\textbf{n}$. { \subsection{Irradiation damage} \label{subsec:cascade_models} We follow the ideas of previous studies to model the radiation damage by FAR. The latter includes two mechanisms: (1) FAR between two randomly chosen atoms (FAR-a), which consists in exchanging the positions of two atoms on lattice sites, and (2) FAR between a randomly chosen atom and a PD (FAR-d), which consists in exchanging the positions of an atom and a vacancy (V) or a self-interstitial atom (SIA). We need to account for the removal and creation of PDs within a cascade. 
{Here we consider that the PDs ``disappear'' during the heat spike, because the material is locally a liquid-like phase in which there is no notion of PD. Later, during the quenching process, only a small fraction of the PDs ``reappears'' somewhere in the cascade area, as if they had effectively jumped to another crystalline site during the cascade. The effective result of this process is modeled by FAR-d.} For recoil energies well above the DTE, producing displacement cascades, the overall effect of the mixing is modeled by FAR characterized by a relocation distance $r$. FAR occurs at a given frequency proportional to the radiation flux.} First, for FAR-a, we assume that the probability density function $p(r)$ of the relocation distance follows an exponential decay~\cite{Enrique2000,Enrique2003,Demange2017}: \begin{equation} p\,(r) = \frac{1}{r_\text{m}}\,\text{exp}\left(-\frac{r}{r_\text{m}}\right), \end{equation} where $r_\text{m}$ is the mean relocation distance, which is related to the size of the displacement cascade. Note that the latter depends on the material and on the recoil energy of the PKA. For example, the sizes of cascades generated in metals by fast neutrons or by heavy ions typically range between 10 and 100\,\r{A}~\cite{English1981,Phythian1995}. At the atomic scale, the relocation distance $r$ is discrete and is equal to one of the $i$-th NN distances. We define the probability mass function $\mathcal{P}(i)$ so that the distribution $p(r)$ in the interval $[r_{i},r_{i+1}]$ is averaged to the $i$-NN point: \begin{equation} \mathcal{P}(i)=\int_{r_i}^{r_{i+1}}p\,(r)\,\text{d}r, \end{equation} where $r_i$ corresponds to the $i$-NN distance. In practice, we consider only a finite set of nearest neighbours, meaning that there is a cut-off relocation distance $L$-NN beyond which the probability is set to 0. 
In this case, we define the normalized probability mass function $\mathcal{P}_L(i)$ as: \begin{equation}\label{eq:model_2_distribution} {\mathcal{P}}_L(i) = \frac{{\mathcal{P}}(i)}{\sum_{s=0}^{L}{\mathcal{P}}(s)}. \end{equation} We also introduce a simplified model associated with a single relocation distance $r_m$, because it gives access to an analytical solution. {Here we ignore FAR-d of SIAs; this assumption is justified in Sec.\,\ref{subsec:forced_relocation}. Therefore, we consider only FAR-d of vacancies. We propose two categories of FAR-d models: either the same relocation model employed for FAR-a, or a model favoring the relocation sites close to the solute atoms in the case of attractive binding energies between the vacancy and the solute atoms. The latter model makes sense because, in the quench-like process at the end of the displacement cascade, the remaining PDs form preferentially where their formation energy is the lowest, that is, in the vicinity of solute atoms.} In order to represent both categories, we introduce three models: Models 1 and 2 for the first category, and Model 3 for the second category, including a thermodynamic effect on FAR-d. Model 1 includes a single relocation distance for both the solute and the vacancy, while Model 2 includes an exponential law for the relocation distance (Eq.\,\eqref{eq:model_2_distribution}) for both species. {Model 3 is similar to Model 2, the only difference being that when the relocated vacancy lands at a distance lower than a threshold value $R_c$ from the solute atom B, the vacancy is systematically exchanged with an atom randomly chosen among the 1NN atoms of B (chemically biased FAR-d). For recoil energies below the DTE, the effective result of the sub-threshold collision is modeled by FAR-d only. The FAR-d model is then the same as in Model 1, while FAR-a is not performed. The relocation distance is set to the 1NN distance $r_1$. 
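As a numerical illustration of the discretization above, the following sketch evaluates $\mathcal{P}(i)$ and the renormalized $\mathcal{P}_L(i)$ for the exponential relocation law; the fcc shell radii used here are illustrative assumptions, not parameters of our code.

```python
import numpy as np

def shell_probabilities(radii, r_m, L):
    """Discretize p(r) = exp(-r/r_m)/r_m onto neighbour shells.

    radii: increasing shell distances r_0 < r_1 < ... (r_0 = 0, on-site);
    P(i) is the exact integral of p(r) over [r_i, r_{i+1}]; shells beyond
    the cut-off L are dropped and the remainder is renormalized (P_L)."""
    r = np.asarray(radii, dtype=float)
    survival = np.exp(-r / r_m)           # integral of p from r to infinity
    P = survival[:-1] - survival[1:]      # probability mass of each shell
    P_L = P[:L + 1] / P[:L + 1].sum()     # keep shells 0..L, renormalize
    return P_L

# illustrative fcc shell radii, in units of the lattice parameter a0
radii = np.array([0.0, np.sqrt(0.5), 1.0, np.sqrt(1.5), np.sqrt(2.0), np.sqrt(2.5)])
probs = shell_probabilities(radii, r_m=1.0, L=4)
```

By construction the renormalized masses sum to one and decrease monotonically with the shell index, as the exponential law requires.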
} \subsection{FAR frequencies}\label{subsec:forced_relocation} {The FAR-d frequency is denoted $\Gamma^\text{ad}$, and the FAR-a frequency is denoted $\Gamma^\text{aa}$. When the recoil energy is above the DTE, the relocation frequency $\Gamma^{aa}$ can be deduced from the radiation dose rate $\phi$ based on the ion-beam mixing framework~\cite{Haff1977,Averback1986}. In our model, FAR-a reproduces the mixing of atoms in the displacement cascade, which is related to the number of PDs produced by the PKA. After the quenching phase, only a small fraction of the defects survives, which defines the unit of displacements per atom (dpa). Therefore, there is a factor $n_\text{rel}$ relating $\Gamma^\text{aa}$ to the radiation dose rate in units of dpa/s (see Eq.\,\eqref{eq:n_rel}): \begin{equation} \label{eq:n_rel} \Gamma^\text{aa}=n_\text{rel}\,\phi. \end{equation} From the literature, we set $n_\text{rel}= 100$~\cite{Riviere1983,Averback1986,Muller1988}. The latter number varies with the alloy thermodynamics, due to the thermal effect on the atomic mixing rate in the cascade~\cite{Workman1987}. The frequencies of FAR-a and FAR-d depend on the number of cascades formed per unit of time, as stated in Section\,\ref{subsec:cascade_models}. Therefore, $\Gamma^\text{aa}$ and $\Gamma^\text{ad}$ are both proportional to the dose rate. Hence, they are proportionally related by: \begin{equation} \Gamma^\text{ad} = \gamma\,\Gamma^\text{aa}, \end{equation} with $\gamma$ the proportionality constant. Note that $\gamma$ is set to 1 if not specified, i.e. $\Gamma^{ad}=\Gamma^{aa}=\Gamma$. Sensitivity studies concerning the value of $\gamma$ are shown in Sec.\,\ref{subsec:sensibility_PD_frequency}. When the recoil energy is below the DTE, the sub-threshold irradiation does not induce FAR-a, because no displacement cascade is produced, thus $\Gamma^{aa}=0$. In this case, $\Gamma^\text{ad}$ is not related to $\Gamma^{aa}$ and is directly deduced from the recoil energy. 
} Note that the maximum dose rate under realistic irradiation conditions is around 1\,dpa/s~\cite{Gary2007}, thereby leading to a relocation frequency of about 100\,s$^{-1}$. The latter is still very small compared to the thermal jump frequency of the SIA, even at low temperature. For instance, the SIA thermal jump frequency in pure nickel at 300\,K is around $10^{10}$\,s$^{-1}$, according to the atomic diffusion data given in Ref.\,\cite{Tucker2010}. Therefore, we do not expect an important impact of FAR-d on the SIA-mediated diffusion properties, and we consider only FAR-d with vacancies, as stated in Sec.\,\ref{subsec:forced_relocation}. Yet, we emphasize that the extension of our framework to account for FAR-d of SIAs is straightforward. \subsection{Point defect concentration}\label{subsec:PD_concentration} The global concentration of PDs varies under irradiation, mainly due to the production of Frenkel pairs, the mutual recombination of SIAs and V, and the elimination of PDs at PD sinks such as grain boundaries and dislocations. The vacancy concentration at NESS, $C_{V}^{\text{ness}}$, is estimated from a rate theory model~\cite{Russell1984,Schuler2017-2}: \begin{align} \label{eq:V-phi} C_{V}^{\text{ness}}=C_{V}^{\text{eq}}-\dfrac{k^2\Omega}{8\pi r_c}+\sqrt{\left(\dfrac{k^2\Omega}{8\pi r_c} \right) ^2 + \dfrac{\phi\,\Omega}{4\pi r_c D_V} }, \end{align} where $C_{V}^{\text{eq}}$ is the thermal vacancy concentration at equilibrium and $r_c$ is the SIA-V recombination radius, usually assumed to be of the order of the lattice parameter $a_0$. $\Omega$ is the atomic volume. $\phi$ is the radiation dose rate. $k^2$ is the sink strength, assumed to be independent of the radiation dose rate, with typical values ranging from $10^{12}$ to $10^{19}$~m$^{-2}$~\cite{Soisson2016}, and $D_V$ is the vacancy diffusion coefficient. 
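The rate-theory balance of Eq.\,\eqref{eq:V-phi} is straightforward to evaluate numerically; a minimal sketch follows (the numerical values in the example are purely illustrative, not fitted to any material):

```python
import math

def c_v_ness(c_v_eq, phi, D_V, k2, r_c, Omega):
    """Steady-state vacancy concentration from the rate-theory balance:
    C_V^ness = C_V^eq - A + sqrt(A**2 + phi*Omega / (4*pi*r_c*D_V)),
    with A = k2*Omega / (8*pi*r_c); consistent SI units are assumed."""
    A = k2 * Omega / (8.0 * math.pi * r_c)
    return c_v_eq - A + math.sqrt(A * A + phi * Omega / (4.0 * math.pi * r_c * D_V))

# sanity check: without irradiation (phi = 0) the thermal value is recovered
c0 = c_v_ness(c_v_eq=1e-7, phi=0.0, D_V=1e-12, k2=1e15, r_c=3e-10, Omega=1.2e-29)
```

As expected, $C_V^{\text{ness}}$ reduces to $C_V^{\text{eq}}$ when $\phi \rightarrow 0$ and grows monotonically with the dose rate.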
Note that the equilibrium concentration $C_{V}^{\text{eq}}$ is obtained from the vacancy formation enthalpy $H_V^\text{f}$ and entropy $S_V^\text{f}$ as: \begin{equation} C_{V}^{\text{eq}}=\text{exp}\left( -\frac{H_V^\text{f}-T\,S_V^\text{f}}{k_B\,T} \right). \end{equation} We may then replace the dose rate $\phi$ by its expression in terms of $\Gamma$ from Eq.\,\eqref{eq:n_rel} in Eq.\,\eqref{eq:V-phi}, leading to a direct relationship between the vacancy concentration at NESS and $\Gamma$. \section{Diffusion theory}\label{sec_SCMF} For the total exchange frequencies, we use the notation: \begin{equation} \label{eq:total_frequencies} W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}=W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AV}+W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{BV}+W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AB}, \end{equation} where $W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AV}=\omega_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AV}+\Gamma_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AV}$, $W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{BV}=\omega_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{BV}+\Gamma_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{BV}$ and $W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AB}=\Gamma_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AB}$. Note that, although the relocation frequencies do not depend on the configurations before and after the exchange, for the sake of clarity we choose to follow the notation of the thermal jump frequencies. 
We start with a Master Equation expressing the fact that the time evolution of the probability distribution of the configurations is controlled by the transition probabilities between pairs of configurations: \begin{equation} \label{eq:master_equation} \dfrac{\text{d} }{\text{d}t} \bm{P}=\bm{W} \bm{P}, \end{equation} where $\bm{W}$ is a matrix with components $\bm{W}_{\textbf{n}\widetilde{\textbf{n}}} = W_{\widetilde{\textbf{n}}\rightarrow \textbf{n}}$ if $\textbf{n}\neq \widetilde{\textbf{n}}$ and $\bm{W}_{\textbf{n}\textbf{n}}=-\sum_{\widetilde{\textbf{n}}\neq \textbf{n}}W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}$, and $\bm{P}=(P_\textbf{n})$ is the vector of the probabilities of the configurations ($\textbf{n}$). {For now, the recombination reactions between SIA and V are introduced at the upper scale, within the mean field rate theory model of the average PD concentrations (see Sec.\,\ref{subsec:PD_concentration}). These athermal events are not treated on the same footing as FAR, because SIA and V are considered to be well separated at the end of the displacement cascade in the dilute approximation~\cite{Nordlund2018}. In this case, the SIA-V recombination requires long-range diffusion and is therefore not incorporated in the microscopic Master Equation. } We explain below the method we use to determine the dynamical short range order (SRO) parameters at NESS and the diffusion properties from the Master Equation. \subsection{Dynamical short range order}\label{subsec:SRO} Starting from the thermal equilibrium state, the combination of thermal jumps and FAR leads the system to NESS. The latter state is characterized by dynamical SRO parameters which depend on the FAR frequencies and on the thermal jump frequencies. 
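As a toy illustration, the stationary solution of such a Master Equation can be computed on a small configuration space; the three-state rates below mix Arrhenius jumps (which alone would satisfy the microscopic detailed balance) with a uniform athermal channel, and all numbers are invented for illustration only.

```python
import numpy as np

# Toy 3-configuration chain: thermal jumps (Arrhenius with a common saddle
# energy, hence obeying detailed balance) plus a uniform athermal channel
# of frequency Gamma that breaks it.
E = np.array([0.0, 0.1, 0.3])    # configuration energies (eV), illustrative
kT, nu, Gamma = 0.05, 1.0, 0.5   # temperature, attempt frequency, FAR frequency

W = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        if i != j:
            E_saddle = max(E[i], E[j]) + 0.2                        # common saddle
            W[i, j] = nu * np.exp(-(E_saddle - E[j]) / kT) + Gamma  # rate j -> i
W -= np.diag(W.sum(axis=0))      # master-equation diagonal: columns sum to zero

# stationary state: solve W P = 0 together with the normalization sum(P) = 1
A = W.copy()
A[-1, :] = 1.0
P_ness = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
```

Because of the athermal channel, the resulting distribution deviates from the Boltzmann one while still satisfying the stationary condition $\bm{W}\bm{P}=0$, which is the defining feature of a NESS.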
We define them from the configurational probabilities, deduced from a stationary condition applied to the Master Equation (Eq.\,\eqref{eq:master_equation}), also called the global detailed balance condition: \begin{equation} \label{eq:global_detailed_balance} \forall \textbf{n},~ \sum_{\widetilde{\textbf{n}}}W_{\widetilde{\textbf{n}}\rightarrow \textbf{n}}P_{\widetilde{\textbf{n}}} - W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}P_{\textbf{n}} = 0. \end{equation} The solution of Eq.\,\eqref{eq:master_equation} at NESS is denoted $\bm{P^\text{ness}}=(P_\textbf{n}^\text{ness})$. {The SRO parameter for configuration $\textbf{n}$ is defined as the ratio between the configurational probability $P_\textbf{n}^\text{ness}$ and that of a reference configuration, denoted $P_{0}^\text{ness}$.} \subsection{Transport coefficients} \label{subsec_Lij} The phenomenological transport coefficients ($\lambda_{\alpha\beta}$) are fundamental parameters to describe the diffusion of chemical species ($\alpha$, $\beta$) in an alloy at the macroscopic scale. Fluxes of chemical species ($J_\alpha$) are proportional to these coefficients: \begin{equation} \overrightarrow{J}_\alpha=-\sum_\beta \lambda_{\alpha\beta}\frac{\overrightarrow{\nabla}\mu_\beta}{k_B T}, \end{equation} where $\nabla\mu_\beta$ is the driving force deviating the system from equilibrium. Starting from NESS, we apply a small gradient of chemical potential and compute the resulting fluxes of atoms and vacancies. Here we extend the SCMF model to jump mechanisms not obeying the microscopic detailed balance. The SCMF theory was first proposed to study the diffusion process with atomic jumps following the principle of microscopic detailed balance~\cite{Nastar2005}, but this principle is broken by FAR. Following the nomenclature of Ref.\,\cite{Nastar2005}, the configuration is defined by a vector $\textbf{n}$. The latter consists of the occupation numbers of all species on all sites, i.e.
\{$n^{\text{A}}_1$,$n^{\text{B}}_1$,$n^{\text{V}}_1$; $n^{\text{A}}_2$,$n^{\text{B}}_2$,$n^{\text{V}}_2$; ...\}, with $n^\alpha_i$ equal to one if the site $i$ is occupied by species $\alpha$ and zero otherwise. {The transition from configuration $\textbf{n}$ to $ \widetilde{\textbf{n}}$ is realized by thermally activated jumps or FAR, with a total frequency ${W}_{\textbf{n} \rightarrow \widetilde{\textbf{n}}}$.} Within the standard SCMF theory in Ref.\,\cite{Nastar2005}, $P_\textbf{n}(t)$, the non-equilibrium distribution function of configuration $\textbf{n}$, is expressed as the product of the equilibrium probability $P^\text{eq}_\textbf{n}$ and a non-equilibrium contribution. Here we choose the reference state to be NESS, and replace $P^\text{eq}_\textbf{n}$ by the probability distribution function $P^{\text{ness}}_\textbf{n}$: \begin{equation} P_\textbf{n}(t) = P^{\text{ness}}_\textbf{n}\times\delta P_\textbf{n}(t). \end{equation} The Master Equation (see Eq.\,\eqref{eq:master_equation}) is written for a given configuration $\textbf{n}$ as \begin{equation} \label{eq:master_equation_microscopic} \frac{\text{d}P_\textbf{n}(t)}{\text{d}t}=\sum_{\widetilde{\textbf{n}}}\left[ {W}_{{\widetilde{\textbf{n}} \rightarrow \textbf{n}}} P^{\text{ness}}_{\widetilde{\textbf{n}}}\delta P_{\widetilde{\textbf{n}}}(t) - {W}_{\textbf{n} \rightarrow \widetilde{\textbf{n}}} P^{\text{ness}}_\textbf{n}\delta P_\textbf{n}(t) \right]. \end{equation} By applying the global detailed balance condition, i.e. Eq.\,\eqref{eq:global_detailed_balance}, we obtain a reformulation of the Master Equation: \begin{equation} \label{eq:master_equation_microscopic_2} \frac{\text{d}P_\textbf{n}(t)}{\text{d}t}=\sum_{\widetilde{\textbf{n}}} {W}_{{\widetilde{\textbf{n}} \rightarrow \textbf{n}}} P^{\text{ness}}_{\widetilde{\textbf{n}}}\left[ \delta P_{\widetilde{\textbf{n}}}(t) - \delta P_\textbf{n}(t) \right].
\end{equation} Note that the standard SCMF theory relies on the microscopic detailed balance (${W}_{{\widetilde{\textbf{n}} \rightarrow \textbf{n}}} P^{\text{ness}}_{\widetilde{\textbf{n}}}={W}_{\textbf{n} \rightarrow \widetilde{\textbf{n}}} P^{\text{ness}}_\textbf{n}$). In that case, it is equivalent to consider the transition probabilities entering or exiting a given configuration. When the microscopic detailed balance is not satisfied, the transition frequencies to be retained are those entering a given configuration. The derivation of the transport coefficients from the Master Equation (Eq.\,\eqref{eq:master_equation_microscopic_2}) is similar to that of the standard SCMF theory in Refs.\,\cite{Nastar2000,Nastar2005}. It is summarized in the Appendix. \subsection{SCMF theory under first shell approximation}\label{subsec:1NN} Here we focus exclusively on the diffusion properties of a dilute binary model alloy A(B): a host matrix of atoms A containing a single solute atom of species B and a single vacancy. The crystallographic structure is chosen to be a fcc crystal. As explained in Sec.\,\ref{subsec:cascade_models}, we consider the vacancy as the only type of PDs. Our purpose is to extend the SCMF theory to include athermal FAR mechanisms. In order to derive analytical transport coefficients, we start with a first shell approximation. This approximation consists of neglecting the kinetic coupling and thermodynamic interactions between B and V when the distance between both species is beyond the 1NN distance. FAR-a and FAR-d are restricted to exchanges between 1NN sites only. In such a dilute alloy, there are five different atom-vacancy thermal exchange frequencies ($\omega_{i=0,1,2,3,4}$) which we designate following Lidiard's nomenclature~\cite{Lidiard1955} (see Fig.\,\ref{Fig:atomic_jumps_vac_nomencaltures}), to which we add four FAR-a frequencies ($\Gamma_{i=0,1,3,4}^{AB}$) and five FAR-d frequencies ($\Gamma_{i=0,1,2,3,4}^{AV}$ and $\Gamma_{2}^{BV}$). {The total B-V exchange frequency is denoted $W_2^{BV}$.
The total A-V and A-B exchanges conserving the 1NN distance between B and V are respectively denoted $W_1^{AV}$ and $W_1^{AB}$. The total A-V and A-B exchanges dissociating the B-V pair are respectively denoted $W_3^{AV}$ and $W_3^{AB}$. The total A-V and A-B exchanges associating the B-V pair are respectively denoted $W_4^{AV}$ and $W_4^{AB}$, and all the other A-V and A-B exchanges far from the solute atom B are respectively denoted $W_0^{AV}$ and $W_0^{AB}$. Here we recall that: \begin{eqnarray} W_i^{AV} && = \omega_i+\Gamma_{i}^{AV} = \omega_i+\Gamma^\text{ad}, \text{ for }i=0,1,3,4 ;\nonumber\\ W_i^{AB} && = \Gamma_{i}^{AB} = \Gamma^\text{aa}, \text{ for }i=0,1,3,4 ;\nonumber\\ W_2^{BV} && = \omega_2+\Gamma_{2}^{BV} = \omega_2+\Gamma^\text{ad}. \end{eqnarray} } \begin{figure} \centering \includegraphics[width=1\linewidth]{Fig_atomic_jumps_vac.eps} \caption{(Color online) Illustration of all the possible transitions in dilute fcc alloys including 1NN exchanges between atoms and between vacancy and atoms. Red hollow squares designate vacancies V, red filled circles designate solute atoms B, grey filled or hollow circles designate solvent atoms A. }\label{Fig:atomic_jumps_vac_nomencaltures} \vspace{-0.em} \end{figure} Within the first shell approximation, two configurational probabilities are considered: $P^{\text{ness}}_1$ for the configuration where B and V are located at 1NN and $P^{\text{ness}}_0$ for the dissociated configuration where B and V are beyond 1NN.
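With the total frequencies recalled above, the NESS ratio $P^{\text{ness}}_1/P^{\text{ness}}_0$ reduces to the ratio of the total association and dissociation frequencies, $(\omega_4+\Gamma^\text{ad}+\Gamma^\text{aa})/(\omega_3+\Gamma^\text{ad}+\Gamma^\text{aa})$. A minimal numerical sketch (the function name is ours):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def sro_1nn(e_b, omega_3, gamma_ad, gamma_aa, temperature):
    """Dynamical 1NN SRO parameter P1/P0 at NESS: ratio of the total
    association frequency W4 = omega_4 + Gamma^ad + Gamma^aa to the total
    dissociation frequency W3, with omega_4 / omega_3 = exp(E_b / k_B T)."""
    omega_4 = omega_3 * math.exp(e_b / (K_B * temperature))
    return (omega_4 + gamma_ad + gamma_aa) / (omega_3 + gamma_ad + gamma_aa)
```

Both limits are immediate: the thermal value $\text{exp}(E_b/k_BT)$ when the FAR frequencies vanish, and $1$ when FAR dominates; switching on the FAR-a channel ($\Gamma^\text{aa}>0$) accelerates the decrease.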
The analytical expression of the 1NN-SRO is given by: \begin{eqnarray}\label{eq:SRO_SCMF} \frac{P^{\text{ness}}_1}{P^{\text{ness}}_0} \, &&= \frac{\omega_4+\Gamma^\text{ad}+\Gamma^\text{aa}}{\omega_3+\Gamma^\text{ad}+\Gamma^\text{aa}} \nonumber\\ &&= \frac{\text{exp}\left(E_b/k_B T\right)+(\Gamma^\text{ad}+\Gamma^\text{aa})/\omega_3}{1+(\Gamma^\text{ad}+\Gamma^\text{aa})/\omega_3}, \end{eqnarray} where $E_b$ is the B-V 1NN binding energy, which is deduced from the ratio of thermal frequencies, with $\text{exp}\left(E_b/k_B T\right)=\omega_4/\omega_3$. Note that $P^{\text{ness}}_1/P^{\text{ness}}_0$ is a SRO parameter revealing the binding tendency of B and V at the 1NN distance. As $\Gamma$ increases, the SRO parameter decreases towards $1$. Note that the decrease is faster in the above-threshold situation than in the sub-threshold one, simply because two relocation frequencies, $\Gamma^\text{ad}$ and $\Gamma^\text{aa}$, then contribute to it. The expressions of the phenomenological coefficients $\lambda_{BV}$, $\lambda_{VB}$, $\lambda_{VV}$ and $\lambda_{BB}$ in a dilute binary fcc alloy are given by \begin{eqnarray} \label{eq:SCMF_1NN_LVB} \lambda_{VB}= && - \frac{a_0^2}{4}\,{C_{BV}^p} \left[W_2^{BV} - \frac{ \Lambda_4^{B} (\Lambda_3^{V}+\Lambda_4^{V}) }{\Lambda}\right] ,\\ \label{eq:SCMF_1NN_LBV} \lambda_{BV}= && - \frac{a_0^2}{4}\,{C_{BV}^p} \left[W_2^{BV} - \frac{ \Lambda_4^{V} (\Lambda_3^{B}+\Lambda_4^{B}) }{\Lambda}\right] ,\\ \label{eq:SCMF_1NN_LVV} \lambda_{VV}= && \frac{a_0^2}{4} \left\{{C_V^m W_0^{AV}} + {C_{BV}^p} \left[ W_2^{BV} - \frac{ \Lambda_4^{V} (\Lambda_3^{V}+\Lambda_4^{V}) }{\Lambda}\right]\right\},\nonumber\\ \label{eq:SCMF_1NN_LBB} \lambda_{BB}= && \frac{a_0^2}{4} \left\{{C_B^m W_0^{AB}} + {C_{BV}^p} \left[ W_2^{BV} - \frac{ \Lambda_4^{B} (\Lambda_3^{B}+\Lambda_4^{B}) }{\Lambda}\right]\right\},\nonumber \end{eqnarray} {where $\Lambda=7W_3 + 2W_1 + 2W_2^{BV}$, $\Lambda_3^{\alpha} = 3 W_3^{A\alpha} - 2 W_1^{A\alpha} - W_2^{BV}$ and
$\Lambda_4^{\alpha} = 3 W_4^{A\alpha}P^\text{ness}_0/P^\text{ness}_1 - 2 W_1^{A\alpha} - W_2^{BV}$ $(\text{for}~\alpha=B,V)$, with $W_{i}=W_{i}^{AV}+W_{i}^{AB}$ for $i=0,1,3,4$. $C_{BV}^p$ is the concentration of B-V pairs at the 1NN distance and $C_V^m$ (resp. $C_B^m$) is the concentration of isolated V (resp. B). These concentrations can be deduced from the total concentrations of B and V (resp. $C_B$ and $C_V$) by a low temperature expansion formalism \cite{Sykes1973,Ducastelle1991,Schuler2017}: \begin{equation} \left\{ \begin{array}{lr} C_{BV}^p= C_B^0 C_V^0 z^\text{ness} \\ C_V^m = C_V-C_{BV}^p \\ C_B^m = C_B-C_{BV}^p, \end{array} \right. \end{equation} with $C_B^0$ and $C_V^0$ obtained by solving the following system of equations: \begin{equation} \left\{ \begin{array}{lr} C_B = C_B^0 + C_B^0 C_V^0 \left(z^\text{ness}-z_0\right) \\ C_V = C_V^0 + C_B^0 C_V^0 \left(z^\text{ness}-z_0\right), \end{array} \right. \end{equation}}where $z^\text{ness}=12P^\text{ness}_1/P^\text{ness}_0$ is the effective partition function at NESS and $z_0=12$. Note that the term $\Lambda_{m=3,4}^V$ (resp. $\Lambda_{m=3,4}^B$) is related to the vacancy (resp. solute atom) mobility since it contains all the vacancy (resp. solute atom) jump mechanisms including A-V (resp. A-B) and B-V (resp. V-B) exchanges. At equilibrium, $\Lambda_{3}^V=\Lambda_{4}^V$ and $\Lambda_{3}^B=\Lambda_{4}^B$ due to the microscopic detailed balance. Hence the two off-diagonal equilibrium coefficients $\lambda_{VB}$ and $\lambda_{BV}$ are equal, in agreement with the Onsager reciprocal relations. In addition, $\lambda_{VV}$ (resp. $\lambda_{BB}$) can be separated into two parts: $C_V^m W_0^{AV}$ (resp. $C_B^m W_0^{AB}$) and the rest. The latter represents the exchanges of the B-V pair at 1NN distance while the former represents the hops of the isolated V (resp. B). In the case of sub-threshold irradiation for which there is no direct exchange between atoms (i.e.
$\Gamma^\text{aa}=0$), the off-diagonal coefficients are equal and, from Eqs.\,\eqref{eq:SCMF_1NN_LVB} and \eqref{eq:SCMF_1NN_LBV}, we get: \begin{equation}\label{eq:LBV_sub} \lambda_{BV}= \lambda_{VB}= - \frac{a_0^2 C_{BV}^p}{4} \,\frac{W_2^{BV}\left(13W_3^{AV}-2W_1^{AV}\right) }{7 W_3^{AV} + 2 W_1^{AV} + 2 W_2^{BV}}. \end{equation} Although the microscopic detailed balance is broken for the individual exchange frequencies $\omega_3$ and $\Gamma^\text{ad}$, it still holds for the sum of the latter frequencies, that is $W_3=W_3^{AV}=\omega_{3}+\Gamma^\text{ad}$ (see Eq.\,\eqref{eq:SRO_SCMF}). By replacing the total frequencies by the corresponding thermally activated jump frequencies, and replacing the dynamical SRO at NESS by the equilibrium SRO, the transport coefficients turn out to be equivalent to the Onsager coefficients $L_{BV}$ of the five-frequency model within the first shell approximation~\cite{Howard1964}. The variation of $\lambda_{BV}$ with $\Gamma^\text{ad}$ depends on the full set of thermally activated jump frequencies. When $\Gamma^\text{ad}$ is dominant over all $\omega_i$: $\lambda_{BV}\sim -C_B C_V \Gamma^\text{ad}$. Note that if $13\omega_{3}>2\omega_1$ ($L_{BV}<0$), then $\lambda_{BV}$ remains negative whatever the magnitude of the relocation frequencies. Otherwise, a change of sign of $\lambda_{BV}$ can be observed when $\Gamma^\text{ad}\simeq -(13\omega_{3}-2\omega_1)/11$. Therefore, when a solute atom is dragged by a vacancy, FAR-d may change the sign of the solute-vacancy flux coupling and destroy the solute drag effect. In the opposite case, when $L_{BV}$ is negative, FAR-d does not change the sign of the solute-vacancy flux coupling. We now consider the case of above-threshold irradiation. Then FAR have two contributions: FAR-a and FAR-d, with $\Gamma^\text{aa}=\Gamma$ and $\Gamma^\text{ad}=\gamma\Gamma$.
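The possible sign change of $\lambda_{BV}$ discussed above can be checked directly on Eq.\,\eqref{eq:LBV_sub}; the sketch below drops the positive prefactor $a_0^2 C_{BV}^p/4$ (the function name is ours):

```python
def lambda_bv_sub(omega_1, omega_2, omega_3, gamma_ad):
    """Off-diagonal coefficient lambda_BV = lambda_VB under sub-threshold
    irradiation, Eq. (LBV_sub), up to the positive prefactor a0^2 C_BV^p / 4.
    Total frequencies: W_i = omega_i + Gamma^ad."""
    w1, w2, w3 = (omega_1 + gamma_ad, omega_2 + gamma_ad, omega_3 + gamma_ad)
    return -w2 * (13.0 * w3 - 2.0 * w1) / (7.0 * w3 + 2.0 * w1 + 2.0 * w2)
```

When $13\omega_3<2\omega_1$ (solute drag at equilibrium, $\lambda_{BV}>0$), the coefficient vanishes at $\Gamma^\text{ad}=(2\omega_1-13\omega_3)/11$ and becomes negative beyond, illustrating the destruction of the drag effect by FAR-d.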
The off-diagonal terms $\lambda_{BV}$ and $\lambda_{VB}$ are not equal and their difference $\Delta\lambda=\lambda_{VB}-\lambda_{BV}$ is given by: \begin{equation} \Delta\lambda=\frac{3a_0^2 C^p_{BV}}{4}\,\frac{(1-\omega_3/\omega_4)[\omega_a-(1-\gamma)\Gamma]\,\omega_4\,\Gamma}{(\omega_b+11\gamma\Gamma+9\Gamma)\left[\omega_4+(1+\gamma)\Gamma\right]}, \end{equation} with \begin{eqnarray} \omega_a&&=2\omega_2+2\omega_1-3\omega_3, \nonumber \\ \omega_b&&=7\omega_3+2\omega_1+2\omega_2. \end{eqnarray} Note that $\Delta\lambda=0$ in the two extreme cases when thermal jumps ($\omega$) are dominant (i.e. $\Gamma/\omega\rightarrow 0$) or negligible ($\Gamma/\omega\rightarrow \infty$). The sign of $\Delta\lambda$ is determined by the product $(1-\omega_3/\omega_4)[\omega_a-(1-\gamma)\Gamma]$. If $\gamma=1$, this product involves thermal jump frequencies only. The first factor is directly related to the equilibrium SRO parameter: $(1-\omega_3/\omega_4)$ is positive if the vacancy and the solute atom attract each other and negative otherwise. The higher the thermodynamic attraction, the smaller the ratio $\omega_3/\omega_4$, and the larger the difference $\Delta\lambda$. \subsection{Extension of the KineCluE code} \label{subsec:KineCluE} {For a more precise calculation beyond the first shell approximation, we consider each configuration where V and B are located at a distance smaller than the kinetic radius $R_k$ as a B-V pair configuration. At distances larger than $R_k$, B and V are considered as isolated monomers. Therefore, three cluster contributions are included: the B monomer, the V monomer and the B-V pair. Note that the calculation under the first shell approximation performed in Sec.\,\ref{subsec:1NN} corresponds to the particular case where the kinetic radius is set equal to the 1NN distance. The calculation of the cluster transport coefficients is performed using the KineCluE code~\cite{Schuler2020}.
The latter accounts for all the kinetic paths within a pair cluster defined by radius $R_k$.} Note that the kinetic radius can be set well beyond the 1NN distance in KineCluE. This allows us to perform converged calculations of the cluster transport coefficients including long-distance FAR as well as long-range kinetic correlations. In order to use NESS as the reference state, a module is added to the code which calculates the NESS probability distribution by solving Eq.\,\eqref{eq:master_equation}. Besides, the microscopic detailed balance principle underlying the code is replaced by the global detailed balance condition (Eq.\,\eqref{eq:global_detailed_balance}). Detailed descriptions will be published elsewhere. {Models 1, 2 and 3 presented in Section\,\ref{subsec:cascade_models} have been introduced into KineCluE. Note that the cluster radius $R_c$ in Model 3 is set equal to $R_k$ for simplicity.} \subsection{Comparison between KineCluE results and Monte Carlo simulations} \label{subsec:AKMC_and_validation} { As mentioned in the introduction (Sec.\,\ref{sec:introduction}), as soon as one of the microscopic diffusion mechanisms ($W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AV}$ or $W_{\textbf{n}\rightarrow \widetilde{\textbf{n}}}^{AB}$) does not obey the microscopic detailed balance, we cannot use the Allnatt formulae~\cite{Allnatt1965,Allnatt1982} to extract the phenomenological transport coefficients from atomistic kinetic Monte Carlo (AKMC) simulations. However, for a binary alloy with solute-point defect interactions restricted to 1NN pairwise interactions, we have shown in Sec.\,\ref{subsec:1NN} that detailed balance is fulfilled in the case of sub-threshold irradiation. Therefore, in this specific case, we may rely on the Allnatt formulae to obtain the Onsager matrix of the transport coefficients.
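Coming back to the asymmetry $\Delta\lambda=\lambda_{VB}-\lambda_{BV}$ derived in Sec.\,\ref{subsec:1NN}, its two vanishing limits can be checked numerically; the sketch below drops the positive prefactor $3a_0^2 C^p_{BV}/4$ (the function name is ours):

```python
def delta_lambda(omega_1, omega_2, omega_3, omega_4, big_gamma, gamma):
    """Asymmetry lambda_VB - lambda_BV in the above-threshold regime
    (Gamma^aa = big_gamma, Gamma^ad = gamma * big_gamma), up to the
    positive prefactor 3 a0^2 C_BV^p / 4."""
    omega_a = 2.0 * omega_2 + 2.0 * omega_1 - 3.0 * omega_3
    omega_b = 7.0 * omega_3 + 2.0 * omega_1 + 2.0 * omega_2
    num = ((1.0 - omega_3 / omega_4)
           * (omega_a - (1.0 - gamma) * big_gamma)
           * omega_4 * big_gamma)
    den = ((omega_b + (11.0 * gamma + 9.0) * big_gamma)
           * (omega_4 + (1.0 + gamma) * big_gamma))
    return num / den
```

For an attractive B-V pair ($\omega_3/\omega_4<1$) and $\gamma=1$, the asymmetry is zero at $\Gamma=0$, positive at intermediate $\Gamma$, and vanishes again when $\Gamma$ dominates the thermal frequencies, in line with the discussion of its sign above.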
As for the thermodynamic properties, we may apply AKMC to study the dynamical short range order characterizing a NESS from an average over the residence time, relying on the ergodic principle. We choose here a model alloy with highly attractive vacancy-solute interactions because it emphasizes the effect of FAR on flux coupling. The migration barriers (in eV) are set to 0.95 for $\omega_0$ and $\omega_3$, 0.75 for $\omega_1$ and $\omega_4$, and 0.60 for $\omega_2$. The attempt frequency $\nu$ is chosen to be $10^{14}~\text{s}^{-1}$. As for the model of FAR, we choose Model 1, with $r_m$ equal to the 1NN distance $r_1$. The AKMC simulation box is an fcc crystal of 2048 sites. It contains a single solute atom and a single vacancy. We apply periodic boundary conditions and use a residence-time algorithm. At each Monte Carlo step, we propose the whole set of thermal jumps and FAR. We select one exchange from the proposed mechanisms. After every exchange, we compute the residence time increment. From the fluctuations of atomic positions, we compute the transport coefficients. Note that the corresponding off-diagonal coefficients given by the AKMC method are by construction symmetric. As shown in Refs.\,\cite{Gallavotti1996,Lau2007}, they do not correspond to the transport coefficients $\lambda_{BV}$ and $\lambda_{VB}$ whenever one of the diffusion mechanisms does not obey the detailed balance. As for the KineCluE approach, the kinetic radius $R_k$ of the cluster B-V is set to $4a_0$. } \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_sub_SRO.eps} \caption{First nearest neighbor (1NN) short range order as a function of FAR frequency $\Gamma$ from KineCluE and AKMC simulations. Results are obtained for $\omega_4=3.55\times 10^4s^{-1}$ and $\omega_3=1.07\times 10^2s^{-1}$ at $T=400$K.
Model 1 is applied.} \label{Fig:SRO_1} \vspace{-0.em} \end{figure} Fig.\,\ref{Fig:SRO_1} shows the evolution of the dynamical 1NN-SRO under sub- and above-threshold FAR. We obtain an excellent agreement between KineCluE and AKMC simulations on the SRO parameters. As expected, the dynamical SRO decreases with the relocation frequency, at a higher rate in the case of above-threshold irradiation. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_sub_DB_LBV_LVB.eps} \caption{Solute atom diffusion coefficient and off-diagonal coefficients of the transport matrix as a function of the FAR-d frequency $\Gamma$ from KineCluE (solid and dashed lines) and AKMC (unfilled circles) simulations. Results are obtained for $\omega_{0,3}=1.07\times 10^2s^{-1}$, $\omega_{2}=1.52\times 10^5s^{-1}$ and $\omega_{1,4}=3.55\times 10^4s^{-1}$ at $T=400$K. Model 1 is applied, with 1NN FAR-d only.} \label{Fig:bal_sub_DB_LBV_LVB} \vspace{-0.em} \end{figure} Fig.\,\ref{Fig:bal_sub_DB_LBV_LVB} shows the variation of the transport coefficients with the frequency of FAR-d in the sub-threshold irradiation regime. Both KineCluE and AKMC methods give the same transport coefficients because the microscopic detailed balance holds for the total transition rates. However, when $\Gamma$ is small compared with the thermal jump frequencies, we observe a slight discrepancy between the coefficients. Indeed, the size of the AKMC simulation box is comparable with $R_k$. The discrepancy is due to the difference in the applied boundary conditions between KineCluE and the AKMC method. In KineCluE, configurations of solute and vacancy located at a distance larger than the kinetic radius are not included in the calculation, while the AKMC method relies on periodic boundary conditions. In the latter, atoms or PDs exiting from the simulation box enter back through another side and add a kinetic correlation contribution to the transport coefficients.
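The residence-time algorithm used in the AKMC simulations above can be sketched for a generic catalogue of exchange frequencies; this toy version (function and variable names are ours) accumulates residence times and returns time-averaged occupancies, from which a dynamical SRO parameter is read off as an occupancy ratio:

```python
import random

def residence_time_kmc(rates, start, n_steps, rng=None):
    """Minimal residence-time (BKL) algorithm on a generic state graph.
    rates[s] is a list of (next_state, frequency) pairs.  Returns the
    time-averaged occupancy of each visited state."""
    rng = rng or random.Random(0)
    occupancy, state = {}, start
    for _ in range(n_steps):
        events = rates[state]
        total = sum(freq for _, freq in events)
        # mean residence time of the current state is 1 / (total exit rate)
        occupancy[state] = occupancy.get(state, 0.0) + 1.0 / total
        # select one event with probability proportional to its frequency
        r, acc = rng.random() * total, 0.0
        for nxt, freq in events:
            acc += freq
            if r < acc:
                state = nxt
                break
    t = sum(occupancy.values())
    return {s: v / t for s, v in occupancy.items()}
```

For a two-state catalogue with forward frequency $4$ and backward frequency $1$, the occupancy ratio converges to $4$, the ratio expected from the global detailed balance condition.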
\begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_DB_LBV_LVB.eps} \caption{Solute atom diffusion coefficient and off-diagonal coefficients of the transport matrix as a function of the above-threshold relocation frequency $\Gamma$ from KineCluE (solid and dashed lines) and AKMC (unfilled circles) simulations. Results are obtained for $\omega_{0,3}=1.07\times 10^2s^{-1}$, $\omega_{2}=1.52\times 10^5s^{-1}$ and $\omega_{1,4}=3.55\times 10^4s^{-1}$ at $T=400$K. Model 1 is applied, with 1NN FAR-d and FAR-a.} \label{Fig:bal_dir_DB_LBV_LVB} \vspace{-0.em} \end{figure} In the case of an above-threshold irradiation, we observe in Fig.\,\ref{Fig:bal_dir_DB_LBV_LVB} a similar behaviour of the diagonal transport coefficient $\lambda_{BB}$, whereas the single off-diagonal coefficient measured in AKMC simulations no longer corresponds to the off-diagonal phenomenological transport coefficients obtained by KineCluE. \section{Results on diffusion properties} \label{sec:results} Here we focus on the above-threshold irradiation case. We consider a model alloy with relatively high migration barriers. Hence the alloy is potentially sensitive to FAR effects, simply because the thermal jump frequencies are small with respect to the relocation frequency deduced from realistic dose rates. The interaction energy between B and V is restricted to a pairwise 1NN interaction. The migration barriers (in eV) are set to 1.10 for $\omega_0$, $\omega_1$ and $\omega_3$, 0.90 for $\omega_4$, and 0.80 for $\omega_2$. The attempt frequency $\nu$ is chosen to be equal to $5\times10^{12}~\text{s}^{-1}$. The three relocation models presented in Section~\ref{subsec:cascade_models} are considered. We use KineCluE to calculate the transport coefficients. The parameter values that we set to estimate the vacancy concentration under irradiation are shown in Table\,\ref{tab:vacancy_concentration_parameter}.
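As an order-of-magnitude check of the inputs of Table\,\ref{tab:vacancy_concentration_parameter}, the equilibrium vacancy concentration can be evaluated directly from the formation enthalpy and entropy (a sketch; the function name is ours):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def c_v_eq(h_f, s_f_in_kb, temperature):
    """Equilibrium vacancy concentration
    C_V^eq = exp(-(H_f - T S_f) / (k_B T)), with S_f given in units of k_B."""
    return math.exp(-(h_f - K_B * temperature * s_f_in_kb)
                    / (K_B * temperature))

# Table values: H_V^f = 1.65 eV, S_V^f = 1.82 k_B, evaluated at T = 400 K
print(c_v_eq(1.65, 1.82, 400.0))
```

At $T=400$\,K the thermal vacancy concentration is vanishingly small, which is why the NESS vacancy concentration under irradiation, rather than $C_V^\text{eq}$, controls the kinetics in this regime.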
Here the mean relocation range $r_m$ and the cut-off distance $L$ for Models 2 and 3 are respectively set to 1NN ($\sqrt{1/2}\,a_0$) and 5NN ($\sqrt{5/2}\,a_0$) distances. The kinetic radius is set to $2a_0$. \begin{table}[b] \caption{\label{tab:vacancy_concentration_parameter}% List of the parameters needed to estimate the vacancy concentration and their values set in the paper. } \begin{ruledtabular} \begin{tabular}{ll} \textrm{Parameter} & \textrm{Value} \\ \colrule Lattice parameter $a_0$ & 0.35\,nm\\ Vacancy formation enthalpy $H_{V}^{\text{f}}$ & 1.65\,eV\\ Vacancy formation entropy $S_{V}^{\text{f}}$ & 1.82\,$k_B$\\ Number of relocations per dpa $n_\text{rel}$ & 100\\ Effective sink strength $k^2$ & $10^{15}\,\text{m}^{-2}$\\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Dynamical short range order} \label{subsec:dynamical_SRO} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_SRO_models.eps} \caption{(Color online) Steady-state short range order as a function of the relocation frequency in the above-threshold radiation regime. Results are obtained by KineCluE for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ at $T=400$K. The mean relocation range $r_m$ is set to the 1NN distance. The cut-off relocation distance and the kinetic radius of the cluster B-V are set to 3$a_0$.} \label{Fig:bal_dir_SRO_models} \vspace{-0.em} \end{figure} Fig.\,\ref{Fig:bal_dir_SRO_models} shows the profile of dynamical SRO as a function of $\Gamma$ for Models 1, 2 and 3. The probability of finding B and V at the 1NN distance is reduced by FAR, leading to an effective B-V interaction smaller than the thermodynamic one. The decrease of 1NN-SRO with the relocation frequency in Model 1 starts when $\Gamma$ is around $10^{-2}\,\text{s}^{-1}$. The decrease starts earlier in Models 2 and 3: respectively around $10^{-4}\,\text{s}^{-1}$ and $10^{-3}\,\text{s}^{-1}$.
However, the 1NN-SRO of Model 3 converges towards a non-zero value at large $\Gamma$. In Model 1, there is no interaction between B and V beyond the 1NN distance, whatever the relocation frequency. However, in Models 2 and 3, we observe that the effective B-V interaction extends beyond the range of the thermal one (i.e. beyond the 1NN). The effective interaction persists up to the 5NN distance when $\Gamma$ is comparable to one of the thermal jump frequencies. This is due to the relatively long-range FAR. In the extreme case when $\Gamma$ is dominant over the thermal jump frequencies, the B-V interactions drop in Models 1 and 2, whereas in Model 3 the 1NN attraction slightly decreases and the 2NN, 3NN, 4NN and 5NN attractions slightly increase. The binding tendency of a vacancy around the solute atom is still high ($P_1^\text{ness}/P_0^\text{ness} \simeq 10^2$) due to the introduction of the biased FAR-d with the 1NN atoms of the solute atom in Model 3. \subsection{Tracer diffusion coefficient} In the dilute limit, the tracer diffusion coefficient of solute B is written as \begin{equation} D_B^{*}=\frac{\lambda_{BB}}{C_B}. \end{equation} Phenomenological models of diffusion under irradiation systematically rely on the assumption that the thermally activated diffusion and FAR take place in parallel~\cite{Muller1988,Roussel2002}. {The tracer diffusion coefficient is then written as a sum of two diffusion coefficients: \begin{equation} \label{eq:classical_assumption} D_{B,\text{add}}^{*}=D_{B,\text{th}}^{*}C_V^\text{ness}/C_V^\text{eq}+D_{B,\text{far}}^{*}, \end{equation} where $D_{B,\text{th}}^{*}$ is the thermal diffusion coefficient commonly deduced from diffusion experiments or atomic-based diffusion models and $D_{B,\text{far}}^{*}$ is the diffusion coefficient of solute atom B resulting from FAR only.} Note that both coefficients can be calculated by KineCluE.
Unless one diffusion mechanism is dominant over the other, we expect a non-additive contribution to the solute tracer diffusion coefficient because of the kinetic correlations. In order to quantify the non-additive contribution, we define the parameter: \begin{equation} \Delta D_B = \frac{D_{B,\text{add}}^*-D_B^*}{D_B^*}. \end{equation} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_DB_models_insets.eps} \caption{Solute atom diffusion coefficient as a function of the relocation frequency $\Gamma$ in the above-threshold radiation regime. Results are obtained by KineCluE for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ at $T=400$K. $C_B$ is set to 0.1 at.\%. The mean relocation range $r_m$ is set to the 1NN distance. The cut-off relocation distance and the kinetic radius of the cluster B-V are set to 3$a_0$. The insets (a) and (b) show the variations of the correlation factor $f_B$ and the difference $\Delta D_B$ with the relocation frequency.} \label{Fig:bal_dir_DB_models} \vspace{-0.em} \end{figure} Fig.\,\ref{Fig:bal_dir_DB_models} shows the variation of the solute diffusion coefficient with the relocation frequency. We observe that the global tendencies of the diffusion coefficients obtained with the three models are similar. However, the three curves do not have the same asymptote at large $\Gamma$. The largest difference occurs when the correlation factor $f_B$ is increased by FAR. With Models 1 and 2, this factor tends to 1 when $\Gamma$ is dominant over the thermal jump frequencies, meaning that there are no kinetic correlations. However, in Model 3, the correlation factor tends to 0.46. The remaining kinetic correlations are due to the biased FAR-d. Besides, $\Delta D_B$ is high when $\Gamma$ is in the range of the thermal jump frequencies because there is then a strong competition between the thermal mechanisms and FAR.
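The additive estimate of Eq.\,\eqref{eq:classical_assumption} and the deviation $\Delta D_B$ just defined can be sketched as follows (function and argument names are ours; a value $\Delta D_B=1.0$ corresponds to a 100\,\% deviation):

```python
def delta_d_b(d_th, d_far, d_full, c_ratio=1.0):
    """Relative deviation from the additive assumption:
    D_add = D_th * (C_V^ness / C_V^eq) + D_far, and
    Delta D_B = (D_add - D_B) / D_B, with D_B the full coefficient."""
    d_add = d_th * c_ratio + d_far
    return (d_add - d_full) / d_full
```

The deviation vanishes only when one mechanism carries all the diffusion; any kinetic interplay between thermal jumps and FAR makes the full coefficient differ from the simple sum.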
In this example, $\Delta D_B$ spans from 100\,\% to 300\,\% depending on the relocation model. \subsection{Flux coupling} We characterize the flux coupling between solute B and vacancy V by computing the wind factors~\cite{Anthony1969,Anthony1970,Okamoto1979} \begin{equation} \delta_{B\rightarrow V} = \frac{\lambda_{BV}}{\lambda_{VV}} \end{equation} and \begin{equation} \delta_{V\rightarrow B} = \frac{\lambda_{VB}}{\lambda_{BB}}. \end{equation} Both wind factors describe the B-V flux coupling related to two different situations. The wind factor $\delta_{B\rightarrow V}$ gives the number of solute atoms following a vacancy under the driving force $\nabla\mu_V$ and the wind factor $\delta_{V\rightarrow B}$ indicates the number of vacancies dragged by a solute atom under the driving force $\nabla\mu_B$. If the wind factors are positive, a drag of B by V (or vice versa) may occur. As shown in Sec.\,\ref{subsec:dynamical_SRO}, the interactions between the solute atom and the vacancy are reduced or even destroyed by FAR. Since the drag effect is closely related to this interaction, we study the effect of the relocation frequency $\Gamma$ on the wind factors. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_delta_models.eps} \caption{Wind factors $\delta_{B\rightarrow V}$ and $\delta_{V\rightarrow B}$ as a function of the relocation frequency $\Gamma$ in the above-threshold radiation regime. Results are obtained by KineCluE for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ at $T=400$K. $C_B$ is set to 0.1 at.\%. The mean relocation range $r_m$ is set to the 1NN distance. The cut-off relocation distance and the kinetic radius of the cluster B-V are set to 3$a_0$.
The dashed lines are eye-guides for $\delta_{B\rightarrow V}=0$ or $\delta_{V\rightarrow B}=0$.} \label{Fig:bal_dir_delta_models} \vspace{-0.em} \end{figure} Fig.\,\ref{Fig:bal_dir_delta_models} shows the variation of the wind factors with the relocation frequency. Whatever the relocation model, $\delta_{B\rightarrow V}$ and $\delta_{V\rightarrow B}$ globally decrease with $\Gamma$. However, $\delta_{V\rightarrow B}$ of Model 1 has a surprising non-monotonic behaviour: the drag effect is enhanced before being destroyed by FAR. $\delta_{B\rightarrow V}$ of Model 3 also has an atypical behaviour: it slightly increases and tends to a non-zero value at large $\Gamma$, meaning that the solute drag and vacancy drag effects are not totally destroyed. This is because the biased FAR-d maintains a flux coupling between B and V. This persistent flux coupling at high radiation flux should be very sensitive to the details of the relocation mechanism. \section{Sensitivity study with respect to the model and alloy parameters} \label{sec:sensitivity_relocation_model} FAR models depend on the values of the mean relocation range $r_m$, the kinetic radius $R_k$ of the cluster B-V and the truncation distance $L$. However, the latter is not a physical parameter. Since the relocation frequency decreases exponentially with the distance between B and V (see Eq.\,\eqref{eq:model_2_distribution}), the value of $L$ does not affect the diffusion properties as long as it is large enough. Therefore, we focus here on the sensitivity of the results to the other two parameters: $r_m$ and $R_k$. \subsection{Kinetic radius} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_Diffusion_kr.eps} \caption{Diffusion properties as functions of the relocation frequency $\Gamma$ in the above-threshold radiation regime.
Results are obtained by KineCluE for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ at $T=400$K with three different kinetic radii $R_k$ = 2$a_0$, 2.3$a_0$ and 3$a_0$. $C_B$ is set to 0.05~at.\%. Model 3 is used as the relocation model. The mean and cut-off relocation distances are respectively set to $(\sqrt{2}/2)a_0$ and 2$a_0$.} \label{Fig:bal_dir_Diffusion_kr} \vspace{-0.em} \end{figure} In general, the results given by the KineCluE code converge with the kinetic radius $R_k$~\cite{Schuler2020}. However, because $R_c=R_k$ in Model 3, the FAR-d models for a monomer vacancy and for a vacancy within the B-V pair are different. In this case, the results obtained with Model 3 may depend on the values of $R_k$. However, Fig.\,\ref{Fig:bal_dir_Diffusion_kr} shows that $D_B^*$, $\Delta D_B$ and $\delta_{B\rightarrow V}$ are not very sensitive to the change of the kinetic radius. Nevertheless, the decrease of $\delta_{V\rightarrow B}$ with $\Gamma$ is slower for $R_k=3a_0$ than for $R_k=2a_0$. This is because the vacancy performs biased FAR-d with the 1NN atoms of the solute atom from longer distances. \subsection{Mean relocation range} \label{subsubsec:rm} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_Diffusion_rm.eps} \caption{Diffusion properties as functions of the relocation frequency $\Gamma$ in the above-threshold radiation regime. Results are obtained by KineCluE for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ at $T=400$K with three different values of $r_m$: 1NN, 5NN and 10NN. $C_B$ is set to 0.1 at.\%. The cut-off relocation distance and the kinetic radius of the cluster B-V are set to 3$a_0$. } \label{Fig:bal_dir_Diffusion_rm} \vspace{-0.em} \end{figure} Fig.\,\ref{Fig:bal_dir_Diffusion_rm} shows the effect of the mean relocation distance $r_m$ on the solute diffusion and flux coupling. First we focus on Model 1.
Since the solute mobility is enhanced when increasing the relocation distance, the corresponding solute diffusion coefficient increases with $r_m$. Besides, according to the plot of $\Delta D_B$, the interaction between thermal jumps and FAR decreases with $r_m$. The thermally-activated jump distance and the thermal interaction between B and V are both restricted to 1NN. The larger the relocation distance, the smaller the B-V interaction. Thus B and V are more likely to diffuse as monomers, a kinetic regime where the diffusion properties related to thermal jumps and FAR become additive. As for the flux coupling, the decreasing rate of $\delta_{B\rightarrow V}$ with $\Gamma$ increases with $r_m$, so the solute drag effect is destroyed more easily. Besides, the variation tendency of $\delta_{V\rightarrow B}$ with $\Gamma$ becomes qualitatively different when $r_m>$ 1NN. The vacancy drag effect is not enhanced when $r_m$ equals 2NN or 3NN. This may be due to the same reason mentioned before: B and V have many more paths to escape from each other. The results obtained with Models 2 and 3 have profiles similar to those of Model 1. { \subsection{Atomic mixing rate} \label{subsec:n_bal} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_sensitivity_nbal.eps} \caption{$D_B$, and wind factors $\delta_{B\rightarrow V}$, $\delta_{V\rightarrow B}$ as a function of relocation frequency $\Gamma$ from KineCluE simulations. Results are obtained for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ with different values of $n_\text{rel}$ at $T=400$K. $C_B$ is set to 0.1 at.\%.} \label{Fig:bal_dir_sensitivity_nbal} \vspace{-0.em} \end{figure} Since the previous sections have shown that the global tendencies of the FAR effects are roughly the same whatever the relocation model and the mean relocation distance, we choose the simplest model, Model 1 with $r_m=r_1$.
As stated in Sec.\,\ref{subsec:forced_relocation}, the number of relocations per Frenkel pair created (i.e. $n_\text{rel}$) should be alloy-specific due to the thermal effect on heat spike mixing. Fig.\,\ref{Fig:bal_dir_sensitivity_nbal} shows the variation of the diffusion properties as a function of the radiation dose rate for different values of $n_\text{rel}$. The effect of FAR on the flux coupling and tracer diffusion occurs at a smaller dose rate when $n_\text{rel}$ increases. Moreover, we observe that $\Delta D_B$ decreases with $n_\text{rel}$. These results show the importance of $n_\text{rel}$ in the prediction of the critical dose rate above which the effects of FAR on the flux coupling and tracer diffusion become paramount. \subsection{FAR-d frequencies}\label{subsec:sensibility_PD_frequency} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_sensitivity_gamma.eps} \caption{$D_B$, and wind factors $\delta_{B\rightarrow V}$, $\delta_{V\rightarrow B}$ as a function of relocation frequency $\Gamma$ from KineCluE simulations. Results are obtained for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ with different values of $\gamma=\Gamma^{ad}/\Gamma^{aa}$ at $T=400$K. $C_B$ is set to 0.1 at.\%.} \label{Fig:bal_dir_sensitivity_gamma} \vspace{-0.em} \end{figure} For the reason stated in Sec.\,\ref{subsec:n_bal}, Model 1 with $r_m=r_1$ is chosen for the sake of simplicity. Note that FAR-a and FAR-d are due to different phenomena in the displacement cascade: FAR-a describes the recoil mixing due to the PKA while FAR-d models the lattice site changes of PDs during the quenching process. There is no guarantee that the frequencies of FAR-a ($\Gamma^{aa}$) and FAR-d ($\Gamma^{ad}$) are equal.
Fig.\,\ref{Fig:bal_dir_sensitivity_gamma} shows the plot of $D_B$, $\delta_{B\rightarrow V}$ and $\delta_{V\rightarrow B}$ as a function of the relocation frequency $\Gamma^{aa}=\Gamma$ for different ratios $\gamma=\Gamma^{ad}/\Gamma^{aa}$. The global tendencies of the above quantities are not affected by the variation of the ratio $\gamma$. Besides, the tracer diffusion coefficient $D_B$ is not sensitive to the variation of the ratio $\gamma$. However, $\delta_{B\rightarrow V}$ decreases with $\gamma$ while the variation of $\delta_{V\rightarrow B}$ has the opposite tendency. For $\gamma\neq 1$, the FAR-a and FAR-d effects on the solute atom and PD diffusion occur at different dose rates. They appear, respectively, when $\Gamma^{aa}$ (i.e. $\Gamma$) and $\Gamma^{ad}$ (i.e. $\gamma\Gamma$) become of the same order of magnitude as the thermal jump frequencies. In brief, the smaller the $\gamma$ value, the larger the difference between the frequencies of FAR-a and FAR-d, and the stronger the remaining flux coupling. } \subsection{Thermal jump parameters} \label{subsec:sensitivity_alloy} The effects of FAR depend on the radiation dose rate and the intrinsic thermal jump frequencies of the alloy. We use KineCluE to perform a sensitivity study of the radiation kinetic properties with respect to the thermal jump frequencies. Fig.\,\ref{Fig:bal_dir_sensitivity_binding_energy} shows the variation of $\Delta D_B$ and the wind factors $\delta_{B\rightarrow V}$, $\delta_{V\rightarrow B}$ with respect to $\Gamma$, for various values of $\omega_4$. The values of the other thermal jump frequencies are fixed. Model 1, with $r_m=r_1$, is chosen for the following discussion. The interactions between thermal jumps and FAR are emphasized in this case because the hop distances are both 1NN. The ratio $\omega_4/\omega_3$ directly affects the binding energy $E_b$ between solute atom and vacancy at 1NN. We observe that $\Delta D_B$ and the wind factors increase with the binding energy.
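The binding energy implied by the frequencies used throughout this section can be evaluated directly from the ratio $\omega_4/\omega_3$. The detailed-balance Boltzmann form and the sign convention (positive $E_b$ for attraction) used below are our illustrative assumptions, not expressions taken from the model:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def binding_energy(w3, w4, T):
    """1NN solute-vacancy binding energy (eV) from the detailed-balance
    relation w4 / w3 = exp(E_b / kT); positive E_b means attraction.
    This sign convention is an assumption for illustration."""
    return K_B * T * math.log(w4 / w3)


# Frequencies used throughout this section (s^-1), at T = 400 K
w3, w4, T = 6.9e-2, 2.3e1, 400.0
E_b = binding_energy(w3, w4, T)
print(f"E_b = {E_b:.3f} eV")  # ~0.20 eV: a sizeable 1NN attraction
```

With these reference frequencies the implied attraction is around 0.2 eV, consistent with the strong vacancy-solute binding discussed here.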
Besides, the larger the binding energy, the larger the enhancement of the wind factor $\delta_{V\rightarrow B}$ by FAR. This can be explained by noting that the solute atom and vacancy tend to be closer to each other with a larger binding energy. Therefore, the interaction between FAR and the thermally activated diffusion of the solute atom is more important, leading to a larger difference from what we would expect with an additive model, i.e. Eq.\,\eqref{eq:classical_assumption}. Moreover, the binding tendency of the vacancy and the solute atom increases, causing an enhancement of the wind factor $\delta_{V\rightarrow B}$. Likewise, $\omega_1$ and $\omega_2$ have a non-negligible effect on the profiles of $\Delta D_B$ and the wind factors as functions of $\Gamma$. Here we set $\omega_4$ to its initial value $2.3\times 10^{1}s^{-1}$ and we perform calculations with different values of $\omega_1$ and $\omega_2$. Fig.\,\ref{Fig:bal_dir_sensitivity_w1_w2} shows that if $\omega_2$ is large compared to $\omega_1$ (more than 1 order of magnitude), $\Delta D_B$ and $\delta_{B\rightarrow V}$ increase with $\omega_1$ whereas the enhancement of $\delta_{V\rightarrow B}$ by FAR decreases with $\omega_1$. If the amplitudes of $\omega_2$ and $\omega_1$ are comparable (within 1 order of magnitude), the trends are opposite: $\Delta D_B$ and $\delta_{B\rightarrow V}$ decrease with $\omega_1$ whereas the enhancement of $\delta_{V\rightarrow B}$ by FAR increases with $\omega_1$; in this regime, however, the variations of $\Delta D_B$ and the wind factors with $\Gamma$ are only weakly sensitive to $\omega_1$. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_sensitivity_binding_energy.eps} \caption{$\Delta D_B$, and wind factors $\delta_{B\rightarrow V}$, $\delta_{V\rightarrow B}$ as a function of relocation frequency $\Gamma$ from KineCluE simulations.
Results are obtained for $\omega_{0,1,3}=6.9\times 10^{-2}s^{-1}$ and $\omega_2=4.2\times 10^{2}s^{-1}$ with different values of $\omega_4$ at $T=400$K. $C_B$ is set to 0.1 at.\%.} \label{Fig:bal_dir_sensitivity_binding_energy} \vspace{-0.em} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig_bal_dir_sensitivity_w1_w2.eps} \caption{$\Delta D_B$, and wind factors $\delta_{B\rightarrow V}$, $\delta_{V\rightarrow B}$ as a function of relocation frequency $\Gamma$ from KineCluE simulations. Results are obtained for $\omega_{0,3}=6.9\times 10^{-2}s^{-1}$, $\omega_2=4.2\times 10^{2}s^{-1}$ and $\omega_4=2.3\times 10^1s^{-1}$ with different values of $\omega_1$ and $\omega_2$ at $T=400$K. $C_B$ is set to 0.1 at.\%.} \label{Fig:bal_dir_sensitivity_w1_w2} \vspace{-0.em} \end{figure} \section{Discussion and Summary} \label{Sec_Discussions} Neutron or ion irradiation in metals generates displacement cascades. We present a simplified model of this complex phenomenon by introducing FAR mechanisms and an average PD creation rate that is uniform in time and space. To calculate the energetic and kinetic properties, we write a Master Equation for the evolution of the distribution function that includes both thermal jumps and FAR. We extend the SCMF theory to compute the SRO parameters and the phenomenological transport coefficients at the NESS reached under irradiation. The main difficulty lies in the loss of the microscopic detailed balance when considering FAR mechanisms. Relying on Model 1, which includes FAR between 1NN sites only, and a first-shell approximation of the kinetic correlations, we derive analytical expressions of the phenomenological transport coefficients. We demonstrate that FAR does not produce a simple additive term in the transport coefficients.
When the magnitude of the relocation frequency is in the range of the thermal frequencies, FAR interacts with the thermal diffusion mechanism, yielding non-symmetric off-diagonal transport coefficients and a solute tracer diffusion coefficient deviating from a direct sum of the contributions of thermal jumps and FAR. This deviation increases with the solute kinetic correlations. We use the automated code KineCluE to perform a more systematic study of the effect of the range and magnitude of FAR on the kinetic properties, including a sensitivity study with respect to the alloy thermodynamics and the models of relocation and PD production. Due to the lack of data on the detailed mechanisms of FAR and PD production, we introduce Models 2 and 3 representing two extreme situations, expecting the real situation to lie in-between. In Model 2, we assume that FAR is a fully random process, while in Model 3 we introduce a biased FAR-d with the 1NN atoms of the solute atoms to reproduce the fact that vacancy creation within a cascade is partially driven by the vacancy-solute thermodynamic attraction. As a result, part of the vacancy-solute SRO remains, which in turn leads to a higher resistance of the vacancy-solute flux coupling to irradiation. Positive flux coupling is the result of strong kinetic correlations, which can be modified by introducing FAR mechanisms. Our sensitivity study shows that the magnitude of the surviving kinetic correlations strongly depends on the details of the biased FAR-d mechanism, while the reduction of correlations and flux coupling due to the randomizing processes is less sensitive to the details of the relocation events, unless the FAR distance is close to the range of the thermodynamic interactions. A persistent vacancy-solute flux coupling at low temperature and high radiation flux may play an important role in the solute redistribution in irradiated materials.
Therefore, the mechanism of PD production with respect to the solute atom spatial distribution within the displacement cascade should be analyzed more precisely. In summary, the effect of the interplay between thermal jumps and FAR on the positive vacancy-solute flux coupling is important when the solute-vacancy thermodynamic attraction is large, when the thermal jump frequencies are comparable in magnitude to the relocation frequency, and when the range of the thermodynamic interactions is close to the relocation distances. As for the tracer diffusion coefficients, their non-additivity with respect to FAR and thermal jumps follows the same trend as the flux coupling phenomena in systems featuring positive flux coupling, but it may also arise without positive flux coupling when the solute migration paths are strongly correlated. For instance, the additive expression of Eq.\,\eqref{eq:classical_assumption} correctly reproduces the diffusion coefficient of Au in Al measured under irradiation~\cite{Acker1974}. This is because the vacancy-jump barrier in Al is around 0.58 eV~\cite{Wu2016}, hence the thermal jump frequencies are dominant over the relocation frequencies under realistic experimental conditions. However, we expect a non-negligible effect of FAR in Ni-based alloys because the vacancy-mediated migration barrier in pure Ni is high (around 1.09 eV~\cite{Wu2016}).
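As an order-of-magnitude illustration of why the additive estimate works for Au in Al, one can compare the thermal and FAR contributions using the textbook uncorrelated-random-walk diffusivity $D=\Gamma\langle r^2\rangle/6$. The attempt frequency and the relocation frequency below are assumed typical values, not fitted parameters, and the exact additive expression is the one given in the text:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K


def jump_rate(nu0, Em, T):
    """Arrhenius jump frequency (s^-1); nu0 and Em are illustrative."""
    return nu0 * math.exp(-Em / (K_B * T))


def d_random_walk(rate, r2_mean):
    """3D uncorrelated random-walk diffusivity: D = rate * <r^2> / 6."""
    return rate * r2_mean / 6.0


a0 = 4.05e-10  # Al lattice parameter (m); 1NN jump distance a0/sqrt(2)
r2_jump = (a0 / math.sqrt(2)) ** 2

# Thermal vacancy jumps: barrier ~0.58 eV in Al (value cited in the text);
# the attempt frequency nu0 = 1e13 s^-1 is an assumed typical value.
w_th = jump_rate(1e13, 0.58, T=300.0)
D_th = d_random_walk(w_th, r2_jump)

# FAR: relocation frequency assumed far below w_th, as expected under
# realistic dose rates (illustrative value).
gamma_far = 1e-2
D_far = d_random_walk(gamma_far, r2_jump)

# With w_th >> Gamma, the additive estimate is dominated by the thermal
# term, which is why the additive expression works for Au in Al.
print(D_th / (D_th + D_far))
```

Repeating the same comparison with a ~1.09 eV barrier (Ni-like) makes the thermal term orders of magnitude smaller, so the FAR term, and its interplay with thermal jumps, can no longer be neglected.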
\section{Introduction}\label{IntroductionSection} A new era in radio astronomy is approaching with the upcoming continuum surveys \citep{Norris2013} planned at the SKA precursor telescopes, such as the \textit{Westerbork Observations of the Deep APERTIF Northern-Sky} (WODAN) \citep{Rottgering2010} at the Westerbork Synthesis Radio Telescope (WSRT), the \textit{Evolutionary Map of the Universe} (EMU) survey \citep{Norris2011} at the ASKAP array and the \textit{MeerKAT International GigaHertz Tiered Extragalactic Exploration} (MIGHTEE) survey \citep{Van der Heyden2010} at the MeerKAT observatory. A considerable improvement is expected in sensitivity, resolution and instantaneous field of view compared to previous surveys. For instance, WODAN and EMU will jointly provide full sky coverage at 1.3 GHz with an unprecedented sensitivity down to 10-15 $\mu$Jy/beam and a resolution around 10-15 arcsec. Phased Array Feed (PAF) technology will allow instantaneous fields of view of 8 and 30 deg$^{2}$ for WODAN-APERTIF and ASKAP respectively, and a corresponding increase in survey speed of a factor $\sim$20 with respect to the VLA. MIGHTEE will reach even better sensitivities (0.1-1 $\mu$Jy/beam rms), although with a reduced field of view (1 deg$^{2}$). A dramatic gain in sensitivity (a factor 100) and field of view will be achieved with the future operations of the SKA. These significant advances are expected to bring new challenges. One is related to the data product throughput (e.g. spectral-imaging data cubes) expected to be generated by the SKA precursor telescopes, ranging from tens of gigabytes to several petabytes\footnote{ASKAP is expected to generate several petabytes per year of HI cubes.}, and by the future SKA observatory, of the order of hundreds of terabytes per data cube in SKA1 and one order of magnitude higher in SKA Phase II \citep{Kitaeff2015}.
For instance, up to 3 exabytes of fully processed data are expected in one year of full SKA1 operation \citep{Alexander2009}. Such an amount of data cannot be processed, stored or visualized on local computing resources, at least using the conventional data formats so far adopted in astronomy. Furthermore, with the increase in sensitivity and surveyed sky area, a population of millions of sources will be potentially detectable, making human-driven source extraction infeasible. For example, the EMU survey is expected to generate a catalogue of $\sim$70 million sources detected at the 5$\sigma$ level of 50 $\mu$Jy/beam \citep{Norris2011}. For these reasons, considerable efforts are currently focused on the development of algorithms to process imaging data and extract sources in a fast and mostly automated way and, at the same time, on the search for new data standards and image compression formats (e.g. see \citealt{Kitaeff2014}). While extensive studies have been performed on compact source search, with several algorithms developed \citep{Hancock2012,Whiting2009,Whiting2012,Hopkins2002,Bertin1996,Hales2012, Peracaula2015,Hopkins2015}, particularly in the context of the ASKAP telescope, the detection of extended sources in a completely unsupervised way (e.g. without requiring any a priori information or source templates) is still a partially explored field, at least in the radio domain. This motivates investing resources in exploring completely new methods or re-adapting known algorithms to the radio imaging case. Different approaches have been recently proposed in this direction. Some of them make use of conventional thresholding methods in the image wavelet or curvelet domain (e.g. see \citealt{Peracaula2011}), others employ compressive sampling techniques (e.g. \citealt{Dabbech2015}). Other studies employ the Circle Hough transform to detect circular-like objects, such as supernova remnants or bent-tail radio galaxies \citep{Hollitt2012}.
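To give a flavor of the Circle Hough transform mentioned above, the sketch below (a textbook single-radius implementation, not the code of the cited study) accumulates votes for candidate circle centres and locates the accumulator peak:

```python
import math


def hough_circle(edge_points, radius, width, height, n_theta=90):
    """Vote for circle centres at a fixed radius; returns the (x, y)
    accumulator cell with the most votes."""
    acc = [[0] * width for _ in range(height)]
    for (x, y) in edge_points:
        # Each edge pixel votes for all centres lying at `radius` from it.
        for k in range(n_theta):
            t = 2.0 * math.pi * k / n_theta
            cx = int(round(x - radius * math.cos(t)))
            cy = int(round(y - radius * math.sin(t)))
            if 0 <= cx < width and 0 <= cy < height:
                acc[cy][cx] += 1
    best = max((acc[j][i], (i, j)) for j in range(height) for i in range(width))
    return best[1]


# Synthetic "edge map": points on a circle of radius 10 centred at (32, 32)
pts = [(32 + 10 * math.cos(2 * math.pi * k / 120),
        32 + 10 * math.sin(2 * math.pi * k / 120)) for k in range(120)]
pts = [(round(x), round(y)) for (x, y) in pts]
print(hough_circle(pts, 10, 64, 64))  # peak near (32, 32)
```

In practice the radius is unknown, so the accumulator gains a third dimension over a range of trial radii, which is what makes the method expensive on large radio mosaics.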
In \citet{Norris2011} several methods from the Computer Vision domain have been reviewed. Waterfalling segmentation, circular or elliptical Hough transforms and region growing were indicated as the most suited to the problem of extended source search. In the context of the SCORPIO project \citep{Umana2015} (hereafter denoted as ``Paper I'', see Section~\ref{SCORPIOProjectSection}), a pathfinder of the ASKAP EMU survey, and in view of the next-generation SKA surveys, we started to develop algorithms for automated source detection and classification. The designed method exploits some of the techniques and algorithms already in use in other source finders, aiming to combine their best features, but also introduces new features, particularly in background estimation, detection of extended sources and source parameterization. We will therefore focus on these novel aspects throughout the paper. A description of the method, based on a superpixel segmentation and hierarchical merging, is presented in Section~\ref{MethodSection}. The algorithm has been tested on real SCORPIO radio data observed with the ATCA array down to a sensitivity of 30 $\mu$Jy/beam. Typical results achieved on sample field scenarios are presented and discussed in Section~\ref{ResultSection}, along with tests performed on the same fields observed at different wavelengths. \begin{figure*} \centering% \includegraphics[scale=0.55]{SourceFinderPipeline_last.pdf} \caption{Schematic pipeline of the designed source finder algorithm.} \label{SourceFinderPipelineFig} \end{figure*} \section{The SCORPIO project}\label{SCORPIOProjectSection} The SCORPIO project is a blind deep radio survey of a $2\times2$ deg$^2$ sky patch toward the Galactic plane, using the ATCA array in several configurations. The survey has been conducted at 2.1 GHz between 2011 and 2015 and achieved an average resolution around 10 arcsec. Further observations are already scheduled in 2016.
The major scientific goals of the SCORPIO project are the search for different populations of Galactic radio point sources and the study of circumstellar envelopes (related to young or evolved massive stars, planetary nebulae and supernova remnants), which is extremely important for understanding Galaxy evolution (e.g. ISM chemical enrichment, star formation triggering, etc.). Besides these scientific outputs, SCORPIO will be used as a test-bed for the EMU survey, guiding its design strategy for the Galactic plane sections. In particular, this includes exploring suitable strategies for effectively imaging and extracting sources embedded in the diffuse emission expected at low Galactic latitudes and investigating to what extent they can be employed in the EMU survey. The SCORPIO observations have produced a radio mosaic map of 133 single pointings with an \textrm{rms}{} down to 30 $\mu$Jy/beam. A pixel size of 1.5 arcsec is chosen for the final map. This sensitivity and a good $uv$-plane coverage have allowed the discovery of about 1000 new faint radio point sources and the satisfactory mapping of tens of extended sources. Preliminary results on a smaller pilot region of the SCORPIO field have already been published in Paper I, while the complete data reduction and analysis is still in progress. \section{A segmentation method for extended source detection}\label{MethodSection} Detection of extended sources represents a hard task for source finder algorithms. The main difficulties are due to the intrinsic emission pattern, which is usually fainter compared to compact sources (e.g. below the conventional 5$\sigma$ significance level) and spread over disjointed areas (e.g. unlike the adjacency assumption made in compact source finders). In addition, object borders are usually soft, so standard edge detection algorithms are not fully sensitive to them. Spatial filters are therefore often employed to enhance the emission at some given scale.
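A common way to enhance emission at a chosen scale is a band-pass (difference-of-Gaussians) filter. The following sketch, with illustrative scale parameters, is one standard option rather than the specific filter adopted in this work:

```python
import numpy as np


def gaussian_kernel_fft(shape, sigma):
    """Fourier transform of an isotropic Gaussian of width sigma (pixels)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))


def dog_filter(img, sigma1, sigma2):
    """Difference-of-Gaussians band-pass: enhances structures with scales
    between sigma1 and sigma2, suppressing smaller and larger ones."""
    f = np.fft.fft2(img)
    g1 = gaussian_kernel_fft(img.shape, sigma1)
    g2 = gaussian_kernel_fft(img.shape, sigma2)
    return np.real(np.fft.ifft2(f * (g1 - g2)))


# A faint Gaussian blob of scale ~3 pixels on a flat background
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 3.0 ** 2))
resp = dog_filter(img, sigma1=2.0, sigma2=6.0)
print(np.unravel_index(np.argmax(resp), resp.shape))  # peak at (32, 32)
```

Running the same filter with several $(\sigma_1,\sigma_2)$ pairs gives a crude multi-scale decomposition, in the same spirit as the wavelet-domain thresholding methods cited in the introduction.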
Another issue is related to the estimation of reliable significance levels for detection. In fact, the widely used method for local noise and background estimation is typically biased around extended source regions, namely higher significance levels are artificially imposed for detection with respect to other image regions free of diffuse emission. Under these conditions the source is likely to be undetected, particularly if it has a large extension. Ideally, the source extraction task should provide a two-level hierarchy of information: a segmentation of the input map into background and foreground regions associated with a source object and, for each of them, a collection of nested regions representing source features (e.g. clumps, shells, blobs) at different scales. To this end, we designed a multi-stage method based on image superpixel generation and hierarchical clustering. A schematic pipeline of the algorithm stages is shown in Fig.~\ref{SourceFinderPipelineFig} and summarized below: \begin{enumerate} \item \emph{Filtering}: To enhance extended structures, bright compact sources need to be filtered out from the map and a residual image generated and used as input for the following stages. Compact source extraction, discussed in more detail in Section~\ref{CompactSourceFindingSect}, requires the computation of the background and noise maps to threshold the image at a suitable significance level. Furthermore, a smoothing stage is introduced on the residual image to suppress texture-like features due to imaging artefacts around the brightest sources and to source residuals left after the dilation stage. An edge-preserving guided filter \citep{He2013} was found to provide the best performance among the tested filters. \item \emph{Extended source extraction}: The smoothed residual image is used as input for the segmentation algorithm described in Section \ref{SegmentationAlgorithmSection}.
It consists of three main stages: firstly, an over-segmentation of the image into a collection of superpixels or regions is generated and a set of appearance parameters (both intensity- and spatial-based) is computed for each region; secondly, a saliency map is computed from region dissimilarities and used to drive the region merging of the third stage, a sequence of clustering steps producing a collection of segmented regions or a binary mask as the final output. \item \emph{Source parametrization}: A set of morphological parameters is calculated over the segmented regions and delivered to the user. \end{enumerate} Additional details concerning each algorithm step are given in the following sections. \begin{figure*} \subfloat[Field A\label{ScorpioFieldFig1}]{\includegraphics[scale=0.28]{ScorpioS17Field_bw_finalPaperVersion_light.pdf}} \hspace{0.2cm} \subfloat[Field B\label{ScorpioFieldFig2}]{\includegraphics[scale=0.28]{ScorpioSNRFieldZoom_bw_finalPaperVersion_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field C\label{ScorpioFieldFig3}]{\includegraphics[scale=0.28]{ScorpioExtendedField_bw_finalPaperVersion_light.pdf}} \hspace{0.2cm} \subfloat[Field D\label{ScorpioFieldFig4}]{\includegraphics[scale=0.28]{ScorpioFaintSNRField_bw_finalPaperVersion_light.pdf}} \caption{Sample SCORPIO fields (A-D) selected for algorithm testing. Flux units are reported in the z axis.} \label{ScorpioFieldFig} \end{figure*} \subsection{Background and noise estimation}\label{BkgFindingSect} As noted in Paper I, both background and noise levels are subject to variations throughout the image, due for example to diffuse emission around the Galactic plane or to the accuracy of the image reconstruction. Background and noise information is therefore estimated on a local basis using two alternative methods.
The first, conventional method assumes a rectangular grid of sample pixels and computes the local background and noise levels over a sampling box centered on each grid point. Robust background/noise estimators are generally considered, to reduce the bias caused by the possible presence of sources falling in the sampling box. For instance, \textit{Selavy} \citep{Whiting2009,Whiting2012} uses the median and the mean absolute deviation from the median (MAD), while the inter-quartile range is adopted in \textit{Aegean} \citep{Hancock2012}. Other methods apply the previous estimators iteratively, clipping until a pre-specified tolerance is reached, as in \textit{SExtractor} \citep{Bertin1996} or in Paper I. Several estimators are available in our program: median/MAD, biweight or $\sigma$-clipped estimators. Finally, a bicubic interpolation stage is carried out to derive local estimates on a pixel-by-pixel basis, i.e. the background and noise maps. The second method exploits the pixel spatial information, neglected by the conventional approach, along with the pixel intensity distribution to produce less biased noise/background estimates. Two different approaches were implemented. In the first, a superpixel partition of the image is generated (see Section~\ref{SegmentationAlgorithmSection} for more details) with a region size assumed comparable to the synthesised beam size. An outlier analysis, based on a robust estimate of the Mahalanobis distance \citep{Rousseeuw1990} in the region median-MAD parameter space, is then performed to detect significant regions (both positive and negative excesses), typically associated with sources or artefacts. Pixels belonging to those regions are marked and excluded from the background evaluation. The background and noise maps are finally computed as above by interpolating a robust estimator computed over background-tagged pixels in sampling boxes sliding through the entire image.
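The outlier analysis in the region median-MAD parameter space can be sketched as follows. For brevity this uses a plain mean/covariance in place of the robust Rousseeuw-type estimate cited above, and the feature values are synthetic:

```python
import numpy as np


def mahalanobis_outliers(features, threshold=3.0):
    """Flag regions whose (median, MAD) parameter vector lies far from the
    bulk, using squared Mahalanobis distances. A plain mean/covariance is
    used here for brevity; a robust variant (e.g. minimum covariance
    determinant) would be preferred in practice."""
    x = np.asarray(features, dtype=float)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    inv = np.linalg.inv(cov)
    d = x - mu
    d2 = np.einsum('ij,jk,ik->i', d, inv, d)  # batched quadratic form
    return d2 > threshold ** 2


rng = np.random.default_rng(0)
# 100 background-like regions (median ~0, MAD ~1) plus 3 source-like outliers
bkg = rng.normal([0.0, 1.0], [0.1, 0.05], size=(100, 2))
src = np.array([[2.0, 1.5], [2.5, 1.4], [1.8, 1.6]])
flags = mahalanobis_outliers(np.vstack([bkg, src]))
print(flags[-3:])  # the injected regions are flagged
```

Regions flagged this way are excluded from the background-tagged pixel set before the interpolation step.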
A second approach uses a flood-filling algorithm to detect and iteratively clip blobs at some predefined significance level (e.g. 5$\sigma$) with respect to the first-level estimate of the background and noise maps. The background and noise maps are re-computed at each iteration stage as described above. One or two iterations are typically sufficient. In practice, the first method can be safely used for bright compact source filtering, in which the background estimation is not required to be highly accurate. The second method should instead be preferred when searching for faint compact sources or when thresholding extended bright sources. The size of the sampling grid is conventionally chosen to achieve sufficient interpolation accuracy at moderate computational cost. The choice of the box size, instead, is often given in terms of the beam size (e.g. 10 or 20 times larger than the synthesised beam) and may have a considerable impact on the source extraction step: estimates computed on a small box could be severely biased by the presence of a source filling the box, while an overly large box could completely smooth out the local background/noise variations. In \citet{Huynh2012} the authors compared maps obtained by popular source finders, such as \textit{SFind} \citep{Hopkins2002}, \textit{SExtractor} \citep{Bertin1996} and \textit{Selavy} \citep{Whiting2009,Whiting2012}, and investigated the optimal parameter settings both for real and simulated data sets. However, they note that a completely automated procedure for background estimation, possibly independent of the distribution of sources, is still of crucial importance for future surveys.
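The iterative clipping scheme can be sketched as follows. This simplified version thresholds the whole image with a global median/MAD estimate instead of flood-filling blobs and interpolating over sliding boxes:

```python
import numpy as np


def clipped_bkg_noise(img, nsigma=5.0, n_iter=2):
    """Iteratively estimate background (median) and noise (MAD-based)
    levels, clipping pixels above nsigma at each pass. A whole-image
    sketch of the per-box procedure described in the text."""
    mask = np.ones(img.shape, dtype=bool)
    for _ in range(n_iter):
        bkg = np.median(img[mask])
        mad = np.median(np.abs(img[mask] - bkg))
        noise = 1.4826 * mad  # MAD-to-sigma conversion for Gaussian noise
        mask = img < bkg + nsigma * noise
    return bkg, noise


rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, size=(128, 128))
img[40:60, 40:60] += 50.0  # a bright extended source
bkg, noise = clipped_bkg_noise(img)
print(round(bkg, 2), round(noise, 2))  # close to the true 0.0 and 1.0
```

One or two passes suffice here as well: the first clip removes the source pixels, and the second estimate is then essentially unbiased.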
\begin{figure*} \subfloat[][Field A]{\includegraphics[scale=0.28]{MolongloS17Field_resampled_bw_finalPaperVersion_light.pdf}\label{MolongloScorpioFieldFig1}} \hspace{0.1cm} \subfloat[][Field B]{\includegraphics[scale=0.28]{MolongloSNRField_resampled_bw_finalPaperVersion_light.pdf}\label{MolongloScorpioFieldFig2}}\\ \vspace{-0.4cm} \subfloat[][Field C]{\includegraphics[scale=0.28]{MolongloExtendedField_resampled_bw_finalPaperVersion_light.pdf}\label{MolongloScorpioFieldFig3}} \hspace{0.1cm} \subfloat[][Field D]{\includegraphics[scale=0.28]{MolongloFaintSNRField_resampled_bw_finalPaperVersion_light.pdf}\label{MolongloScorpioFieldFig4}} \caption{Sample fields (A-D) selected for algorithm testing as observed in the Molonglo Galactic Plane Survey. Flux units are reported in the z axis.} \label{MolongloScorpioFieldFig} \end{figure*} \subsection{Filtering compact sources}\label{CompactSourceFindingSect} The presence of bright sources in the image significantly complicates the extended source detection task. We therefore implemented a filtering stage to remove them, based on the following steps. Blobs of connected pixels are first extracted from the image using a flood-filling procedure similar to that carried out in the \textit{Aegean} \citep{Hancock2012} and \textit{Blobcat} \citep{Hales2012} source finders. A high seed threshold above the computed background is adopted, e.g. 10$\sigma$, and pixels are aggregated down to a merge threshold, e.g. 2.6$\sigma$. Each detected blob is subjected to a further search to identify nested blobs. These are extracted by thresholding the image curvature map $\kappa$, obtained by convolving the image with a Laplacian-of-Gaussian (LoG) kernel, at some pre-specified threshold level (e.g. $\kappa >$0) or adaptively. A 2-level hierarchy of blobs is finally obtained. A set of morphological parameters (e.g.
contour parameters, moments, shape descriptors, etc.) is computed over the detected blobs and selection cuts are applied to identify point-like candidate sources. For example, blobs with too large a number of pixels or with an anomalously elongated shape typically fail to pass the point-like cut. Blobs tagged as ``point-like'' are removed from the input image using a morphological dilation operator with configurable kernel shape (e.g. elliptical or square) and size, as suggested in \citet{Peracaula2015}, and replaced with a random background realization. A kernel size larger than 5 pixels was assumed to prevent the source halo pixels from further affecting the residual image. \begin{figure*} \subfloat[Field A]{\includegraphics[scale=0.22]{ScorpioS17SegmResults_saliency_large_light.pdf}} \hspace{-0.cm} \subfloat[Field A]{\includegraphics[scale=0.22]{ScorpioS17SegmResults_large_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field B]{\includegraphics[scale=0.22]{ScorpioSNRSegmResults_saliency_large_light.pdf}} \hspace{-0.cm} \subfloat[Field B]{\includegraphics[scale=0.22]{ScorpioSNRSegmResults_large_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field C]{\includegraphics[scale=0.22]{ScorpioExtendedFieldSegmResults_saliency_large_light.pdf}} \hspace{-0.cm} \subfloat[Field C]{\includegraphics[scale=0.22]{ScorpioExtendedFieldSegmResults_large_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field D]{\includegraphics[scale=0.22]{ScorpioFaintSNRSegmResults_saliency_large_light.pdf}} \hspace{-0.cm} \subfloat[Field D]{\includegraphics[scale=0.22]{ScorpioFaintSNRSegmResults_large_light.pdf}} \caption{Segmentation results obtained for the test fields A-D (from top to bottom) assuming $l$=20 and $\beta$=1 (see text). Left: Saliency maps normalized to the range [0,1]; Right: Segmentation maps. Each segmented region is colored in the plot according to the mean of its pixel fluxes in mJy/beam units.
The white contour lines correspond to a manual segmentation generated by an expert astronomer.} \label{ScorpioSegmResultsFig} \end{figure*} \subsection{Segmentation algorithm}\label{SegmentationAlgorithmSection} We developed a segmentation algorithm for the extraction of extended sources, based on a superpixel segmentation algorithm followed by a hierarchical clustering stage that aggregates similar segments into the final candidate source regions. The algorithm steps are described below and a summary of the relevant algorithm parameters is reported in Table~\ref{AlgorithmParTable}: \begin{enumerate} \item \emph{Initialization}: Compute a set of filtered images to be used during the clustering stage, namely the image curvature $\kappa$ and an edge-sensitive map $\psi$. The latter can be alternatively obtained by convolving the input image with a set of Kirsch filters oriented along different directions or as the result of the Chan-Vese contour finding algorithm \citep{ChanVese2001}. \begin{figure*} \subfloat[Field E]{\includegraphics[scale=0.28]{ScorpioSparseField3_bw_finalPaperVersion_light.pdf}} \hspace{-0.cm} \subfloat[Field E - Residual]{\includegraphics[scale=0.28]{ScorpioSparseField3ResidualMap_light.pdf}} \hspace{-0.cm} \caption{Left: Sample SCORPIO field E selected for algorithm testing. Flux units are reported in the z axis; Right: Residual map, normalized to the range [0,1], obtained after applying the point-like source filtering and smoothing stages to the input map.} \label{SparseFieldResultsFig} \end{figure*} \item \emph{Superpixel segmentation}: In this stage the image is over-segmented into $N_{\texttt{R}}$ connected regions or superpixels using flux and spatial information as input observables. To this aim we made use of the \textit{Simple Linear Iterative Clustering (SLIC)} algorithm developed by \citet{Achanta2012}, which uses the k-means algorithm to cluster pixels according to an intensity and spatial proximity measure.
Segmentation is controlled by a set of input parameters, such as the desired superpixel size $l$, typically fixed to the smallest detail to be distinguished (e.g. close to the beam size to detect compact sources or larger to search for extended sources), the minimum number of pixels in a region ($N_{\texttt{min}}$) and a regularization parameter $\beta$ balancing spatial and intensity clustering in the distance measure $D_{ij}$ between a pixel $i$ and a superpixel center $j$: \begin{equation} D_{ij} = \sqrt{ D_{ij,c}^2 + \left(\frac{\beta}{l\times l}\right)^2 D_{ij,s}^2} \end{equation} $D_{ij,c}$ and $D_{ij,s}$ being the intensity and spatial Euclidean distances between pixel $i$ and superpixel $j$. Higher $\beta$ enhances the spatial proximity and favors more compact superpixels in the initial partition. In turn, lower $\beta$ favors clustering in intensity, producing superpixels with less regular shapes that adhere more tightly to the object contours. For each region $i$ an appearance parameter vector $\mathbf{x}_{i}=(\mu_{i},\sigma_{i},\mu_{i,\kappa},\sigma_{i,\kappa})$ is computed, with $\mu_{i}$ and $\mu_{i,\kappa}$ denoting respectively the mean flux and curvature of the pixels belonging to region $i$, while $\sigma_{i}$ and $\sigma_{i,\kappa}$ are their standard deviations. With this parameter choice, the computation and update of the region parameters after a merging can be done iteratively in a very fast way, namely without partially sorting the region pixel vector as in the case of median and MAD estimators. \item \emph{Saliency map estimation}: A saliency map is estimated in this step to enhance significant objects in the input image with respect to the background.
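As an illustration, the SLIC distance measure $D_{ij}$ defined in the superpixel segmentation step above can be sketched as follows (a minimal Python sketch under the stated definitions, not the actual \texttt{CAESAR} implementation):

```python
import numpy as np

def slic_distance(pix, center, l, beta):
    """Combined SLIC distance between a pixel and a superpixel center.

    pix, center: (flux, x, y) triples; l: desired superpixel size;
    beta: regularization balancing intensity vs. spatial distance.
    Illustrative sketch of the D_ij formula, not the CAESAR code.
    """
    d_c = abs(pix[0] - center[0])                           # intensity distance
    d_s = np.hypot(pix[1] - center[1], pix[2] - center[2])  # spatial distance
    return np.sqrt(d_c**2 + (beta / (l * l))**2 * d_s**2)
```

In this form, a higher $\beta$ gives more weight to the spatial term of the distance, which in the full k-means clustering yields more compact superpixels.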
Following \citet{Zhang2013}, a saliency estimator $S_i$ is computed for each region as: \begin{equation} S_i= 1-\exp\left(-\frac{1}{K}\sum_{j=1}^{K}\delta_{ij}\right)\;\;\;\;\;\;\;\;\;\;\;\delta_{ij}=\frac{d_{ij,c}}{1+d_{ij,s}} \end{equation} where $d_{ij,c}$ is the Euclidean distance between the appearance vectors $\mathbf{x}_{i}$ and $\mathbf{x}_{j}$ of regions $i$ and $j$, and $d_{ij,s}$ the distance between their centroids. The sum is computed over the $K$ nearest neighbors of region $i$, typically 10\% or 20\% of the total number of regions. Salient objects are expected to have their similar pixels confined in space, whereas similar pixels belonging to the background are spread more widely across the image. To detect salient features at different scales, we combined saliency maps computed at different resolutions, e.g. corresponding to initial partitions with different superpixel sizes. Finally, multi-resolution saliency maps are combined with the computed local noise and background maps, which are also found to be sensitive to the diffuse emission. A saliency map with almost full pixel resolution is finally determined. \item \emph{Superpixel tagging}: Each pixel $i$ is tagged as a background/object/untagged candidate according to adaptive threshold levels on its saliency $S_i$: \begin{equation} \texttt{tag}_i= \left\{ \begin{array}{ll} \texttt{background} & S_i<S_{\texttt{thr}}^{\texttt{bkg}} \vspace{0.2cm} \\ \texttt{object} & S_i>S_{\texttt{thr}}^{\texttt{sig}} \vspace{0.2cm} \\ \texttt{untagged} & \text{otherwise} \end{array} \right. \end{equation} Different saliency thresholding approaches are possible. One of the most used in saliency studies \citep{Achanta2009,Perazzi2012,Kim2014,Zhang2013} assumes a global adaptive threshold of the kind $S_{\texttt{thr}}^{\texttt{bkg,sig}}=f_{\texttt{thr}}^{\texttt{bkg,sig}}\times\langle S\rangle$, where $\langle S\rangle$ is the average (or median) saliency of the map and $f$ is a numerical factor (e.g.
$f$=1 for the background and $f$=2 for the signal; \citet{Achanta2009,Zhang2013}). After several tests performed on different maps we obtained optimal results by combining different global threshold measures: \begin{equation} S_{\texttt{thr}}^{\texttt{sig}}= \texttt{max}\{f_{\texttt{thr}}^{\texttt{sig}}\times\langle S\rangle,\texttt{min}\{S_{\texttt{thr}}^{\texttt{Otsu}},S_{\texttt{thr}}^{\texttt{valley}} \}\} \end{equation} where $S_{\texttt{thr}}^{\texttt{Otsu}}$ is the threshold level computed through the Otsu method (e.g. see \citealt{Sezgin2004} for a review of thresholding methods) and $S_{\texttt{thr}}^{\texttt{valley}}$ is the threshold corresponding to the first valley detected in the pixel saliency histogram. The threshold level factor $f_{\texttt{thr}}^{\texttt{sig}}$ is chosen as a trade-off between false detection rate and object detection efficiency. An alternative, more computationally expensive approach is to employ the local adaptive thresholding method also used for compact source extraction, with or without outlier rejection. Superpixels are finally tagged as background, object or untagged candidates according to the majority of their pixel tags. \item \emph{Superpixel graph}: Identify the 1st- and 2nd-order neighbors of each region $i$=1,\dots,$N_{\texttt{R}}$ and build a corresponding link graph as described in \citet{Bonev2014}. By 1st-order neighbors, we denote the regions surrounding and sharing a border with region $i$. For each region link $i-j$ in the graph, compute an edgeness parameter $E_{ij}$ related to the amount of edge present along the shared border between regions $i$ and $j$. For 1st-order neighbors, this is estimated by taking the average of $\psi$ over the pixels located on the shared boundary, while for 2nd-order neighbors, it assumes the largest value present in the $\psi$ map.
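For illustration, the per-region saliency estimator $S_i$ introduced in the saliency estimation step above can be sketched as follows (a simplified Python sketch in which the $K$ neighbors are taken as the spatially nearest regions; not the \texttt{CAESAR} code):

```python
import numpy as np

def region_saliency(features, centroids, knn_frac=0.2):
    """Per-region saliency S_i = 1 - exp(-mean_j d_c/(1+d_s)), the mean
    taken over the K nearest-neighbor regions (K = knn_frac * N).
    features: appearance vectors x_i; centroids: region centroids."""
    X = np.asarray(features, dtype=float)
    C = np.asarray(centroids, dtype=float)
    n = len(X)
    K = max(1, int(knn_frac * n))
    S = np.zeros(n)
    for i in range(n):
        d_c = np.linalg.norm(X - X[i], axis=1)   # appearance distances
        d_s = np.linalg.norm(C - C[i], axis=1)   # spatial distances
        order = np.argsort(d_s)[1:K + 1]         # K nearest neighbors (skip self)
        delta = d_c[order] / (1.0 + d_s[order])
        S[i] = 1.0 - np.exp(-delta.mean())
    return S
```

A region whose appearance differs from its spatial neighbors receives a saliency close to 1, while regions blending into their surroundings stay close to 0, consistent with the intuition that salient objects are spatially confined.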
Let us consider an asymmetric dissimilarity measure $\Delta_{ij}$ between neighbor regions $i$ and $j$ given by: \begin{equation} \Delta_{ij}= (1-\lambda)d(\mathbf{x}_{i},\mathbf{x}_{i\cup j}) + \lambda E_{ij} \end{equation} where $d(\cdot,\cdot)$ is the Euclidean distance between feature vectors, $E_{ij}$ the edgeness parameter and $\lambda$ a regularization parameter balancing distance and edgeness weights in $\Delta_{ij}$. The above measure expresses the change of feature vector $\mathbf{x}_{i}$ caused by a potential merging with region $j$, which is favored when the distance between feature vectors is small and penalized when there is a border in between the two regions. Note that $\Delta_{ij}\neq\Delta_{ji}$. Compute the adjacency matrix $\mathbf{A}$ of the graph with elements $a_{ij}$: \begin{equation} a_{ij}= \frac{\Delta_{ij}^{-1}}{\sum_{j}\Delta_{ij}^{-1}} \end{equation} properly normalized to express a transition probability from node $i$ to $j$. \item \emph{Superpixel merging}: Following \citet{Ning2010} and \citet{Zhang2013}, merge superpixels on the basis of a maximum similarity criterion by iterating the following steps until no more merging is possible: \begin{enumerate} \item Merge untagged regions into candidate background regions if their similarity is maximal among neighbor similarities. \item Adaptively merge untagged regions if their similarity is maximal among neighbor similarities. \end{enumerate} Untagged regions shrink during these steps, while background regions grow; signal-tagged regions are not affected. The superpixel parameter vectors and graph (neighbor links, dissimilarity/adjacency matrix) are updated after each iterated merging stage. When no more merging is favored, all the remaining untagged regions are labeled as signal candidates. This stage always converges to assign all regions to either background or signal.
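The dissimilarity measure $\Delta_{ij}$ and the row-normalized adjacency matrix defined above can be sketched as follows (an illustrative Python sketch, not the actual implementation):

```python
import numpy as np

def dissimilarity(x_i, x_merged, edgeness, lam=0.5):
    """Delta_ij = (1-lambda)*d(x_i, x_{i u j}) + lambda*E_ij: change of the
    region feature vector under a tentative merging, penalized by the
    edgeness of the shared border."""
    d = np.linalg.norm(np.asarray(x_i, float) - np.asarray(x_merged, float))
    return (1.0 - lam) * d + lam * edgeness

def adjacency(Delta):
    """Row-normalize inverse dissimilarities into transition probabilities
    a_ij = Delta_ij^-1 / sum_j Delta_ij^-1 (off-diagonal entries only)."""
    D = np.asarray(Delta, dtype=float)
    inv = np.where(D > 0, 1.0 / np.where(D > 0, D, 1.0), 0.0)
    np.fill_diagonal(inv, 0.0)
    rows = inv.sum(axis=1, keepdims=True)
    return np.divide(inv, rows, out=np.zeros_like(inv), where=rows > 0)
```

Each row of the resulting matrix sums to one, so it can be read as the transition probability from node $i$ to its neighbors, with small dissimilarities mapping to large transition probabilities.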
A suitable superpixel merging order for each of the steps described above is determined as in \citet{Bonev2014} using the Google PageRank algorithm \citep{Brin1998} on the transition matrix $\mathbf{A}$, that is solving the following equation: \begin{equation} \mathbf{p} = (1-d)\mathbf{e} + d\mathbf{A}^{T}\mathbf{p} \end{equation} in which $\mathbf{p}=(p_{1},p_{2},\dots,p_{N_{\texttt{R}}})$ is the desired vector with rank values (the principal eigenvector of $\mathbf{A}$), $d$ is the damping factor which can be set to a value between 0 and 1 (e.g. $d$=0.85 as in \citet{Brin1998,Page1999}) and $\mathbf{e}$ is a column vector of all 1's. The equation is solved by using the power iteration method \citep{Golub1983}. $\mathbf{p}$ is then sorted to select the highest-ranked nodes for merging. \item \emph{Source selection}: In this step sources are identified from the collection of signal candidate regions selected in the previous stage. Following \citet{Bonev2014} the most similar signal regions are hierarchically clustered if their mutual dissimilarities ($\Delta_{ij}$, $\Delta_{ji}$) are within a pre-specified tolerance. Only a percentage (e.g. 30\%) of the top-ranked mergings is allowed at each clustering iteration. A practical merging criterion is to always allow 1st-order neighbors to merge (in effect a flood-fill approach over superpixels) and to assume a tolerance for 2nd-order neighbors. Region parameter vectors and the dissimilarity/adjacency matrix are updated at each iteration stage and stop conditions are checked. If no regions are merged at the current hierarchy level or the remaining number of regions is below a specified threshold, the algorithm stops and the final segmentation is returned to the user, otherwise a new iteration is started. \item \emph{Post-processing}: Some post-processing stages can be performed on the detected sources.
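The PageRank power iteration used above to order the superpixel mergings can be sketched as follows (a minimal Python sketch; here the rank vector is renormalized at each step, which rescales the fixed point of the paper's equation but preserves the node ordering):

```python
import numpy as np

def pagerank(A, d=0.85, tol=1e-10, max_iter=1000):
    """Solve p = (1-d)*e + d*A^T p by power iteration.
    A: row-stochastic transition matrix; d: damping factor.
    Illustrative sketch of the merging-order computation."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    p = np.full(n, 1.0 / n)
    e = np.ones(n)
    for _ in range(max_iter):
        p_new = (1.0 - d) * e + d * A.T @ p
        p_new /= p_new.sum()            # keep the rank vector normalized
        if np.abs(p_new - p).max() < tol:
            break
        p = p_new
    return p
```

Sorting the returned vector in decreasing order then yields the highest-ranked nodes, i.e. the superpixels to be considered first for merging.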
A first step uses the hierarchical clustering approach described above to identify similar regions within each source and generate a list of nested sources one level down in the source hierarchy. Further, following \citet{Yang2008}, a number of statistical and morphology-descriptor parameters are computed over the source contour and/or its pixel distribution to be eventually employed in a source classification stage. Standard parameters include bounding box/ellipse, image/contour moments and roundness/rectangularity estimators. More complex parameters, such as Fourier Descriptors (FDs) \citep{Zhang2003}, Hu \citep{Hu1962} and Zernike moments \citep{Singh2011}, can be computed and supplied to the user. \end{enumerate} \begin{table*} \caption{Main parameters used in the source finder algorithm.} \begin{tabular}{p{1.5cm}|p{1.5cm}|p{8cm}} \toprule% \textbf{Stage} & % \textbf{Parameter} & % \textbf{Description}\\% \hline% \multirow{2}{*}{\pbox{1.5cm}{\texttt{Background}}} & \texttt{bkgModel} & \pbox{8cm}{\vspace{0.1cm}Model to be used for computing the background and noise maps (1=global, 2=local, 3=local robust).\vspace{0.1cm}}\\ \cline{2-3}% & \texttt{boxSize} & \pbox{8cm}{\vspace{0.1cm}Size of the box used to compute local background/noise estimators.\vspace{0.1cm}}\\ \cline{2-3}% & \texttt{gridSize} & \pbox{8cm}{\vspace{0.1cm}Size of the grid used when interpolating the local background/noise estimators.\vspace{0.1cm}}\\ \midrule \multirow{2}{*}{\pbox{1.5cm}{\texttt{Filtering}}} & \pbox{1.5cm}{\textbf{$\sigma_{\texttt{seed}}$ $\sigma_{\texttt{merge}}$}} & \pbox{8cm}{\vspace{0.1cm}Seed and merge threshold used to detect compact bright blobs in the image, e.g.
$\sigma_{\texttt{seed}}$=10, $\sigma_{\texttt{merge}}$=2.5.\vspace{0.1cm}}\\ \cline{2-3}% & \textbf{$K_{\texttt{dilate}}$} & \pbox{8cm}{Kernel size to be used when dilating bright sources.}\\ \cline{2-3}% & \textbf{$\sigma_{\texttt{smooth}}$ $K_{\texttt{smooth}}$} & \pbox{8cm}{\vspace{0.1cm}Kernel and radius parameter to be used in image residual smoothing.\vspace{0.1cm}}\\ \hline% \multirow{2}{*}{\pbox{1.5cm}{\texttt{Superpixel}\\ \texttt{Generation}}} & \textbf{$l$} & Superpixel size used to generate the initial superpixel partition.\\ \cline{2-3}% & \textbf{$\beta$} & \pbox{8cm}{\vspace{0.1cm}Regularization parameter controlling the initial superpixel segmentation and balancing spatial and intensity distance in the clustering. Low $\beta$ values favor intensity clustering, high $\beta$ values favor spatial clustering.\vspace{0.1cm}}\\ \hline% \multirow{2}{*}{\pbox{1.5cm}{\texttt{Saliency}\\ \texttt{Filter}}} & \textbf{$l_{\texttt{min/max/step}}$} & Superpixel sizes to be used in multi-resolution saliency computation, e.g. $l$=20-60, step 10.\\ \cline{2-3}% & \textbf{$\texttt{knn}$} & \pbox{8cm}{\vspace{0.1cm}Fraction of nearest-neighbor superpixels used in saliency estimation, e.g. $\texttt{knn}$=10\%/20\%.\vspace{0.1cm}}\\ \cline{2-3}% & \textbf{$f_\texttt{sal}^\texttt{scales}$} & \pbox{8cm}{\vspace{0.1cm}Fraction of salient scales required to contribute to the final saliency estimation, e.g.
$f_\texttt{sal}^\texttt{scales}$=70\%.\vspace{0.1cm}}\\ \cline{2-3}% & \texttt{useCurvMap} & Flag to include (multi-scale) curvature maps in saliency estimation\\ \cline{2-3}% & \texttt{useBkgMap} & Flag to include (multi-scale) background map in saliency estimation\\ \cline{2-3}% & \texttt{useNoiseMap} & Flag to include (multi-scale) noise map in saliency estimation\\ \cline{2-3}% & \texttt{salThrModel} & Method to be used for thresholding the final saliency map (1=global, 2=local, 3=local robust)\\ \cline{2-3}% & \textbf{$f_{\texttt{thr}}^{\texttt{bkg}}$} & \pbox{8cm}{\vspace{0.1cm}Global threshold parameter to tag background pixel candidates in the saliency map, e.g. $f_{\texttt{thr}}^{\texttt{bkg}}$=1.\vspace{0.1cm}}\\ \cline{2-3}% & \textbf{$f_{\texttt{thr}}^{\texttt{sig}}$} & \pbox{8cm}{\vspace{0.1cm}Global threshold parameter to tag signal pixel candidates in the saliency map, e.g. $f_{\texttt{thr}}^{\texttt{sig}}$=2.\vspace{0.1cm}}\\ \hline% \multirow{2}{*}{\pbox{1.5cm}{ \texttt{Superpixel}\\\texttt{Merging}}} & \textbf{$\lambda$} & \pbox{8cm}{\vspace{0.1cm}Regularization parameter used in the superpixel merging stage, balancing appearance and edge terms when computing superpixel dissimilarities. Low $\lambda$ values (close to zero) favor intensity similarity, high $\lambda$ values (close to 1) favor edge penalization.\vspace{0.1cm}}\\ \cline{2-3}% & \texttt{Edge Model} & \pbox{8cm}{\vspace{0.1cm}Model to be used to compute superpixel edgeness (1=Kirsch, 2=Chan-Vese).\vspace{0.1cm}}\\ \cline{2-3}% & \textbf{$f_{\texttt{merge}}$} & \pbox{8cm}{\vspace{0.1cm}Fraction of top-ranked superpixels selected for merging at each hierarchy level, e.g. $f_{\texttt{merge}}$=30\%.\vspace{0.1cm}}\\ \cline{2-3}% & $\varepsilon_{\texttt{merge}}^{\texttt{1st,2nd}}$ & \pbox{8cm}{\vspace{0.1cm}Maximum mutual dissimilarity tolerance used to accept a selected superpixel merging for 1st- or 2nd-order neighbor superpixels, e.g.
5-15\%.\vspace{0.1cm}}\\ \cline{2-3}% & $\Delta_{\texttt{thr}}$ & \pbox{8cm}{\vspace{0.1cm}Absolute dissimilarity threshold, when applied, used to accept or reject a selected superpixel merging ($\Delta_{ij}\le\Delta_{\texttt{thr}}$). Low $\Delta_{\texttt{thr}}$ values (close to zero) imply strict superpixel similarity for merging. High $\Delta_{\texttt{thr}}$ values relax the merging.\vspace{0.1cm}}\\ \bottomrule \end{tabular} \label{AlgorithmParTable}% \end{table*} \subsection{Algorithm implementation} The described algorithms have been implemented in a C++ software library, dubbed \texttt{CAESAR}{} (\emph{Compact And Extended Source Automated Recognition}{}), allowing image filtering, background estimation, source finding and image segmentation starting from images in FITS or ROOT format. The library is mainly based on the \textit{ROOT} \citep{Brun1997} and \textit{R} \citep{R} frameworks for statistical objects and methods and on the \textit{OpenCV} library \citep{Bradski2000} for some of the image filtering algorithms. The source finding and segmentation algorithms have been developed from scratch along with some of the employed filtering stages. Future developments include algorithm fine-tuning and optimization and further design activities for ease of deployment in a distributed computing infrastructure and integration within the pipeline frameworks of next-generation telescopes. Public distribution is planned once the optimization steps are carried out.
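As a concrete illustration of the moment-based shape descriptors mentioned in the post-processing stage, the first two Hu invariant moments can be computed from scale-normalized central moments (a self-contained Python sketch; a production pipeline would typically rely on OpenCV or an equivalent library):

```python
import numpy as np

def hu_moments(img):
    """First two Hu invariant moments of a binary/intensity mask.
    Illustrative sketch of the shape descriptors used for source
    characterization; not the CAESAR implementation."""
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                 # central moments
        return ((x - xc)**p * (y - yc)**q * img).sum()

    def eta(p, q):                # scale-normalized moments
        return mu(p, q) / m00**(1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2))**2 + 4.0 * eta(1, 1)**2
    return h1, h2
```

By construction, these descriptors are invariant under translation and rescaling of the source footprint, which is the property that makes them attractive for morphology-based classification.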
\begin{figure*} \subfloat[Field B]{\includegraphics[scale=0.23]{ScorpioSNRSourceDeblendResults_last_light.pdf}\label{PostProcessingResultsFig1}} \hspace{-0.2cm} \subfloat[Field C]{\includegraphics[scale=0.23]{ScorpioExtendedFieldDeblendResults_last_light.pdf}\label{PostProcessingResultsFig2}} \hspace{-0.2cm} \subfloat[Field D]{\includegraphics[scale=0.23]{ScorpioFaintSNRDeblendResults_last_light.pdf}\label{PostProcessingResultsFig3}}\\ \vspace{-0.4cm} \subfloat{\includegraphics[scale=0.5]{ZernikeMoments_paperFinalVersion.pdf}\label{PostProcessingResultsFig4}} \caption{Top panels: Sample segmented source images, normalized to range [0,1], in fields B, C and D (solid black contours). White contours represent nested regions selected with a multi-resolution saliency-based method (solid lines) and with a multi-scale blob detector (dashed lines); Bottom panels: Zernike moments up to order $n$=4 computed over the segmented sources shown in the upper panels (black contoured area).} \label{PostProcessingResultsFig} \end{figure*} \section{Application to SCORPIO project data}\label{ResultSection} \subsection{Sample fields}\label{SCORPIOSampleData} To test the designed algorithm we considered four selected fields from the SCORPIO map in which several extended structures are present along with compact sources. The map is built as described in Paper I using data observed with the ATCA 0.75A array configuration in combination with data observed with the ATCA EW367 configuration, in which shorter baselines are present. The effective frequency range of the radio data used is 1.4-3.1 GHz. The sample fields, hereafter denoted as fields A-D, are shown in Fig.~\ref{ScorpioFieldFig}, and some details are reported below: \begin{itemize} \item \emph{Field A} (Fig.~\ref{ScorpioFieldFig1}): Field A (1000$\times$1000 pixels) is centered on the [DBS2003] 176 Galactic stellar cluster (l=343.4830$^\circ$, b=-00.0380$^\circ$, angular size=1.45 arcmin).
Two bubble objects, S16 and S17 \citep{Churchwell2006}, are associated with the cluster but only S17 is observed in the radio domain. Two bright point-like radio sources (SCORPIO1\_320 and SCORPIO1\_300), already known objects in the radio domain, were identified in Paper I. SCORPIO1\_300 is located within the S17 bubble and has a peak flux around 0.04 Jy/beam. The brighter SCORPIO1\_320 (peak flux$\sim$0.14 Jy/beam) has been tentatively classified as a Massive Young Stellar Object (MYSO) candidate \citep{Urquhart2007}. \item \emph{Field B} (Fig.~\ref{ScorpioFieldFig2}): Field B (1600$\times$1850 pixels) is centered on the Supernova Remnant (SNR) G344.7-0.1, located in the vicinity of the high-energy $\gamma$-ray source HESSJ1702-420 (see \citealt{Giacani2011}). Close to the SNR, in the north-east region of the image, another region of extended emission is present, most probably associated with the MSC 345.1-0.2 supernova remnant candidate ($l$=345.062, $b$=-0.218 according to the MOST MSC survey at 843 MHz; \citealt{Whiteoak1996}). \item \emph{Field C} (Fig.~\ref{ScorpioFieldFig3}): Field C (1000$\times$1000 pixels) was analyzed in detail in Paper I. Some of the extended regions of emission present were associated with the following IRAS sources: IRAS 16566-4204, IRAS 16573-4214, IRAS 16561-4207. The first is recognized as a massive star formation region, while the classification is uncertain for the others. \item \emph{Field D} (Fig.~\ref{ScorpioFieldFig4}): Field D (1000$\times$1000 pixels) is centered on the faint SNR candidate MSC G345.1+0.2. Below it, a more intense emission is present, associated with the G345.097+00.136 HII region. \end{itemize} An additional control field, free of extended sources and denoted as field E, is considered to study the algorithm response in the absence of any expected signal and to tune the detection thresholds. Field E is reported in Fig.~\ref{SparseFieldResultsFig} (left panel).
This map is built using data observed with the ATCA 0.75A array configuration alone. Due to the larger minimum baseline available, extended and diffuse sources are strongly filtered out. \begin{figure*} \centering \subfloat[Field A]{\includegraphics[scale=0.22]{MolongloS17FieldSegmResults_saliency_large_light.pdf}} \hspace{0.cm} \subfloat[Field A]{\includegraphics[scale=0.22]{MolongloS17FieldSegmResults_large_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field B]{\includegraphics[scale=0.22]{MolongloSNRFieldSegmResults_saliency_large_light.pdf}} \hspace{0.cm} \subfloat[Field B]{\includegraphics[scale=0.22]{MolongloSNRFieldSegmResults_large_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field C]{\includegraphics[scale=0.22]{MolongloExtendedFieldSegmResults_saliency_large_light.pdf}} \hspace{0.cm} \subfloat[Field C]{\includegraphics[scale=0.22]{MolongloExtendedFieldSegmResults_large_light.pdf}}\\ \vspace{-0.4cm} \subfloat[Field D]{\includegraphics[scale=0.22]{MolongloFaintSNRSegmResults_saliency_large_light.pdf}} \hspace{0.cm} \subfloat[Field D]{\includegraphics[scale=0.22]{MolongloFaintSNRSegmResults_large_light.pdf}} \caption{Segmentation results obtained for the Molonglo sample fields A-D (from top to bottom) assuming $l$=5 and $\beta$=1. Left: Saliency maps normalized to range [0,1]; Right: Segmentation maps. Each segmented region is colored in the plot according to the mean of its pixel fluxes in mJy/beam units. The contours shown with solid white lines correspond to a manual segmentation generated by an expert astronomer.} \label{MolongloSegmResultsFig} \end{figure*} As discussed in Paper I, the regions of extended emission present in the test fields A-D are in a few cases firmly associated with real source objects or candidates. In most cases, however, no association with known sources has been established and an artefact nature cannot be excluded a priori without further insight and comparison to other surveys carried out with different telescopes or in different wavelength domains.
As a result, no ground truth information at the pixel level is available to quantify the algorithm performance in terms of widely used measures, such as the identification efficiency and false detection rate. The quality of the reconstruction will therefore be compared to a human-driven segmentation generated for each sample image by an expert astronomer. To enhance the source/artefact discrimination capabilities, we considered the same sample scenarios as observed in the Molonglo Galactic Plane Survey (MGPS) at 843 MHz, reported in Fig.~\ref{MolongloScorpioFieldFig}. The rms sensitivity over the survey is around 1-2 mJy/beam and the positional accuracy is 1-2". The lower resolution is evident, particularly in fields B and C, in which some of the extended regions present in SCORPIO are not fully resolved and are detected as compact sources in the source finding stage. On the other hand, due to the lower observing frequency, regions of extended emission are brighter and can be detected at higher significance levels. Furthermore, it is unlikely that the same imaging artefacts appear in both surveys, which are conducted with different telescopes. Common emission features can therefore be considered real with a high degree of confidence.
\begin{figure*} \subfloat[Field B - \textit{Aegean}, \textit{Blobcat}]{\includegraphics[scale=0.27]{ScorpioSNRFieldZoom_AegeanResults_finalPaperVersion_light.pdf}\label{OtherMethodResultsFig1}} \hspace{-0.1cm} \subfloat[Field D - \textit{Aegean}, \textit{Blobcat}]{\includegraphics[scale=0.27]{ScorpioFaintSNRField_AegeanResults_finalPaperVersion_light.pdf}\label{OtherMethodResultsFig4}}\\ \vspace{-0.4cm} \subfloat[Field B - \textit{Chan-Vese}]{\includegraphics[scale=0.27]{ScorpioSNRFieldZoom_CVResults_finalPaperVersion_light.pdf}\label{OtherMethodResultsFig2}} \hspace{-0.1cm} \subfloat[Field D - \textit{Chan-Vese}]{\includegraphics[scale=0.27]{ScorpioFaintSNRField_CVResults_finalPaperVersion_light.pdf}\label{OtherMethodResultsFig5}}\\ \vspace{-0.4cm} \subfloat[Field B - \textit{SWT}]{\includegraphics[scale=0.27]{ScorpioSNRFieldZoom_WTResults_finalPaperVersion_light.pdf}\label{OtherMethodResultsFig3}} \hspace{-0.1cm} \subfloat[Field D - \textit{SWT}]{\includegraphics[scale=0.27]{ScorpioFaintSNRField_WTResults_finalPaperVersion_light.pdf}\label{OtherMethodResultsFig6}} \caption{Source finding results obtained with three different algorithms over field B (left panels) and field D (right panels) compared to the human segmentation (solid white contours); Top: Results obtained with the Aegean (dotted green contours) and Blobcat source finders (dashed red contours); Center: Results obtained with the Chan-Vese algorithm (dotted green contours); Bottom: Results obtained with the Stationary Wavelet Transform (SWT) method at scale $J$=5 (dotted green contours) and $J$=6 (dashed red contours).} \label{OtherMethodResultsFig} \end{figure*} \subsection{Results} We applied the designed segmentation algorithm to the selected test fields described in Section~\ref{SCORPIOSampleData}. Multiple runs were performed under different choices of the algorithm parameters.
The quality of the segmentation was visually inspected against the human segmentation and a suitable choice of the algorithm parameters was selected on the basis of the maximum number of expected objects detected in all test fields at the corresponding minimum false detection rate. A minimum region size $l$ for the initial segmentation equal to $l\sim4\times$\texttt{beam} (equivalent to $l$=20 pixels) was considered. Smaller values (e.g. $l$=10 pixels), comparable to the beam size, were found to be too sensitive to small-scale structures (residual compact emission, artefacts) in the image and thus provided noisy segmentation results. Larger values, e.g. $l$=30-60 pixels, were investigated as well. As $l$ increases, small-scale details of the extended sources may be smoothed out. This does not represent an issue for fields A and B, in which the extended emission scale is larger by a factor of 4-5 compared to the minimum region size. Furthermore, a larger value of $l$ favors the merging of artefacts into the background region, e.g. in field B. The regularization parameter $\beta$, controlling the initial over-segmentation, was studied. Different values were considered ($\beta$=0.01, 1, 10, 100) in correspondence to all other scanned parameters. Results were found to be comparable for $\beta$=0.01-1, while for values above $\beta$=10 the superpixels start to assume very compact shapes and do not fit well to the object boundaries. The saliency maps computed for the SCORPIO sample fields using a multi-resolution range of $l$=20-60 pixels (step 10 pixels), in combination with background and noise maps, are shown in the left panels of Fig.~\ref{ScorpioSegmResultsFig}. It can be noted how the faint diffuse emission, previously hardly detectable without manually adjusting the map contrast, is significantly enhanced over the background after the saliency filter. The filter mostly preserves the expected object contours and only slightly smooths out small-scale details.
A thresholding procedure on these saliency maps provides the initial signal and background markers for the following algorithm stages. Suitable values of the global signal threshold factor $f_{\texttt{thr}}^{\texttt{sig}}$ were searched over all test samples. The choice of the threshold level was mainly driven by Field D and control Field E, and optimal values were found in the range 2.5-2.8. Higher values (up to 3.0) can be adopted for the other fields at the cost of missing parts of the faint SNR source in Field D and of the large diffuse emission in Field C. Overall, we have found that the thresholded saliency map alone already provides a reasonable source detection. It is also worth observing that saliency maps may constitute a valid input for different algorithms. Different choices of the similarity regularization parameter $\lambda$ were investigated: $\lambda$= 0, 0.1, 0.5. Results obtained with $\lambda$=0.1, 0.5 are overall comparable, with slightly better results obtained with $\lambda$=0.5, while worse results are obtained with $\lambda$=0. This analysis demonstrates that incorporating edge information in the algorithm improves the segmentation quality, even though the edges of radio objects are considerably softer than in natural images. The results of the segmentation stage are reported in the right panels of Fig.~\ref{ScorpioSegmResultsFig} for the four tested fields assuming $l$=20 pixels, $\beta$=1 and $\lambda$=0.5. Each segmented region is colored according to the mean of its pixel fluxes. The human segmentation is superimposed and shown with solid white contours. As can be seen, known objects and regions of diffuse emission are all identified and kept for later post-processing. The algorithm, at least with this choice of parameters, is also sensitive to other faint diffuse emission that was not identified in the human segmentation.
After a deeper inspection, some of these were clearly attributed to imaging artefacts present in the input map, particularly in field B, in which a poorly cleaned bright object outside the studied field pollutes the entire map. The nature of the remaining objects remains unclear even after a visual inspection. This kind of artefact represents a limitation of the current SCORPIO map release. Such artefacts can be removed in our analysis by increasing the threshold levels in the saliency map, at the cost of affecting source detection, especially in fields C and D. In Fig.~\ref{SparseFieldResultsFig} (right panel) we report the results obtained over test field E using the same algorithm parameters selected for fields A-D. The left panel shows the input map, while the right panel shows the map given to the segmentation algorithm after the compact source filtering and smoothing stage. As desired, no signal markers are found in the saliency map and thus no extended source detection is reported. An example of post-processing analysis, carried out for some relevant sources present in the test fields, is reported in Fig.~\ref{PostProcessingResultsFig}. The top panels show the identified sources (solid black line contours) with nested components detected using two different methods. Solid white line contours are obtained by thresholding a multi-resolution saliency map computed over the source pixels. Dashed white line contours are produced by a multi-scale blob detector approach, combining Laplacian of Gaussian (LoG) image filters at different scales. Other analyses are possible with the designed algorithm, e.g. running the hierarchical clustering over the source region to identify the most similar areas; these are not shown here. As discussed in Section~\ref{SegmentationAlgorithmSection} a set of parameters can be computed for each detected source, even the nested ones.
As an example, we report in the bottom panel of Fig.~\ref{PostProcessingResultsFig} the set of Zernike moments computed for the three sources up to the 4th order. Note how the moments are sensitive to the source morphology and can in principle be considered for classification studies in combination with the other computed parameters (beyond the purposes of this paper). A study of a suitable set of parameters and of their robustness to noise is planned using simulated data. \begin{figure} \centering% \includegraphics[scale=0.38]{SourceFluxComparison.pdf} \caption{Integrated fluxes $S$ of extended sources in the test fields A-D, reconstructed with three different algorithms (black dots: \texttt{CAESAR}, red squares: Chan-Vese, blue triangles: Wavelet Transform J=5), as a function of the human-driven segmentation flux $S_{h}$.} \label{SourceFluxComparisonFig} \end{figure} \subsection{Application to data at different wavelengths} To evaluate the results obtained on radio data collected at different wavelengths and detector resolutions/sensitivities, we considered the same test scenarios as observed in the Molonglo Galactic Plane Survey (MGPS) at 843 MHz, shown in Fig.~\ref{MolongloScorpioFieldFig}. We applied our method to the sample Molonglo fields using the same parameters considered in the analysis of the SCORPIO fields, with the following exceptions related to the lower resolution and size of the Molonglo maps. Smaller values of the superpixel size ($l$=5-10 pixels) can be assumed with respect to the SCORPIO maps, in which we considered a minimum value of $l$=20 pixels. Saliency maps have therefore been computed starting from the chosen minimum superpixel size up to a smaller maximum scale value compared to that assumed for the SCORPIO maps. A less aggressive initial smoothing filter is also assumed in this case. All the other algorithm parameters are left unchanged. The results are reported in Fig.~\ref{MolongloSegmResultsFig}.
Some of the extended sources present in the field are not resolved and are detected as compact sources in the pre-filtering stage. The white contours shown in the plots are therefore relative to the detectable extended sources. As can be seen, all the known sources are detected with high fidelity when compared to the superimposed human segmentation. Additional regions of diffuse emission are detected as well. At the present stage it is unclear whether they are real or, more probably, reconstruction artefacts. Overall, the results demonstrate that the method is flexible enough to be used with different data after a minor tuning of parameters driven by the data itself, mainly its sensitivity and resolution. \subsection{Results with different algorithms}\label{ResultsDifferentAlgorithms} It is valuable to consider what can be achieved on the observed SCORPIO fields with other existing algorithms. Such a test is useful because many of the available algorithms were tested with less sensitive radio data or benchmarked against simulated data neglecting the real background behavior and the Galactic Plane diffuse emission. Four different methods were considered and tested. The first two, \textit{Aegean} \citep{Hancock2012} and \textit{Blobcat} \citep{Hales2012}, use a flood-fill algorithm to detect blobs in the image, starting from pixels above a seed threshold $\sigma_{\texttt{seed}}$ ($\sigma_{\texttt{seed}}$=5) with respect to the background and aggregating adjacent pixels above a second, lower threshold $\sigma_{\texttt{merge}}$ ($\sigma_{\texttt{merge}}$=2.6). Blobs are finally deblended using curvature information. Background and noise maps were computed using the \textit{BANE} tool distributed within the \textit{Aegean} source finder.
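The two-threshold flood-fill scheme described above can be sketched as follows; this is a simplified illustration, as the real \textit{Aegean} and \textit{Blobcat} implementations additionally fit source components and deblend blobs using curvature information:

```python
import numpy as np
from scipy import ndimage

def floodfill_detect(image, bkg, rms, sigma_seed=5.0, sigma_merge=2.6):
    """Two-threshold blob detection: keep a connected island of pixels above
    the merge threshold only if it contains at least one pixel above the
    (higher) seed threshold."""
    snr = (image - bkg) / rms
    labels, nblobs = ndimage.label(snr > sigma_merge)
    keep = np.zeros(image.shape, dtype=bool)
    for i in range(1, nblobs + 1):
        blob = labels == i
        if snr[blob].max() > sigma_seed:  # seed condition satisfied
            keep |= blob                  # grow blob down to the merge level
    return keep

# Toy example: a 10-sigma peak with a 4-sigma wing, plus an isolated
# 3-sigma bump that never reaches the seed level.
img = np.zeros((20, 20))
img[5, 5], img[5, 6], img[15, 15] = 10.0, 4.0, 3.0
mask = floodfill_detect(img, bkg=0.0, rms=1.0)
```

The faint wing survives because it is connected to a seeded blob, while the isolated low-significance bump is rejected; this is exactly the behaviour that makes such finders effective for compact sources but biased against faint diffuse emission.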
A third method, adopted by \citet{Peracaula2011}, searches for blobs in the Stationary Wavelet Transform (SWT) of a residual image, obtained from the input map by replacing bright compact sources with a random background estimate. We implemented this method from scratch. Finally, an implementation of the Chan-Vese active contour algorithm \citep{ChanVese2001} was considered and tested over the sample data. The method iteratively evolves an initial contour until it converges on the boundaries of the foreground region. Contour evolution is performed by seeking a level set function that minimizes a fitting energy functional depending on a set of input parameters. In Fig.~\ref{OtherMethodResultsFig} we report the sources detected by the four methods (from top to bottom) in fields B (left panels) and D (right panels) in comparison with the human segmentation shown with solid white contours. \textit{Aegean} and \textit{Blobcat} results are comparable. As expected, both algorithms were found to perform very well in detecting bright and faint compact sources, including blended sources, but they are biased, by design, against extended sources. A 5$\sigma$ threshold was considered for source detection with the Wavelet method on two different scales $J$=5, 6. In these conditions, most of the bright extended sources present in the fields can be detected. Fainter features, such as parts of the supernova remnants or diffuse regions, cannot be well detected, at least at the specified significance level. The Chan-Vese algorithm was tested over the residual image under different choices of parameters and using a simple circular level set as initial contour. A pre-smoothing stage is applied to the input residual image. Contours surrounding areas of negative excess with respect to the background level were removed from the set of final detected contours.
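A toy version of the two-level (foreground/background) idea behind the Chan-Vese functional can be written as below; this sketch keeps only the piecewise-constant data-fitting term and drops the contour-length regularisation, so it reduces to alternately updating the two region means and reassigning pixels:

```python
import numpy as np

def two_level_segmentation(image, n_iter=50):
    """Piecewise-constant two-phase segmentation in the spirit of Chan-Vese,
    without the curvature (contour-length) term: each pixel is assigned to
    whichever of the two region means c1 (foreground) or c2 (background)
    fits it better, and the means are re-estimated until convergence."""
    fg = image > image.mean()            # crude initial "contour"
    for _ in range(n_iter):
        c1 = image[fg].mean() if fg.any() else 0.0
        c2 = image[~fg].mean() if (~fg).any() else 0.0
        new_fg = (image - c1)**2 < (image - c2)**2
        if np.array_equal(new_fg, fg):
            break                        # converged
        fg = new_fg
    return fg

# Toy example: a flat unit-brightness block on a zero background.
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
seg = two_level_segmentation(img)
```

The missing curvature term is what gives the true Chan-Vese evolution smooth contours and noise robustness; the sketch only illustrates the two-level fitting assumption whose limitations are discussed below.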
As can be seen, the extended source features missed by the other algorithms can be extracted with high accuracy compared to the human segmentation. Some imaging artefacts are also detected along with real sources even with the optimal choice of the Chan-Vese parameters. Overall, the Chan-Vese method was found to outperform the other three tested algorithms in fully detecting extended objects. In Fig.~\ref{SourceFluxComparisonFig} we compare the integrated flux of the extended sources present in the four fields A-D estimated with three different methods (\texttt{CAESAR}{}: black dots, Chan-Vese: red squares, Wavelet method at scale $J$=5: blue triangles) as a function of the flux estimated using the human-driven segmentation. A total of 30 source candidates were identified, hereafter denoted as the ``reference set''. Data are reported in the plot for each algorithm whenever a source was identified and a cross-match with the reference set was found. As can be seen, the estimated fluxes closely follow the reference; the observed spread in flux can be regarded as a measure of the contribution of the source reconstruction accuracy to the total flux uncertainty. Overall, better results are obtained with the \texttt{CAESAR}{} and Chan-Vese algorithms, which are able to detect fainter sources with respect to the Wavelet method and achieve a better accuracy in flux estimation. We are aware that we have not exhausted the list of all possible algorithms for extended source extraction and that a deeper tuning is needed for the three tested algorithms before drawing firm conclusions on their suitability for our goals. For instance, a more refined initialization strategy is desired in the Chan-Vese method, together with a finer exploration of the parameter space. Moreover, it is known that the two-level assumption (foreground/background) at the basis of the standard Chan-Vese algorithm may not be accurate in scenarios in which a large variation of intensity levels is present.
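For reference, the integrated flux density of a segmented source in a map calibrated in Jy/beam is the sum of the pixel brightness values over the source mask divided by the beam area expressed in pixels. A minimal sketch (the beam area value here is an arbitrary placeholder):

```python
import numpy as np

def integrated_flux(image, mask, beam_area_pix):
    """Integrated flux density in Jy: sum of pixel values (Jy/beam) over the
    segmented region, divided by the beam area in pixels."""
    return image[mask].sum() / beam_area_pix

# Toy example: a flat 2 Jy/beam source covering 50 pixels, with an assumed
# beam area of 10 pixels, yields an integrated flux of 10 Jy.
img = np.zeros((20, 20))
img[0:5, 0:10] = 2.0              # 50 pixels at 2 Jy/beam
mask = img > 0
flux = integrated_flux(img, mask, beam_area_pix=10.0)
```

Because the flux depends directly on which pixels enter the mask, differences in the reconstructed segmentation translate straight into the flux spread seen in Fig.~\ref{SourceFluxComparisonFig}.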
New active contour algorithms \citep{VeseChan2002,Yang2013}, overcoming some of the standard Chan-Vese limitations, have appeared recently in the literature and could be worthy of consideration. However, we expect that none of the methods will perform accurately over all the presented images and that a combination of different techniques will probably be required in the end. This motivated the development of the completely different approach reported in this paper. \section{Summary} We described in this paper a new algorithm for the detection of extended sources in radio maps, designed for the SCORPIO project and for next-generation radio surveys. The algorithm was tested with real radio data observed in the SCORPIO and Molonglo surveys and compared with existing algorithms. The achieved performance is comparable or even superior to that of other approaches followed in the literature. The novel points introduced are: \begin{itemize} \item a new procedure for computing the background in the presence of extended emission; \item an efficient filter to enhance diffuse emission, based on compact bright source removal, smoothing and saliency estimation; \item a flexible framework providing rich information for post-processing analysis and relaxing some of the limiting requirements used for compact source detection (e.g. pixel adjacency). \end{itemize} The results obtained with real data are promising and motivate further work both on the data side and on the algorithm side. For this purpose, a new release of the SCORPIO map, with an improved cleaning procedure and data flagging applied, is in progress. Preliminary results on the studied fields show that many of the artefacts present in the first data release are now properly removed. Furthermore, a campaign of single-dish measurements in the SCORPIO field is already scheduled to improve the map response to extended objects beyond the limits of the ATCA telescope. Source finding will therefore largely benefit from these improved maps.
At the same time, simulation activities were started with the aim of generating extended source mock scenarios with the ground truth available at pixel level, to study the achieved source detection efficiency and contamination rate under realistic noise conditions. We are currently working on possible significant improvements also on the algorithm side, both at the code and at the method level. Among these, improving saliency estimation and resolution has become an active field of development in recent works, see \citet{Perazzi2012,Cheng2014,Borji2014,Shi2015}. A proper combination of different algorithms could be a viable solution to decrease the spurious detection rate. Suitable criteria for combining nearby candidate sources are another aspect to be investigated in detail. The current algorithm implementation is not optimized for large maps, e.g. the full SCORPIO or the expected ASKAP fields, as it still has large computation times (from a few to $\sim$15-20 minutes depending on image size) and large memory requirements even on a single field, mainly related to the superpixel similarity matrices. A new optimized version, also designed for parallel and/or distributed processing, is therefore planned, possibly compliant with the ASKAP EMU software pipeline requirements in terms of input/output products to be supported, employed technologies and processing strategies \citep{Cornwell2011,Chapman2014}.
\section{Introduction} In recent years many authors have been interested in the linear theory of elastic materials with double porosity. The first studies regarding this theory are encountered in the papers of Barenblatt, \cite{1}. The double porosity model allows the body to have a double porous structure: a macro porosity connected to pores in the body and a micro porosity connected to fissures in the skeleton. According to Barenblatt, \cite{2}, Berryman, \cite{3}, and Khalili, \cite{6}, the particular applications of materials with double porosity are in geophysics and, according to Cowin, \cite{4}, in the mechanics of bone. The basic equations for elastic materials with double porosity involve the displacement vector field, a pressure associated with the pores and a pressure associated with the fissures [6-8]. We note that in the equilibrium theory the fluid pressures become independent of the displacement vector field. The theory for the behaviour of porous solids in which the skeletal or matrix materials are elastic and the interstices are void of material was studied by Nunziato and Cowin, \cite{7}. The intended applications of this theory are to geological materials such as rocks and soils and to manufactured porous materials such as ceramics and pressed powders. Iesan and Quintanilla, \cite{5}, used the Nunziato-Cowin theory of materials with voids to derive a theory of thermoelastic solids which have a double porosity structure. In contrast with the classical theory of elastic materials with double porosity, the porosity structure in the case of equilibrium is influenced by the displacement field. Quintanilla \cite{8} proved the impossibility of the localization in time of the solutions of linear thermoelasticity with voids.
The study of the backward in time problem is very important from the thermomechanical point of view because it offers information about the behavior of the system in the past using the information that we have at the present time. Usually, Saint-Venant's principle is used for the spatial behavior of the solutions of partial differential equations. Spatial decay estimates were obtained for elliptic \cite{flavin_knops}, parabolic [13-14] and hyperbolic \cite{flavin-knops2} equations. The main aim of the spatial decay estimates is to show that perturbations on one side of the boundary are damped at points located at some distance from that side of the boundary. For this analysis it is necessary to use a semi-infinite cylinder whose finite end is perturbed, and our goal is to identify the effects as the spatial variable increases. The harmonic vibrations in thermoelastic dynamics with double porosity structure for the backward in time problem were studied by Florea, \cite{mms}. The phenomenon in which the mechanisms of dissipation are so strong that the solutions vanish after a finite time is known as the localization in time of solutions. The impossibility of localization in time of solutions is an open problem because the proof of this concept exists only in some linear situations. In the particular case of the linear thermodynamic theory of visco-elastic solids with voids, the decay of solutions can be controlled by some particular exponential or polynomial functions, [17-21]. The impossibility of localization of solutions was proved for the classical thermoelasticity with porous dissipation \cite{pamplona} and in the isothermal case with porous and elastic viscosity \cite{quinta}.
The aim of our paper is to show that, in the case of thermoelasticity with double porosity structure and microtemperature, the only solution that vanishes after a finite time is the null solution, when the mechanisms of dissipation are the double porous dissipation, the temperature and the microtemperature. Our results can also be compared with those obtained in [17-21]. In our paper we give information regarding the upper bound for the solution decay. In the previous results, [17-20], the authors proved that after a small period of time the thermomechanical deformations are very small and can be neglected. In our paper we highlight that they are not null at any positive time. The present study is a continuation of the research regarding the impossibility of localization in thermo-porous-elasticity with microtemperatures carried out by Quintanilla, \cite{quint}, using the results of Florea, \cite{ijam}. The present study is structured as follows: in the second section the basic equations for the backward in time problem in the case of materials with double porosity structure and microtemperature are described. Also in this section, the conditions imposed on the parameters that influence the behavior of the porous materials are presented. The impossibility of localization in time of solutions of the backward in time problem for a double porous material with microtemperature is proved in the third section. We state there the conservation of energy law and we highlight the main theorem of the present study. For the particular case of a semi-infinite cylinder a Phragmen-Lindelof alternative is obtained in Section 4. The last section of the paper draws the conclusions of the present study.
\section{Basic equations for the double porous materials with microtemperature} The equations of evolution that govern the problem of thermoelasticity with double porosity structure for the materials with microtemperature in the absence of the supply terms are, \cite{casas1}, \cite{casas2}: \begin{flalign} \label{m1} t_{ji,j}&=\rho\ddot u_i\nonumber\\ \sigma_{j,j}+\xi&=k_1\ddot\phi\\ \tau_{j,j}+\zeta&=k_2\ddot\psi\nonumber \end{flalign} where $\rho$ is the mass density, $k_1,k_2$ are the coefficients of equilibrated inertia, $\sigma_j,\tau_j$ are the equilibrated stress vectors, $\xi,\zeta$ are the intrinsic equilibrated body forces, $t_{ji}$ is the stress tensor, $u_i$ is the displacement and $\phi, \psi$ are the volume fraction fields in the reference configuration. The equation of energy is given in \eqref{m2} and the equation of the first moment of energy is given in \eqref{m3}: \begin{align} \label{m2} \rho T_0\dot\eta=Q_{j,j} \end{align} \begin{align} \label{m3} \rho\dot\varepsilon_i=Q_{ji,j}+Q_i-q_i \end{align} where $T_0$ is the constant absolute temperature of the body in the reference configuration, $\eta$ is the entropy, $Q_j$ is the heat flux, $\varepsilon_i$ is the first moment of energy vector, $q_i$ is the microheat flux average and $Q_{ji}$ is the first heat flux moment tensor. We will consider in our study that we deal with a centrosymmetric material.
In this case the constitutive equations for the linear theory are: \begin{flalign} \label{f4} t_{ij} &= C_{ijkl}u_{k,l}+B_{ij}\phi+D_{ij}\psi-\beta_{ij}\theta \nonumber\\ \sigma_i &=\alpha_{ij}\phi_{,j}+b_{ij}\psi_{,j}- N_{ij}T_j\nonumber\\ \tau_i &= b_{ji}\phi_{,j}+\gamma_{ij}\psi_{,j}-M_{ij}T_j\nonumber\\ \xi &=-B_{ij}u_{i,j}-\alpha_1\phi-\alpha_3\psi+\gamma_1\theta\\ \zeta &=-D_{ij}u_{i,j}-\alpha_3\phi-\alpha_2\psi+\gamma_2\theta \nonumber\\ \rho\eta &=\beta_{ij}u_{i,j}+\gamma_1\phi+\gamma_2\psi+a\theta\nonumber\\ Q_i &= \kappa_{ij}\theta_{,j}+L_{ij}T_j\nonumber\\ \rho\varepsilon_i &=-N_{ij}\phi_{,j}-M_{ji}\psi_{,j}-P_{ij}T_j\nonumber\\ Q_{ij} &=-A_{ijrs}T_{s,r}\nonumber\\ q_i &=(L_{ij}-R_{ij})T_j+(\kappa_{ij}-\lambda_{ij})\theta_{,j}\nonumber \end{flalign} where $C_{ijkl}$ is the elasticity tensor, $\beta_{ij}$ is the thermal dilatation tensor, $\kappa_{ij}$ is the heat conductivity tensor, $B_{ij}, D_{ij}, \alpha_{ij}, b_{ij}, \gamma_{ij}, \alpha_1, \alpha_2, \alpha_3, \gamma_1, \gamma_2, a$ are typical functions in the double porosity theory and $N_{ij}, M_{ij}, R_{ij}, \lambda_{ij}, A_{ijrs}$ are tensors which are usual in the theories with microtemperatures. In the constitutive equations \eqref{f4}, $\theta$ represents the temperature and $T_i$ are the microtemperatures.
Introducing the constitutive equations \eqref{f4} into the evolution equations \eqref{m1}, the system of field equations for the thermoelasticity with double porosity and microtemperatures is obtained: \begin{subequations} \begin{equation} \label{5a} \rho\ddot u_i =\left(C_{jikl}u_{k,l}+B_{ji}\phi+D_{ij}\psi-\beta_{ij}\theta\right)_{,j}\tag{2.5.a} \end{equation} \begin{equation} \label{5b} k_1\ddot\phi=\left(\alpha_{ij}\phi_{,i}+b_{ij}\psi_{,i}-N_{ij}T_j\right)_{,j}-B_{ij}u_{i,j}-\alpha_1\phi-\alpha_3\psi+\gamma_1\theta\tag{2.5.b} \end{equation} \begin{equation} \label{5c} k_2\ddot\psi=\left(b_{ij}\phi_{,i}+\gamma_{ij}\psi_{,i}-M_{ij}T_j\right)_{,j}-D_{ij}u_{i,j}-\alpha_3\phi-\alpha_2\psi+\gamma_2\theta\tag{2.5.c} \end{equation} \begin{equation} \label{5d*} a\dot\theta =-\beta_{ij}\dot u_{i,j}-\gamma_1\dot\phi-\gamma_2\dot\psi+\frac{1}{T_0}\left(\kappa_{ij}\theta_{,j}+L_{ij}T_j\right)_{,j}\tag{2.5.d*} \end{equation} \begin{equation} \label{5e*} P_{ij}\dot T_j=\left(A_{ijrs}T_{s,r}\right)_{,j}-R_{ij}T_j-\lambda_{ij}\theta_{,j}-N_{ij}\dot\phi_{,j}-M_{ij}\dot\psi_{,j}\tag{2.5.e*} \end{equation} \end{subequations} Proving the uniqueness of the solution of the backward in time problem implies the impossibility of localization of the solutions of the above system.
The system of equations which describes the backward in time problem is given by the same set of equations as \eqref{5a}-\eqref{5c}, while \eqref{5d*} and \eqref{5e*} change into: \begin{subequations} \begin{equation} \label{5d} a\dot\theta =-\beta_{ij}\dot u_{i,j}-\gamma_1\dot\phi-\gamma_2\dot\psi-\frac{1}{T_0}\left(\kappa_{ij}\theta_{,j}+L_{ij}T_j\right)_{,j}\tag{2.5.d} \end{equation} \begin{equation} \label{5e} P_{ij}\dot T_j=-\left(A_{ijrs}T_{s,r}\right)_{,j}+R_{ij}T_j+\lambda_{ij}\theta_{,j}-N_{ij}\dot\phi_{,j}-M_{ij}\dot\psi_{,j}\tag{2.5.e} \end{equation} \end{subequations} Because the constitutive coefficients are symmetric we have: $$C_{ijkl}=C_{klij}; \alpha_{ij}=\alpha_{ji}; b_{ij}=b_{ji}; B_{ij}=B_{ji}, D_{ij}=D_{ji}.$$ For an anisotropic and homogeneous material we assume that the tensors $A_{ijrs}, P_{ij}, N_{ij}, M_{ij}, L_{ij}, R_{ij}, \lambda_{ij}$ are also symmetric: $$A_{ijkl}=A_{lkij}, P_{ij}=P_{ji}, M_{ij}=M_{ji}, L_{ij}=L_{ji}, N_{ij}=N_{ji}, R_{ij}=R_{ji}, \lambda_{ij}=\lambda_{ji}.$$ In the context of theories with microtemperature, as a consequence of the Clausius-Duhem inequality we have the following assumption, \cite{casas1}: \setcounter{equation}{5} \begin{equation} \label{f8} \kappa_{ij}\theta_{,i}\theta_{,j}+\left(L_{ij}+T_0\lambda_{ij}\right)\theta_{,j}T_i+T_0R_{ij}T_iT_j+T_0A_{jirs}T_{i,j}T_{s,r}\geq 0 \end{equation} In order to obtain the estimated results it is necessary to impose the positivity of several functions and tensors: \begin{subequations} \begin{flalign} \label{a1}\tag{a.1} \rho(X)\geq\rho_0>0; \hspace{1em} k_1(X)\geq k_0^1>0;\hspace{1em} k_2(X)\geq k_0^2>0; \nonumber\\ \hspace{1em} a(X)\geq a_0>0; \hspace{1em} P_{ij}\xi_i\xi_j\geq p_0\xi_i\xi_i, \hspace{1em} p_0>0\nonumber \end{flalign} \begin{equation} \label{a2}\tag{a.2} \kappa_{ij}\xi_i\xi_j+(L_{ij}+T_0\lambda_{ij})\xi_j\zeta_i+T_0R_{ij}\zeta_i\zeta_j\geq C_0(\xi_i\xi_i+\zeta_i\zeta_i), \hspace{1em} C_0>0, \hspace{1em} (\forall)\, \xi_i,\zeta_i \end{equation} \begin{flalign}
C_{ijkl}u_{i,j}u_{k,l}+\alpha_{ij}\phi_{,i}\phi_{,j}+\gamma_{ij}\psi_{,i}\psi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+2B_{ij}u_{i,j}\phi+2D_{ij}u_{i,j}\psi +\nonumber\\+\alpha_1\phi^2+\alpha_2\psi^2 +2\alpha_3\phi\psi \label{a3}\tag{a.3} \geq C^*\left(u_{i,j}u_{i,j}+\phi_{,i}\phi_{,i}+\psi_{,i}\psi_{,i}+\phi^2+\psi^2\right), \hspace{1em} C^*>0, \end{flalign} \begin{equation*} \alpha_{ij}\xi_i\xi_j\geq0; \hspace{1em}b_{ij}\xi_i\xi_j\geq 0, \hspace{1em}(\forall)\, \xi_i \end{equation*} \begin{equation} \label{a4}\tag{a.4} A_{jirs}\xi_{ij}\xi_{sr}\geq C_1\xi_{ij}\xi_{ij}, \hspace{1em} C_1>0, \hspace{1em}(\forall)\, \xi_{ij} \end{equation} \end{subequations} The assumption \eqref{a1} is related to the thermomechanical characteristics, \eqref{a2} and \eqref{a4} are consequences of the Clausius-Duhem inequality, and \eqref{a3} expresses that the internal energy is positive, which may be justified based on the theory of mechanical stability. \section{Main results regarding the impossibility of localization in time} Let us consider a bounded domain $B$ with boundary $\partial B$. The study of the impossibility of localization in time for solutions of the backward in time problem is equivalent to the study of the uniqueness of solutions for the mentioned problem given by the system of equations \eqref{5a}-\eqref{5e}. To prove the uniqueness of solutions for the backward in time problem it is sufficient to show that only the null solution satisfies our problem with null initial and boundary conditions. In the next computations we assume that the domain $B$ is smooth enough to apply the divergence theorem.
The initial conditions are: \begin{flalign} \label{e6} u_i(\bm{X},0)=\dot u_i(\bm{X},0)=\phi(\bm{X},0)=\dot\phi(\bm{X},0)=0\\ \psi(\bm{X},0)=\dot\psi(\bm{X},0)=\theta(\bm{X},0)=0, \hspace{2em} T_i(\bm{X},0)=0\hspace{2em}\bm{X}\in B\nonumber \end{flalign} and the boundary conditions: \begin{flalign} \label{e7} u_i(\bm{X},t)=\phi(\bm{X},t)=\psi(\bm{X},t)=\theta(\bm{X},t)=T_i(\bm{X},t)=0,\hspace{2em} \bm{X}\in\partial B, t\geq 0 \end{flalign} The aim of this section is to obtain the energy relation for the double porous material with microtemperature. We multiply \eqref{5a} by $\dot u_i$, \eqref{5b} by $\dot\phi$, \eqref{5c} by $\dot\psi$, \eqref{5d} by $\theta$ and \eqref{5e} by $T_j$; the obtained relations are integrated on $[0,t]$ and summed. Using the divergence theorem and taking into account the boundary conditions, the principle of conservation of energy gives the following relation: \begin{flalign} \label{f11} E_1(t)=&\frac{1}{2}\int\limits_B\left(\rho\dot u_i\dot u_i+k_1\dot\phi^2+ k_2\dot\psi^2+ a\theta^2+ P_{ij}T_iT_j+C_{ijkl}u_{i,j}u_{k,l}+2B_{ij}\phi u_{i,j} +\right.\nonumber\\ &+\left.
2D_{ij}\psi u_{i,j}+\alpha_{ij}\phi_{,i}\phi_{,j}+\gamma_{ij}\psi_{,i}\psi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+\alpha_1\phi^2+2\alpha_3\phi\psi+\alpha_2\psi^2\right)dV=\\ &=\int\limits_0^t\int\limits_B\left[\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}T_i\theta_{,j}\right)+A_{ijrs}T_{s,r}T_{i,j}+R_{ij}T_iT_j+\lambda_{ij}\theta_{,j}T_i\right]dVds\nonumber \end{flalign} Using the same procedure of multiplying the equations \eqref{5a} by $\dot u_i$, \eqref{5b} by $\dot\phi$, \eqref{5c} by $\dot\psi$, \eqref{5d} by $-\theta$ and \eqref{5e} by $-T_j$, integrating on $[0,t]$ and using the divergence theorem, we have the following expression: \begin{flalign} \label{f12} E_2(t)=\frac{1}{2}\int\limits_B\left(\rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot\psi^2-a\theta^2-P_{ij}T_iT_j+C_{ijkl}u_{k,l}u_{i,j}+2B_{ij}\phi u_{i,j}+2D_{ij}\psi u_{i,j}\right.+\nonumber\\ \left.+\alpha_{ij}\phi_{,i}\phi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+\alpha_1\phi^2+2\alpha_3\phi\psi+\gamma_{ij}\psi_{,i}\psi_{,j}+\alpha_2\psi^2\right)dV=\\ =-\int\limits_{0}^t\int\limits_B\left[\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}T_i\theta_{,j}\right)+A_{ijrs}T_{s,r}T_{i,j}+R_{ij}T_iT_j+\lambda_{ij}\theta_{,j}T_i\right]dVds+\nonumber\\ +\int\limits_0^t\int\limits_B\left[\left(\beta_{ij}\theta\right)_{,j}\dot u_i-\left(M_{ij}T_j\right)_{,j}\dot\psi-\left(N_{ij}T_j\right)_{,j}\dot\phi+\gamma_1\theta\dot\phi+\gamma_2\theta\dot\psi\right]dVds\nonumber \end{flalign} Taking into consideration the equations \eqref{5a}-\eqref{5e} and the initial and boundary conditions \eqref{e6}, \eqref{e7}, the following identity is obtained: \begin{flalign} \label{f13} \int\limits_B\left(\rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot\psi^2-a\theta^2-P_{ij}T_iT_j\right)dV&=\int\limits_B\left(C_{ijkl}u_{i,j}u_{k,l}+\alpha_{ij}\phi_{,i}\phi_{,j}+\gamma_{ij}\psi_{,i}\psi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+\right.\nonumber\\ &\left.+2B_{ij}u_{i,j}\phi+2D_{ij}u_{i,j}\psi+\alpha_1\phi^2+\alpha_2\psi^2+2\alpha_3\phi\psi\right)dV \end{flalign} The
impossibility of localization of the solutions in the theory with double porosity and microtemperature is proved in the following theorem. \begin{theorem} Let $(u_i,\phi,\psi, \theta, T_i)$ be a solution of the backward in time problem \eqref{5a}-\eqref{5e} with the initial conditions \eqref{e6} and the boundary conditions \eqref{e7}. The only solution of the mentioned problem is the null solution $u_i=0,$ $\phi=0,$ $\psi=0,$ $\theta=0,$ $T_i=0$. \end{theorem} \begin{proof} Substituting \eqref{f13} into \eqref{f12} we obtain a new expression for $E_2(t)$: \begin{flalign*} E_2(t)=\int\limits_B\left(C_{ijkl}u_{i,j}u_{k,l}+\alpha_{ij}\phi_{,i}\phi_{,j}+\gamma_{ij}\psi_{,i}\psi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+2B_{ij}u_{i,j}\phi+\right.\\ +\left. 2D_{ij}u_{i,j}\psi+\alpha_1\phi^2+\alpha_2\psi^2+2\alpha_3\phi\psi\right)dV=\\ =-\int\limits_0^t\int\limits_B\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}\theta_{,j}T_i+T_0A_{ijrs}T_{s,r}T_{j,i}+T_0R_{ij}T_iT_j+T_0\lambda_{ij}\theta_{,j}T_i\right)dVds+\\ +\int\limits_0^t\int\limits_B\left[(\beta_{ij}\theta)_{,j}\dot u_i-(M_{ij}T_i)_{,j}\dot\psi-(N_{ij}T_i)_{,j}\dot\phi+\gamma_1\theta\dot\phi+\gamma_2\theta\dot\psi\right]dVds \end{flalign*} The energy can be expressed in the form below, where we consider a positive constant $\varepsilon$, small enough: $$E(t)=E_2(t)+\varepsilon E_1(t), \hspace{2em} \varepsilon\in (0,1)$$ Since $E(t)$ is a positive function, we have the following form for the energy: \begin{flalign*} E(t)&=\frac{\varepsilon}{2}\int\limits_B\left(\rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot\psi^2+a\theta^2+P_{ij}T_iT_j\right)dV+\\ +&\frac{2+\varepsilon}{2}\int\limits_B\left(C_{ijkl}u_{i,j}u_{k,l}+\alpha_{ij}\phi_{,i}\phi_{,j}+\gamma_{ij}\psi_{,i}\psi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+2B_{ij}u_{i,j}\phi+2D_{ij}u_{i,j}\psi+\right.\\ +&\left.\alpha_1\phi^2+\alpha_2\psi^2+2\alpha_3\phi\psi \right)dV \end{flalign*} On the other hand, \begin{flalign*}
E(t)=&-\int\limits_0^t\int\limits_B\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}\theta_{,j}T_i+T_0A_{ijrs}T_{s,r}T_{j,i}+T_0R_{ij}T_iT_j+T_0\lambda_{ij}\theta_{,j}T_i\right)dVds+\\ +&\int\limits_0^t\int\limits_B\left[(\beta_{ij}\theta)_{,j}\dot u_i-(M_{ij}T_i)_{,j}\dot\psi-(N_{ij}T_i)_{,j}\dot\phi+\gamma_1\theta\dot\phi+\gamma_2\theta\dot\psi\right]dVds+\\ +&\varepsilon\int\limits_0^t\int\limits_B\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}T_i\theta_{,j}+T_0A_{ijrs}T_{s,r}T_{j,i}+T_0R_{ij}T_iT_j+T_0\lambda_{ij}T_i\theta_{,j}\right)dVds \end{flalign*} The above relation yields, for $\varepsilon\in(0,1)$: \begin{flalign*} E(t)=&-(1-\varepsilon)\int\limits_0^t\int\limits_B\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}\theta_{,j}T_i+T_0A_{ijrs}T_{s,r}T_{j,i}+T_0R_{ij}T_iT_j+T_0\lambda_{ij}T_i\theta_{,j}\right)dVds+\\ +&\int\limits_0^t\int\limits_B\left[ (\beta_{ij}\theta)_{,j}\dot u_i-(M_{ij}T_i)_{,j}\dot\psi-(N_{ij}T_i)_{,j}\dot\phi+\gamma_1\theta\dot\phi+\gamma_2\theta\dot\psi\right]dVds \end{flalign*} from where: \begin{flalign*} \frac{dE(t)}{dt}=&-(1-\varepsilon)\int\limits_B\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}\theta_{,j}T_i+T_0A_{ijrs}T_{s,r}T_{j,i}+T_0R_{ij}T_iT_j+T_0\lambda_{ij}T_i\theta_{,j}\right)dV+\\ +&\int\limits_B\left[ (\beta_{ij}\theta)_{,j}\dot u_i-(M_{ij}T_i)_{,j}\dot\psi-(N_{ij}T_i)_{,j}\dot\phi+\gamma_1\theta\dot\phi+\gamma_2\theta\dot\psi\right]dV \end{flalign*} but, $$\int\limits_B \left(\beta_{ij}\theta\right)_{,j}\dot u_idV=\int\limits_B\beta_{ij,j}\theta\dot u_i dV+\int\limits_B\beta_{ij}\theta_{,j}\dot u_idV$$ The inequality of arithmetic and geometric means implies that: $$\int\limits_B\left(\beta_{ij}\theta\right)_{,j}\dot u_idV\leq C_1\int\limits_B\left(\rho\dot u_i\dot u_i+a\theta^2\right)dV+\varepsilon_1\int\limits_B\kappa_{ij}\theta_{,i}\theta_{,j}dV$$ where $\varepsilon_1$ is small enough, $C_1$ is a positive constant that can be determined based on the constitutive
coefficients and $\varepsilon_1$; $$\int\limits_B(M_{ij}T_i)_{,j}\dot\psi dV\leq C_2\int\limits_B\left(k_2\dot\psi^2+P_{ij}T_iT_j\right)dV$$ where $C_2$ can be determined similarly. Therefore, there is a positive constant $C$ such that: $$\frac{dE}{dt}\leq C\int\limits_B\left(\rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot\psi^2+a\theta^2+P_{ij}T_iT_j\right)dV$$ which is equivalent to the estimate: $$\frac{dE}{dt}\leq C^* E(t)\Leftrightarrow \frac{dE}{E}\leq C^* dt\Leftrightarrow \ln E\leq C^*t+\mathcal{C}\Leftrightarrow E(t)\leq \mathcal{C}e^{C^*t}.$$ Evaluating at $t=0$ to fix the constant, we obtain the estimate: $$E(t)\leq E(0)e^{C^*t}$$ Since the initial conditions give $E(0)=0$, we obtain $E(t)=0$ for every $t\geq 0$, which is equivalent to: $$\dot u_i=0; \dot\phi=0; \dot\psi=0;\theta=0; T_i=0 \Leftrightarrow u_i=C_1; \phi=C_2;\psi=C_3;\theta=T_i=0$$ Taking into account the initial conditions \eqref{e6}, we obtain that the solution of our problem is the null solution: $$u_i=0;\phi=0;\psi=0; \theta=0; T_i=0$$ \end{proof} \section{Phragmen-Lindelof alternative for the solution of the backward in time problem with double porosity and microtemperature} We consider a semi-infinite prismatic cylinder $B=D\times[0,\infty)$ occupied by a body with a double porosity structure with microtemperature. We denote by $D$ the cross section of the cylinder. The boundary of the section is a piecewise continuously differentiable curve, denoted by $\partial D$, sufficiently smooth to admit the application of the divergence theorem. The lateral surface of the cylinder is $\Pi=\partial D\times(0,\infty)$.
The cylinder is assumed to be free of load on the lateral boundary surface.\\ The lateral boundary conditions are: \begin{flalign} \label{e15} u_i(\textbf{X},t)=0; \phi(\textbf{X},t)=0; \psi(\textbf{X},t)=0; \theta(\textbf{X},t)=0; T_i(\textbf{X},t)=0, \hspace{1em} (\textbf{X},t)\in\Pi\times(0,\infty) \end{flalign} On the base of the cylinder the following boundary conditions are assumed: \begin{flalign} \label{e16} u_i(x_1,x_2,0,t)=\tilde u_i;\phi(x_1,x_2,0,t)=\tilde\phi;\nonumber\\ \psi(x_1,x_2,0,t)=\tilde\psi;\theta(x_1,x_2,0,t)=\tilde\theta; T_i(x_1,x_2,0,t)=\tilde T_i \end{flalign} For the solution of the problem determined by the system \eqref{5a}-\eqref{5e} with the lateral boundary conditions \eqref{e15} and the boundary conditions \eqref{e16} on the base, we want to obtain a Phragmen-Lindelof alternative necessary for the interpretation of the behavior of the solution of our boundary value problem. Our aim in this section is to estimate the absolute value of the function $H_\omega$ defined in \eqref{e17} by means of its spatial derivative. We define the function: \begin{flalign} \label{e17} H_\omega(z,t)&=\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\left[C_{i3kl}u_{k,l}+B_{i3}\phi+D_{i3}\psi-\beta_{3i}\theta\right]\dot u_i dads+\nonumber\\ &+\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\left[\alpha_{i3}\phi_{,i}+b_{i3}\psi_{,i}-N_{i3}T_i\right]\dot\phi dads+\\ &+\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\left[b_{3i}\phi_{,i}+\gamma_{i3}\psi_{,i}-M_{i3}T_i\right]\dot\psi dads+\nonumber\\ &+\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\frac{1}{T_0}\left[\kappa_{i3}\theta_{,i}+L_{i3}T_i\right]\theta dads\nonumber\\ &+\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\left(A_{3irs}T_{s,r}+R_{i3}T_i+\lambda_{i3}\theta_{,i}\right)T_i dads\nonumber \end{flalign} Here $D(z)=\{\textbf{X}\in B\,|\,x_3=z\}$ denotes the cross section of the cylinder at a distance $z$ from the base.
By means of the divergence theorem and employing the field equations, boundary and initial conditions, we obtain: \begin{flalign} \label{e18} H_\omega(z+h,t)-H_\omega(z,t)=\frac{1}{2}\int\limits_{R(z+h,z)}\chi_\omega(t)dV, \quad(\forall)h>0 \end{flalign} where $R(z+h,z)=\{\textbf{X}\in B|z<x_3<z+h\}$. The internal energy is: \begin{flalign} \label{20} \Phi &=\rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot\psi^2+a\theta^2+P_{ij}T_iT_j+C_{ijkl}u_{i,j}u_{k,l}+2B_{ij}u_{i,j}\phi+2D_{ij}u_{i,j}\psi+\\ &+\alpha_{ij}\phi_{,i}\phi_{,j}+\gamma_{ij}\psi_{,i}\psi_{,j}+2b_{ij}\phi_{,i}\psi_{,j}+\alpha_1\phi^2+\alpha_2\psi^2+2\alpha_3\phi\psi\nonumber \end{flalign} such that: \begin{flalign} \label{e19} \chi_\omega(t)=e^{-2\omega t}\Phi(t)+\int\limits_0^t e^{-2\omega s}\left[2\omega\Phi(s)+2\frac{\kappa_{ij}}{T_0}\theta_{,i}(s)\theta_{,j}(s)+2\frac{L_{ij}}{T_0}\theta_{,i}T_j(s)\right.\\ +\left. 2A_{ijrs}T_{s,r}(s)T_{i,j}(s)+2R_{ij}T_i(s)T_j(s)+2\lambda_{ij}\theta_{,i}T_j(s)\right]ds\nonumber \end{flalign} From \eqref{e18} we have: $$\frac{\partial H_\omega}{\partial z}=\frac{1}{2}\int\limits_{D(z)}\chi_\omega(t)da$$ which leads to the following relation: \begin{flalign} \label{e21} \frac{\partial H_\omega}{\partial z}=\frac{e^{-2\omega t}}{2}\int\limits_{D(z)}\Phi(t)da+\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\left[\omega\Phi(s)+W\right]dads \end{flalign} where, \begin{flalign*} W= \frac{\kappa_{ij}}{T_0}\theta_{,i}\theta_{,j}+\frac{L_{ij}}{T_0}\theta_{,i}T_j+A_{ijrs}T_{s,r}T_{i,j}+R_{ij}T_iT_j+\lambda_{ij}\theta_{,i}T_j \end{flalign*} Further, we want to estimate the absolute value of $H_\omega$ in terms of its spatial derivative, in order to obtain a differential inequality of the form: \begin{flalign} \label{e22} |H_\omega|\leq C_\omega\frac{\partial H_\omega}{\partial z}, \hspace{1em}(\forall)z\geq 0 \end{flalign} In the specialty literature on spatial estimates, the above inequality is known to lead to a Phragm\'en-Lindel\"of alternative. 
Under the assumption \eqref{a3}, the internal energy from \eqref{20} leads us to the following inequality: \begin{flalign*} \Phi&\geq \rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot \psi^2+a\theta^2+P_{ij}T_iT_j+C^*\left(u_{i,j}u_{i,j}+\phi_{,i}\phi_{,i}+\psi_{,i}\psi_{,i}+\phi^2+\psi^2\right) \end{flalign*} Therefore the relation \eqref{e18} yields: \begin{flalign*} &H_\omega(z+h,t)-H_\omega(z,t)\geq\\ &\geq\frac{1}{2}\int\limits_R e^{-2\omega s}\left[\rho\dot u_i\dot u_i+k_1\dot\phi^2+k_2\dot \psi^2+a\theta^2+P_{ij}T_iT_j+C^*\left(u_{i,j}u_{i,j}+\phi_{,i}\phi_{,i}+\psi_{,i}\psi_{,i}+\phi^2+\psi^2\right)\right]dV+\\ &+\int\limits_0^t\int\limits_R e^{-2\omega s}\left\{\omega\left[k_1\dot\phi^2+k_2\dot \psi^2+a\theta^2+P_{ij}T_iT_j+C^*\left(u_{i,j}u_{i,j}+\phi_{,i}\phi_{,i}+\psi_{,i}\psi_{,i}+\phi^2+\psi^2\right)\right]\right.\\ &+\left.\frac{\kappa_{ij}}{T_0}\theta_{,i}\theta_{,j}+\frac{L_{ij}}{T_0}\theta_{,i}T_j+A_{ijrs}T_{i,j}T_{r,s}+R_{ij}T_iT_j+\lambda_{ij}\theta_{,i}T_j\right\}dVds \end{flalign*} Based on the inequality of arithmetic and geometric means and the Cauchy-Schwarz inequality we obtain: \begin{flalign*} |H_\omega(z,t)|&\leq C_\omega\left[\frac{e^{-2\omega t}}{2}\int\limits_{D(z)}\Phi(t)da+\int\limits_0^t\int\limits_{D(z)}\omega e^{-2\omega s}\Phi(s) da ds+\right.\\ &+\left.\int\limits_0^t\int\limits_{D(z)}e^{-2\omega s}\left[\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}T_i\theta_{,j}\right)+ A_{ijrs}T_{i,j}T_{r,s}+R_{ij}T_iT_j+\lambda_{ij}T_i\theta_{,j}\right]dads\right] \end{flalign*} Thus the alternative \eqref{e22} is proved. 
From the inequality \eqref{e22} we can extract the following two inequalities: \begin{flalign} \label{e23} -\frac{\partial H_\omega}{\partial z}\leq \frac{1}{C_\omega}H_\omega \text{ and } \frac{\partial H_\omega}{\partial z}\geq \frac{1}{C_\omega}H_\omega \end{flalign} Taking into consideration the computations from Flavin \cite{flavin-knops2}, we obtain two estimates: \begin{flalign} \label{e25} H_\omega(z,t)\geq H_\omega(z_0,t)e^{\frac{z-z_0}{C_\omega}} \end{flalign} for all $z\geq z_0>0$ with $H_\omega(z_0,t)>0$, which leads to $\lim\limits_{z\rightarrow \infty} e^{-\frac{z}{C_\omega}}\int\limits_{R(z)}\chi_\omega(t)dV>0$, and \begin{flalign} \label{f29} -H_\omega(z,t)\leq -H_\omega(0,t)e^{-\frac{z}{C_\omega}} \end{flalign} for all $z\geq 0$ with $H_\omega(z,t)\leq 0$. From \eqref{f29} it is obvious that $H_\omega(z,t)\rightarrow0$ as $z\rightarrow\infty$. Let us introduce the function: \begin{flalign} \label{f30} E_\omega(z,t)= \frac{e^{-2\omega t}}{2}\int\limits_{R(z)}\Phi(t)dV+\int\limits_0^t\int\limits_{R(z)} e^{-2\omega s} \left[\omega\Phi(s)+\frac{1}{T_0}\left(\kappa_{ij}\theta_{,i}\theta_{,j}+L_{ij}T_i\theta_{,j}\right)\right.\\ +\left.A_{ijrs}T_{i,j}T_{r,s}+R_{ij}T_iT_j+\lambda_{ij}T_i\theta_{,j}\nonumber \right]dV ds \end{flalign} where $R(z)=\{\textbf{X}\in B|z<x_3\}$. Based on \eqref{f29} we observe that: \begin{flalign} \label{f31} E_\omega(z,t)\leq E_\omega(0,t)e^{-\frac{z}{C_\omega}}, \quad z\geq 0 \end{flalign} Now, we can draw the following conclusions: if $(u_i, \phi,\psi,\theta,T_i)$ is a solution of the backward in time problem defined by the system \eqref{5a}-\eqref{5e} with the null initial conditions \eqref{e6} and boundary conditions \eqref{e7}, then either the solution satisfies the asymptotic condition $\lim\limits_{z\rightarrow\infty}e^{-\frac{z}{C_\omega}}\int\limits_{R(z)}\chi_\omega(t)dV>0$, or it satisfies the decay estimate \eqref{f29}. 
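The two branches of the alternative can be illustrated on the borderline case of the inequalities \eqref{e23}, in which the model equation $G'(z)=\pm G(z)/C_\omega$ is saturated. The sketch below is our own illustration (the constant $C_\omega=2$ and the step sizes are arbitrary choices), not part of the proof.

```python
import numpy as np

def integrate(g0, rate, z_max, dz):
    """Forward-Euler integration of G'(z) = rate * G(z), the borderline
    case of the differential inequalities extracted from |H| <= C dH/dz."""
    n = int(round(z_max / dz))
    g = np.empty(n + 1)
    g[0] = g0
    for k in range(n):
        g[k + 1] = g[k] + dz * rate * g[k]
    return g

C = 2.0
# Growth branch: H(z_0) > 0 forces H(z) >= H(z_0) exp((z - z_0)/C).
growth = integrate(1.0, 1.0 / C, z_max=1.0, dz=1e-4)
# Decay branch: for -H >= 0 one gets (-H)(z) <= (-H)(0) exp(-z/C).
decay = integrate(1.0, -1.0 / C, z_max=1.0, dz=1e-4)
```

With these parameters the endpoints approach $e^{1/2}$ and $e^{-1/2}$, reproducing the exponential growth and decay rates of the alternative.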
This study can be continued by obtaining an upper bound for the amplitude $E_\omega(0,t)$ in terms of the boundary conditions, but this analysis will be the subject of another paper. \section{Conclusions} In the present paper we studied the impossibility of localization in time of the solutions of the boundary value problem associated with linear thermoelastic materials with a double porosity structure and microtemperature. The uniqueness of the solution of the backward in time problem for materials with a double porosity structure with microtemperature was proved. We conclude that for the backward in time problem the only solution that vanishes is the null solution, for every $t>0$. In the case of linear thermoelastic theories these results cannot certify that the thermomechanical deformations of double porous bodies with microtemperature vanish after a finite time. In this situation the time must be unbounded to guarantee that the volume fractions become the same as in the reference configuration. We obtained a function that defines a measure on the solutions and we deduced the usual exponential-type alternative for the solutions of the problem defined in a semi-infinite cylinder.
\section{Closed strings and HCs in deconfined phase of the quantum $\mathbb{Z}_2$ theory } {\em Quantum $\mathbb{Z}_2$ LGT and closed strings.----}The Hamiltonian of the quantum $\mathbb{Z}_2$ LGT~\cite{qz2_cui, sachdev, Wegner1971} is \begin{equation} H = Z + gX \end{equation} with \begin{equation} X= -\sum_l { \sigma_l^x },\, Z = \sum_{\square} { Z_\square} ,\, Z_\square= -\prod_{l \in \square}{ \sigma_l^z } \end{equation} where the spins (qubits) occupy the links $l$, and $\square$ denotes an elementary plaquette of the lattice. Single-spin states $|0\rangle$ and $|1\rangle$ correspond to $\sigma_l^z= 1$ (up) and $\sigma_l^z=-1$ (down), respectively. The ground state at $g=0$ is \begin{equation} \ket{\psi_0} = \sum_{i=1}^{S_0} { \ket{\phi_i} }, \label{p0} \end{equation} where each ${ \ket{\phi_i} }$ satisfies $Z_\square = -1$ for every $\square$ and thus $Z=-N_v$, with $N_v$ the number of plaquettes. In the $M\times N$ torus lattice, there are $N_v\equiv MN$ plaquettes and $N_e\equiv 2N_v$ links. Hence the number of configurations with $Z=-N_v$ is $S_0 \equiv 2^{N_e}/(2^{N_v-1})$, where $2^{N_e}$ is the number of all possible configurations of spins, while $1/2^{N_v-1}$ is the probability that $Z_\square = -1$ for each $\square$. On the dual lattice, one can regard a link with $\sigma_l^z=-1$ as occupied by a string and a link with $\sigma_l^z= 1$ as unoccupied. Occupied links connect to form longer strings. Since $Z_\square=-1$ requires an even number of occupied links around each plaquette, every string closes up, so each $\ket{\phi_i}$ can be regarded as a configuration of closed strings. When $g<g_c$, the ground state can be represented as a closed-string condensate, which is a superposition of $S_0$ configurations of closed strings. 
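The counting $S_0 = 2^{N_e}/2^{N_v-1}$ can be verified by brute-force enumeration on small torus lattices. The sketch below is our own illustration (function and variable names are ours); it keeps exactly those spin configurations with $\prod_{l\in\square}\sigma^z_l=+1$, i.e. $Z_\square=-1$, on every plaquette.

```python
import itertools

def count_closed_string_configs(M, N):
    """Count spin configurations on an M x N torus lattice for which
    every plaquette satisfies Z_square = -1 (closed-string configurations)."""
    # One horizontal and one vertical link per vertex: N_e = 2*M*N links.
    links = [("h", i, j) for i in range(M) for j in range(N)] + \
            [("v", i, j) for i in range(M) for j in range(N)]
    idx = {link: k for k, link in enumerate(links)}
    # Plaquette with lower-left corner (i, j) and its four boundary links.
    plaqs = [(idx[("h", i, j)], idx[("h", i, (j + 1) % N)],
              idx[("v", i, j)], idx[("v", (i + 1) % M, j)])
             for i in range(M) for j in range(N)]
    count = 0
    for spins in itertools.product((1, -1), repeat=len(links)):
        if all(spins[a] * spins[b] * spins[c] * spins[d] == 1
               for a, b, c, d in plaqs):
            count += 1
    return count
```

For the $2\times 2$ torus ($N_v=4$, $N_e=8$) this returns $2^8/2^3=32$, matching the formula.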
Especially, when $g=0$, the superposition is of equal amplitude~\cite{freedman,Levin2005,fradkinbook}. Among the $S_0$ configurations of closed strings, two examples are shown in Fig.~\ref{fig_torus_graph_cycle}: in (a), a single closed string passes through all $N_v$ plaquettes of the lattice and is thus a HC, while in (b), there are two closed strings, each passing through a part of the $N_v$ plaquettes, so there is no HC. Therefore, in order to solve the HC problem, one only needs to search for HCs among the $S_0$ configurations of closed strings, rather than the $N_v!$ configurations in the original configuration space~\cite{Garey}. This greatly reduces the complexity. {\em Quantum Algorithm for the HC problem.----}The complexity of this quantum algorithm comes from two consecutive processes. The first is to obtain the closed-string condensate, and the second is to search for HCs in it. The initial state is the ground state at $g=+\infty$, which is the equal superposition of all product states of $|0\rangle$ and $|1\rangle$. With $g$ decreased adiabatically towards $g \leq g_c$, the closed-string condensate is obtained. This process is equivalent to adiabatically evolving $H_\lambda= \lambda Z + X$, with $\lambda=\frac{1}{g}$, from $\lambda = 0$ towards $\lambda_c =\frac{1}{g_c}$. The adiabatic condition implies that the time scale is $t=O(\sqrt{N_e})$ for a 2-dimensional lattice \cite{RN693}. Hence the time scale is $t=O(\sqrt{N_e})\lambda_c$ for a lattice with an unknown $\lambda_c$. For the adiabatic evolution, we use the symmetric Trotter decomposition of the unitary evolution, which is more efficient than the original Trotter decomposition \cite{Trotter1959, Childs_2019}. 
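The $O(t^3/n^2)$ accuracy of the symmetric decomposition can be checked directly on a toy two-level system. The sketch below is our own illustration: $A$ and $B$ are generic non-commuting Hermitian matrices standing in for the lattice operators $Z$ and $gX$, not the operators themselves.

```python
import numpy as np

def expm_herm(H, t):
    """e^{-i H t} for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def strang_error(A, B, t, n):
    """Spectral-norm error of the symmetric (Strang) Trotter product
    (e^{-iA t/2n} e^{-iB t/n} e^{-iA t/2n})^n against e^{-i(A+B)t}."""
    exact = expm_herm(A + B, t)
    half = expm_herm(A, t / (2 * n))
    step = half @ expm_herm(B, t / n) @ half
    return np.linalg.norm(exact - np.linalg.matrix_power(step, n), 2)
```

Doubling the number of substeps $n$ reduces the error by a factor of about four, consistent with the $t^3/n^2$ scaling.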
The error for each step in the adiabatic simulation is $\varepsilon_s(t,n,g) =| e^{-i(A+B)t} - (e^{-iA\frac{t}{2n}}e^{-iB\frac{t}{n}} e^{-iA\frac{t}{2n}})^n | =| e^{-i(A+B)t} - e^{-iA\frac{t}{2n}} (e^{-iB\frac{t}{n}}e^{-iA\frac{t}{n}})^{n-1}e^{-iB\frac{t}{n}} e^{-iA\frac{t}{2n}}| \leqslant ( \frac{1}{12} g^2 N_v n_l^2 + \frac{1}{24} g N_e n_p^2) \frac{t^3}{n^2}$, where $A=Z$ and $B=gX$ are the two terms in the Hamiltonian, $t$ is the time of the step, $n$ is the number of symmetric Trotter substeps, $n_l$ is the number of links surrounding each plaquette, and $n_p$ is the number of vertices connected by each edge or, equivalently, the number of plaquettes sharing each link~\cite{qz2_cui}. Then the cumulative error from the start to the phase transition point $\lambda_c$ can be obtained as \begin{equation} \varepsilon = O( \frac{1}{N_{ss}^2} N_e^{3/2}(N_v^3+N_e\lambda_c) \lambda_c^4 ), \end{equation} where $N_{ss}$ is the total number of symmetric Trotter substeps. Therefore, the time complexity required in the adiabatic quantum simulation of the quantum $\mathbb{Z}_2$ LGT is \begin{equation} O_1= O\left( \sqrt{\frac{1}{\varepsilon} N_e^{3/2} (N_v^3+N_e\lambda_c) \lambda_c^4} \right)= O\left(\frac{1}{g_c^2} \sqrt{ \frac{1}{\varepsilon} N_e^{3/2}\left( N_v^3 + \frac{N_e}{g_c}\right) } \right), \end{equation} where $\varepsilon$ is reinterpreted as the precision required for the quantum simulation. In a connected undirected graph, $N_v \leq N_e\leq N_v(N_v-1)/2$, and $g_c$, $\lambda_c$ are fixed quantities for a given graph. Subsequently, a quantum search algorithm searches for HCs in the closed-string condensate, which is a superposition of $S_0 =O( 2^{N_e}/2^{N_v})$ components, including those with HCs. The time complexity is $O_2\sim \frac{S_0}{N_{hc}}$, since the probability of obtaining a state with a HC in a measurement is $\frac{N_{hc}}{S_0}$. The relationship between $N_{hc}$ and $N_e$ for some sample graphs is shown in Fig.~\ref{fig_hpc_lam_fit}(b). 
It is possible to develop a better quantum search algorithm for finding HCs in the closed-string condensate. The time complexity for the whole process is $O=O_1\cdot O_2$. {\em Quantum Algorithm for finding $g_c$.----}$g_c$ depends on, and thus may reveal, some properties of the graph, for example $N_{hc}$. After reaching the TQPT, a measurement of $g_c$ contributes additional time complexity $O_M$. According to the phase estimation method \cite{qz2_cui}, $O_M$ is of the same order of magnitude as $O_1$. Thus with the adiabatic algorithm followed by measurement, we have a quantum algorithm for finding $g_c$, with time complexity $O'=O_1+O_M \sim O_1$. {\em Relations between $g_c$ and the graph properties.----}Because the complexity $O_1$ depends on $\lambda_c \equiv \frac{1}{g_c}$, we need to investigate the relations between $g_c$ and the graph properties, e.g. $N_{hc}$ and $N_e$. We work on this problem using the classical demonstration of the quantum adiabatic simulation, in terms of the QuEST simulator~\cite{quest}. The hardware we use is the Nvidia GPU Tesla V100-SXM2-32GB~\cite{qz2_cui}. We first randomly generate four graphs with $N_v=9$ and $N_e=18$, as shown in Fig.~\ref{fig_hpc_list_graph}(a-d). The fourth corresponds to the $3\times 3$ torus lattice. The values of $N_{hc}$ for the four graphs are 0, 10, 30, 48, respectively. For such small graphs, we use the measurement method described in \cite{qz2_cui} to obtain the ground state $\ket{\psi_0}$ at $g=0$, as given in \eqref{p0}. Note that this method is not feasible for larger graphs, as discussed above for the quantum algorithm for the HC problem. 
\begin{figure}[htb] \centering \subfigure[]{} \includegraphics[scale=0.5]{lattice_list_graph_a.eps} \subfigure[]{} \includegraphics[scale=0.5]{lattice_list_graph_d.eps} \subfigure[]{} \includegraphics[scale=0.5]{lattice_list_graph_e.eps} \subfigure[]{} \includegraphics[scale=0.5]{lattice_list_graph_torus.eps} \caption{ Four different graphs used in calculating the critical parameters of the TQPT of the $\mathbb{Z}_2$ LGT. (a) $G_1: N_v=9, N_e=18, N_{hc} =0$. (b) $G_2: N_v=9, N_e=18, N_{hc} =10$. (c) $G_3: N_v=9, N_e=18, N_{hc} =30$. (d) $G_4$: the graph corresponding to the $3\times 3$ torus lattice, $N_v=9, N_e=18, N_{hc} =48$. } \label{fig_hpc_list_graph} \end{figure} Then we adiabatically increase $g$ from 0 to 1, in steps with $g_s=0.001, t_s=0.1, n=100$, where $g_s$ is the variation of $g$, and $t_s$ and $n$ are the time and the number of symmetric Trotter substeps within each step, respectively. The total cumulative error is $\varepsilon_{all} = \sum_{g=g_s}^1{\varepsilon_s(t,n,g)}$, which is less than $0.135\%$ for the graphs studied here. We study quantum phase transitions on the lattices corresponding to these graphs. First, $\braket{H}$, $\braket{Z}$ and $\braket{X}$ are obtained as functions of $g$. Then we define two critical parameters, which are easily determined, to represent the critical parameter $g_c$: one is the extremal point $g_c^H$ of the second derivative of $\braket{H}$, and the other is the extremal point $g_c^Z$ of the first derivative of $\braket{Z}$; correspondingly $\lambda_c^H=\frac{1}{g_c^H}$, $\lambda_c^Z=\frac{1}{g_c^Z}$. As can be observed in Fig.~\ref{fig_hpc_list}(c-d), $g_c^H$ and $g_c^Z$ both increase with $N_{hc}$. This verifies that the properties of the graph directly affect the characteristics of the quantum phase transition. 
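The critical parameters are extracted from the sampled curves by locating extrema of finite-difference derivatives. A minimal sketch of this post-processing step follows; the test curves and all names are our own, chosen so that the answer is known analytically.

```python
import numpy as np

def critical_point(g, f, order):
    """Locate the extremum of the order-th numerical derivative of the
    sampled curve f(g), as used for g_c^H (order=2) and g_c^Z (order=1)."""
    d = np.asarray(f, dtype=float)
    for _ in range(order):
        d = np.gradient(d, g)
    return g[np.argmax(np.abs(d))]

g = np.linspace(0.0, 1.0, 2001)
g0, w = 0.4, 0.05
# -log cosh((g-g0)/w) has extremal second derivative exactly at g = g0,
# and tanh((g-g0)/w) has extremal first derivative exactly at g = g0.
gc_H = critical_point(g, -np.log(np.cosh((g - g0) / w)), order=2)
gc_Z = critical_point(g, np.tanh((g - g0) / w), order=1)
```

Both extracted values coincide with the analytic transition point $g_0=0.4$ up to the grid resolution.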
\begin{figure}[htb] \centering \subfigure[]{} \includegraphics[scale=0.6]{hpc_list_h.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_list_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_list_dde.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_list_dz.eps} \caption{(a) $\langle H\rangle$ for four graphs denoted in terms of $N_{hc}$. (b) $\langle Z\rangle$ and $\langle X\rangle$ for four graphs denoted in terms of $N_{hc}$. (c) The second derivatives of $\langle H\rangle$ for the four graphs, which can be used to determine the values of $g_c^H$. (d) The first derivative of $\langle Z\rangle$ for the four graphs, which can be used to determine the values of $g_c^Z$. } \label{fig_hpc_list} \end{figure} In order to study how $g_c^H$ and $g_c^Z$ depend on $N_{hc}$, we prepare two groups of undirected and unweighted connected graphs with $N_v=9$. The first group consists of 1000 samples, each with $N_e=18$. We perform the classical GPU demonstration of the adiabatic quantum simulation and obtain $N_{hc}$ for each graph. The distribution of $N_{hc}$ is shown in Fig.~\ref{fig_hpc_sample}(a). The second group consists of $200$ graphs for each value of $N_e$ varying from $16$ to $22$. The distribution of $N_{hc}$ is shown in Fig.~\ref{fig_hpc_sample}(b). \begin{figure}[htb] \centering \subfigure[]{} \includegraphics[scale=0.6]{hpc_18_sample.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_sample.eps} \caption{ Distribution of $N_{hc}$ among the samples of graphs. $n_s$ represents the number of samples. (a) The case in which $N_e = 18$ in each sample. There are 1000 samples in total. (b) The case in which $N_e$ varies from $16$ to $22$. There are 200 samples for each value of $N_e$, and 1400 samples in total. } \label{fig_hpc_sample} \end{figure} It can be seen from Fig.~\ref{fig_hpc_N_hc} that the average values of $g_c^H$ and $g_c^Z$ increase steadily with $N_{hc}$ when $N_e$ is fixed. 
In particular, when $N_{hc}=0$, $g_c^H$ and $g_c^Z$ are very small. This shows that $N_{hc}$ has a significant effect on $g_c$. This effect can help determine $N_{hc}$ without searching for the HCs of the graph. The time complexity required to reach the TQPT is $O_1$. $g_c^H$ and $g_c^Z$ may decrease linearly with $N_e$, as discussed below. Together with $N_v \leq N_e \leq N_v(N_v-1)/2$, these properties imply that the HC problem in these small graphs can be solved in polynomial time. \begin{figure}[htb] \centering \subfigure[]{} \includegraphics[scale=0.6]{hpc_18_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_18_z.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_z.eps} \caption{ Dependence of the critical parameters on $N_{hc}$. (a) Dependence of $g_c^H$ on $N_{hc}$ in graphs with $N_e=18$. (b) Dependence of $g_c^Z$ on $N_{hc}$ in graphs with $N_e=18$. (c) Dependence of $g_c^H$ on $N_{hc}$ in graphs with $N_e$ varying from $16$ to $22$. (d) Dependence of $g_c^Z$ on $N_{hc}$ in graphs with $N_e$ varying from $16$ to $22$. } \label{fig_hpc_N_hc} \end{figure} We have also studied the effect of $N_{e}$ on the critical parameters, by using the second group of graphs with the same $N_v$ value but different $N_e$ values. As can be seen in Fig.~\ref{fig_hpc_all_N_edge}, the average values of $g_c^H$ and $g_c^Z$ decrease steadily with $N_e$ when $N_v$ is fixed. The larger the $N_e$ value, the smaller the critical parameters. In other words, the average value of $\lambda_c = \frac{1}{g_c}$ increases linearly with $N_e$ when $N_v$ is fixed. Thus $O_1$ increases with $N_e$ polynomially. On average, when $N_v$ is fixed, the connectivity between vertices of a graph is proportional to $N_e$. 
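A linear trend of $\lambda_c$ in $N_e$ of this kind is quantified by an ordinary least-squares fit. A sketch of the procedure on synthetic data follows; the slope and intercept below are placeholder values of our own, not fitted values from this work.

```python
import numpy as np

def fit_linear(x, y):
    """Ordinary least-squares fit y = a*x + b, as used to extract the
    linear dependence of a critical parameter on N_e."""
    a, b = np.polyfit(np.asarray(x, float), np.asarray(y, float), deg=1)
    return a, b

# Synthetic check: data generated from a known line is recovered.
n_edges = np.arange(16, 23, dtype=float)
lam = 0.15 * n_edges + 0.075          # placeholder coefficients (ours)
a, b = fit_linear(n_edges, lam)
```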
\begin{figure}[htb] \centering \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_nedge_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_nedge_z.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_nedge_lam_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_nedge_lam_z.eps} \caption{ The dependence of the critical parameters on $N_{e}$. (a) The dependence of $g_c^H$ on $N_{e}$ in the samples with $N_e$ varying from $16$ to $22$. (b) The dependence of $g_c^Z$ on $N_{e}$ in the samples with $N_e$ varying from $16$ to $22$. (c) The dependence of $\lambda_c^H$ on $N_{e}$ in the samples with $N_e$ varying from $16$ to $22$. (d) The dependence of $\lambda_c^Z$ on $N_{e}$ in the samples with $N_e$ varying from $16$ to $22$. } \label{fig_hpc_all_N_edge} \end{figure} As can be seen in Fig.~\ref{fig_hpc_deg}, when $N_e$ and $N_v$ are fixed, the average values of the critical parameters decrease with the maximal degree of the vertices $Max(Deg)$, while they increase with the minimal degree of the vertices $Min(Deg)$. On average, when $N_v$ and $N_e$ are fixed, the larger $Max(Deg)$, the smaller $Min(Deg)$. So the results in Fig.~\ref{fig_hpc_deg}(a,c) and Fig.~\ref{fig_hpc_deg}(b,d) are consistent. \begin{figure}[htb] \centering \subfigure[]{} \includegraphics[scale=0.6]{hpc_18_degmax_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_18_degmin_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_degmaxlist_e.eps} \subfigure[]{} \includegraphics[scale=0.6]{hpc_all_degminlist_e.eps} \caption{ The dependence of the critical parameters on the degree of vertices. (a) The dependence of $g_c^H$ on $Max(Deg)$ in the samples with $N_e=18$. (b) The dependence of $g_c^H$ on $Min(Deg)$ in the samples with $N_e=18$. (c) The dependence of $g_c^H$ on $Max(Deg)$ in the samples with $N_e$ varying from $16$ to $22$. (d) The dependence of $g_c^Z$ on $Min(Deg)$ in the samples with $N_e$ varying from $16$ to $22$. 
} \label{fig_hpc_deg} \end{figure} Furthermore, we find that the relation between the average values of $g_c^H$ and $N_{hc}$ can be very well fitted quantitatively as \begin{equation} g_c^H = A \sqrt{N_{hc}} + B, \label{gch} \end{equation} as shown in Fig.~\ref{fig_hpc_N_hc_fit} for the two groups of graph samples. On the other hand, the relation between $\lambda_c^H$ and $N_e$ can be very well fitted as \begin{equation} \lambda_c^H = 0.1513\,N_e + 0.007536, \label{lch} \end{equation} as shown in Fig.~\ref{fig_hpc_lam_fit}. \eqref{gch} and \eqref{lch} are consistent. Due to the limitation of computing power, $N_e$ is still relatively small in our simulated graphs, so the linear relationship needs verification in larger graphs. \begin{figure}[htb] \centering \subfigure[]{} \includegraphics[width=0.45 \textwidth]{hpc_18_e_fit.eps} \subfigure[]{} \includegraphics[width=0.45 \textwidth]{hpc_all_e_fit.eps} \subfigure[]{} \includegraphics[width=0.45 \textwidth]{hpc_all_e_fit_arg_a.eps} \subfigure[]{} \includegraphics[width=0.45 \textwidth]{hpc_all_e_fit_arg_b.eps} \caption{ The average value of $g_c^H$ as a function of $N_{hc}$, fitted by $g_c^H = A\sqrt{N_{hc}} + B $. (a) The samples with $N_e=18$: $A=0.0049005$, $B=0.345178$. (b) The samples with $N_e$ varying from $16$ to $22$, with $A$ and $B$ depending on $N_e$. (c) Dependence of $A$ in (b) on $N_e$. (d) Dependence of $B$ in (b) on $N_e$.} \label{fig_hpc_N_hc_fit} \end{figure} \begin{figure}[htb] \centering \subfigure[]{} \includegraphics[width=0.45 \textwidth]{hpc_all_nedge_lam_e_fit.eps} \subfigure[]{} \includegraphics[width=0.45 \textwidth]{hpc_all_sample_mean.eps} \caption{ (a) $\lambda_c^H$ as a function of $N_e$, fitted by $\lambda_c^H = 0.1513\,N_e + 0.07536 $. $N_e$ varies from $16$ to $22$. 
(b) The dependence of the average value of $N_{hc}$ on $N_e$, which varies from $16$ to $22$.} \label{fig_hpc_lam_fit} \end{figure} In conclusion, we develop a novel approach to graph problems, by defining a $\mathbb{Z}_2$ LGT on the lattice of which the graph is the dual lattice. Moreover, we present a quantum adiabatic algorithm with time complexity $O\left( \frac{1}{g_c^2} \sqrt{ \frac{1}{\varepsilon} N_e^{3/2}\left( N_v^3 + \frac{N_e}{g_c}\right) } \right) $ to obtain the closed-string condensate emerging at the TQPT of the $\mathbb{Z}_2$ LGT, and find that the HC number $N_{hc}$ of the graph has a significant effect on the TQPT critical parameter $g_c$, providing a novel quantum algorithm for the HC problem. Thereby a new approach to the HC problem and to P versus NP in quantum computing is proposed. Given the importance of graph theory in mathematics and computer science, our approach may also be useful in, say, deep learning, which is represented by deep neural networks and is being integrated with graphical models~\cite{Johnson2016,HWang2016}, as well as quantum deep learning~\cite{QBM_PRX,Lloyd2017}. This work also suggests a new direction of research connecting graph problems with topological quantum matter. This work was supported by National Natural Science Foundation of China (Grant No. 12075059).
\section{Introduction} \subsection{Motivation} To be useful as input to computational simulations, and for verification of the output of these simulations, observed data at the scales that numerical computations of weather and climate cannot resolve well enough to simulate in real time must be interpolated, extrapolated and spread over scales that allow real-time computational simulations. This is the process of ``upscaling'', or ``coarse graining'', of the fine-scale data for use in computational simulations at coarser scales. The goal of the present paper is to quantify the uncertainty in this process of upscaling, or coarse graining, of fine-scale computationally simulated data for use in computational simulations at coarser scales, in the example of two-level quasigeostrophic channel flow. Accomplishing this goal corresponds to taking the step from (ii) to (iii) in the well-known linked chain of discovery in climate science, which is {\obeylines (i) driven by large datasets and new methods for their analysis; (ii) informed by rigorous mathematical derivations and analyses of stochastic geophysical fluid equations; (iii) quantified using computer simulations, evaluated for uncertainty, variability and model error; (iv) optimized by cutting edge data assimilation techniques, then (v) compared with new observation datasets to determine what further analysis and improvements will be needed. } The question for coarse graining that we address in this paper is the following: \vspace{-3mm} \begin{quote} How can we use computationally simulated surrogate data at highly resolved scales, in combination with the mathematics of stochastic processes in nonlinear dynamical systems, to estimate and model the effects on the simulated variability at much coarser scales of the computationally unresolvable, small, rapid scales of motion at the finer scales? \end{quote}\vspace{-3mm} We will address this question in the context of two-level quasigeostrophic channel flow. 
Our approach is guided by recent results in \citep{CoGoHo2017} which showed that a multi-scale decomposition of the deterministic Lagrange-to-Euler fluid flow map $g_t$ into a slow large-scale mean and a rapidly fluctuating small-scale map leads to Lagrangian fluid paths $x_t=g_tX$ with $g_0=Id$ on a manifold $ \mathcal{D} $ governed by the stochastic process $g_t\in {\rm Diff}(\mathcal{D})$ on the Lie group of diffeomorphic flows, which appears in the same form as had been proposed and studied for fluids in \citep{holm2015variational}; namely, \begin{equation} {{\color{red}\mathsf d}}x_t = {{\color{red}\mathsf d}}g_t \,X = u_t(x)dt + \sum\limits^N_{i=1} \xi_i(x)\circ dW^i_t = u_t(g_tX)dt + \sum\limits^N_{i=1} \xi_i(g_tX)\circ dW^i_t \,,\label{Lag-stoch-process} \end{equation} where $x=g_tX$, ${{\color{red}\mathsf d}}$ represents stochastic differentiation, the vector fields $\xi_i(x)$ for $i=1,2,\dots,N,$ are prescribed functions of the Eulerian spatial coordinates, $x\in \mathcal{D}$ on the domain of flow $\mathcal{D}$, and $\circ\, dW^i(t)$ denotes the Stratonovich differential with independent Brownian motions $dW^i(t)$. The stochastic process for the evolution of the Lagrangian process $g_t$ in equation \eqref{Lag-stoch-process} involves the pullback of the Eulerian total velocity vector field, which comprises the sum of a drift displacement vector field $u_t(x)dt$ plus a sum over terms in $\xi_i(x)$ representing the (assumed stationary) spatial correlations of the temporal noise in the Stratonovich representation, each with its own independent Brownian motion in time. In \citep{holm2015variational} the velocity decomposition formula \eqref{Lag-stoch-process} was applied in the Hamilton-Clebsch variational principle to derive coadjoint motion equations as stochastic partial differential equations (SPDEs) whose ensemble of realisations can be used to quantify the uncertainty in the slow dynamics of the resolved mean velocity $u_t(x)$. 
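For numerical work, a Stratonovich process of the form \eqref{Lag-stoch-process} is commonly discretised with the Euler--Heun predictor--corrector scheme, whose corrector averages the noise coefficient at the current and predicted points. The sketch below is our own generic illustration (the drift $u$ and the correlation vector fields $\xi_i$ are supplied by the caller), not the scheme used later in the paper.

```python
import numpy as np

def euler_heun_path(x0, u, xis, dt, n_steps, rng):
    """Integrate dx = u(x) dt + sum_i xi_i(x) o dW_i (Stratonovich)
    with the Euler-Heun predictor-corrector scheme."""
    x = np.atleast_1d(np.asarray(x0, dtype=float)).copy()
    path = [x.copy()]
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(xis))
        drift = u(x) * dt
        # Predictor: a plain Euler step.
        x_pred = x + drift + sum(xi(x) * dw for xi, dw in zip(xis, dW))
        # Corrector: average the noise coefficients at x and x_pred,
        # which makes the scheme consistent with the Stratonovich integral.
        x = x + drift + sum(0.5 * (xi(x) + xi(x_pred)) * dw
                            for xi, dw in zip(xis, dW))
        path.append(x.copy())
    return np.array(path)
```

For additive noise (constant $\xi$) the predictor and corrector coincide, and the scheme reproduces $x_t = x_0 + \xi\, W_t$ exactly.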
Under the conditions imposed in the derivation of formula \eqref{Lag-stoch-process} in \citep{CoGoHo2017} using homogenization theory, the sum of vector fields in \eqref{Lag-stoch-process} that had been treated in \citep{holm2015variational} from the viewpoint of stochastic coadjoint motion was found to represent a bona fide decomposition of the fluid transport velocity into a mean plus fluctuating flow. \subsection{Data-driven modelling of uncertainty} As opposed to theory-driven models such as Newtonian force laws and thermodynamic processes for the subgrid-scale dynamics, here we will make use of stochastic geometric mechanics as an opportunity to consider a stochastic version of data-driven modelling. In data-driven modelling, one seeks to model properties of a subsystem of a given dynamical system which, for example, may be observable at length or time scales which are below the resolution of available initial and boundary conditions, or scales finer than the resolution of numerical simulations of the dynamical system based on the assumed exact equations. The most familiar example of data-driven modelling occurs in numerical weather prediction (NWP). In NWP, various numerically unresolvable, but observable, local subgrid-scale processes, such as formation of fronts and generation of tropical cyclones, are expected to have profound effects on the variability of the weather. These subgrid-scale processes must be parameterized at the resolved scales of the numerical simulations. Of course, the accuracy of a given parameterization model often remains uncertain. In fact, even the possibility of modelling subgrid-scale properties in terms of resolved-scale quantities available to simulations may sometimes be questionable. 
However, if some information about the \textit{statistics} of the small-scale excitations is known, such as the spatial correlations of their observed transport properties at the resolved scales, one may arguably consider modelling the effects of the small-scale dynamics on the resolved scales by a stochastic transport process whose spatial correlations match the observations, at the computationally unresolvable scales. As we will see, the eigenvectors of the correlation matrix of the observations will provide the modes of the subscale motion, to be modelled by applying stochasticity with the statistics of the unresolved scales. \subsection{The main content of the paper} The rest of the paper is structured as follows. Section~\ref{sec:ham} focuses on the derivation of the stochastic multi-layer quasi-geostrophic (QG) equations using the variational approach proposed by~\citet{holm2015variational}. It starts from the derivation of the deterministic $N$-layer QG model in Section~\ref{sec:D_NLQG}, followed by the Hamiltonian formulation for the stochastic $N$-layer QG equations given in Section~\ref{sec:D_NLQG_ham}, which is then specialised to the case of two layers with a flat bottom for the remainder of this paper. Section~\ref{sec:2d_qg} describes our numerical approach to the deterministic and stochastic QG equations. 
In particular, Section~\ref{sec:2d_qg_method_determ} focuses on the numerical method for the deterministic QG model, and Section~\ref{sec:2d_qg_num_determ} presents numerical results for the cases of heterogeneous (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8}) and homogeneous (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-7}) flow, namely: \begin{itemize} \setlength\itemsep{0.0cm} \item The high-resolution deterministic solution $q^f$; \item The low-resolution deterministic solution (also referred to as {\it the truth} or {\it the true solution}), $q^a$, computed as the solution of the elliptic equation~\eqref{eq:q_psi} with the stream function $\psi^a$, where $\psi^a$ is computed by spatially averaging the high-resolution stream function $\psi^f$ over the coarse grid cell; \item The low-resolution deterministic solution, $q^m$, computed by simulating the QG model; \item The decorrelation time for the true solution $q^a$ (Figure~\ref{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7}). \end{itemize} The stochastic version of the numerical method is given in Section~\ref{sec:2d_qg_method_stochastic}. We show that the numerical method for the stochastic QG equations is in Stratonovich form and prove its consistency in time (Section~\ref{sec:2d_qg_consistency}). In Section~\ref{sec:ICs} we present a procedure for computing physically consistent stochastic initial conditions and numerically test its correctness (Figure~\ref{fig:eof64_particles_deltaT_time_within_spread_L2_mu4D-8_mu4D-7}). Section~\ref{sec:cal} describes our algorithm for calibrating the eigenvectors and demonstrates the approach using numerical results. 
In particular, we use the Lagrangian framework to quantify uncertainty in the homogeneous and heterogeneous flow regimes and to analyse: \begin{itemize} \setlength\itemsep{0.0cm} \item The relative error between the true deterministic solution and the solution approximated with the leading Empirical Orthogonal Functions (EOFs) and their corresponding principal components, in particular its dependence on the number of EOFs (Figure~\ref{fig:ODE_EOF_re}, Section~\ref{sec:val_approx}); \item The spread of stochastic solutions (also referred to as ensemble members), and how it depends on the number of EOFs and the size of the stochastic ensemble (Figures~\ref{fig:eof64_100_400} and~\ref{fig:eof128_100_400}) in instantaneous snapshots (Section~\ref{sec:approx_lagrangian_evol}); \item The stochastic spread averaged over the Lagrangian particles in both fast and slow flow regions, and its dependence on the number of EOFs and the size of the ensemble over time (Figures~\ref{fig:cloud_area_fast_slow_flow} and~\ref{fig:error_spread_fast_slow_flow}, Section~\ref{sec:approx_lagrangian_evol}); \item Along with the uncertainty quantification results for Lagrangian particles, we apply EOFs to the stochastic QG equations in Section~\ref{sec:xi_SQG} and study uncertainty quantification with respect to the number of EOFs and the size of the stochastic ensemble for the heterogeneous (Figure~\ref{fig:tildeT_EOF_ensemble_mu4D-8}) and homogeneous (Figure~\ref{fig:tildeT_EOF_ensemble_mu4D-7}) flows; \item In order to compare the modelled deterministic solution $q^m$ and stochastic solution with the true solution $q^a$, we study uncertainty quantification with respect to the number of EOFs and the size of the stochastic ensemble for the heterogeneous (Figure~\ref{fig:MC3test_mu4D-8}) and homogeneous (Figure~\ref{fig:MC3test_mu4D-7}) flows in the deterministic QG model. \end{itemize} In Section~\ref{sec:concl} we provide conclusions and outlook for future research.
\section{Hamiltonian equations of motion for a multi-layer fluid} \label{sec:ham} \subsection{A deterministic $N$-layer quasi-geostrophic (NLQG) fluid\label{sec:D_NLQG}} Consider a stratified fluid of $N$ superimposed layers of constant densities $\rho_1 < \dots < \rho_N$, the layers being stacked according to increasing density, so that the density of the upper layer is $\rho_1$. The quasi-geostrophic (QG) approximation assumes that the velocity field is constant in the vertical direction and that the horizontal motion obeys a system of coupled incompressible shallow-water equations. We shall denote by $\mathbf{u}_i = (- \,\partial_y\psi_i, \partial_x\psi_i) = \mathbf{\hat{z}}\times\nabla \psi_i$ the velocity field of the $i^{th}$ layer, where $\psi_i$ is its stream function, and the layers are numbered from the top to the bottom. We define the generalised total vorticity of the $i^{th}$ layer as \begin{equation}\label{omsubi} \omega_i = q_i + f_i = \Delta \psi_i + \alpha_i \sum\limits^N_{j=1} T_{ij}\psi_j + f_i =: \sum\limits^N_{j=1} E_{ij}\psi_j + f_i \,,\qquad i=1,\dots,N, \end{equation} where the elliptic operator $E_{ij}$ defines the layer vorticity, \[q_i = \sum\limits^N_{j=1} E_{ij}\psi_j:= \Delta \psi_i + \alpha_i \sum\limits^N_{j=1} T_{ij}\psi_j\,,\] and the parameters $\alpha_i $, $f_i $, $f_N$, $f_0$ and $\beta$ are \begin{align}\label{paramdefs} \begin{split} \alpha_i &= (f_0^2/g)\big((\rho_{i+1}-\rho_i)/\rho_0\big)D_i \,,\qquad i=1,\dots,N, \\ f_i &= f_0 + \beta y \,,\qquad i=1,\dots,N-1, \\ f_N &= f_0 + \beta y + f_0 d(y)/D_N, \\ f_0 &= 2\Omega \sin(\phi_0) \,,\qquad \beta = 2\Omega \cos(\phi_0)/R\,, \end{split} \end{align} where $g$ is the gravitational acceleration, $\rho_0 = (1/N)(\rho_1 + \dots + \rho_N)$ is the mean density, $D_i$ is the mean thickness of the $i^{th}$ layer, $R$ is the Earth's radius, $\Omega$ is the Earth's angular velocity, $\phi_0$
is the reference latitude, and $d(y)$ is the shape of the bottom. The $N \times N$ symmetric tri-diagonal matrix $T_{ij}$ represents the second-order difference operator, \begin{equation}\label{2ndDiffTop} \sum\limits^N_{j=1} T_{ij}\psi_j = (\psi_{i-1} - \psi_i) - (\psi_i - \psi_{i+1})\,, \end{equation} so that \begin{equation}\label{2ndDiffT} T_{ij} = \begin{bmatrix} -1 & 1 & 0 & 0 & \dots &\dots & 0 \\ 1 & -2 & 1 & 0 & \dots & \dots & 0 \\ 0 & 1 & \dots & \dots & \dots & 1 & 0 \\ 0 &\dots & \dots & 0 & 1 & -2 & 1 \\ 0 &\dots & \dots & 0 & 0 & 1 & -1 \end{bmatrix} \,,\qquad i,j=1,\dots,N. \end{equation} With these standard notations, the motion of the NLQG fluid is given by \begin{equation}\label{NlayerVortDyn} \partial_t q_i = \Big\{ \omega_i ,\,\psi_i \Big\}_{xy} = -\, \mathbf{\hat{z}} \times \nabla \psi_i \cdot \nabla \omega_i = -\, \mathbf{u}_i \cdot \nabla \omega_i \,,\qquad i =1,\dots,N, \end{equation} where $\mathbf{\hat{z}}$ is the vertical unit vector, $\mathbf{u}_i = \mathbf{\hat{z}} \times \nabla \psi_i $ is the horizontal flow velocity in the $i^{th}$ layer, and the brackets in \begin{equation}\label{canPoissonBrkt} \{\omega,\psi\}=J(\omega,\psi)=\omega_x\psi_y-\omega_y\psi_x = \mathbf{\hat{z}}\cdot \nabla \omega \times \nabla \psi \end{equation} denote the usual $xy$ canonical Poisson bracket in $\mathbb{R}^2$. The boundary conditions in a compact domain $D\subset\mathbb{R}^2$ with smooth boundary $\cup_{j}\partial{D}_j$ are $\psi_j|_{\partial{D}_j} = \text{constant}$, whereas in the entire $\mathbb{R}^2$ they are $\lim_{(x, y)\to\pm\infty} \nabla\psi_j=0$. The space of variables with canonical Poisson bracket in \eqref{canPoissonBrkt} consists of $N$-tuples $(q_1,\dots, q_N)$ of real-valued functions on $D$ (the ``generalized vorticities'') with the above boundary conditions and certain smoothness properties that guarantee that solutions are at least of class $C^1$.
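The coupling matrix in \eqref{2ndDiffT} is easy to assemble and sanity-check numerically. A minimal sketch (illustrative only; the function name is ours):

```python
import numpy as np

def layer_coupling_matrix(N):
    """Tri-diagonal second-order difference matrix T of eq. (2ndDiffT):
    an interior row i returns (psi_{i-1} - psi_i) - (psi_i - psi_{i+1});
    the first/last rows drop the term above/below the layer stack."""
    T = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    T[0, 0] = T[-1, -1] = -1.0
    return T

T = layer_coupling_matrix(4)
psi = np.array([1.0, 3.0, -2.0, 0.5])
# interior layer i=1 (0-based): stencil (psi_0 - psi_1) - (psi_1 - psi_2)
assert np.isclose((T @ psi)[1], (psi[0] - psi[1]) - (psi[1] - psi[2]))
assert np.allclose(T, T.T) and np.allclose(T.sum(axis=1), 0.0)
```

The symmetry and zero row sums checked in the last line mirror the properties of \eqref{2ndDiffT} used below: $T$ is symmetric, and a depth-independent stream function produces no interfacial stretching term.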
The Hamiltonian for the $N$-layer vorticity dynamics in \eqref{NlayerVortDyn} is the total energy \begin{equation}\label{NlayerVortDyn-erg} H(q_1,\dots, q_N) = \frac12\int_D \Big[\sum\limits^N_{i=1} \frac{1}{\alpha_i} |\nabla\psi_i|^2 + \sum\limits^{N-1}_{i=1} (\psi_{i+1}-\psi_i)^2\Big]dx\,dy \,, \end{equation} with stream function $\psi_i$ determined from vorticity $\omega_i$ by solving the elliptic equation \eqref{omsubi} for $q_i=\omega_i-f_i$ with \begin{equation}\label{elliptic-op} q_i = \sum\limits^N_{j=1} E_{ij}\psi_j\,, \end{equation} subject to the boundary conditions discussed above. Hence, we find that \begin{equation}\label{NlayerVortDyn-erq} H(q_1,\dots, q_N) = -\frac12\int_D\sum\limits^N_{i,j=1} \psi_iE_{ij}\psi_j dx\,dy = -\frac12\int_D\sum\limits^N_{i,j=1} q_i E^{-1}_{ij}*q_j dx\,dy = -\frac12\int_D\sum\limits^N_{i=1} q_i \psi_i dx\,dy\,, \end{equation} where $E^{-1}_{ij}*q_j = \psi_i$ denotes convolution with the Green's function $E^{-1}_{ij}$ for the symmetric elliptic operator $E_{ij}$. The relation \eqref{NlayerVortDyn-erq} means that $\delta H/\delta q_i = -\,\psi_i$ for the variational derivative of the Hamiltonian functional $H$ with respect to the function $q_i$.
\paragraph{\bf Lie--Poisson bracket.} Equations \eqref{NlayerVortDyn} are Hamiltonian with respect to the Lie--Poisson bracket on the dual of $\bigoplus\limits^N_{i=1}{\cal F}(D)$ given by \begin{equation}\label{VortLie-PoissonBrkt} \{F,H\}(q_1,\dots, q_N) = \sum\limits^N_{i=1} \int_D (q_i + f_i(x)) \left\{\frac{\delta F}{\delta q_i},\,\frac{\delta H}{\delta q_i}\right\}_{xy}dx\,dy\,, \end{equation} provided the domain of flow $D$ is simply connected.% \footnote{If the domain $D$ is not simply connected, then variational derivatives such as $\delta H/\delta q_i$ must be interpreted with care, because in that case the boundary conditions on $\psi_i$ will come into play \citep{McWilliams1977}.} The motion equations \eqref{NlayerVortDyn} for $q_i$ now follow from the Lie--Poisson bracket \eqref{VortLie-PoissonBrkt} after an integration by parts to write it equivalently as \begin{equation}\label{VortLie-PoissonBrkt2} \frac{dF}{dt} = \{F,H\}(q_1,\dots, q_N) = -\sum\limits^N_{i=1} \int_D \frac{\delta F}{\delta q_i} \left\{q_i + f_i(x) ,\,\frac{\delta H}{\delta q_i}\right\}_{xy}dx\,dy \,, \end{equation} and recalling that $\delta H/\delta q_i =-E^{-1}_{ij}*q_j=- \,\psi_i$, $i=1,2,\dots,N$. \paragraph{\bf Constants of motion.} According to equations \eqref{NlayerVortDyn}, the material time derivative of $\omega_i(t, x, y)$ vanishes along the flow lines of the divergence-free horizontal velocity $\mathbf{u}_i = \mathbf{\hat{z}}\times\nabla \psi_i $. Consequently, for every differentiable function $\Phi_i: \mathbb{R}\to\mathbb{R}$ the functional \begin{equation}\label{Casimirs} C_{\Phi_i}(\omega_i) = \int_D \Phi_i (\omega_i)\,dx\,dy \end{equation} is a conserved quantity for the system \eqref{NlayerVortDyn} for $i =1,\dots,N$, provided the integrals exist.
By Kelvin's circulation theorem, the following integrals over an advected domain $S(t)$ in the plane are also conserved, \begin{equation}\label{KelvinThm} I_i(t) = \int_{S(t)} \omega_i \,dx\,dy = \int_{\partial S(t)} \nabla \psi_i \cdot \mathbf{\hat{n}} \,ds \,, \end{equation} where $\mathbf{\hat{n}}$ is the horizontal outward unit normal and $ds$ is the arclength parameter of the closed curve $\partial S(t)$ bounding the domain $S(t)$ moving with the flow. \subsection{Hamiltonian formulation for the stochastic NLQG fluid\label{sec:D_NLQG_ham}} Having understood the geometric structure (Lie--Poisson bracket, constants of motion and Kelvin circulation theorem) for the deterministic case, we can introduce the stochastic versions of equations \eqref{NlayerVortDyn} by simply making the Hamiltonian stochastic while preserving the geometric structure described in the previous section. Namely, we choose \begin{equation}\label{Ham-stoch} {\color{red}\mathsf d} h = H(\{q\})dt + \int_D \sum\limits^N_{i=1}\sum\limits^K_{k=1} q_i (t,x,y)\zeta^k_i(x,y) \circ dW_k(t)\,dx\,dy \,, \end{equation} where the fields $\zeta^k_i(x,y)$, $k =1,\dots,K$, represent the spatial correlations of the Stratonovich noise introduced in \eqref{Ham-stoch}.
For this stochastic Hamiltonian, the Lie--Poisson bracket \eqref{VortLie-PoissonBrkt} leads to the following stochastic process for the transport of the $N$-layer generalised vortices, \begin{equation}\label{NlayerVortDyn-stoch} {\color{red}\mathsf d} q_i = \Big\{ \omega_i ,\,{\color{red}\mathsf d} \psi_i \Big\}_{xy} = J\big(\omega_i ,\,{\color{red}\mathsf d} \psi_i \big) = \nabla ({\color{red}\mathsf d} \psi_i ) \times \mathbf{\hat{z}}\cdot \nabla \omega_i = -\, {\color{red}\mathsf d} \mathbf{u}_i \cdot \nabla \omega_i \,,\qquad i =1,\dots,N, \end{equation} where we have defined the stochastic transport velocity in the $i^{th}$ layer \begin{equation}\label{Nlayer-stoch-vel} {\color{red}\mathsf d} \mathbf{u}_i := \mathbf{\hat{z}} \times \nabla ({\color{red}\mathsf d} \psi_i ) \,,\qquad i =1,\dots,N, \end{equation} in terms of its stochastic stream function \begin{equation}\label{Nlayer-stoch-dpsi} {\color{red}\mathsf d} \psi_i := \psi_i \,dt + \sum\limits^K_{k=1} \zeta^k_i(x,y) \circ dW_k(t) = \frac{\delta({\color{red}\mathsf d} h)}{\delta q_i} \,,\qquad i =1,\dots,N, \end{equation} determined from the variational derivative of the stochastic Hamiltonian in \eqref{Ham-stoch} with respect to the generalised vorticity $q_i$ in the $i^{th}$ layer. \paragraph{\bf Constants of motion.} The constants of motion $C_{\Phi_i}$ in \eqref{Casimirs} and the Kelvin circulation theorem for the integrals $I_i$ in \eqref{KelvinThm} persist for the stochastic generalised vorticity equations in \eqref{NlayerVortDyn-stoch}. This is because both of these properties follow from the Lie--Poisson bracket in \eqref{VortLie-PoissonBrkt}. However, the stochastic Hamiltonian in \eqref{Ham-stoch} is not conserved, since it depends explicitly on time, $t$, through its Stratonovich noise term.
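In a discrete-time setting, the stochastic stream function \eqref{Nlayer-stoch-dpsi} is realised by drawing independent Brownian increments $dW_k \sim N(0,\,\Delta t)$ and summing the $\zeta^k$ fields. A minimal sketch (the $\zeta^k$ here are synthetic random fields; in the paper they are calibrated from data):

```python
import numpy as np

def stream_function_increment(psi, zetas, dt, rng):
    """One realisation of  dpsi = psi dt + sum_k zeta^k dW_k,  dW_k ~ N(0, dt),
    as in eq. (Nlayer-stoch-dpsi); zetas is a list of K spatial fields."""
    dW = rng.normal(0.0, np.sqrt(dt), size=len(zetas))
    return psi * dt + sum(z * w for z, w in zip(zetas, dW))

rng = np.random.default_rng(42)
ny, nx, K, dt = 16, 16, 4, 1e-3
zetas = [rng.standard_normal((ny, nx)) for _ in range(K)]

# The pointwise variance of the noise part is dt * sum_k (zeta^k)^2
samples = np.array([stream_function_increment(np.zeros((ny, nx)), zetas, dt, rng)
                    for _ in range(4000)])
assert np.allclose(samples.var(axis=0), dt * sum(z**2 for z in zetas), rtol=0.2)
```

The variance check illustrates why the $\zeta^k$ encode the spatial correlations of the noise: the amplitude of the stochastic perturbation at each point is set entirely by the $\zeta^k$ fields.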
\paragraph{\bf The case of two layers.} In the case of two layers with a flat bottom, which we study in the remainder of this paper, the $2$-layer generalised vorticity equations in \eqref{NlayerVortDyn-stoch} become \begin{align} \label{eq:SLTpv} \begin{split} {\color{red}\mathsf d} q_1+J(\psi_1\,dt +{\color{red} \sum\limits^K_{k=1} \zeta^k_1 \circ dW_k(t)},\, q_1+\beta y) &=(\nu\Delta^2\psi_1)\,dt ,\\ {\color{red}\mathsf d} q_2+J(\psi_2\,dt +{\color{red} \sum\limits^K_{k=1} \zeta^k_2 \circ dW_k(t)}, \, q_2+\beta y) &=(\nu\Delta^2\psi_2-\mu\Delta\psi_2)\,dt, \end{split} \end{align} in which viscosity and drag terms with constant parameters $\nu$ and $\mu$, respectively, have also been introduced. \section{The two-dimensional multilayer quasi-geostrophic model} \label{sec:2d_qg} \subsection{Deterministic case} To recap the previous section, the two-layer deterministic QG equations for the potential vorticity (PV) $q$ in a domain $\Omega$ are given by the PV material conservation law augmented with forcing and dissipation~\citep{Pedlosky1987,Vallis2006}: \begin{subequations} \begin{align} \partial_t q_1+J(\psi_1,q_1+\beta y)&=\nu\Delta^2\psi_1,\\ \partial_t q_2+J(\psi_2,q_2+\beta y)&=\nu\Delta^2\psi_2-\mu\Delta\psi_2, \end{align} \label{eq:pv} \end{subequations} where $\psi$ is the stream function, $J(f,g)=f_xg_y-f_yg_x$ is the Jacobian, the planetary vorticity gradient is given by parameter $\beta$, $\mu$ is the bottom friction parameter, and $\nu$ is the lateral eddy viscosity. The computational domain $\Omega=[0,L_x]\times[0,L_y]\times[0,H]$ is a horizontally periodic flat-bottom channel of depth $H=H_1+H_2$ given by two stacked isopycnal fluid layers of depth $H_1$ and $H_2$, respectively (Figure~\ref{fig:gem_setup}). A mollified version of the existence and uniqueness theorem for the QG model can be found in~\citep{Farhat_et_al2012}.
\begin{figure}[H] \centering \begin{tikzpicture}[>=stealth] \draw[color=black,thick] (0,0) -- (7,0); \draw[color=black] (3.5,-0.25) node {$\Gamma_1$}; \draw[color=black] (7,-0.25) node {$L_x$}; \draw[color=black] (0,-0.25) node {$0$}; \draw[color=black] (-0.25,2) node {$L_y$}; \draw[color=black,dashed,thick] (7,0) -- (7,2); \draw[color=black] (7.25,1) node {$\Gamma_2$}; \draw[color=black,thick] (0,2) -- (7,2); \draw[color=black] (3.5,2.25) node {$\Gamma_3$}; \draw[color=black,dashed,thick] (0,0) -- (0,2); \draw[color=black] (-0.25,1) node {$\Gamma_4$}; \draw[color=black,thin] (0,1.5) -- (7,1.5); \draw[color=black,thin,<->] (5.5,1.5) -- (5.5,2); \draw[color=black] (5.75,1.75) node {$H_1$}; \draw[color=black,thin,<->] (5,0) -- (5,1.5); \draw[color=black] (5.25,0.75) node {$H_2$}; \end{tikzpicture} \caption{The present investigation involves a two-layer horizontally periodic channel $\Omega$ of zonal length $L_x$, meridional width $L_y$ and depth $H=H_1+H_2$, given by two stacked isopycnal fluid layers of depth $H_1$ and $H_2$, respectively. We set periodic boundary conditions for the stream function $\psi$ on the lateral boundaries $\Gamma_2$ and $\Gamma_4$, namely $\psi_i|_{\Gamma_2}=\psi_i|_{\Gamma_4}=0\,, i=1,2$; and no-slip boundary conditions on the top ($\Gamma_3$) and bottom ($\Gamma_1$) boundaries: $\partial_{\bf n}\psi_i|_{\Gamma_1}=\partial_{\bf n}\psi_i|_{\Gamma_3}=0\,, i=1,2$. For all numerical simulations presented in this paper we take $L_x=3840\, \rm km$, $L_y=1920\, \rm km$, and total depth $H=H_1+H_2$, with $H_1=1.0\, \rm km$, $H_2=3.0\, \rm km$. } \label{fig:gem_setup} \end{figure} Forcing in system~\eqref{eq:pv} is introduced through a vertically sheared, baroclinically unstable background flow (e.g.,~\citep{BerloffKamenkovich2013}) \begin{equation} \psi_i\rightarrow-U_i\,y+\psi_i,\quad i=1,2, \label{eq:forcing} \end{equation} where the parameters $U_i$ are background-flow zonal velocities.
The PV and stream function are related through two elliptic equations: \begin{subequations} \begin{align} q_1=\Delta\psi_1+s_1\psi_{[21]},\\ q_2=\Delta\psi_2+s_2\psi_{[12]}, \end{align} \label{eq:q_psi} \end{subequations} with stratification parameters $s_1$, $s_2$, and the notation $\psi_{[ij]}:=\psi_i-\psi_j$. System~(\ref{eq:pv})-(\ref{eq:q_psi}) is augmented by the integral mass conservation constraint~\citep{McWilliams1977} \begin{equation} \partial_t\iint\limits_{\Omega}\psi_{[12]}\ dydx=0, \label{eq:masscon} \end{equation} as well as by the periodic horizontal boundary conditions, \begin{equation} \boldsymbol{\psi}\Big|_{\Gamma_2}=\boldsymbol{\psi}\Big|_{\Gamma_4}=0\,,\quad \boldsymbol{\psi}=(\psi_1,\psi_2)\,, \label{eq:bc24} \end{equation} and no-slip boundary conditions at the top and bottom of the channel, \begin{equation} \partial_{\bf n}\boldsymbol{\psi}\Big|_{\Gamma_1}=\partial_{\bf n}\boldsymbol{\psi}\Big|_{\Gamma_3}=0\,, \label{eq:bc13} \end{equation} where $\bf n$ is the outward unit normal vector. \subsubsection{Numerical method\label{sec:2d_qg_method_determ}} The QG model~\eqref{eq:pv}-\eqref{eq:bc13} is solved using the high-resolution CABARET method, which is based on a second-order, non-dissipative and low-dispersive, conservative advection scheme~\citep{Karabasov_et_al2009}. The distinctive feature of this scheme is its ability to simulate large-Reynolds-number flow regimes at a much lower computational cost than conventional methods (see, e.g.,~\citep{Arakawa1966,WoodwardColella1984,ShuOsher1988,Hundsdorfer_et_al1995}). The CABARET method is a predictor-corrector scheme in which the components of the conservative variables are updated at half time steps. Algorithm~\ref{alg:DCABARET} illustrates the principal steps of the CABARET method adopted from~\citep{Karabasov_et_al2009}.
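The elliptic relations \eqref{eq:q_psi} above can be sanity-checked on a single Fourier mode, for which $\Delta$ acts as multiplication by $-(k^2+l^2)$. A sketch on a doubly periodic grid (purely illustrative: the paper's channel is periodic only in $x$; the $s_1$, $s_2$ values are those quoted later in the text):

```python
import numpy as np

def pv_from_psi(psi1, psi2, s1, s2, Lx, Ly):
    """Evaluate q_i = Laplacian(psi_i) + s_i (psi_j - psi_i) (eq. q_psi)
    spectrally on a doubly periodic grid."""
    ny, nx = psi1.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    K2 = kx[None, :]**2 + ky[:, None]**2
    lap = lambda f: np.real(np.fft.ifft2(-K2 * np.fft.fft2(f)))
    q1 = lap(psi1) + s1 * (psi2 - psi1)
    q2 = lap(psi2) + s2 * (psi1 - psi2)
    return q1, q2

# Single-mode check: psi1 = sin(kx) sin(ly), psi2 = 0
Lx, Ly, nx, ny = 2 * np.pi, 2 * np.pi, 64, 64
x = np.linspace(0, Lx, nx, endpoint=False)
y = np.linspace(0, Ly, ny, endpoint=False)
X, Y = np.meshgrid(x, y)
k, l = 3.0, 2.0
s1, s2 = 4.22e-3, 1.41e-3   # stratification parameters (km^-2 in the paper)
psi1, psi2 = np.sin(k * X) * np.sin(l * Y), np.zeros_like(X)
q1, q2 = pv_from_psi(psi1, psi2, s1, s2, Lx, Ly)
assert np.allclose(q1, -(k**2 + l**2 + s1) * psi1, atol=1e-8)
assert np.allclose(q2, s2 * psi1, atol=1e-8)
```

For $\psi_2=0$ the second layer still acquires PV through the coupling term $s_2\psi_{[12]}$, which is the interfacial-stretching mechanism behind baroclinic instability in this model.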
To make the notation more concise, we introduce the forward difference operators in space $$\Delta_x[f]=\frac{f_{i+1,j}-f_{ij}}{\Delta x},\quad \Delta_y[f]=\frac{f_{i,j+1}-f_{ij}}{\Delta y},$$ and omit spatial and layer indices wherever possible, unless stated otherwise. \begin{algorithm} \caption{CABARET scheme for the deterministic QG system~\eqref{eq:pv}-\eqref{eq:bc13}} \label{alg:DCABARET} \begin{algorithmic} \STATE{\underline{\bf Predictor}} \STATE{$\displaystyle q^{n+\frac12}_{i+\frac12,j+\frac12}=q^n_{i+\frac12,j+\frac12}+ \frac{\Delta t}{2}\,F\left(q^n,u(q^n),v(q^n)\right)+\Delta t\, F_{\beta}\left(v^n,v^{n-1}\right) +\Delta t\, F_{\rm visc}\left(\psi\left(q^{n+\frac12}\right)\right)$,}\\ \STATE{\quad $F\left(q^n_{ij},u(q^n),v(q^n)\right)=-\left(\Delta_x\left[(uq)^n_{i,j+\frac12}\right]+ \Delta_y\left[(vq)^n_{i+\frac12,j}\right]\right)$,}\\ \STATE{\quad$\displaystyle F_{\beta}\left(v^n,v^{n-1}\right)=\frac32R^n-\frac12R^{n-1},\quad R^n=-\frac{\beta}{2}\left(v^n_{i+\frac12,j+1}+v^n_{i+\frac12,j}\right)$.}\\ \STATE{\quad The forcing term \vspace*{-0.125cm} $$F_{\rm visc}\left(\psi\left(q^{n+\frac12}\right)\right)=\nu\left(\Delta^2\psi_l\right)^{n+\frac12}_{i+\frac12,j+\frac12} -\delta_{2l}\,\mu\left(\Delta\psi_l\right)^{n+\frac12}_{i+\frac12,j+\frac12},\, l=1,2$$ \quad is added in the prediction step after the elliptic problem is solved.}\\ \STATE{\quad\bf Solve the elliptic system of equations with respect to $(\psi_1)^{n+\frac12}_{i+\frac12,j+\frac12}$ and $(\psi_2)^{n+\frac12}_{i+\frac12,j+\frac12}$}\\ \STATE{\qquad$\displaystyle (q_1)^{n+\frac12}_{i+\frac12,j+\frac12}=\left(\Delta\psi_1\right)^{n+\frac12}_{i+\frac12,j+\frac12}+s_1\left(\psi_{[21]}\right)^{n+\frac12}_{i+\frac12,j+\frac12},\quad (q_2)^{n+\frac12}_{i+\frac12,j+\frac12}=\left(\Delta\psi_2\right)^{n+\frac12}_{i+\frac12,j+\frac12}+s_2\left(\psi_{[12]}\right)^{n+\frac12}_{i+\frac12,j+\frac12}\,.$}\\ \STATE{\quad\bf Calculate} \STATE{\qquad$\displaystyle 
\psi^{n+\frac12}_{ij}=\frac14\left(\psi^{n+\frac12}_{i+\frac12,j+\frac12}+ \psi^{n+\frac12}_{i+\frac12,j-\frac12}+\psi^{n+\frac12}_{i-\frac12,j+\frac12}+\psi^{n+\frac12}_{i-\frac12,j-\frac12}\right).$}\\ \STATE{\quad\bf Update velocity components at the cell faces} \STATE{\qquad $\displaystyle u^{n+\frac12}_{i,j+\frac12}=\Delta_y\left[\psi^{n+\frac12}_{ij}\right],\quad v^{n+\frac12}_{i+\frac12,j}=-\Delta_x\left[\psi^{n+\frac12}_{ij}\right].$}\\ \STATE{\bf\underline{Extrapolator}} \STATE{\quad$\displaystyle u^{n+1}_{i,j+\frac12}=\frac32u^{n+\frac12}_{i,j+\frac12}-\frac12 u^{n-\frac12}_{i,j+\frac12}\,,\quad v^{n+1}_{i+\frac12,j}=\frac32v^{n+\frac12}_{i+\frac12,j}-\frac12 v^{n-\frac12}_{i+\frac12,j}$.}\\ \STATE{\quad$\displaystyle q^{n+1}_{i+1,j+\frac12}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i,j+\frac12}$\quad if $\displaystyle u^{n+1}_{i+1,j+\frac12}\ge0$; \quad$\displaystyle q^{n+1}_{i,j+\frac12}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i+1,j+\frac12}$\quad if $\displaystyle u^{n+1}_{i,j+\frac12}<0$.}\\ \STATE{\quad$\displaystyle q^{n+1}_{i+\frac12,j+1}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i+\frac12,j}$\quad if $\displaystyle v^{n+1}_{i+\frac12,j+1}\ge0$; \quad$\displaystyle q^{n+1}_{i+\frac12,j}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i+\frac12,j+1}$\quad if $\displaystyle v^{n+1}_{i+\frac12,j}<0$.}\\ \STATE{\quad\bf Correction of the computed cell-face PV values}\\ \STATE{\qquad If $q^{n+1}_{i,j+\frac12}>M^{n+1}_{i,j+\frac12}\Rightarrow q^{n+1}_{i,j+\frac12}=M^{n+1}_{i,j+\frac12}$;\quad If $q^{n+1}_{i,j+\frac12}<m^{n+1}_{i,j+\frac12}\Rightarrow q^{n+1}_{i,j+\frac12}=m^{n+1}_{i,j+\frac12}$.}\\ \STATE{\qquad If $q^{n+1}_{i+\frac12,j}>M^{n+1}_{i+\frac12,j}\Rightarrow q^{n+1}_{i+\frac12,j}=M^{n+1}_{i+\frac12,j}$;\quad If $q^{n+1}_{i+\frac12,j}<m^{n+1}_{i+\frac12,j}\Rightarrow q^{n+1}_{i+\frac12,j}=m^{n+1}_{i+\frac12,j}$.}\\ \STATE{\qquad $\text{If}\,\, u^{n+1}_{i+1,j+\frac12}\ge0\quad \left\{ \begin{array}{ll}
M^{n+1}_{i+1,j+\frac12}=\max\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^{n+\frac12}_{i+\frac12,j+\frac12},\\ m^{n+1}_{i+1,j+\frac12}=\min\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^{n+\frac12}_{i+\frac12,j+\frac12}.\\ \end{array} \right. $}\\ \STATE{} \STATE{\qquad $\text{If}\,\, u^{n+1}_{i,j+\frac12}<0\quad \left\{ \begin{array}{ll} M^{n+1}_{i,j+\frac12}=\max\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^{n+\frac12}_{i+\frac12,j+\frac12},\\ m^{n+1}_{i,j+\frac12}=\min\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^{n+\frac12}_{i+\frac12,j+\frac12}.\\ \end{array} \right. $} \STATE{\qquad $\displaystyle Q^{n+\frac12}_{i+\frac12,j+\frac12}=\frac{q^{n+\frac12}_{i+\frac12,j+\frac12}- q^n_{i+\frac12,j+\frac12}}{\Delta t/2}+\frac12\left(u^{n+1}_{i+1,j+\frac12}+u^{n+1}_{i,j+\frac12}\right) \Delta_x\left[q^n_{i,j+\frac12}\right]$.}\\ \STATE{\bf\underline{Corrector}} \STATE{$\displaystyle q^{n+1}_{i+\frac12,j+\frac12}=q^{n+\frac12}_{i+\frac12,j+\frac12}+ \frac{\Delta t}{2}\,F\left(q^{n+1},u(q^{n+1}),v(q^{n+1})\right)$, where $q^{n+1}$, $u(q^{n+1})$, $v(q^{n+1})$ are computed in the extrapolation step.} \end{algorithmic} \end{algorithm} An efficient parallelization of the QG model has allowed us to carry out high-performance computations in eddy-resolving regimes.
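The predictor / extrapolator (with min-max correction) / corrector structure of Algorithm~\ref{alg:DCABARET} is easiest to see in one dimension. Below is a sketch of the CABARET scheme for linear advection $\partial_t u + c\,\partial_x u = 0$ with $c>0$ and periodic boundaries, keeping the conservative cell values and the flux face values as separate variables. This is illustrative only: the limiter here uses simplified min-max bounds without the source-term correction $\tau Q$, and the 2D QG implementation follows \citep{Karabasov_et_al2009}.

```python
import numpy as np

def cabaret_step(phi, u, cfl):
    """One CABARET step for u_t + c u_x = 0, c > 0, periodic boundaries.
    phi[i]: conservative value in cell i (between faces i and i+1),
    u[i]:   flux value at face i; cfl = c*dt/dx."""
    # Predictor: half-step update of cell values from face fluxes
    phi_half = phi - 0.5 * cfl * (np.roll(u, -1) - u)
    # Extrapolator: downstream face value, u^{n+1}_{i+1} = 2 phi^{n+1/2}_i - u^n_i
    u_new = np.roll(2.0 * phi_half - u, 1)
    # Min-max correction: clip u^{n+1}_{i+1} into the local range at time n
    lo = np.minimum(np.minimum(u, phi), np.roll(u, -1))
    hi = np.maximum(np.maximum(u, phi), np.roll(u, -1))
    u_new = np.clip(u_new, np.roll(lo, 1), np.roll(hi, 1))
    # Corrector: second half-step with the updated face values
    phi_new = phi_half - 0.5 * cfl * (np.roll(u_new, -1) - u_new)
    return phi_new, u_new

# Advect a sine over one full period on the unit interval
N, cfl = 128, 0.5
dx = 1.0 / N
xf = np.arange(N) * dx          # face positions
xc = xf + 0.5 * dx              # cell centres
u = np.sin(2 * np.pi * xf)
phi = np.sin(2 * np.pi * xc)
mass0 = phi.sum()
for _ in range(int(round(N / cfl))):
    phi, u = cabaret_step(phi, u, cfl)
assert abs(phi.sum() - mass0) < 1e-9                      # exact conservation
assert np.max(np.abs(phi - np.sin(2 * np.pi * xc))) < 0.05  # low dispersion
```

The two assertions mirror the properties claimed for the scheme in the text: the corrector/predictor updates are in flux form, so the cell values are conserved to round-off, and the phase error after a full revolution stays small.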
In particular, for the purpose of this paper we computed three solutions for the case of heterogeneous and homogeneous flow (Figures~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8} and \ref{fig:qf_qa_qc_qam_qcm_mu1D-7}): \begin{itemize} \setlength\itemsep{0.0cm} \item High-resolution deterministic solution $q^f$ computed on the fine grid $G^f=\{N_x\times N_y\}$, $N_x=2049$, $N_y=1025$ ($dx=dy=1.9\, {\rm km}$); \item Low-resolution deterministic solution $q^a$ computed on the coarse grid $G^c=129\times65$ ($dx=dy=30\, {\rm km}$) as the solution of the elliptic equation~\eqref{eq:q_psi} with the stream function $\psi^a$, where $\psi^a$ is computed by spatially averaging the high-resolution stream function $\psi^f$ over the coarse grid cell $G^c$. We refer to $q^a$ as \textit{the truth} or \textit{the true solution}, and use it for comparison with the parameterised solution; \item Low-resolution solution $q^m$ (also referred to as the coarse-grain modelled solution) computed on the coarse grid $G^c$ by simulating the QG model. This solution is used for parameterisation. \end{itemize} \subsubsection{Numerical results\label{sec:2d_qg_num_determ}} We define the computational domain $\Omega=[0,L_x]\times[0,L_y]\times[0,H]$ as a horizontally periodic flat-bottom channel with $L_x=3840\, \rm km$, $L_y=L_x/2=1920\, \rm km$, and total depth $H=H_1+H_2$, with $H_1=1.0\, \rm km$, $H_2=3.0\, \rm km$ (Figure~\ref{fig:gem_setup}). We choose governing parameters of the QG model that are typical of a mid-latitude setting. These comprise the planetary vorticity gradient $\beta=2\times10^{-11}\, {\rm m^{-1}\, s^{-1}}$, lateral eddy viscosity $\nu=3.125\,\rm m^2 s^{-1}$, and the bottom friction parameters $\mu=\{4\times10^{-8},4\times10^{-7}\}\, {\rm s^{-1}}$. We will explain this choice below, as well as the reason for studying two different flow regimes.
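The true stream function $\psi^a$ above is obtained by spatially averaging the fine-grid stream function over each coarse grid cell. A minimal cell-based sketch of this coarse-graining for grids whose sizes differ by an integer factor (the paper's grids are node-based, $2049\times1025 \to 129\times65$, i.e.\ a coarsening factor of 16; the helper below is illustrative only):

```python
import numpy as np

def coarse_grain(field, factor):
    """Average a fine-grid field over non-overlapping factor x factor blocks."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)
coarse = coarse_grain(fine, 2)
# each coarse cell is the mean of a 2x2 block, e.g. mean(0, 1, 4, 5) = 2.5
assert np.allclose(coarse, [[2.5, 4.5], [10.5, 12.5]])

# Grid spacings quoted in the text (node-based grids over the 3840 km channel):
assert np.isclose(3840 / (2049 - 1), 1.875)  # fine grid, ~1.9 km
assert np.isclose(3840 / (129 - 1), 30.0)    # coarse grid, 30 km
```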
The background-flow zonal velocities in~\eqref{eq:forcing} are given by $U=[6.0,0.0]\,\rm m\, s^{-1}$, while the stratification parameters in system~\eqref{eq:q_psi} are $s_1=4.22\cdot10^{-3}\,\rm km^{-2}$, $s_2=1.41\cdot10^{-3}\,\rm km^{-2}$, chosen so that the first Rossby deformation radius is $Rd_1=25\, {\rm km}$. In order to ensure that the numerical solutions are statistically equilibrated, the model is initially spun up from the state of rest to $t=0$ over the time interval $T_{spin}=[-100,0]\, {\rm years}$. For smaller bottom friction, we find that jet-like structures (also referred to as striations) emerge in the simulations, resulting from the interplay of forcing, damping and baroclinic instability. In contrast, for larger bottom friction, the flow pattern is essentially homogeneous and no coherent structures are seen in the simulations. This nonlinear emergent asymptotic behaviour runs counter to what one might have expected from linear analysis, in which baroclinic instability in a two-layer channel flow sets in when the background PV gradient changes sign between the two layers, and its onset occurs at lower drag for a given forcing. In this paper we consider both the heterogeneous (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8}) and homogeneous (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-7}) flow regimes (which correspond to flows with low ($\mu=4\times10^{-8}$) and high ($\mu=4\times10^{-7}$) drag, respectively) and study how the parameterisation performs in each case.
\begin{figure}[H] \centering \includegraphics[scale=0.70]{fig2.png} \caption{The series of snapshots shows the high-resolution solution $q^f$ computed on the fine grid $G^f=2049\times1025$ ($dx=dy=1.9\, {\rm km}$), the true solution $q^a$ computed on the coarse grid $G^c=129\times65$ ($dx=dy=30\, {\rm km}$), and the low-resolution solution $q^m$ (also referred to as the coarse-grain modelled solution) computed on the coarse grid $G^c$ by simulating the QG model for the \textit{\textbf{low drag}} $\boldsymbol{\mu=4\times10^{-8}\, {\rm s^{-1}}}$. All the fields are given in units of $[s^{-1}f^{-1}_0]$, where $f_0=0.83\times10^{-4}\, {\rm s^{-1}}$ is the Coriolis parameter. As seen in the figure, the flow is more energetic and small-scale features are prevalent in the first layer. The true solution $q^a$ captures the small-scale features of the high-resolution solution $q^f$ in the first and second layer, and also has the same energy. On the contrary, the coarse-grain modelled solution $q^m$ (which must be parameterised and will be used in uncertainty quantification tests presented in Section~\ref{sec:xi_SQG}) is much less energetic than the true solution, and does not capture the correct structure (the number of striations and their positions) of the true flow $q^a$. The lower resolution of the modelled solution $q^m$ arrests the small-scale eddies which take part in the jet-maintaining mechanism~\citep{Kamenkovich_et_al2009}. Note that in order to visualize all the solutions on the same color scale we have multiplied the modelled solution by a factor of 5.
} \label{fig:qf_qa_qc_qam_qcm_mu1D-8} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.675]{fig3.png} \caption{The series of snapshots in the figure shows the high-resolution solution $q^f$ computed on the fine grid $G^f=2049\times1025$ ($dx=dy=1.9\, {\rm km}$), the true solution $q^a$ computed on the coarse grid $G^c=129\times65$ ($dx=dy=30\, {\rm km}$), and the low-resolution solution $q^m$ (also referred to as the coarse-grain modelled solution) computed on the coarse grid $G^c$ by simulating the QG model for the \textit{\textbf{high drag}} $\boldsymbol{\mu=4\times10^{-7}\, {\rm s^{-1}}}$. All the fields are given in units of $[s^{-1}f^{-1}_0]$, where $f_0=0.83\times10^{-4}\, {\rm s^{-1}}$ is the Coriolis parameter. As in the case of the heterogeneous flow (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8}), the homogeneous flow is more energetic in the first layer. However, this difference is less pronounced than that in the heterogeneous case. Moreover, the homogeneous flow teems with small-scale eddies not only in the first but also in the second layer, while the flow dynamics as a whole is more damped by the higher drag and therefore less energetic than the heterogeneous flow. This, in turn, suppresses the zonally uniform eigenmodes which are responsible for maintaining the jet-like structure of the flow~\citep{Berloff_et_al2011}. The true solution $q^a$ captures the small-scale features of the high-resolution solution $q^f$ in both layers and has the same energy. As opposed to the heterogeneous flow, the coarse-grain modelled solution $q^m$ (which must be parameterised and then used in uncertainty quantification tests presented in Section~\ref{sec:xi_SQG}) is also homogeneous and has the same energy as the true solution $q^a$. The figure shows that the coarse-grain model can adequately represent the large-scale flow dynamics.
} \label{fig:qf_qa_qc_qam_qcm_mu1D-7} \end{figure} For the low drag, which corresponds to the bottom friction coefficient $\mu=4\times10^{-8}$, the flow dynamics is highly heterogeneous (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8}). The high-resolution flow $q^f$ (computed on the fine grid $G^f=2049\times1025$ with the resolution $dx=dy=1.9\, {\rm km}$) in the first layer consists of two flow regions: the fast flow within the striations and the slow flow between striations. The flow dynamics in the first layer teems with small-scale eddies which, in turn, maintain the striated structure of the flow (see, e.g.~\citep{Kamenkovich_et_al2009}). On the contrary, the dynamics of the second layer is much less energetic than that of the first one, and exhibits neither small-scale features nor striations. The striated flow structure as well as flow energetics are captured by the true solution $q^a$ computed on the coarse grid $G^c=129\times65$ ($dx=dy=30\, {\rm km}$). However, the low-resolution solution $q^m$ (the solution which has to be parameterised and then used in uncertainty quantification tests presented in Section~\ref{sec:xi_SQG}) computed on the coarse grid $G^c$ by simulating the QG model is much less energetic in both the first and second layer than the true solution $q^a$, and cannot capture the correct structure (the number of striations and their positions) of the true flow dynamics. Thus, the coarse-grain QG equations fail to model the proper jet-like structure of the flow. Apparently, the coarse resolution suppresses the small-scale eddies, which are thought to be one of the mechanisms responsible for maintaining these structures (see, e.g.~\citep{Kamenkovich_et_al2009}). For the high-drag flows, with bottom friction coefficient $\mu=4\times10^{-7}$, the flow dynamics becomes more homogeneous (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-7}).
As in the heterogeneous case, the high-resolution flow $q^f$ is more energetic in the first layer than in the second one, although this difference is less pronounced. In the homogeneous flow, small-scale eddies are ubiquitous in both the first and second layer. Comparing the high-resolution solution $q^f$ with its coarse-grained analogue $q^a$ we conclude that the latter captures the small-scale features as well as the energetics of $q^f$ in both layers. Unlike the heterogeneous flow, the coarse-grain modelled solution $q^m$ (the solution we parameterise and use in uncertainty quantification tests given in Section~\ref{sec:xi_SQG}) is also homogeneous in structure and adequately restores the energetics of the true solution $q^a$. In other words, the coarse-grain QG model properly represents the large-scale flow dynamics for flows with higher drag, which are more damped and therefore less energetic. In the case of high-drag flows, the zonally uniform eigenmodes responsible for maintaining the jet-like structure of the flow (see, e.g.~\citep{Berloff_et_al2011}) become more damped, thus making the jets much weaker than in the highly energetic low-drag flows. Another important characteristic of the flow which can influence the accuracy of the parameterisation is the decorrelation time (Figure~\ref{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7}). Only on time scales longer than the decorrelation time can we assume that the noise fields $\boldsymbol{\xi}(x)$ do not depend on time.
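The correlation coefficient $r_{q(0),q(t)}$ discussed here can be estimated from a sequence of PV snapshots by correlating each snapshot with the initial one over the grid points, with the decorrelation time read off as the first crossing of a threshold (here $1/e$). A sketch with a synthetic AR(1) field standing in for the PV time series (illustrative choices throughout):

```python
import numpy as np

def correlation_with_initial(snapshots):
    """r(t) = cov(q(0), q(t)) / (sigma_q(0) sigma_q(t)), over grid points."""
    q0 = snapshots[0].ravel()
    return np.array([np.corrcoef(q0, q.ravel())[0, 1] for q in snapshots])

def decorrelation_time(r, dt, threshold=1.0 / np.e):
    """First time at which r(t) drops below the threshold."""
    below = np.where(r < threshold)[0]
    return below[0] * dt if below.size else np.inf

# Synthetic stand-in: AR(1) field q_{t+1} = a q_t + sqrt(1 - a^2) * noise
rng = np.random.default_rng(0)
a, nt = 0.9, 200
q = [rng.standard_normal((32, 64))]
for _ in range(nt - 1):
    q.append(a * q[-1] + np.sqrt(1 - a**2) * rng.standard_normal((32, 64)))
r = correlation_with_initial(q)
assert np.isclose(r[0], 1.0)
assert 0 < decorrelation_time(r, dt=1.0) < 50   # r ~ a^t decays below 1/e
```

For the AR(1) stand-in the theoretical decay is $r(t)\approx a^t$, so a larger damping (smaller $a$) shortens the decorrelation time, analogous to the effect of the bottom friction coefficient on the QG flow.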
\begin{figure}[H] \centering \begin{tabular}{ccc} & \hspace*{0.45cm} {\bf (a):}\quad Heterogeneous flow ($\mu=4\times10^{-8}\, {\rm s^{-1}}$) & \hspace*{-0.45cm} {\bf (b):}\quad Homogeneous flow ($\mu=4\times10^{-7}\, {\rm s^{-1}}$) \\ $r$ & \begin{minipage}{0.45\textwidth}\includegraphics[width=8cm]{corr_time_mu4D-8_1.png}\end{minipage} & \begin{minipage}{0.45\textwidth}\includegraphics[height=4.65cm,width=7.5cm]{corr_time_mu4D-7_1.png}\end{minipage}\\ \\[-0.25cm] & \hspace*{0.45cm} $t\, {\rm[days]}$ & \hspace*{-0.45cm} $t\, {\rm[days]}$ \\ \end{tabular} \caption{Evolution of the correlation coefficient $r_{q(0),q(t)}=\frac{cov(q(0),q(t))}{\sigma_{q(0)}\sigma_{q(t)}}$ is shown for the heterogeneous (left) and homogeneous (right) flow. As seen in the figure, the larger the bottom friction coefficient is, the more homogeneous the flow becomes and the faster the correlation coefficient decays. The decorrelation time for the heterogeneous flow is much longer than that of the homogeneous one, as expected. For both homogeneous and heterogeneous flow, the decorrelation time in the first layer (solid black line) is shorter, compared with the second layer (dashed black line). This difference in decorrelation time is also expected, since the flow in the first layer is more energetic. The correlation coefficient of the true solution $r_{q^a(0),q^a(t)}$ (black line) for the heterogeneous flow significantly differs from the correlation coefficient of the modelled solution $r_{q^m(0),q^m(t)}$ (red line), while in the case of the homogeneous flow these coefficients have a similar behavior. 
Thus, we can conclude that in order for the parameterisation to restore the structure of the flow, it should take into account both spatial and temporal correlations.} \label{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7} \end{figure} As seen in Figure~\ref{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7}, the larger the bottom friction coefficient is, the more homogeneous the flow becomes and the faster the correlation coefficient decays, as expected. The decorrelation time for the heterogeneous flow (Figure~\ref{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7}(a)) is much longer than that of the homogeneous one (Figure~\ref{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7}(b)). For both types of flow, the decorrelation time in the first layer is much shorter than in the second layer, because the flow in the first layer is faster and more energetic, and therefore it ``forgets'' its initial state faster. Note that for the heterogeneous flow the decorrelation time of the true solution $q^a$ differs considerably from that of the modelled solution $q^m$. In the case of the homogeneous flow, the decorrelation times of $q^a$ and $q^m$ are similar. Thus, we can conclude from Figure~\ref{fig:corr_time_qa_qc_qm_mu1D-8_mu1D-7} that in order for the parameterisation to restore the structure of the flow it should take into account both spatial and temporal correlations. The simulation results presented in this section show that the more interesting and energetic flow dynamics is confined to the first layer. Therefore, from now on we will focus our attention on the first layer unless stated otherwise. \subsection{Stochastic case\label{sec:SQG}} The stochastic version of the two-layer QG equations is given by system~\eqref{eq:SLTpv}. The terms $\zeta^k_1$ and $\zeta^k_2$ are the only differences from the deterministic QG model; all other equations remain the same as in the deterministic case.
However, the CABARET scheme in the stochastic case differs from the deterministic version and therefore its use can only be justified if it is consistent with the stochastic QG model. In other words, the CABARET scheme should be in Stratonovich form. \subsubsection{Numerical method\label{sec:2d_qg_method_stochastic}} The CABARET scheme for the stochastic QG system~\eqref{eq:SLTpv} is given by Algorithm~\ref{alg:SCABARET} (with the stochastic terms highlighted in red). \begin{algorithm} \caption{The CABARET scheme for the stochastic QG system} \label{alg:SCABARET} \begin{algorithmic} \STATE{\underline{\bf Predictor}} \vspace*{-0.75cm} \STATE{\begin{equation} \begin{split} q^{n+\frac12}_{i+\frac12,j+\frac12}=q^n_{i+\frac12,j+\frac12}&+ \frac{\Delta t}{2}\,F\left(q^n,u(q^n),v(q^n)\right)+\Delta t\, F_{\beta}\left(v^n,v^{n-1}\right) +\Delta t\, F_{\rm visc}\left(\psi\left(q^{n+\frac12}\right)\right)\\ &+{\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^n\right)+G_{k,\beta}\right)\frac{\Delta W_k}{2}}, \end{split}\label{eq:s_pred} \end{equation}} \STATE{\quad ${\color{red}G_k(q^n)=-\left(\Delta_x\left[(\xi^u_kq^n)_{i,j+\frac12}\right]+ \Delta_y\left[(\xi^v_kq^n)_{i+\frac12,j}\right]\right)}$,\qquad $\displaystyle {\color{red}G_{k,\beta}=3R^n-R^{n-1}},\, {\color{red}R^n=-\frac{\beta}{2}\left((\xi^u_k)_{i+\frac12,j+1}+(\xi^v_k)_{i+\frac12,j}\right)}$.}\\ \STATE{\quad The forcing term \vspace*{-0.125cm} $$F_{\rm visc}\left(\psi\left(q^{n+\frac12}\right)\right)=\nu\left(\Delta^2\psi_l\right)^{n+\frac12}_{i+\frac12,j+\frac12}- \delta_{2l}\,\mu\left(\Delta\psi_l\right)^{n+\frac12}_{i+\frac12,j+\frac12},\, l=1,2$$ \quad is added in the prediction step after the elliptic problem is solved.}\\ \STATE{\quad\bf Solve the elliptic system of equations with respect to $(\psi_1)^{n+\frac12}_{i+\frac12,j+\frac12}$ and $(\psi_2)^{n+\frac12}_{i+\frac12,j+\frac12}$}\\ \STATE{\qquad$\displaystyle 
(q_1)^{n+\frac12}_{i+\frac12,j+\frac12}=\left(\Delta\psi_1\right)^{n+\frac12}_{i+\frac12,j+\frac12}+s_1\left(\psi_{[21]}\right)^{n+\frac12}_{i+\frac12,j+\frac12},\quad (q_2)^{n+\frac12}_{i+\frac12,j+\frac12}=\left(\Delta\psi_2\right)^{n+\frac12}_{i+\frac12,j+\frac12}+s_2\left(\psi_{[12]}\right)^{n+\frac12}_{i+\frac12,j+\frac12}\,.$}\\ \STATE{\quad\bf Calculate} \STATE{\qquad$\displaystyle \psi^{n+\frac12}_{ij}=\frac14\left(\psi^{n+\frac12}_{i+\frac12,j+\frac12}+ \psi^{n+\frac12}_{i+\frac12,j-\frac12}+\psi^{n+\frac12}_{i-\frac12,j+\frac12}+\psi^{n+\frac12}_{i-\frac12,j-\frac12}\right).$}\\ \STATE{\quad\bf Update velocity components at the cell faces} \STATE{\qquad $\displaystyle u^{n+\frac12}_{i,j+\frac12}=\Delta_y\left[\psi^{n+\frac12}_{ij}\right],\quad \left(v_l\right)^{n+\frac12}_{i+\frac12,j}=-\Delta_x\left[\psi^{n+\frac12}_{ij}\right].$}\\ \STATE{\bf\underline{Extrapolator}} \STATE{\quad$\displaystyle u^{n+1}_{i,j+\frac12}=\frac32u^{n+\frac12}_{i,j+\frac12}-\frac12 u^{n-\frac12}_{i,j+\frac12}\,,\quad v^{n+1}_{i,j+\frac12}=\frac32v^{n+\frac12}_{i,j+\frac12}-\frac12 v^{n-\frac12}_{i,j+\frac12}$.}\\ \STATE{\quad$\displaystyle q^{n+1}_{i+1,j+\frac12}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i,j+\frac12}$\quad if $\displaystyle u^{n+1}_{i+1,j+\frac12}+{\color{red}\Xi^u_{i+1,j+\frac12}}\ge0$; \quad$\displaystyle q^{n+1}_{i,j+\frac12}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i+1,j+\frac12}$\quad if $\displaystyle u^{n+1}_{i,j+\frac12}+{\color{red}\Xi^u_{i,j+\frac12}}<0$.}\\ \STATE{\quad$\displaystyle q^{n+1}_{i+\frac12,j+1}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i+\frac12,j}$\quad if $\displaystyle v^{n+1}_{i+\frac12,j+1}+{\color{red}\Xi^v_{i+\frac12,j+1}}\ge0$; \quad$\displaystyle q^{n+1}_{i+\frac12,j}=2q^{n+\frac12}_{i+\frac12,j+\frac12}-q^n_{i+\frac12,j+1}$\quad if $\displaystyle v^{n+1}_{i+\frac12,j}+{\color{red}\Xi^v_{i+\frac12,j}}<0$.}\\ \STATE{\quad ${\color{red}\Xi^u_{ij}=\sum\limits^m_{k=1}\left(\xi^u_k\right)_{ij}\Delta W_k},\quad 
{\color{red}\Xi^v_{ij}=\sum\limits^m_{k=1}\left(\xi^v_k\right)_{ij}\Delta W_k}$.} \STATE{\quad\bf Correction of the computed cell-face PV values} \STATE{\qquad If $q^{n+1}_{i,j+\frac12}>M^{n+1}_{i,j+\frac12}\Rightarrow q^{n+1}_{i,j+\frac12}=M^{n+1}_{i,j+\frac12}$;\quad If $q^{n+1}_{i,j+\frac12}<m^{n+1}_{i,j+\frac12}\Rightarrow q^{n+1}_{i,j+\frac12}=m^{n+1}_{i,j+\frac12}$.}\\ \STATE{\qquad If $q^{n+1}_{i+\frac12,j}>M^{n+1}_{i+\frac12,j}\Rightarrow q^{n+1}_{i+\frac12,j}=M^{n+1}_{i+\frac12,j}$;\quad If $q^{n+1}_{i+\frac12,j}<m^{n+1}_{i+\frac12,j}\Rightarrow q^{n+1}_{i+\frac12,j}=m^{n+1}_{i+\frac12,j}$.}\\ \STATE{\qquad $\text{If}\,\, u^{n+1}_{i+1,j+\frac12}+{\color{red}\Xi^u_{i+1,j+\frac12}}\ge0\quad \left\{ \begin{array}{ll} M^{n+1}_{i+1,j+\frac12}=\max\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^n_{i+\frac12,j+\frac12},\\ m^{n+1}_{i+1,j+\frac12}=\min\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^n_{i+\frac12,j+\frac12}.\\ \end{array} \right. $}\\ \STATE{} \STATE{\qquad $\text{If}\,\, u^{n+1}_{i,j+\frac12}+{\color{red}\Xi^u_{i,j+\frac12}}<0\quad \left\{ \begin{array}{ll} M^{n+1}_{i,j+\frac12}=\max\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^n_{i+\frac12,j+\frac12},\\ m^{n+1}_{i,j+\frac12}=\min\left(q^n_{i,j+\frac12} ,q^n_{i+\frac12,j+\frac12},q^n_{i+1,j+\frac12}\right)+\tau Q^n_{i+\frac12,j+\frac12}.\\ \end{array} \right. 
$} \STATE{\qquad $\displaystyle Q^{n+\frac12}_{i+\frac12,j+\frac12}=\frac{q^{n+\frac12}_{i+\frac12,j+\frac12}- q^n_{i+\frac12,j+\frac12}}{\Delta t/2}+\frac12\left(\left(u^{n+1}_{i+1,j+\frac12}+{\color{red}\Xi^u_{i+1,j+\frac12}}\right)+\left(u^{n+1}_{i,j+\frac12}+{\color{red}\Xi^u_{i,j+\frac12}}\right)\right) \Delta_x\left[q^n_{i,j+\frac12}\right]$.}\\ \STATE{\bf\underline{Corrector}} \vspace*{-0.5cm} \STATE{\begin{equation} q^{n+1}_{i+\frac12,j+\frac12}=q^{n+\frac12}_{i+\frac12,j+\frac12}+ \frac{\Delta t}{2}\,F\left(q^{n+1},u(q^{n+1}),v(q^{n+1})\right)+ {\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^{n+1}\right)+G_{k,\beta}\right)\frac{\Delta W_k}{2}}\,,\label{eq:s_corr}\end{equation} where $q^{n+1}$, $u(q^{n+1})$, $v(q^{n+1})$ are computed in the extrapolation step.} \end{algorithmic} \end{algorithm} In order to show that the CABARET scheme is consistent with the stochastic QG model, we rewrite the scheme as the improved Euler method (also known as Heun's method)~\citep{KloedenPlaten1999} \begin{equation*} \begin{split} x^*= & x^n+\Delta t f(x^n) + {\color{red}\Delta W g(x^n)},\\ x^{n+1}= & x^n+\frac{\Delta t}{2} (f(x^n)+f(x^*)) + {\color{red}\frac{\Delta W}{2} (g(x^n)+g(x^*))}, \end{split} \label{eq:Heun} \end{equation*} which solves stochastic differential equations (SDEs) in Stratonovich form.
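This Heun predictor--corrector structure can be checked on a scalar Stratonovich SDE with a known pathwise solution. The sketch below is not part of the paper; the test problem $\mathrm{d}X=aX\,\mathrm{d}t+bX\circ\mathrm{d}W$, its coefficients, and the function names are illustrative assumptions.

```python
import numpy as np

def heun_stratonovich(x0, f, g, dt, dW):
    """One path of the improved Euler (Heun) scheme for the Stratonovich SDE
    dx = f(x) dt + g(x) o dW, mirroring the predictor--corrector structure:
    x* = x + dt f(x) + dW g(x),
    x_{n+1} = x + dt/2 (f(x) + f(x*)) + dW/2 (g(x) + g(x*))."""
    x = x0
    for dw in dW:
        xs = x + dt * f(x) + dw * g(x)
        x = x + 0.5 * dt * (f(x) + f(xs)) + 0.5 * dw * (g(x) + g(xs))
    return x

# Assumed test problem with a known Stratonovich solution:
# dX = a X dt + b X o dW  has the pathwise solution  X(T) = X(0) exp(a T + b W_T).
a, b, T, n_steps = 0.5, 0.8, 1.0, 2000
dt = T / n_steps
rng = np.random.default_rng(1)
dW = np.sqrt(dt) * rng.standard_normal(n_steps)

x_num = heun_stratonovich(1.0, lambda x: a * x, lambda x: b * x, dt, dW)
x_exact = np.exp(a * T + b * dW.sum())  # exact solution along the same Brownian path
```

For a single driving Brownian motion the corrector expansion reproduces the Stratonovich--Milstein term $\frac12 g'g\,(\Delta W)^2$, so the numerical path converges to the exact Stratonovich solution rather than the It\^{o} one.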
In doing so, we omit the space indices for the potential vorticity $q$ to emphasize the functional dependence on $q$, and introduce an extra variable $$q^*=2q^{n+\frac12}-q^n,$$ which allows us to recast~\eqref{eq:s_pred} and~\eqref{eq:s_corr} in the form \begin{subequations} \begin{equation} \begin{split} q^*=q^n&+ \Delta t\,F\left(q^n,u(q^n),v(q^n)\right)+2\Delta t\, F_{\beta}\left(v^n,v^{n-1}\right) +2\Delta t\, F_{\rm visc}\left(\psi\left(q^{n+\frac12}\right)\right)\\ &+{\color{red}\sum\limits^m_{k=1}\left(G_k(q^n)+G_{k,\beta}\right){\Delta W_k}}, \end{split} \label{eq:s_pred2} \end{equation} \begin{equation} q^{n+1}=\frac{q^*+q^n}{2}+ \frac{\Delta t}{2}\,F\left(q^*,u(q^*),v(q^*)\right)+{\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^*\right)+G_{k,\beta}\right)\frac{\Delta W_k}{2}}. \label{eq:s_corr2} \end{equation} \end{subequations} Substitution of~\eqref{eq:s_pred2} into~\eqref{eq:s_corr2} and~\eqref{eq:s_pred} into the forcing term $F_{\rm visc}\left(\psi\left(q^{n+\frac12}\right)\right)$ leads to \begin{equation} \begin{split} q^{n+1}=q^n &+\frac{\Delta t}{2}\,\left[F(q^n,u(q^n),v(q^n))+F(q^n+{\color{red}O_1(\Delta W_k)},u(q^n+{\color{red}O_1(\Delta W_k)}),v(q^n+{\color{red}O_1(\Delta W_k)}))\right]\\ &+\Delta t\,\left[F_{\beta}\left(v^n,v^{n-1}\right)+F_{\rm visc}\left(\psi\left(q^n+{\color{red}O_2(\Delta W_k)}\right)\right)\right]\\ &+{\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^n\right)+G_{k,\beta}\right)\frac{\Delta W_k}{2}} +{\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^n+O_1(\Delta W_k)\right)+G_{k,\beta}\right)\frac{\Delta W_k}{2}}, \end{split} \label{eq:s_pred_into_corr} \end{equation} where \begin{equation} \begin{split} O_1(\Delta W_k):=&\Delta t\,F\left(q^n,u(q^n),v(q^n)\right)+2\Delta t\, F_{\beta}\left(v^n,v^{n-1}\right)\\ +&2\Delta t\,F_{\rm visc}\left(\psi\left(q^n+{\color{red}O_2(\Delta W_k)}\right)\right) {\color{red}+\sum\limits^m_{k=1}\left(G_k\left(q^n\right)+G_{k,\beta}\right)\Delta W_k}\,, \end{split} \nonumber \end{equation} and \begin{equation} O_2(\Delta
W_k):=\frac{\Delta t}{2}\,F\left(q^n,u(q^n),v(q^n)\right) +\Delta t\, F_{\beta}\left(v^n,v^{n-1}\right)+{\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^n\right)+G_{k,\beta}\right)\frac{\Delta W_k}{2}}\,. \nonumber \end{equation} Retaining the terms up to order $\Delta t$ in~\eqref{eq:s_pred_into_corr} we get \begin{equation} \begin{split} q^{n+1} =q^n &+\Delta t\,\left[F(q^n,u(q^n),v(q^n))+F_{\beta}\left(v^n,v^{n-1}\right)+F_{\rm visc}\left(\psi(q^n)\right)\right]\\ &+{\color{red}\sum\limits^m_{k=1}\left(G_k\left(q^n\right)+G_{k,\beta}\right)\Delta W_k+ {\color{red}\sum\limits^m_{k_1=1}\sum\limits^m_{k_2=1}G_{k_1}\left(G_{k_2}(q^n)+G_{k_2,\beta}\right)\frac{\Delta W_{k_1}\Delta W_{k_2}}{2}+H.O.T.}}\,,\\ \end{split} \label{eq:s_pred_into_corr2} \end{equation} where $G_{\beta}$ does not depend on $q^n$, and $H.O.T.$ denotes higher order terms. Thus we have shown that the CABARET scheme is in Stratonovich form up to order $(\Delta t)^{3/2}$. \subsubsection{Consistency\label{sec:2d_qg_consistency}} In this section we prove that the stochastic CABARET scheme~\eqref{eq:s_pred_into_corr2} is consistent with the stochastic QG equation~\eqref{eq:SLTpv} in the mean square sense in time, since its consistency in space is guaranteed by its second order approximation~\citep{Karabasov_et_al2009}. 
We consider a Stratonovich process $q=q(t,\mathbf{x})$, $\mathbf{x}=(x,y)$ satisfying the SPDE \begin{equation*} \diff q=a_t\diff t+{\color{red}\sum\limits^m_{i=1}b_{i,t}\circ\diff W_{i,t}},\quad a_t:=F(q^n,u(q^n),v(q^n))+F_{\beta}+F_{\rm visc}\left(\psi(q^n)\right),\quad b_{i,t}:=G_i(q^n)+G_{i,\beta}, \label{eq:Stratonovich1} \end{equation*} and rewrite it in the It\^{o} form \begin{equation*} \diff q=a_t\diff t+{\color{red}\sum\limits^m_{i=1}b_{i,t}\diff W_{i,t}}+\frac12\sum\limits^m_{i=1}b_{i,t}(b_{i,t})\diff t\,, \label{eq:Ito1_} \end{equation*} or alternatively \begin{equation} \diff q=q_d\diff t+{\color{red}\sum\limits^m_{i=1}q^i_{s,t}\diff W_{i,t}} \label{eq:Ito1} \end{equation} with the stochastic and deterministic parts defined as $\displaystyle q_d:=a_t+\frac12\sum\limits^m_{i=1}b_{i,t}(b_{i,t})$ and $\displaystyle q^i_{s,t}:=b_{i,t}$, respectively. We define consistency for SPDE~\eqref{eq:Ito1} as follows \begin{definition} We say that a discrete time-space approximation $q^n=q^n_d+q^n_s$ of $q=q_d+q_s$ with the time step $\Delta t$ and space steps $\Delta\mathbf{x}=(\Delta x_1,\Delta x_2,\ldots,\Delta x_d)$ is consistent in mean square of order $\alpha>1$ and $\beta>1$ in time and space with respect to~\eqref{eq:Ito1} if there exists a nonnegative function $c=c((\Delta t)^\alpha,(\Delta\mathbf{x})^\beta)$ with $\lim\limits_{\substack{\Delta t\rightarrow0\\ \Delta\mathbf{x}\rightarrow0}}c((\Delta t)^\alpha,(\Delta\mathbf{x})^\beta)=0$ such that \begin{equation*} \mathbb{E}\left[\left\|q_s- q^n_s \right\|^2_{L^2(\Omega)}\right]\le c((\Delta t)^\alpha,(\Delta\mathbf{x})^\beta)\,,\qquad \mathbb{E}\left[\left\|q_d- q^n_d \right\|^2_{L^2(\Omega)}\right]\le c((\Delta t)^\alpha,(\Delta\mathbf{x})^\beta) \label{eq:consitency_spde1} \end{equation*} for all fixed values $q^n$, time $n=0,1,2,\ldots$ and space indices. 
\end{definition} Since our focus in this section is on consistency in time, we have to prove that the following estimation holds: \begin{equation} \mathbb{E}\left[\left\|q_s- q^n_s \right\|^2_{L^2(\Omega)}\right]\le c((\Delta t)^\alpha)\,. \label{eq:consitency_spde2} \end{equation} \begin{theorem} Assuming that there exists a constant $\widetilde{C}>0$ such that the following assumptions hold \begin{enumerate}[label={\bf A\arabic*.}] \item $\mathbb{E}\left[\left\|a_r-a_s\right\|_{L^2(\Omega)}\right]\le \widetilde{C}\sqrt{r-s}$, \item $\mathbb{E}\left[\left\|\sum\limits^m_{i=1}(b_{i,r}-b_{i,s})\right\|_{L^2(\Omega)}\right]\le \widetilde{C}\sqrt{r-s}$, \item $\mathbb{E}\left[\left\|\sum\limits^m_{i=1}\sum\limits^m_{j=1}b_{i,s}(b_{j,s})\right\|_{L^2(\Omega)}\right]\le \widetilde{C}$, for $i,j=1,2,\ldots,m$, \item $\mathbb{E}\left[\left\|\sum\limits^m_{i=1}(b_{i,r}(b_{i,r})-b_{i,s}(b_{i,s}))\right\|_{L^2(\Omega)}\right]\le \widetilde{C}\sqrt{r-s}$, \item $\mathbb{E}\left[\left\|{\color{red}H.O.T.}\right\|\right]\le \widetilde{C}(r-s)^{3/2}$, \end{enumerate} with $\left|r-s\right|\le \Delta t$, the stochastic CABARET scheme~\eqref{eq:s_pred_into_corr2} is consistent in mean square with $c(\Delta t)=(\Delta t)^2$. \end{theorem} \begin{proof} Integration of~\eqref{eq:Ito1} with respect to time over the interval $[s,t]$ gives \begin{equation} q_t=q_s+\int\limits^t_s a_r\,dr+{\color{red}\int\limits^t_s \sum\limits^m_{i=1}b_{i,r}\, dW_{i,r}}+\frac12\int\limits^t_s \sum\limits^m_{i=1}b_{i,r}(b_{i,r}) dr\,. \label{eq:int_Ito1} \end{equation} \noindent Substitution of~\eqref{eq:s_pred_into_corr2} and~\eqref{eq:int_Ito1} into~\eqref{eq:consitency_spde2} leads to \begin{equation} \begin{split} &\mathbb{E}\left[\left\| \int\limits^t_s a_r\,dr+{\color{red}\int\limits^t_s \sum\limits^m_{i=1}b_{i,r}\, dW_{i,r}}+\frac12\int\limits^t_s \sum\limits^m_{i=1} b_{i,r}(b_{i,r}) dr\right.\right. \\ &-\left.\left. 
\left(a_s\Delta t +{\color{red}\sum\limits^m_{i=1} b_{i,s}\, \Delta W_{i,s}} +{\color{red}\frac12\sum\limits^m_{i,j=1} b_{i,s}(b_{j,s}) \Delta W_{i,s}\Delta W_{j,s}}\right)+{\color{red}H.O.T.}\right\|^2_{L^2(\Omega)} \right]\le c(\Delta t). \end{split} \label{eq:con_spde3} \end{equation} By combining the terms in~\eqref{eq:con_spde3}, we get \begin{equation} \mathbb{E}\left[\left\|A+B+C\right\|^2_{L^2(\Omega)}\right]\le c(\Delta t), \label{eq:con_spde4} \end{equation} where \begin{equation*} A:=\int\limits^t_s (a_r-a_s)\,dr,\quad B:={\color{red}\int\limits^t_s \sum\limits^m_{i=1}(b_{i,r}-b_{i,s})\, dW_{i,r}},\quad C:=C_1-C_2-C_3, \end{equation*} with \begin{equation*} C_1:=\frac12\int\limits^t_s \sum\limits^m_{i=1}(b_{i,r}(b_{i,r})-b_{i,s}(b_{i,s}))\, dr,\quad C_2:=\frac12\sum\limits^m_{i=1}b_{i,s}(b_{i,s})({\color{red}(\Delta W_i)^2}-\Delta t),\quad C_3:={\color{red}\frac12\sum\limits^m_{i\ne j}b_{i,s}(b_{j,s})\Delta W_i\Delta W_j}\,. \end{equation*} \noindent Applying the triangle and Young's inequalities to~\eqref{eq:con_spde4}, we arrive at \begin{equation*} \mathbb{E}\left[\left\|A+B+C\right\|^2_{L^2(\Omega)}\right]\le 3\mathbb{E}\left[\left\|A\right\|^2_{L^2(\Omega)}+ \left\|B\right\|^2_{L^2(\Omega)}+\left\|C\right\|^2_{L^2(\Omega)}+\widetilde{C}^2(\Delta t)^3\right]. \label{eq:con_spde5} \end{equation*} \noindent Using the Cauchy--Schwarz inequality and {\bf A1}, we estimate the first term as \begin{equation*} \mathbb{E}\left[\left\|A\right\|^2_{L^2(\Omega)}\right] \le\Delta t\, \mathbb{E}\left[ \int\limits^t_s \left\|a_r-a_s\right\|^2_{L^2(\Omega)}\, dr \right] \le \frac{\widetilde{C}^2}{2}(\Delta t)^3.
\label{eq:A_estimate1} \end{equation*} \noindent Estimation of the second term is given by \begin{equation*} \begin{aligned} \mathbb{E}\left[\left\|B\right\|^2_{L^2(\Omega)}\right]=& \int\limits_{\Omega}\mathbb{E}\left [{\color{red}\left(\int\limits^t_s\sum\limits^m_{i=1}(b_{i,r}-b_{i,s})\, dW_{i,r}\right)^2} \right]\, d\Omega &&\text{(using the It\^{o} isometry)}\\ =& \mathbb{E}\left [\int\limits_{\Omega}\int\limits^t_s\left(\sum\limits^m_{i=1}(b_{i,r}-b_{i,s})\right)^2\, dr\, d\Omega \right] && \text{(the Cauchy--Schwarz inequality leads to)}\\ \le& \Delta t\, \mathbb{E}\left [\int\limits^t_s\left\|\sum\limits^m_{i=1}(b_{i,r}-b_{i,s})\right\|^2_{L^2(\Omega)}\, dr \right] \le \frac{\widetilde{C}^2}{2}(\Delta t)^3 && \text{(using {\bf A2}).}\\ \end{aligned} \label{eq:B_estimate1} \end{equation*} \noindent To estimate the term $C$ in \eqref{eq:con_spde4}, we use the triangle inequality to get \begin{equation*} \mathbb{E}\left[\left\|C\right\|^2_{L^2(\Omega)}\right] \le \mathbb{E}\left [ \left\|C_1\right\|^2_{L^2(\Omega)}+\left\|C_2\right\|^2_{L^2(\Omega)}+\left\|C_3\right\|^2_{L^2(\Omega)} \right]\,, \label{eq:C_estimate1} \end{equation*} and then separately estimate each term on the right hand side. Applying the Cauchy--Schwarz inequality and {\bf A4} to $C_1$, we get the following estimation \begin{equation*} \mathbb{E}\left[ \left\|C_1\right\|^2_{L^2(\Omega)} \right] \le\frac{\Delta t}{2}\mathbb{E}\left[\int\limits_{\Omega} \left\| \sum\limits^m_{i=1}(b_{i,r}(b_{i,r})-b_{i,s}(b_{i,s})) \right\|^2_{L^2(\Omega)}\, d\Omega \right] \le \frac{\widetilde{C}^2}{8}(\Delta t)^3. 
\label{eq:C1_estimate1} \end{equation*} \noindent The term $C_2$ is estimated as \begin{equation*} \begin{aligned} \mathbb{E}\left[ \left\|C_2\right\|^2_{L^2(\Omega)} \right]=& \int\limits_{\Omega} \mathbb{E}\left[\left(\frac12 \sum\limits^m_{i=1}(b_{i,s}(b_{i,s}))\left({\color{red}(\Delta W_i)^2}-\Delta t\right)\right)^2 \right]\, d\Omega && \\ = & \frac14 \int\limits_{\Omega} \sum\limits^m_{i=1}(b_{i,s}(b_{i,s}))^2\, \mathbb{E}\left[ {\color{red}(\Delta W_i)^4}-2{\color{red}(\Delta W_i)^2}\Delta t+(\Delta t)^2 \right]\,d\Omega && \\ = & \frac{(\Delta t)^2}{2} \left\| \sum\limits^m_{i=1}(b_{i,s}(b_{i,s}))^2\right\|^2_{L^2(\Omega)} \le \frac{\widetilde{C}^2}{2}(\Delta t)^2 && \text{(using {\bf A3}).}\\ \end{aligned} \label{eq:C2_estimate1} \end{equation*} \noindent Using {\bf A3} for $C_3$ leads to \begin{equation*} \mathbb{E}\left[ \left\|C_3\right\|^2_{L^2(\Omega)} \right]= \frac14 \int\limits_{\Omega} \sum\limits^m_{i\ne j}(b_{i,s}(b_{i,s}))^2\, \mathbb{E}\left[ {\color{red}(\Delta W_i)^2}\right] \mathbb{E}\left[ {\color{red}(\Delta W_j)^2}\right]\, d\Omega = \frac{(\Delta t)^2}{4} \left\|\sum\limits^m_{i\ne j}(b_{i,s}(b_{i,s}))\right\|^2_{L^2(\Omega)} \le \frac{\widetilde{C}^2}{4}(\Delta t)^2. \label{eq:C3_estimate1} \end{equation*} \noindent Finally, we arrive at the following estimation \begin{equation*} \mathbb{E}\left[\left\|A+B+C\right\|^2_{L^2(\Omega)}\right]\le C^*\left((\Delta t)^2+(\Delta t)^3\right)\le 2C^*(\Delta t)^2,\quad C^*>0, \label{eq:con_spde4_2} \end{equation*} which proves the theorem. \end{proof} \begin{remark} Conditions {\bf A1-A5} are satisfied and SPDE~\eqref{eq:Ito1} is well-posed for sufficiently large $p$ for all $T>0$ if the stochastic QG equation~\eqref{eq:SLTpv} has a solution in $W^{2p,2}$ such that $\mathbb{E}\left[\sup\limits_{t\in [0,T]} ||q_i||^2_{W^{2p,2}}\right]<\infty$, $i=1,2$.
\end{remark} \subsubsection{Initial conditions\label{sec:ICs}} The choice of the initial condition for the stochastic QG model is important, especially in the context of uncertainty quantification and data assimilation, for it significantly influences the evolution of the flow as well as its further predictability. A straightforward approach based on a random perturbation of the true solution (which is $q^a$ in our case) at time $t=0$ can lead to the injection of unphysical perturbations into the flow dynamics which, in turn, can result in an unphysical solution. Therefore, in order to perform the uncertainty quantification tests presented in Section~\ref{sec:cal}, we need a number of independent realizations of the initial condition that are physically consistent with the flow dynamics. To this end, we start at time $t=-t^*$ with the true solution $q^a$ of the deterministic model and run the stochastic model until $t_0=0$ with independent realizations of the Brownian noise $W$ (see Section~\ref{sec:cal} for details) to produce independent samples from the initial condition. As a result, the ensemble of stochastic solutions (also referred to as ensemble members) ``covers'' the true deterministic solution at time $t_0$. The next experiment is to study for how long this property holds. To this end, we introduce the following function \begin{equation*} \widetilde{T}_{\mathcal{S}}:=\frac{1}{|T|}\int\limits_{T}\widetilde{\delta}(q^a) \, dt,\qquad \widetilde{\delta}(q^a)=\left\{ \begin{aligned} 1\quad \text{if } q^a\in\mathcal{S},\\ 0\quad \text{if } q^a\not\in\mathcal{S}, \end{aligned} \right. \label{eq:T_S} \end{equation*} which represents the fraction of the time period $T$ spent by the true deterministic solution, $q^a$, within the spread of stochastic solutions $\mathcal{S}$. The behavior of the function $\widetilde{T}_{\mathcal{S}}$ for the whole computational domain is given in Figure~\ref{fig:eof64_particles_deltaT_time_within_spread_L2_mu4D-8_mu4D-7}.
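The diagnostic $\widetilde{T}_{\mathcal{S}}$ is straightforward to compute once the ensemble is available. The sketch below assumes the spread $\mathcal{S}$ at each time is the pointwise min--max envelope of the ensemble (one possible reading of the definition; the paper does not pin this down), and the toy data are illustrative.

```python
import numpy as np

def fraction_within_spread(truth, ensemble):
    """T_S: fraction of the time period during which the true trajectory lies
    within the ensemble spread, taken here as the pointwise [min, max] envelope.

    truth    : array of shape (n_time,)
    ensemble : array of shape (n_members, n_time)
    """
    lo = ensemble.min(axis=0)
    hi = ensemble.max(axis=0)
    return np.mean((truth >= lo) & (truth <= hi))

# Toy check: a "true" signal and a 100-member ensemble scattered around it.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 200)
truth = np.sin(2.0 * np.pi * t)
ensemble = truth + 0.3 * rng.standard_normal((100, 200))

ts = fraction_within_spread(truth, ensemble)             # truth inside the spread
ts_off = fraction_within_spread(truth + 10.0, ensemble)  # truth far outside
```

Applied per grid point, this yields exactly the $[0,1]$ field plotted in the figure below: values near 1 where the ensemble covers the truth, near 0 where it does not.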
\begin{figure}[H] \centering \includegraphics[scale=0.35]{fig5.png} \caption{Shown is the time $\widetilde{T}_{\mathcal{S}}$ spent by the true deterministic velocity ${\bf V}^a$, stream function $\psi^a$, and potential vorticity $q^a$ within the spread of stochastic solutions $\mathcal{S}$ over the time period $T=[-1,0]$ hour and $T=[-16,0]$ hour for the heterogeneous and homogeneous flow. The stochastic spread (consisting of 100 independent samples from the initial condition) has been computed with the stochastic QG model~\eqref{eq:SLTpv} using the first 64 leading EOFs, which capture 96\% of the flow variability (see Section~\ref{sec:cal}). The blue color in the colorbar corresponds to $\widetilde{T}_{\mathcal{S}}=0$ (the stochastic spread never captures the true deterministic solution over the time period $T$). The red color in the colorbar corresponds to $\widetilde{T}_{\mathcal{S}}=1$ (the stochastic spread always captures the true deterministic solution over the time period $T$). As mentioned above, the potential vorticity $q^a$ is computed as the solution of the elliptic equation~\eqref{eq:q_psi} with the stream function $\psi^a$, where $\psi^a$ is computed by spatially averaging the high-resolution stream function $\psi^f$ over the coarse grid cell $G^c=129\times65$. The true velocity ${\bf V}^a$ is computed by differentiating the stream function $\psi^a$. As the plot shows, the spread of stochastic solutions captures the true values of ${\bf V}^a$, $\psi^a$, and $q^a$ for both heterogeneous and homogeneous flow equally well. For both the heterogeneous and homogeneous flow regime, the length of the time interval $T$ has a minor influence on the behavior of the stochastic spread (compare the top and bottom row in the Figure) thus ensuring a better coverage of the true solution with the stochastic spread over longer time. 
The region along the upper and lower boundary is not properly covered by the spread, because the boundary layer dynamics is difficult to capture on the coarse grid. However, the area of the boundary region is very small compared to the area of the whole domain, and therefore it has a negligible effect on the uncertainty quantification results. } \label{fig:eof64_particles_deltaT_time_within_spread_L2_mu4D-8_mu4D-7} \end{figure} In order to compute the function $\widetilde{T}_{\mathcal{S}}$, we start at time $t^*=-1$ hour with the true solution $q^a(t^*)$ and run the stochastic QG model~\eqref{eq:SLTpv} until $t_0=0$ with 100 independent realizations of the Brownian noise $W$, and with the first 64 leading EOFs, which capture 96\% of the total variance (see Section~\ref{sec:cal}). As seen in Figure~\ref{fig:eof64_particles_deltaT_time_within_spread_L2_mu4D-8_mu4D-7}, the spread of stochastic solutions $\mathcal{S}$ captures the true deterministic velocity ${\bf V}^a$, stream function $\psi^a$, and potential vorticity $q^a$ for both heterogeneous and homogeneous flow, and over short and long time intervals $T$ equally well, except in the neighbourhood of the upper and lower boundary. This boundary layer dynamics is difficult to capture on the coarse grid, because of the low resolution. However, the boundary layer is very small with respect to the whole domain, and so its contribution to the uncertainty quantification results is minuscule. Overall, we have shown that the stochastically advected deterministic initial condition provides a solid basis for uncertainty quantification tests (given in Section~\ref{sec:cal}) as well as for data assimilation, which will be the object of future research.
\section{Calibration of eigenvectors~\label{sec:cal}} We present a methodology for modelling the difference between passive, infinitesimal Lagrangian particles advected by the high-resolution deterministic velocity field $u$ computed on the fine grid $G^f=2049\times1025$ and its coarsened counterpart $\overline{u}$ computed on the coarse grid $G^c=129\times65$ by differentiating the coarse-grain stream function $\psi^a$. The stream function is computed by spatially averaging the high-resolution stream function $\psi^f$ over the coarse grid cell $G^c$. Based on this difference, we compute Empirical Orthogonal Functions (EOFs)~\citep{Preisendorfer1988,HaJoSt2007} and evaluate how the accuracy of the deterministic flow dynamics reconstructed from the leading EOFs and their corresponding Principal Components (PCs) depends on the number of EOF-PC pairs. We also perform uncertainty quantification tests for the stochastic differential equation for Lagrangian particles~\eqref{eq:barx} and the stochastic QG model~\eqref{eq:SLTpv}, and study how the number of EOF-PC pairs and size of the ensemble of stochastic solutions (referred to as ensemble members) affect the width of the stochastic spread. \subsection{Measuring the Lagrangian evolution} In the stochastic GFD framework, stochastic PDEs are derived from the starting assumption that (averaged) fluid particles satisfy the equation \begin{equation} \label{eq:barx} \diff \bar{x}(a,t) = \bar{u}(\bar{x}(a,t),t)\diff t + {\color{red}\sum\limits^{N_{\xi}}_{i=1} \xi_i(\bar{x}(a,t)) \circ \diff W_i}, \end{equation} where $a$ is the Lagrangian label. The assumption \eqref{eq:barx} leads to, for example, the stochastic QG equation \begin{equation*} \diff\bar{q}^{l}(x,t) + (\bar{u}^{l}(x,t)\diff t + {\color{red}\sum\limits^{N_{\xi}}_{i=1}\xi^{l}_i(x,t)\circ \diff W^{l}_i(t)})\cdot\nabla q^{l}(x,t) = F^{l}\diff t,\quad l=1,2, \end{equation*} with $F^{l}$ being the right hand side of~\eqref{eq:pv}. 
This is the system of stochastic PDEs that we actually solve. That is, equation \eqref{eq:barx} is not explicitly solved but describes the motion of fluid particles under the stochastic PDE solution. The goal of the stochastic PDE is to model the coarse-grained components of a deterministic PDE that exhibits rapidly fluctuating components. The derivation of deterministic fluid dynamics starts from the equation \begin{equation} \label{eq:dx} \diff {x}(a,t) = {u}({x}(a,t),t)\diff t. \end{equation} After defining an averaged trajectory $\bar{x}(a,t)$, we write \begin{equation*} x(a,t) = \bar{x}(a,t) + \zeta(\bar{x}(a,t),t/\epsilon^2), \end{equation*} on the assumption that the fluctuations in $\zeta$ are faster than those in $\bar{x}$; this scale separation is parameterised by the small parameter $\epsilon$. Thus the deterministic equation for $\bar{x}$ is \begin{equation*} \label{eq:dxbar} \diff \bar{x}(a,t) = {u}(\bar{x}(a,t) + \zeta(\bar{x}(a,t)),t/\epsilon^2)\diff t - \zeta(\bar{x}(a,t),t/\epsilon^2)\diff t. \end{equation*} If $\zeta$ has a fast dependency on $t$ and has a stationary invariant measure, then according to homogenisation theory \citep{CoGoHo2017} we may average this equation over the invariant measure (subject to a centring condition) to get an effective equation \begin{equation*} \diff \bar{x}(a,t) = \bar{u}(\bar{x}(a,t), t)\diff t + {\color{red}\sum\limits^\infty_{i=1} \xi_i(\bar{x}(a,t))\circ \diff W_i(t)} + \mathcal{O}(\epsilon). \end{equation*} After truncation of this sum, we recover equation \eqref{eq:barx}. We assume that $u(x,t)$ can be modelled well with a fine grid simulation, whilst $\bar{u}(\bar{x},t)$ can be modelled on a coarse grid simulation. Then, we wish to estimate $\xi_i$ using data from $u(x,t)$ in order to simulate $\bar{u}(x,t)$. The methodology is as follows.
We spin up a fine grid solution from $t=-T_{spin}$ to $t=0$ (till some statistical equilibrium is reached), then record velocity time series from $t=0$ to $t=M\Delta t$, where $\Delta t=k\delta t$, and $\delta t$ is the fine grid timestep. We define $X_{ij}^0$ as the coarse grid points. For each $m=0,1,\ldots,M-1$, we \begin{enumerate} \item Solve $\dot{X}_{ij}(t)=u(X_{ij}(t),t)$ with initial condition $X_{ij}(m\Delta t)=X^0_{ij}$, where $u(x,t)$ is the solution from the fine grid simulation. \item Compute $\bar{u}_{ij}(t)$ by spatially averaging $u(x,t)$ over the coarse grid cell size around gridpoint $(i,j)$. \item Compute $\bar{X}_{ij}$ by solving $\dot{\bar{X}}_{ij}(t) = \bar{u}_{ij}(t)$ with the same initial condition. \item Compute the difference $\Delta X_{ij}^m = \bar{X}_{ij}((m+1)\Delta t) - X_{ij}((m+1)\Delta t)$, which measures the error between the fine and coarse trajectory. \end{enumerate} Having obtained $\Delta X_{ij}^m$, we would like to extract the basis for the noise. This amounts to a Gaussian model of the form \[ \frac{\Delta X_{ij}^m}{\sqrt{\Delta t}} = \bar{\Delta X_{ij}} + {\color{red}\sum\limits^{N_{\xi}}_{k=1} \xi_{ij}^k\Delta W^k_m}, \] where $\Delta W^k_m$ are independent and identically distributed normal random variables with mean zero and variance one. We estimate $\xi$ by minimising \[ \mathbb{E}\left[\left(\sum\limits_{ijm}\frac{\Delta X_{ij}^m}{\sqrt{\Delta t}} - \bar{\Delta X_{ij}} - {\color{red}\sum\limits^{N_{\xi}}_{k=1} \xi_{ij}^k\Delta W^k_m}\right)^2\right], \] where the choice of $N_{\xi}$ can be informed by using EOFs. Our choice of Empirical Orthogonal Function analysis is based on the capability of this method to extract spatially coherent, temporally uncorrelated and statistically significant modes of transient variability from multivariable time series. In particular, this method is efficient for dimensionality reduction, compression and spatio-temporal variability analysis of atmospheric and oceanic data.
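The EOF extraction from the scaled increments $\Delta X^m_{ij}/\sqrt{\Delta t}$ can be sketched with a thin SVD of the snapshot matrix, which is the standard way EOFs are computed; the function names and toy data below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def eof_basis(dX, n_eof):
    """Estimate the spatial patterns xi_k (EOFs) and their principal components
    from the scaled increments Delta X / sqrt(Delta t), via a thin SVD.

    dX : array of shape (n_snapshots, n_points), one row per time window m,
         flattened over the coarse grid points (i, j).
    """
    mean = dX.mean(axis=0)             # the time-mean term
    anomalies = dX - mean
    # Rows of vt are orthonormal spatial modes; singular values give variance.
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt[:n_eof]                  # spatial patterns xi_k
    pcs = anomalies @ eofs.T           # principal components (time series)
    explained = (s[:n_eof] ** 2).sum() / (s ** 2).sum()
    return mean, eofs, pcs, explained

# Toy snapshot matrix: two orthogonal spatial modes plus weak noise.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 2.0 * np.pi, 64)
modes = np.stack([np.sin(x), np.sin(2.0 * x)])
amps = rng.standard_normal((500, 2)) * np.array([3.0, 1.0])
dX = amps @ modes + 0.01 * rng.standard_normal((500, 64))

mean, eofs, pcs, explained = eof_basis(dX, 2)
```

The `explained` ratio is what justifies truncating the sum at a small $N_{\xi}$: once the leading EOF-PC pairs capture most of the variance of the increments, the remaining modes can be discarded.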
Generally speaking, one can use different flow decomposition methods instead of EOF analysis (e.g. Dynamic Mode Decomposition (DMD)~\citep{Schmid2010}, Optimized DMD~\citep{Chen_et_al2012}, Singular Spectrum Analysis~\citep{ElsnerTsonis1996}, etc.), and analyse how they affect the parameterisation, but such a study would be beyond the scope of this paper. \subsection{Validity of the approximation\label{sec:val_approx}} Having computed the EOFs and their corresponding PCs (denoted by ${\boldsymbol \xi}$ and $P$, respectively), we can analyse how the coarse-grid solution \begin{equation} {\bf x}^c(t):=\widehat{\bf x}^c(t)+\Delta {\bf X}(t), \label{eq:xc} \end{equation} depends on the number of EOF-PC pairs used to approximate the difference $\Delta {\bf X}:={\bf x}-\widehat{{\bf x}}^c$, \begin{equation} \Delta {\bf X}(t)\approx\sum\limits^{N_{\xi}}_{i=1}{\boldsymbol \xi}_i(\widehat{{\bf x}}^c(t))P_i(t). \label{eq:dx_approx} \end{equation} Here, ${\bf x}$ is the solution of the deterministic equation~\eqref{eq:dx} with $\bf u$ being the high-resolution velocity, and $\widehat{\bf x}^c(t)$ is the solution of the deterministic equation \begin{equation} \diff {\widehat{\bf x}}^c(t) = \overline{\bf u}({\widehat{\bf x}^c}(t),t)\diff t\,, \label{eq:x} \end{equation} with the spatially averaged velocity $\overline{\bf u}$. In this section, we solve ODE~\eqref{eq:x} with the velocity $\overline{\bf u}$ computed from the high-resolution heterogeneous flow (Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8}). However, it is important to note that the results presented here are qualitatively independent of the flow dynamics, and hence they are equally valid for both heterogeneous and homogeneous flows. In order to solve equation~\eqref{eq:x}, we use the classical 4-stage Runge--Kutta method~\citep{HNW1993} given by the Butcher tableau~\eqref{eq:Butcher_tableau}.
\begin{equation} \renewcommand\arraystretch{1.2} \begin{array}{c|cccc} 0\\ \frac{1}{2} & \frac{1}{2}\\ \frac{1}{2} &0 &\frac{1}{2} \\ 1& 0& 0& 1\\ \hline & \frac{1}{6} &\frac{1}{3} &\frac{1}{3} &\frac{1}{6} \end{array} \label{eq:Butcher_tableau} \end{equation} We present the results in Figure~\ref{fig:ODE_EOF_re}. We remark that in this case Lagrangian particles move freely within the flow, i.e. they are not remapped every time step $\Delta t$ as in Section~\ref{sec:approx_lagrangian_evol}. It is also worth noting that we computed the relative error $\delta$ for $\Delta {\bf X}$ including both the time-mean and fluctuating component, since excluding the time-mean would result in a much higher error. \begin{figure}[H] \hspace*{2.5cm} \begin{tabular}{cc} \begin{minipage}{0.02\textwidth}{$\delta$}\end{minipage} & \begin{minipage}{0.8\textwidth}\includegraphics[width=10cm]{ODE_EOF_relerr.png}\end{minipage}\\ &\\[-0.25cm] & \hspace*{-4cm} $t\, {\rm [days]}$\\ \end{tabular} \caption{Dependence of the $L^2$-norm relative error of Lagrangian path separations $\delta=\displaystyle \|{\bf x}-{\bf x}^c\|_{L^2} / \|{\bf x}\|_{L^2}$ on the number of leading EOF-PC pairs used in approximation~\eqref{eq:dx_approx} is shown. Here ${\bf x}$ (see equation~\eqref{eq:dx}) and $\widehat{\bf x}^c$ (see equation~\eqref{eq:x}) are the positions of Lagrangian particles freely advected by the high-resolution velocity $\bf u$ computed on the fine-grid $G^f=2049\times1025$, and its coarse-grained analogue $\overline{\bf u}$ computed on the coarse grid $G^c=129\times65$ by differentiating the coarse-grain stream function $\psi^a$, respectively. The EOFs and their corresponding PCs, used in this test, capture 96\% of the flow variability and have been computed over the period of $T=[0,70]$ days. The time period is not a critical parameter; it has been chosen so as to demonstrate how accurately the solution can be approximated by a given number of EOF-PC pairs. 
Our results show that using more EOF-PC pairs in computing the positions of Lagrangian particles $\widehat{\bf x}^c$ tends to increase the accuracy of the solution ${\bf x}^c$.} \label{fig:ODE_EOF_re} \end{figure} As seen in Figure~\ref{fig:ODE_EOF_re}, using more EOF-PC pairs to compute the positions of Lagrangian particles tends to increase the accuracy of the approximated solution. This is an expected result, because $\Delta {\bf X}\rightarrow 0$ as $N_{\xi}\rightarrow DOF$ in approximation~\eqref{eq:dx_approx}, where DOF is the number of degrees of freedom on the coarse grid $G^c$. \subsection{Approximation of the Lagrangian evolution\label{sec:approx_lagrangian_evol}} In contrast with the previous section, we apply EOF analysis to the fluctuating component of $\Delta {\bf X}:=\widehat{\bf x}^c-\overline{\bf x}$ and perform uncertainty quantification tests for the stochastic differential equation (SDE)~\eqref{eq:barx} by comparing the true deterministic solution with an ensemble of stochastic solutions. As the true deterministic solution, we take the solution $\widehat{\bf x}^c$ of the deterministic equation~\eqref{eq:x}, while the stochastic ensemble is given by solutions $\overline{\bf x}$ of SDE~\eqref{eq:barx} computed for independent realizations of the Brownian noise $W$. The uncertainty quantification tests are carried out for different numbers of EOFs and different sizes of the stochastic ensemble. As opposed to the deterministic case, we use the Brownian noise instead of the Principal Components, and Lagrangian particles are remapped to their original positions (the nodes of the Eulerian grid $G^c$) every time step $\Delta t$. The size of the time step is a critical component for uncertainty quantification and data assimilation, and should be chosen so as to lead to a stochastic ensemble which covers the true solution over this time step.
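For reference, one step of the classical 4-stage Runge--Kutta method encoded by the Butcher tableau~\eqref{eq:Butcher_tableau} can be sketched in Python as follows (a generic sketch; function names are hypothetical):

```python
def rk4_step(f, t, x, dt):
    """One step of the classical 4-stage Runge--Kutta method for
    dx/dt = f(t, x): nodes (0, 1/2, 1/2, 1) and weights
    (1/6, 1/3, 1/3, 1/6), as in the Butcher tableau above."""
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt / 2 * k1)
    k3 = f(t + dt / 2, x + dt / 2 * k2)
    k4 = f(t + dt, x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

The same step applies componentwise when `x` is an array of Lagrangian particle positions and `f` interpolates the coarse-grained velocity $\overline{\bf u}$.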
As mentioned before, to solve the deterministic equation~\eqref{eq:x}, we use the Runge--Kutta method with the Butcher tableau~\eqref{eq:Butcher_tableau}. The SDE~\eqref{eq:barx} is solved with the stochastic version of the Runge--Kutta method presented in Algorithm~\ref{alg:SRK4}. \begin{algorithm} \caption{Stochastic Runge--Kutta method} \label{alg:SRK4} \begin{algorithmic} \FOR{$n=0,1,2,\ldots$} \STATE{ \begin{tabular}{ll} ${\bf k}_1={\bf v}(t_n,\widetilde{\bf x}_n)$, & ${\bf l}_1={\color{red}\Xi(t_n,\widetilde{\bf x}_n)}$,\\[0.125cm] ${\bf k}_2={\bf v}(t_n+\frac{\Delta t}{2},\widetilde{\bf x}_n+\frac{\Delta t}{2}{\bf k}_1)$, & ${\bf l}_2={\color{red}\Xi(t_n+\frac{\Delta W}{2},\widetilde{\bf x}_n+\frac{\Delta W}{2}{\bf l}_1)}$,\\[0.125cm] ${\bf k}_3={\bf v}(t_n+\frac{\Delta t}{2},\widetilde{\bf x}_n+\frac{\Delta t}{2}{\bf k}_2)$, & ${\bf l}_3={\color{red}\Xi(t_n+\frac{\Delta W}{2},\widetilde{\bf x}_n+\frac{\Delta W}{2}{\bf l}_2)}$,\\[0.125cm] ${\bf k}_4={\bf v}(t_n+\Delta t,\widetilde{\bf x}_n+\Delta t{\bf k}_3)$, & ${\bf l}_4={\color{red}\Xi(t_n+\Delta W,\widetilde{\bf x}_n+\Delta W{\bf l}_3)}$,\\ \end{tabular}} \hspace*{-1cm} \STATE{ \begin{equation} \widetilde{\bf x}_{n+1}=\widetilde{\bf x}_n+({\bf k}_1+2({\bf k}_2+{\bf k}_3)+{\bf k}_4)\frac{\Delta t}{6}+{\color{red}({\bf l}_1+2({\bf l}_2+{\bf l}_3)+{\bf l}_4)\frac{\Delta W}{6}}. \label{eq:srk4} \end{equation}} \ENDFOR \end{algorithmic} \end{algorithm} \noindent Here ${\bf v}$ is the velocity vector, and $\{\widetilde{\bf x}_i\}^{N_l}_{i=1}$ is the vector of coordinates of Lagrangian particles, with $N_l$ being the number of Lagrangian particles. The stochastic term $\Xi(t,\widetilde{\bf x})$ is given by \[ \Xi(t,\widetilde{\bf x}):={\color{red}\sum\limits_{k=1}^{N_{\xi}}\boldsymbol{\xi}_{k,l}(\widetilde{\bf x}(t)) \Delta W_{k,l}(t)},\quad l=1,2\,. 
\] By Taylor expanding the right hand side of~\eqref{eq:srk4}: \[ \widetilde{\bf x}_{n+1}=\widetilde{\bf x}_n+{\bf v}(t_n,\widetilde{\bf x}_n)\Delta t+{\color{red}\Xi(t_n,\widetilde{\bf x}_n)\Delta W} +{\color{red}\left(\Xi(t_n,\widetilde{\bf x}_n)\cdot\nabla\Xi(t_n,\widetilde{\bf x}_n)\right)\frac{(\Delta W)^2}{2}}+{\color{red}\mathrm{H.O.T.}}\, \] it can be seen that the stochastic Runge--Kutta method is in Stratonovich form, and thus consistent with SDE~\eqref{eq:barx}. Before passing to numerical results, we recall that the size of the time step is a critical component for uncertainty quantification and data assimilation: the time step should be short enough that the stochastic ensemble encompasses the true solution. Our experiments show that $\Delta t=24\, \rm hours$ properly fulfils this condition. First, we demonstrate that the true deterministic solution $\widehat{\bf x}^c$ is enclosed within a cloud of stochastic solutions (also referred to as the stochastic spread). To this end, we study how the area of the stochastic cloud (denoted by $A^c$) depends on both the size of the stochastic ensemble, $N_a$, and the number of EOFs. The results are presented in Figures~\ref{fig:eof64_100_400} and~\ref{fig:eof128_100_400}. \begin{figure}[H] \centering \includegraphics[scale=0.30]{fig7.png} \caption{Shown is a typical dependence of the area of the stochastic cloud $A^c$ on the size of the stochastic ensemble $\overline{\bf x}$ at the time moments {\bf (a)} $t=0$, {\bf (b)} $t=50$ hours, {\bf (c)} $t=100$ hours, {\bf (d)} $t=200$ hours. The left and right columns show the area of the stochastic cloud (marked in grey) for 100 and 400 ensemble members, respectively. The stochastic ensemble has been computed for the \textbf{\textit{first 64 leading EOFs (96\% of the flow variability)}}. The true solution $\widehat{\bf x}^c$ is marked with a black dot. 
The plot represents a typical part of the computational domain of size $[10,45]\times[45,65]$ in the first layer, which can be divided into two regions: a fast flow region (the boundary layer along the upper boundary $[10,45]\times[60,65]$ and the striation occupying the domain $[10,45]\times[45,52]$) and a slow flow region $[10,45]\times(52,60)$. As can be seen in the figure, there are two key parameters which influence the size of the stochastic cloud, namely the number of ensemble members and the flow velocity. In particular, the larger the stochastic ensemble is, the wider the cloud becomes. The same is true for the flow velocity: the faster the flow, the larger the stochastic spread. This behavior is expected, since large ensembles or fast flows inevitably increase the variance of the whole stochastic cloud. Clearly, the velocity of the flow contributes much more to the size of the spread than the number of ensemble members (compare the area of the stochastic cloud in the fast and slow regions for different numbers of ensemble members). The most important observation is that the true solution lies within the stochastic cloud almost everywhere. This observation confirms that the parameterisation works well for different flow regimes. } \label{fig:eof64_100_400} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.295]{fig8.png} \caption{Shown is a typical dependence of the area of the stochastic cloud $A^c$ on the size of the stochastic ensemble $\overline{\bf x}$ at the time moments {\bf (a)} $t=0$, {\bf (b)} $t=50$ hours, {\bf (c)} $t=100$ hours, {\bf (d)} $t=200$ hours. The left and right columns show the area of the stochastic cloud (marked in grey) for 100 and 400 ensemble members, respectively; the stochastic ensemble has been computed for the \textbf{\textit{first 128 leading EOFs (99\% of the flow variability)}}. The true solution $\widehat{\bf x}^c$ is marked with a black dot. 
The plot represents a typical part of the computational domain of size $[10,45]\times[45,65]$ in the first layer, which can be divided into two regions: a fast flow region (the boundary layer along the upper boundary $[10,45]\times[60,65]$ and the striation occupying the domain $[10,45]\times[45,52]$) and a slow flow region $[10,45]\times(52,60)$. As can be seen in the figure, there are two key parameters which influence the size of the stochastic cloud, namely the number of ensemble members and the flow velocity. In particular, the larger the size of the stochastic ensemble is, the wider the cloud becomes. The same is true for the flow velocity: the faster the flow, the larger the stochastic spread. This is an expected behavior, since large ensembles or fast flows inevitably increase the variance of the whole stochastic cloud. Clearly, the velocity of the flow contributes much more to the size of the spread than the number of ensemble members (compare the area of the stochastic cloud in the fast and slow regions for different numbers of ensemble members). As in the previous test with 64 EOFs (see Figure~\ref{fig:eof64_100_400}), the true solution lies within the stochastic cloud almost everywhere. This result confirms again that the parameterisation works well, with little dependence on the number of EOFs. } \label{fig:eof128_100_400} \end{figure} Figures~\ref{fig:eof64_100_400} and \ref{fig:eof128_100_400} show a typical flow region in the horizontal channel at different time moments, for different numbers of EOFs, and for different sizes of the stochastic ensemble. As seen in the figures, the channel flow can be divided into two regions: a fast flow region (the boundary layer along the upper boundary $[10,45]\times[60,65]$ and the striation occupying the domain $[10,45]\times[45,52]$) and a slow flow region $[10,45]\times(52,60)$. 
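Returning to Algorithm~\ref{alg:SRK4}, a single step can be sketched in Python as follows. This is a sketch under one defensible reading of the algorithm, written for a single noise basis function $\xi$ (for several noises the ${\bf l}$-stages are repeated per noise component); the drift stages use the time step $\Delta t$, while the noise stages reuse the same Brownian increment $\Delta W$ in place of $\Delta t$, which is what makes the scheme consistent with the Stratonovich SDE. Function names are hypothetical:

```python
def srk4_step(v, xi, t, x, dt, dW):
    """One step of the stochastic Runge--Kutta method (cf. Algorithm SRK4)
    for the Stratonovich SDE  dx = v(t, x) dt + xi(x) o dW,  sketched for
    a single noise basis function xi.  Drift stages k_i use dt; noise
    stages l_i use the Brownian increment dW drawn once per step."""
    k1 = v(t, x);                         l1 = xi(x)
    k2 = v(t + dt / 2, x + dt / 2 * k1);  l2 = xi(x + dW / 2 * l1)
    k3 = v(t + dt / 2, x + dt / 2 * k2);  l3 = xi(x + dW / 2 * l2)
    k4 = v(t + dt, x + dt * k3);          l4 = xi(x + dW * l3)
    return (x + dt / 6 * (k1 + 2 * (k2 + k3) + k4)
              + dW / 6 * (l1 + 2 * (l2 + l3) + l4))
```

With $\xi\equiv 0$ this reduces to the deterministic 4-stage Runge--Kutta step; in the simulations, $\Delta W\sim\mathcal{N}(0,\Delta t)$ is drawn independently at every step, and the Lagrangian particles are remapped after each step as described above.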
By comparing Figures~\ref{fig:eof64_100_400} and~\ref{fig:eof128_100_400}, we identify three key parameters which influence the size of the stochastic cloud: the number of EOFs, the size of the stochastic ensemble, and the flow velocity. As Figures~\ref{fig:eof64_100_400} and \ref{fig:eof128_100_400} show, the more EOFs are used in the stochastic model, the wider the stochastic spread becomes. The same is true for the size of the stochastic ensemble and the velocity of the flow, namely the stochastic cloud widens as the ensemble size or the flow velocity increases. This behavior is expected, since increasing these parameters offers a better quantification of the uncertainty of the model. The velocity of the flow contributes much more to the size of the spread than the number of ensemble members or EOFs. The results show that regardless of the flow dynamics the true solution $\widehat{\bf x}^c$ lies within the stochastic cloud almost everywhere. This confirms that the parameterisation works well for a wide range of governing parameters and different flow regimes. Figures~\ref{fig:eof64_100_400} and \ref{fig:eof128_100_400} give important insights into how the uncertainty in the stochastic equation~\eqref{eq:barx} behaves with respect to the number of EOFs, the size of the stochastic ensemble, and the flow velocity. Next, we take a more global view. To do so, we first divide the flow dynamics into fast (the northern and southern boundary layers, and striations) and slow (flow between the striations) regions, as shown in Figures~\ref{fig:vel_fields}(a) and~\ref{fig:vel_fields}(b), respectively. In terms of the flow velocity, we quantify the flow by the Reynolds number defined as \[ Re=\overline{\bf v}\, Rd_1/\nu, \] where $\overline{\bf v}$ is the maximum time-mean velocity, and $Rd_1$ is the first baroclinic Rossby deformation radius. We remark that $Re$ can be defined by using different velocity and length scales (e.g.~\citep{SiegelEtAl2001}). 
Our definition is focused on the mesoscale eddies, characterized by length scales up to $O(100)\, {\rm km}$, and striations. In terms of the Reynolds number, the flow decomposition is given by $Re_s<432$ and $432\le Re_f\le 1440$, where $Re_s$ and $Re_f$ are the Reynolds numbers for the slow and fast flow dynamics, respectively. \begin{figure}[H] \centering \includegraphics[scale=0.3]{fig9.png} \caption{Shown are {\bf (a)} instantaneous and {\bf (b)} time-averaged normalized velocity fields in the horizontally periodic channel $\Omega=[0,3840\, {\rm km}]\times[0,1960\, {\rm km}]$. As seen in the plot, the flow is heterogeneous. That is, fast velocity regions appear intermittently between slow velocity regions. In particular, the fast flow region includes the jet-like structures (also referred to as striations) and the boundary flows along the northern and southern boundaries, while the slow flow regions mainly comprise the flows between the jets. } \label{fig:vel_fields} \end{figure} Before going into detail, it is helpful to introduce the areas of the stochastic cloud, $\overline{A}^c_s$ and $\overline{A}^c_f$, averaged over the number of Lagrangian particles in the slow, $N_{l,s}$, and fast, $N_{l,f}$, flow regions, respectively: \[ \overline{A}^c_s(t_k):=\frac{1}{N_{l,s}}\sum\limits^{N_{l,s}}_{i=1}A^c_{s,i}(t_k),\quad \overline{A}^c_f(t_k):=\frac{1}{N_{l,f}}\sum\limits^{N_{l,f}}_{i=1}A^c_{f,i}(t_k), \] where $A^c_{s,i}$ and $A^c_{f,i}$ are the areas of the clouds surrounding the $i$-th Lagrangian particle belonging to the slow and fast flow, respectively. 
\noindent We define the $L^2$-error spread for the slow, $\widetilde{S}_s(t_k)$, and fast, $\widetilde{S}_f(t_k)$, flow as \[ \widetilde{S}_s(t_k):=\left[\min\limits_{j\in[1,N_a]}\widetilde{e}_{s,j}(t_k),\max\limits_{j\in[1,N_a]}\widetilde{e}_{s,j}(t_k)\right],\quad \widetilde{S}_f(t_k):=\left[\min\limits_{j\in[1,N_a]}\widetilde{e}_{f,j}(t_k),\max\limits_{j\in[1,N_a]}\widetilde{e}_{f,j}(t_k)\right], \] with mean $L^2$-norm relative errors given by \[ \widetilde{e}_{s,j}(t_k):=\frac{1}{N_{l,s}}\sum\limits^{N_{l,s}}_{i=1}\frac{\|\widehat{x}^c_i(t_k)-\overline{x}_j(t_k)\|_{L^2}}{\|\widehat{x}^c_i(t_k)\|_{L^2}},\quad \widetilde{e}_{f,j}(t_k):=\frac{1}{N_{l,f}}\sum\limits^{N_{l,f}}_{i=1}\frac{\|\widehat{x}^c_i(t_k)-\overline{x}_j(t_k)\|_{L^2}}{\|\widehat{x}^c_i(t_k)\|_{L^2}}, \] where $\overline{x}_j$ is the $j$-th ensemble member. \noindent We also introduce the mean $L^2$-norm relative errors \[ e_s(t_k):=\frac{1}{N_{l,s}}\sum\limits^{N_{l,s}}_{i=1}\frac{\|x_i(t_k)-\widehat{x}^c_i(t_k)\|_{L^2}}{\|x_i(t_k)\|_{L^2}},\quad e_f(t_k):=\frac{1}{N_{l,f}}\sum\limits^{N_{l,f}}_{i=1}\frac{\|x_i(t_k)-\widehat{x}^c_i(t_k)\|_{L^2}}{\|x_i(t_k)\|_{L^2}} \] to compute the error between the solution $\widehat{\bf x}^c$ of equation~\eqref{eq:x} (the true solution in the uncertainty quantification context), for which Lagrangian particles are advected by the spatially averaged velocity $\overline{\bf u}$, and the deterministic solution $\bf x$ of equation~\eqref{eq:dx}, for which Lagrangian particles are moved by the high-resolution velocity~$\bf u$. The time index $k=k_t\cup k_f$ is split into a training period $k_t=[0,299]$~days (the period over which the EOFs have been computed) and a forecast period $k_f=[300,365]$~days; $N_l=N_{l,s}\cup N_{l,f}$ is the total number of Lagrangian particles. 
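These diagnostics are straightforward ensemble statistics; the following is a minimal sketch in Python (array shapes and names are hypothetical):

```python
import numpy as np

def error_spread(x_true, x_ens):
    """L2-error spread [min_j e_j, max_j e_j] over an ensemble.
    x_true: (N_l, d) true particle positions at time t_k;
    x_ens:  (N_a, N_l, d) ensemble of stochastic positions.
    e_j is the mean relative error of ensemble member j over particles."""
    num = np.linalg.norm(x_ens - x_true[None], axis=-1)  # (N_a, N_l)
    den = np.linalg.norm(x_true, axis=-1)[None]          # (1, N_l)
    e = (num / den).mean(axis=1)                         # (N_a,)
    return e.min(), e.max()

def mean_relative_error(x_fine, x_coarse):
    """Mean relative L2 error e(t_k) between particles advected by the
    high-resolution velocity and by the coarse-grained velocity."""
    num = np.linalg.norm(x_fine - x_coarse, axis=-1)
    den = np.linalg.norm(x_fine, axis=-1)
    return (num / den).mean()
```

The same two functions are evaluated separately over the slow- and fast-flow particle sets to obtain $\widetilde{S}_s$, $\widetilde{S}_f$, $e_s$ and $e_f$.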
The subdivision of the time interval into training and forecast subintervals allows us to study how the parameterisation performs after the training period, when no data are available for computing the EOFs. The dependence of the averaged area of the stochastic cloud for the slow and fast flow regions on the number of EOFs and the size of the stochastic ensemble is presented in Figure~\ref{fig:cloud_area_fast_slow_flow}. \begin{figure}[H] \centering \hspace*{-2cm} \begin{tabular}{cc} \begin{minipage}{0.02\textwidth}\rotatebox{0}{$\overline{A}^c$}\end{minipage} & \begin{minipage}{0.5\textwidth}\includegraphics[width=10cm]{cloud_area_fast_slow_flow.png}\end{minipage}\\ & \begin{minipage}{0.5\textwidth}\hspace*{4.5cm} $t\, {\rm [days]}$\end{minipage}\\ \end{tabular} \caption{Shown is the dependence of the averaged area of the stochastic cloud for the slow, $\overline{A}^c_s$, and fast, $\overline{A}^c_f$, flow regions on the number of EOFs, $N_{\xi}$, and the size of the stochastic ensemble, $N_a$. The results presented in the figure are in good agreement with the analysis of instantaneous snapshots (see Figures~\ref{fig:eof64_100_400} and~\ref{fig:eof128_100_400}). In particular, the averaged area of the stochastic cloud $\overline{A}^c$ is significantly influenced by the three parameters we identified for the case of instantaneous flows: the number of EOFs, the size of the stochastic ensemble, and the flow velocity. More importantly, the qualitative behavior of $\overline{A}^c$ is similar to that of $A^c$ (the area of the stochastic cloud associated with a given Lagrangian particle). Namely, as the size of the stochastic ensemble, $N_a$, increases so does the area of the cloud (for example, compare the red and brown lines, for which the ensemble size is $N_a=100$ and $N_a=400$, respectively). 
The increase in the number of EOFs, $N_{\xi}$, also leads to a larger area of the cloud (for instance, compare the red and magenta lines, for which $N_{\xi}=64$ and $N_{\xi}=128$, respectively). The same is true for the flow velocity: the faster the flow, the larger the stochastic cloud (compare the red and blue lines, which correspond to the fast and slow velocity regions, respectively). The most important observation here is that we can estimate the contribution of each parameter to the parameterisation. The size of the stochastic ensemble and the number of EOFs have rather small effects on the area of the stochastic cloud. The most significant contribution comes from the velocity of the flow. Namely, the size of the stochastic cloud for fast flows is always larger than that for slow flows (compare the upper four curves with the lower four curves in the plot). } \label{fig:cloud_area_fast_slow_flow} \end{figure} Upon analysing the results presented in Figure~\ref{fig:cloud_area_fast_slow_flow}, we have found that the averaged area of the stochastic cloud $\overline{A}^c$ depends significantly on the number of EOFs, the size of the stochastic ensemble, and the flow velocity. This dependence is qualitatively the same as that for the area of the stochastic cloud, $A^c$, associated with a given Lagrangian particle (see Figures~\ref{fig:eof64_100_400} and~\ref{fig:eof128_100_400}). In particular, as the size of the stochastic ensemble or the number of EOFs increases, so does the area of the cloud. These results stay the same for the stochastic QG model studied in Section~\ref{sec:xi_SQG}. With respect to the flow velocity, we observe that the faster the flow is, the larger the stochastic cloud becomes. As seen in Figure~\ref{fig:cloud_area_fast_slow_flow}, the size of the stochastic ensemble and the number of EOFs have a smaller effect on the area of the stochastic cloud. The major contribution comes from the velocity of the flow. 
Having studied the influence of different parameters on the area of the stochastic cloud, we can now perform uncertainty quantification tests, as presented in Figure~\ref{fig:error_spread_fast_slow_flow}. \begin{figure}[H] \centering \includegraphics[scale=0.325]{fig11.png} \caption{Shown is the dependence of the $L^2$-error spread for the slow, $\widetilde{S}_s(t_k)$, and fast, $\widetilde{S}_f(t_k)$, flow, and of the mean $L^2$-norm relative errors $e_s(t_k)$ and $e_f(t_k)$, on the number of EOFs, $N_{\xi}$, and the size of the stochastic ensemble, $N_a$. The presented results show that the $L^2$-norm relative error between the true solution $\widehat{\bf x}^c$, for which Lagrangian particles are advected by the spatially averaged velocity $\overline{\bf u}$ (see equation~\eqref{eq:x}), and the deterministic solution $\bf x$, for which Lagrangian particles are moved by the high-resolution velocity~$\bf u$ (see equation~\eqref{eq:dx}), is small over the whole time interval for both slow and fast flows. Moreover, this error is also enclosed within the spread of stochastic solutions over the whole time interval. Most importantly, the error remains small not only over the training interval, but also over the forecast interval. This confirms that the leading EOFs properly capture the spatial structure of the flow, and the parameterisation performs equally well for both fast and slow flows. Along with the uncertainty quantification results, we can ask the question: ``How does the stochastic spread depend on the number of EOFs, the size of the ensemble, and the flow dynamics?'' As seen in the figure, the presented results are in good agreement with the ones in Figure~\ref{fig:cloud_area_fast_slow_flow}. The stochastic spread is significantly influenced by the number of EOFs, the size of the stochastic ensemble, and the flow velocity. 
More specifically, the spread widens as the number of EOFs, $N_{\xi}$, increases (compare the red and magenta spreads (Figures~{\bf(a)} and~{\bf(b)}), for which $N_{\xi}=64$ and $N_{\xi}=128$, respectively). As the size of the stochastic ensemble, $N_a$, grows, so does the spread (compare the red and brown spreads (Figures~{\bf(a)} and~{\bf(c)}), for which the ensemble size is $N_a=100$ and $N_a=400$, respectively). The velocity of the flow has a much more noticeable effect on the spread: the faster the flow, the wider the stochastic spread (compare the red and blue spreads (Figure~{\bf a}), which correspond to the fast and slow velocity regions, respectively). Thus, we conclude that the size of the stochastic ensemble and the number of EOFs have a smaller effect on the width of the stochastic spread, and the major contribution is given by the velocity of the flow, as also confirmed by the results in Figure~\ref{fig:cloud_area_fast_slow_flow}. } \label{fig:error_spread_fast_slow_flow} \end{figure} The uncertainty quantification results presented in Figure~\ref{fig:error_spread_fast_slow_flow} show that the $L^2$-norm relative error between the true solution $\widehat{\bf x}^c$ (computed with the spatially averaged velocity $\overline{\bf u}$, equation~\eqref{eq:x}) and the deterministic solution $\bf x$ (computed with the high-resolution velocity $\bf u$, equation~\eqref{eq:dx}) is small and contained in the spread of stochastic solutions over the whole time interval for both slow and fast flows. Moreover, the error remains small not only within the training interval but also within the forecast interval. The results in Figure~\ref{fig:error_spread_fast_slow_flow} are in good agreement with the ones presented in Figure~\ref{fig:cloud_area_fast_slow_flow}. In particular, the spread gets wider upon increasing either the number of EOFs or the size of the stochastic ensemble. In addition, the faster the flow is, the wider the stochastic spread becomes. 
However, the size of the stochastic ensemble and the number of EOFs have a smaller effect on the width of the stochastic spread than the velocity of the flow, as also confirmed by the results in Figure~\ref{fig:cloud_area_fast_slow_flow}. Overall, we conclude that the leading EOFs properly capture the spatial structure of the flow, and the parameterisation performs equally well for both fast and slow flows. \subsection{Application of EOFs to the stochastic QG equations\label{sec:xi_SQG}} The Lagrangian evolution studied in the previous section demonstrates encouraging results. However, it cannot guarantee that the application of EOFs to the stochastic QG equations~\eqref{eq:SLTpv} is equally beneficial. Therefore, this section focuses on uncertainty quantification for the stochastic QG model. Here, we impose more stringent restrictions upon the parameterisation (compared to those given in Section~\ref{sec:ICs}). In particular, we analyse how long the true deterministic solution remains within one standard deviation of the stochastic ensemble, rather than within the whole stochastic ensemble as in Sections~\ref{sec:ICs} and~\ref{sec:approx_lagrangian_evol}. In other words, we study how the function \begin{equation*} \widetilde{T}_{\mathcal{S}_{\sigma}}:=\frac{1}{|T|}\int\limits_{T}\widetilde{\delta}(q^a) \, dt,\qquad \widetilde{\delta}(q^a)=\left\{ \begin{aligned} 1\quad \text{if } q^a\in\mathcal{S}_{\sigma},\\ 0\quad \text{if } q^a\not\in\mathcal{S}_{\sigma}, \end{aligned} \right. \label{eq:T_S2} \end{equation*} depends on the number of EOFs and the size of the stochastic ensemble. As in Section~\ref{sec:ICs}, $\widetilde{T}_{\mathcal{S}_{\sigma}}$ is the fraction of the time period $T$ spent by the true deterministic solution, $q^a$, within one standard deviation of the stochastic ensemble, $\mathcal{S}_{\sigma}=[\mathcal{S}-\sigma(\mathcal{S}),\mathcal{S}+\sigma(\mathcal{S})]$, where $\sigma(\mathcal{S})$ is the standard deviation of the ensemble $\mathcal{S}$. 
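In discrete form, $\widetilde{T}_{\mathcal{S}_{\sigma}}$ is simply the fraction of time steps at which the true field lies within one standard deviation of the ensemble. A minimal sketch in Python, interpreting $\mathcal{S}$ pointwise as the ensemble mean (array names are hypothetical):

```python
import numpy as np

def fraction_within_one_std(q_true, q_ens):
    """Discrete analogue of T~_{S_sigma}: the fraction of time steps at
    which the true deterministic solution lies within one standard
    deviation of the ensemble mean.
    q_true: (T, ...) true solution;  q_ens: (N_a, T, ...) ensemble.
    Returns a value in [0, 1] for each remaining grid point."""
    mean = q_ens.mean(axis=0)
    std = q_ens.std(axis=0)
    inside = np.abs(q_true - mean) <= std   # pointwise indicator delta~
    return inside.mean(axis=0)              # average over the time axis
```

Evaluated on a full space-time field, this returns exactly the per-gridpoint values plotted in the colour maps below (blue for 0, red for 1).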
The results are presented in Figures~\ref{fig:tildeT_EOF_ensemble_mu4D-8} and~\ref{fig:tildeT_EOF_ensemble_mu4D-7}. \begin{figure}[H] \centering \includegraphics[scale=0.275]{fig12.png} \caption{Shown is the dependence of $\widetilde{T}_{\mathcal{S}_{\sigma}}$ on the number of EOFs, $N_{\xi}$, and the size of the stochastic ensemble, $N_a$, over the time period $T=[0,24]$ hours for the \textbf{\textit{heterogeneous flow ($\boldsymbol{\mu}=\bf 4\boldsymbol{\times}10^{-8}\, s^{-1}$)}} presented in Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-8}. The blue color in the colorbar corresponds to $\widetilde{T}_{\mathcal{S}_{\sigma}}=0$ (the stochastic spread never captures the true deterministic solution over the time period $T$), while the red color corresponds to $\widetilde{T}_{\mathcal{S}_{\sigma}}=1$ (the stochastic spread always captures the true deterministic solution over the time period $T$). As seen, the smoother the field is, the longer the true deterministic solution, $q^a$, remains within the spread of stochastic solutions (see, for example, Figure~{\bf (a)}, showing the stream function $\psi_1$, velocity $\mathbf{V}_1$ and potential vorticity (PV) $q_1$). The stream function is enclosed within the spread for a longer period of time than the velocity and PV, while the spread captures the velocity for a longer time than PV. Moreover, using more EOFs (compare Figures~{\bf (a)} and~{\bf (b)}, for which $N_{\xi}=1$ (23\% of the flow variability) and $N_{\xi}=2$ (42\% of the flow variability), respectively) leads to a better coverage of the true solution by the spread. Surprisingly, using even more EOFs does not lead to significantly better results; for example, compare Figures~{\bf (b)} and~{\bf (c)}, which present the uncertainty quantification results for $N_{\xi}=2$ (42\% of the flow variability) and $N_{\xi}=4$ (60\% of the flow variability) leading EOFs, respectively. 
The same conclusion holds for the size of the stochastic ensemble: the larger the ensemble, the longer the spread captures the true solution (compare Figures~{\bf (a)} and~{\bf (d)}, for which $N_a=100$ and $N_a=200$, respectively). However, using more ensemble members does not result in a much better coverage of the true solution by the stochastic spread (compare Figures~{\bf (d)} and~{\bf (e)}, for which $N_a=200$ and $N_a=400$, respectively). The uncertainty quantification results presented here are in good qualitative agreement with the Lagrangian simulations given in Section~\ref{sec:approx_lagrangian_evol}. Thus, we conclude that uncertainty quantification tests for Lagrangian simulations can be used to qualitatively quantify uncertainty for the stochastic QG model. This observation allows us to significantly reduce the computational resources needed for uncertainty quantification tests, since Lagrangian simulations are computationally much less intensive than those of the stochastic QG model. } \label{fig:tildeT_EOF_ensemble_mu4D-8} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.295]{fig13.png} \caption{Shown is the dependence of $\widetilde{T}_{\mathcal{S}_{\sigma}}$ on the number of EOFs, $N_{\xi}$, and the size of the stochastic ensemble, $N_a$, over the time period $T=[0,24]$ hours for the \textbf{\textit{homogeneous flow ($\boldsymbol{\mu}=\bf 4\boldsymbol{\times}10^{-7}\, s^{-1}$)}} presented in Figure~\ref{fig:qf_qa_qc_qam_qcm_mu1D-7}. The blue color in the colorbar corresponds to $\widetilde{T}_{\mathcal{S}_{\sigma}}=0$ (the stochastic spread never captures the true deterministic solution over the time period $T$), while the red color corresponds to $\widetilde{T}_{\mathcal{S}_{\sigma}}=1$ (the stochastic spread always captures the true deterministic solution over the time period $T$). 
As for the heterogeneous flow, the smoother the field is, the longer the true deterministic solution, $q^a$, remains within the spread of stochastic solutions (see Figure~{\bf (a)}, showing the stream function $\psi_1$, velocity $\mathbf{V}_1$, and potential vorticity $q_1$). The stream function is enclosed within the spread for longer than the velocity and PV, while the spread captures the velocity for a longer time period than PV. Using more EOFs (compare Figures~{\bf (a)} and~{\bf (b)}, for which $N_{\xi}=1$ (15\% of the flow variability) and $N_{\xi}=2$ (28\% of the flow variability), respectively) leads to a better coverage of the true solution by the spread. As in the heterogeneous case, using even more EOFs does not lead to significantly better results; compare Figures~{\bf (b)} and~{\bf (c)}, which present the uncertainty quantification results for $N_{\xi}=2$ (28\% of the flow variability) and $N_{\xi}=4$ (48\% of the flow variability) leading EOFs, respectively. The same conclusion holds for the size of the stochastic ensemble: the larger the ensemble, the longer the spread captures the true solution (compare Figures~{\bf (a)} and~{\bf (d)}, for which $N_a=100$ and $N_a=200$, respectively). However, using more ensemble members does not result in a much better coverage of the true solution by the stochastic spread (compare Figures~{\bf (d)} and~{\bf (e)}, for which $N_a=200$ and $N_a=400$, respectively). The uncertainty quantification results for the homogeneous flow are qualitatively the same as those for the heterogeneous flow presented in Figure~\ref{fig:tildeT_EOF_ensemble_mu4D-8}. Thus, the parameterisation was found to perform equally well for both homogeneous and heterogeneous flows. 
} \label{fig:tildeT_EOF_ensemble_mu4D-7} \end{figure} As seen in Figures~\ref{fig:tildeT_EOF_ensemble_mu4D-8} and~\ref{fig:tildeT_EOF_ensemble_mu4D-7}, the uncertainty quantification results are qualitatively the same for the heterogeneous and homogeneous flows. In particular, the smoother the field is, the longer the true deterministic solution, $q^a$, remains enclosed within the spread of stochastic solutions. In both cases, the stream function $\psi_1$ is enclosed within the spread for a longer period of time than the velocity $\mathbf{V}_1$ and potential vorticity $q_1$, while the spread captures the velocity for a longer time than the potential vorticity. Moreover, using more EOFs results in a better coverage of the true solution with the stochastic spread. However, we found that for both the heterogeneous and homogeneous flows, using more than two leading EOFs does not lead to significantly better results. The same conclusion holds for the size of the stochastic ensemble: the larger the ensemble, the longer the spread captures the true solution, but using more than 200 ensemble members does not result in a much better coverage of the true solution with the stochastic spread. Overall, we conclude that the proposed parameterisation performs equally well for both simpler homogeneous flows and more complex heterogeneous ones. The uncertainty quantification results presented here are in good qualitative agreement with the Lagrangian simulations given in Section~\ref{sec:approx_lagrangian_evol}. Thus, uncertainty quantification tests in the Lagrangian framework can be used to qualitatively quantify uncertainty for the stochastic QG model. This important observation allows the use of Lagrangian simulations, which are computationally much less intensive than those of the stochastic QG model.
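The capture-time diagnostic used above can be sketched in a few lines (a minimal illustration of the idea, not the authors' code; the array layout and function name are assumptions): at each grid point, the diagnostic is the fraction of the time interval during which the true solution lies inside the min--max envelope of the stochastic ensemble, so a value of 0 corresponds to the blue color in the figures and 1 to the red.

```python
import numpy as np

def capture_fraction(truth, ensemble):
    """Fraction of time steps at which the true solution lies within
    the min-max spread of the stochastic ensemble, per grid point.

    truth:    array (n_time, n_points) -- true deterministic solution
    ensemble: array (n_ens, n_time, n_points) -- stochastic realizations
    Returns an array (n_points,) with values in [0, 1]: 0 means the
    spread never captures the truth, 1 means it always does.
    """
    lo = ensemble.min(axis=0)          # lower envelope, (n_time, n_points)
    hi = ensemble.max(axis=0)          # upper envelope, (n_time, n_points)
    captured = (truth >= lo) & (truth <= hi)
    return captured.mean(axis=0)       # average over the time axis

# toy example: 3 ensemble members, 100 time steps, 5 grid points
rng = np.random.default_rng(0)
truth = np.zeros((100, 5))
ens = rng.normal(0.0, 1.0, size=(3, 100, 5))
frac = capture_fraction(truth, ens)    # per-point values in [0, 1]
```

Averaging this fraction over grid points, or mapping it over the domain, reproduces the kind of summary shown in the figures.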
Another important question is how the uncertainty in the stochastic initial condition is propagated by the deterministic QG model, and how the resulting uncertainty compares with the stochastic case. To do so, we start the deterministic QG model from the stochastic initial condition at time $t=0$ (see Section~\ref{sec:ICs} for details), run it for each independent realization of the Brownian noise $W$, and compare the behavior of the deterministic spread (denoted by $T_{\mathcal{S}_{\sigma}}$) for the deterministic QG model with the stochastic spread for the stochastic QG model (denoted by $\widetilde{T}_{\mathcal{S}_{\sigma}}$). The results of this simulation for the heterogeneous and homogeneous flow are given in Figures~\ref{fig:MC3test_mu4D-8} and~\ref{fig:MC3test_mu4D-7}, respectively. \begin{figure}[H] \centering \includegraphics[scale=0.35]{fig14.png} \caption{Shown is the spread $\widetilde{T}_{\mathcal{S}_{\sigma}}$ for the stochastic QG model (top row) and spread $T_{\mathcal{S}_{\sigma}}$ for the deterministic QG model (bottom row) for the \textbf{\textit{heterogeneous flow ($\boldsymbol{\mu}=\bf 4\boldsymbol{\times}10^{-8}\, s^{-1}$)}}. The stochastic spread $\widetilde{T}_{\mathcal{S}_{\sigma}}$ has been computed for one leading EOF with 100 independent realizations of the Brownian noise over the time period $T=[0,24]$ hours. The deterministic spread $T_{\mathcal{S}_{\sigma}}$ has been computed with the deterministic QG model started at $t=0$ and run from the stochastic initial condition (see Section~\ref{sec:ICs} for details) with 100 independent realizations of the Brownian noise over the same period of time. The blue color in the colorbar indicates that the spread never captures the true deterministic solution over the time period $T$, while the red one indicates that the stochastic spread always captures the true deterministic solution over the time period $T$.
As seen in the plot, the true deterministic solution, $q^a$, remains enclosed within the stochastic spread much longer than within the deterministic spread (compare either the stream function $\psi_1$, velocity $\mathbf{V}_1$, or potential vorticity $q_1$ for the stochastic spread (top row) and deterministic spread (bottom row)). Moreover, if we compare the deterministic and stochastic spreads at individual grid points we find that many more points in the domain are captured by the stochastic spread than by the deterministic one. Thus, we conclude that, for data assimilation, the proposed stochastic parameterisation would be preferable to the deterministic QG model. } \label{fig:MC3test_mu4D-8} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.35]{fig15.png} \caption{Shown is the spread $\widetilde{T}_{\mathcal{S}_{\sigma}}$ for the stochastic QG model (top row) and spread $T_{\mathcal{S}_{\sigma}}$ for the deterministic QG model (bottom row) for the \textbf{\textit{homogeneous flow ($\boldsymbol{\mu}=\bf 4\boldsymbol{\times}10^{-7}\, s^{-1}$)}}. The stochastic spread $\widetilde{T}_{\mathcal{S}_{\sigma}}$ has been computed for one leading EOF with 100 independent realizations of the Brownian noise over the time period $T=[0,24]$ hours. The deterministic spread $T_{\mathcal{S}_{\sigma}}$ has been computed with the deterministic QG model started at $t=0$ and run from the stochastic initial condition (see Section~\ref{sec:ICs} for details) with 100 independent realizations of the Brownian noise over the same period of time. The blue color in the colorbar indicates that the spread never captures the true deterministic solution over the time period $T$, while the red one indicates that the stochastic spread always captures the true deterministic solution over the time period $T$.
As seen in the plot, the true deterministic solution, $q^a$, remains enclosed within the stochastic spread much longer than within the deterministic spread (compare either the stream function $\psi_1$, velocity $\mathbf{V}_1$, or potential vorticity $q_1$ for the stochastic spread (top row) and deterministic spread (bottom row)). As for the heterogeneous flow (Figure~\ref{fig:MC3test_mu4D-8}), the stochastic spread captures many more individual grid points in the domain than the deterministic spread. Thus, we have found that for data assimilation the proposed parameterisation would be preferable to the deterministic QG model; not only for heterogeneous flows, but also for homogeneous flows. } \label{fig:MC3test_mu4D-7} \end{figure} In summary, by comparing the uncertainty quantification results for the heterogeneous flow (Figure~\ref{fig:MC3test_mu4D-8}) and homogeneous flow (Figure~\ref{fig:MC3test_mu4D-7}), we have found that the stochastic spread captures the true deterministic solution $q^a$ (computed on the coarse grid $G^c$) for much longer times and at many more individual grid points in the computational domain. Therefore, for data assimilation, the proposed parameterisation would be considerably preferable to the deterministic QG model; not only for heterogeneous flows (Figure~\ref{fig:MC3test_mu4D-8}), but also for homogeneous ones (Figure~\ref{fig:MC3test_mu4D-7}). Overall, we conclude that the parameterisation of the stochastic QG model is robust to large variations of the flow dynamics and governing parameters, and can be equally well applied to both homogeneous and heterogeneous flows. \section{Conclusion and future work} \label{sec:concl} In this paper we have introduced a stochastic parameterisation for unresolved eddy motions in a two-layer quasi-geostrophic channel model with forcing and dissipation.
The parameterisation is based upon the idea of ``transport noise'', which models the modifications to the velocity field due to unresolved dynamics. This model assumes that the transport of large scale components is accurate, but that the velocity field used to transport these components is missing contributions from unresolved scales. We first introduced a time-integration scheme for the stochastic PDE, showed that it is in Stratonovich form, and proved its consistency as $\Delta t\to 0$. Then we described a procedure for extracting the stochastic forcing by post-processing high-resolution simulations, and demonstrated the procedure by using uncertainty quantification experiments for both the SDE and the stochastic QG model for homogeneous and heterogeneous flow dynamics. The results show that the proposed parameterisation is efficient and effective for both homogeneous and heterogeneous flows, and lay a solid foundation for data assimilation. In future work, we intend to use this approach as the basis for data assimilation algorithms, to investigate the assimilation of data from a high-resolution deterministic model into a low-resolution stochastic model. We also intend to examine the derivation of ``prognostic'' parameterisations, where the stochastic forcing patterns are determined from the coarse model itself using physical principles, rather than the ``diagnostic'' parameterisations of this paper, where they are determined from high-resolution simulations. The diagnostic approach proposed in this paper will provide important insight by comparing the diagnosed forcing with the state of the stochastic model. \section{Acknowledgments} The authors thank The Engineering and Physical Sciences Research Council for the support of this work through the grant EP/N023781/1.
We also thank Pavel Berloff, Mike Cullen, John Gibbon, Georg Gottwald, Nikolas Kantas, Etienne Memin, Sebastian Reich, Valentin Resseguier, and Aretha Teckentrup for useful and constructive discussions throughout the preparation of this work. \clearpage \bibliographystyle{apalike}
\section{Introduction} \label{intro} Over the past century, General Relativity has been supported by substantial observational evidence in many astrophysical scenarios, ranging from Eddington's measurement of light deflection in 1919 to the recent direct observation of gravitational waves by the LIGO collaboration \cite{Dyson:1920cwa,Abbott:2016blz}. These observational data served as motivation to consider modified gravity as a framework that has given rich support to phenomenology \cite{Casalino:2018mna}. However, some problems considered fundamental remain to be understood in the context of General Relativity, such as dark matter, dark energy, and the inflationary phase of the Universe. In recent years, modifications of General Relativity have been proposed; to carry out such modifications, it is necessary to maintain some of its essential properties, namely second-order equations of motion resulting from an action invariant under diffeomorphisms and Lorentz transformations \cite{Heisenberg:2018vsk}. By maintaining these properties, additional propagating degrees of freedom can be consistently added in the gravity sector by including additional fields such as scalars, vectors, or tensors. One way to deal with these problems in Einstein's gravity has been to couple the theory to scalar fields. These efforts led to the development of the now well-known Galileon theories, which are scalar-tensor theories \cite{Nicolis:2008in}. In addition, these studies led to the rediscovery of Horndeski's gravity \cite{Horndeski:1974wa}.
This theory was presented in 1974 by Horndeski (for further discussion see \cite{Horndeski:1974wa,Bruneton:2012zk,Brito:2018pwe,Santos:2019ljs}); it is a scalar-tensor theory with second-order field equations and a second-order energy-momentum tensor \cite{Heisenberg:2018vsk,Horndeski:1974wa,Cisterna:2014nua,Anabalon:2013oea,Bravo-Gaete:2013dca,Rinaldi:2012vy}. The Lagrangian of this theory produces second-order equations of motion \cite{Cisterna:2014nua,Anabalon:2013oea,Deffayet:2011gz,VanAcoleyen:2011mj,Gomes:2015dhl,Rinaldi:2016oqp,Cisterna:2017jmv,Zumalacarregui:2013pma,Feng:2015oea,Brito:2019ose,Cisterna:2016vdx}, and the theory includes four arbitrary functions of the scalar field and its kinetic term. In recent years, Horndeski gravity \cite{Cisterna:2016vdx} has been shown to support static black hole solutions with asymptotically anti-de Sitter behavior. In general, on astrophysical scales, black holes are not expected to be static and spherically symmetric, but rather rotating, since they carry angular momentum \cite{Tattersall:2018nve}. Indeed, rotating black holes are the objects expected to be involved in the events detected by the LIGO and VIRGO gravitational-wave observatories \cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2017vtc}. In \cite{Tattersall:2018nve}, the author extended the calculation of perturbations from a Schwarzschild black hole to a slowly rotating Kerr black hole, showing agreement of the analytically computed quasinormal modes with the numerically calculated frequencies. However, the `bald' black holes of Horndeski's gravity are identical to their counterparts in General Relativity. They can display a modified gravitational-wave signal during the ringdown, characterized by only one or two observational parameters, which is useful for attempts to constrain gravity in this new era of gravitational-wave astronomy.
In addition to the results of \cite{Tattersall:2018nve} exploring the equivalence between `bald' black holes in Horndeski's gravity and General Relativity, another scenario, explored in \cite{Bravo-Gaete:2014haa}, showed that a particular truncation of Horndeski's action reduces it to the Einstein-Hilbert Lagrangian with a cosmological constant and a scalar field, whose dynamics is governed by the usual kinetic term together with a non-minimal kinetic coupling. In this case, the radial component of the conserved current has to vanish, providing a solution with the geometry of a BTZ black hole and a radial scalar field that is well defined on the horizon. Slowly rotating black holes in the context of Horndeski's gravity were presented in \cite{Maselli:2015yva}, where first-order rotational corrections were considered for a wide range of black hole solutions in Horndeski's gravity. It has been shown that the drag function, which describes the leading-order rotational corrections, is precisely the same as in General Relativity for all known Horndeski black hole solutions. A very relevant fact for these slowly rotating black holes in Horndeski's gravity is that the no-hair theorem, valid in the static regime, can be extended to apply also to slowly rotating black holes \cite{Maselli:2015yva,Hui:2012qt,Sotiriou:2015pka}. In this work, we consider the computational complexity of a rotating black hole in (2+1)-dimensional spacetime in Horndeski's gravity. For further discussions of the computational complexity of black holes, see \cite{Nagasaki:2017kqe,Nagasaki:2018csh,deBoer:2017xdk,Banerjee:2019vff,Brown:2015lvg,Brown:2015bva}. For this black hole, butterfly effects caused by a small disturbance in an asymptotic region were presented in \cite{Reynolds:2016pmi}. In our study, however, we consider the effect of a string moving in the spacetime geometry of the BTZ black hole.
This work is organized as follows: In Sec.~\ref{v1}, we present Horndeski gravity. In Sec.~\ref{v2}, we address the issue of finding a black hole ansatz and explore the effect of a probe string in the black hole background. In Sec.~\ref{v3}, we investigate the Nambu-Goto (NG) action of the string moving in the black hole spacetime. Finally, in Sec.~\ref{v4}, we present our conclusions. \section{Horndeski gravity}\label{v1} In this section, we present the John Lagrangian model \cite{Bruneton:2012zk}, a specific model related to the F4 theories (for further discussion of F4 theories see \cite{Charmousis:2011bf,Charmousis:2011ea}), which form a particular subclass of Horndeski's theory. This subclass has attracted attention in recent years in investigations involving the standard kinetic term for the scalar \cite{Starobinsky:2016kua}, providing a model that is sometimes called Fab Five (F5), which gained high visibility in the study of cosmological scenarios \cite{Santos:2019ljs}. The John Lagrangian model \cite{Bruneton:2012zk} is given by the following action \begin{equation} S[g_{\mu\nu},\phi]=\int{d^{4}x\sqrt{-g}\left[\kappa(R-2\Lambda)-\frac{1}{2}(\alpha g_{\mu\nu}-\gamma G_{\mu\nu})\nabla^{\mu}\phi\nabla^{\nu}\phi\right]}+S_{m},\label{1} \end{equation} where $\kappa=(16\pi G)^{-1}$, we define a new field $\phi^{'}\equiv\psi$, and $S_{m}$ describes ordinary matter, which is assumed to be a perfect fluid. Note that in this action the field has dimension of $({\rm mass})^{2}$, with the parameters $\alpha$ and $\gamma$ controlling the strength of the kinetic couplings: $\alpha$ is dimensionless and $\gamma$ has dimension of $({\rm mass})^{-2}$.
The Einstein-Horndeski field equations follow from varying the action (\ref{1}), $\delta S[g_{\mu\nu},\phi]$, assuming $S_{m}={\rm constant}$; the equations of motion take the form \begin{eqnarray} E_{\mu\nu}[g_{\mu\nu},\phi]&=&G_{\mu\nu}+\Lambda g_{\mu\nu}-\frac{\alpha}{2\kappa}\left(\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\lambda}\phi\nabla^{\lambda}\phi\right)\label{2}\\ &-&\frac{\gamma}{2\kappa}\left(\frac{1}{2}\nabla_{\mu}\phi\nabla_{\nu}\phi R-2\nabla_{\lambda}\phi\nabla_{(\mu}\phi R^{\lambda}_{\nu)}-\nabla^{\lambda}\phi\nabla^{\rho}\phi R_{\mu\lambda\nu\rho}\right)\nonumber\\ &-&\frac{\gamma}{2\kappa}\left(-(\nabla_{\mu}\nabla^{\lambda}\phi)(\nabla_{\nu}\nabla_{\lambda}\phi)+(\nabla_{\mu}\nabla_{\nu}\phi)\Box\phi+\frac{1}{2}G_{\mu\nu}(\nabla\phi)^{2}\right)\nonumber\\ &-&\frac{\gamma}{2\kappa}\left[-g_{\mu\nu}\left(-\frac{1}{2}(\nabla^{\lambda}\nabla^{\rho}\phi)(\nabla_{\lambda}\nabla_{\rho}\phi)+\frac{1}{2}(\Box\phi)^{2}-(\nabla_{\lambda}\phi\nabla_{\rho}\phi)R^{\lambda\rho}\right)\right],\nonumber\\ E_{\phi}[g_{\mu\nu},\phi]&=&\nabla_{\mu}J^{\mu};\quad J^{\mu}=[(\alpha g^{\mu\nu}-\gamma G^{\mu\nu})\nabla_{\nu}\phi].\label{3} \end{eqnarray} Using the fact that $E_{\mu\nu}[g_{\mu\nu},\phi]=0$ and $E_{\phi}[g_{\mu\nu},\phi]=0$, we can write \begin{equation} G_{\mu\nu}+\Lambda g_{\mu\nu}=\frac{1}{2\kappa}T_{\mu\nu},\label{4} \end{equation} where $T_{\mu\nu}=\alpha T^{(1)}_{\mu\nu}+\gamma T^{(2)}_{\mu\nu}$ and the energy-momentum tensors $T^{(1)}_{\mu\nu}$ and $T^{(2)}_{\mu\nu}$ take the following form \begin{equation}\begin{array}{rclrcl} T^{(1)}_{\mu\nu}&=&\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\lambda}\phi\nabla^{\lambda}\phi\\ T^{(2)}_{\mu\nu}&=&\frac{1}{2}\nabla_{\mu}\phi\nabla_{\nu}\phi R-2\nabla_{\lambda}\phi\nabla_{(\mu}\phi R^{\lambda}_{\nu)}-\nabla^{\lambda}\phi\nabla^{\rho}\phi R_{\mu\lambda\nu\rho}\\
&-&(\nabla_{\mu}\nabla^{\lambda}\phi)(\nabla_{\nu}\nabla_{\lambda}\phi)+(\nabla_{\mu}\nabla_{\nu}\phi)\Box\phi+\frac{1}{2}G_{\mu\nu}(\nabla\phi)^{2}\\ &-&g_{\mu\nu}\left[-\frac{1}{2}(\nabla^{\lambda}\nabla^{\rho}\phi)(\nabla_{\lambda}\nabla_{\rho}\phi)+\frac{1}{2}(\Box\phi)^{2}-(\nabla_{\lambda}\phi\nabla_{\rho}\phi)R^{\lambda\rho}\right].\label{5} \end{array}\end{equation} The scalar field equation is given by \begin{equation} \nabla_{\mu}[(\alpha g^{\mu\nu}-\gamma G^{\mu\nu})\nabla_{\nu}\phi]=0.\label{6} \end{equation} \section{Black hole solutions and probe string}\label{v2} Let us consider, in Horndeski's gravity, a string moving in the spacetime of the BTZ black hole \cite{Reynolds:2016pmi}. The BTZ black hole in ($2+1$) dimensions is given by \begin{equation} ds^{2}=-f(r)dt^{2}+r^{2}\left(d\chi-\frac{J}{r^{2}}dt\right)^{2}+\frac{dr^{2}}{f(r)},\label{7} \end{equation} where $J$ is the angular momentum. To escape the no-hair theorem, which has been well discussed in \cite{Bravo-Gaete:2013dca}, we impose that the radial component of the conserved current vanishes identically, without restricting the radial dependence of the scalar field: \begin{equation} \alpha g_{rr}-\gamma G_{rr}=0\label{8}. \end{equation} Recalling that $\phi^{'}(r)\equiv\psi(r)$, we can easily see that this condition eliminates the $\psi^{2}(r)$ factor from the current, regardless of its behavior on the horizon. It is possible to find the function $f(r)$ using equation (\ref{8}). Thus, equation (\ref{6}) is satisfied with the following solution \begin{eqnarray} f(r)&=&-M+\frac{\alpha r^{2}}{\gamma}+\frac{J^{2}}{r^{2}},\label{9}\\ \psi^{2}(r)&=&-\frac{2\kappa(\alpha+\gamma\Lambda)}{\alpha\gamma f(r)}.\label{10} \end{eqnarray} The Einstein-Horndeski field equations (\ref{4}) and (\ref{6}) are satisfied by these expressions. Moreover, as discussed in \cite{Anabalon:2013oea}, $\alpha/\gamma=l^{-2}_{AdS}$ defines an effective AdS radius $l_{AdS}$.
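A quick symbolic check of the horizon structure of the solution (\ref{9}) (a sketch using sympy; not part of the paper) confirms the two roots of $f(r)=0$, which appear later as the integration limits $r_{\pm}$:

```python
import sympy as sp

M, J, alpha, gamma, u = sp.symbols('M J alpha gamma u', positive=True)

# f(r) = -M + alpha*r^2/gamma + J^2/r^2 = 0; substitute u = r^2 and
# clear the denominator: alpha*u^2/gamma - M*u + J^2 = 0
roots = sp.solve(sp.Eq(alpha*u**2/gamma - M*u + J**2, 0), u)

# expected: r_pm^2 = (gamma*M/(2*alpha)) * (1 -/+ sqrt(1 - 4*alpha*J^2/(gamma*M^2)))
expected = [gamma*M/(2*alpha)*(1 - sp.sqrt(1 - 4*alpha*J**2/(gamma*M**2))),
            gamma*M/(2*alpha)*(1 + sp.sqrt(1 - 4*alpha*J**2/(gamma*M**2)))]
```

The sum and product of the roots, $r_{+}^{2}+r_{-}^{2}=\gamma M/\alpha$ and $r_{+}^{2}r_{-}^{2}=\gamma J^{2}/\alpha$, follow directly from the quadratic.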
For the case in which there are limitations on the storage of information, the information is delimited by the area of the black hole \cite{Brown:2015lvg,Brown:2015bva}. The black hole entropy, obtained by applying the first law of black hole thermodynamics $dM=TdS$, can be written as \begin{eqnarray} &&S=\frac{1}{2G}\int^{r_{h}}_{0}{\frac{1}{T(r_{h})}\frac{dM}{dr_{h}}dr_{h}}=\frac{2\pi r_{h}}{G}=\frac{A}{4G}\label{10.1}\\ &&M(r_{h})=\left(\frac{\alpha r^{2}_{h}}{\gamma}+\frac{J^{2}}{r^{2}_{h}}\right)\label{10.2}\\ &&T(r_{h})=\frac{1}{2\pi}\left(\frac{\alpha r_{h}}{\gamma}-\frac{J^{2}}{r^{3}_{h}}\right)\label{10.3} \end{eqnarray} which obeys the celebrated Hawking area law. If an object can be forced to undergo gravitational collapse by adding mass, the second law of thermodynamics insists that it must have less entropy than the resulting black hole. Now, we present the effect of the string moving in this spacetime geometry. For this, we calculate the induced metric using the parameters $\tau$ and $\sigma$ of the world-sheet of the fundamental string. These parameters are given as follows: \begin{eqnarray} t=\tau,\quad r=\sigma\quad,\chi=v\tau+\xi(\sigma),\label{11} \end{eqnarray} where $v$ is a constant velocity and $\xi(\sigma)$ is a function that determines the string's shape.
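The first-law bookkeeping in (\ref{10.1})--(\ref{10.3}) can be verified with a few lines of symbolic algebra (a sketch, not the paper's derivation): the integrand of (\ref{10.1}) reduces to the constant $2\pi/G$, so the integral gives $S=2\pi r_{h}/G=A/4G$.

```python
import sympy as sp

r, alpha, gamma, J, G = sp.symbols('r_h alpha gamma J G', positive=True)

M = alpha*r**2/gamma + J**2/r**2              # mass function, eq. (10.2)
T = (alpha*r/gamma - J**2/r**3)/(2*sp.pi)     # Hawking temperature, eq. (10.3)

# integrand of eq. (10.1): (1/(2G)) * (1/T) * dM/dr_h
integrand = sp.simplify(sp.diff(M, r)/T/(2*G))  # reduces to 2*pi/G
S = sp.integrate(integrand, (r, 0, r))          # entropy S(r_h) = 2*pi*r_h/G
```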
However, the metric induced on the world-sheet is given by \begin{eqnarray} &&ds^{2}_{ind}=H(\sigma)d\tau^{2}+G(\sigma)d\sigma^{2}+2F(\sigma)d\tau d\sigma\label{12}\\ &&H(\sigma)=-f(\sigma)+\left(v\sigma-\frac{J}{\sigma}\right)^{2}\nonumber\\ &&G(\sigma)=\frac{1}{f(\sigma)}+\sigma^{2}\xi^{'2}(\sigma)\nonumber\\ &&F(\sigma)=\xi^{'}(\sigma)(v\sigma^{2}-J)\nonumber \end{eqnarray} Using equation (\ref{8}), the solution for (\ref{12}) is given by \begin{eqnarray} &&f(\sigma)=\left(v\sigma-\frac{J}{\sigma}\right)^{2}\label{13}\\ &&\psi^{2}(\sigma)=\frac{4\kappa\Lambda G(\sigma)(-F^{2}(\sigma)+G(\sigma)H(\sigma))}{\alpha(-2F^{2}(\sigma)+G(\sigma)H(\sigma))}\label{13.1} \end{eqnarray} This solution satisfies all the equations (\ref{4}) and (\ref{6}). Now, integrating equation (\ref{6}), we can write \begin{eqnarray} \psi(\sigma)=\frac{-F^{2}(\sigma)+G(\sigma)H(\sigma)}{\alpha H(\sigma)}\label{13.2} \end{eqnarray} \section{Evaluation of the NG action}\label{v3} We now carry out the evaluation of the NG action, which was recently presented in \cite{Nagasaki:2017kqe,Nagasaki:2018csh,deBoer:2017xdk,Banerjee:2019vff}. Let us address the Nambu-Goto (NG) action, given by \begin{eqnarray} S_{NG}=-T_{s}\int{d\sigma^{2}\sqrt{-\det g_{ind}}},\label{14} \end{eqnarray} where $T_{s}$ is the fundamental string tension and the horizon is determined by $f(r)=0$. Adding the Wilson loop implies the insertion of a fundamental string whose worldsheet has a boundary on the Wilson loop. In this sense, we can calculate the NG action of this fundamental string.
Furthermore, the time derivative of the NG action, obtained by integrating the square root of the determinant of the induced metric, reads \begin{eqnarray} \frac{dS_{NG}}{dt}=T_{s}\int^{r_{+}}_{r_{-}}{d\sigma\sqrt{\xi^{'2}(\sigma)(v\sigma^{2}-J)^{2}}},\label{15} \end{eqnarray} The Lagrangian is given by \begin{eqnarray} \mathcal{L}=T_{s}\xi^{'}(\sigma)(v\sigma^{2}-J),\label{16} \end{eqnarray} which has the following equations of motion \begin{eqnarray} \frac{d}{d\sigma}\frac{\partial\mathcal{L}}{\partial\xi^{'}(\sigma)}-\frac{\partial\mathcal{L}}{\partial\xi(\sigma)}=0\label{17} \end{eqnarray} From the equation of motion (\ref{17}) one can easily show that $v=0$, which characterizes a stationary string. Then, from equation (\ref{15}), we have \begin{eqnarray} \frac{dS_{NG}}{dt}=\left.T_{s}J\xi(\sigma)\right|^{r_{+}}_{r_{-}},\label{18} \end{eqnarray} From equation (\ref{13}) combined with $H(\sigma)$, we can infer from equation (\ref{13.2}) that $\xi(\sigma)=c_{\xi}/J$, where $c_{\xi}$ is an integration constant; we consider both $c_{\xi}>0$ and $c_{\xi}<0$ in our analysis. Equation (\ref{18}) then gives: \begin{eqnarray} \frac{dS_{NG}}{dt}=T_{s}c_{\xi}\left(\sqrt{\frac{\gamma M}{2\alpha}\left(1+\sqrt{1-\frac{4\alpha J^{2}}{\gamma M^{2}}}\right)}+\sqrt{\frac{\gamma M}{2\alpha}\left(1-\sqrt{1-\frac{4\alpha J^{2}}{\gamma M^{2}}}\right)}\right),\label{19} \end{eqnarray} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.7]{f.eps}\hspace{1 cm}\includegraphics[scale=0.7]{g} \caption{BTZ: Action growth-Black hole angular momentum/Mass for $c_{\xi}=-1$ with the values $M=1$-$\alpha=0.1$-$\gamma=0.5$ (blue curve), $M=2$-$\alpha=0.5$-$\gamma=1$ (red curve), and $M=3$-$\alpha=1$-$\gamma=1.5$ (green curve).
Action growth-Black hole mass (small mass region) for $c_{\xi}=-1$ with the values $J=0.2$-$\alpha=0.1$-$\gamma=0.5$ (blue curve), $J=0.4$-$\alpha=0.5$-$\gamma=1$ (red curve), and $J=0.6$-$\alpha=1$-$\gamma=1.5$ (green curve).}\label{p} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.7]{h.eps}\hspace{1 cm}\includegraphics[scale=0.7]{i} \caption{BTZ: Action growth-Black hole angular momentum/Mass for $c_{\xi}=1$ with the values $M=1$-$\alpha=0.2$-$\gamma=0.5$ (blue curve), $M=2$-$\alpha=0.5$-$\gamma=1$ (red curve), and $M=3$-$\alpha=1$-$\gamma=1.5$ (green curve). Action growth-Black hole mass for $c_{\xi}=1$ with the values $J=0.2$-$\alpha=0.2$-$\gamma=0.5$ (blue curve), $J=0.4$-$\alpha=0.5$-$\gamma=1$ (red curve), and $J=0.6$-$\alpha=1$-$\gamma=1.5$ (green curve).}\label{w} \end{center} \end{figure} We can observe, according to figure \ref{p}, the dependence of the growth of the action on the black hole angular momentum/mass and on the black hole mass, together with Horndeski's parameters. The increase in complexity reaches its maximum when the string is in a steady state; that is, the effect on the complexity is maximal when the relative velocity is zero for a rotating black hole. As expected, this effect is greater for larger black holes, as in our case, where we consider the AdS$_{4}$ case. In figure \ref{w}, showing the dependence on the angular momentum/mass, the string rotates about a different axis and the relative velocity never becomes zero. We can also notice in figure \ref{w} the dependence of the growth of the action on the black hole mass; the parameters of Horndeski's theory also influence this behavior. When the angular momentum is small, the growth is a monotonically increasing function of the mass. As the angular momentum becomes larger, it stops increasing rapidly, and an extremum appears around $J=0.6$.
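Equation (\ref{19}) is just $T_{s}c_{\xi}(r_{+}+r_{-})$ evaluated on the horizon radii of the solution (\ref{9}); a short numerical sketch (illustrative parameter values matching the blue curve of the first figure; not the code used for the plots) makes the $J$-dependence discussed above easy to reproduce:

```python
import math

def action_growth(M, J, alpha, gamma, c_xi, T_s=1.0):
    """dS_NG/dt from eq. (19): T_s * c_xi * (r_plus + r_minus), with r_pm
    the outer/inner horizon radii of the BTZ-like solution (9).
    Requires 4*alpha*J**2 <= gamma*M**2 so that both horizons exist."""
    disc = 1.0 - 4.0*alpha*J**2/(gamma*M**2)
    if disc < 0:
        raise ValueError("no horizons: 4*alpha*J^2 > gamma*M^2")
    r_plus = math.sqrt(gamma*M/(2.0*alpha)*(1.0 + math.sqrt(disc)))
    r_minus = math.sqrt(gamma*M/(2.0*alpha)*(1.0 - math.sqrt(disc)))
    return T_s*c_xi*(r_plus + r_minus)

# blue curve of the first figure: M = 1, alpha = 0.1, gamma = 0.5, c_xi = -1
rates = [action_growth(1.0, J, 0.1, 0.5, -1.0) for J in (0.1, 0.3, 0.5)]
```

Since $(r_{+}+r_{-})^{2}=\gamma M/\alpha+2J\sqrt{\gamma/\alpha}$, the magnitude of the growth rate increases with both $M$ and $J$, consistent with the curves shown.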
\section{Conclusion}\label{v4} In this work, we have shown the effects of a probe string on a BTZ black hole, a solution of Horndeski's theory without any restrictions on the parameters. We have seen that although the string is stationary, it can provide maximum growth of the complexity, which is obtained for $c_{\xi}<0$ together with the parameters of Horndeski's theory; see figure \ref{p}. For $c_{\xi}>0$, figure \ref{w} shows the growth of the action with the black hole mass, whose behavior is influenced by the parameters of Horndeski's theory: the complexity increases with the black hole mass, giving us an idea of how complex the physical system is. As we showed in the calculation of the NG action, the peak is located at $v=0$; this behavior derives from time dilation. We can also perceive a decrease in complexity, as shown in figure \ref{p}, where the growth of the action is a monotonically decreasing function of the black hole mass. One might think that this is because complexity quantifies how complex the physical system is: if the complexity decreases, we have less information about the physical system, whereas a larger system carries more information. We would like to thank CNPq and CAPES for partial financial support. I would like to thank Moises Bravo Gaete and Jackson Levi Said for fruitful discussions at the end of this work.
\section{Introduction} The quark model has achieved great success in describing experimentally observed hadronic structures to a large extent, and the quark potential between quark and antiquark deduced from Quantum Chromodynamics (QCD) can explain the meson spectrum quite well. Many of the states predicted by the potential model were discovered in experiment, and the theoretical predictions are in good agreement with experimental data, especially in the charmonium and bottomonium sectors \cite{w_Lucha, c_quigg, v_novikov}, where the masses of the charm and bottom quarks are heavy enough to be treated non-relativistically. However, things became confusing after the discovery of $X(3872)$ in 2003 at $\mathrm{Belle}$ \cite{belle}, which was later confirmed by $\mathrm{BaBar}$ \cite{babar}. In recent years, a series of unusual states in the charmonium sector, such as $Y(4260)$, $Y(4360)$, $Y(4660)$, and $Z^{\pm}(4430)$, were observed in experiment \cite{ex}. Due to their extraordinary decay nature, it is hard to embed them into the conventional charmonium spectrum, which leads people to treat them as exotic states rather than quark-antiquark bound states.
The typical scenarios for explaining these newly found states include treating $Y(4260)$ as a hybrid charmonium \cite{Y4260}, a $\chi_{c}\rho^{0}$ molecular state \cite{Y4260-Liu}, a conventional $\Psi(4S)$ \cite{Y4260-FJ}, an $\omega\chi_{c1}$ molecular state \cite{Y4260-Yuan}, a $\Lambda_{c}\bar{\Lambda}_{c}$ baryonium state \cite{Y4260-Qiao}, a $D_{1}D$ or $D_{0}D^{*}$ hadronic molecule \cite{Y4260-Ding}, and a $P$-wave tetraquark $[cs][\bar{c}\bar{s}]$ state \cite{Y4260-L}; $Y(4360)$ is interpreted as a candidate for a charmonium hybrid, an excited D-wave charmonium state, the $3^{3}D_{1}$ \cite{Ding}, or an excited baryonium state \cite{Qiao}; $Y(4660)$ is suggested to be an excited S-wave charmonium state, the $5^{3}S_{1}$ \cite{Ding} or $6^{3}S_{1}$ \cite{KTChao}, a baryonium state \cite{Qiao,Y4660-Bugg}, an $f_{0}(980)\Psi'$ bound state \cite{Guo,zgwang}, a $5^{3}S_{1}$-$4^{3}D_{1}$ mixing state \cite{Badalian}, and also a tetraquark state \cite{Y4660-QCDSR, Y4660-Ebert-2}. Recently, there have been many research works on ``exotic'' heavy quarkonium in experiment and theory. For more of the recent progress in this respect and a more complete list of references, see e.g. the recent reviews \cite{N.Bram,N.Der} and references therein. In the baryonium picture, the tri-quark clusters are baryon-like, but not necessarily colorless. In the pioneering works on heavy baryonium for the interpretation of the newly observed ``exotic'' structures \cite{Y4260-Qiao,Qiao}, there were only phenomenological and kinematic analyses, without dynamics. In this work we attempt to study the heavy baryonium interaction potential arising from two-pion exchange in the framework of Heavy Baryon Chiral Perturbation Theory (HBCPT) \cite{T.M.Yan}. The paper is organized as follows.
In Section 2, we present the formalism for the heavy baryon-baryon interaction study; in Section 3 we perform a numerical study of the mass spectrum of the possible baryonia with the potential obtained in the preceding section; Section 4 is devoted to the summary and conclusions. For the reader's convenience, some of the formulae used are given in the Appendix. \section{Formalism} To obtain the heavy baryonium mass spectrum, we start by extracting the baryon-baryon interaction potential in the same procedure as for the quark-quark interaction \cite{w_Lucha}. \subsection{Heavy Baryonium} In the heavy baryonium picture \cite{Qiao}, $\Lambda_c$ and $\Sigma^0_c$ are taken as basis vectors of a two-dimensional space. The baryonia are loosely bound states of heavy baryon and anti-baryon, namely \begin{eqnarray} B^+_1&\equiv &|\Lambda_c^+ \; \bar{\Sigma}_c^0>~~~~~~~~~\nonumber\\ {\rm Triplet:}\;\;\;\;\; B^0_1&\equiv & \frac{1}{\sqrt{2}}(|\Lambda_c^+ \;\bar{\Lambda}_c^+>\; -\; |{\Sigma}_c^0 \bar{\Sigma}_c^0>)\\ B^-_1&\equiv&|\bar{\Lambda}^+_c\; {\Sigma}_c^0>~~~~~~~~~\nonumber \label{triplet} \end{eqnarray} and \begin{eqnarray} {\rm Singlet:}\;\;\;\;\; B^0_0\equiv \frac{1}{\sqrt{2}}(|\Lambda_c^+ \;\bar{\Lambda}_c^+>\; + \; |{\Sigma}_c^0 \bar{\Sigma}_c^0>)\ . \label{sin} \end{eqnarray} Here, the system is approximately invariant under transformations in this two-dimensional ``C-spin'' space, in analogy to the isospin invariance of the proton-neutron system. \subsection{Effective Chiral Lagrangian} A heavy baryon contains both light and heavy quarks; the light component exhibits chiral properties and the heavy component exhibits heavy-quark symmetry. Therefore, it is plausible to tackle the problem of heavy baryon interactions through heavy baryon chiral perturbation theory. In the following we briefly review the essentials of HBCPT for later use.
In the usual chiral perturbation theory, the nonlinear chiral symmetry is realized by making use of the unitary matrix \begin{equation} \Sigma=e^{\frac{2i M}{f_\pi}}\; , \end{equation} where $M$ is a $3\times3$ matrix composed of the eight Goldstone-boson fields, i.e., \beq M = \left(\begin{array}{ccc} \frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta &\phantom{+} \pi^+ & \phantom{+} K^+ \\ \phantom{+} \pi^- & -\frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & \phantom{+} K^0\\ \phantom{+} K^- & \phantom{+} \bar{K}^0 & \phantom{+} - \frac{2}{\sqrt{6}} \eta \end{array}\right) \; . \eeq Here, $f_\pi$ is the pion decay constant. After the chiral symmetry is spontaneously broken, the Goldstone-boson interactions with hadrons are introduced through a new matrix \cite{A.Manohar, M.Wise} \begin{equation} \xi=\Sigma^{\frac{1}{2}}=e^{\frac{iM}{f_\pi}}\; . \end{equation} From $\xi$ one can construct a vector field $V_\mu$ and an axial-vector field $A_\mu$ with simple chiral transformation properties, i.e., \begin{equation} V_\mu=\frac{1}{2}(\xi^{\dag}\partial_{\mu}\xi+\xi\partial_{\mu}\xi^{\dag})\; , \end{equation} \begin{equation} A_\mu=\frac{i}{2}(\xi^{\dag}\partial_{\mu}\xi-\xi\partial_{\mu}\xi^{\dag})\; . \end{equation} For our purpose, we keep only the leading-order vector and axial-vector fields in the expansion of $\xi$ in powers of $1/f_\pi$, namely \begin{equation} V_\mu=\frac{1}{f_\pi^2}M\partial_\mu M\; , \label{vector-current} \end{equation} \begin{equation} A_\mu=-\frac{1}{f_\pi}\partial_\mu M\; . \end{equation} In a heavy baryon, each of the two light quarks is in a triplet of flavor SU(3), and hence the baryons can be grouped into two different SU(3) multiplets, the sextet and the antitriplet.
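Before turning to the baryon multiplets, the leading-order axial field quoted above, $A_\mu \approx -\partial_\mu M/f_\pi$, can be cross-checked numerically with a toy example (our own illustrative sketch, not part of the paper): for a random Hermitian "Goldstone matrix" $M$ and a random Hermitian derivative direction, the exact $A=\frac{i}{2}(\xi^{\dag}\partial\xi-\xi\partial\xi^{\dag})$ approaches $-\partial M/f_\pi$ as $f_\pi$ grows.

```python
# Toy check (illustrative, not from the paper): the axial field built from
# xi = exp(i M / f) reduces to -dM/f at leading order in 1/f.
import numpy as np

rng = np.random.default_rng(0)

def hermitian(n):
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (x + x.conj().T) / 2

def xi(M, f):
    # matrix exponential exp(i M / f) via the eigendecomposition of Hermitian M
    w, V = np.linalg.eigh(M)
    return (V * np.exp(1j * w / f)) @ V.conj().T

f = 200.0                # a large "f_pi" suppresses higher orders in 1/f
eps = 1e-6               # step for the numerical directional derivative
M, dM = hermitian(3), hermitian(3)   # dM stands in for the derivative of M

dxi = (xi(M + eps * dM, f) - xi(M - eps * dM, f)) / (2 * eps)
xi0 = xi(M, f)
A = 0.5j * (xi0.conj().T @ dxi - xi0 @ dxi.conj().T)

# relative deviation from the leading-order expression -dM/f is O(1/f^2)
rel = np.linalg.norm(A - (-dM / f)) / np.linalg.norm(dM / f)
print(rel)
```

The deviation scales as $1/f_\pi^2$, confirming that the higher-order terms dropped in the text are indeed subleading.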
The symmetric sextet and the antisymmetric antitriplet can be written as $3\times 3$ matrices \cite{M.Wise}: \begin{equation} B_6=\left(\begin{array}{ccc}\Sigma_c^{++}& \frac{1} {\sqrt{2}}\Sigma_c^{+} & \frac{1}{\sqrt{2}}\Xi_c^{'+}\\ \frac{1}{\sqrt{2}}\Sigma_c^{+} & \Sigma_c^0 & \frac{1}{\sqrt{2}}\Xi_c^{'0}\\ \frac{1}{\sqrt{2}}\Xi_c^{'+} & \frac{1}{\sqrt{2}}\Xi_c^{'0} & \Omega_c^0\end{array}\right)\; , \end{equation} and \begin{equation} B_{\bar{3}}=\left(\begin{array}{ccc}0& \Lambda_c & \Xi_c^{+}\\ -\Lambda_c & 0 & \Xi_c^{0}\\ -\Xi_c^{+} & -\Xi_c^{0} & 0\end{array}\right)\; , \end{equation} respectively. Introducing six coupling constants $g_i$, $i=1,\ldots,6$, the general chiral-invariant Lagrangian reads \cite{T.M.Yan} \begin{eqnarray} \mathcal{L_G}& = &\frac{1}{2}tr[\bar{B}_{\bar{3}}(iD\!\!\!/-M_{\bar{3}}) B_{\bar{3}}]+tr[\bar{B}_6(iD\!\!\!/-M_6)B_6]\nonumber\\ &+&tr[\bar{B}_6^{*\mu}[-g_{\mu\nu}(iD\!\!\!/-M_6^{*})+i(\gamma_\mu D_\nu+\gamma_\nu D_\mu)-\gamma_\mu(iD\!\!\!/+M_6^{*})\gamma_\nu]B_6^{*\nu}]\nonumber\\ &+&g_1tr(\bar{B}_6\gamma_{\mu}\gamma_5A^{\mu}B_6)+g_2tr(\bar{B}_6 \gamma_{\mu}\gamma_5A^{\mu}B_{\bar{3}})+ h.c.\nonumber\\ &+&g_3tr(\bar{B}_{6{\mu}}^*A^{\mu}B_6)+ h.c. + g_4 tr(\bar{B}_{6{\mu}}^*A^{\mu}B_{\bar{3}}) + h.c. \nonumber\\ &+&g_5tr(\bar{B}_6^{\nu*}\gamma_{\mu}\gamma_5A^{\mu} B_{6\nu}^*)+g_6tr(\bar{B}_{\bar{3}}\gamma_{\mu}\gamma_5A^{\mu}B_{\bar{3}})\; .
\label{general-lag} \end{eqnarray} Here, $B_{6\nu}^*$ is a Rarita-Schwinger vector-spinor field for a spin-$\frac{3}{2}$ particle, and $M_{\bar{3}}$, $M_6$, $M_6^*$ represent the heavy baryon mass matrices of the corresponding fields. With the help of the vector current $V_\mu$ defined in Eq.~(\ref{vector-current}), we may construct the covariant derivative $D_\mu$, which acts on the baryon fields, as \begin{equation} D_\mu B_6 = \partial_\mu B_6 + V_\mu B_6 + B_6 V_\mu ^T \;, \end{equation} \begin{equation} D_\mu B_{\bar{3}} = \partial_\mu B_{\bar{3}} + V_\mu B_{\bar{3}} + B_{\bar{3}} V_\mu ^T \;, \end{equation} where $V_\mu ^T$ stands for the transpose of $V_\mu$. Thus, the couplings of the vector current to heavy baryons relevant to our task take the following form \begin{eqnarray} \mathcal{L}_{{\mathcal{E}_1}} & = & \frac{1}{2} tr(\bar{B}_{\bar{3}}i\gamma^\mu V_\mu B_{\bar{3}})\nonumber\\ & = & \frac{1}{2 f_\pi ^2}\bar{\Lambda}_c i\gamma^\mu (\pi^0 \partial_\mu \pi^0 + \pi^{-}\partial_\mu \pi^{+} + \pi^{+}\partial_\mu \pi^{-})\Lambda_c \;, \end{eqnarray} and \begin{eqnarray} \mathcal{L}_{{\mathcal{E}_2}} & = & \frac{1}{2} tr(\bar{B}_{\bar{3}} B_{\bar{3}}i\gamma^\mu V_\mu ^T)\nonumber\\ & = & \frac{1}{2f_\pi ^2}\bar{\Lambda}_c\Lambda_c i\gamma^\mu (\pi^0 \partial_\mu \pi^0 + \pi^{-}\partial_\mu \pi^{+} + \pi^{+}\partial_\mu \pi^{-}) \;. \end{eqnarray} According to the heavy quark symmetry, there are four constraint relations among the six coupling constants of the Lagrangian of Eq.~(\ref{general-lag}), i.e., \begin{eqnarray} g_6 = 0 \; ,\; g_3 = \frac{\sqrt{3}}{2}g_1\; ,\; g_5 = -\frac{3}{2}g_1\; ,\; g_4 = -\sqrt{3}g_2\; , \label{couplings} \end{eqnarray} which means the number of independent couplings is reduced to two. In this work, we employ $g_1$ and $g_2$ in the numerical evaluation, as was done in Ref.~\cite{T.M.Yan}. Here, to get the dominant interaction potential we restrict ourselves, as usual, to pion-exchange processes.
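These constraint relations are straightforward to tabulate. The small helper below (an illustration of ours, not part of the paper) returns the dependent couplings from $g_1$ and $g_2$; as an example it uses one of the coupling choices suggested in the literature and employed later in the numerical section.

```python
# Illustrative helper (ours, not from the paper): dependent couplings from the
# heavy-quark-symmetry constraints
#   g6 = 0,  g3 = (sqrt(3)/2) g1,  g5 = -(3/2) g1,  g4 = -sqrt(3) g2.
import math

def dependent_couplings(g1, g2):
    return (math.sqrt(3.0) / 2.0 * g1,   # g3
            -math.sqrt(3.0) * g2,        # g4
            -1.5 * g1,                   # g5
            0.0)                         # g6

# one of the coupling choices used later in the numerical section
g1, g2 = 1.0 / 3.0, -math.sqrt(2.0 / 3.0)
g3, g4, g5, g6 = dependent_couplings(g1, g2)
print(g4)   # sqrt(2) ~ 1.414
```

For this choice $g_4=\sqrt{2}\simeq 1.41$, at the upper end of the $g_4$ range quoted in the numerical analysis of Section 3.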
Notice that the pion couplings to a spin-$\frac{3}{2}$ and a spin-$\frac{1}{2}$ baryon and to two spin-$\frac{1}{2}$ baryons take similar forms; in the following we present only the spin-$\frac{3}{2}$ to spin-$\frac{1}{2}$ baryon-pion couplings for illustration, i.e., \begin{equation} \mathcal{L}_1=\frac{g_3}{\sqrt{2}f_\pi} \bar{\Sigma_c}\!^{0*\mu}\partial_\mu\pi^0\Sigma_c^0 + h.c.\; ,\label{vertex1} \end{equation} \begin{equation} \mathcal{L}_2=-\frac{g_3}{\sqrt{2}f_\pi} \bar{\Sigma_c}\!^{+*\mu}\partial_\mu\pi^{+}\Sigma_c^{0} + h.c.\; ,\label{vertex2} \end{equation} \begin{equation} \mathcal{L}_3=\frac{g_4}{f_\pi} \bar{\Sigma_c}\!^{++*\mu}\partial_\mu\pi^{+}\Lambda_c^{+} + h.c.\; ,\label{vertex3} \end{equation} \begin{equation} \mathcal{L}_4=-\frac{g_4}{f_\pi} \bar{\Sigma_c}\!^{0*\mu}\partial_\mu\pi^{-}\Lambda_c^{+} + h.c.\; ,\label{vertex4} \end{equation} \begin{equation} \mathcal{L}_5=-\frac{g_4}{f_\pi}\bar{\Sigma_c}\!^{+*\mu} \partial_\mu\pi^{0}\Lambda_c^{+} + h.c.\; .\label{vertex5} \end{equation} To get the couplings of a pion to two spin-$\frac{1}{2}$ baryons one only needs to replace $\Sigma_c^{*\mu}$ by $\Sigma_c$, $g_3$ by $g_1$, $g_4$ by $g_2$, and insert $\gamma^\mu \gamma_5$ between the two baryon fields in Eqs.~(\ref{vertex1})-(\ref{vertex5}). \begin{figure}[htb] \begin{center} \scalebox{0.50}{\includegraphics{box.eps}} \scalebox{0.50}{\includegraphics{cross.eps}} \caption{Schematic diagrams contributing to the baryonium potential.} \label{fig-bc} \end{center} \end{figure} \subsection{Baryonium Potential from Two-pion Exchange} To obtain the heavy baryon-baryon interaction potential in configuration space, we start by writing down the two-body scattering amplitude in the center-of-mass frame (CMS), i.e., taking $\textbf{p}_a = - \textbf{p}_b$ and $\textbf{p}_a' = -\textbf{p}_b'$.
In the CMS the total and relative four-momenta are defined as \begin{eqnarray} P & = &(p_a\; +\; p_b)\; =\; (p_a'\; +\; p_b')=(E,\; 0)\; ,\\ p & = &\frac{1}{2}(p_a \;-\; p_b)\; =\; (0,\;\textbf{p})\; ,\\ p'& = &\frac{1}{2}(p_a'\; -\; p_b')\; =\; (0,\; \textbf{p}')\; . \end{eqnarray} To perform the calculation, it is convenient to introduce some new variables as functions of $\textbf{p}$ and $\textbf{p}'$, i.e., \begin{eqnarray} &\mathcal{W}(\textbf{p})& = E_a(\textbf{p})+E_b(\textbf{p})\; ,\\ &\mathcal{W}(\textbf{p}')& = E_a(\textbf{p}')+E_b(\textbf{p}')\; ,\\ &F_E(\textbf{p},\; p_0)& = \frac{1}{2}E+p_0-E(\textbf{p})+i\delta\; , \end{eqnarray} where $\delta$ is an infinitesimal quantity introduced in the so-called $i\delta$ prescription. Following the same procedure as in Refs.~\cite{Th.Rijken1,Th.Rijken2}, it is straightforward to write down the baryon-baryon scattering kernels, shown as box and crossed diagrams in Figure \ref{fig-bc}, \begin{eqnarray} K_{box}=&-&\frac{1}{(2\pi)^2}(E-\mathcal{W} (\textbf{p}'))(E-\mathcal{W}(\textbf{p}))\int dp_0' dp_0 dk_{20} dk_{10}d^3 \textbf{k}_2d^3 \textbf{k}_1\nonumber\\ & \times& \frac{i}{(2\pi)^4}\delta^4 (p - p'- k_1 - k_2) \frac{1}{k_2^2 - m^2+i\delta} \frac{1}{F_E(\textbf{p}',p_0') F_E(-\textbf{p}',-p_0')}\nonumber\\ &\times&\frac{\Gamma_j\Gamma_i\Gamma_i\Gamma_j} {F_E(\textbf{p} - \textbf{k},p_0-k_{10}) F_E(-\textbf{p}+\textbf{k},-p_0+k_{10})} \frac{1}{F_E(\textbf{p},p_0) F_E(-\textbf{p},-p_0)}\nonumber\\ &\times&\frac{1} {k_1^2-m^2+i\delta}\; ,\label{kbox} \end{eqnarray} \begin{eqnarray} K_{cross} = & - & \frac{1}{(2\pi)^2}(E-\mathcal{W} (\textbf{p}'))(E-\mathcal{W}(\textbf{p}))\int dp_0' dp_0dk_{20}dk_{10}d^3 \textbf{k}_2d^3 \textbf{k}_1\nonumber\\ &\times&\frac{i}{(2\pi)^4} \delta^4(p - p'- k_1 - k_2) \frac{1}{k_2^2 - m^2 + i\delta} \frac{1}{F_E(\textbf{p}', p_0') F_E(- \textbf{p}',-p_0')}\nonumber\\ &\times&\frac{\Gamma_j\Gamma_i\Gamma_j\Gamma_i} {F_E(\textbf{p} - \textbf{k}, p_0-k_{10}) F_E(-\textbf{p}'- \textbf{k},
-p_0' - k_{10})} \frac{1} {F_E(\textbf{p}, p_0)F_E(-\textbf{p},-p_0)}\nonumber\\ & \times&\frac{1} {k_1^2-m^2 + i\delta}\; . \label{kcross} \end{eqnarray} Here, $m$ is the pion mass and $\Gamma_{i,j}$ are the heavy baryon-pion interaction vertices, which can be read off from the Lagrangians in Eqs.(\ref{vertex1})-(\ref{vertex5}). In the case of a spin-$\frac{3}{2}$ intermediate state, \begin{eqnarray} \Gamma_j\Gamma_i\Gamma_i\Gamma_j & = &\left(\frac{g_4}{f_\pi}\right)^4\bar{u}(-p)k_2^\mu u_\mu(p-k_1)\bar{u}_\nu(p-k_1)k_1^\nu u(p)\nonumber\\ & \times &\bar{v}(p)(-k_1^\alpha) v_\alpha(-p+k_1)\bar{v}_\beta(-p+ k_1)k_2^\beta v(-p)\; , \end{eqnarray} and in the case of a spin-$\frac{1}{2}$ intermediate state \begin{eqnarray} \Gamma_j\Gamma_i\Gamma_i\Gamma_j&=& \left(\frac{g_2}{f_\pi}\right)^4 \bar{u}(-p)\gamma_\mu\gamma_5 k_2^\mu u(p-k_1) \bar{u}(p-k_1)\gamma_\nu\gamma_5 k_1^\nu u(p)\nonumber\\ & \times &\bar{v}(p)\gamma_\alpha\gamma_5 (-k_1^\alpha) v(-p+k_1) \bar{v}(-p+k_1)\gamma_\beta\gamma_5 k_2^\beta v(-p)\; . \end{eqnarray} Integrating over $p'_0$, $p_0$, $k_{10}$, and $k_{20}$ in Eq.(\ref{kbox}) one obtains the interaction kernel of the box diagram at order $\mathcal{O}(\frac{1}{M_H})$, \begin{eqnarray} K_{box}=&-&\frac{1}{(2\pi)^3}\int\frac{d^3 \textbf{k}_1 d^3 \textbf{k}_2}{4E_{\textbf{k}_1}E_{\textbf{k}_2}} \frac{\Gamma_j\Gamma_i} {E_{\textbf{p}-\textbf{k}_1}+E_{\textbf{p}}-W+E_{\textbf{k}_1}}\nonumber\\ &\times&\frac{\Gamma_i\Gamma_j} {E_\textbf{p}'+E_{\textbf{p}-\textbf{k}_1}-W+E_{\textbf{k}_2}} \frac{1}{E_{\textbf{p}}+E_{\textbf{p}'} -W+E_{\textbf{k}_1}+E_{\textbf{k}_2}}\;, \end{eqnarray} where $M_H$ represents one of the heavy baryon masses, $M_{\Lambda_c^+}$, $M_{\Sigma^0_c}$ or $M_{\Sigma_c^*}$; $E_{\textbf{p}-\textbf{k}_1}= \sqrt{(\textbf{p}-\textbf{k}_1)^2+M_{\Sigma_c^*}^2}$ is the intermediate-state energy; $E_{\textbf{k}_1}=\sqrt{\textbf{k}_1^2+m^2}$ and $E_{\textbf{k}_2}=\sqrt{\textbf{k}_2^2+m^2}$ are the two pions' energies; and $W = 2 E(\textbf{p})$.
With the same procedure, we can get the interaction kernel of the crossed diagram, i.e., \begin{eqnarray} K_{cross}=&-&\frac{1}{(2\pi)^3}\int\frac{d^3 \textbf{k}_1 d^3 \textbf{k}_2}{4E_{\textbf{k}_1}E_{\textbf{k}_2}} \frac{\Gamma_j\Gamma_i} {E_{\textbf{p}-\textbf{k}_1}+ E_{\textbf{p}}-W+E_{\textbf{k}_1}}\nonumber\\ &\times&\frac{\Gamma_j\Gamma_i} {E_\textbf{p}'+E_{\textbf{p}'+ \textbf{k}_1}-W+E_{\textbf{k}_1}} \frac{1}{E_{\textbf{p}}+ E_{\textbf{p}'}-W+E_{\textbf{k}_1}+E_{\textbf{k}_2}}\; .\label{cross} \end{eqnarray} Next, since we are interested in heavy baryons, we can further perform the non-relativistic reduction of the spinors with the help of the vertices given in Eqs.(\ref{vertex1})-(\ref{vertex5}). In the end, the non-relativistic reduction for the $\Lambda_c^ + \Sigma_c^{+*}\pi^0$ and $\Lambda_c^+\Sigma_c^{+}\pi^0$ couplings gives \begin{equation} i\left(\frac{g_4}{f_\pi}\right)\bar{u}(p_2) u_\mu(p_1)(p_2-p_1)^\mu = -i\left(\frac{g_4} {f_\pi}\right)\textbf{S}^{\dag}\cdot\textbf{q}\; ,\label{spin12} \end{equation} and \begin{equation} i\left(\frac{g_2}{f_\pi}\right)\bar{u}(p_2)\gamma_\mu\gamma_5 u(p_1)(p_2-p_1)^\mu=i\left(\frac{g_2} {f_\pi}\right)\boldsymbol{\sigma}_1\cdot\textbf{q}\; ,\label{spin32} \end{equation} respectively. Here, $\textbf{q}=\textbf{p}_2-\textbf{p}_1$ and $\textbf{S}^{\dag}$ is the spin-$\frac{1}{2}$ to spin-$\frac{3}{2}$ transition operator. In deriving the $\Lambda_c^+$-$\bar{\Lambda}_c^+$ potential, $\Sigma_c^+$ and $\Sigma_c^{+*}$ are taken into account as intermediate states. Using Eqs.
(\ref{spin12})-(\ref{spin32}) and the explicit forms of the spinors given in the Appendix, we can readily obtain the reduction forms for the $\Sigma_c^+$ intermediate state \begin{eqnarray} &&\bar{u}(-p)\gamma_\mu\gamma_5 k_2^\mu u(p-k_1) \bar{u}(p-k_1)\gamma_\nu\gamma_5 k_1^\nu u(p)\times\nonumber\\ && \bar{v}(p)\gamma_\alpha\gamma_5 (-k_1^\alpha) v(-p+k_1) \bar{v}(-p+k_1)\gamma_\beta\gamma_5 k_2^\beta v(-p)\nonumber\\ & = &(\textbf{k}_1\cdot\textbf{k}_2)^2+ (\boldsymbol{\sigma}_1\cdot\textbf{k}_1 \times\textbf{k}_2)(\boldsymbol{\sigma}_2\cdot\textbf{k}_1 \times\textbf{k}_2)\;, \end{eqnarray} for the $\Sigma_c^{+*}$ intermediate state in the box diagram \begin{eqnarray} &&\bar{u}(-p)k_2^\mu u_\mu(p-k_1) \bar{u}_\nu(p-k_1)k_1^\nu u(p)\times\nonumber\\ &&\bar{v}(p)(-k_1^\alpha) v_\alpha(-p+k_1)\bar{v}_\beta(-p+ k_1)k_2^\beta v(-p)\nonumber\\ & = &\frac{4}{9}(\textbf{k}_1\cdot\textbf{k}_2)^2- \frac{1}{9}(\boldsymbol{\sigma}_1\cdot\textbf{k}_1 \times\textbf{k}_2)(\boldsymbol{\sigma}_2\cdot\textbf{k}_1 \times\textbf{k}_2)\; , \end{eqnarray} and in the crossed diagram \begin{eqnarray} &&\bar{u}(-p)k_2^\mu u_\mu(p-k_1) \bar{u}_\nu(p-k_1)k_1^\nu u(p)\times\nonumber\\ &&\bar{v}(p)(-k_1^\alpha) v_\alpha(-p+k_1)\bar{v}_\beta(-p+ k_1)k_2^\beta v(-p)\nonumber\\ & = &\frac{4}{9}(\textbf{k}_1\cdot\textbf{k}_2)^2 + \frac{1}{9}(\boldsymbol{\sigma}_1\cdot\textbf{k}_1 \times\textbf{k}_2)(\boldsymbol{\sigma}_2\cdot\textbf{k}_1 \times\textbf{k}_2)\; , \end{eqnarray} respectively. Thus, the spinor reduction finally leads to an operator $\mathcal{O}_1(\textbf{k}_1,\; \textbf{k}_2)$, whose variables $\textbf{k}_1$ and $\textbf{k}_2$ can be replaced in configuration space by gradient operators $\boldsymbol{\nabla}_1$ and $\boldsymbol{\nabla}_2$ acting on $\textbf{r}_1$ and $\textbf{r}_2$, respectively.
This operator is expressed as \begin{eqnarray} \mathcal{O}_1(\textbf{k}_1,\; \textbf{k}_2)&=&c_1O_1(\textbf{k}_1,\; \textbf{k}_2)+c_2O_2(\textbf{k}_1,\; \textbf{k}_2)\nonumber\\ &=&c_1(\textbf{k}_1\cdot\textbf{k}_2)^2+ c_2(\boldsymbol{\sigma}_1\cdot\textbf{k}_1 \times\textbf{k}_2)(\boldsymbol{\sigma}_2\cdot\textbf{k}_1 \times\textbf{k}_2)\; .\label{rdo} \end{eqnarray} Here, the decomposition coefficients $c_1$ and $c_2$ are given in Table 1. The first part of Eq.~(\ref{rdo}) generates the central potential, and the second part generates the spin-spin coupling and tensor potentials, which are shown explicitly in the Appendix. \begin{center} \vspace{-2mm} \begin{table}[bht] \caption{\small The values of the coefficients $c_1$ and $c_2$ in the decomposition of the operator $\mathcal{O}_1(\textbf{k}_1,\; \textbf{k}_2)$ in Eq.~(\ref{rdo}). The left table is for the spin-$\frac{1}{2}$ intermediate state case and the right one is for the spin-$\frac{3}{2}$ case.} \vspace{2mm} \centering \begin{tabular}{|c c c |}\hline spin-1/2 &$~c_1$& $~c_2$\tabularnewline\hline\hline box & ~1 & ~1\\ cross & ~1 & ~1\\\hline\hline \end{tabular} \begin{tabular}{|c c c |}\hline spin-3/2 &$c_1$& $c_2$\tabularnewline\hline\hline box &$ 4/9 $ &$-1/9$\\ cross &$ 4/9$ & $~~1/9$\\\hline\hline \end{tabular} \end{table} \end{center} To get the leading-order central potential, e.g., for the $\Lambda_c$-$\bar{\Lambda}_c$ system, we first expand the energy denominators in powers of $\frac{1}{M_H}$, keeping only the leading term, e.g., \begin{eqnarray} \frac{1}{E_{\textbf{p}-\textbf{k}_1}+ E_{\textbf{p}}-W+E_{\textbf{k}_1}}&\approx& \frac{1}{M_{\Sigma_c^{*}}+ M_{\Lambda_c}-2M_{\Lambda_c}+E_{\textbf{k}_1}} = \frac{1}{E_{\textbf{k}_1}+\Delta_1}, \end{eqnarray} where $\Delta_1=M_{\Sigma_c^{*}}-M_{\Lambda_c}$ is the mass splitting.
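The quality of this leading-order expansion is easy to check numerically with the masses used later in Section 3 (an illustrative sketch; the soft momenta below and the collinear kinematics are our own choices):

```python
# Check (illustrative, not from the paper): the exact denominator
#   E_{p-k1} + E_p - W + E_{k1},   W = 2 E_p,
# versus its leading-order form E_{k1} + Delta_1.
import math

M_Lam, M_Sig_star, m_pi = 2.286, 2.518, 0.135   # GeV, values used in Sec. 3
Delta1 = M_Sig_star - M_Lam

def E(p, mass):
    return math.sqrt(p * p + mass * mass)

p, k1 = 0.10, 0.20    # GeV; illustrative soft momenta, collinear kinematics
exact = E(p - k1, M_Sig_star) + E(p, M_Lam) - 2.0 * E(p, M_Lam) + E(k1, m_pi)
approx = E(k1, m_pi) + Delta1
print(exact, approx)  # differ only by O(1/M_H) recoil terms
```

For soft momenta the two expressions agree at the sub-MeV level, consistent with dropping the $\mathcal{O}(1/M_H)$ recoil corrections.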
By virtue of the factorization of the integrals given in the Appendix, we can then perform a double Fourier transformation, i.e., \begin{eqnarray} V_C^B (r_1,\; r_2)= - \left(\frac{g_4^4} {f_\pi^4}\right)\int\int \frac{d^3\textbf{k}_1d^3\textbf{k}_2}{(2\pi)^6} \frac{\mathcal{O}_1(\textbf{k}_1,\textbf{k}_2) e^{i\textbf{k}_1\textbf{r}_1}e^{i\textbf{k}_2\textbf{r}_2} f(\textbf{k}_1^2)f(\textbf{k}_2^2)} {2E_{\textbf{k}_1}E_{\textbf{k}_2} (E_{\textbf{k}_1}+\Delta_1)(E_{\textbf{k}_2} +\Delta_1)(E_{\textbf{k}_1}+E_{\textbf{k}_2})}\; , \label{cp0} \end{eqnarray} where the superscript $B$ denotes the box diagram and the subscript $C$ means the central potential. Similarly, one can get the central potential from the crossed-diagram contribution \begin{equation} V_C^C (r_1,\; r_2)= - \left(\frac{g_4^4} {f_\pi^4}\right)\int\int \frac{d^3\textbf{k}_1d^3\textbf{k}_2}{(2\pi)^6} \mathcal{O}_1(\textbf{k}_1,\textbf{k}_2) e^{i\textbf{k}_1\textbf{r}_1}e^{i\textbf{k}_2\textbf{r}_2} f(\textbf{k}_1^2)f(\textbf{k}_2^2)\ D\; , \label{cross2} \end{equation} where the superscript $C$ denotes the crossed diagram and the subscript $C$ again means the central potential, and \begin{eqnarray} D&=& \!\!\!\frac{1}{4 E_{\textbf{k}_1} E_{\textbf{k}_2}}\left[\left(\frac{1}{(E_{\textbf{k}_1} + \Delta_1)^2} + \frac{1}{(E_{\textbf{k}_2} + \Delta_1)^2}\right) \frac{1}{E_{\textbf{k}_1} + E_{\textbf{k}_2}}\right.\nonumber\\ &+&\!\!\! \left(\frac{1}{(E_{\textbf{k}_1} + \Delta_1)^2} \left. + \frac{1}{(E_{\textbf{k}_2} + \Delta_1)^2} + \frac{2}{(E_{\textbf{k}_1} + \Delta_1) (E_{\textbf{k}_2} + \Delta_1)}\right)\frac{1}{E_{\textbf{k}_1} + E_{\textbf{k}_2} + 2 \Delta_1} \right]. \end{eqnarray} In order to regulate the potentials we have introduced form factors at each baryon-pion vertex. The resulting form factors $f(\textbf{k}^2)$ appearing in Eqs.~(\ref{cp0}) and (\ref{cross2}) will be given in Section 3.
Taking a similar approach, one can readily obtain the central potentials in the other interaction channels, as well as the tensor potential. Notice that although there is a one-pion-exchange contribution in the $\Sigma_c$-$\bar{\Sigma}_c$ system, due to the $\gamma_\mu \gamma_5$ structure of the interaction vertex it contributes only to the $\boldsymbol{\sigma}_1\cdot \boldsymbol{\sigma}_2$ term, which is beyond the scope of this work. Here we focus on the central potential. \begin{figure}[htb] \begin{center} \scalebox{0.50}{\includegraphics{onepair.eps}} \scalebox{0.50}{\includegraphics{twopair.eps}} \caption{The triangle and two-pion loop diagrams.} \label{fig-ot} \end{center} \end{figure} Besides the box and crossed diagrams, there are also contributions from the triangle and two-pion loop diagrams shown in Fig.~\ref{fig-ot}. As for the box and crossed diagrams, after integrating over the energy components we get the triangle contribution, shown in the left diagram of Figure \ref{fig-ot}, as \cite{th_rijken} \begin{equation} V_{triangle}(r_1,r_2) = \frac{g_4^2}{2f_\pi^4}\int\int \frac{d^3\textbf{k}_1d^3\textbf{k}_2}{(2\pi)^6} \frac{\mathcal{O}_2(\textbf{k}_1,\textbf{k}_2) (E_{\textbf{k}_1} + E_{\textbf{k}_2}) e^{i\textbf{k}_1\textbf{r}_1}e^{i\textbf{k}_2\textbf{r}_2} f(\textbf{k}_1^2)f(\textbf{k}_2^2)}{E_{\textbf{k}_1} E_{\textbf{k}_2}(E_{\textbf{k}_1} + \Delta_1) (E_{\textbf{k}_2} + \Delta_1)}\;, \label{tri11} \end{equation} where $\mathcal{O}_2(\textbf{k}_1,\textbf{k}_2 )= (\textbf{k}_1\cdot\textbf{k}_2)$ from the spinor reduction can be replaced in configuration space by the gradient operator $(\boldsymbol{\nabla}_1\cdot\boldsymbol{\nabla}_2)$. Similarly, the two-pion loop contribution, shown in the right diagram of Figure \ref{fig-ot}, reads \begin{equation} V_{2\pi-loop}(r_1, r_2) = \frac{1}{16 f_\pi^4}\int\int\frac{d^3\textbf{k}_1d^3\textbf{k}_2}{(2\pi)^6} e^{i\textbf{k}_1\textbf{r}_1}e^{i\textbf{k}_2\textbf{r}_2} f(\textbf{k}_1^2)f(\textbf{k}_2^2) A\;.
\label{twopair1} \end{equation} Here, $A=-\frac{1}{2 E_{\textbf{k}_1}}-\frac{1}{2 E_{\textbf{k}_2}} +\frac{2}{E_{\textbf{k}_1}+E_{\textbf{k}_2}}~$. Expressing Eqs.~(\ref{tri11}) and (\ref{twopair1}) in the integral representation of $E_{\textbf{k}_1}$, and making the Fourier transformation, one can then obtain the corresponding potentials. \section{Numerical Analysis} With the central potentials obtained in the preceding section, one can calculate the heavy baryonium spectrum by solving the Schr\"{o}dinger equation. In our numerical evaluation, the MATLAB-based package MATSLISE \cite{matslise} is employed. The following inputs from the Particle Data Group \cite{PDG} are used in the numerical calculation: \begin{equation} M_{\Lambda_c^+}=2.286 \mathrm{GeV}\; , \; M_{\Sigma_c^0}=2.454\mathrm{GeV}\; , \; M_{\Sigma_c^*}=2.518\mathrm{GeV}\; , \; f_\pi=0.132\mathrm{GeV}\;,\; m = 0.135\mathrm{GeV}\; , \end{equation} and both spin-$\frac{1}{2}$ and -$\frac{3}{2}$ intermediate states are taken into account. It is obvious that the main uncertainties in the evaluation of the heavy baryonium spectrum lie in the couplings of Eq.~(\ref{couplings}). The magnitudes of the two independent couplings $g_1$ and $g_2$ were phenomenologically analyzed in Ref.~\cite{T.M.Yan}, and two choices for them were suggested, i.e., \begin{equation} g_1 = \frac{1}{3}\; , \; g_2=-\sqrt{\frac{2}{3}} \label{para1} \end{equation} and \begin{equation} g_1 = \frac{1}{3}\times 0.75\; , \; g_2 = -\sqrt{\frac{2}{3}}\times 0.75\; , \label{para2} \end{equation} which implies that $|g_4|$ lies in the range of about 1 to 1.4, similar to the estimate of Ref.~\cite{Savage} in the chiral limit.
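As a self-contained alternative to MATSLISE, the reduced S-wave radial equation $-u''(r)/(2\mu)+V(r)u(r)=Eu(r)$ with $u(0)=0$ can be solved by a simple finite-difference diagonalization. The sketch below is entirely our own (not the code used in the paper); the harmonic-oscillator potential serves only to validate the solver, since its exact S-wave ground-state energy is $\frac{3}{2}\omega$.

```python
# Minimal finite-difference eigensolver for the reduced S-wave radial equation
#   -u''(r)/(2 mu) + V(r) u(r) = E u(r),   u(0) = 0,
# validated on the 3D harmonic oscillator (exact ground state: 1.5 * omega).
import numpy as np

def ground_state_energy(V, mu, r_max=10.0, n=2000):
    """Lowest eigenvalue of the tridiagonal finite-difference Hamiltonian."""
    r = np.linspace(r_max / n, r_max, n)   # grid excludes r = 0 (u(0) = 0)
    h = r[1] - r[0]
    k = 1.0 / (2.0 * mu * h * h)
    H = np.diag(2.0 * k + V(r)) + np.diag(-k * np.ones(n - 1), 1) \
        + np.diag(-k * np.ones(n - 1), -1)
    return np.linalg.eigvalsh(H)[0]

mu, omega = 1.0, 1.0
E0 = ground_state_energy(lambda r: 0.5 * mu * omega**2 * r**2, mu)
print(E0)   # close to 1.5
```

For the baryonium spectrum one would set $\mu=M_{\Lambda_c}/2$ (in GeV) and $V$ to the central potentials derived above, with $r$ measured in GeV$^{-1}$.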
\subsection{Gaussian form factor case} The central potential from the two-pion-exchange box diagram, regularized by the widely used Gaussian form factor $f(\textbf{k}^2) = e^{-\textbf{k}^2/\Lambda^2}$, reads \begin{eqnarray} V_{CG}^B(r_1,\; r_2) &=&-\left(\frac{g_4^4}{f_\pi^4}\right)\left[\frac{1}{\pi} \int_0^\infty\frac{d\lambda}{\Delta_1^2+\lambda^2} O_1(\textbf{k}_1,\textbf{k}_2) F(\lambda,r_1)F(\lambda,r_2)\right.\nonumber\\&&\left.- \frac{2\Delta_1}{\pi^2}O_1(\textbf{k}_1,\textbf{k}_2)\int_0^\infty \frac{d\lambda}{\Delta_1^2+\lambda^2}F({\lambda,r_1}) \int_0^\infty\frac{d\lambda}{\Delta_1^2+ \lambda^2}F({\lambda,r_2})\right]\nonumber\\ &=&\sum_i V_{CGi}^B + \cdots\; . \label{cp1} \end{eqnarray} Details of the derivation of Eq.~(\ref{cp1}) from Eq.~(\ref{cp0}) can be found in the Appendix; the function $F(\lambda,r)$ is defined by Eq.~(\ref{app72}). Similarly, the central potential from the two-pion-exchange crossed diagram is \begin{eqnarray} V_{CG}^C(r_1,\; r_2) &=& - \left(\frac{g_4^4}{f_\pi^4}\right)\left[\frac{1}{\pi} \int_0^\infty\frac{d\lambda (\Delta_1^2-\lambda^2)}{(\Delta_1^2+\lambda^2)^2} O_1(\textbf{k}_1,\textbf{k}_2) F(\lambda,r_1)F(\lambda,r_2)\right]\nonumber\\ &=&\sum_i V_{CGi}^C + \cdots\; . \label{crossp1} \end{eqnarray} Here, the ellipsis represents the highly singular terms in the $r_2\rightarrow r_1=r$ limit, which behave as higher-order corrections to the potential; they are not taken into account in this work and will be discussed elsewhere.
The central potential of Eq.~(\ref{cp1}) is obtained in the case of a spin-$\frac{3}{2}$ intermediate state, and the explicit forms of $V_{CGi}$ from the box diagram are \begin{equation} V_{CG1}^B = -\frac{g_4^4 \Lambda^7}{128 \sqrt{2} \pi^{7/2} f_{\pi}^4 \Delta_1^2} e^{-\frac{\Lambda^2 r^2}{2}}\; , \end{equation} \begin{equation} V_{CG2}^B = -\frac{g_4^4 \Lambda^5}{16 \sqrt{2} \pi^{7/2} f_{\pi}^4 \Delta_1^2 r^2} e^{-\frac{\Lambda^2 r^2}{2}}\; , \end{equation} \begin{equation} V_{CG3}^B = \frac{g_4^4 \Lambda^3 m^{5/2} e^{m^2/\Lambda^2}}{32 \sqrt{2} \pi^3 f_{\pi}^4 \Delta_1^2 r^{3/2}} e^{-\frac{\Lambda^2 r^2}{4}-m r}\; , \end{equation} \begin{equation} V_{CG4}^B = \frac{g_4^4 \Lambda^3 m^{3/2} e^{m^2/\Lambda^2}}{16 \sqrt{2} \pi^3 f_{\pi}^4 \Delta_1^2 r^{5/2}} e^{-\frac{\Lambda^2 r^2}{4}-m r} - \frac{g_4^4 m^{9/2} e^{2 m^2/\Lambda^2}}{128 \pi^{5/2} f_{\pi}^4 \Delta_1^2 r^{5/2}} e^{-2 m r}\; . \end{equation} With Gaussian form factors it is seen from Eq.~(\ref{app72}) in the Appendix that for a given $\Lambda$ the function $F(\lambda,r)$ is suppressed at large $\lambda$, that is, the dominant contribution to the potential comes from the small-$\lambda$ region. So, in obtaining the analytic expressions of the above potentials, and hereafter, we expand the corresponding functions defined in the Appendix in $\lambda$ and keep only the leading term. In this approach, the crossed diagram contributes to the potential in the same way as the box diagram at leading order in the $\lambda$ expansion, and hence is not presented separately.
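As a quick numerical sanity check of the box-diagram expressions above (an illustrative evaluation of ours, using the Section 3 inputs and $|g_4|=\sqrt{2}$ from the first coupling choice; the sample radii are arbitrary), one can verify the expected short-range enhancement and rapid long-range falloff:

```python
# Illustrative evaluation of V_CG1..V_CG4 above (r in GeV^-1, output in GeV).
import numpy as np

g4, Lam, f_pi, m = np.sqrt(2.0), 0.6, 0.132, 0.135   # GeV units
Delta1 = 2.518 - 2.286                               # M_{Sigma_c*} - M_{Lambda_c}

def V_box_gaussian(r):
    pref = g4**4 / (f_pi**4 * Delta1**2)
    v1 = -pref * Lam**7 / (128 * np.sqrt(2) * np.pi**3.5) * np.exp(-Lam**2 * r**2 / 2)
    v2 = -pref * Lam**5 / (16 * np.sqrt(2) * np.pi**3.5 * r**2) * np.exp(-Lam**2 * r**2 / 2)
    v3 = pref * Lam**3 * m**2.5 * np.exp(m**2 / Lam**2) \
         / (32 * np.sqrt(2) * np.pi**3 * r**1.5) * np.exp(-Lam**2 * r**2 / 4 - m * r)
    v4 = pref * Lam**3 * m**1.5 * np.exp(m**2 / Lam**2) \
         / (16 * np.sqrt(2) * np.pi**3 * r**2.5) * np.exp(-Lam**2 * r**2 / 4 - m * r) \
         - pref * m**4.5 * np.exp(2 * m**2 / Lam**2) \
         / (128 * np.pi**2.5 * r**2.5) * np.exp(-2 * m * r)
    return v1 + v2 + v3 + v4

v_mid, v_far = V_box_gaussian(2.0), V_box_gaussian(10.0)   # ~0.4 fm and ~2 fm
print(v_mid, v_far)   # sizable at short range, essentially zero at long range
```

The potential is sizable below 1 fm and negligible beyond about 2 fm, in line with the short-range character discussed below.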
Similarly, we obtain the potentials from the triangle and two-pion loop diagrams, i.e., \begin{eqnarray} V_{CG5}^T &=& \frac{g_4^2 m \Lambda^3}{32 \sqrt{2} \pi^{7/2}f_\pi^4 \Delta_1 r^2}e^{-\frac{\Lambda^2 r^2}{2}} - \frac{g_4^2 m^{5/2} \Lambda e^{m^2/\Lambda^2}}{16 \sqrt{2} \pi^3f_\pi^4 \Delta_1 r^{5/2}}e^{-\frac{\Lambda^2 r^2}{4}-m r}\nonumber\\ & + & \frac{g_4^2 m^{7/2} e^{2 m^2/\Lambda^2}}{128 \pi^{5/2}f_\pi^4 \Delta_1 r^{5/2}}e^{-2 m r}\;, \end{eqnarray} and \begin{equation} V_{CG6}^L = - \frac{m^{1/2} \Lambda^3}{32 \sqrt{2} \pi^2 f_\pi^4 r^{3/2}} e^{- \frac{1}{4}\Lambda^2 r^2 -m r }\;. \end{equation} To get the central potential for the case of a spin-$\frac{1}{2}$ intermediate state, one only needs to make the replacements \begin{eqnarray} g_4\rightarrow g_2\; ,\; \Delta_1\rightarrow \Delta'_1 = M_{\Sigma_c}-M_{\Lambda_c} \end{eqnarray} in Eq.(\ref{cp1}). Note that in the above asymptotic expressions we keep only terms up to order $\frac{1}{r^{5/2}}$; more singular terms are not taken into account in this work. The dependence of the potential on the various parameters is shown in Figure \ref{fig-pgaussian}. The results indicate that in every case the potential approaches zero quickly at long range, while at short range it varies strongly with the parameters, as expected. As a result, the binding energy depends heavily on the input parameters, i.e., the coupling constants and the cutoff. One can read from the figure that for small couplings the potential becomes too narrow and shallow to bind the two heavy baryons. Table \ref{tb2} presents the binding energies of the $\Lambda_c$-$\bar{\Lambda}_c$ and $\Sigma_c$-$\bar{\Sigma}_c$ systems with different inputs. Schematically, the radial wave functions for the ground state of the $\Lambda_c$-$\bar{\Lambda}_c$ system with Gaussian and monopole form factors are shown in Figure \ref{fig-wf}, respectively; the wave functions for the $\Sigma_c$-$\bar{\Sigma}_c$ system exhibit similar curves.
\begin{figure}[htb] \begin{center} \scalebox{0.45}{\includegraphics{Graph1.eps}}\hspace{7mm} \scalebox{0.45}{\includegraphics{Graph2.eps}} \caption{The $\Lambda_c$-$\bar{\Lambda}_c$ central potential behavior in the case of the Gaussian form factor for different parameter choices.} \label{fig-pgaussian} \end{center} \end{figure} \begin{center} \begin{table}[htb] \caption{\small Binding energies for different inputs with the Gaussian form factor. The left table is for the $\Lambda_c$-$\bar{\Lambda}_c$ system, and the right one for the $\Sigma_c$-$\bar{\Sigma}_c$ system.\vspace{3mm}} \centering \begin{tabular}{|c c c c |}\hline $|g_2|$ & $\Lambda(\mathrm{GeV})$& Binding & Baryonium\\ & &energy & mass\tabularnewline\hline\hline $<$0.9 &$<$0.6 & No & -\\ 0.9 &0.6 &-22 MeV & 4.550 GeV\\ 0.95 &0.6 &-77 MeV & 4.495 GeV\\ 1.0 &0.6 &-168 MeV & 4.404 GeV\\\hline 0.95 &0.7 &-196 MeV & 4.376 GeV\\ 0.95 &0.8 &-227 MeV & 4.345 GeV\\ 0.95 &0.9 &-588 MeV & 3.984 GeV\\\hline \end{tabular} \begin{tabular}{|c c c c|}\hline $g_1$ & $\Lambda(\mathrm{GeV}) $& Binding & Baryonium\\ & & energy & mass\tabularnewline\hline\hline $<1.0$ &$<0.8$ & No & -\\ 1.0 &0.8 & -11 MeV & 4.895 GeV\\ 1.05 &0.8 &-61 MeV & 4.845 GeV\\ 1.1 &0.8 &-145 MeV & 4.761 GeV\\\hline 1.05 &0.85 &-141 MeV & 4.765 GeV\\ 1.05 &0.9 &-266 MeV & 4.640 GeV\\ 1.05 &0.95 & -438 MeV & 4.468 GeV\\\hline \end{tabular} \label{tb2} \end{table} \end{center} \begin{figure}[htb] \begin{center} \scalebox{0.45}{\includegraphics{wavefunction1.eps}} \hspace{10mm} \scalebox{0.45}{\includegraphics{wavefunction2.eps}} \caption{Radial wave function of the $\Lambda_c$-$\bar{\Lambda}_c$ ground state.
The left figure is for the case of the Gaussian form factor with $|g_2|=0.95$ and $\Lambda=0.8$ GeV, and the right one is for the case of the monopole form factor with $|g_2|=0.9$ and $\Lambda=0.95$ GeV.} \label{fig-wf} \end{center} \end{figure} \subsection{Monopole form factor case} In order to regulate the singularities at the origin in configuration space, three types of form factors are usually employed in the literature, i.e., the Gaussian, the monopole, and the dipole form factors \cite{stoks}. For comparison, we also calculate the potential with the monopole form factor using the same factorization technique; the basic Fourier transformation for the monopole form factor is presented in the Appendix for convenience. Here, in obtaining the analytic expressions for the potentials we again expand the corresponding functions in the parameter $\lambda$ and keep only the leading term. The box-diagram contribution then reads \begin{eqnarray} V_{CM}^B(r)= &-& \frac{g_4^4}{8 \pi^{5/2}f_\pi^4 \Delta_1^2 r^{5/2}} \left(\frac{m ^{9/2}}{4} e^{-2 m r} + \frac{\Lambda^4 m ^{1/2}}{4} e^{-2 \Lambda r}\right) \nonumber\\ &+& \frac{g_4^4 \Lambda^{5/2} m^{5/2}}{8\sqrt{2} \pi^{5/2}f_\pi^4 \sqrt{m+\Lambda}\Delta_1^2 r^{5/2}} e^{-(m + \Lambda) r}\;.
\end{eqnarray} Contributions from the triangle and two-pion loop diagrams are \begin{eqnarray} V_{CM}^T (r) &=& \frac{g_4^2 m^{7/2}}{16 \pi^{5/2} f_\pi^4 \Delta_1 r^{5/2}} e^{-2mr} + \frac{g_4^2 m \Lambda^{5/2}}{16 \pi^{5/2} f_\pi^4 \Delta_1 r^{5/2}} e^{-2\Lambda r}\nonumber\\ &-& \frac{g_4^2 m^{5/2}\Lambda^{3/2}}{4\sqrt{2} \pi^{5/2} f_\pi^4 \sqrt{m + \Lambda} \Delta_1 r^{5/2}} e^{-(m + \Lambda)r}\; \end{eqnarray} and \begin{equation} V_{CM}^L (r) = -\frac{(\Lambda^2 - m^2) m^{1/2}}{32 \sqrt{2} \pi^{3/2} f_\pi^4 r^{3/2}}e^{-(m + \Lambda) r} + \frac{(\Lambda^2 - m^2) \Lambda^{1/2}}{32\sqrt{2} \pi^{3/2} f_\pi^4 r^{3/2}}e^{-2 \Lambda r}\; \end{equation} respectively, where the superscripts $B$, $T$, and $L$ stand for the box, triangle, and two-pion-loop diagrams. Note that since there is no heavy baryon intermediate state in the two-pion-loop process, shown in the right graph of Figure~\ref{fig-ot}, its potential range is different. \begin{figure}[htb] \begin{center} \scalebox{0.45}{\includegraphics{Graph3.eps}}\hspace{7mm} \scalebox{0.45}{\includegraphics{Graph4.eps}} \caption{The $\Lambda_c$-$\bar{\Lambda}_c$ central potential behavior in the case of the monopole form factor for different choices of inputs.} \label{fig-pmono} \end{center} \end{figure} We find that the structure of the potential with the monopole form factor is much simpler than in the Gaussian case. The dependence of the potential on the various parameters is shown in Fig.~\ref{fig-pmono}. From the figure one can see that for small couplings the potential changes less, i.e., it tends to be insensitive to the couplings, and hence so does the binding energy. Solving the Schr\"{o}dinger equation we then obtain the eigenvalues for different input parameters, given in Table \ref{tb3}. From the table, we notice that the binding energy is sensitive to and changes greatly with the variation of $g_1$, $|g_2|$, and the cutoff $\Lambda$, as in the case of the Gaussian form factor.
Intuitively, a realistic baryonium can only accommodate small values of these parameters. \begin{center} \begin{table}[htb] \caption{\small Binding energies for different inputs with the monopole form factor. The left table is for the $\Lambda_c$-$\bar{\Lambda}_c$ system, and the right one for the $\Sigma_c$-$\bar{\Sigma}_c$ system.\vspace{3mm}} \centering \begin{tabular}{|c c c c |}\hline $|g_2|$ & $\Lambda(\mathrm{GeV})$& Binding & Baryonium \\& & energy & mass\tabularnewline\hline\hline $<$0.7 &$<$0.9 & No & -\\ 0.8 &0.95 &-117 MeV & 4.455 GeV \\ 0.85 &0.95 &-420 MeV & 4.152 GeV \\ 0.9 &0.95 &-521 MeV & 4.051 GeV\\\hline 0.7 &0.9 &-5 MeV & 4.567 GeV \\ 0.7 &0.95 &-67 MeV & 4.505 GeV \\ 0.7 &1.0 &-252 MeV & 4.320 GeV\\\hline \end{tabular} \begin{tabular}{|c c c c|}\hline $g_1$ & $\Lambda(\mathrm{GeV})$& Binding & Baryonium\\ & & energy & mass\tabularnewline\hline\hline $<0.9$ &$<0.9$ & No & -\\ 0.95 &0.95 &-438 MeV & 4.468 GeV \\ 1.0 &0.95 &-830 MeV & 4.076 GeV \\ 1.05 &0.95 &-1003 MeV & 3.903 GeV \\\hline 0.9 &0.9 &-40 MeV & 4.866 GeV \\ 0.9 &0.95 &-153 MeV & 4.753 GeV \\ 0.9 &1.0 &-345 MeV & 4.561 GeV \\\hline \end{tabular} \label{tb3} \end{table} \end{center} \subsection{Ground state of $\Lambda_b$-$\bar\Lambda_b$ baryonium} \begin{center} \begin{table}[htb] \caption{\small Binding energies with the change of parameters for the $\Lambda_b$-$\bar{\Lambda}_b$ system. The left table is for the Gaussian form factor, and the right one for the monopole form factor.
Here $g_b$ corresponds to $g_2$ in the charmed baryonium sector.\vspace{3mm}} \centering \begin{tabular}{|c c c c |}\hline $|g_b|$ &$\Lambda(\mathrm{GeV})$& Binding & Baryonium \\ & & energy & mass \tabularnewline\hline\hline $<$0.7 &$<$0.7& No & -\\ 0.7 &0.75 &-4 MeV & 11.236 GeV \\ 0.8 &0.75&-76 MeV &11.164 GeV \\ 0.9 &0.75&-294 MeV &10.946 GeV \\\hline 0.8 &0.8&-164 MeV &11.706 GeV \\ 0.8 &0.9&-396 MeV & 10.844 GeV \\ 0.8 &1.0&-622 MeV &10.618 GeV \\\hline \end{tabular} \begin{tabular}{|c c c c |}\hline $|g_b|$ & $\Lambda(\mathrm{GeV})$& Binding &Baryonium\\ & & energy & mass\tabularnewline\hline\hline $<1.0$ &$<0.8$ & No & -\\ 1.0 &0.8 &-11 MeV &11.229 GeV \\ 1.05 &0.8 &-56 MeV &11.184 GeV\\ 1.1 &0.8 &-143 MeV &11.097 GeV \\\hline \hline 1.05 &0.8 &-103 MeV &11.137 GeV \\ 1.05 &0.9 &-164 MeV &11.076 GeV \\ 1.05 &1.0 &-321 MeV &10.919 GeV\\\hline \end{tabular} \label{tb4} \end{table} \end{center} We also estimate the ground state of the $\Lambda_b$-$\bar{\Lambda}_b$ baryonium system with the Gaussian and monopole form factors. The results are shown in Table \ref{tb4}, where $g_b$ corresponds to $g_2$ in the charmed baryonium sector. Note that the dominant decay mode of $\Sigma_b$ is to $\Lambda_b \pi$, so the $\Sigma_b \Lambda_b \pi$ coupling may be constrained by experimental results, which may shed light on further investigations of the nature of possible baryonia. \section{Summary and Conclusions} In the framework of heavy baryon chiral perturbation theory we have studied the heavy baryon-baryon interaction and obtained the central interaction potential in the case of two-pion exchange. Gaussian- and monopole-type form factors are employed to regularize the loop integrals in the calculation. As a leading-order analysis, the tensor potential and higher-order contributions in the $\frac{1}{M_H}$ expansion are neglected.
As expected, we find that the potential is sensitive to the baryon-pion couplings and to the energy cutoff $\Lambda$ used in the form factor. We apply the obtained potential to the Schr\"odinger equation in an attempt to see whether the attraction of the two-pion-exchange potential is strong enough to bind two heavy baryons into a baryonium. We find that it is, for reasonable choices of the cutoff $\Lambda$ and the baryon-pion couplings, which is quite different from the conclusion of a recent study of the $D\bar{D}$ potential through two-pion exchange \cite{qing-xu}. Since the cutoff $\Lambda$ is usually taken to be less than the nucleon mass, i.e., about 1 GeV in the literature, in our calculation we adopt a value similar to that employed in the nucleon-nucleon case. In Ref.~\cite{qing-xu} the authors took a fixed coupling $g=0.59$ and obtained binding with a large cutoff. In our calculation for the baryonium system with the Gaussian form factor, by contrast, there is no binding for $g_1<1.0$ and $\Lambda<0.8$ GeV. Increasing the coupling constant leads to an even smaller $\Lambda$ for a given binding energy. Based on our results it is interesting to note that if binding exists in the $\Sigma_c$-$\bar{\Sigma}_c$ system, with either the Gaussian or the monopole form factor, the coupling $g_1$ must be much bigger than what is conjectured in Ref.~\cite{T.M.Yan}. However, for the $\Lambda_c$-$\bar{\Lambda}_c$ system, to form a bound state the baryon-Goldstone coupling $g_2$ can be similar in magnitude to estimates in the literature. Notice that the potential depends not only on the coupling constants and the cutoff $\Lambda$, but also on the type of form factor employed. Our calculation indicates that the Gaussian and monopole form factors act similarly in regulating the singularity at the origin and lead to similar results, with only subtle differences, for both the $\Lambda_c$ and $\Lambda_b$ systems.
Numerical results show that the heavy baryon-baryon potentials are more sensitive to the coupling constants in the case of the monopole form factor, but more sensitive to the cutoff $\Lambda$ in the case of the Gaussian form factor. From our calculation it is tempting to conjecture that the recently observed states $Y(4260)$ and $Y(4360)$, but not $Y(4660)$ \cite{ex}, in the charm sector could be $\Lambda_c$-$\bar{\Lambda}_c$ bound states with a reasonable amount of binding energy, which deserves further investigation. Our result also suggests that the newly observed ``exotic'' state in the bottom sector, the $Y_b(10890)$ \cite{K.F.Chen}, could be treated as a $\Lambda_b$-$\bar{\Lambda}_b$ bound state, though with an extremely large binding energy. It is worth emphasizing at this point that although our results favor the existence of heavy baryonia, it is still hard to draw a definite conclusion, especially with only the leading-order two-pion-exchange potential. The sensitivity of the potential to the coupling constants and the energy cutoff also looks unusual and calls for further investigation. To come closer to the truth, one needs to go beyond the leading order of accuracy in the $\frac{1}{M_H}$ expansion; one should also investigate the potential when the two baryon-like triquark clusters carry color, as proposed in the heavy baryonium model \cite{Y4260-Qiao,Qiao}; last, but not least, the unknown and difficult-to-evaluate annihilation-channel effect on the heavy baryonium potential should also be clarified, especially for the heavy baryon-antibaryon interaction; this effect could nevertheless be parameterized phenomenologically so as to reproduce the known widths of some observed states. \vspace{0.3cm} {\bf Acknowledgments} This work was supported in part by the National Natural Science Foundation of China (NSFC) and by the CAS Key Projects KJCX2-yw-N29 and H92A0200S2. \newpage \vspace{0.5cm} {\bf Appendix} \vspace{.3cm}
\section{Introduction} Highly excited string states are of particular interest in perturbative string theory. An exponentially growing number of states at higher levels leads to a characteristic temperature of the string ensemble, the Hagedorn temperature, at which the partition function diverges. This divergence may be interpreted as a signal of a phase transition; above this temperature, string theory has been speculated to have far fewer degrees of freedom than any kind of quantum field theory\cite{Atick:1988si}. This would be related to rich ``stringy symmetries'' that might emerge at a scale much higher than the string scale\cite{Gross:1988ue}. Furthermore, in various extreme situations, such as the early Universe or high-energy scattering processes, highly excited states can be created, and their properties would then be important for applications of string theory\footnote{An example of a recent application is found in \cite{Skliros:2013pka}.}. Excited states of a string are usually unstable and eventually decay. There have been many studies of this instability, concerning, e.g., the typical lifetime or the decay spectrum\cite{splitting_prob,decays}. One of the interesting setups for investigating this is a semi-inclusive decay process, where only the mass (and the angular momentum in some cases) of the initial state is fixed. By taking an average over the initial states, the process exhibits thermodynamic behavior. Amati and Russo\cite{Amati:1999fv} have shown that the decay spectrum of a highly excited fundamental bosonic string is thermal at the Hagedorn temperature. Since then, there have been many works on this type of analysis, for boson emission from an NS-R superstring and also for the emission of closed string states from a heavy closed string\cite{Manes:2001cs, Chen:2005ra}, especially on the decay rate of maximal-angular-momentum states, with an interest in searching for possible long-lived states\cite{Decay_w_ang_mom}.
Some other applications of this procedure to string cross sections are found in \cite{Kuroki:2007aj, Matsuo:2009sx}. Another motivation for studying the decay of a heavy string comes from black hole physics. A couple of decades ago, Susskind\cite{Susskind:1993ws} proposed that the microstates of a black hole could be explained by the exponentially growing number of states of a heavy fundamental string. This correspondence is considered to take place at the point where the typical size of a free string of a given mass becomes the Schwarzschild radius associated with that mass. The entropies of the two descriptions become the same order at that point. This idea was pursued further by Horowitz and Polchinski\cite{Horowitz:1996nw}, who showed that this correspondence indeed holds for various types of black holes. The correspondence point of a black hole and a fundamental string typically appears at $g_s\sim N^{-1/4}$, where $g_s$ is the string coupling constant and $N$ is the excitation level of the fundamental string. For very large $N$, the leading-order treatment of this heavy string in perturbation theory should work, and may capture some aspects of the corresponding black holes\cite{Matsuo:2008fj}. Among others, one of the characteristics of a black hole is its greybody factor. Although black holes exhibit blackbody radiation at the horizon, their gravitational potentials alter the spectrum seen by an asymptotic observer. This correction factor, known as the greybody factor, was actually an important clue for the string/gauge theory correspondence in the early days of its development\cite{Das:1996wn, Maldacena:1996ix}. The greybody factor for a near-BPS black hole that has a D-brane construction shows perfect agreement with the gauge-theory calculation on the branes.
As we will show in this paper, the decay rates of a heavy superstring indeed turn out to exhibit thermal behavior at the Hagedorn temperature, and we can read off the corresponding greybody factors for the heavy superstring. We may, thus, expect that similar insight can be obtained for a more general class of black holes through the study of fundamental string decay. It should be noted that, as explained in the main part of the paper, our analysis is to the leading order of perturbation theory as well as to the leading order of the $1/N$ expansion, and the correspondence point is not reached completely within this regime. For example, if we want to obtain the spectrum at the Hawking temperature, instead of that at the Hagedorn temperature, we would need to take the self-gravitational effect into account\cite{Horowitz:1997jc, Damour:1999aw}, but we will neglect self-interactions in this paper. However, we believe that the current study can be thought of as a first step toward this understanding, and it deserves more detailed study in the future. In this paper, we consider the emission of a single massless open/closed string state from the decay of a massive open/closed superstring in the critical dimensions. As the initial state, we prepare an averaged open or closed superstring state at a very highly excited level. We specify only the mass (and, therefore, the excitation level) of the string, and we observe the energy spectrum of the emitted states. As the emitted states, we consider both open and closed string states. In the perturbative regime, we can take the massless states as the main channel of decay. We also integrate over the angular dependence and sum over the polarizations of the emitted massless states. We will work with the Green-Schwarz formulation of superstrings in the light-cone gauge. It has the advantage that the physical degrees of freedom are explicit, and we do not need to worry about the treatment of unphysical modes.
As we shall see, our setup fits the restriction on the momentum of the vertex operators in the light-cone gauge, and we can carry out the whole calculation very explicitly. The organization of this paper is as follows. In Section \ref{sec:emiss-massl-stat}, we present the setup of a semi-inclusive decay process of a heavy superstring. Then, we carry out the calculation of the emission rates of massless states from a heavy open superstring, using the Green-Schwarz formalism in the light-cone gauge. We also discuss closed string emission from both heavy open and heavy closed superstrings. We conclude this section with a detailed discussion of the emission rates in each case. There, we compare the greybody factors obtained from the emission rates with those of various types of black holes. In Section \ref{sec:conlusion-discussion}, we summarize our results and propose possible future directions. Appendix \ref{sec:misc-calc} is devoted to a summary of the details of the calculation. \section{Emission of massless states} \label{sec:emiss-massl-stat} \subsection{Semi-inclusive decay process} \label{sec:incl-decay-proc} As stated in the introduction, we observe emission from a heavy superstring at asymptotic infinity and remain agnostic about the detailed profiles of the initial and final string states. We study a semi-inclusive decay process of a highly excited superstring in the critical dimensions, with a massless state (either bosonic or fermionic) emitted. The emitted massless state is characterized by its momentum $k^\mu$ and polarization tensor $\gamma(k)$. The initial state is at an excited level $N$ and carries momentum $P^\mu_\text{ini}$. It decays into a state at level $N'$ with momentum $P^{\mu}_\text{fin}$, emitting a massless state. First, we choose the center-of-mass frame of the initial string, $P_\text{ini}^\mu=(M, \vec{0})$, with $\sqrt{\alpha'} M =\mathcal{O}(\sqrt{N})$.
In this frame, the momentum of the emitted state is $k^\mu = (-\omega, \vec{k})$ with $\omega^2=|\vec{k}|^2$, as it is massless, and then by momentum conservation the final-state momentum is determined as $P_\text{fin}^\mu=(-M+\omega, -\vec{k})$. The (differential) decay rate is given by \begin{align} \Gamma =& \frac{d^{9} k}{M(M-\omega)\omega} P(\Phi_N \rightarrow \gamma(k) + \Phi_{N'}) \,, \end{align} where $\Phi_N$ denotes an arbitrary string state at level $N$, and the probability $P(\Phi_N \rightarrow \gamma(k) + \Phi_{N'})$ is the modulus squared of the amplitude of the process. We will not be interested in the angular dependence of the emission, and $d^9 k$ will eventually be set to $\omega^8 d \omega$. Both the initial and the final string are heavy, with masses assumed to be much larger than the typical energy of the emitted massless states, $M \gg \omega$. We are considering a semi-inclusive decay process: we specify only the mass (and, therefore, the level) of the final state and are interested in the energy distribution of the emitted states. We do not consider all possible final states, which may involve multistring states and many light states, but rather restrict ourselves to this three-body decay process; namely, we work to leading order in perturbation theory for a given process. In summary, for the calculation of the probability, we sum over all possible final string states $\Phi(N')$ and emitted massless states $\gamma(k)$, as well as over the angular part of $k^\mu$. As for the initial state, we do not prepare any particular state of mass $M$ but rather take a typical state by averaging over the possible states of the initial string at a given level.
The probability is, thus, \begin{eqnarray} P(\Phi_{N} \to \gamma(k)+\Phi_{N'}) =\frac{1}{{\cal{G}}(N)}\sum_{\Phi|N}\sum_{\Phi|N'}\sum_{\gamma{}} |\langle \Phi(N')|V(\gamma{}, k)|\Phi(N)\rangle|^2 , \end{eqnarray} where $\sum_{\Phi|N}$ represents the summation over all the states at level $N$, and $\Phi(N)$ stands for a state at level $N$. The number of states at level $N$ is denoted by ${\cal{G}}(N)$, and the asymptotic form of ${\cal{G}}(N)$ at large $N$ is calculated in Appendix \ref{sec:dens-stat-haged}. $V(\gamma,k)$ is the string vertex operator corresponding to the emitted state. In general, it is a formidable task to handle a general string state at a high fixed level, due to the exponentially growing number of states. We recast the expression for the probability into a more tractable form, following the trick introduced in \cite{Amati:1999fv}. It is convenient to introduce a projection operator onto the level-$N$ states, \begin{eqnarray} \hat{P}_N = \oint \frac{dv}{2\pi i v} v^{\hat{N}-N} \,, \qquad \sum_{\Phi|N} \ket{\Phi} = \sum_{\Phi} \hat{P}_N \ket{\Phi} \,, \label{projectionP} \end{eqnarray} where the sum on the right-hand side of the second equation runs over all the states in Fock space. Then, the probability is written as \begin{align} P(\Phi_{N} \to \gamma(k)+\Phi_{N'}) =& \frac{1}{{\cal{G}}(N)}\sum_{\gamma} \sum_{\Phi,\Phi'} \big| \bra{\Phi'} \hat{P}_{N'}\, V(\gamma,k) \, \hat{P}_N \ket{\Phi} \big|^2 \nonumber\\ =& \frac{1}{{\cal{G}}(N)}\sum_{\gamma} \oint \frac{dw}{2\pi i w}w^{-N} \oint \frac{dv}{2\pi i v}v^{-N'} \mbox{tr}[V^\dag(\gamma, k)\, v^{\hat{N}}\, V(\gamma, k)\, w^{\hat{N}}] \nonumber\\ =& \frac{1}{{\cal{G}}(N)}\sum_{\gamma} \oint \frac{dw}{2\pi i w}w^{-N} \oint \frac{dv}{2\pi i v}v^{N-N'} \mbox{tr}[V(\gamma,k,1)^\dagger\, V(\gamma, k,v) \, w^{\hat{N}}] \,, \label{eq:5} \end{align} where the trace is taken in Fock space, namely, over the oscillator part.
In the last line, we have used the fact that the operator $v^{\hat{N}}$ transports the (oscillator part of the) vertex operator to the position $v$, as $v^{\hat{N}}V(\gamma, k, 1)v^{-\hat{N}}=V(\gamma, k, v)$. The third entry of the vertex operator now stands for the insertion point along the world-sheet time direction $\tau$, with $v=e^{i\tau}$. As for the bosonic zero-mode part, the momentum operators are evaluated at the initial- or final-state momentum values, since this is a disk amplitude. The other contribution from the bosonic zero modes is a trivial momentum-conservation factor that we do not write down explicitly in this paper. The trace takes a form similar to the oscillator part of string one-loop computations. However, it should be noted that there is a crucial difference: the trace here originates from the square of the disk amplitude and is, thus, not the supertrace defined with the $(-1)^F$ operator inserted. Therefore, it is different from superstring one-loop amplitudes, and the result is nonvanishing even though we have only two vertex operators inserted. Eq.~\eqref{eq:5} is the master formula for the semi-inclusive decay process we are going to study. We will evaluate this trace in open and closed superstring theory, with the two identical vertex operators inserted for both open and closed massless states. A couple of comments on the emission of other states are in order. A heavy string can also emit massive states or split into heavy strings. We now briefly argue that the emission of soft massless states is the dominant decay channel. A heavy string may split into two heavy strings. In this case, the two final states have string-scale masses, $M^2 \sim \mathcal{O}(N)/\alpha'$. Starting from the rest frame of the initial string, these two strings move much more slowly than light states unless their spatial momenta are of $\mathcal{O}(N)$ in the string scale.
Therefore, an asymptotic observer would have little chance of detecting such heavy string states. Note that once higher-order effects are included, these kinds of end states become even more irrelevant, as they are bound by their own gravitational potential. Among the exponentially many possible states, noninteracting pairs, like BPS configurations, would be negligibly scarce. A heavy state may further decay into lighter states and eventually emit sufficiently light states that can propagate far enough to be detected. There can be an enormous number of intermediate steps ending up with light states, and such processes may be favored from an entropic viewpoint. However, in this paper, we consider only the leading-order contribution of string perturbation theory and do not take this multistep decay process into account. It is interesting to investigate the competition between the growing number of possible intermediate states and the suppression by powers of the coupling constant, but it is beyond the scope of our current study. Finally, we consider the contribution from rather light but stringy massive states. Since we study highly excited string states, these lowest-level states might be regarded as light enough to enter our consideration. As we will see, it turns out that the emission spectrum for massless states becomes a thermal one at the Hagedorn temperature. The Hagedorn temperature of the superstring, $T_H=(\pi \sqrt{8\alpha'})^{-1}$, is numerically smaller than the mass of the first excited state, $M_1 =c (\alpha')^{-1/2}$, where $c=1$ for open and $c=2$ for closed string states. Therefore, in the thermal distribution at the Hagedorn temperature, the massive states will hardly be observed, and we concentrate on massless-state emission. For the same reason, the energy of the emitted massless state should also be much smaller than the string scale.
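The Boltzmann suppression of even the lightest massive level at the Hagedorn temperature is already severe; a quick numerical estimate (in string units $\alpha'=1$ for this check) makes the point:

```python
import math

alpha_p = 1.0                                       # string units for this estimate
T_H = 1.0 / (math.pi * math.sqrt(8.0 * alpha_p))    # Hagedorn temperature
M1_open = 1.0 / math.sqrt(alpha_p)                  # first massive level, c = 1
M1_closed = 2.0 / math.sqrt(alpha_p)                # first massive level, c = 2

# Relative Boltzmann weight of the first massive level at T_H:
# M1/T_H = pi*sqrt(8) ~ 8.9 for the open string, twice that for the closed one.
suppression_open = math.exp(-M1_open / T_H)
suppression_closed = math.exp(-M1_closed / T_H)
```

With $M_1/T_H = \pi\sqrt{8} \approx 8.9$ the open-string weight is already of order $10^{-4}$, and the closed-string one of order $10^{-8}$.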
Therefore, we take the emission of soft massless states as the main channel of the decay process in this paper. \subsection{Open string emission from an open superstring} \label{sec:open-string-emission} First, we consider the open string emission rate from a heavy open superstring. In this case, the masses of the initial and the final states are $M=\sqrt{N/\alpha'}$ and $M'=\sqrt{N'/\alpha'}$. From momentum conservation, we find that the level difference between the initial and the final state is $\mathcal{O}(\sqrt{N})$, \begin{align} \label{eq:80} N - N' =& 2\omega \sqrt{\alpha' N} + \alpha' \omega^2 \,, \end{align} and the last term is negligible as $\sqrt{\alpha'}\omega \ll \sqrt{N}$. We now explicitly evaluate the traces of massless boson and fermion vertex operators shown in the previous section. From now on, we set the Regge slope parameter $\alpha'=1/2$ for simplicity. For the Green-Schwarz superstring in the light-cone gauge, the vertex operators for massless boson and fermion states are \begin{align} \label{eq:7} V_B(\zeta,k,z) =& \left( \zeta^i(k) B^i - \zeta^-(k) {p}^+ \right) e^{i k \cdot X(z)} \,, \\ V_F(u,k,z) =& \left( u^a(k) F^a + u^{\dot{a}}(k) F^{\dot{a}} \right) e^{i k \cdot X(z)} \,, \end{align} where $B^i$, $F^a$, and $F^{\dot{a}}$ are represented by the light-cone fields\cite{GSW}. The explicit forms are given in Appendix \ref{sec:calculation-trace}. It should be noted that these vertex operators are valid only for emission with momentum $k^+=0$; otherwise they take more complicated forms. Since we have neglected the angular distribution of the momentum of the emitted states, we can choose the momentum $k^\mu = (-\omega,0,\cdots,0,\omega)$ by a spatial $SO(9)$ rotation of the rest frame of the initial string. So we can consistently choose the light-cone coordinates so that $k^+=0$ for the emitted state.
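Both kinematic statements above can be checked numerically. The sketch below (with illustrative values of $N$ and $\omega$) verifies that the level difference obtained from the mass-shell conditions agrees with the leading term of \eqref{eq:80} up to a correction of size $\alpha'\omega^2$, and that the chosen momentum indeed has $k^+=0$:

```python
import math

# Illustrative values: a highly excited level and a soft emitted quantum.
alpha_p = 0.5
N = 10**6
omega = 1.0

M = math.sqrt(N / alpha_p)              # initial mass, N = alpha' M^2
Msq_fin = (M - omega)**2 - omega**2     # mass-shell of the recoiling string
N_fin = alpha_p * Msq_fin               # final level
dN_exact = N - N_fin
dN_leading = 2.0 * omega * math.sqrt(alpha_p * N)

# Light-cone components of k^mu = (-omega, 0, ..., 0, omega):
# k^+ = (k^0 + k^9)/sqrt(2) vanishes, so the simple vertex operators apply.
k_plus = (-omega + omega) / math.sqrt(2.0)
```

For $\sqrt{\alpha'}\,\omega \ll \sqrt{N}$ the discrepancy between `dN_exact` and `dN_leading` is bounded by $\alpha'\omega^2$, which is tiny compared with the $\mathcal{O}(\sqrt{N})$ level difference.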
We are going to calculate the decay rate of a heavy open superstring with a massless boson/fermion state emitted, \begin{align} \label{eq:15} \Gamma_A =& \frac{\omega^7 d\omega}{M^2} P\big(\Phi_N \rightarrow \gamma_A(k) + \Phi_{N'} \big) \,, \end{align} where $P\big(\Phi_N \rightarrow \gamma_A(k) + \Phi_{N'} \big)$ is given by \eqref{eq:5} with the vertex operator $V_A$, where $A=B$ for boson emission and $F$ for fermion emission. Correspondingly, the polarization is $\gamma_B = \zeta^i, \zeta^-$ or $\gamma_F=u^a, u^{\dot{a}}$. What we need to do first is to calculate the oscillator trace and then evaluate the $v$ and $w$ integrals to derive the probability $P$. The explicit calculation is straightforward but rather lengthy. It is summarized in Appendix \ref{sec:open-string-vertex}. We quote the final result of the trace calculation, \begin{align} \label{eq:8} \mbox{tr}\left( V_B(\zeta,k,1)^\dagger V_B(\zeta,k,v) \, w^{\hat{N}} \right) =& \left( |\zeta^i|^2 \Omega(v,w) +|\zeta^-|^2 (P_\text{ini}^+)^2 \right) Z(w) \,,\\ \mbox{tr}\left( V_F(u,k,1)^\dagger V_F(u,k,v) \, w^{\hat{N}} \right) =& \frac{1}{4} \bigg[ P_\text{ini}^+ u^{a *} u^a + u^{\dot a *} \gamma_{\dot a b}^i u^{ b} P_\text{ini}^i + u^{ a *} \gamma_{a\dot b}^i u^{\dot b} P_\text{ini}^i \nonumber\\&\hskip2em + \frac{ u^{\dot a *} u^{\dot a}}{P_\text{ini}^+} \big((P_\text{ini}^i)^2+ \Omega(v,w) \big) \bigg] \Xi(v,w) Z(w) \,, \end{align} where \begin{align} \label{eq:9} \Omega(v,w) = \sum_{n=1}^\infty \, n \frac{v^n+(w/v)^n}{1-w^n} \,, \qquad \Xi(v,w)= \frac{1}{2}+ \sum_{n=1}^\infty \frac{v^n + (w/v)^n}{1+w^n} \,, \end{align} and $Z(w)$ is the partition function, \begin{align} \label{eq:68} Z(w)=& 16 \, \left(\frac{f_+(w)}{f_-(w)} \right)^8 \,, \qquad {f}_\pm (w)= \prod_{n=1}^\infty (1\pm w^n) \,. \end{align} $f_\pm(w)$ are contributions from bosonic oscillators ($-$) and fermionic ones ($+$), respectively. The factor of $16$ comes from the vacuum degeneracy due to the fermionic zero modes.
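As a cross-check on \eqref{eq:68}, the level degeneracies $\mathcal{G}(N)$ can be generated by expanding $Z(w)$ order by order; extracting the coefficient of $w^N$ is the discrete counterpart of the contour integral in \eqref{projectionP}. A short sketch:

```python
def open_superstring_degeneracies(nmax):
    """Coefficients G(N) of Z(w) = 16 * (f_+(w)/f_-(w))^8 up to order w^nmax."""
    Z = [0] * (nmax + 1)
    Z[0] = 16                        # fermionic zero-mode vacuum degeneracy
    for n in range(1, nmax + 1):
        for _ in range(8):           # multiply the series by (1 + w^n)^8
            for k in range(nmax, n - 1, -1):
                Z[k] += Z[k - n]
        for _ in range(8):           # multiply the series by 1 / (1 - w^n)^8
            for k in range(n, nmax + 1):
                Z[k] += Z[k - n]
    return Z

G = open_superstring_degeneracies(40)
```

The first few coefficients are $16$, $256$, $2304$, reproducing the $16$ massless states and the $256$ states (128 bosonic plus 128 fermionic) of the first massive level.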
As noted above, the trace here differs from the supertrace of the superstring one-loop calculation, so the bosonic and fermionic parts do not cancel but instead produce the partition function. Its asymptotic behavior is evaluated in Appendix \ref{sec:dens-stat-haged}. Since the initial momentum is given by $P_\text{ini}^i=0$ and $P_\text{ini}^+=\sqrt{N}$, the terms multiplied by $P_\text{ini}^i$ vanish in the expression. Let us start with the boson emission process: \begin{align} \label{eq:1} P(\Phi_{N} \to \zeta(k)+\Phi_{N'}) = \frac{1}{{\cal{G}}(N)}\sum_{\zeta} \oint \frac{dw}{2\pi i w}w^{-N} \oint \frac{dv}{2\pi i v}v^{N-N'}\left( |\zeta^i|^2 \Omega(v,w) +N |\zeta^-|^2 \right) Z(w) \,. \end{align} After the contour integration with respect to $v$, only the term $v^{-n}$ with $n=N-N'$ in $\Omega$ survives. Note that $N>N'$. Thus, we have \begin{align} \label{eq:10} P(\Phi_{N} \to \zeta(k)+\Phi_{N'}) =& \frac{1}{{\cal{G}}(N)}\sum_{\zeta} |\zeta^i|^2 \oint \frac{dw}{2\pi i w} \frac{(N-N')w^{-N'}}{1-w^{N-N'}} Z(w) \,. \end{align} For large $N'$, the integral can be evaluated by the saddle point method. For $w=e^{-\beta}$, the dominant contribution will come from $\beta\simeq 0$. By using the modular transformation property of the partition function, which is shown in Appendix \ref{sec:dens-stat-haged}, one finds a saddle point at $\beta = {\pi \sqrt{2/N'}}$. After the Gaussian integration around the saddle point and noting $\sqrt{N}-\sqrt{N'} \simeq \omega/\sqrt{2}$, which follows from \eqref{eq:80} with $\alpha'=1/2$, we obtain \begin{align} \label{eq:11} P(\Phi_{N} \to \zeta(k)+\Phi_{N'}) \simeq & \frac{1}{\mathcal{G}(N)} \sum_{\zeta} |\zeta^i|^2 \frac{(N-N') e^{\pi\sqrt{8N'}}N^{\prime -\frac{11}{4}}}{1-e^{-\sqrt{2}\pi\frac{N-N'}{\sqrt{N'}}}} \big(1+\mathcal{O}(N^{-1/2}) \big) \nonumber\\\simeq & \frac{\omega \sqrt{N}}{e^{2\pi \omega}-1} \big(1+\mathcal{O}(N^{-1/2}) \big) \,.
\end{align} Hereafter, the $\mathcal{O}(N^{-1/2})$ correction terms, $\mathcal{O}(1)$ numerical coefficients, and the summation over the polarizations will often be implicit. This leads to a thermal distribution of the emission rate \begin{align} \label{eq:12} \Gamma_B \simeq & \frac{\omega^8 d\omega}{M^2} \frac{\sqrt{N}}{e^{\beta_H \omega}-1} \end{align} with the inverse temperature $\beta_H=2\pi$, namely, the inverse Hagedorn temperature. We move on to the fermion emission rate. We have \begin{align} & P(\Phi_{N} \to u(k)+\Phi_{N'}) \nonumber\\=& \frac{1}{4 {\cal{G}}(N)}\sum_{u} \oint \frac{dw}{2\pi i w}w^{-N} \oint \frac{dv}{2\pi i v}v^{N-N'} \left( \sqrt{N} u^{a*} u^a +\frac{u^{{\dot{a}} *} u^{\dot{a}}}{\sqrt{N}} \Omega(v,w) \right) \Xi(v,w) Z(w) \label{eq:79} \,. \end{align} There are two terms in the parentheses, and the first term appears to be dominant since we take $N$ to be large. This is indeed the case, as explicitly checked by evaluating the contour integrals. A brief comment on this comparison is found in the last part of Appendix \ref{sec:eval-domin-contr}. We, thus, focus on the first term. In the same way as in the boson case, we have \begin{align} P(\Phi_{N} \to u(k)+\Phi_{N'}) =& \frac{\sqrt{N}}{4{\cal{G}}(N)}\sum_{u} |u^a|^2 \oint \frac{dw}{2\pi i w} \frac{w^{-N'}}{1+w^{N-N'}} Z(w) \nonumber\\\simeq & \frac{\sqrt{N}}{{\cal{G}}(N)}\sum_{u} |u^a|^2 \frac{e^{\pi\sqrt{8N'}} (N')^{-\frac{11}{4}}} {1+e^{-\sqrt{2}\pi\frac{N-N'}{\sqrt{N'}}}} \nonumber\\\simeq & \sum_{u} |u^a|^2 \frac{\sqrt{N}} {e^{2\pi\omega}+1} \,, \end{align} where the saddle point appears at the same value as in the boson case, $\beta=\pi\sqrt{2/N'}$. Thus, we have the emission rate for a massless fermion, \begin{align} \Gamma_F \simeq & \frac{\omega^7 d\omega}{M^2} \frac{\sqrt{N}}{e^{\beta_H \omega}+1} \,, \end{align} which depends on the same inverse temperature $\beta_H$. We now make a comment on the twisted trace part.
There is also a contribution from nonplanar diagrams, where the copies of the vertex operators are located on the opposite ends of the open-string world sheet. The twisting is realized by the operator\cite{GSW} \begin{align} \label{eq:25} \Theta = -(-1)^{\hat{N}} \,, \end{align} whose action on the vertex operator is \begin{align} \label{eq:33} \Theta V'(k,z) \Theta = V'(k,-z) \,, \end{align} where $V'(k,z)$ denotes the oscillator part of the vertex operator. We need to include the twisted sector as in \cite{Amati:1999fv}, by replacing the vertex operator as $V(\gamma,k) \rightarrow (V(\gamma,k) + \Theta V(\gamma,k) \Theta)/\sqrt{2}$. We then have the untwisted part (taking the first vertex operator squared or the second one squared), which is equivalent to the one we have already considered. The other is the twisted part, which comes from the cross terms. The net effect of the twisting is just to replace the location of the second vertex operator as $V(\gamma,k,-v)$. This amounts to replacing $\Omega(v,w) \rightarrow \Omega(-v,w)$ and $\Xi(v,w) \rightarrow \Xi(-v,w)$ in the evaluation of the $v$ integral in both the boson and the fermion emission rates. Therefore, the final form is obtained by multiplying a level-difference-dependent phase factor into the untwisted result, as \begin{align} \Gamma_B \simeq & \frac{\omega^8 d\omega}{M^2} \frac{(-1)^{N-N'} \sqrt{N}}{e^{\beta_H \omega}-1} \,. \end{align} This tells us that for odd $N - N'$, the twisted-part contribution cancels the untwisted one. However, it does not change the thermal behavior of the decay rate, and we simply omit the contribution from the twisted part. In order to have a consistent open-closed superstring theory in flat spacetime of the critical dimension, it is known that we need to consider an unoriented theory with an appropriate Chan-Paton factor. For simplicity, we first examine the effect of the unoriented projection without the Chan-Paton factor.
The physical states are to satisfy the condition \begin{align} \label{eq:42} \ket{\Phi} =& \frac{1+\Theta}{2} \ket{\Phi} \,. \end{align} Replacing the initial and final physical states with these unoriented ones, the calculation parallels the twisted-sector calculation above. Finally, we find, for example, for open bosonic emission, \begin{align} \Gamma_B \simeq & \frac{1+(-1)^{N-N'}-(-1)^{N}-(-1)^{N'}}{4} \frac{\omega^8 d\omega}{M^2} \frac{\sqrt{N}}{e^{\beta_H \omega}-1} \,. \label{eq:Unori} \end{align} If both the initial-state level $N$ and the final one $N'$ are odd, as required by the unoriented projection condition \eqref{eq:42}, the level-dependent phase factor of \eqref{eq:Unori} is trivially unity. Therefore, it does not have any quantitative effect. After including the Chan-Paton factor, both odd and even states appear. They do not mix, and the decay rates for each of them are proportional to the oriented ones. It therefore makes no difference as long as we are interested in the thermal behavior, so we do not refer to the Chan-Paton factor or the unoriented projection in the rest of this paper. \subsection{Closed-string emission} \label{sec:closed-string-case} We move on to the emission of massless closed-string states, namely, the graviton, gravitino, dilaton, and so on. \subsubsection{Closed string from a closed superstring} \label{sec:closed-from-closed} We consider the emission of a massless closed string state from a heavy closed superstring. The semi-inclusive decay process is the same as in the heavy open string case. The mass-shell condition for the closed string is \begin{align} M^2 =& \frac{2}{\alpha'} \left( N_R + N_L \right) = \frac{4}{\alpha'} N \,, \end{align} where $L$ and $R$ refer to the left- and right-moving parts as usual. From this, the level difference between the initial and the final states is found to be \begin{align} N-N' =& \sqrt{\alpha' N} \omega + \frac{\alpha'}{4}\omega^2 \,.
\end{align} In this case, the momentum operator picks up the initial-state energy $P_\text{ini}^+ = \sqrt{\frac{2N}{\alpha'}}$. The calculation is parallel with the open-string case. The oscillator part of the vertex operator is factorized as \begin{align} \label{eq:34} V^\text{(closed)}(\gamma,k,e^{i\tau}) =& \int_0^{\pi} \frac{d\sigma }{\pi} :V_L(\gamma_L,\tfrac{k}{2},e^{i(\tau+\sigma)}): \, : V_R(\gamma_R,\tfrac{k}{2},e^{i(\tau-\sigma)}): \nonumber\\=& \int_0^{\pi} \frac{d\sigma }{\pi} e^{-2i\sigma(\hat{N}_L-\hat{N}_R)} :V_L(\gamma_L,\tfrac{k}{2},e^{i\tau}): \, : V_R(\gamma_R,\tfrac{k}{2},e^{i\tau}) : e^{2i\sigma(\hat{N}_L-\hat{N}_R)} \,, \end{align} where $V_{L,R}$ is either $V_B$ or $V_F$, and $\gamma=\gamma_L \otimes \gamma_R$. As shown, the normal ordering is taken for the left and right parts individually. Since we consider massless vertex operators, we do not write the normal ordering symbol hereafter. As we consider a tree level three-point amplitude with closed string states that satisfy the level matching condition, the $\sigma$ integral trivially drops out. The initial and final states are also decomposed into \begin{align} \label{eq:43} \ket{\Phi(N)} = \ket{\Phi_L(N)} \otimes \ket{\Phi_R(N)} \,, \end{align} and these two sectors must have the same level. Then, the projection operator is also decomposed as \begin{align} \label{eq:46} \hat{P}_N=& \oint \frac{dv_L}{2\pi i v_L} v_L^{\hat{N}_L-N} \times \oint \frac{dv_R}{2\pi i v_R} v_R^{\hat{N}_R-N}\,, \end{align} which gives \begin{align} \label{eq:47} \sum_{\Phi_L|N} \sum_{\Phi_R|N} \ket{\Phi_L(N)} \otimes \ket{\Phi_R(N)} =& \sum_{\Phi_L} \sum_{\Phi_R} \hat{P}_N \ket{\Phi_L} \otimes \ket{\Phi_R} \end{align} where in the sums on the right-hand side, the levels of $\Phi_L$ and $\Phi_R$ are not restricted to be the same. As shown in Appendix \ref{sec:dens-stat-haged}, the density of states for closed string $\mathcal{G}^\text{cl}(N)$ is the square of the open-string one. 
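The level projector $\hat{P}_N = \oint \frac{dv}{2\pi i v} v^{\hat{N}-N}$ just introduced acts, on the unit circle, as a discrete Fourier average; a minimal numerical sketch (the level cutoff $L$ is an illustrative truncation, not from the text):

```python
import numpy as np

# On the unit circle v = e^{2πik/L}, the contour integral becomes the
# discrete average (1/L) Σ_k e^{2πik(n-N)/L}, which is 1 for n = N (mod L)
# and 0 otherwise: the projector keeps only the level-N component.
def project(levels, amplitudes, N, L=64):
    ks = np.arange(L)
    out = np.zeros(len(levels), dtype=complex)
    for i, (n, a) in enumerate(zip(levels, amplitudes)):
        out[i] = a * np.mean(np.exp(2j * np.pi * ks * (n - N) / L))
    return out

levels = [0, 1, 2, 3]
amps = [1.0, 2.0, 3.0, 4.0]
projected = project(levels, amps, N=2)
assert np.allclose(projected, [0, 0, 3.0, 0])  # only level 2 survives
```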
Therefore, the probability can be evaluated as \begin{align} & P(\Phi_N \rightarrow \gamma(k) + \Phi_{N'}) \nonumber\\=& \frac{1}{\mathcal{G}^\text{cl}(N)} \sum_{\Phi |N} \sum_{\Phi |N'} \sum_{\gamma_L, \gamma_R} \left| \bra{\Phi(N')} V(\gamma,k, 1) \ket{\Phi(N)} \right|^2 \nonumber\\=& \frac{1}{\mathcal{G}^\text{cl}(N)} \sum_{\Phi } \sum_{\Phi'} \sum_{\gamma_L, \gamma_R} \left| \bra{\Phi'}\hat{P}_{N'} V(\gamma,k, 1) \hat{P}_N \ket{\Phi} \right|^2 \nonumber\\=& \frac{1}{\mathcal{G}(N)} \int \frac{dv_L}{2\pi i v_L} v_L^{N-N'} \int \frac{dw_L}{2\pi i w_L} w_L^{-N} \sum_{\gamma_L} \mbox{tr} \left( V_L(\gamma_L,\tfrac{k}{2}, 1)^\dagger V_L(\gamma_L,\tfrac{k}{2}, v_L) w_L^{\hat{N}_L} \right) \nonumber\\& \times \frac{1}{\mathcal{G}(N)} \int \frac{dv_R}{2\pi i v_R} v_R^{N-N'} \int \frac{dw_R}{2\pi i w_R} w_R^{-N} \sum_{\gamma_R} \mbox{tr} \left( V_R(\gamma_R,\tfrac{k}{2}, 1)^\dagger V_R(\gamma_R,\tfrac{k}{2}, v_R) w_R^{\hat{N}_R} \right) \,. \end{align} After inserting the level projection operator, the calculation is factorized into the left- and right-moving parts. By setting $\alpha'=2$, it is easy to see that each part is just a copy of the amplitude of open-string one with $\alpha'=1/2$. We define the contribution of the averaged trace of bosonic and fermionic vertex operators (with numerical factors neglected), \begin{align} \label{eq:35} f_B =\sum_\zeta |\zeta^i|^2 \frac{\sqrt{N} \omega }{e^{2\pi \omega}-1} \,, \qquad f_F =\sum_u |u^a|^2 \frac{ \sqrt{N} }{e^{2\pi \omega}+1} \,, \end{align} and the emission rate is \begin{align} \label{eq:36} \Gamma^\text{cl}_{LR} = & \frac{\omega^7 d\omega}{M^2} f_L f_R \,, \end{align} with $L,R$ being $B$ or $F$. 
First, we consider the case with $L=R=B$; namely, we prepare the vertex operator for $\mathbf{8}_v \times \mathbf{8}_v$ states, which include the graviton, dilaton, and $B$-field, and find \begin{align} \label{eq:37} \Gamma^{\text{cl}}_{BB} \simeq & \frac{\omega^7 d\omega}{M^2} f_B f_B = \sum_{\zeta^{ij}} (\zeta^{ij} \zeta^{ij*}) \frac{\omega^8 d\omega}{M^2} \frac{N\omega}{(e^{2\pi \omega}-1)^2} \,. \end{align} In the same manner, the rates for $\mathbf{8}_c \times \mathbf{8}_s$ (gravitino and dilatino) and $\mathbf{8}_c \times \mathbf{8}_c$ (Ramond-Ramond (R-R) fields) are given by \begin{align} \label{eq:38} \Gamma^{\text{cl}}_{FB} \simeq & \frac{\omega^7 d\omega}{M^2} f_F f_B =\sum_{u^{ia}} |u^{ia}|^2 \frac{\omega^8 d\omega}{M^2} \frac{N}{(e^{2\pi \omega}-1)(e^{2\pi \omega}+1)} \,, \end{align} and \begin{align} \label{eq:45} \Gamma^{\text{cl}}_{FF} \simeq \frac{\omega^7 d\omega}{M^2} f_F f_F =\sum_{\zeta^{ab}} |\zeta^{ab}|^2 \frac{\omega^8 d\omega}{M^2} \frac{N\omega^{-1}}{(e^{2\pi \omega}+1)^2} \,, \end{align} respectively. For the type IIA closed-string case, the second fermionic state is replaced with $\mathbf{8}_s$, but the result is essentially the same. It should be noted that the thermal factors for the left and right movers, \begin{align} \label{eq:39} \beta_L=\beta_R=2\pi = \pi \sqrt{2\alpha'} \,, \end{align} are half of the inverse Hagedorn temperature for the closed string, \begin{align} \label{eq:40} \beta_H = \pi \sqrt{8\alpha'} = \beta_L + \beta_R \,, \end{align} since we are working with $\alpha'=2$. 
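The relation between the left/right thermal factors and the closed-string inverse Hagedorn temperature is a one-line arithmetic check:

```python
import math

# With α' = 2, the left/right thermal factors are β_L = β_R = π√(2α') = 2π,
# and they add up to the closed-string inverse Hagedorn temperature
# β_H = π√(8α') = 4π, as stated in the text.
alpha_p = 2.0
beta_L = math.pi * math.sqrt(2 * alpha_p)
beta_H = math.pi * math.sqrt(8 * alpha_p)
assert math.isclose(beta_L, 2 * math.pi)
assert math.isclose(beta_L + beta_L, beta_H)
```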
\subsubsection{Closed-string emission from an open superstring} \label{sec:closed-from-open} We consider a closed-string state emission from open-string states and use the same closed-string vertex operator, \begin{align} V^\text{(closed)}(\gamma,k,e^{i\tau}) =& \int_0^{\pi} \frac{d\sigma }{\pi} :V_L(\gamma_L,\tfrac{k}{2},e^{i(\tau+\sigma)}): \, : V_R(\gamma_R,\tfrac{k}{2},e^{i(\tau-\sigma)}): \,, \end{align} but now both $V_L$ and $V_R$ include the same open-string oscillator $\alpha_n^i$ and $S_n^a$. We work with $\alpha'=1/2$ and denote the position of the operator by $e^{i\sigma}$ (we take $\tau=0$). By using the same trick, we have \begin{align} \label{eq:44} & P(\Phi_N \rightarrow\gamma(k)+\Phi_{N'}) \nonumber\\=& \frac{1}{\mathcal{G}(N)} \sum_{\Phi|N} \sum_{\Phi|N'} \sum_{\gamma} \big|\bra{\Phi(N')} V^\text{(closed)}(\gamma,k,1) \ket{\Phi(N)} \big|^2 \nonumber\\=& \frac{1}{\mathcal{G}(N)} \int_0^\pi \frac{d\sigma}{\pi} \int_0^\pi \frac{d\sigma'}{\pi} \oint \frac{dv}{2\pi v }v^{N-N'} \oint \frac{dw}{2\pi w }w^{-N} \nonumber\\&\hskip2em \times \mbox{tr} \left[ V_R^\dagger(\gamma_R, \tfrac{k}{2},e^{-i\sigma'}) V_L^\dagger(\gamma_L, \tfrac{k}{2},e^{i\sigma'}) V_L(\gamma_L, \tfrac{k}{2},ve^{i\sigma}) V_R(\gamma_R, \tfrac{k}{2},ve^{-i\sigma}) w^{\hat{N}}\right] \,. \end{align} Here $V_{L,R}$ is either $V_B$ or $V_F$. Although we have now four vertex operators inside the trace, the calculation is similarly straightforward but lengthy. The evaluation has been done in Appendix \ref{sec:closed-string-vertex}, and we cite the result in the following. When $V_L=V_R=V_B$, namely, an NS--NS massless state emission case, we find \begin{align} \label{eq:48} \Gamma_{BB} \simeq \sum_{i,j} \frac{\omega^8 d\omega}{M^2} \frac{N\omega}{(e^{\pi \omega}-1)^2} \times \begin{cases} \zeta^{ij}(\zeta^{ij}+\zeta^{ji})^* & N-N'= \text{even} \\ \zeta^{ij}(\zeta^{ij}-\zeta^{ji})^* & N-N'= \text{odd} \end{cases} \,. 
\end{align} In this case, only the symmetric (antisymmetric) part is emitted when the level difference is even (odd). Note that the level difference is restricted to be even when we consider the unoriented theory, and then this selection rule is consistent with the unoriented projection. For $V_L=V_F$ and $V_R=V_B$, namely, a massless fermionic state emission, \begin{align} \label{eq:49} \Gamma_{FB}\simeq \sum_{i,a} |\zeta^i|^2 |u^a|^2 \frac{\omega^8 d\omega}{M^2} \frac{N}{(e^{\pi \omega}-1)(e^{\pi\omega}+1)} \,, \end{align} where the numerical coefficients may be different for $N-N'$ even or odd. We are only interested in the $\omega$-dependent part of the emission rate and $N$ dependence. Finally, when $V_L=V_R=V_F$, an R--R boson emission rate is \begin{align} \label{eq:50} \Gamma_{FF} \simeq \sum_{a,b} \frac{\omega^8 d\omega}{M^2} \frac{\omega^{-1} N}{(e^{\pi\omega}+1)^2} \times \begin{cases} u^{ab}(u^{ab}-u^{ba})^* & N-N'= \text{even} \\ u^{ab}(u^{ab}+u^{ba})^* & N-N'= \text{odd} \end{cases} \,. \end{align} Now the antisymmetric part is emitted when the level difference is even. For fermionic sectors, the unoriented projection picks up the graded-symmetrized states\cite{GSW}, and then it is again consistent with the unoriented projection. Recall that we are working with $\alpha'=1/2$ here. On the other hand, for closed-string emission from a heavy closed superstring, we used $\alpha'=2$. By taking this difference into account, one can find that the $\omega$-dependent part of the emission rate is the same for heavy open and closed superstrings. \subsection{Summary of the results and discussion} \label{sec:emiss-rate-massl} The emission rate we have calculated so far can be written as \begin{align} \Gamma \simeq \frac{\omega^8 d\omega}{M^2} \frac{\sigma(\omega)}{e^{\beta_H \omega}\mp 1} \,, \end{align} where the $-$ sign is for massless boson emissions and the $+$ for fermionic ones. $\beta_H=\pi\sqrt{8\alpha'}$ is the inverse Hagedorn temperature. 
$\sigma(\omega)$ is the greybody factor and $\sigma(\omega)=1$ means the pure blackbody radiation. The results are at the leading order in the coupling constant $g_s$ and $1/N$ and are valid for $\sqrt{\alpha'} \omega \ll \sqrt{N}$. We omit $\mathcal{O}(1)$ numerical coefficients, and the summation over the polarizations is implicit. All the information is now packed in $\sigma(\omega)$. For massless open-string state emission, we have found \begin{align} \label{eq:51} \sigma_B^{(\text{op})} =& g_s^2 \sqrt{N} \cdot 1 \,, \qquad \sigma_F^{(\text{op})} = g_s^2 \sqrt{N} \cdot \omega^{-1} \,, \end{align} where in order to show explicitly that this is the leading order in the coupling constant, we inserted the open-string coupling constant $g_s$ (we consider the amplitude squared). $B$ and $F$ stand for boson and fermion massless state emission, respectively. For massless closed string state emissions, both from heavy open and closed superstrings, we have found \begin{align} \label{eq:52} \sigma^{(\text{cl})}_{BB}=& g_s^4 N \cdot \frac{\omega (e^{\beta_H \omega}-1)}{(e^{\frac{\beta_H \omega}{2}}-1)^2} \,,\\ \sigma^{(\text{cl})}_{FB}=& g_s^4 N \cdot \frac {e^{\beta_H \omega}+1}{(e^{\frac{\beta_H \omega}{2}}-1)(e^{\frac{\beta_H \omega}{2}}+1)} \,,\\ \sigma^{(\text{cl})}_{FF}=& g_s^4 N \cdot \frac{\omega^{-1} (e^{\beta_H \omega}-1)}{(e^{\frac{\beta_H \omega}{2}}+1)^2} \,. \end{align} Here, $BB$ stands for the massless states corresponding to $V_B \otimes V_B$ vertex operator, and so on. First of all, one should notice that the greybody factors of massless closed string emissions have the same form regardless of whether its source is a heavy open superstring or a closed one. This may be explained by the fact that to the leading order, the emission of massless states is local on the world sheet. So the emitted massless states are only affected by the excited level of the heavy string but not by its topology. 
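As a consistency check, the summary greybody factors above reproduce the closed-string rates computed earlier via the identity $e^{x}-1=(e^{x/2}-1)(e^{x/2}+1)$. A schematic numerical sketch (setting $N = g_s = 1$ and dropping all $\mathcal{O}(1)$ coefficients, as in the text):

```python
import math

bH = 4 * math.pi  # β_H = π√(8α') at α' = 2, so 2π = β_H/2

def gamma_BB(w):   # NS-NS rate of Eq. (37), N and g_s set to 1 (schematic)
    return w / (math.exp(bH * w / 2) - 1) ** 2

def sigma_BB(w):   # summary greybody factor of Eq. (52)
    return w * (math.exp(bH * w) - 1) / (math.exp(bH * w / 2) - 1) ** 2

def gamma_FB(w):   # gravitino/dilatino rate of Eq. (38)
    return 1 / ((math.exp(bH * w / 2) - 1) * (math.exp(bH * w / 2) + 1))

def sigma_FB(w):
    return (math.exp(bH * w) + 1) / (
        (math.exp(bH * w / 2) - 1) * (math.exp(bH * w / 2) + 1))

for w in (0.05, 0.3, 1.0):
    # bosonic channel: Γ = σ/(e^{β_H ω} - 1)
    assert math.isclose(gamma_BB(w), sigma_BB(w) / (math.exp(bH * w) - 1))
    # fermionic channel: Γ = σ/(e^{β_H ω} + 1)
    assert math.isclose(gamma_FB(w), sigma_FB(w) / (math.exp(bH * w) + 1))
```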
Since open and closed superstrings have very different sets of states at a given level $N$, it is interesting to see that the averaged states exhibit the same thermal behavior. It should also be noted that $\sigma_{BB}$ has the same form as the greybody factor of the near-BPS D$1$--D$5$ black hole system\cite{Das:1996wn, Maldacena:1996ix}, with a different $\beta$. (This fact has already been pointed out by Amati and Russo\cite{Amati:1999fv} for the bosonic string case.) As for fermion emissions, $\sigma_{FB}$ differs from the one calculated in \cite{Hosomichi:1997if} by a factor of $\omega$, which may be due to the formal $s$-wave limit discussed later, but the exponential factors are the same. It would be interesting to study further why this universal form appears. The massless boson states of an open superstring show the blackbody spectrum. The fermionic states have a nontrivial $\omega^{-1}$ dependence, which might be interpreted as an $s$-wave extrapolation of the blackbody result shown below. Intuitively, we may understand why an open string has a blackbody spectrum in the following way. The greybody factor is identified with the absorption cross section. Now, we cast a massless open-string state from asymptotic infinity toward a heavy open superstring. When the massless state is absorbed into the heavy string, we observe the probability that the same state is reflected back from the string with the same energy. To the leading order, an open-string state can be captured or emitted only at the ends of the heavy open string. An open string may split at any point into two open strings, but in order to emit a massless state, the interaction has to take place exactly at one of the two ends. The splitting probability of an open string is uniform\cite{splitting_prob}, so for a heavy and long open string at level $N$, the probability of emitting massless states is suppressed by $1/N$ (or $2/N$ to be more precise). 
Thus, for asymptotic observers, a heavy open string can be viewed as a hole in a cavity; namely, once it absorbs a wave of a certain frequency, it will hardly reemit it. We can then interpret the heavy open-string emission rate as cavity radiation. On the other hand, closed strings can emit massless states from any point of the world sheet. So the probability is not damped as the level gets higher, and it may have a nontrivial greybody factor at the leading order in $1/N$. It is also interesting to compare our result with the greybody factors of black holes in higher dimensions. In four dimensions, the greybody factors of spherically symmetric black holes for bosons and fermions are calculated in \cite{Page:1976df,Unruh:1976fm}. In higher dimensions, the formulas are derived in \cite{Harmark:2007jy, Kanti:2002nr, Kanti:2002ge}. In ten dimensions, the emission rate can be schematically written as \begin{align} \label{eq:29} \Gamma \simeq & \frac{\sigma_{j s}(\omega) \omega^8 d\omega}{e^{\beta\omega} \mp 1} \,, \end{align} where $\sigma_{j s} (\omega)$ is the greybody factor for a spin-$s$ field. $j$ denotes the total angular momentum of the partial wave, and some examples for lower $s$ are \begin{align} \label{eq:81} \sigma_{j0} = \omega^{2j} \,, \qquad \sigma_{j \frac{1}{2}} = \omega^{2j-1} \,, \qquad \sigma_{j 1} = \omega^{2j} \,. \end{align} Here, we write down only the $\omega$ dependence and neglect other factors, including the dependence on the profile of the black hole. The angular momentum satisfies the constraint $j \geq s$, and in the small-$\omega$ region, the dominant contribution comes from the modes with $j=s$. For the first few modes, the results are \begin{align} \label{eq:53} \sigma_{00} = 1 \,, \qquad \sigma_{\frac{1}{2}\; \frac{1}{2}} = 1 \,, \qquad \sigma_{11} = \omega^2 \,, \end{align} and so on. Note that in our calculation of open-string state emission, the boson is a vector field, and the fermion is a Dirac field. So the results do not agree with the black hole ones. 
However, in our calculation, we integrate over the angular dependence of the decay rate, which essentially picks up the $s$-wave part ($j=0$ part) of the partial wave decomposition. In the above formulas for black holes, if we take a formal limit of $j=0$, we get \begin{align} \label{eq:54} \sigma_{\text{boson},\, j=0} = 1 \,, \qquad \sigma_{\text{fermion},\, j=0} = \omega^{-1} \,, \end{align} which depends only on whether $s$ is an integer or a half-integer. This resembles our result for massless open-string state emission, but we do not claim that this procedure is completely justifiable. We leave further study of this suggestive observation to the future. If we look at particularly low-energy emission, $\omega \ll T_H$, we can expand the exponential factor to find that all the emission rates take the form \begin{align} \Gamma \sim \left( g_s^2 \sqrt{N}\right)^\alpha \frac{\omega^7 d\omega}{M^2} \,, \end{align} with $\alpha=1$ for emission from a heavy open string and $\alpha=2$ for that from a closed one. In this regime, the equipartition law works well, and there is no distinction between bosonic and fermionic emission. We thus have a thermodynamically acceptable result. In the emission rates we have calculated, the coupling constant and the excited level $N$ of the heavy string appear in the combination $g_s^2 \sqrt{N}$. As a $1/N$ expansion, the subleading corrections are found to be $\mathcal{O}(N^{-1/2})$. It would be interesting to investigate how the higher-order corrections in $g_s$ depend on $N$. We may imagine that the subleading corrections appear in the same combination, and if that were the case, the perturbative calculation is valid for $g_s \ll N^{-1/4}$, which is smaller than the correspondence-point value $g_s \sim N^{-1/4}$. Namely, when $g_s^2 \sqrt{N} \simeq 1$, near the correspondence point, the perturbative expansion of this type becomes invalid. 
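The low-energy behavior quoted above can be verified directly: expanding the exponentials for $\omega \ll T_H$, both open-string channels reduce to $\Gamma \propto \sqrt{N}\,\omega^7 d\omega$. A schematic check ($N$ and $g_s$ set to 1, $\beta_H = 4\pi$ at $\alpha'=2$), using the fact that an $\omega^7$ law scales by $2^7 = 128$ when $\omega$ doubles:

```python
import math

bH = 4 * math.pi

def gamma_boson(w):    # σ_B = 1 with a Bose thermal factor
    return w ** 8 / (math.exp(bH * w) - 1)

def gamma_fermion(w):  # σ_F = ω^{-1} with a Fermi thermal factor
    return w ** 7 / (math.exp(bH * w) + 1)

# for β_H ω ≪ 1 both rates scale as ω^7, so doubling ω gives a factor ≈ 128
w = 1e-4
assert abs(gamma_boson(2 * w) / gamma_boson(w) - 128) < 1
assert abs(gamma_fermion(2 * w) / gamma_fermion(w) - 128) < 1
```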
Another possibility is that the subleading corrections are of the same order in $N$, and the perturbative corrections are negligible when $g_s \ll 1$. In this case, when $g_s \sim N^{-1/2}$, the corrections become of the same order as the $1/N$ corrections of the leading order, and a perturbative calculation might be useful even near the correspondence point. Of course, correction terms may appear in a completely different way. However, the form of the higher-order corrections may tell us in what regime of $g_s$ and $N$ we can use perturbation theory and whether we may approach the black hole/string correspondence point. \section{Conclusion} \label{sec:conlusion-discussion} In this paper, we have calculated the semi-inclusive decay rates of very massive open and closed superstrings in the flat background by use of the Green-Schwarz superstring in the light-cone gauge. We focus on the emission of massless open- and closed-string states. The initial state is averaged over all the states at a fixed level $N$, and the final state is summed over. In this setup, we find that the emission rates for all the cases, open- and closed-string massless states from a heavy open superstring and closed-string massless states from a heavy closed superstring, exhibit the thermal distribution at the Hagedorn temperature with possible greybody-factor corrections. In the thermal distribution at the Hagedorn temperature, the dominant emission channel for an asymptotic observer is due to massless states, and our result provides the leading order of the emission spectrum of a heavy superstring. It is notable that they all have the same leading-order dependence on the string coupling $g_s$ and the initial excited level $N$, contrary to the suggestion in the previous literature. It is also interesting that the greybody factors for massless closed-string emission take the same form for decay from both heavy open and closed superstrings. 
For open-string massless states from a heavy open superstring, the emission rate of bosonic states exhibits the blackbody behavior, while the fermion emission part involves a greybody factor $\sigma = \omega^{-1}$. These behaviors resemble a formal $s$-wave limit of the black hole greybody factors but do not really agree with the greybody factors in the physical regime. This result may suggest that a heavy open superstring would hardly reflect back the incoming massless states once absorbed and would exhibit the thermal spectrum of cavity radiation. As for closed-string massless-state emission, the greybody factors do not depend on whether the source is a heavy open superstring or a closed one. This suggests that these two heavy string states, as thermal equilibrium states, have a common essential property. The frequency-dependent part of the greybody factors also takes forms very similar to those of D1--D5 near-BPS black holes. So such a heavy superstring would also share this essential property with these BPS black holes. It will be interesting to study the origin of this similarity. There are many future directions, on top of the ones we have proposed so far here and in Section \ref{sec:emiss-rate-massl}. It is interesting to consider the scattering process with a heavy string state. This analysis should clarify whether the ``greybody factor'' found here indeed has the interpretation of an absorption cross section. Another issue may be to observe the detailed angular dependence of the heavy string decay. The partial waves of massless states exhibit different behavior in the case of the black hole. It is, thus, worth carrying out a partial-wave analysis to check whether the angular dependence also gives a consistent result or whether some differences appear. \section*{Acknowledgment} \label{sec:acknowledgement} The authors thank H.~Itoyama, H.~Kawai, and K.~Murakami for valuable discussions. 
The authors also thank the Center for Theoretical Sciences, Taipei, Taiwan, R.O.C., and the Research Group for Mathematical Physics, Osaka City University, for warm hospitality. The work of S.~K. is supported by NSC99--2112--M--029--003--MY3 and NSC101--2811--M--029--001. The work of T.~M. is supported in part by JSPS Grant-in-Aid for Young Scientists No. 22740190.
\section{Introduction} \label{section introduction} \paragraph{Motivation.} Transfer learning methods aim to produce machine learning models that are trained on a given problem but perform well also on different \emph{new} tasks. The interest in transfer learning comes from situations where large data sets can be used for solving a given \emph{training} task but the data associated with new tasks are too small to train expressive models from scratch. The general transfer-learning strategy is to use the small available new data for \emph{adapting} a large model that has been previously optimized on the training data. One option consists of keeping the structure of the pre-trained large model intact and fine-tuning its weights to solve the new task. When very few data points are available and the pre-trained network is large, however, customized regularization strategies are needed to mitigate the risk of over-fitting. Fine-tuning only a few parameters is a possible way out but can strongly limit the performance of the final model. Another option is to prune the pre-trained model to reduce its complexity, increase \emph{transferability}, and prevent overfitting. Existing strategies, however, focus on optimized models and are unable to \emph{disentangle} the network architecture from the attached weights. As a consequence, the pruned version of the original model can hardly be interpreted as a transferable new \emph{architecture} and it is difficult to reuse it on new tasks. \paragraph{In this paper.} We propose a new Architecture Pruning (AP) approach for finding transferable and arbitrarily light sub-architectures of a given \emph{parent} model. AP is not dissimilar to other existing pruning methods but based on a feasible approximation of the objective function normally used for Neural Architecture Search (NAS). 
We conjecture that the proposed architecture-focused objective makes it possible to \emph{separate} the role of the network architecture and the weights attached to it. To validate our hypothesis, we define a new AP algorithm and use it to extract a series of low-complexity sub-architectures from state-of-the-art computer vision models with millions of parameters. The size of the obtained sub-architectures can be fixed \emph{a priori} and, in the transfer learning setup, adapted to the amount of data available from the new task. Finally, we test the transferability of the obtained sub-architectures empirically by fine-tuning them on different small-size data sets. \paragraph{Technical contribution.} NAS is often formulated as a nested optimization problem, e.g. \begin{align} \label{nested optimization} \min_{{\cal A}} {\cal L}\left({\cal A}, \ {\rm arg} \min_{\theta} {\cal L}({\cal A}, \theta, {\cal D}), \ {\cal D} \right) \end{align} where ${\cal L}({\cal A}, \theta, {\cal D})$ is a loss function that depends on the network structure, ${\cal A}$, the corresponding weights, $\theta$, and the input-output data set, ${\cal D}$. This approach has a few practical problems: i) the architecture search space, i.e. the set of possible architectures to be considered, is ideally unbounded, ii) each new architecture, ${\cal A}'$, should be evaluated after solving the inner optimization problem, ${\rm arg} \min_{\theta} {\cal L}({\cal A}', \theta)$, which is computationally expensive, and iii) architectures are usually encoded as \emph{discrete} variables, i.e. $\min_{{\cal A}} {\cal L}$, is a high-dimensional Discrete Optimization (DO) problem and its exact solution may require an exponentially large number of architecture evaluations. To address these issues, we approximate \eqref{nested optimization} with the \emph{joint} mixed-DO problem $\min_{{\cal A}, \theta} {\cal L}({\cal A}, \theta)$ i.e. 
we simultaneously search for an optimal architecture, ${\cal A}$, and the corresponding weights, $\theta$. The DO problem is then solved through a new \emph{two-temperature} gradient-based approach where a first approximation makes $\min_{{\cal A}, \theta} {\cal L}({\cal A}, \theta)$ a fully continuous optimization problem and a second approximation is introduced to avoid the \emph{vanishing gradient}, which may prevent gradient-based iterative algorithms from converging when the first approximation is tight. Our scheme belongs to a class of recent differentiable NAS approaches, which are several orders of magnitude faster than standard Genetic and Reinforcement Learning schemes (see for example the comparative table of \cite{ren2020comprehensive}), but it is the first to address the vanishing-gradient problem explicitly in this framework.\footnote{ To the best of our knowledge.} Similar relaxation methods have been used in the binary networks literature (see for example \cite{bengio2013estimating}), but this is the first time that such ideas are transferred from the binary network literature to NAS. Moreover, our method is provably accurate and, among the various existing follow-ups of \cite{bengio2013estimating}, the only one that can be provided with quantitative convergence bounds.\footnote{Mainly thanks to the novel two-temperature idea.} We validate our hypothesis and theoretical findings through three sets of empirical experiments: i) we compare the performance of the proposed two-temperature scheme and a more standard continuous relaxation method on solving a simple AP problem (on MNIST data), ii) we use CIFAR10 and CIFAR100 to test the transferability of VGG sub-architectures obtained through AP and other pruning methods\footnote{A fair comparison with other NAS methods is non-trivial because it is not clear how to fix a common search space and we leave it for future work. 
}, and iii) we confirm the hypothesis of \cite{frankle2020pruning} that NAS can often be reduced to selecting the right layer-wise density and is quite insensitive to specific configurations of the connections.\footnote{ At least in the transfer learning setup. } \paragraph{Related Work.} {\bf NAS} approaches \cite{zoph2016neural,liu2018darts,gaier2019weight,you2020greedynas} look for optimal neural structures in a given search space\footnote{Usually, the boundaries of the search space are set by limiting the number of allowed neural operations, e.g. node or edge addition or removal.}, and employ \emph{weight pruning} procedures that attempt to improve the performance of large (over-parameterized) networks by removing the `less important' connections. Early methods \cite{stanley2002evolving,zoph2016neural,real2019regularized} are based on expensive \emph{genetic algorithms} or \emph{reinforcement learning} approaches. More recent schemes either design differentiable losses \cite{liu2018darts,xie2019snas}, or use random weights to evaluate the performance of the architecture on a validation set \cite{gaier2019weight,pham2018efficient}. Unlike these methods, which search architectures by adding new components, our method \emph{removes} redundant connections from an over-parameterized parent network. {\bf Network pruning} approaches \cite{collins2014memory,Han2015LearningBW,han2015deep,Frankle2019TheLT,yu2019playing,frankle2020pruning} start from pre-trained neural models and prune the unimportant connections to reduce the model size and achieve better performance. Contrary to the transfer learning goals of AP, these methods mostly focus on single-task setups. {\bf Network quantization and binarization} reduce the computational cost of neural models by using lower-precision weights \cite{jacob2018quantization,zhou2016dorefa}, or mapping and hashing similar weights to the same value \cite{chen2015compressing,hu2018hashing}. 
In an extreme case, the weights, and sometimes even the inputs, are binarized, with positive/negative weights mapped to $\pm1$ \cite{soudry2014expectation,courbariaux2016binarized,hubara2017quantized,shen2019searching,courbariaux2015binaryconnect}. As a result, these methods keep all original connections, i.e. do not perform any architecture search. Often used for binary network optimization, Straight-Through gradient Estimator (STE) algorithms are conceptually similar to the two-temperature scheme proposed here. STE treats discrete variables as the output of possibly non-smooth and non-deterministic quantizers, which depend on real auxiliary quantization parameters and can be handled through (possibly approximate) gradient methods. \cite{guo2018survey} compares a large number of recent deterministic and probabilistic quantization methods. Compared to standard discrete optimization techniques, STE methods are advantageous because they can be combined with stochastic iterative techniques to handle large models and large data sets. \cite{bengio2013estimating} is the first work where different objective functions are used in the backward and forward passes. In \cite{bengio2013estimating}, the derivative of the quantizer is completely neglected, which leads to an unpredictable gradient bias and a non-vanishing optimization gap \cite{li2017training}. Some theoretical control of STE is given in \cite{yin2019understanding} under rather strong assumptions on the objective function. Certain convergence guarantees are obtained in \cite{ajanthan2019mirror}, but through an annealing schedule that must be fixed in advance. \cite{yin2019blended} and \cite{xiong2019fast} propose proximal gradient approaches where the gradient is guaranteed to define a descent direction, but both works lack a full convergence proof. 
\cite{hou2016loss} and \cite{uhlich2019mixed} define a new class of loss-aware binarization methods and are designed for taking into account certain `side' effects of the quantization step, but these methods considerably increase the size of the parameter space. The proposed approach is the first to address explicitly the vanishing-gradient issues associated with the deterministic quantizers. \section{Methods} \label{sec:our_method} \subsection{Problem formulation} \label{section problem formulation} To address the three NAS technical challenges mentioned in Section \ref{section introduction}, we i) let the search space be the set of all sub-networks of a very large and general parent network defined by a given architecture, ${\cal A}_{parent}$ and the associated weights, $\theta \in {\mathbb R}^{D}$, where $D = |{\cal A}_{parent}|$ is the number of weighted connections of ${\cal A}_{parent}$, ii) we approximate \eqref{nested optimization} with\footnote{ This enables us to find the optimal edge structure directly, without averaging over randomly sampled weights, as for weight-agnostic networks \cite{gaier2019weight}, or training edge-specific real-value weights, as in other architecture search methods \cite{liu2018darts,Frankle2019TheLT}. } \begin{align} \label{problem formulation general} \min_{{\cal A} \subseteq {\cal A}_{parent}} \ \min_{ \theta \in {\mathbb R}^{D}} {\cal L}({\cal A}, \theta, {\cal D}) \end{align} where ${\cal L}({\cal A}, \theta, {\cal D})$ is an arbitrary real-valued loss function and ${\cal D} = \{ (x, y) \in {\cal X} \otimes {\cal Y} \}$ a training data set, iii) we approximate the DO part of \eqref{problem formulation general}, i.e. the minimization over ${\cal A} \subseteq {\cal A}_{parent}$, with a \emph{low-temperature} continuous relaxation of \eqref{problem formulation general} and solve it with iterative parameter updates based on a further \emph{high-temperature} approximation of the gradient. 
We let the parent network be $F (\theta) = F({\cal A}_{parent}, \theta): {\cal X} \to {\cal Y}$. Each sub-network of $F(\theta)$ is represented as a \emph{masked version} of $F(\theta)$, i.e. a network with architecture ${\cal A}_{parent}$ and masked weights \begin{align} \label{weight expression} \tilde \theta = m \circ \theta, \quad m_i = \left\{ \begin{array}{cc} 1 & i \in {\cal A}(m) \subseteq {\cal A}_{parent} \\ 0 & {\rm otherwise} \end{array}\right. \end{align} where $\circ$ is the element-wise product, $m \in \{ 0, 1 \}^{D}$, $\theta \in {\mathbb R}^{D}$, ${\cal A}(m)$ is a subset of the set of connections of ${\cal A}_{parent}$, and $i=1, \dots, D$. Equivalently, ${\cal A}(m)$ can be interpreted as the sub-architecture of ${\cal A}_{parent}$ obtained by masking ${\cal A}_{parent}$ with $m$. Given a data set, ${\cal D}$, we let the corresponding task be the prediction of the outputs, $y \in {\cal Y}$, given the corresponding input, $x \in {\cal X}$. In the transfer learning setup, we assume we have access to a large data set associated with the training task, ${\cal D}_{train} = \{ (x, y) \in {\cal X}_{train} \otimes {\cal Y}_{train} \} $, and a small data set associated with a new task, ${\cal D}_{new} = \{ (x, y) \in {\cal X}_{new} \otimes {\cal Y}_{new} \} $. We also assume that the two data sets may have different input-output spaces, i.e. we may have ${\cal X}_{train} \neq {\cal X}_{new}$ and ${\cal Y}_{train} \neq {\cal Y}_{new}$, and that $|{\cal D}_{train}| \gg |{\cal D}_{new}|$. The idea is to use ${\cal D}_{train}$ to extract a sub-architecture ${\cal A} \subseteq {\cal A}_{parent}$ that can be retrained on ${\cal D}_{new}$ to solve the new task, i.e. the task associated with ${\cal D}_{new}$.
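As a sketch, the masked sub-network of \eqref{weight expression} amounts to an element-wise product applied before the forward pass. The linear parent network below is a toy stand-in, not the model used in the paper:

```python
import numpy as np

def masked_forward(x, theta, m, parent):
    """Evaluate the sub-network F(A(m), theta) as the parent network with
    masked weights tilde_theta = m * theta (element-wise product)."""
    tilde_theta = m * theta          # m in {0,1}^D selects the sub-architecture
    return parent(x, tilde_theta)

parent = lambda x, th: x @ th        # toy parent network: a linear model
x = np.array([1.0, 2.0, 3.0])
theta = np.array([0.5, -1.0, 2.0])
m = np.array([1.0, 0.0, 1.0])        # keep connections 1 and 3, drop connection 2
y = masked_forward(x, theta, m, parent)
```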
In our setup, this is equivalent to solving \begin{align} \label{AP problem formulation} m_* = {\rm arg} \min_{m} \left( \min_\theta {\cal L}(F(m, \theta), {\cal D}_{train})\right) \end{align} where $F(m, \theta) = F(\tilde \theta)$ and ${\cal L}(F(m, \theta), {\cal D}) = {\cal L}({\cal A}(m), \theta, {\cal D})$, with $m \in \{ 0, 1 \}^{D}$. To evaluate the \emph{transferability} of $m_*$, we find \begin{align} \label{problem transfer} \theta_* = {\rm arg} \min_\theta {\cal L}(F(m_*, \theta), {\cal D}_{new}) \end{align} and evaluate the performance of $F(\tilde \theta_*) = F(m_*, \theta_*)$ on the new task. \subsection{Two-temperature continuous relaxation} \label{section two temperature} The network mask, $m\in \{0, 1 \}^D$, is a discrete variable, so \eqref{AP problem formulation} cannot be solved with standard gradient-descent techniques. Exact approaches would be intractable for any network of reasonable size.\footnote{ For the simple $64$-dimensional linear model defined in Section \ref{section mnist experiment}, the number of possible architectures, i.e. configurations of the discrete variable $m$, is $|{\cal P}(\{ 1, \dots, 64\})| = 2^{64}$. } We propose to find possibly approximate solutions of \eqref{AP problem formulation} by solving a continuous approximation of \eqref{AP problem formulation} (first approximation) through approximate gradient updates (second approximation). Let $t_l \gg t_s > 0$ be two constants associated with two temperatures ($1/t_l \ll 1/t_s$). The low-temperature approximation of \eqref{AP problem formulation} is obtained by replacing $m \in \{0, 1 \}^D $ with \begin{align} \label{low temperature approximation} v_{t_l} = \sigma(t_l w), \quad \quad w \in {\mathbb R}^D \end{align} everywhere in \eqref{AP problem formulation}.
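Numerically, the role of the temperature in \eqref{low temperature approximation} can be seen with a quick sketch (the values of $w$ and of the temperatures are illustrative):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

w = np.array([-0.4, -0.01, 0.02, 0.7])
v_low = sigmoid(1000.0 * w)   # low temperature: essentially a binary mask
v_high = sigmoid(10.0 * w)    # high temperature: smooth, far from saturation
```

At the low temperature the relaxation is tight (the entries of `v_low` are numerically 0 or 1), while at the high temperature the sigmoid is still in its informative, non-saturated regime.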
When $t_l$ is large, the approximation is tight, but minimizing it through gradient steps is challenging because the gradient of the relaxed objective with respect to the quantization parameter, $\nabla_w {\cal L}(F(v_{t_l}, \theta), {\cal D}_{train})$, vanishes almost everywhere.\footnote{ The problem arises because $d/dw \, \sigma(t_l w)= t_l v_{t_l} (1 - v_{t_l}) \to 0$ for any $w \neq 0$.} With high probability, naive gradient methods would get stuck in the exponentially large flat regions of the energy landscape associated with ${\cal L}$. To mitigate this problem, we use a second, high-temperature relaxation of \eqref{AP problem formulation} for computing an approximate version of the low-temperature gradient $\nabla_w {\cal L}\approx 0$. More precisely, we let \begin{align} \label{gradient approximation} &\tilde \nabla_{w} {\cal L}= \nabla_{v_{t_l}}{\cal L}(v_{t_l},\theta, {\cal D}) \circ \left( t_s v_{t_s}(1 - v_{t_s})\right), \\ & v_{t_s} = \sigma(t_s w) \nonumber \end{align} The proposed scheme allows us to i) use good (low-temperature) approximations of \eqref{AP problem formulation} without compromising the speed and accuracy of the gradient-optimization process and ii) derive, under certain conditions, quantitative convergence bounds. \subsection{Convergence analysis} \label{subsec:proof} Let $v \in [0,1]^D$ be a general version of \eqref{low temperature approximation}, defined as $v = \sigma(t w)$ for some $t > 0$ and $w \in {\mathbb R}^D$, $\theta \in {\mathbb R}^D$, and \begin{align} {\cal L}_t(w) = |{\cal D}|^{-1} \sum_{z = (x,y) \in {\cal D}} \ell(z; v, \theta), \end{align} where $\ell(z; v, \theta) = \ell(z; F(v, \theta))$ is a single-sample loss, e.g. $\ell(z; F(v, \theta)) = (F(x; v, \theta) - y)^2$. We assume that $\ell$ is convex in $v$ for all $z$, and that $\nabla_v \ell$ is bounded, i.e., that $\ell$ satisfies the assumption below.
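As a side note, the update \eqref{gradient approximation} can be sketched numerically as follows (an illustrative helper, with illustrative temperature values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_temperature_grad(w, grad_wrt_v, t_s):
    """Approximate gradient with respect to the quantization parameters w.

    grad_wrt_v is the exact gradient of the loss with respect to the
    low-temperature mask v_{t_l} = sigmoid(t_l * w); the chain-rule factor
    is evaluated at the high temperature t_s << t_l, where it does not vanish.
    """
    v_ts = sigmoid(t_s * w)
    return grad_wrt_v * (t_s * v_ts * (1.0 - v_ts))

w = np.array([0.0, 5.0])
g = two_temperature_grad(w, np.array([1.0, 1.0]), t_s=10.0)
```

Near $w=0$ the surrogate factor is $t_s/4$, while far from the origin it still decays, so the update remains a damped but non-degenerate descent signal.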
\begin{assumption} \label{assumptions ell} For all $v, v' \in [0, 1]^D$, any possible input $z$, and any parameters $\theta \in \mathbb{R}^D$, $\ell(z; v, \theta)$ is differentiable with respect to $v$ and % \begin{align} \label{ass:assumption ell} &\max_{z,v, \theta} \| \nabla_v \ell(z;v,\theta) \|^2 \leq G^2, \\ &\ell(z;v,\theta) - \ell(z;v',\theta) \geq \nabla_{v'} \ell(z;v',\theta)^\intercal (v - v'), \end{align} where $G$ is a positive constant. \end{assumption} \noindent Under this assumption, we prove that gradient steps based on \eqref{gradient approximation} produce a (possibly approximate) locally optimal binary mask. \begin{theorem} \label{theorem:main} Let $\ell$ satisfy Assumption \ref{assumptions ell} and $\{ w_i \in {\mathbb R}^D\}_{i=1}^T$ be a sequence of stochastic weight updates defined using \eqref{gradient approximation}. For $i=1, \dots, T$, let $z_i$ be a sample drawn from ${\cal D}_{train}$ uniformly at random and let the step size be $\alpha_i = \frac{c}{\sqrt{i}}$, where $c$ is a positive constant. Then $$ \mathbb{E}[{\cal L}_{t_l}(w_T) - {\cal L}_{t_l}(w^*)] \leq \frac{1}{c \sqrt{T}} + \frac{c G^2 (1 + C) (1 + \log T)}{T}, $$ where the expectation is over the distribution generating the data, $w^* = \arg \min_{w \in \mathbb{R}^D} {\cal L}_{t_l}(w)$, $G$ is defined in Eq. \eqref{ass:assumption ell} (with $v = \sigma(t_l w)$), and \begin{align} &C = t_l t_s \left(\frac{1}{t_l t_s} - 2 g_{max}(t_l) g_{max}(t_s) + \frac{t_l t_s}{16^2} \right) \\ &g_{max}(t) = \sigma(t M) (1 - \sigma(t M)), \nonumber \end{align} with $M = \max \{ |w_i|, i=1, \dots, T \}$.
\end{theorem} A proof is provided in the Supplementary Material.\footnote{ For simplicity, we consider the convergence of updates based on \eqref{gradient approximation} for fixed $\theta$, but a full convergence proof can be obtained by combining Theorem \ref{theorem:main} with standard results for unconstrained SGD.} Theorem \ref{theorem:main}, together with $\lim_{t_l \to \infty}{\mathcal L}_{t_l} = {\mathcal L}$, implies that a locally optimal binary mask $m_* \in \{0,1\}^D$ can be obtained with high probability by letting $$m_* = \lim_{t_l \to \infty} \sigma(t_lw_T) = \mathbbm{1}[w_T > 0].$$ \section{Experiments} \label{section experiments} \subsection{Algorithm convergence (MNIST data, Figure \ref{figure convergence})} \label{section mnist experiment} To check the efficiency of the proposed optimization algorithm, we apply the two-temperature scheme described in Section \ref{sec:our_method} to the problem of selecting a sparse sub-model of the logistic regression model \[ F(x; \tilde \theta) = \sigma(\tilde \theta^T x) \] by letting $\tilde \theta = m \circ \theta $ and solving \begin{align} \label{mnist objective} &m_* = {\rm arg} \min_{m} \left( \min_\theta {\cal L}(F, {\cal D}) + \gamma \|m\|^2 \right) \\ &{\cal L}(F, {\cal D}) = -|{\cal D}|^{-1} \sum_{(x, y) \in {\cal D}} \log \left(F^y (1-F)^{1-y}\right) + \gamma \|\theta\|^2 \nonumber \end{align} We use the MNIST data set and consider the binary classification task of discriminating between images of hand-written 0s and 1s. To evaluate the role of the relaxation temperatures, we replace $m$ in \eqref{mnist objective} with $v_{t_l}$ defined in \eqref{low temperature approximation}, with $t_l=1000$, and solve the obtained low-temperature approximation through SGD updates based on \eqref{gradient approximation} where $t_s \in \{ t_l, t_l/100, t_l/1000\}$. The first case, $t_s = t_l$, is equivalent to a naive gradient descent approach where the parameter updates are computed without any gradient approximation.
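An end-to-end sketch of this kind of experiment fits in a few lines. The synthetic data, hyper-parameter values, and variable names below are illustrative stand-ins, not those of the actual MNIST experiment; the final mask is obtained by thresholding $w$ at zero:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# synthetic stand-in for the MNIST 0-vs-1 task: only features 0 and 1 matter
X = rng.normal(size=(400, 8))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

t_l, t_s, lr, gamma = 1000.0, 10.0, 0.5, 1e-3   # illustrative values
theta = rng.normal(scale=0.1, size=8)
w = rng.normal(scale=0.1, size=8)               # mask (quantization) parameters

for _ in range(300):
    v_l = sigmoid(t_l * w)                      # low-temperature mask (forward)
    p = sigmoid(X @ (v_l * theta))
    g_out = (p - y) / len(y)                    # gradient of the NLL w.r.t. logits
    g_theta = (X.T @ g_out) * v_l + 2 * gamma * theta
    g_v = (X.T @ g_out) * theta + 2 * gamma * v_l   # exact gradient w.r.t. v_l
    v_s = sigmoid(t_s * w)                      # high-temperature surrogate factor
    w -= lr * g_v * (t_s * v_s * (1 - v_s))     # two-temperature update
    theta -= lr * g_theta

m_star = (w > 0).astype(float)                  # binary mask: threshold w at zero
```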
\begin{figure}[ht] \centering \includegraphics[width=.6\textwidth]{pics/mnistExperimentNew.png} \caption{Convergence of the gradient descent algorithm for different choices of the relaxation constants, $t_s \in \{ t_l, t_l/100, t_l/1000\}$. In the legend, $|m|$ denotes the number of free parameters of the final model ($\|m\|^2$ in \eqref{mnist objective}). } \label{figure convergence} \end{figure} \subsection{Transferability (CIFAR10 and CIFAR100 data, Figures \ref{fig:in_domain}, \ref{fig:transfer_5k}, \ref{fig:transfer_1k}, and \ref{fig:transfer_500})} \label{section experiment transfer} To check the scalability and transfer-learning performance of AP, we use the VGG19 \cite{simonyan2014very} model ($D \sim 144$M free parameters) as the parent network (see Section \ref{section problem formulation}), images and labels from CIFAR10 (10 classes, 5000+1000 images per class, referred to as ${\cal D}_{train}$ in Section \ref{section problem formulation}) for the AP step, and images and labels from CIFAR100 (100 classes, 500+100 images per class, referred to as ${\cal D}_{new}$ in Section \ref{section problem formulation}) to evaluate the obtained sub-architectures. The VGG sub-architectures are evaluated by fine-tuning them on new-task data sets of $5k$, $1k$, $500$, $100$, and $50$ images. The new-task data sets, referred to as ${\cal D}_{new}$, are obtained by randomly sub-sampling a balanced number of images from 10 classes of CIFAR100. In particular, we compare the transfer-learning performance of VGG sub-architectures of different complexity, $|m| = D (1 - {sparsity})$, ${sparsity} \in \{0.1, 0.3, 0.5, 0.9, 0.95, 0.99\}$, obtained with three different methods: i) random pruning ({\tt Rnd} in the plots), ii) the proposed AP approach ({\tt Ours}), and iii) the Iterative Magnitude Pruning ({\tt IMP}) scheme described in \cite{frankle2018lottery} (our implementation).
In all cases, we report the average and standard deviations (over 5 runs) of the accuracy versus the model sparsity, ${sparsity}$, defined above. \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{pics/new_in_domain.pdf} \caption{In-domain performance: all models are fine-tuned on $5k$ test images from CIFAR10 (same 10-class classification task used for AP). } \label{fig:in_domain} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{pics/transfer_5k.pdf} \caption{ Transfer-learning performance for $|{\cal D}_{new}| = 5k$. } \label{fig:transfer_5k} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{pics/transfer_1k.pdf} \caption{ Transfer-learning performance for $|{\cal D}_{new}| = 1k$. } \label{fig:transfer_1k} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{pics/transfer_500.pdf} \caption{ Transfer-learning performance for $|{\cal D}_{new}| = 500$. } \label{fig:transfer_500} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{pics/transfer_100.pdf} \caption{ Transfer-learning performance for $|{\cal D}_{new}| = 100$. } \label{fig:transfer_100} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{pics/transfer_50.pdf} \caption{ Transfer-learning performance for $|{\cal D}_{new}| = 50$. } \label{fig:transfer_50} \end{figure} \begin{table}[ht] \centering \begin{tabular}{|c|c c|} \hline Sparsity & Reshuffle Ours & Reshuffle IMP \\ \hline 0.1 & .378$\pm$.021 & .300$\pm$.024 \\ 0.3 & .343$\pm$.023 & .332$\pm$.022 \\ 0.5 & .429$\pm$.023 & .342$\pm$.026 \\ 0.7 & .448$\pm$.020 & .376$\pm$.025 \\ 0.9 & .497$\pm$.021 & .100$\pm$.000 \\ 0.95 & .301$\pm$.001 & .100$\pm$.000 \\ 0.99 & .100$\pm$.000 & .100$\pm$.000 \\ \hline \end{tabular} \caption{ Transfer-learning performance after random layer-wise reshuffling ($|{\cal D}_{new}| = 500$).
} \label{tab:shuffle} \end{table} \subsection{Layer-wise sparsity (CIFAR10 and CIFAR100 data sets, Table \ref{tab:shuffle})} \label{section experiment density} As noticed in previous work \cite{frankle2020pruning}, the transfer-learning properties of neural architectures depend more on the layer-wise connection density than on the specific connection configuration. To test this hypothesis, we compare the transfer-learning performance of all VGG sub-architectures upon layer-wise reshuffling of the optimized binary masks. From each sub-architecture, we extract an optimal \emph{layer-wise} connection density, $1 - {sparsity}_*^{(l)} = 1^T m_*^{(l)}/D^{(l)}$, $l=1, \dots, n_{layers}$, where ${sparsity}_*^{(l)}$ is the layer-wise sparsity and $D^{(l)}$ the total number of connections in layer $l$ of the original VGG model, and test the transfer-learning performance of a random architecture with such an optimal layer-wise density, i.e. a random architecture with layer-wise binary masks $\tilde m^{(l)}$ satisfying $|\tilde m^{(l)}| = D^{(l)} (1 - {sparsity}_*^{(l)})$, $l=1, \dots, n_{layers}$. \section{Discussion} \subsection{Results} The proposed two-temperature approach improves both the speed and the efficiency of gradient-based algorithms in solving continuous relaxations of discrete optimization problems. Our experiment on MNIST data (see Section \ref{section mnist experiment} and Figure \ref{figure convergence}) shows that a careful choice of the high-temperature parameter, $t_s$, defined in Section \ref{section two temperature}, may help the stability of the gradient updates. Setting the high temperature to $t_s \sim t_l / 100$ makes a standard SGD algorithm reach a lower objective value in fewer iterations than for $t_s = t_l$ (equivalent to using the exact gradient in \eqref{gradient approximation}) or $t_s = t_l/1000$ (a higher gradient-approximation temperature).
The optimized models have different complexity, as this is implicitly controlled through the regularization parameter $\gamma$ in \eqref{mnist objective}.\footnote{To obtain models of fixed sparsity (as in our transfer-learning experiments) we choose a suitably large $\gamma$ and stop updating the mask weights, $w$, when the target sparsity is reached.} Choosing $t_s = t_l/1000$ is less efficient because the gap between the true gradient and its approximation becomes too large (in our experiments, it causes the AP optimization to prune all network connections). These results are in line with the theoretical convergence bound of Theorem \ref{theorem:main}. AP can be efficiently used to extract low-complexity and transferable sub-architectures of a given large network, e.g. VGG (see Section \ref{section experiment transfer}). According to our transfer learning experiment on CIFAR10 and CIFAR100 (see Section \ref{section experiment transfer}), AP sub-architectures adapt better than random or IMP sub-architectures of the same size to new tasks, especially when the data available for retraining the networks on the new task are scarce. When $|{\cal D}_{new}|$ is large enough, random sub-architectures may also perform well, probably because their structure is not biased by training on a different domain (Figure \ref{fig:transfer_5k}). AP models are consistently better than random ones when fewer than 1000 data points are available (Figures \ref{fig:transfer_1k} and \ref{fig:transfer_500}). IMP models are worse than AP and random models in all setups, confirming that IMP produces sub-architectures that are too strongly tied to the training task (Figures \ref{fig:transfer_5k}, \ref{fig:transfer_1k}, and \ref{fig:transfer_500}). When $|{\cal D}_{new}|$ is too small, e.g. $|{\cal D}_{new}| < 100$, all models perform badly, which may be due to numerical instabilities in the fine-tuning optimization phase.
Interestingly, while lower-complexity models adapt better to the new domain when $|{\cal D}_{new}|$ is not too small (Figures \ref{fig:transfer_5k}, \ref{fig:transfer_1k}, and \ref{fig:transfer_500}), this is not true when ${\cal D}_{new}$ contains fewer than $100$ images, i.e. 10 images per class. The good performance of IMP models in the in-domain experiment (Figure \ref{fig:in_domain}) confirms their stronger link to the original task and a higher level of entanglement between the learned sub-architectures and the corresponding weights. The results we obtained in the layer-wise reshuffling experiment (Table \ref{tab:shuffle}) confirm the conjecture of \cite{frankle2020pruning} about the importance of learning the right layer-wise density. A comparison between Table \ref{tab:shuffle} and Figure \ref{fig:transfer_5k} suggests that a good layer-wise density is what matters most in making a neural architecture transferable. The good performance of reshuffled and randomly selected architectures probably also indicates that fine-tuning may be powerful enough to compensate for non-optimal architecture design when the number of network connections is large enough. We should note, however, that this does not happen if i) the size of the new-task data set is small, as suggested by the increasing performance gap between AP and random models when only 1k or 500 samples from the new task are available (Figures \ref{fig:transfer_1k} and \ref{fig:transfer_500}), or ii) the learned sub-architectures are too task-specific, as for all IMP models. \subsection{Directions} Many more experimental setups can and should be tried. For example, it would be interesting to: \begin{itemize} \item test the performance of the method on the same data sets but for different choices of the parent network \item see if the proposed method can handle more challenging transfer learning tasks, i.e.
for less similar learning and testing tasks \item compare with other architecture search methods (this is not easy, as a fair comparison would require starting from analogous search spaces) \item try initializations other than random in the fine-tuning step, as this has proved beneficial in similar NLP transfer learning experiments (see for example \cite{chen2020lottery}) \item study the performance of the \emph{bare} architecture, i.e. with binary weights taking values in $\{-1, 1 \}$, which could be used as a low-memory version of the transferable models for implementation on small devices \end{itemize} From the theoretical perspective, follow-up work will consist of applications of the proposed two-temperature method to other discrete optimization problems.
\section*{A: Sub-tree reconstruction algorithm} \label{sec.subtree_algorithm} Here, inference of the classical statistical complexity $C_\mu$ is achieved through the sub-tree reconstruction algorithm~\cite{crutchfield1989inferring}. It works by explicitly building an $\varepsilon$-machine of a stochastic process, from which $C_\mu$ may readily be deduced. The steps are detailed below. \textbf{1. Constructing a tree structure.} The sub-tree construction begins by drawing a blank node to signify the start of the process with outputs $y \in \mathcal{A}$. A moving window of size $2L$ is chosen to parse through the process. Starting from the blank node, $2L$ successive nodes are created with a directed link for every $y$ in each moving window $\{y_{0:2L}\}$. For any sequence starting from $y_0$ within $\{y_{0:2L}\}$ whose path can be traced with existing directed links and nodes, no new links and nodes are added. New nodes with directed links are added only when the $\{y_{0:2L}\}$ does not have an existing path. This is illustrated in \figref{fig.subtree}. For example, suppose $y_{0:6} = 000000$, giving rise to six nodes that branch outwards in serial from the initial blank node. If $y_{1:7} = 000001$, the first five nodes gain no new branches, while the sixth node gains a new branch connecting to a new node with a directed link. Each different element of $|\mathcal{A}|^{2L}$ has its individual set of directed links and nodes, allowing a maximum of $|\mathcal{A}|^{2L}$ branches that originate from the blank node. \begin{figure*}[!h] \includegraphics[width=0.8\linewidth]{fig_subtree.pdf} \caption{The sub-tree reconstruction algorithm, here illustrated for $L=3$.} \label{fig.subtree} \end{figure*} \textbf{2. Assigning probabilities.} The probability for each branch from the first node to occur can be determined by the ratio of the number of occurrences of the associated strings to the total number of strings.
Correspondingly, this allows each link to be denoted with an output $y$ with its respective transition probability $p$. \vspace{0.4cm} \textbf{3. Sub-tree comparison.} Next, starting from the initial node, the tree structure of $L$ outputs is compared against all other nodes. Working through all reachable $L$ nodes from the initial node, any nodes with identical $y|p$ and depth-$L$ branch structure are given the same label. Because of finite data and finite $L$, a $\chi^2$ test is used to account for statistical artefacts. The $\chi^2$ test will merge nodes that have similar-enough tree structures. This step essentially enforces the causal equivalence relation on the nodes. \vspace{0.4cm} \textbf{4. Constructing the $\varepsilon$-machine.} It is now possible to analyse each individually-labelled node with their single output and transition probability to the next node. An edge-emitting hidden Markov model of the process can then be drawn up. This edge-emitting hidden Markov model represents the (inferred) $\varepsilon$-machine of the process. \vspace{0.4cm} \textbf{5. Computing the statistical complexity.} The hidden Markov model associated with the $\varepsilon$-machine has a transition matrix $T_{kj}^y$ giving the probability of the next output being $y$ given we are in causal state $S_j$, and $S_k$ being the causal state of the updated past. The steady state of this (i.e., the eigenvector $\pi$ satisfying $\sum_yT^y\pi=\pi$) gives the steady-state probabilities of the causal states. Taking $P(S_j)=\pi_j$, the Shannon entropy of this distribution then gives the statistical complexity: \begin{equation} C_\mu := H[ P(S_j) ] = -\sum_j P(S_j) \log_2 P(S_j). \end{equation} \section*{B: Quantum Models} \label{sec.quantummodels} Quantum models are based on having a set of non-orthogonal memory states $\{\ket{\sigma_j}\}$ in one-to-one correspondence with the causal states $S_j$.
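Step 5 of the sub-tree reconstruction above can be sketched numerically, assuming the $\varepsilon$-machine is already given as labelled transition matrices (an illustrative helper, not the code used in this work):

```python
import numpy as np

def statistical_complexity(T_list):
    """C_mu from an epsilon-machine given as labelled transition matrices,
    T_list[y][k, j] = P(output y, next state k | current state j)."""
    T = sum(T_list)                              # state-to-state transition matrix
    vals, vecs = np.linalg.eig(T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = pi / pi.sum()                           # steady state: sum_y T^y pi = pi
    pi = pi[pi > 0]
    return float(-np.sum(pi * np.log2(pi)))

# the "even process": from state A emit 0 or 1 with prob 1/2 (1 sends A -> B),
# from state B emit 1 with certainty (back to A); steady state pi = (2/3, 1/3)
T0 = np.array([[0.5, 0.0], [0.0, 0.0]])
T1 = np.array([[0.0, 1.0], [0.5, 0.0]])
cmu = statistical_complexity([T0, T1])
```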
These quantum memory states are constructed to satisfy \begin{equation} \label{eq.quantumcausalstates} U\ket{\sigma_j}\ket{0} = \sum_{y} \sqrt{P(y|j)} \ket{\sigma_{\lambda(y,j)}} \ket{y} \end{equation} for some suitable unitary operator $U$~\cite{binder2018practical,liu2019optimal}. Here, $P(y|j)$ is the probability of output $y$ given the past is in causal state $S_j$, and $\lambda(y,j)$ is a deterministic update function that updates the memory state to the one corresponding to the causal state of the updated past. Sequential application of $U$ then replicates the desired statistics (see Fig. \ref{fig.quantummodels}). Then, $\rho=\sum_jP(S_j)\ket{\sigma_j}\bra{\sigma_j}$ is the steady state of the quantum model's memory, and the quantum statistical memory is given by the von Neumann entropy of this state: \begin{equation} C_q = -\text{Tr} (\rho \log_2 \rho). \end{equation} \begin{figure}[h!] \includegraphics[width=0.6\linewidth]{fig_quantummodels.pdf} \caption{A quantum model consists of a unitary operator $U$ acting on a memory state $\ket{\sigma_j}$ and a blank ancilla $\ket{0}$. Measurement of the ancilla produces the output symbol, with the statistics of the modelled process realised through the measurement statistics.} \label{fig.quantummodels} \end{figure} \section*{C: Quantum Inference Protocol} \label{sec.quantuminferenceprotocol} A quantum model can be systematically constructed from the $\varepsilon$-machine of a process, and so a quantum model can be inferred from data by first inferring the $\varepsilon$-machine. However, the quantum model will then inherit errors associated with the classical inference method, such as erroneous pairing/separation of pasts into causal states (due to e.g., the $\chi^2$ test in sub-tree reconstruction). For this reason, a quantum-specific inference protocol was recently developed~\cite{ho2020robust} that bypasses the need to first construct an $\varepsilon$-machine, thus circumventing some of these errors.
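The quantum statistical memory $C_q$ defined above can be computed numerically given the memory states and their steady-state probabilities (a sketch; memory states are stored as the rows of an array):

```python
import numpy as np

def quantum_statistical_memory(states, probs):
    """C_q = von Neumann entropy of rho = sum_j P(S_j) |sigma_j><sigma_j|.
    `states` holds the memory-state vectors as rows, `probs` the P(S_j)."""
    d = states.shape[1]
    rho = np.zeros((d, d), dtype=complex)
    for ket, p in zip(states, probs):
        rho += p * np.outer(ket, ket.conj())
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]                 # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# with orthogonal memory states C_q reduces to the classical entropy;
# non-orthogonal states give C_q < C_mu
cq = quantum_statistical_memory(np.eye(2), np.array([2/3, 1/3]))
```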
Moreover, it offers a means to infer the quantum statistical memory $C_q$ of a quantum model without explicitly constructing said model. It functions by scanning through the stochastic process in moving windows of size $L+1$, in order to estimate the probabilities $P(Y_{0:L+1})$, from which the marginal and conditional distributions $P(Y_{0:L})$ and $P(Y_0|Y_{-L:0})$ can be determined. From these, we construct a set of inferred quantum memory states $\{\ket{\varsigma_{y_{-L:0}}}\}$, satisfying \begin{equation} \label{eq.quantuminferenceprotocol} U \ket{\varsigma_{y_{-L:0}}} \ket{0} = \sum_{y_0 } \sqrt{P(y_0 | y_{-L:0})} \ket{\varsigma_{y_{-L+1:1}}} \ket{y_0} \end{equation} for some suitable unitary operator $U$. When $L$ is greater than or equal to the Markov order of the process, and the probabilities used are exact, this recovers the same quantum memory states as the exact quantum model of Eq.~\eqref{eq.quantumcausalstates}, where the quantum memory states associated with two different pasts are identical iff the pasts belong to the same causal state. Otherwise, if $L$ is sufficiently long to provide a `good enough' proxy for the Markov order, and the data stream is long enough for accurate estimation of the $L+1$-length sequence probabilities, then the quantum model will still be a strong approximation with a similar memory cost. From the steady state of these inferred quantum memory states, the quantum statistical memory $C_q$ can be inferred~\cite{ho2020robust}. However, the explicit quantum model need not be constructed as part of the inference of the quantum statistical memory. The spectrum of the quantum model's steady state is identical to that of its Gram matrix~\cite{Horn2012}. For the inferred quantum model, this Gram matrix is given by \begin{equation} \label{eq.grammatrix} G_{y_{-L:0} y'_{-L:0}} = \sqrt{ P(y_{-L:0}) P(y'_{-L:0}) } \sum_{y_{0:L}} \sqrt{P(y_{0:L}|y_{-L:0}) P(y_{0:L}|y'_{-L:0})}.
\end{equation} The associated conditional probabilities $P(Y_{0:L}|Y_{-L:0})$ can either be estimated by compiling the $P(Y_0|Y_{-L:0})$ using $L$ as a proxy for the Markov order, or directly by frequency counting of strings of length $2L$ in the data stream. Then, the quantum inference protocol yields an estimated quantum statistical memory $C_q$: \begin{equation} C_q = -\text{Tr} (G \log_2 G). \end{equation} \section*{D: Methodology (Extended)} \label{sec.methodology} In this work, we study finite-width analogues of ECA. To avoid boundary effects from the edges of the ECA, we obtain an ECA state of width $W$ for up to $t_{\text{max}}$ timesteps by generating an extended ECA of width $W'=W+2t_{\text{max}}$ with periodic boundary conditions and keeping only the centremost $W$ cells; this is equivalent to generating a width-$W$ ECA with open boundaries (see \figref{fig.boundary}). Note, however, that the choice of boundary condition showed little quantitative effect upon our results. \begin{figure}[!h] \includegraphics[width=0.6\linewidth]{fig_boundary.pdf} \caption{Generation of finite-width ECA evolution with open boundaries via extended ECA with periodic boundaries.} \label{fig.boundary} \end{figure} The state of an ECA can be interpreted as a stochastic pattern. That is, given an ECA at time $t$ in state $x^{(t)}_{0:W}$, we can interpret this as a finite string of outcomes from a stochastic process $y^{(t)}_{0:W}$ with the same alphabet. We can then apply the tools of computational mechanics to this finite string, inferring the classical statistical complexity through the sub-tree reconstruction algorithm, and the quantum statistical memory from the quantum inference protocol. For both inference methods we use $L=6$, as little qualitative difference was found using larger $L$ (see \figref{fig.longer}). For the sub-tree reconstruction we set the tolerance of the $\chi^2$ test to 0.05. We apply the inference methods to ECA states of width $W=64,000$.
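The extended-width generation just described can be sketched as follows (an illustrative implementation; the widths and timestep counts below are toy values):

```python
import numpy as np

def eca_step(state, rule):
    """One synchronous update of an ECA with periodic boundaries."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    code = 4 * left + 2 * state + right        # neighbourhood code 0..7
    table = (rule >> np.arange(8)) & 1         # Wolfram rule lookup table
    return table[code]

def evolve_open_boundaries(W, rule, t_max, rng):
    """Width-W ECA with effectively open boundaries: evolve an extended
    periodic ECA of width W + 2*t_max and keep only the centremost W cells."""
    state = rng.integers(0, 2, size=W + 2 * t_max)   # uniformly random initial state
    history = [state[t_max:t_max + W].copy()]
    for _ in range(t_max - 1):
        state = eca_step(state, rule)
        history.append(state[t_max:t_max + W].copy())
    return np.array(history)

rng = np.random.default_rng(0)
hist = evolve_open_boundaries(W=16, rule=110, t_max=5, rng=rng)
```

Because information propagates at most one cell per timestep, the kept centre cells are unaffected by the periodic wrap-around for the first $t_{\text{max}}$ steps.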
For each ECA rule we generate an initial state for $t=1$ where each cell is randomly assigned $0$ or $1$ with equal probability, and then evolve for $t_\text{max}=10^3$ steps (in \figref{fig.longer} we analyse select rules for up to $t_\text{max}=10^5$, finding that the qualitative features of interest are already captured at the shorter number of timesteps). Note that this is many orders of magnitude smaller than the time at which a typical finite-width ECA is guaranteed to cycle through already-visited states ($\mathcal{O}(2^W)$)~\cite{Grassberger1986a}. We then apply the inference methods to the states at $t=1,2,3,...,9,10,20,...,90,100,200,...,t_\text{max}$; evaluating at every timestep shows little qualitative difference beyond highlighting the short periodicity of some Class II rules. We repeat five times for each rule, and plot the mean and standard deviation of $C_q^{(t)}$ and $C_\mu^{(t)}$ (see \figref{fig.Class1_Class2} and \figref{fig.Class3_Class4}). \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{fig_class1_class2.pdf} \caption{Evolution of $C_q^{(t)}$ (blue) and $C_\mu^{(t)}$ (red) for all Wolfram Class I and II rules. Lines indicate mean values over five different initial random states, and the translucent bands the standard deviation.} \label{fig.Class1_Class2} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{fig_class3_class4.pdf} \caption{Evolution of $C_q^{(t)}$ (blue) and $C_\mu^{(t)}$ (red) for all Wolfram Class III and IV rules. Rules are placed on a simplicity-complexity spectrum according to the growth of $C_q^{(t)}$.
Lines indicate mean values over five different initial random states, and the translucent bands the standard deviation.} \label{fig.Class3_Class4} \end{figure} \pagebreak\null\pagebreak \section*{E: Longer $L$, larger $t_{\text{max}}$} Here [\figref{fig.longer}] we present plots supporting that our choices of $L=6$ and $t_{\text{max}}=10^3$ are sufficiently large to capture the qualitative features of interest, showing little difference when they are extended. \begin{figure}[!h] \centering \includegraphics[width=\linewidth]{fig_longer.pdf} \caption{$C_q^{(t)}$ plots for a selection of rules with longer $L$ and larger $t_{\text{max}}$. Plots shown for $\{W=64,000, L=6\}$, \mbox{$\{W=128,000, L=7\}$}, and $\{W=256,000, L=8\}$.} \label{fig.longer} \end{figure} \pagebreak The exception to this is Rule 110, which appears to plateau at longer times. We believe this to be attributable to the finite width of the ECA studied -- as there are a finite number of gliders generated by the initial configuration, over time as the gliders annihilate there will be fewer of them to interact and propagate further correlations. This is illustrated in \figref{fig.110longer}. \begin{figure}[!h] \centering \includegraphics[width=0.4\linewidth]{fig_110longer.pdf} \caption{Over time, the finite number of gliders present in the initial configuration of a finite-width Rule 110 ECA will disappear due to annihilation with other gliders. At longer times, there are then fewer gliders to propagate longer-range correlations across the ECA state.} \label{fig.110longer} \end{figure} \section{F: Rule 18 and kinks} \label{sec.Rule18} Within the dynamics of Rule 18 (Wolfram Class III), a phenomenon referred to as `kinks' has been discovered~\cite{Grassberger1984}. These kinks are identified with the presence of two adjacent black cells in the ECA state, and have been shown to undergo random walks \cite{Eloranta1992} and to annihilate when they meet.
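Since kinks are identified with adjacent pairs of black cells, they can be located with a simple filter (an illustrative sketch, assuming a periodic state array):

```python
import numpy as np

def kink_filter(state):
    """Keep only cells that belong to a pair of adjacent black (1) cells;
    every other cell becomes white (0), exposing the kinks of Rule 18."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    return ((state == 1) & ((left == 1) | (right == 1))).astype(int)

filtered = kink_filter(np.array([0, 1, 0, 1, 1, 0, 0, 1]))
```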
These kinks can be seen by applying a filter to the dynamics of Rule 18, replacing all cells by white, with the exception of adjacent pairs of black cells (see \figref{fig.kinks}). The movement and interaction of kinks is reminiscent of a glider system like that of Rules 54 and 110; however, while information may be encoded into these kinks, they are noisy carriers due to their seemingly random motion. \begin{figure}[!h] \centering \includegraphics[width=0.7\linewidth]{fig_kinks.pdf} \caption{Rule 18 with and without a filter. The filter indicates the location of the kinks.} \label{fig.kinks} \end{figure} \end{document}
\section{Introduction} \label{intro} The astrophysical origin of the highest energy cosmic rays remains one of the major open questions in astrophysics. Cosmic rays with energies above about $55\times 10^{18}$ eV (55 EeV) -- commonly called UHECRs -- lose energy through interactions with the Cosmic Microwave Background radiation, implying that their sources must be relatively nearby (closer than $\sim 200$ Mpc\cite{Greisen:1966,Zatsepin:1966}). However, very few classes of astrophysical systems seem capable of accelerating particles to such high energies and there are few candidate sources within the required distance. The closest plausible UHECR source is the nearby radio galaxy Centaurus A\cite{Cavallo:1978,Romero:1996,Farrar:2000}. Tantalizingly, the highest energy Pierre Auger Observatory events show a distinct excess within $18^\circ$ of Cen A: 13 events observed, while 3.2 events would be expected for 69 events coming from an isotropic distribution\cite{Auger:2010}. This excess has re-ignited interest in the possibility that Cen A is the source of most of these events\cite{Wibig:2007,Gorbunov:2008,Hardcastle:2009,Kachelriess:2009,Rachen:2008, Fargion:2008,Fargion:2009}. However, Cen A is located in the ``Supergalactic Plane'' -- a nearby overdensity of galaxies -- so the UHECR excess could be produced by multiple sources in the general region, or possibly simply be due to some focussing effect of the Galactic magnetic field (GMF) or extragalactic magnetic field (EGMF). Fig. \ref{auger_events} shows the arrival directions of the 69 published events, with Cen A and the supergalactic plane indicated as well. \begin{figure} \centering \includegraphics[width=.95\linewidth]{auger_69events.pdf} \caption{The 69 published Auger events above 55 EeV\cite{Auger:2010} in Galactic coordinates, with Galactic longitude zero at the center and increasing to the left. The giant radio lobes of Cen A are marked in red and the super-Galactic plane is shown as a grey line. The blue contour is an $18^\circ$ circle around Cen A.
The Auger exposure is shown in light blue; it is largest at the South Celestial Pole, about 50$^\circ$ south of Cen A. }\label{auger_events} \end{figure} Since UHECRs are charged particles (protons or nuclei)\cite{AugerICRCcorrelations:2009}, their arrival directions do not point exactly toward the source of their emission. Magnetic deflection is proportional to the UHECR's electric charge, which is undetermined. Some evidence, e.g., correlations with extragalactic objects such as AGNs \cite{AugerAGN:2007,Auger:2010}, favors UHECRs being mostly protons, while other evidence suggests a mixed or heavy composition \cite{AugerICRCcomposition:2009}. Therefore, use of a reliable model of the Galactic magnetic field, including both the large scale and random components, is necessary to obtain trustworthy predictions for UHECR source locations. This is not a pedantic matter, as demonstrated in Fig. \ref{stanev} which shows the striking variation in predicted arrival directions using six currently-used models for the coherent GMF, for a 60 EeV proton produced in the core of Cen A. \begin{figure} \centering \includegraphics[width=1\linewidth]{stanev_defl.pdf} \caption{The predicted locus of arrival directions for a 60 EeV proton emitted from the nucleus of Cen A (white circle), for the JF12 GMF and five other popular large-scale Galactic magnetic field models: the ASS/BSS models by Stanev \cite{Stanev:1997}, the best-fit model of Sun et al. \cite{Sun:2008,Sun:2010} with a 2 $\mu$G halo field, and the ASS/BSS models of Pshirkov et al. \cite{pshirkov+11}. The $2\sigma$ uncertainty region of the predicted arrival direction due to the uncertainty in the JF12 parameter values is indicated by the shaded region; no such uncertainty analysis exists for the other models.
JF12 provides a model of the random field, but for purposes of comparing to the other models which do not provide a model of the random field, only deflections due to the coherent field are shown.}\label{stanev} \end{figure} Of the six GMF models used in Fig. \ref{stanev}, the new model of Jansson and Farrar \cite{jf12,jf12rand} (JF12 below) gives by far the best global fit to the RM and polarized synchrotron data \cite{jf12}, with a $\chi^2$ per degree of freedom of $\chi_{\rm dof}^2 = 1.096$ for the 6605 observables (pixels of RM, $Q$ and $U$). The next best model is that of Sun et al.\cite{Sun:2008}, whose parameters were updated in \cite{Sun:2010} (SR10). This is the most comprehensive attempt prior to JF12 to model the coherent GMF using constraints from both RM and synchrotron emission data. It does not allow for as general a functional form as JF12, and in particular does not include the out-of-plane field or possible striated random fields. With the parameters given in ref. \cite{Sun:2010}, its $\chi_{\rm dof}^2$ for the 6605 observables is 1.672. To test if the functional form adopted in SR10 is as good as JF12, we used the SR10 form and re-optimized its parameters to fit the ensemble of the data using the JF12 MCMC; the resulting fit has $\chi_{\rm dof}^2 = 1.325$, indicating that the out-of-plane and striated field components of JF12 are significant improvements to the model. Although more recent, the models proposed by Pshirkov et al.\cite{pshirkov+11} (P+11) have a less general form and are less well constrained, as these authors used only RMs and not synchrotron emission. They are based on the SR10 and Prouza-Smida\cite{Prouza2003} (PS03) models. Being unable to disambiguate the large scale geometry, P+11 offers benchmark BSS and ASS versions. When fitting the complete set of 6605 observables, these give $\chi_{\rm dof}^2 = 2.663, \, 4.971$, respectively; with our re-optimization of their parameter values these become $\chi_{\rm dof}^2 = 1.452, \, 1.591$.
We did not measure the $\chi_{\rm dof}^2$ of PS03 because the P+11 models are a generalization of it, and those give poor fits. The ASS and BSS models of Stanev\cite{Stanev:1997} are classics, developed to illustrate the impact of different field geometries more than to provide a detailed model for the field; they fare even worse in a global fit. Clearly, studies of the deflection of UHECRs such as refs. \cite{Takami:2009, Vorobiov:2009} using these old models cannot be trusted to reliably predict CR deflections in the direction of Cen A. In Sec. \ref{gmf} we take advantage of the exceptional RM coverage in the region surrounding Cen A from \cite{Feain:2009}, to test the various GMF models in the region relevant for predicting deflections of UHECRs from Cen A. We find that JF12 accurately predicts the mean Faraday rotation measure and polarized and total synchrotron intensity in the particular direction of Cen A, while other models perform less well to very poorly. Finally, having confirmed the validity of the JF12 model for Cen A deflections, we use JF12 in Sec. \ref{deflections} to determine the deflections of UHECRs through the GMF as a function of their energy and charge. We find that three events within $18^{\circ}$ of Cen A could be protons coming from Cen A and three others can be attributed to Cen A for more general charge assignments. Thus we find that the distribution of the arrival directions of the excess of events is not compatible with their dominant source being either the Active Galaxy or the extended radio lobes of Cen A, unless high-$Z$ nuclei can ``wrap back'' to the Cen A region -- winding up arriving from that direction after deflections greater than $2 \pi$. Of course, in that case, an association with Cen A would be essentially accidental. \section{Centaurus A} Centaurus A (NGC 5128) is the nearest active galaxy, and a Fanaroff-Riley Class I (FR-I) radio galaxy (see \cite{Israel:1998} for a review), at a distance of 3.8 Mpc \cite{Harris:2009}.
The massive elliptical host galaxy has Galactic coordinates $(l,\,b)=(309.5^\circ,\,19.4^\circ)$. Thanks to its proximity and size, its enormous radio lobes combine into the largest extragalactic object on the sky, with an angular size of 9\ensuremath{^\circ}$\times$5\ensuremath{^\circ}, corresponding to a physical size of 500$\times$250 kpc. About 5 kpc from the central galaxy, jets from the accretion disk surrounding the central supermassive black hole expand into plumes as they plow into the ambient intergalactic medium. These plumes are called the inner radio lobes. Some material goes farther, creating the northern middle lobe, which extends to 30 kpc and lacks a southern counterpart. The giant outer radio lobes extend 250 kpc in projection both in the north and the south; their outline is shown in Fig. \ref{cenA_data}. The 3D orientation of the lobes is not well-known. Cen A was first considered as a possible source of UHECRs by Cavallo \cite{Cavallo:1978}. The possibility that Cen A could in fact be the source of most cosmic rays, if turbulent extragalactic magnetic fields on the Cen A side of the Milky Way are near the maximum allowed value, was first proposed by Farrar and Piran \cite{Farrar:2000}. 
Some of the more recent works include: \cite{Wibig:2007}, which proposes that Cen A is one of three sources that, combined, are responsible for all observed UHECRs; \cite{Gorbunov:2008} which analyzes the significance of correlation between UHECRs and Cen A; \cite{Hardcastle:2009} which investigates the plausibility of the giant radio lobes of Cen A being acceleration sites of UHECRs; \cite{Kachelriess:2009} which considers the possibility that the radio jet at the core of Cen A is an accelerator; \cite{Rachen:2008, Rieger:2009} which considers various mechanisms that could accelerate UHECRs in radio galaxies such as Cen A; and \cite{Fargion:2008,Fargion:2009} which argue that Cen A is the source of the $\approx\!10$ events in the region of the sky surrounding Cen A. Ref. \cite{Moskalenko:2009} notes that Cen A could be associated with at least 4 out of the -- at the time -- 27 published Pierre Auger UHECRs above 57 EeV due to its large radio extent, but does not consider the deflection by any particular Galactic magnetic field model. \section{Magnetic field model and predictions}\label{gmf} As seen in Fig. \ref{stanev}, different GMF models yield highly disparate predictions for UHECR deflections. Hence, the validity of any conclusions about UHECR deflections hinges on the reliability of its GMF model. The JF12 model of refs. \cite{jf12,jf12rand} should be substantially more realistic than earlier models. It has a more general form and it is constrained by both RMs and synchrotron emission, which together probe both the line-of-sight and transverse components of the field. The model includes a thin disk component, an extended halo component, and an out-of-plane component as suggested by observations of external galaxies; random and striated random fields are also included in the analysis. We refer the reader to \cite{jf12} for details of the JF12 large scale GMF model, and to \cite{jf12rand} for a description of the associated random field model. 
The model is constrained by the WMAP7 Galactic synchrotron emission map \cite{Gold:2011} and more than forty thousand extragalactic rotation measures, and as noted in Sec. \ref{intro} it reproduces the global RM and polarized and unpolarized synchrotron data well. However, the Galactic magnetic field is very complicated and even the JF12 global model with 34 parameters describing the coherent and random fields cannot be expected to provide a highly accurate model of the magnetic field along every line-of-sight. Therefore, before proceeding to UHECR deflections, we first determine the constraining observables along lines-of-sight relevant for UHECRs propagating from Cen A and compare them to the predictions of the JF12 model. \begin{figure} \centering \includegraphics[width=1\linewidth]{PI_egs_cr.pdf} \caption{Polarized synchrotron radiation at 22 GHz (color) from WMAP7 data \cite{Gold:2011}. The published Auger UHECR events above 55 EeV \cite{Auger:2010} around Cen A are indicated with small gray circles; their energies are given in EeV. Contours from radio data \cite{Haslam:1982} outline Cen A (center) and parts of the Galactic plane (both contours drawn at 70 K at 408 MHz). 160 extragalactic sources with lines-of-sight outside Cen A \cite{Feain:2009} are shown along the boundary of Cen A. Filled white circles denote negative rotation measures (corresponding to a line-of-sight electron-density-weighted average magnetic field \emph{away} from the observer), black squares (very few) denote positive rotation measures. The size of the markers is proportional to the magnitude of the rotation measure. The large white circle shows the region 5$^\circ$ to the right, used to estimate the PI in the direction of Cen A without foreground contamination.}\label{cenA_data} \end{figure} Fig. \ref{cenA_data} shows a portion of the sky centered on Cen A, with color scale indicating the synchrotron polarization intensity (PI) from WMAP7. It can be seen from Fig.
\ref{cenA_data} that Cen A is located at the edge of a highly polarized region, part of the nearby North Polar Spur (NPS) or radio Loop I\cite{Wolleben:2007}. If the measured PI near Cen A is dominated by these local structures, the emission is likely coming from very nearby ($\lesssim 200$ pc in the case of the NPS), in which case this region is not a good indicator of the large scale magnetic field relevant for UHECR deflection. The measured PI in the direction of Cen A is likely dominated by emission from Cen A itself, and also cannot be used. But the area immediately to the right of Cen A appears mostly uncontaminated by local emission. We select the 2\ensuremath{^\circ}\ radius disk shown in Fig. \ref{cenA_data}, centered on $l=304.5^\circ, \, b=+19.4^\circ$, a point 5 degrees from Cen A; this point was chosen to be close to the direction of Cen A yet avoid any obvious contamination due to foreground sources. Here we compute the average PI and I and their dispersion. The PI is so small that the polarization angle cannot be reliably determined and we do not use it. In this way we estimate the polarized intensity and intensity at 22 GHz in the general direction of Cen A to be $\mathrm{PI}=0.008\pm0.006$ mK and $\mathrm{I}=0.14\pm0.07$ mK, where the uncertainties quoted are the standard deviation of the $1^\circ\times1^\circ$ subpixels. The rotation measures of 160 extragalactic sources (EGS) with lines-of-sight near, but outside, Cen A were measured using the Australia Telescope Compact Array by Feain et al. \cite{Feain:2009} (see Fig. \ref{cenA_data}). Nearby foreground structures do not disproportionately impact RMs as they do for synchrotron emission, so these RMs provide an excellent measurement of the RM in this direction. The average RM of these 160 new measurements is $-54\, \text{rad}\,\text{m}^{-2}$, with a standard deviation of $32\, \text{rad}\,\text{m}^{-2}$.
The corresponding predictions of the combined JF12 coherent and random GMF models are $\mathrm{PI}=0.00895$ mK, $\mathrm{I}=0.17$ mK and $\mathrm{RM}=-51.092\,\text{rad}\,\text{m}^{-2}$. Although the JF12 model parameters were determined using all available RMs including those of Feain et al.\cite{Feain:2009}, the fitting procedure reduces the 160 RMs surrounding Cen A to just a handful of data points (4\ensuremath{^\circ}-by-4\ensuremath{^\circ}\ pixels) out of several thousands used to constrain the GMF parameters, so there is no {\em a priori} guarantee that the model prediction will agree well with the local observables. Thus the virtually perfect agreement between the JF12 model predictions and the observations along the Cen A line of sight is a testament to the quality of the functional form of the model and the power of the global fitting approach. The uncertainty on the mean is $1/\sqrt{N}$ times the standard deviation, where $N$ is the number of statistically independent measurements. The extent to which the different subpixels for PI and the individual RMs can be considered independent measurements depends on the scale length of the random fluctuations, which has not yet been established. Even with $N$'s of 16 and 160, leading to observational mean values of $\mathrm{PI}=0.008 \pm 0.0015, \, \mathrm{I}=0.14\pm0.025$ mK and $\mathrm{RM}= -54 \pm 3\,\text{rad}\,\text{m}^{-2}$, the JF12 predictions are in excellent agreement within the errors. Given the much worse global fit of other GMF models to the data, it is not surprising that they also give a poorer fit in the Cen A direction with \{$\mathrm{PI} \,({\rm mK}), \, \mathrm{RM} \,(\text{rad}\,\text{m}^{-2})$\} predictions: SR10 \{0.00656, \,-25.486\}, P+11BSS \{0.0092, \, -72.411\}, P+11ASS \{0.0096, \, -100.469\}, Stanev BSS \{0.00426, \, -56.594\}, Stanev ASS \{0.00426, \, 61.704\}.
One might think that polarized synchrotron data could be used directly to determine the deflections of UHECRs along a line-of-sight of interest, since both depend on the components of $\vec{B}$ transverse to the line of sight, but this is not the case for several reasons. Most importantly, the Stokes parameters $Q$ and $U$ are sensitive only to the orientation and strength of the transverse field, but not its sign, whereas a UHECR's deflection depends on the sign of $\vec{B}_{\perp}$: a trajectory which traverses a region with a sign reversal has a smaller net deflection than in the absence of the reversal. In addition, $Q$ and $U$ are weighted by the relativistic electron density, $n_{{\rm cre}}$, making them much less sensitive to the field in the halo than are UHECR deflections. (This means, of course, that when UHECR sources and charge assignments are known, UHECR deflections will become a very important tool for constraining the GMF.) Therefore it is a necessary but not sufficient condition that a GMF model gives a good fit to the PI data along its trajectory: a good {\em global} fit to the observables is also necessary to properly constrain the sign reversals. Fig. \ref{B_comp_CenA} shows the coherent field along the line of sight to Cen A in the JF12 GMF model; $\theta$ is the angle of $\vec{B}_{\perp}$ with respect to the Galactic plane ($\theta=0$ for a field parallel to the plane), in a right-handed coordinate system with $\hat{z}$ pointing in the direction of motion.
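The sign-blindness of $Q$ and $U$ can be made concrete with a toy numerical check (an idealized sketch, not the actual synchrotron transfer calculation: a single uniform field region, the $n_{\rm cre}$ weighting omitted, and the polarization angle tied to the field orientation up to a fixed $90^\circ$ offset, which does not affect the sign argument):

```python
import numpy as np

def stokes_qu(B_perp):
    """Toy Stokes Q, U for synchrotron emission from a transverse field
    B_perp = (Bx, By): intensity ~ |B_perp|^2, angle set by the field
    *orientation* psi, entering only through cos(2 psi) and sin(2 psi)."""
    psi = np.arctan2(B_perp[1], B_perp[0])
    p = float(np.dot(B_perp, B_perp))
    return p * np.cos(2 * psi), p * np.sin(2 * psi)

def deflection_dir(velocity, B):
    """Unit vector along the magnetic force on a positive charge, ~ v x B."""
    f = np.cross(velocity, B)
    return f / np.linalg.norm(f)

B = np.array([0.0, 2.0, 0.5])   # transverse part (Bx, By) = (0, 2)
v = np.array([0.0, 0.0, 1.0])   # UHECR moving along the line of sight

q1, u1 = stokes_qu(B[:2])       # Q, U are unchanged under B -> -B ...
q2, u2 = stokes_qu(-B[:2])
d1 = deflection_dir(v, B)       # ... but the deflection direction reverses
d2 = deflection_dir(v, -B)
```

Here `(q1, u1)` equals `(q2, u2)` while `d1` is exactly `-d2`, which is why a field reversal is invisible to polarization data yet halves no part of its effect on a UHECR trajectory.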
\begin{figure} \centering \includegraphics[width=1\linewidth]{B_comp_cenA_f52.pdf} \caption{The coherent field along the trajectory from Cen A, in the JF12 model.}\label{B_comp_CenA} \end{figure} \section{Deflections of UHECRs from Cen A}\label{deflections} If there were no random component to the deflections of UHECRs, the arrival directions of UHECRs from any individual source would fall on a single arc whose width is set by the angular resolution of the Auger events ($\approx 1^\circ$), with the separation from the source region inversely proportional to the rigidity of the cosmic ray ($E/Z=$ \mbox{energy/charge}), in the limit of small-angle deflections. This distribution of arrival directions is manifestly not observed in the Cen A region. However, if coherent fields were negligible compared to random ones, the arrival directions would be smeared over all directions rather than aligned. Thus, the viability of the proposal that Cen A is the source of Auger's reported excess of UHECRs within $18^{\circ}$ with respect to the isotropic expectation depends on the relative importance of coherent and random deflections and the magnitude of the latter. Therefore it is crucial to our analysis that the random fields along the trajectories of interest are well constrained as has been done by JF12 \cite{jf12rand}. The effect of Galactic random fields on UHECR deflections was first considered in \cite{Tinyakov:2005}; with the better-constrained JF12 random field, especially in the Cen A direction, the uncertainty in the random deflections is reduced. JF12 also considered the possibility of striated fields -- fields with a definite orientation but a random sign and magnitude, as could be produced by stretching a random field. While random fields cause dispersion in all directions on the sky, striated fields aligned with the local regular field merely rescale the displacement due to the coherent field, similarly to the effect of UHECR energy resolution.
Using RM, $Q$ and $U$ alone, the existence of a striated random field aligned with the local coherent field cannot be distinguished from a rescaling of the relativistic electron density, $n_{\rm cre}$, and \cite{jf12} only constrained the product. The degeneracy can be broken by fitting the total intensity as is done in ref. \cite{jf12rand}; we incorporate the effect of the striated random field along with those of the normal, isotropic random field, in the deflection calculations below. \subsection{Deflections in the Galactic magnetic field} \begin{figure} \centering \includegraphics[width=1\linewidth]{CenA_random.pdf} \caption{Colored regions indicate the locus within which UHECRs should be observed if the source is the core of Cen A; green, red, blue and ochre regions are for UHECRs of rigidity 160, 80, 40 and 20 EeV/Z, respectively; the super-Galactic plane is shown as a grey line; nearby UHECRs and their energies are indicated; those with open circles are more than 18$^{\circ}$ from Cen A. The dispersion due to the random (turbulent) component of the magnetic field and observational uncertainties are both included in producing these probability distributions. Note that the actual arrival directions will fall within a much narrower region within this domain because the trajectories of UHECRs of similar energies probe approximately the same random field. The uncertainty in the mean deflection from the regular field, due to $1\sigma$ uncertainties in the GMF parameters, is shown in Fig. \ref{stanev}. }\label{rainbow} \end{figure} The deflection angle is inversely proportional to the cosmic ray rigidity, for small deflections.
Using the JF12 best-fit global GMF model, we find that the magnitude of the arrival direction deflection of a UHECR from the direction of Cen A in the regular magnetic field is \begin{equation} \label{regdef} \Delta\theta_{\rm reg} = (2.3^\circ \pm 0.24^\circ) ~ (Z/E_{100}), \end{equation} valid for energies for which this is a small deflection. Here, $Z$ is the charge in units of the proton charge, $E_{100}$ is the energy of the UHECR in units of 100 EeV, and $0.24^\circ$ is the uncertainty due to GMF parameter uncertainties (the standard deviation of the separation in arrival directions relative to that of the best-fit field, using GMFs for randomly chosen parameter sets drawn from the Markov Chain Monte Carlo probability distribution of JF12 parameters). The CR is deflected so that it arrives from a direction closer to the Galactic plane and somewhat farther from the Galactic center, as shown in Fig. \ref{rainbow}. To estimate the locus of uncertainty in arrival direction due to random magnetic deflections we repeatedly propagate UHECRs of different rigidity through the Galaxy, dividing the path into domains of size $\lambda$. The field in the $i$th domain is the sum of the JF12 coherent GMF, $\vec{B}_i$, plus a random part, $B_{{\rm rand},i} \hat{n}$, and a striated-random part $B_{{\rm stri},i} \hat{\eta}$, where $\hat{n}$ is a randomly oriented unit vector chosen to have a different random direction in each step and $\hat{\eta}$ is a unit vector oriented along the direction of the local coherent field with sign randomly chosen in each step. $B_{{\rm rand},i}$ is the rms random field evaluated in the center of the $i$th domain using the JF12 random field model\cite{jf12rand}, and $B_{{\rm stri},i}$ is $\sqrt{\beta} | B_{{\rm reg},i} | $, with $\beta = 1.38$\cite{jf12,jf12rand}.
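A stripped-down version of this Monte Carlo can be sketched as follows (a simplification of the procedure just described: we drop the coherent and striated pieces and use a single illustrative rms field strength and path length, so only the random-walk scaling, not the JF12 normalization, is meaningful):

```python
import numpy as np

def random_walk_deflection(E_EeV, Z, B_rand_uG=1.0, path_kpc=14.0,
                           lam_pc=100.0, n_trials=1000, seed=0):
    """Mean net deflection (degrees) from summing one small random-direction
    kick per coherence domain of size lam_pc along a path of length path_kpc.
    Small-angle kick per domain: lam / R_Larmor, with
    R_Larmor ~ 1.1 kpc * (E / EeV) / (Z * B / microgauss)."""
    rng = np.random.default_rng(seed)
    n_dom = int(path_kpc * 1e3 / lam_pc)                       # domains crossed
    delta = (lam_pc / 1e3) / (1.1 * E_EeV / (Z * B_rand_uG))   # rad per domain
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, n_dom))
    dx = delta * np.cos(phi).sum(axis=1)                       # net angle, x
    dy = delta * np.sin(phi).sum(axis=1)                       # net angle, y
    return float(np.degrees(np.hypot(dx, dy)).mean())
```

With a fixed seed the result scales exactly as $Z/E$ and, statistically, as $\sqrt{\lambda}$; the absolute value (a fraction of a degree for 100 EeV protons with these illustrative inputs) is only indicative, since the true rms field varies along the path.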
The domain size $\lambda$ corresponds to the maximum coherence length of the turbulent field; this is uncertain but is plausibly of order $\approx$ 100 pc, a typical maximum size of supernova remnants\cite{Gaensler:1995, Haverkorn:2008}. As one would expect, the spread in arrival directions scales as $\sqrt{\lambda}$. For UHECRs arriving from Cen A, the mean angular separation between the arrival direction and the centroid of the arrival-direction distribution, over 1000 different realizations of the random field, is \begin{equation} \label{sigrand} \sigma_{\rm rand} = 1.3^\circ (Z/E_{100}) \sqrt{ \lambda_{100} }, \end{equation} where $\lambda_{100}=\lambda/(100\,{\rm pc})$. A more sophisticated treatment of the random field realization with a Kolmogorov spectrum would reduce the dispersion compared to Eq. \ref{sigrand}, since in that case power is shared over a range of scales rather than being concentrated in the largest coherence length which is most effective at deflecting UHECRs. It is important to emphasize that the dispersion quoted in Eq. \ref{sigrand} and the colored regions shown in Fig. \ref{rainbow} {\em do not} represent the typical spread of arrival directions predicted for UHECRs coming from Cen A, because UHECRs of a given energy from a given source direction and entry point into the GMF follow nearly the same trajectories and thus propagate in nearly the same fields for much of their path. The calculation above, by contrast, takes different fields for each UHECR, in order to obtain the inclusive region of possible arrival directions given our knowledge of the GMF. We can gain insight into whether different events of the same energy probe different coherence regions as follows.
The deflection of an individual UHECR in a given random magnetic field can be described as a random walk with net deflection $\sqrt{N}$ times the deflection $\delta$ due to the random field in a typical coherence length: $\sigma_{{\rm rand}} \approx \sqrt{N} \, \delta$, where $N$ is the number of independent random domains. From Fig. \ref{B_comp_CenA} and taking a 100 pc coherence length, $\sqrt{N} \approx 12$ crossing the Galaxy en route from Cen A. With $\sigma_{{\rm rand}} \approx 1.3^{\circ}$ we infer $\delta \approx 0.1^{\circ}$, so the lateral displacement relative to the trajectory in the coherent field alone, in crossing a typical coherence region, is $\approx 0.2$ pc. This is far smaller than the assumed 100 pc size of the coherence region, so random UHECR deflections do not cause trajectories for events of the same energy to diverge enough to sample different random magnetic field domains. More study is needed to determine whether the divergence of the beam from Cen A or the lateral size of the portion of the beam focussed on Earth may be sufficient for different events to probe different magnetic domains. Thus Eq. (\ref{sigrand}) is an upper bound on the magnetic dispersion from random Galactic fields in the JF12 model. \subsection{Deflections in extragalactic magnetic fields} The direction of arrival to the Galaxy is also smeared compared to the direction of the source, by deflections in the Cen A system itself and by the extragalactic magnetic field (EGMF) between Cen A and the Milky Way. But even very large deflections within the Cen A region can only move the apparent source direction within the radio lobes, translating the image region accordingly. Thus we are concerned with the EGMF. The EGMF is thought to be turbulent, with a field strength typically assumed to be of order 0.1--1 nG \cite{Kronberg:1994} and coherence length $\lambda_{\rm EG} \approx 100$ kpc.
It was pointed out in \cite{Farrar:2000b} that the EGMF of the low-redshift universe is poorly constrained by the dispersions in RM of high-redshift sources as was done in the classic work of \cite{Kronberg:1994}, and also that \cite{Kronberg:1994} uses an unrealistically large value for the electron density and thus underestimates the true field. Refs. \cite{Neronov:2009,neronovVovkSci10,dermer+blazarEGMF11,TaylorVockNeronovEGMF11} provide an overview of recent efforts to constrain the EGMF using TeV blazars. If UHECRs experience many small deflections over a distance $D$ from source to Galaxy, in a turbulent EGMF whose rms value and coherence length are $B_{\rm EG}$ and $\lambda_{{\rm EG}}$, then the rms angle between the source and the UHECR arrival direction to the Galaxy is \cite{Waxman:1996} \begin{equation} \label{egmfdef} \delta \theta_{\rm EG} \approx 0.15^\circ \, \, \left( \frac{D}{3.8 \, \rm Mpc} \cdot \frac{\lambda_{\rm EG} }{100 \,\rm kpc } \right)^{\frac{1}{2} }\, (B_{\rm EG}/1\, {\rm nG}) \,\, (Z/E_{100}), \end{equation} assuming the lateral extent of the EGMF is $\gtrsim D \, \delta \theta $. The extragalactic random deflection predicted in Eq. (\ref{egmfdef}) must be added in quadrature with the Galactic random deflection given in Eq. (\ref{sigrand}) to obtain the total random deflection. In order for $\delta \theta_{\rm EG} \geq 12^\circ$ (or just to be greater than the coherent Galactic deflection in Eq. \ref{regdef}), we would need \begin{equation} \label{egmfMin} B_{\rm EG} \gtrsim 80 \, (10) \, \left(\frac{\lambda_{\rm EG} }{100 \,\rm kpc } \right)^{-\frac{1}{2} } {\rm nG}. \end{equation} Deflections in a variety of realizations of extragalactic fields between Cen A and the edge of the Galaxy were recently considered in \cite{yuksel+CenA12}.
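As an aside, the leading factor in Eq. (\ref{egmfMin}) is just the inversion of Eq. (\ref{egmfdef}) for $B_{\rm EG}$; a quick check for the fiducial $D = 3.8$ Mpc and $Z/E_{100}=1$ (the function name is ours):

```python
def b_egmf_min_nG(theta_deg, E_100=1.0, Z=1.0, D_Mpc=3.8, lam_EG_kpc=100.0):
    """Minimum rms EGMF (in nG) giving an rms deflection of theta_deg, from
    solving theta ~ 0.15 deg * sqrt((D/3.8 Mpc)*(lam/100 kpc)) * (B/nG)
    * (Z/E_100) for B."""
    return theta_deg / (0.15 * ((D_Mpc / 3.8) * (lam_EG_kpc / 100.0)) ** 0.5
                        * (Z / E_100))

b_needed = b_egmf_min_nG(12.0)   # ~80 nG to spread arrivals over ~12 degrees
```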
The authors of ref. \cite{yuksel+CenA12} attempted to find choices of rms strength and maximum coherence length of the EGMF in the intervening region, which could produce the observed distribution of UHECR arrival directions within $18^\circ$ of Cen A, assuming that Cen A is the source and ignoring GMF deflections. They were interested in the possibility that smaller rms field strength and larger coherence length than we employed above could generate the observed pattern. However, they find that, except for specially selected examples, EGMFs do not reproduce the combination of small mean deflection and large rms deflection seen in the Auger events within $18^\circ$ of Cen A, taking Cen A to be the source. \subsection{Futuristic Note: RMs of Cen A sources and UHECR deflections} In the small-angle deflection approximation one can directly relate the smearing in UHECR arrival directions to the dispersion in RMs accumulated in any given region in which the electron density is roughly constant. If the magnetic field is turbulent with a typical coherence length $\lambda$ and rms strength $B$, over a region of size $R = N \, \lambda$, the rms dispersion in RM is (in $ {\rm rad \, m^{-2}}$) \begin{equation} \label{deltaRM} \delta_{\rm RM } = \sqrt{N} \, 8.1\times 10^4 \,\, n_e \, \frac{ B_{\mu \rm G}}{ \sqrt{3}} \, \frac{\lambda}{100 \,\rm kpc}, \end{equation} where $n_e$ is the electron density in ${\rm cm}^{-3}$. A UHECR of energy $E_{100}$ (in units of 100 EeV) and charge $Z$ propagating from the center of such a region would typically have an angular deflection \begin{equation} \label{deltatheta} \delta \, \theta = \sqrt{\frac{N}{2}} \sqrt{\frac{2}{3}}\, \frac{ \lambda }{ R_{\rm Larmor}} = 0.52 \times\sqrt{N} \, \lambda_{100} \left(\frac{B_{\mu \rm G} \,Z}{E_{100}}\right), \end{equation} where the factor $\sqrt{2/3}$ is due to the projection of the Larmor radius to the plane transverse to the line-of-sight, and $\lambda_{100}=\lambda/(100\,{\rm kpc})$. Combining Eq.
(\ref{deltaRM}) and (\ref{deltatheta}) we obtain \begin{equation} \label{dispRMdispCR} \delta \, \theta_{\rm EG} \approx (6.4\times10^{-4})^\circ \, \delta_{\rm RM }\,(Z/E_{100})\, n_{e}^{-1}. \end{equation} Unfortunately, in spite of the greatly improved data on RMs near Cen A \cite{Feain:2009}, Eq. (\ref{dispRMdispCR}) does not provide a useful constraint on the extragalactic random field between Cen A and the Milky Way, because the Galaxy's much larger electron density and field strengths mean that Faraday rotation in Galactic fields dominates the RM accumulated between Cen A and the Galaxy. When the GMF and electron densities are much better known, Eq. (\ref{dispRMdispCR}) could in principle become useful to eliminate the sensitivity of the random deflection prediction to the uncertain coherence length and field strength. \section{Cen A as the source of proton UHECRs} In the JF12 GMF model, a 60 EeV proton is deflected $\approx 3.8^\circ$ toward the Galactic plane, at an angle of about 45$^\circ$ away from the Galactic center. This prediction has an rms uncertainty of $\approx \frac{1}{4}^\circ$ about the mean deflection for the best-fit GMF parameters, coming from the uncertainty in GMF parameters; the locus is shown in Fig. \ref{stanev}. Smearing due to random magnetic deflections is sub-dominant by at least a factor of two compared to the deflection due to the regular field, as can be seen in Fig. \ref{rainbow} which shows the maximal arrival locus of UHECRs of rigidities 160, 80, 40 and 20 EV originating in the Cen A galaxy. The Auger collaboration estimates a $\approx 14\%$ statistical and $\approx 22\%$ systematic uncertainty on the UHECR energies \cite{Auger_energy:2009}. Adding them in quadrature produces a $\approx 26\%$ energy uncertainty for each event. Since deflections $\sim 1/E$, this produces an additional uncertainty factor in the overall deflection ranging from $0.8$--$1.35$.
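These numbers can be checked with a few lines of arithmetic (the 14% and 22% figures and the $2.3^\circ$ coefficient of Eq. (\ref{regdef}) are from the text; everything else is elementary):

```python
from math import hypot

# Statistical (14%) and systematic (22%) energy uncertainties in quadrature:
sigma_E = hypot(0.14, 0.22)            # ~0.26, i.e. ~26%

# Deflections scale as 1/E, so a +/- sigma_E energy error rescales them by:
factor_lo = 1.0 / (1.0 + sigma_E)      # ~0.79
factor_hi = 1.0 / (1.0 - sigma_E)      # ~1.35

# Coherent deflection of a 60 EeV proton (Z = 1, E_100 = 0.6) from
# Delta-theta_reg = 2.3 deg * (Z / E_100):
deflection_deg = 2.3 * 1.0 / 0.60      # ~3.8 degrees
```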
Taking a generous view of the random deflections and uncertainties in the GMF, it appears that up to three of the 69 published Auger UHECRs above 55 EeV could be protons originating from Cen A or its radio lobes. Two to three more events could be consistent with a Cen A origin, taking into account that the GMF model is probably not perfect, if their charges are $Z \sim 2-4$; events with higher charges would be deflected more than $18^{\circ}$. The Galactic magnetic field deflects UHECRs coming from Cen A or its radio lobes into a swath, with events of lower rigidity further from the source. Fig. \ref{rainbow} shows the expected swath of UHECRs for rigidities $E/Z = 20$, 40, 80, and 160 EV if Cen A itself is the source. The opening angle of the region containing the UHECRs is set by the ratio $\sigma_{\rm rand}/|\Delta\theta_{\rm reg}|$ and is $\lesssim 35\ensuremath{^\circ}$. As long as $E/Z$ is large enough that $\sigma_{\rm rand}$ and $\Delta \theta_{\rm reg}$ do not become large, the numerator and denominator in this expression are both inversely proportional to the rigidity of the cosmic rays. Thus the opening angle of the swath is \emph{independent} of the energy calibration, charge, or composition of the UHECRs. The maximum opening angle of the arrival-direction-swath for UHECRs originating in Cen A is not only independent of uncertainties about the UHECRs; it is also a particularly robust and reliable prediction of the JF12 model. This is because the largest source of uncertainty in the JF12 work is its dependence on $n_{{\rm cre}}(\vec{r})$, which is based on GALPROP modeling\cite{jf12,jf12rand}. But $n_{{\rm cre}}(\vec{r})$ is a factor common to all of the observables $I$, $Q$, and $U$ constraining, respectively, the random field and the transverse coherent field, so the relative strength of the coherent and random deflections, and hence the opening angle of the swath, is quite robust.
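The cancellation behind this rigidity-independence is elementary and can be made concrete with placeholder numbers (the constants below are illustrative only, not fitted values from the JF12 model):

```python
# sigma_rand and Delta_theta_reg both scale as 1/(E/Z); the ratio that
# sets the opening angle is therefore the same at every rigidity.
a_rand, b_reg = 3.0, 9.0  # hypothetical proportionality constants

def sigma_rand(rigidity_EV):
    return a_rand / rigidity_EV

def delta_reg(rigidity_EV):
    return b_reg / rigidity_EV

# The four rigidities (in EV) plotted in Fig. "rainbow".
ratios = [round(sigma_rand(R) / delta_reg(R), 12) for R in (20, 40, 80, 160)]
print(ratios)  # four identical values: the ratio does not depend on E/Z
```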
Details of the coherent field will change as $n_{{\rm cre}}$ and the functional form of the field model are refined, resulting in possible rotation of the orientation of the arrival-direction-swath. However, the general relationship between the magnitudes of random and coherent deflections is likely to persist, as noted, so the swath would have a roughly similar appearance, though possibly pivoted about Cen A by some amount and/or with its footprint slightly stretched or compressed. Hence, it is unlikely that more than 5 to 6 of the 13 Auger UHECRs within 18$^\circ$ in the published 69-event data-set can be attributed to Cen A or its radio lobes, in spite of uncertainties in the composition or energy calibration. We emphasize that the arrival direction locus shown in Fig. \ref{rainbow} indicates the probability distribution for arrival directions for the given random field: it does not represent the actual image of Cen A in UHECRs, because the actual GMF is a particular realization of the random ensemble used to produce Fig. \ref{rainbow}. The actual arrival directions should fall in a considerably narrower band, whose width depends on the spatial properties of the random field, especially its maximum coherence length, which are not yet determined. It is tantalizing that 5--6 events do fall on such a narrow arc within the predicted Cen A locus. \section{Summary} We have checked that the new JF12 model of the Galactic magnetic field \cite{jf12}, which gives a very good global fit to a large amount of Faraday Rotation Measure and synchrotron emission data, also gives an accurate accounting of the extensive RM data as well as the synchrotron data in the particular direction of Cen A. This justifies confidence in the predictions of the JF12 GMF model for UHECR deflections. Using the JF12 model of the large-scale GMF~\cite{jf12} and the random and striated fields~\cite{jf12rand}, we determine the locus in which protons and low-Z nuclei from Cen A should be found (Fig.
\ref{rainbow}). Three UHECRs in the published Auger events above 55 EeV (those closest to Cen A) have arrival directions consistent with their being protons that originated in Cen A or its radio lobes. Three more events with energies 58, 78 and 64 EeV fall in the arrival locus for rigidity $E/Z \leq 20\,$EV originating from Cen A; they could have $Z=2$--$4$ and originate in Cen A. Events with higher charges and $E \lesssim 60 $ EeV are deflected more than $18^{\circ}$. Extragalactic fields would have to be $\sim 80$ nG -- far stronger than conventionally assumed -- in order for most UHECRs within $18^\circ$ of Cen A to have been produced by Cen A. \acknowledgments The research of R.J.\ and G.R.F.\ has been supported in part by NSF-PHY-0701451 and NASA grant NNX10AC96G. B.M.G.\ acknowledges the support of a Federation Fellowship from the Australian Research Council through grant FF0561298. The Australia Telescope Compact Array is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. G.R.F.\ acknowledges her membership in the Pierre Auger Collaboration and thanks her colleagues for their support of and contributions to her research.
\section{Introduction}\label{s: introduction} In this paper we consider the following conjecture, which is due to Cherlin and was first given in \cite{cherlin1}: \begin{conj}\label{conj: cherlin} A finite primitive binary permutation group must be one of the following: \begin{enumerate} \item a symmetric group $\Sym (n)$ acting naturally on $n$ elements; \item a cyclic group of prime order acting regularly on itself; \item an affine orthogonal group $V\cdot O(V)$, with $V$ a vector space over a finite field equipped with an anisotropic quadratic form, acting on itself by translation, with complement the full orthogonal group $O(V)$. \end{enumerate} \end{conj} Thanks to work of Cherlin himself \cite{cherlin2}, and of Wiscons \cite{wiscons}, Conjecture~\ref{conj: cherlin} has been reduced to a statement about almost simple groups. In particular, to prove Conjecture~\ref{conj: cherlin} it would be sufficient to prove the following statement. \begin{conj}\label{conj: cherlin2} If $G$ is a binary almost simple primitive permutation group on the set $\Omega$, then $G=\symme(\Omega)$. \end{conj} In this paper, we prove this conjecture for almost simple groups with sporadic socle. Formally, our main result is the following: \begin{thm}\label{t: sporadic} Let $G$ be an almost simple primitive permutation group with socle isomorphic to a sporadic simple group. Then $G$ is not binary. \end{thm} Note that we include the group ${^2F_4}(2)'$ in the list of sporadic groups -- this group is sometimes considered ``the $27^{\rm th}$ sporadic group'' -- so Theorem~\ref{t: sporadic} applies to this group too. The terminology of Theorem~\ref{t: sporadic} and the preceding conjectures is all fairly standard in the world of group theory, with the possible exception of the word ``binary''.
Roughly speaking, an action is ``binary'' if the induced action on $\ell$-tuples can be deduced from the induced action on pairs (for any integer $\ell>2$); a formal definition of a ``binary permutation group'' is given below in \S\ref{s: preliminaries1}. \subsection{Context and methods} We will not spend much time here trying to motivate the study of ``binary permutation groups''. As will be clear on reading the definition of ``binary'' in \S\ref{s: preliminaries1}, this notion is a particular instance of the more general concept of ``arity'' or ``relational complexity''. These notions, which we define below in group-theoretic terms, can also be formulated from a model-theoretic point of view, where they are best understood as properties of ``relational structures''. These connections, which run very deep, are explored at length in \cite{cherlin1}, to which we refer the interested reader. Theorem~\ref{t: sporadic} settles Conjecture~\ref{conj: cherlin2} for one of the families given by the Classification of Finite Simple Groups. It is the third recent result in this direction: Conjecture~\ref{conj: cherlin2} has also been settled for groups with alternating socle \cite{gs_binary}, and for groups with socle a rank $1$ group of Lie type \cite{ghs_binary}. Work is ongoing for the groups that remain (groups with socle a group of Lie type of rank at least $2$) \cite{gls_binary}. Our proof of Theorem~\ref{t: sporadic} builds on ideas developed in \cite{gs_binary} and \cite{ghs_binary}, in particular the notion of a ``strongly non-binary action''. In addition to this known approach, we also make use of a number of new lemmas -- we mention, in particular, Lemma~\ref{l: characters}, which connects the ``binariness'' of an action to a bound on the number of orbits in the induced action on $\ell$-tuples. These lemmas are gathered together in \S\ref{s: preliminaries1}.
In addition to these new lemmas, though, the current paper is very focused on adapting known facts about binary actions to create computational tests that can be applied using a computer algebra package like {\tt GAP} \cite{GAP4} or {\tt magma} \cite{magma}. This process of developing tests is explained in great detail in \S\ref{s: preliminaries2}. In the final two sections we describe the outcome of these computations. In \S\ref{s: nonmonster} we are able to give a proof of Theorem~\ref{t: sporadic} for all of the sporadic groups barring the monster. In \S\ref{s: monster} we give a proof of Theorem~\ref{t: sporadic} for the monster. The sheer size of the monster means that some of the computational procedures that we exploit for the other groups are no longer available to us, and so our methods need to be refined to deal with this special case. \subsection{Acknowledgments} At a crucial juncture in our work on Theorem~\ref{t: sporadic}, we needed access to greater computational power -- this need was met by Tim Dokchitser who patiently ran and re-ran various scripts on the University of Bristol {\tt magma} cluster. We are very grateful to Tim -- without his help we would have struggled to complete this work. We are also grateful to an anonymous referee for a number of helpful comments and suggestions. \section{Definitions and lemmas}\label{s: preliminaries1} Throughout this section $G$ is a finite group acting (not necessarily faithfully) on a set $\Omega$ of cardinality $t$. Given a subset $\Lambda$ of $\Omega$, we write $G_\Lambda:=\{g\in G\mid \lambda^g\in\Lambda,\forall \lambda\in \Lambda\}$ for the set-wise stabilizer of $\Lambda$, $G_{(\Lambda)}:=\{g\in G\mid \lambda^g=\lambda, \forall\lambda\in \Lambda\}$ for the point-wise stabilizer of $\Lambda$, and $G^\Lambda$ for the permutation group induced on $\Lambda$ by the action of $G_\Lambda$. In particular, $G^\Lambda\cong G_\Lambda/G_{(\Lambda)}$. 
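For readers who want to experiment with these definitions, the three objects just introduced are easy to compute for a toy example. A plain-Python sketch (we take $G=\Sym(\{1,2,3\})$ and $\Lambda=\{1,2\}$, so $|G_\Lambda|=2$, $|G_{(\Lambda)}|=1$ and $|G^\Lambda|=2$):

```python
from itertools import permutations

Omega = (1, 2, 3)
# Sym(Omega): every element stored as a point -> image dictionary.
G = [dict(zip(Omega, img)) for img in permutations(Omega)]

Lam = {1, 2}
# Set-wise stabilizer G_Lambda: g maps Lambda into (hence onto) Lambda.
setwise = [g for g in G if {g[x] for x in Lam} == Lam]
# Point-wise stabilizer G_(Lambda): g fixes every point of Lambda.
pointwise = [g for g in G if all(g[x] == x for x in Lam)]
# Induced group G^Lambda: the distinct restrictions of G_Lambda to Lambda.
induced = {tuple((x, g[x]) for x in sorted(Lam)) for g in setwise}

# |G^Lambda| = |G_Lambda| / |G_(Lambda)|, as noted above.
print(len(setwise), len(pointwise), len(induced))  # 2 1 2
```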
Given a positive integer $r$, the group $G$ is called \textit{$r$-subtuple complete} with respect to the pair of $n$-tuples $I, J \in \Omega^n$ if, for every subtuple of length $r$ in $I$, it contains an element mapping that subtuple to the corresponding subtuple in $J$, i.e. $$\textrm{for every } \{k_1, k_2, \dots, k_r\}\subseteq\{ 1, \ldots, n\}, \textrm{ there exists } h \in G \textrm{ with }I_{k_i}^h=J_{k_i}, \textrm{ for every }i \in\{ 1, \ldots, r\}.$$ Here $I_k$ denotes the $k^{\text{th}}$ element of tuple $I$ and $I_k^g$ denotes the image of $I_k$ under the action of $g$. Note that $n$-subtuple completeness simply requires the existence of an element of $G$ mapping $I$ to $J$. \begin{defn}{\rm The action of $G$ is said to be of {\it arity $r$} if, for all $n\in\mathbb{N}$ with $n\geq r$ and for all $n$-tuples $I, J \in \Omega^n$, $r$-subtuple completeness (with respect to $I$ and $J$) implies $n$-subtuple completeness (with respect to $I$ and $J$). Note that in the literature the concept of ``arity'' is also known by the name ``relational complexity''. When the action of $G$ has arity 2, we say that the action of $G$ is {\it binary}. If $G$ is given to us as a permutation group, then we say that $G$ is a \emph{binary permutation group}. } \end{defn} A pair $(I,J)$ of $n$-tuples of $\Omega$ is called a {\it non-binary witness for the action of $G$ on $\Omega$}, if $G$ is $2$-subtuple complete with respect to $I$ and $J$, but not $n$-subtuple complete, that is, $I$ and $J$ are not $G$-conjugate. To show that the action of $G$ on $\Omega$ is non-binary it is sufficient to find a non-binary witness $(I,J)$. We now recall some useful definitions introduced in~\cite{ghs_binary}. We say that the action of $G$ on $\Omega$ is \emph{strongly non-binary} if there exists a non-binary witness $(I,J)$ such that \begin{itemize} \item $I$ and $J$ are $t$-tuples where $|\Omega|=t$; \item the entries of $I$ and $J$ comprise all the elements of $\Omega$.
\end{itemize} We give a standard example, taken from~\cite{ghs_binary}, showing how strongly non-binary actions can arise. \begin{example}\label{ex: snba2}{\rm Let $G$ be a subgroup of $\Sym(\Omega)$, let $g_1, g_2,\ldots,g_r$ be elements of $G$, and let $\tau,\eta_1,\ldots,\eta_r$ be elements of $\Sym(\Omega)$ with \[ g_1=\tau\eta_1,\,\,g_2=\tau\eta_2,\,\,\ldots,\,\,g_r=\tau\eta_r. \] Suppose that, for every $i\in \{1,\ldots,r\}$, the support of $\tau$ is disjoint from the support of $\eta_i$; moreover, suppose that, for each $\omega\in\Omega$, there exists $i\in\{1,\ldots,r\}$ (which may depend upon $\omega$) with $\omega^{\eta_i}=\omega$. Suppose, in addition, $\tau\notin G$. Now, writing $\Omega=\{\omega_1,\dots, \omega_t\}$, observe that \[ ((\omega_1,\omega_2,\dots, \omega_t), (\omega_1^{\tau},\omega_2^{\tau}, \ldots,\omega_t^{\tau})) \] is a non-binary witness. Thus the action of $G$ on $\Omega$ is strongly non-binary.} \end{example} The following lemma, taken from~\cite{ghs_binary}, shows a crucial property of the notion of strongly non-binary action: it allows one to argue ``inductively'' on set-stabilizers (see also Lemma~\ref{l: again0}). \begin{lem}\label{l: again12} Let $\Omega$ be a $G$-set and let $\Lambda \subseteq \Omega$. If $G^\Lambda$ is strongly non-binary, then $G$ is not binary in its action on $\Omega$. \end{lem} \begin{proof}Write $\Lambda:=\{\lambda_1,\ldots,\lambda_\ell\}$ and assume that $G^\Lambda$ is strongly non-binary. Then there exists $\sigma\in \Sym(\ell)$ with $I:=(\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ and $J:=(\lambda_{1^\sigma},\lambda_{2^\sigma},\ldots,\lambda_{\ell^\sigma})$ a non-binary witness for the action of $G_\Lambda$ on $\Lambda$. Now, observe that $(I,J)$ is also a non-binary witness for the action of $G$ on $\Omega$ because any (putative) element $g$ of $G$ mapping $I$ to $J$ fixes $\Lambda$ set-wise and hence $g\in G_\Lambda$. 
\end{proof} Next we need an observation, made first in \cite{ghs_binary}, that the existence of a strongly non-binary witness is related to the classic concept of $2$-\emph{closure} introduced by Wielandt~\cite{Wielandt}: given a permutation group $G$ on $\Omega$, the \emph{$2$-closure of $G$} is the set $$G^{(2)}:=\{\sigma\in \Sym(\Omega)\mid \forall (\omega_1,\omega_2)\in \Omega\times \Omega, \textrm{there exists }g_{\omega_1\omega_2}\in G \textrm{ with }\omega_1^\sigma=\omega_1^{g_{\omega_1\omega_2}}, \omega_2^\sigma=\omega_2^{g_{\omega_1\omega_2}}\},$$ that is, $G^{(2)}$ is the largest subgroup of $\Sym(\Omega)$ having the same orbitals as $G$. The group $G$ is said to be $2$-\emph{closed} if and only if $G=G^{(2)}$. \begin{lem}\label{l: fedup} Let $G$ be a permutation group on $\Omega$. Then $G$ is strongly non-binary if and only if $G$ is not $2$-closed. \end{lem} \begin{proof} Write $\Omega:=\{\omega_1,\ldots,\omega_t\}$. If $G$ is not $2$-closed, then there exists $\sigma\in G^{(2)}\setminus G$. Set $I:=(\omega_1,\ldots,\omega_t)$ and $J:=I^\sigma=(\omega_1^\sigma,\ldots,\omega_t^\sigma)$; observe that $I$ and $J$ are $2$-subtuple complete (because $\sigma\in G^{(2)}$) and are not $G$-conjugate (because $\sigma\notin G$). Thus $(I,J)$ is a strongly non-binary witness. The converse is similar. \end{proof} Our next two lemmas make use of Lemma~\ref{l: again12} and Example~\ref{ex: snba2} to yield easy criteria for showing that a permutation group is not binary. \begin{lem}\label{l: M2} Let $G$ be a transitive permutation group on $\Omega$, let $\alpha\in \Omega$ and let $p$ be a prime with $p$ dividing both $|\Omega|$ and $|G_\alpha|$ and with $p^2$ not dividing $|G_\alpha|$. Suppose that $G$ contains an elementary abelian $p$-subgroup $V=\langle g,h\rangle$ with $g\in G_\alpha$, with $h$ and $gh$ conjugate to $g$ via $G$. Then $G$ is not binary. 
\end{lem} \begin{proof} Let $g\in G_\alpha$ and let $h\in g^G$ with $\langle g,h\rangle$ an elementary abelian $p$-subgroup of $G$ of order $p^2$ with $gh$ also conjugate to $g$ via $G$. In particular, $h=g^x$, for some $x\in G$. Write $\alpha_0:=\alpha$ and $\alpha_{p}:=\alpha^x$. Since $g\in G_{\alpha_0}$ and $h\in G_{\alpha_{p}}$ commute, $\alpha_0^{h^i}$ is fixed by $g$ and $\alpha_{p}^{g^i}$ is fixed by $h$, for every $i$. Write $\alpha_i:=\alpha_0^{h^i}$ and $\alpha_{p+i}:=\alpha_p^{g^i}$, for every $i\in \{0,\ldots,p-1\}$. Moreover, $g$ acts as a $p$-cycle on $\{\alpha_p,\ldots,\alpha_{2p-1}\}$ and $h$ acts as a $p$-cycle on $\{\alpha_0,\ldots,\alpha_{p-1}\}$. Since $gh$ is conjugate to $g$ via an element of $G$, there exists $y\in G$ with $gh=g^y$. Write $\alpha_{2p}=\alpha^{y}$. Observe that $gh$ fixes $(\alpha_{2p})^{g^{-i}}=\alpha_{2p}^{h^i}$ for every $i$. Write $\alpha_{2p+i}:=\alpha_{2p}^{g^i}$, for every $i\in \{0,\ldots,p-1\}$. Thus $g$ and $h$ act as inverse $p$-cycles on $\{\alpha_{2p},\ldots,\alpha_{3p-1}\}$. Write $\Lambda:=\{\alpha_0,\ldots,\alpha_{3p-1}\}$. We have \begin{align*} g^\Lambda&=(\alpha_{p},\ldots,\alpha_{2p-1})(\alpha_{3p-1},\ldots,\alpha_{2p}),\\ h^\Lambda&=(\alpha_{0},\ldots,\alpha_{p-1})(\alpha_{2p},\ldots,\alpha_{3p-1}),\\ (gh)^\Lambda&=(\alpha_{0},\ldots,\alpha_{p-1})(\alpha_{p},\ldots,\alpha_{2p-1}).\\ \end{align*} If $G^\Lambda$ is strongly non-binary, then $G$ is not binary by Lemma~\ref{l: again12}. Assume that $G^\Lambda$ is not strongly non-binary. Then, in view of Example~\ref{ex: snba2}, there exists $f\in G$ with $f^\Lambda=(\alpha_p,\ldots,\alpha_{2p-1}).$ This is a contradiction, because by hypothesis $|G_\alpha|$ is not divisible by $p^2$ but $\langle g,f\rangle$ has order divisible by $p^2$ and fixes $\alpha_0=\alpha$. 
\end{proof} \begin{lem}\label{l: added} Let $G$ be a permutation group on $\Omega$ and suppose that $g$ and $h$ are elements of $G$ of order $p$, where $p$ is a prime such that $g$, $h$ and $gh^{-1}$ are all $G$-conjugate. Suppose that $V=\langle g, h\rangle$ is elementary-abelian of order $p^2$. Suppose, finally, that $G$ does not contain any elements of order $p$ that fix more points of $\Omega$ than $g$. If $|\Fix(V)|<|\Fix(g)|$, then $G$ is not binary. \end{lem} We remark that there are well-known formulae that we can use to calculate $|\Fix(V)|$ and $|\Fix(g)|$ when $G$ is transitive (see for instance~\cite[Lemma~$2.5$]{LiebeckSaxl}). Suppose that $M$ is the stabilizer of a point in $\Omega$; then we have \begin{equation}\label{e: fora} |\Fix_\Omega(g)| = \frac{|\Omega|\cdot |M\cap g^G|}{|g^G|},\qquad |\Fix_\Omega(V)| = \frac{|\Omega|\cdot |\{V^g\mid g\in G,V^g\le M\}|}{|V^G|}. \end{equation} \begin{proof} We let \[ \Lambda:=\Fix(g)\cup\Fix(h)\cup\Fix(gh^{-1}). \] Observe, first, that $\Lambda$, $\Fix(g)$, $\Fix(h)$ and $\Fix(gh^{-1})$ are $g$-invariant and $h$-invariant. Observe, second, that \[ \Fix(g)\cap\Fix(h)=\Fix(g)\cap\Fix(gh^{-1})=\Fix(h)\cap\Fix(gh^{-1})=\Fix(V). \] Write $\tau_1$ for the permutation induced by $g$ on $\Fix(gh^{-1})$, $\tau_2$ for the permutation induced by $g$ on $\Fix(h)$, and $\tau_3$ for the permutation induced by $h$ on $\Fix(g)$ (observe that the $\tau_i$'s are non-trivial as $gh^{-1}$, $h$ and $g$ are conjugate). Since $|\Fix(V)|<|\Fix(g)|$, we conclude that $\tau_1,\tau_2$ and $\tau_3$ are disjoint non-trivial permutations. What is more, $g$ induces the permutation $\tau_1\tau_2$ on $\Lambda$, while $h$ induces the permutation $\tau_1\tau_3$ on $\Lambda$. In view of Example~\ref{ex: snba2}, $G^\Lambda$ is strongly non-binary provided there is no element $f\in G_\Lambda$ that induces the permutation $\tau_1$.
Arguing by contradiction, if such an element $f$ exists, then $f$ has order divisible by $p$ and $f^{o(f)/p}$ is a $p$-element fixing more points than $g$, which is a contradiction. Thus $G^\Lambda$ is strongly non-binary and $G$ is not binary by Lemma~\ref{l: again12}. \end{proof} For the rest of this section we assume that $G$ is transitive. Given $\ell\in\mathbb{N}\setminus\{0\}$, we denote by $\Omega^{(\ell)}$ the subset of the Cartesian product $\Omega^\ell$ consisting of the $\ell$-tuples $(\omega_1,\ldots,\omega_\ell)$ with $\omega_i\ne \omega_j$, for every two distinct elements $i,j\in \{1,\ldots,\ell\}$. We denote by $r_\ell(G)$ the number of orbits of $G$ on $\Omega^{(\ell)}$. Let $\pi:G\to\mathbb{N}$ be the permutation character of $G$, that is, $\pi(g)=\fix_\Omega(g)$ where $\fix_{\Omega}(g)$ is the cardinality of the fixed point set $\Fix_{\Omega}(g):=\{\omega\in \Omega\mid \omega^g=\omega\}$ of $g$. From the Orbit Counting Lemma, we have \begin{align*} r_\ell(G)&=\frac{1}{|G|}\sum_{g\in G}\fix_\Omega(g)(\fix_\Omega(g)-1)\cdots (\fix_\Omega(g)-(\ell-1))\\ &=\langle \pi(\pi-1)\cdots (\pi-(\ell-1)),1\rangle_G, \end{align*} where $1$ is the principal character of $G$ and $\langle \cdot,\cdot\rangle_G$ is the natural Hermitian product on the space of $\mathbb{C}$-class functions of $G$. \begin{lem}\label{l: characters} If $G$ is transitive and binary, then $r_\ell(G)\le r_2(G)^{\ell(\ell-1)/2}$ for each $\ell\in\mathbb{N}$. \end{lem} Note that this lemma is, in effect, an immediate consequence of the fact that, for a binary action, the orbits on pairs ``determine'' orbits on $\ell$-tuples. Thus, to uniquely determine the orbit of a particular $\ell$-tuple, it is enough to specify the orbits of all $\binom{\ell}{2}$ pairs making up the $\ell$-tuple. \begin{proof} We write $r_2:=r_2(G)$ and $r_\ell:=r_\ell(G)$ and we assume that $r_\ell>r_2^{(\ell-1)\ell/2}$ for some $\ell\in\mathbb{N}$. Clearly, $\ell> 2$.
Let $$(\omega_{1,1},\ldots,\omega_{1,\ell}),\ldots,(\omega_{r_\ell,1},\ldots,\omega_{r_\ell,\ell})$$ be a family of representatives for the $G$-orbits on $\Omega^{(\ell)}$. From the pigeon-hole principle, at least $r_\ell/r_2$ of these elements have the first two coordinates in the same $G$-orbit. Formally, there exists $\kappa\in\mathbb{N}$ with $\kappa\ge r_\ell/r_2$ and a subset $\{i_1,\ldots,i_\kappa\}$ of $\{1,\ldots,r_\ell\}$ of cardinality $\kappa$ such that the $\kappa$ pairs $$(\omega_{i_1,1},\omega_{i_1,2}),\ldots,(\omega_{i_\kappa,1},\omega_{i_\kappa,2})$$ are in the same $G$-orbit. By considering all possible pairs of coordinates, this argument can be easily generalized. Indeed, from the pigeon-hole principle, there exists $\kappa$ with $\kappa\ge r_\ell/r_2^{(\ell-1)\ell/2}>1$ and a subset $\{i_1,\ldots,i_\kappa\}$ of $\{1,\ldots,r_\ell\}$ of cardinality $\kappa$ such that, for each $1\le u<v\le \ell$, the $\kappa$ pairs $$(\omega_{i_1,u},\omega_{i_1,v}),\ldots,(\omega_{i_\kappa,u},\omega_{i_\kappa,v})$$ are in the same $G$-orbit. In other words, the $\ell$-tuples $$(\omega_{i_1,1},\ldots,\omega_{i_1,\ell}),\ldots,(\omega_{i_\kappa,1},\ldots,\omega_{i_\kappa,\ell})$$ are $2$-subtuple complete. Since $G$ is binary, these $\ell$-tuples must be in the same $G$-orbit, contradicting $\kappa>1$. \end{proof} Observe that when $r_2(G)=1$, that is, $G$ is $2$-transitive, Lemma~\ref{l: characters} yields $r_\ell(G)=1$ for every $\ell\in \{2,\ldots,|\Omega|\}$. Therefore $G=\Sym(\Omega)$ is the only $2$-transitive binary group. \begin{lem}\label{l: again0} Let $G$ be transitive, let $\alpha$ be a point of $\Omega$ and let $\Lambda\subseteq \Omega$ be a $G_\alpha$-orbit. If $G$ is binary, then $G_\alpha^\Lambda$ is binary. In particular, if $g\in G$ and the action of $G_\alpha$ on the right cosets of $G_\alpha\cap G_\alpha^g$ in $G_\alpha$ is not binary, then $G$ is not binary. \end{lem} \begin{proof}Assume that $G$ is binary. 
Let $\ell\in\mathbb{N}$ and let $I:=(\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ and $J:=(\lambda_1',\lambda_2',\ldots,\lambda_\ell')$ be two tuples in $\Lambda^\ell$ that are $2$-subtuple complete for the action of $G_\alpha$ on $\Lambda$. Clearly, $I_0:=(\alpha,\lambda_1,\lambda_2,\ldots,\lambda_\ell)$ and $J_0:=(\alpha,\lambda_1',\lambda_2',\ldots,\lambda'_\ell)$ are $2$-subtuple complete for the action of $G$ on $\Omega$. As $G$ is binary, $I_0$ and $J_0$ are in the same $G$-orbit; hence $I$ and $J$ are in the same $G_\alpha$-orbit. From this we deduce that $G_\alpha^\Lambda$ is binary. Suppose now that $g\in G$ and that the action of $G_\alpha$ on the right cosets of $G_\alpha\cap G_\alpha^g$ in $G_\alpha$ is not binary. Set $\beta:=\alpha^g$ and $\Lambda:=\beta^{G_\alpha}$. Now $\Lambda$ is a $G_\alpha$-orbit contained in $\Omega\setminus \{\alpha\}$ and the action of $G_\alpha$ on $\Lambda$ is permutation isomorphic to the action of $G_\alpha$ on the right cosets of $G_\alpha\cap G_\beta=G_\alpha\cap G_\alpha^g$ in $G_\alpha$. Therefore, $G_\alpha^{\Lambda}$ is not binary and hence $G$ is not binary. \end{proof} \section{On computation}\label{s: preliminaries2} In this section we explain how to make use of the lemmas given in the previous section in a computational setting. The computational problem we are faced with is as follows: given a transitive action of a group $G$ on a set $\Omega$, we wish to show that the action is non-binary; in some cases we will require more, namely that the action is strongly non-binary. If the set $\Omega$ is small enough, then we can often exhibit $G$ as a permutation group in the computer algebra package {\tt magma} and compute explicitly; when $\Omega$ gets too large, then this may be infeasible and we may know only the isomorphism type of $G$ and the isomorphism type of a point-stabilizer. 
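Before turning to the tests themselves, we note that the quantity $r_\ell(G)$ of Lemma~\ref{l: characters} can be computed from fixed-point counts alone. The sketch below does this in plain Python (rather than \texttt{GAP} or \texttt{magma}) for the cyclic group of order $5$ acting regularly, a toy stand-in chosen purely for brevity:

```python
from math import prod

# Toy stand-in: the cyclic group of order 5 acting on {0,...,4},
# each element stored as a tuple of images.
n = 5
G = [tuple((i + k) % n for i in range(n)) for k in range(n)]

def fix(g):
    """pi(g): the number of fixed points of g."""
    return sum(1 for i in range(n) if g[i] == i)

def r(ell):
    """r_ell(G): orbits on distinct ell-tuples, by the Orbit Counting Lemma."""
    return sum(prod(fix(g) - i for i in range(ell)) for g in G) // len(G)

# Test 1 looks for an ell with r_ell > r_2^(ell(ell-1)/2); any such
# violation certifies that the action is not binary.
r2 = r(2)
violations = [ell for ell in range(3, n + 1)
              if r(ell) > r2 ** (ell * (ell - 1) // 2)]
print(r2, r(3), violations)  # 4 12 [] -- no violation, so inconclusive here
```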
\subsection{Test 1: using Lemma~\ref{l: characters}.} In some cases, Lemma~\ref{l: characters} is very efficient for dealing with primitive actions of almost simple groups $G$ with socle a sporadic simple group. In particular, whenever the permutation character of $G$ is available in~\texttt{GAP}~\cite{GAP4} or in~\texttt{magma}~\cite{magma}, we can simply check the inequality in Lemma~\ref{l: characters} directly. For instance, using this method it is easy to verify that each faithful primitive action of $M_{11}$ is non-binary. For practical purposes, it is worth mentioning that apart from \begin{itemize} \item the Monster, \item the action of the Baby Monster on the cosets of a maximal subgroup of type $(2^2\times F_4(2)):2$, \end{itemize} each permutation character of each primitive permutation representation of an almost simple group with socle a sporadic simple group is available in \texttt{GAP} via the package ``The GAP Character Table Library''. Therefore, for the proof of Theorem~\ref{t: sporadic}, we can quickly and easily use Lemma~\ref{l: characters} except for the Monster. To give a rough idea of the time to perform this test, in the Baby Monster (except for the action on the cosets of a maximal subgroup of type $(2^2\times F_4(2)):2$), it takes less than two minutes to perform this check. (The permutation character of the Baby Monster $G$ on the cosets of a maximal subgroup $M$ of type $(2^2\times F_4(2)):2$ is missing from the \texttt{GAP} library because the conjugacy fusion of some of the elements of $M$ in $G$ remains a mystery: this information is vital for computing the permutation character.) For reasons that will become clearer later, for the proof of Theorem~\ref{t: sporadic}, we need to prove the non-binariness of permutation groups $G\le \Sym(\Omega)$ that are not necessarily almost simple, let alone having socle a sporadic simple group.
When $|\Omega|$ is relatively small (for practical purposes, here relatively small means at most $10^9$), we can afford to compute the permutation character and check the inequality in Lemma~\ref{l: characters}. \subsection{Test 2: using Lemma~\ref{l: fedup}.} By connecting the notion of strong non-binariness to 2-closure, Lemma~\ref{l: fedup} yields an immediate computational dividend: there are built-in routines in \texttt{GAP}~\cite{GAP4} and \texttt{magma}~\cite{magma} to compute the $2$-closure of a permutation group. Thus if $\Omega$ is small enough, say $|\Omega|\le 10^6$, then we can easily check whether or not the group $G$ is $2$-closed. Hence we can ascertain whether or not $G$ is strongly non-binary. \subsection{Test 3: a direct analysis} The next test we discuss is feasible once again provided $|\Omega|\le 10^6$. It simply tests whether or not $2$-subtuple completeness implies $3$-subtuple completeness, and the procedure is as follows: we fix $\alpha\in \Omega$, we compute the orbits of $G_\alpha$ on $\Omega\setminus\{\alpha\}$ and we select a set of representatives $\mathcal{O}$ for these orbits. Then, for each $\beta\in \mathcal{O}$, we compute the orbits of $G_{\alpha}\cap G_{\beta}$ on $\Omega\setminus\{\alpha,\beta\}$ and we select a set of representatives $\mathcal{O}_\beta$. Then, for each $\gamma\in \mathcal{O}_\beta$, we compute $\gamma^{G_\alpha}\cap \gamma^{G_\beta}$. Finally, for each $\gamma'\in \gamma^{G_\alpha}\cap \gamma^{G_\beta}$, we test whether the two triples $(\alpha,\beta,\gamma)$ and $(\alpha,\beta,\gamma')$ are $G$-conjugate. If the answer is ``no'', then $G$ is not binary because by construction $(\alpha,\beta,\gamma)$ and $(\alpha,\beta,\gamma')$ are $2$-subtuple complete. In particular, in this circumstance, we can break all the ``for loops'' and deduce that $G$ is not binary.
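The nested loops of Test 3 can be sketched in a few lines. The fragment below is a brute-force plain-Python stand-in for the \texttt{magma} machinery (stabilizers and orbits computed by enumeration, so only tiny $\Omega$ is feasible), applied to $\mathrm{Alt}(4)$ acting on $4$ points, for which the test finds a non-binary witness:

```python
from itertools import permutations

def parity(p):
    """Parity (number of inversions mod 2) of a permutation tuple."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p))) % 2

def stab(G, pts):
    """Point-wise stabilizer in G of the listed points."""
    return [g for g in G if all(g[p] == p for p in pts)]

def orbit(H, x):
    return {h[x] for h in H}

def orbit_reps(H, domain):
    """One representative per H-orbit on domain."""
    reps, seen = [], set()
    for x in domain:
        if x not in seen:
            reps.append(x)
            seen |= orbit(H, x)
    return reps

def two_implies_three(G, Omega):
    """Test 3 for a transitive group G: does 2-subtuple completeness
    imply 3-subtuple completeness?  Returns False on finding a witness."""
    alpha = Omega[0]
    G_a = stab(G, [alpha])
    for beta in orbit_reps(G_a, [x for x in Omega if x != alpha]):
        G_ab = stab(G, [alpha, beta])
        G_b = stab(G, [beta])
        rest = [x for x in Omega if x not in (alpha, beta)]
        for gamma in orbit_reps(G_ab, rest):
            for gamma2 in orbit(G_a, gamma) & orbit(G_b, gamma):
                # (alpha,beta,gamma) and (alpha,beta,gamma2) are 2-subtuple
                # complete by construction; now test 3-subtuple completeness.
                if not any((g[alpha], g[beta], g[gamma]) == (alpha, beta, gamma2)
                           for g in G):
                    return False
    return True

Omega = list(range(4))
A4 = [p for p in permutations(Omega) if parity(p) == 0]
print(two_implies_three(A4, Omega))  # False: Alt(4) on 4 points is not binary
```

(By contrast, the same routine returns \texttt{True} for the full symmetric group on $4$ points, consistent with Conjecture~\ref{conj: cherlin}.)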
If the answer is ``yes'' for every $\beta,\gamma,\gamma'$, then we cannot deduce that $G$ is binary, but we can keep track of these cases for a deeper analysis. We observe that, if the answer is ``yes'' for every $\beta,\gamma,\gamma'$, then $2$-subtuple completeness implies $3$-subtuple completeness. \subsection{Test 4: using Lemma~\ref{l: again0}.} The next test is particularly useful in cases where $\Omega$ is very large, since its computational complexity is independent of $|\Omega|$. Let us suppose that $G$ and its subgroup $M$ are stored in a library as abstract groups (or as matrix groups or as permutation groups). When $|G:M|$ is too large, it is impractical (and sometimes impossible) to construct $G$ as a permutation group on the coset space $\Omega:=G/M$ with point stabilizer $M$. However, using Lemma~\ref{l: again0}, we can still prove that $G$ acting on $\Omega$ is non-binary: all we need is $g\in G$ such that the action of $M$ on the cosets of $M\cap M^g$ in $M$ is non-binary. Now, for carefully chosen $g$, $|M:M\cap M^g|$ might be much smaller than $|G:M|$ and we can use one of the previous tests to ascertain whether or not $M$ in its action on $M/(M\cap M^g)$ is binary. \subsection{Test 5: a new lemma} Our final test requires an extra lemma which we include here, rather than in the earlier section, as its computational aspect is inherent in its very statement. \begin{lem}\label{l: alot} Let $G$ be a primitive group on a set $\Omega$, let $\alpha$ be a point of $\Omega$, let $M$ be the stabilizer of $\alpha$ in $G$ and let $d$ be an integer with $d\ge 2$.
Suppose $M\ne 1$ and, for each transitive action of $M$ on a set $\Lambda$ satisfying: \begin{enumerate} \item $|\Lambda|>1$, and \item every composition factor of $M$ is isomorphic to some section of $M^\Lambda$, and \item either $M_{(\Lambda)}=1$ or, given $\lambda\in \Lambda$, the stabilizer $M_\lambda$ has a normal subgroup $N$ with $N\ne M_{(\Lambda)}$ and $N\cong M_{(\Lambda)}$, and \item $M$ is binary in its action on $\Lambda$, \end{enumerate} we have that $d$ divides $|\Lambda|$. Then either $d$ divides $|\Omega|-1$ or $G$ is not binary. \end{lem} \begin{proof} Suppose that $G$ is binary. Since $\{\beta\in\Omega\mid \beta^m=\beta,\forall m\in M\}$ is a block of imprimitivity for $G$ and since $G$ is primitive, we obtain that either $M$ fixes each point of $\Omega$ or $\alpha$ is the only point fixed by $M$. The former possibility is excluded because $M\neq 1$ by hypothesis. Therefore $\alpha$ is the only point fixed by $M$. Let $\Lambda\subseteq\Omega\setminus\{\alpha\}$ be an $M$-orbit. Thus $|\Lambda|>1$ and (1) holds. Since $G$ is a primitive group on $\Omega$, from~\cite[Theorem~3.2C]{dixon_mortimer}, we obtain that every composition factor of $M$ is isomorphic to some section of $M^\Lambda$ and hence (2) holds. From Lemma~\ref{l: again0}, the action of $M$ on $\Lambda$ is binary and hence (4) holds. Let now $\lambda\in\Lambda$ and consider the orbital graph $\Gamma:=(\alpha,\lambda)^G$. Observe that $\Gamma$ is connected because $G$ is primitive. Let $g\in G$ with $\alpha^g=\lambda$. Clearly, $\Lambda$ is the set of out-neighbors of $\alpha$ in $\Gamma$ and $\Lambda':=\Lambda^g$ is the set of out-neighbors of $\alpha^g=\lambda$ in $\Gamma$. Set $N:=(G_\lambda)_{(\Lambda')}$. Clearly, $(G_{\alpha})_{(\Lambda)}=M_{(\Lambda)}$ and $(G_{\alpha^g})_{(\Lambda^g)}=(G_\lambda)_{(\Lambda')}=N$ are isomorphic because they are $G$-conjugate via the element $g$. 
Moreover, $M_{(\Lambda)}=(G_\alpha)_{(\lambda^{G_\alpha})}$ is normalized by $G_\alpha$ and, similarly, $N$ is normalized by $G_\lambda$: therefore they are both normalized by $G_\alpha\cap G_\lambda=M\cap G_\lambda=M_\lambda$. If $M_{(\Lambda)}$ and $N$ are equal, an easy connectedness argument yields that $M_{(\Lambda)}=1$. Therefore (3) also holds. Since the four hypotheses in the statement of this lemma hold for the action of $M=G_\alpha$ on its $G_\alpha$-orbit $\Lambda$, we get that $d$ divides $|\Lambda|$. Since this argument does not depend on the $G_\alpha$-orbit $\Lambda\subseteq\Omega\setminus\{\alpha\}$, we obtain that $\Omega\setminus\{\alpha\}$ has cardinality divisible by $d$. Thus $|\Omega|-1$ is divisible by $d$. \end{proof} When it comes to implementing Lemma~\ref{l: alot} on a computer, it is important to observe that we do {\bf not} need to construct the embedding of $M=G_\alpha$ in $G$; indeed, we do not need the group $G$ stored in our computer at all. Instead we need only the index $|G:M|=|\Omega|$ and the abstract group $M$ (given as a group of matrices, or as a permutation group, or as a finitely presented group). Given $|\Omega|$ and $M$, we may choose a prime $p$ (typically $p=2$) with $p$ not dividing $|\Omega|-1$, and we construct all the transitive permutation representations of $M$ of degree greater than $1$ and relatively prime to $p$ satisfying (1),~(2) and~(3). If none of these permutation representations is binary (and we can use any of Tests 1 to 4 to test this), we infer that every transitive permutation representation of $M$ of degree greater than $1$ satisfying (1),~(2),~(3) and~(4) has degree divisible by $p$. Now, from Lemma~\ref{l: alot}, we get that $G$ in its action on the set of right cosets of $M$ in $G$ is not binary because $p$ does not divide $|\Omega|-1$. We give an explicit example to show how easily Lemma~\ref{l: alot} can be applied. The Monster $G$ has a maximal subgroup $M$ isomorphic to $\mathrm{PGL}_2(19)$.
The index of $M$ in $G$ is $$118131202455338139749482442245864145761075200000000\sim 10^{50}$$ and we can easily observe that this number is even. After implementing Lemma~\ref{l: alot} in a computer, it takes the blink of an eye to prove that each permutation representation of $M$ of odd degree greater than $1$ is non-binary. Thus $G$ acting on the cosets of $M$ is non-binary. Observe that, besides $|G:M|$ and the isomorphism class of $M$, no information about $G$ is needed. \section{The non-monster groups}\label{s: nonmonster} The centre-piece of this section is Table~\ref{t: table1}; it summarises the results of applying the tests described in the previous section to all almost simple groups with a sporadic socle, barring the Monster. Table~\ref{t: table1} consists of two columns: the first column lists all of the almost simple groups $G$ with socle a sporadic simple group (recall that we include the Tits group $^{2}F_4(2)'$ in the list of sporadic groups). In the second column, we list all pairs $(M,\circ)$, where $M$ is a maximal subgroup of $G$ with the property that the action of $G$ on the set $G/M$ of right cosets of $M$ in $G$ satisfies Lemma~\ref{l: characters} (in other words, the action is not excluded by Test 1, and hence is a potentially binary action). We use the ATLAS~\cite{ATLAS} notation for the group $M$. Now the symbol $\circ$ is either $\infty$, or a prime $p$, or ``?''. We write $\circ=\infty$ if we have proved the non-binariness of $G$ in its action on $G/M$ using Tests 2 or 3; we write $\circ=p$ if we have proved the non-binariness of $G$ in its action on $G/M$ using Test 5 applied to the prime $p$; and we write $\circ=?$ if both methods have failed. The symbol ``$-$" in the second column means that each primitive permutation representation of $G$ is shown to be not binary via Lemma~\ref{l: characters} (Test~1).
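The arithmetic in the $\mathrm{PGL}_2(19)$ example above is easy to check directly. The following sketch (in Python, using the standard ATLAS orders of the Monster and of $\mathrm{PGL}_2(19)$) confirms that the index $|G:M|$ is even, so that $p=2$ does not divide $|\Omega|-1$.

```python
# Standard orders: the Monster, and |PGL_2(19)| = 19*18*20 = 6840.
MONSTER_ORDER = 808017424794512875886459904961710757005754368000000000
PGL_2_19_ORDER = 19 * 18 * 20

index, remainder = divmod(MONSTER_ORDER, PGL_2_19_ORDER)
assert remainder == 0        # |PGL_2(19)| divides |G|
assert index % 2 == 0        # the index |G:M| = |Omega| is even ...
assert (index - 1) % 2 == 1  # ... so p = 2 does not divide |Omega| - 1
```

In particular, `index` agrees with the $51$-digit number displayed above.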
\begin{table}[!ht] \begin{tabular}{|c|c|}\hline Group & Outcome of tests \\\hline $M_{11}$&$-$\\ $M_{12}$&$-$\\ $M_{12}.2$&$-$\\ $M_{22}$&$-$\\ $M_{22}.2$&$-$\\ $M_{23}$&$-$\\ $M_{24}$&$(L_2(7),\infty)$\\ $J_{1}$&$(D_6\times D_{10},\infty), (7:6,\infty)$\\ $J_{2}$&$(A_5,\infty)$\\ $J_{2}.2$&$(S_5,\infty)$\\ $J_{3}$&$-$\\ $J_{3}.2$&$(19:18,3)$\\ $J_4$&$(M_{22}:2,2)$, $(11_+^{1+2}:(5\times 2S_4), 2)$, $(L_2(32):5,11)$, $(L_2(23):2,2)$\\ & $(U_3(3),2)$, $(29:28,2)$, $(43:14,7)$, $(37:12,2)$\\ $^{2}F_4(2)'$&$-$\\ $^{2}F_4(2)$&$(M,2)$ where $M$ has order $156$\\ $Suz$&$(A_7,2)$, $(L_2(25),2)$\\ $Suz.2$&$(S_7,7)$\\ $McL$&$-$\\ $McL.2$&$-$\\ $HS$&$(M_{22},2)$\\ $HS.2$&$(M_{22}:2,2)$\\ $Co_3$&$(A_4\times S_5,?)$\\ $Co_2$&$(5_+^{1+2}:4S_4,2)$\\ $Co_1$&$(A_9\times S_3,3)$, $((A_7\times L_2(7)):2,2)$, $((D_{10}\times (A_5\times A_5).2).2,2)$\\ & $(5_+^{1+2}:\mathrm{GL}_2(5),2)$, $(5^3:(4\times A_5).2,2)$, $(5^2:4A_4,2)$, $(7^2:(3\times 2A_4),2)$\\ $He$&$(5^2:4A_4,2)$\\ $He.2$&$-$\\ $Fi_{22}$&$-$\\ $Fi_{22}.2$&$-$\\ $Fi_{23}$&$(L_2(23),2)$\\ $Fi_{24}'$&$((A_5\times A_9):2,3)$, $(A_6\times L_2(8):3,2)$, $(7:6\times A_7,7)$, $(U_3(3).2,2)$\\ &$(U_3(3).2,2)$, $(L_2(13).2,2)$, $(L_2(13).2,2)$, $(29:14,7)$\\ $Fi_{24}$&$(S_5\times S_9,3)$, $(S_6\times L_2(8):3,2)$, $(7:6\times S_7,7)$, $(7^{1+2}_+:(6\times S_3).2,2)$, $(29:28,7)$\\ $Ru$&$(L_2(13):2,2)$, $(5:4\times A_5,?)$, $(A_6.2^2,2)$, $(5_+^{1+2}:[2^5],2)$, $(3.A_6.2^2,2)$\\ $O'N$&$(A_7,2)$, $(A_7,2)$\\ $O'N.2$&$(31:30,5)$, $(L_2(7):2,2)$, $(A_6:2_2,2)$\\ $Ly$&$(67:22,11)$, $(37:18,3)$\\ $Th$&$(3^5:2S_6,2)$, $(5_{+}^{1+2}:4S_4,2)$, $(5^2:\mathrm{GL}_2(5),2)$, $(7^2:(3\times 2S_4),2)$\\ & $(L_2(19).2,2)$, $(L_3(3),2)$, $(M_{10}=A_6.2_3,2)$, $(31:15,4)$, $(S_5,5)$\\ $HN$&$(3_+^{1+4}:4A_5,2)$\\ $HN.2$&$-$\\ $B$&$((2^2\times F_4(2)):2,?)$, $(3^{1+8}.2^{1+6}.U_4(2).2,?)$, $((3^2:D_8\times U_4(3).2^2).2,?)$, $(5:4\times HS:2,2)$ \\ &$(3^2.3^3.3^6.(S_4\times 2S_4),?)$, $(S_4\times {^2}F_4(2),2)$, $(S_5\times (M_{22}:2),2)$, 
$((S_6\times (L_3(4):2)).2,2)$, \\& $(5^3:L_3(5),2)$, $(5^{1+4}.2^{1+4}.A_5.4,2)$, $((S_6\times S_6).4,2)$, $((5^2:4S_4)\times S_5,2)$\\ & $(L_2(49).2,2)$, $(L_2(31),2)$, $(M_{11},2)$, $(L_3(3),2)$, $(L_2(17):2,2)$, $(L_2(11):2,2)$, $(47:23,23)$\\ \hline \end{tabular} \caption{Disposing of some of the sporadic simple groups.}\label{t: table1} \end{table} We have made use of the fact that full information on the maximal subgroups for each group in the first column of Table~\ref{t: table1} is available: these are all stored in \texttt{GAP} or in~\texttt{magma}. To be more precise, in each case, either the maximal subgroup $M$ is stored via a generating set (written as words in the standard generators for $G$), or, when such information is not available (for instance, already for some of the maximal subgroups of $Fi_{23}$), the group $M$ is explicitly described (for instance, as a $p$-local subgroup) and hence also in this case it is possible to construct $M$ with a computer. We are now able to prove Theorem~\ref{t: sporadic} for all groups bar the Monster. \begin{prop}\label{p: 1} Let $G$ be an almost simple primitive group with socle a sporadic simple group. If $G$ is binary, then $G$ is the Monster group. \end{prop} \begin{proof} In view of Table~\ref{t: table1}, it suffices to consider the case that $G$ is either $Co_3$, or $Ru$, or $B$: these are the only groups having a ``?'' symbol in one of their rows. We first assume that $G$ is either $Co_3$ or $Ru$; here, in view of Table~\ref{t: table1}, the group $G$ is acting on the cosets of $M=A_4\times S_5$ when $G=Co_3$, or of $M=5:4\times A_5$ when $G=Ru$. Given a Sylow $2$-subgroup $P$ of $M$, in both cases it is easy to verify with \texttt{magma} that there exists $g\in \nor G P$ with $M\cap M^g=P$. When $G=Co_3$, $P$ is of type $2\times 2\times D_4$ and, when $G=Ru$, $P$ is of type $4\times 2\times 2$.
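The orders of these Sylow $2$-subgroups can be confirmed from $|M|$ alone (a quick sketch; here we read $D_4$ as the dihedral group of order $8$, so that $|2\times 2\times D_4|=32$ and $|4\times 2\times 2|=16$):

```python
def two_part(n):
    """Return the largest power of 2 dividing n."""
    p = 1
    while n % 2 == 0:
        n //= 2
        p *= 2
    return p

# |A_4 x S_5| = 12*120 = 1440 and |5:4 x A_5| = 20*60 = 1200.
assert two_part(12 * 120) == 32  # matches |2 x 2 x D_4|
assert two_part(20 * 60) == 16   # matches |4 x 2 x 2|
```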
Another computation shows that the actions of $A_4\times S_5$ on the cosets of $2\times 2\times D_4$, and of $5:4\times A_5$ on the cosets of $4\times 2\times 2$, are not binary. Therefore, $G$ in its action on the cosets of $M$ is not binary by Lemma~\ref{l: again0}. Finally assume that $G$ is the Baby Monster $B$. In view of Table~\ref{t: table1}, $G$ is acting on the cosets of $M$, where $M$ is of one of the following types: $$ (2^2\times F_4(2)):2, \hspace{1cm} 3^{1+8}.2^{1+6}.U_4(2).2, \hspace{1cm} (3^2:D_8\times U_4(3).2^2).2, \hspace{1cm} 3^2.3^3.3^6.(S_4\times 2S_4).$$ Let $\Omega$ be the set of right cosets of $M$ in $G$ and let $\alpha\in \Omega$ with $G_\alpha=M$ (that is, $\alpha$ is the coset $M$). We go through the four remaining cases one at a time. {\bf Suppose that $M\cong (3^2:D_8\times U_4(3).2^2).2$.} Observe that a Sylow $7$-subgroup of $G$ has order $7^2=49$, that $G$ has a unique conjugacy class of elements of order $7$, and that $|M|$ and $|G:M|$ are both divisible by $7$. Then Lemma~\ref{l: M2} implies that $G$ is not binary. \smallskip {\bf Suppose that $M\cong 3^{1+8}.2^{1+6}.U_4(2).2$.} The group $G$ has two conjugacy classes of elements of order $5$, in the ATLAS notation, of type 5A and of type 5B. By computing the permutation character of $G$ via the package ``The GAP Character Table Library'' in GAP, we see that an element of type 5A fixes no point and that an element of type 5B fixes $25000$ points. Observe that $|M|$ is divisible by $5$, but not by $5^2=25$. Moreover, using the ATLAS~\cite{ATLAS}, we see that $G$ contains an elementary abelian $5$-group $V$ of order $5^3$ generated by three elements of type 5B; moreover, the normalizer of $V$ is a maximal subgroup of $G$ of type $5^3:L_3(5)$. In particular, each non-identity $5$-element of $V$ is of type 5B, because $L_3(5)$ acts transitively on the non-zero vectors of $5^3$. Since $|M|$ is not divisible by $25$, we conclude that $\Fix(V)$ is empty.
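The divisibility claim about $|M|$ used here follows at once from the shape of $M$ (a sketch; $|U_4(2)|=25920=2^6\cdot3^4\cdot5$):

```python
# |M| = |3^{1+8}| * |2^{1+6}| * |U_4(2)| * 2, with |U_4(2)| = 25920.
M_ORDER = 3**9 * 2**7 * 25920 * 2

assert M_ORDER % 5 == 0    # |M| is divisible by 5 ...
assert M_ORDER % 25 != 0   # ... but not by 5^2, so V cannot lie in a conjugate of M
```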
Now we apply Lemma~\ref{l: added} to the subgroup $V$ and to an element $g\in V$ of type 5B: since $|\Fix(V)|<|\Fix(g)|$, we conclude that $G$ is not binary. \smallskip {\bf Suppose that $M\cong 3^{2}.3^3.3^6.(S_4\times 2S_4)$.} From the ATLAS~\cite{ATLAS}, we see that $M=\nor G V$, where $V$ is an elementary abelian $3$-subgroup of order $3^2$ with $V\setminus\{1\}$ consisting only of elements of type 3B. For the proof of this case, we refer to~\cite{MR892191} and~\cite{MR1656568}. According to Wilson~\cite[Section~$3$]{MR892191}, there exist three $G$-conjugacy classes of elementary abelian $3$-subgroups of $G$ of order $3^2$ consisting only of elements of type 3B, denoted in~\cite{MR892191} as having type~(a), or~(b), or~(c). Moreover, from~\cite[Proposition~$3.1$]{MR892191}, we see that only for the elementary abelian $3$-groups of type~(a) are the normalizers maximal subgroups of $G$, of shape $3^{2}.3^3.3^6.(S_4\times 2S_4)$. Thus $V$ is of type~(a). Let $V_1,V_2,V_3$ be representatives for the conjugacy classes of elementary abelian $3$-subgroups of $G$ of order $3^2$ consisting only of elements of type 3B. We may assume that $V_1=V$. From~\cite{MR892191} or~\cite{MR1656568}, for $W\in \{V_1,V_2,V_3\}$, the normalizer $\nor G W$ has shape $3^2.3^3.3^6.(S_4\times 2S_4)$, $(3^2\times 3^{1+4}).(2^2\times 2A_4).2$, and $(3^2\times 3^{1+4}).(2\times 2S_4)$, respectively; in \cite{MR892191,MR1656568}, these cases are referred to as type (a), type (b) and type (c), respectively. Next, we consider a maximal subgroup $K$ of $G$ isomorphic to $\mathrm{PSL}_3(3)$. From \cite{MR1656568} (pages 9 and 10 and the discussion therein on the interaction between $K$ and the types~(a),~(b) and~(c)), we infer that $K$ contains a conjugate of $V$. In particular, replacing $K$ by a suitable $G$-conjugate, we may assume that $V\le K$. In particular, $M\cap K=\nor K V$. Take $\Lambda:=\alpha^K$ and observe that $\Lambda$ is a $K$-orbit on $\Omega$ and that the stabilizer of the point $\alpha$ in $K$ is $\nor K V$.
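For orientation, the degree of the coset action arising here is small (a sketch; we take $V_0$ to be the natural elementary abelian $3^2$ inside a point stabilizer of $\mathrm{PSL}_3(3)$ acting on $\mathrm{PG}(2,3)$, with parabolic normalizer $3^2{:}\mathrm{GL}_2(3)$ of order $432$ -- the conjugacy class of $V$ used in the proof may a priori differ):

```python
# |PSL_3(3)| = 5616; a point stabilizer in the action on PG(2,3) has
# shape 3^2:GL_2(3), of order 9*48 = 432, and normalizes the natural 3^2.
L33_ORDER = 5616
N0_ORDER = 9 * 48

degree, rem = divmod(L33_ORDER, N0_ORDER)
assert rem == 0
assert degree == 13  # the coset action of K_0 on Lambda_0 has degree 13
```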
Moreover, since $K$ is maximal in $G$, we get $G_{\Lambda}=K$. We claim that $G^\Lambda=K^\Lambda$ is strongly non-binary, from which it follows that $G$ is not binary by Lemma~\ref{l: again12}. Observe that the action of $K$ on $\Lambda$ is permutation isomorphic to the action of $K$ on the set of right cosets of $\nor K V$ in $K$. Now, we consider the abstract group $K_0=\mathrm{PSL}_3(3)$, we consider an elementary abelian $3$-subgroup $V_0$ of order $9$ of $K_0$, we compute $N_0:=\nor {K_0}{V_0}$ and we consider the action of $K_0$ on the set $\Lambda_0$ of right cosets of $N_0$ in $K_0$. A straightforward computation shows that $K_0$ is not $2$-closed in this action, and hence $K_0$ in its action on $\Lambda_0$ is strongly non-binary by Lemma~\ref{l: fedup}. \smallskip {\bf Suppose that $M\cong (2^2\times F_4(2)):2$.} Here we cannot invoke ``The GAP Character Table Library'' to understand whether $F_4(2)$ contains elements of type 5A or 5B, because the fusion of $M$ in $G$ is (in some cases) still unknown. As we mentioned above, $G$ has two conjugacy classes of elements of order $5$, denoted by 5A and 5B; what is more, the group $F_4(2)$ contains a unique conjugacy class of elements of order $5$. Observe that the centralizer in $G$ of an element of type 5A (respectively 5B) has order $44352000$ (respectively $6000000$). Now, the centralizer in $F_4(2)$ of an element of order $5$ has cardinality $3600$. Since $3600$ does not divide $6000000$, we get that $M$ contains only elements of type 5A; in particular, elements of type 5B do not fix any element of $\Omega$. Using \eqref{e: fora}, we conclude that if $g$ is an element of order $5$ in $M$, then $$|\Fix_\Omega(g)|=\frac{|G|}{|M|}\frac{|M:\cent M g|}{|G:\cent G g|}=\frac{|\cent G g|}{|\cent M g|}=\frac{44352000}{3600\times 4 \times 2}=1540.$$ Now let $V$ be a Sylow $5$-subgroup of $M$ and observe that $V$ has order $5^2$ and that $V\setminus\{1\}$ consists only of elements of type 5A.
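The fixed-point count just obtained is plain integer arithmetic on the centralizer orders quoted above (a sketch):

```python
# Centralizer orders in the Baby Monster, with M = (2^2 x F_4(2)):2.
CENT_G_5A = 44352000   # |C_G(g)| for g in class 5A
CENT_G_5B = 6000000    # |C_G(g)| for g in class 5B
CENT_F42 = 3600        # centralizer of an order-5 element in F_4(2)

assert CENT_G_5B % CENT_F42 != 0          # 3600 does not divide 6000000

fix, rem = divmod(CENT_G_5A, CENT_F42 * 4 * 2)
assert rem == 0
assert fix == 1540                        # |Fix_Omega(g)| as computed above
```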
Referring to~\cite[Section~6]{MR892191}, we see that $G$ contains only one conjugacy class of elementary abelian subgroups of order $25$ for which the non-trivial elements are all of type 5A. Thus $V$ is a representative of this $G$-conjugacy class. Now, Theorem~$6.4$ in~\cite{MR892191} yields $\nor G V\cong 5^2:4S_4 \times S_5$. Appealing to \eqref{e: fora} again, we conclude that $$|\Fix_\Omega(V)|=\frac{|G|}{|M|}\frac{|M:\nor M V|}{|G:\nor G V|}=\frac{|\nor G V|}{|\nor M V|}=\frac{288000}{19200}=15.$$ Now Lemma~\ref{l: added} implies that $G$ is not binary. \end{proof} \section{The Monster}\label{s: monster} In this section we prove Theorem~\ref{t: sporadic} for the Monster. Our proof will break down into several parts, and to ensure that we cover all possibilities we will make use of a recent account of the classification of the maximal subgroups of the sporadic simple groups in~\cite{wilsonArXiv}. From~\cite[Section~3.6]{wilsonArXiv}, we see that the classification of the maximal subgroups of the Monster $G$ is complete except for a few small open cases. In particular, if $M$ is a maximal subgroup of $G$, then either \begin{description} \item[(a)] $M$ is in~\cite[Section~$4$]{wilsonArXiv}, or \item[(b)] $M$ is almost simple with socle isomorphic to $L_2(8)$, $L_2(13)$, $L_2(16)$, $U_3(4)$ or $U_3(8).$ \end{description} From here on $G$ will always denote the Monster group, and $M$ will be a maximal subgroup of $G$. We consider the action of $G$ on cosets of $M$. \subsection{The almost simple subgroups in {\bf (b)}} We begin by applying Test 5 to those groups in category {\bf (b)}. Provided that such a group $M$ is not isomorphic to $L_2(16).2$, we find that, by applying Test 5 with the prime $2$ or $3$, we can immediately show that $G$ in its action on $G/M$ is not binary.
The group $M=L_2(16).2$ is exceptional here: for each prime $p$ dividing $|M|$, there exists a permutation representation of $M$ of degree coprime to $p$ satisfying the four conditions in Lemma~\ref{l: alot}; hence we cannot apply Test~5. We defer the treatment of $L_2(16).2$ to \S\ref{s: snbs} below. From here on we will consider those groups in category {\bf (a)}, as well as the deferred group $L_2(16).2$. \subsection{Constructing a strongly non-binary subset}\label{s: snbs} Our next step is to apply Lemma~\ref{l: M2} to the remaining group, $L_2(16).2$, from category {\bf(b)} and to the groups from category {\bf (a)}. We start with a technical lemma; this is then followed by the statement that we need, Lemma~\ref{l: M3}. \begin{lem}\label{l: M1}Let $G$ be the Monster, let $p\in \{5,7,11\}$ and let $x\in G$ with $o(x)=p$. Then there exists $g\in G$ with $\langle x,x^g\rangle$ elementary abelian of order $p^2$ and with $xx^g$ conjugate to $x$ via an element of $G$. \end{lem} \begin{proof} When $p=11$, there is nothing to prove: $G$ has a unique conjugacy class of elements of order $11$ and a Sylow $11$-subgroup of $G$ is elementary abelian of order $11^2$. When $p\in \{5,7\}$, it is enough to read~\cite[page 234]{ATLAS}: $G$ contains two conjugacy classes of elements of order $p$. Moreover, $G$ contains two elementary abelian $p$-subgroups $V$ and $V'$ both of order $p^2$, with $V$ generated by two elements of type pA and with $V'$ generated by two elements of type pB. Moreover, $\nor G V$ and $\nor G {V'}$ act transitively on the non-identity elements of $V$ and of $V'$, respectively. This lemma can also be easily deduced from~\cite{wilsonoddlocal}. \end{proof} \begin{lem}\label{l: M3} Let $G$ be the Monster and let $M$ be a maximal subgroup of $G$. If $M$ is as in the first column of Table~$\ref{t: table2'}$, then the action of $G$ on the right cosets of $M$ in $G$ is not binary. 
\end{lem} Note that the final line of the table is the remaining group from category {\bf (b)}; hence, once this lemma is disposed of, we only deal with groups from category {\bf (a)}. \begin{proof} It suffices to compare $|G:M|$ with $|M|$ and apply Lemmas~\ref{l: M1} and~\ref{l: M2}. For simplicity, we have highlighted in the second column of Table~\ref{t: table2'} the prime $p$ that we are using to apply Lemma~\ref{l: M2}. \end{proof} \begin{table}[!ht] \begin{tabular}{|c|c|}\hline Maximal Subgroup &Prime\\\hline $2.B$&$11$\\ $2^{1+24}.Co_1$&11\\ $3.Fi_{24}$&$11$\\ $2^{2}.{^{2}E_6(2)}.S_3$&11\\ $2^{10+16}.O_{10}^+(2)$&7\\ $2^{2+11+22}.(M_{24}\times S_3)$&7\\ $3^{1+12}.2Suz.2$&7\\ $2^{5+10+20}.(S_3\times L_5(2))$&7\\ $2^{3+6+12+18}.(L_3(2)\times 3S_6)$&7\\ $3^8.O_8^{-}(3).2_3$&7\\ $(D_{10}\times HS).2$&7\\ $(3^2:2\times O_8^+(3)).S_4$&7\\ $3^{2+5+10}.(M_{11}\times 2S_4)$&11\\ $5^{1+6}:2J_2:4$&$7$\\ $(A_5\times A_{12}):2$&7\\ $(A_5\times U_3(8):3_1):2$&7\\ $(L_3(2)\times S_4(4):2).2$&7\\ $(5^2:[2^4]\times U_5(5)).S_3$&7\\ $7^{1+4}:(3\times 2S_7)$&5\\\hline $L_2(16).2$ & 5 \\\hline \end{tabular} \caption{Primitive actions of the Monster for Lemma~\ref{l: M3}.}\label{t: table2'} \end{table} \subsection{Using Test 5} We next apply Test 5 to the remaining maximal subgroups of $G$. The statement that we need is the following. \begin{lem}\label{l: M4} Let $G$ be the Monster and let $M$ be a maximal subgroup of $G$. If $M$ is as in the first column of Table~\ref{t: table2}, then the action of $G$ on the right cosets of $M$ in $G$ is not binary. \end{lem} \begin{proof} Table~\ref{t: table2} lists precisely those remaining maximal subgroups that can be excluded using Test 5, together with the prime $p$ that has been used.
\end{proof} \begin{table}[!ht] \centering \begin{threeparttable} \begin{tabular}{@{}p{\textwidth}@{}} \centering \begin{tabular}{|c|c|c|}\hline Maximal Subgroup &Prime &Comments\\\hline $(7:3 \times He):2$&$2$&\\ $(A_6\times A_6\times A_6).(2\times S_4)$&2&\\ $(5^2:[2^4]\times U_3(5)).S_3$&$2$&\\ $(L_2(11)\times M_{12}):2$&$2$&\\ $(A_7\times (A_5\times A_5):2^2):2$&$2$&checked $4$-tuples\tnote{1}\\ $5^4:(3\times 2L_2(25)):2$&$2$&\\ $7^{2+1+2}:\mathrm{GL}_2(7)$&$2$&\\ $M_{11}\times A_6.2^2$&$2$&\\ $(S_5\times S_5\times S_5):S_3$&$3$&\\ $(L_2(11)\times L_2(11)):4$&$2$&\\ $13^2:(2L_2(13).4)$&$2$&\\ $(7^2:(3\times 2A_4)\times L_2(7)).2$&$2$&\\ $(13:6\times L_3(3)).2$&$2$&\\ $13^{1+2}:(3\times 4S_4)$&$2$&\\ $L_2(71)$&$2$&\\ $L_2(59)$&$5$&\\ $11^2:(5\times 2A_5)$&$2$&\\ $L_2(41)$&$2$&\\ $L_2(29):2$&$2$&\\ $7^2:\mathrm{SL}_2(7)$&$2$&\\ $L_2(19):2$&$2$&\\ $41:40$&$2$&\\\hline \end{tabular}\end{tabular} \caption{Primitive actions of the Monster for Lemma~\ref{l: M4}.}\label{t: table2} \begin{tablenotes} \item[1] \footnotesize This action turned out to be rather problematic. Each transitive action of odd degree satisfying the conditions~(1),~(2),~(3) in Lemma~\ref{l: alot} is not binary. However, for some of these actions to witness the non-binariness we had to resort to $4$-tuples, which was particularly time consuming. \end{tablenotes} \end{threeparttable} \end{table} \subsection{The remainder} By ruling out the groups listed in Tables~\ref{t: table2'} and \ref{t: table2}, we are left with precisely five subgroups on Wilson's list \cite{wilsonArXiv}. We now deal with these one at a time and, in so doing, we complete the proof of Theorem~\ref{t: sporadic}. The remaining groups are as follows: $$S_3\times Th,\hspace{1cm} 3^{3+2+6+6}:(L_3(3)\times SD_{16}),\hspace{1cm} (7:3\times He):2,\hspace{1cm} 5^{3+3}.(2\times L_3(5)),\hspace{1cm} 5^{2+2+4}:(S_3\times \mathrm{GL}_2(5)). 
$$ \smallskip {\bf Suppose that $M\cong S_3\times Th$.} Here we refer to \cite[Section 2]{wilsonoddlocal}. There are three conjugacy classes of elements of order $3$ in the Monster $G$, of type 3A, 3B and 3C, and the normalizers of the cyclic subgroups generated by the elements of type 3C are maximal subgroups of $G$ conjugate to $M$. Choose $x$, an element of type 3C with $M=\nor G {\langle x\rangle }$. We write $M:=H\times K$, where $H\cong S_3$ and $K\cong Th$. From \cite[first two lines of the proof of Proposition~2.1]{wilsonoddlocal}, for every $y\in K$ of order $3$, the element $xy$ is of type 3C. From the subgroup structure of the Thompson group $Th$, the group $K$ contains an element $y$ of order $3$ with $\nor {K}{\langle y\rangle}$ of shape $(3\times G_2(3)):2$ and maximal in $K$. Since $x$ and $xy$ are in the same $G$-conjugacy class, there exists $g\in G$ with $x^g=xy$. Moreover, an easy computation inside the direct product $M=H\times K$ yields that $M\cap M^g=\nor {G}{\langle x\rangle}\cap \nor G{\langle xy\rangle}=\nor M{\langle xy\rangle}\cong (\langle x\rangle\times \cent K y):2$, which has shape $(3\times 3\times G_2(3)):2$. This shows that the action of $M$ on the right cosets of $M\cap M^g$ is permutation isomorphic to the primitive action of $Th$ on the right cosets of $(3\times G_2(3)):2$. In other words, $G$ has a suborbit inducing a primitive action of the sporadic Thompson group. From Proposition \ref{p: 1}, this action is not binary, and hence the action of $G$ on the right cosets of $M$ is not binary by Lemma~\ref{l: again0}. \smallskip {\bf Suppose that $M\cong 3^{3+2+6+6}:(L_3(3)\times SD_{16})$.} Arguing as in the previous case, we note that $M$ contains only elements of type 13A and no elements of type 13B. Let $Q$ be a Sylow $13$-subgroup of $M$ and let $P$ be a Sylow $13$-subgroup of $G$ with $Q\le P$. Observe that $P$ is an extraspecial group of exponent $13$ and order $13^3$ and that $Q$ has order $13$.
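Two numerical facts used in this case can be verified directly (a sketch; $|L_3(3)|=5616$ and the standard Monster order):

```python
MONSTER_ORDER = 808017424794512875886459904961710757005754368000000000

def p_exponent(n, p):
    """Return the exponent of the prime p in n."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

# 13-parts: |L_3(3)| = 5616 contributes a single 13 to |M|, while the
# 13-part of |G| is 13^3 (so P above has order 13^3 and Q has order 13).
assert p_exponent(5616, 13) == 1
assert p_exponent(MONSTER_ORDER, 13) == 3

# By Legendre's formula, a Sylow 13-subgroup of the symmetric group on
# 13^2 points has order 13^14 -- and 13^14 does not divide |G|.
assert 169 // 13 + 169 // 169 == 14
assert MONSTER_ORDER % 13**14 != 0
```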
Replacing $P$ by a suitable $G$-conjugate, we may also assume that $Q\ne \Zent P$. (Observe that to guarantee that we may actually assume that $Q\ne \Zent P$ we need to use~\cite[page~15]{wilsonoddlocal}, which describes how the $13$-elements of type A and B are partitioned in $P$. Indeed, not all $13$-elements of type B are in $\Zent P$ and hence, if accidentally $Q=\Zent P$, we may replace $Q$ with a suitable conjugate.) Let $\alpha\in \Omega$ with $G_\alpha=M$ and set $\Lambda:=\alpha^P$. From the previous paragraph, $P$ acts faithfully on the set $\Lambda$ and $|\Lambda|=13^2$. Now the permutation group $P$ in its action on $\Lambda$ is not $2$-closed; indeed, the $2$-closure of $P$ in its action on $\Lambda$ has order $13^{14}$: it is a Sylow $13$-subgroup of $\Sym(\Lambda)$ (this follows from an easy computation or directly from~\cite{Dobson}). Since $P$ embeds into $G^{\Lambda}$, the $2$-closure of $G^{\Lambda}$ contains the $2$-closure of $P$; but since $13^{14}$ does not divide $|G|$, the group $G^{\Lambda}$ is not $2$-closed. Lemmas~\ref{l: again12} and \ref{l: fedup} imply that the action is not binary. \smallskip {\bf Suppose that $M\cong (7:3\times He):2$.} Observe that $He$ has a unique conjugacy class of elements of order $5$ and that its Sylow $5$-subgroups are elementary abelian of order $5^2$. Thus, we let $V:=\langle g,h\rangle$ be an elementary abelian $5$-subgroup of $M$ and we note that $g,h$ and $gh$ are $M$-conjugate and hence $G$-conjugate. The group $G$ has two conjugacy classes of elements of order $5$, denoted 5A and 5B. We claim that $M$ contains only elements of type 5A. Indeed, a computation inside the Held group $He$ reveals that $\cent M g$ contains an element of order $7\times 3\times 5=105$, and hence $G$ contains an element $x$ of order $105$ with $x^{21}=g$ being an element of order $5$. By considering the power information on the conjugacy classes of $G$, we see that $g$ belongs to the conjugacy class of type 5A.
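A couple of the numerical claims in this paragraph reduce to integer arithmetic (a sketch; $|He|=4030387200$ as in the ATLAS, and $|\cent M g|=12600$ as quoted in this case):

```python
HE_ORDER = 4030387200   # |He| = 2^10 * 3^3 * 5^2 * 7^3 * 17

# Sylow 5-subgroups of He have order exactly 5^2.
assert HE_ORDER % 5**2 == 0 and HE_ORDER % 5**3 != 0

# Necessary divisibility for C_M(g), of order 12600, to contain an
# element of order 105 = 3*5*7.
assert 12600 % 105 == 0
```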
Since all $5$-elements are conjugate in $M$, we get that $M$ contains only $5$-elements of type 5A. We now calculate the number of fixed points of $g$ and of $V$ on $\Omega$, making use of \eqref{e: fora}. Using the information on the conjugacy classes of $He$ and $G$ we deduce that $$|\Fix_\Omega(g)|=\frac{|G|}{|M|}\frac{|M:\cent M g|}{|G:\cent G g|}=\frac{|\cent G g|}{|\cent M g|}=\frac{1365154560000000}{12600}=108345600000.$$ Next, since $V$ is a Sylow $5$-subgroup of $M$, we deduce that $|\nor M V|=50400$ using the structure of the Held group. Moreover, from~\cite[Section~9]{wilsonoddlocal}, we get that the normalizer of an elementary abelian $5$-subgroup of the Monster consisting only of elements of type 5A is maximal in $G$ and is of the form $(5^2:4\cdot 2^2\times U_3(5)):S_3$. In particular, $|\nor G V|=302 400 000$. Thus $$|\Fix_\Omega(V)|=\frac{|G|}{|M|}\frac{|M:\nor M V|}{|G:\nor G V|}=\frac{|\nor G V|}{|\nor M V|}=\frac{302400000}{50400}=6000.$$ Now Lemma~\ref{l: added} implies that $G$ is not binary. \smallskip {\bf Suppose that $M\cong 5^{3+3}.(2\times L_3(5))$.} Let $P$ be a Sylow $31$-subgroup of $M$ and observe that $P$ is also a Sylow $31$-subgroup of $G$. Recall that $G$ has a maximal subgroup $K:=C\times D$, where $C\cong S_3$ and $D\cong Th$ (as usual $Th$ denotes the sporadic Thompson group). Now, by considering the subgroup structure of $Th$, we see that $D$ contains a maximal subgroup isomorphic to $2^5.\mathrm{L}_5(2)$ and hence $D$ contains a Frobenius subgroup $F$ isomorphic to $2^5:31$. Replacing $F$ by a suitable conjugate we may assume that $P\le F$. Comparing the subgroup structure of $M$ and of $F$, we deduce $M\cap F=P$. Consider $\Lambda:=\alpha^F$. By construction, as $M=G_\alpha$, we get $|\Lambda|=32$ and $F$ acts as a $2$-transitive Frobenius group of degree $32$ on $\Lambda$. 
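The Sylow $31$-subgroup claims in this case come down to comparing $31$-parts of the orders involved (a sketch; $|L_3(5)|=372000$):

```python
MONSTER_ORDER = 808017424794512875886459904961710757005754368000000000
L35_ORDER = 372000                  # |L_3(5)| = 2^5 * 3 * 5^3 * 31
M_ORDER = 5**6 * 2 * L35_ORDER      # |5^{3+3}.(2 x L_3(5))|

# Both |M| and |G| have 31-part exactly 31, so a Sylow 31-subgroup of M
# is already a Sylow 31-subgroup of G.
for n in (M_ORDER, MONSTER_ORDER):
    assert n % 31 == 0 and n % 31**2 != 0

# The Frobenius group F = 2^5:31 has point stabilizer P of order 31 on
# Lambda, giving |Lambda| = 32.
assert (2**5 * 31) // 31 == 32
```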
Since the $2$-closure of a $2$-transitive group of degree $32$ is $\Sym(32)$ and since $G$ has no sections isomorphic to $\Sym(32)$, we deduce from Lemma~\ref{l: fedup} that $G^\Lambda$ is strongly non-binary. Therefore $G$ is not binary by Lemma~\ref{l: again12}. \smallskip {\bf Suppose that $M\cong 5^{2+2+4}:(S_3\times \mathrm{GL}_2(5))$.} For this last case we invoke again the help of a computer-aided computation based on Lemma~\ref{l: alot}, but applied in a slightly different way from what we have described in Test~5. (We thank Tim Dokchitser for hosting the computations required for dealing with this case.) Observe that $|\Omega|-1$ is divisible by $5$, but not by $5^2$. With \texttt{magma} we construct all the transitive permutation representations of $M$ on a set $\Lambda$ with $|\Lambda|>1$ and $|\Lambda|$ not divisible by $5^2$. (Considering that a Sylow $5$-subgroup of $M$ has index $576$, this computation does require some time, but it is feasible.) Next, with a case-by-case analysis we see that none of these permutation representations satisfies (1),~(2),~(3) and~(4). Therefore, every transitive permutation representation of $M$ of degree greater than $1$ satisfying (1),~(2),~(3) and~(4) has degree divisible by $25$. Now, from Lemma~\ref{l: alot} applied with $d:=25$, we get that $G$ in its action on the set of right cosets of $M$ in $G$ is not binary, because $25$ does not divide $|\Omega|-1$.
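For this last case, the key divisibility fact is checkable directly from the orders (a sketch; $|M|=5^8\cdot|S_3|\cdot|\mathrm{GL}_2(5)|$ with $|\mathrm{GL}_2(5)|=480$):

```python
MONSTER_ORDER = 808017424794512875886459904961710757005754368000000000
M_ORDER = 5**8 * 6 * 480    # |5^{2+2+4}:(S_3 x GL_2(5))|

omega, rem = divmod(MONSTER_ORDER, M_ORDER)
assert rem == 0
assert (omega - 1) % 5 == 0    # |Omega| - 1 is divisible by 5 ...
assert (omega - 1) % 25 != 0   # ... but not by 5^2, as claimed

# A Sylow 5-subgroup of M (of order 5^9) has index 576 in M.
assert M_ORDER // 5**9 == 576
```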
\section{Background and motivation.} A subset $S$ of an abelian group is \emph{sum-free} if the equation $x+y=z$ has no solutions in the elements of $S$; that is, if $S$ is disjoint from $2S$, where we use the standard notation $2S:=\{s_1+s_2\colon s_1,s_2\in S\}$. The idea of a sum-free set goes back to Schur~\refb{s}, who was motivated by the modular version of the Fermat equation $x^n+y^n=z^n$. Despite this initial motivation, sum-free sets are treated in \refb{s} as a combinatorial object of independent interest. Originating from \refb{s}, the celebrated \emph{Schur's theorem} (``the positive integers cannot be partitioned into finitely many sum-free subsets'') is considered one of the origins of Ramsey theory. In the 1960s, sum-free sets were studied under the name ``mutant sets''; see, for instance,~\refb{ki}. The subject gained popularity when it turned out to be related to a problem of Erd\H os. The reader is invited to check \cite{b:gr,b:tv} for a historical account and further references. How large can a sum-free subset of a given finite abelian group be? First considered in 1968 by Diananda and Yap \cite{b:d,b:dy}, this basic question did not receive a complete answer until 2005, when it was eventually resolved by Green and Ruzsa \refb{gr}. Once the largest possible size is known, it is natural to investigate the corresponding stability problem: what is the structure of sum-free subsets of finite abelian groups of size close to the largest possible? In this respect, the cyclic groups of infinite order and of prime order and the elementary abelian $p$-groups have received particular attention. Here we are concerned with the groups of the latter type. The case $p=2$ is of special interest due to its connections with coding theory and the theory of finite geometries; see \cite{b:cp,b:kl} for a detailed explanation.
Motivated by the applications in these areas, Davydov and Tombak \refb{dt} established the structure of large sum-free subsets in the binary settings. To state their principal result, we briefly review the basic notions of periodicity and maximality. The \emph{period} of a subset $A$ of an abelian group $G$ is the subgroup $\pi(A):=\{g\in G\colon A+g=A\}\le G$; that is, $\pi(A)$ is the largest subgroup $H\le G$ such that $A$ is a union of $H$-cosets. The set $A$ is \emph{periodic} if $\pi(A)\ne\{0\}$ and \emph{aperiodic} otherwise. One also says that $A$ is \emph{$H$-periodic} if $H\le\pi(A)$; that is, if $A$ is the inverse image of a subset of the quotient group $G/H$ under the canonical homomorphism $G\to G/H$. A sum-free set is \emph{maximal} if it is not properly contained in another sum-free set. By $\Z_p^n$ we denote the elementary abelian $p$-group of rank $n$. \begin{theorem}[{\cite[Theorem~1]{b:dt}}]\label{t:dtper} Let $n\ge 4$ and suppose that $A\seq\Z_2^n$ is a maximal sum-free set. If $|A|>2^{n-2}+1$, then $A$ is periodic. \end{theorem} From Theorem~\reft{dtper} it is not difficult to derive a detailed structural characterisation of large sum-free sets in $\Z_2^n$. \begin{primetheorem}[\cite{b:dt}]\label{t:dt} Let $n\ge 4$ and suppose that $A\seq\Z_2^n$ is sum-free. If $|A|\ge 2^{n-2}+1$, then either $A$ is contained in a nonzero coset of a proper subgroup, or there are an integer $k\in[4,n]$, a subgroup $H\le\Z_2^n$ of size $|H|=2^{n-k}$, and a maximal sum-free subset $\cA\seq\Z_2^n/H\simeq\Zn[k]$ of size $|\cA|=2^{k-2}+1$ such that $A$ is contained in the inverse image of $\cA$ under the canonical homomorphism $\Z_2^n\to\Z_2^n/H$. \end{primetheorem} As an easy consequence, we have the following corollary. \begin{corollary}[\cite{b:dt}]\label{c:dt2} Let $n\ge 4$ and suppose that $A\seq\Z_2^n$ is sum-free. If $|A|\ge 5\cdot 2^{n-4}+1$, then $A$ is contained in a nonzero coset of a proper subgroup. 
\end{corollary} Corollary~\refc{dt2} was independently obtained in~\refb{cp}. In the ternary case, only an analogue of Corollary~\refc{dt2} is known. \begin{theorem}[\cite{b:l1}]\label{t:l3} Let $n\ge 3$ and suppose that $A\seq\Z_3^n$ is sum-free. If $|A|\ge 5\cdot3^{n-3}+1$, then $A$ is contained in a nonzero coset of a proper subgroup. \end{theorem} As shown in \cite{b:l1}, the bound $5\cdot3^{n-3}+1$ is sharp. In this note we study the first open case $p=5$, proving the following result. \begin{theorem}\label{t:main} Let $n\ge 1$ and suppose that $A\seq\Zn$ is sum-free. If $|A|>\frac32\cdot5^{n-1}$, then there are a proper subgroup $H<\Zn$ and an element $e\notin H$ such that $A\seq(e+H)\cup(-e+H)$. \end{theorem} There is no reason to believe that the assumption $|A|>\frac32\cdot5^{n-1}$ of Theorem~\reft{main} is sharp. Indeed, we expect that it can be relaxed to $|A|>\frac65\cdot5^{n-1}$; if true, this is best possible, as will be explained shortly. Suppose that $H$ is a nonzero subgroup of an abelian group $G$, and let $\phi_H$ denote the canonical homomorphism onto the quotient group $G/H$. A set $A\seq G$ is the inverse image under $\phi_H$ of a set $\cA\seq G/H$ if and only if $A$ is $H$-periodic. Clearly, in this situation $A$ is sum-free if and only if so is $\cA$; moreover, $A$ is \emph{maximal} sum-free if and only if $\cA$ is maximal. This shows that to describe sum-free subsets of an abelian group $G$, it suffices to describe aperiodic maximal sum-free subsets in the quotient groups; the sum-free subsets of $G$ are contained in their inverse images. The following example shows that for any $n\ge 1$, the group $\Zn$ contains an aperiodic maximal sum-free set of size $5^{n-1}+1$. \begin{example}\label{x:Ex1} If $n=1$, then for any nonzero element $a\in\Zn$ the set $A:=\{a,-a\}$ is an aperiodic maximal sum-free set. 
If $n\ge 2$, then we fix a maximal proper subgroup $H<\Zn$, a nonzero element $h\in H$, and an element $e\notin H$, and let $A:=\{h,-e\}\cup (e+H\stm\{h\})$. It is easily verified that $A$ is maximal sum-free, while $|A|=5^{n-1}+1$, showing that $A$ is aperiodic. \end{example} Suppose that $H<\Zn$ is a subgroup of size $5^{n-k}$, where $k\in[1,n-1]$, and let $\cA\seq\Zn/H$ be an aperiodic maximal sum-free set of size $|\cA|=5^{k-1}+1$, cf.~Example~\refx{Ex1}. Then the inverse image $A:=\phi_H^{-1}(\cA)\seq\Zn$ is an aperiodic maximal sum-free set of size $$ |A|=|\cA||H|=(5^{k-1}+1)\cdot 5^{n-k}=5^{n-1}+5^{n-k}. $$ Notice that if $k=1$, then $H$ is a maximal proper subgroup, $|\cA|=2$, and $A$ is a union of two $H$-cosets, and that if $k=2$, then $|\cA|=6$ and $|A|=\frac65\cdot5^{n-1}$, explaining the remark after the statement of Theorem~\reft{main}. It may well be that Example~\refx{Ex1} lists all aperiodic maximal sum-free sets $A\seq\Zn$ of size $|A|\ge 5^{n-1}+1$; in particular, any maximal sum-free set of size $|A|>5^{n-1}+1$ is periodic. \begin{conjecture}\label{j:main} Let $n$ be a positive integer, and suppose that $A\seq\Zn$ is an aperiodic maximal sum-free set. If $|A|\ge 5^{n-1}+1$, then one of the following holds: \begin{itemize} \item[i)] $n=1$ and $A=\{a,-a\}$, where $a\ne 0$; \item[ii)] $n\ge 2$ and there are a maximal proper subgroup $H<\Zn$, a nonzero element $h\in H$, and an element $e\notin H$ such that $A=\{h,-e\}\cup (e+H\stm\{h\})$. \end{itemize} (Notice that $|A|=5^{n-1}+1$ in both cases.) \end{conjecture} In view of the discussion above, Conjecture~\refj{main} implies that any sum-free set $A\seq\Zn$ of size $|A|\ge 5^{n-1}+1$ is contained in the inverse image of an aperiodic maximal sum-free set $\cA\seq\Zn/H$, where $H<\Zn$ is a subgroup of size $5^k$ with $k\in[0,n-1]$, and $|\cA|=5^{n-k-1}+1$. We now turn to the proof of Theorem~\reft{main}. 
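The inverse-image mechanism described above is straightforward to verify by machine. The sketch below (illustrative Python; the concrete choices $G=\Z_5^2$, $H=\{(a,0)\}$, and $\cA=\{1,4\}$ are ours) pulls back the $n=1$ set $\{a,-a\}$ along the projection onto the second coordinate and checks that the result is an $H$-periodic sum-free set of size $|\cA||H|$.

```python
from itertools import product

def is_sum_free(S, add):
    S = set(S)
    return all(add(x, y) not in S for x in S for y in S)

# G = Z_5^2 with coordinatewise addition; H = {(a, 0)} is a maximal proper
# subgroup, and G/H is identified with Z_5 via the projection (x, y) -> y.
add2 = lambda u, v: ((u[0] + v[0]) % 5, (u[1] + v[1]) % 5)
H = {(a, 0) for a in range(5)}

quotient_A = {1, 4}  # the aperiodic maximal sum-free set {a, -a} in Z_5
assert is_sum_free(quotient_A, lambda x, y: (x + y) % 5)

# Inverse image under the projection: the union of cosets (e + H) u (-e + H).
A = {g for g in product(range(5), repeat=2) if g[1] in quotient_A}
assert is_sum_free(A, add2)                          # sum-free again
assert len(A) == len(quotient_A) * len(H)            # |A| = |cal A||H| = 10
assert all({add2(g, h) for g in A} == A for h in H)  # A is H-periodic
print(len(A))  # 10
```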
\section{Proof of Theorem~\reft{main}}\label{s:proof} Our argument is self-contained except that we need the following classical result of Kneser (see~\cite[Theorem~6.1]{b:g} for the present formulation). \begin{theorem}[Kneser \cite{b:kn1,b:kn2}]\label{t:Kneser} If $A_1\longc A_k$ are finite, nonempty subsets of an abelian group, then letting $H:=\pi(A_1\longp A_k)$ we have $$ |A_1\longp A_k| \ge |A_1+H| \longp |A_k+H| - (k-1)|H|. $$ \end{theorem} Theorem~\reft{Kneser} is referred to below as \emph{Kneser's theorem}. We start with a series of ``general'' claims. At this stage it is not assumed that $A$ is a sum-free set satisfying the assumptions of Theorem~\reft{main}. \begin{lemma}\label{l:union} Let $n\ge 1$ be an integer and suppose that $A\seq\Zn$ is sum-free. If $|A|>\frac32\cdot5^{n-1}$ and $A$ is contained in a union of two cosets of a proper subgroup $H<\Zn$, then there is an element $e\notin H$ such that $A\seq(e+H)\cup(-e+H)$. \end{lemma} \begin{proof} Since $2|H|\ge|A|>\frac32\cdot5^{n-1}$, we have $|H|=5^{n-1}$. Suppose that $A=(e_1+A_1)\cup(e_2+A_2)$, where $A_1,A_2$ are contained in $H$, and $e_1,e_2\in\Zn$ lie in distinct $H$-cosets. From $|A|>\frac32\cdot5^{n-1}$ we get $|A_1|+|A_2|=|A|>\frac32\,|H|$. Therefore $\min\{|A_1|,|A_2|\}>\frac12\,|H|$, and by the pigeonhole principle, $2A_1=2A_2=A_1+A_2=H$. It follows that $2A=(2e_1+H)\cup(e_1+e_2+H)\cup(2e_2+H)$. Since $A$ is sum-free, each of the three cosets in the right-hand side is distinct from each of the cosets $e_1+H$ and $e_2+H$, which is possible only if $e_2+H=-e_1+H\ne H$. \end{proof} By Lemma~\refl{union}, to prove Theorem~\reft{main} it suffices to show that any sum-free set in $\Zn$ of size larger than $\frac32\cdot5^{n-1}$ is contained in a union of two cosets of a proper subgroup. \begin{proposition}\label{p:1} Let $n\ge 1$ be an integer and suppose that $A\seq\Zn$ is sum-free. 
If $|A|>\frac32\cdot5^{n-1}$, then $A$ cannot have non-empty intersections with exactly three cosets of a maximal proper subgroup of $\Zn$. \end{proposition} \begin{proof} The case $n=1$ is immediate. Assuming that $n\ge 2$, $A\seq\Zn$ is sum-free, and $H<\Zn$ is a maximal proper subgroup such that $A$ intersects non-trivially exactly three $H$-cosets, we obtain a contradiction. Fix an element $e\in\Zn\stm H$, and for each $i\in[0,4]$ let $A_i:=(A-ie)\cap H$; thus, $A=A_0\cup(e+A_1)\cup(2e+A_2)\cup(3e+A_3)\cup(4e+A_4)$ with exactly three of the sets $A_i$ non-empty. Considering the actions of the automorphisms of $\Zn[]$ on its two-element subsets (equivalently, passing from $e$ to $2e,3e$, or $4e$, if necessary), we further assume that one of the following holds: \begin{itemize} \item[(i)] $A_2=A_3=\est$; \item[(ii)] $A_0=A_4=\est$; \item[(iii)] $A_3=A_4=\est$. \end{itemize} We consider these three cases separately. \paragraph{Case (i): $A_2=A_3=\est$} In this case $A=A_0\cup(e+A_1)\cup(4e+A_4)$, and since $A$ is sum-free, we have $(A_1+A_4)\cap A_0=\est$. It follows that $|A_0|+|A_1+A_4|\le |H|$. Consequently, letting $F:=\pi(A_1+A_4)$, we have $|H|\ge |A_0|+|A_1|+|A_4|-|F|=|A|-|F|$ by Kneser's theorem. Observing that $F$ is a proper subgroup of $H$ (indeed, $F=H$ would give $A_1+A_4=H$ and thus $A_0=\est$), so that $|F|\le\frac15|H|=5^{n-2}$, we conclude that $$ |A| \le |H|+|F| \le \frac65|H|=6\cdot 5^{n-2} < \frac32\cdot 5^{n-1}, $$ a contradiction. \smallskip \paragraph{Case (ii): $A_0=A_4=\est$} In this case $A=(e+A_1)\cup(2e+A_2)\cup(3e+A_3)$ with $(A_1+A_2)\cap A_3=\est$, and the proof can be completed as in Case (i). \smallskip \paragraph{Case (iii): $A_3=A_4=\est$} In this case from $(A_0+A_1)\cap A_1=\est$, letting $F:=\pi(A_0+A_1)$, by Kneser's theorem we get $$ |H| \ge |A_0+A_1|+|A_1| \ge |A_0|+2|A_1| - |F| $$ whence, in view of $|F|\le\frac15|H|$ (again, $F$ is a proper subgroup of $H$ since $A_1\ne\est$), \begin{equation}\label{e:nov5a} 2|A_1|+|A_0| \le \frac65\,|H|. \end{equation} Similarly, from $(A_0+A_2)\cap A_2=\est$ we get \begin{equation}\label{e:nov5b} 2|A_2|+|A_0| \le \frac65\,|H|. 
\end{equation} Averaging~\refe{nov5a} and~\refe{nov5b} we obtain $|A|\le\frac65|H|<\frac32\cdot5^{n-1}$, a contradiction. \end{proof} \begin{proposition}\label{p:2} Let $n\ge 1$ be an integer and suppose that $A\seq\Zn$ is sum-free, and that $H<\Zn$ is a maximal proper subgroup. If there is an $H$-coset with more than half of its elements contained in $A$, then $A$ has non-empty intersections with at most three $H$-cosets. \end{proposition} \begin{proof} Fix an element $e\in\Zn\stm H$, and for each $i\in[0,4]$ set $A_i:=(A-ie)\cap H$; thus, $A=A_0\cup(e+A_1)\cup\dotsb\cup(4e+A_4)$. Suppose that $|A_i|>0.5|H|$ for some $i\in[0,4]$. Since $2A_i=H$ by the pigeonhole principle, we have $i>0$ (as otherwise $2A_0=H$ would not be disjoint from $A_0$). Normalizing, we can assume that $i=1$. From $2A_1\cap A_2=\est$ we now derive $A_2=\est$, and from $(A_1-A_1)\cap A_0=\est$ we get $A_0=\est$. \end{proof} In view of Lemma~\refl{union} and Propositions~\refp{1} and~\refp{2}, we can assume that the set $A\seq\Zn$ of Theorem~\reft{main} contains fewer than $\frac12\cdot 5^{n-1}$ elements in every coset of every maximal proper subgroup. \begin{lemma}\label{l:new} Let $n\ge 1$ be an integer, and suppose that $A,B,C\seq\Zn$ satisfy $(A+B)\cap C=\est$. If $\min\{|A|,|B|\}>2\cdot5^{n-1}$ and $C\ne\est$, then $|A|+|B|+2|C|\le6\cdot5^{n-1}$. \end{lemma} \begin{proof} Write $H:=\pi(A+B-C)$, and define $k\in[0,n]$ by $|H|=5^{n-k}$. We have \begin{equation}\label{e:unnamed} \min\{|A+H|,|B+H|\} > 2\cdot 5^{n-1} = 2\cdot 5^{k-1}|H| \end{equation} while, by Kneser's theorem, and since $(A+B)\cap C=\est$ implies $0\notin A+B-C$ and, consequently, $(A+B-C)\cap H=\est$, \begin{equation}\label{e:ABCKneser} 5^n-|H| \ge |A+B-C| \ge |A+H|+|B+H|+|C+H|-2|H|. \end{equation} Combining \refe{unnamed} and \refe{ABCKneser}, we obtain $$ 5^n \ge 2(2\cdot 5^{k-1}+1)|H| + |C+H| - |H| \ge 4\cdot 5^{k-1}|H| + |C+H| +|H|. $$ Consequently, $$ |C| \le |C+H| \le 5^{n-1}-|H|. 
$$ On the other hand, from~\refe{ABCKneser}, $$ |A|+|B|+|C|\le 5^n+|H|. $$ Taking the sum of the last two estimates gives the result. \end{proof} \begin{proposition}\label{p:3} Let $n\ge 1$ be an integer and suppose that $A\seq\Zn$ is a sum-free subset of size $|A|>\frac32\cdot5^{n-1}$. If $H<\Zn$ is a maximal proper subgroup such that every $H$-coset contains fewer than $\frac12|H|$ elements of $A$, then there is at most one $H$-coset containing more than $\frac25|H|$ elements of $A$. \end{proposition} \begin{proof} Suppose for a contradiction that there are two (or more) $H$-cosets which are \emph{rich} meaning that they contain more than $\frac{2}{5}|H|$ elements of $A$ each. Fix an element $e\in\Zn\stm H$ and write $A_i=(A-ie)\cap H$, $i\in[0,4]$. Without loss of generality, either $A_0$ and $A_1$, or $A_1$ and $A_2$, or $A_1$ and $A_4$ are rich. If $A_0$ and $A_1$ are rich, then applying Lemma~\refl{new} with $H$ as the underlying group, in view of $(A_0+A_1)\cap A_1=\est$ we get $4\cdot \frac{2}{5}|H|<|A_0|+|A_1|+2|A_1|\le 6\cdot 5^{n-2}$, which is wrong. If $A_1$ and $A_2$ are rich then, observing that $(A_1+A_1)\cap A_2=\est$, we recover the contradictory $4\cdot \frac{2}{5}|H|<|A_1|+|A_1|+2|A_2|\le 6\cdot 5^{n-2}$. Finally, if $A_1$ and $A_4$ are rich, then from $$ (A_1+A_4)\cap A_0 = (A_1+A_1)\cap A_2 = (A_4+A_4)\cap A_3 = \est $$ using Lemma~\refl{new} we obtain \begin{align*} |A_1|+|A_4|+2|A_0| & \le 6\cdot 5^{n-2}, \\ |A_1|+|A_1|+2|A_2| & \le 6\cdot 5^{n-2}, \\ |A_4|+|A_4|+2|A_3| & \le 6\cdot 5^{n-2}. \end{align*} Taking the sum, $$ 3|A_1| + 3|A_4| + 2|A_0| + 2|A_2| + 2|A_3| \le 18\cdot 5^{n-2}; $$ that is, $2|A|+|A_1|+|A_4|\le 18\cdot 5^{n-2}$. However, from $|A|>\frac32\cdot5^{n-1}$ and $\min\{|A_1|,|A_4|\}>\frac{2}{5}\cdot 5^{n-1}$ we derive $2|A|+|A_1|+|A_4|> 3\cdot 5^{n-1} + \frac45\cdot 5^{n-1}=19\cdot 5^{n-2}$, a contradiction. \end{proof} We use character sums to complete the argument and prove Theorem~\reft{main}. 
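Before turning to the proof itself, we remark that the vanishing character sum it relies on is easy to illustrate numerically. The sketch below (ours, with the sample sets $A=\{1,4\}\seq\Z_5$ and $B=\{1,2\}$ chosen for illustration) checks that $\sum_\chi|\hat1_A(\chi)|^2\hat1_A(\chi)=0$ for a sum-free $A$, whereas a set admitting a solution of $x+y=z$ yields a nonzero sum.

```python
from cmath import exp, pi

# Characters of Z_5: chi_t(a) = exp(2*pi*i*t*a/5), t = 0, ..., 4.
def fourier(A, t, N=5):
    return sum(exp(2j * pi * t * a / N) for a in A) / N

def weighted_sum(A, N=5):
    # Sum over all characters of |1_A-hat|^2 * 1_A-hat; this equals
    # (number of solutions of x + y = z with x, y, z in A) / N^2.
    return sum(abs(fourier(A, t, N)) ** 2 * fourier(A, t, N) for t in range(N))

A = {1, 4}  # sum-free in Z_5
assert all((x + y) % 5 not in A for x in A for y in A)
print(abs(weighted_sum(A)) < 1e-12)  # True: the sum vanishes

B = {1, 2}  # 1 + 1 = 2, so B is not sum-free
print(round(weighted_sum(B).real, 6))  # 0.04 = 1/25: exactly one solution
```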
\begin{proof}[Proof of Theorem~\reft{main}] Suppose that $n\ge 2$ (the case $n=1$ is verified by direct inspection), and that $A\seq\Zn$ is a sum-free set with $\alp:=|A|/5^n>\frac3{10}$; we want to show that $A$ is contained in a union of two cosets of a proper subgroup. Denoting by $1_A$ the indicator function of $A$, consider the Fourier coefficients $$ \hat1_A(\chi):=5^{-n}\sum_{a\in A} \chi(a),\ \chi\in\widehat{\Zn}. $$ Since $A$ is sum-free, we have $A\cap(A-A)=\est$, whence $$ \sum_\chi |\hat1_A(\chi)|^2\cdot \hat1_A(\chi) = 0; $$ consequently, $$ \sum_{\chi\ne 1}|\hat1_A(\chi)|^2\cdot \hat1_A(\chi) = -\alp^3 $$ and, as a result, \begin{equation*}\label{e:Re} \sum_{\chi\ne 1}|\hat1_A(\chi)|^2\cdot \Re(\hat1_A(\chi)) = -\alp^3. \end{equation*} Comparing this to $$ \sum_{\chi\ne 1}|\hat1_A(\chi)|^2=\alp(1-\alp) $$ (which is an immediate corollary of Parseval's identity), we obtain $$ \sum_{\chi\ne 1}|\hat1_A(\chi)|^2 \big((1-\alp)\,\Re(\hat1_A(\chi))+\alp^2\big) = 0. $$ We conclude that there exists a non-principal character $\chi\in\widehat{\Zn}$ such that \begin{equation}\label{e:Rsmall} \Re(\hat1_A(\chi)) \le -\frac{\alp^2}{1-\alp}. \end{equation} Let $F:=\ker\chi$, fix $e\in\Zn$ with $\chi(e)=\exp(2\pi i/5)$, and for each $i\in[0,4]$, let $\alp_i:=|(A-ie)\cap F|/|F|$. By Propositions~\refp{1} and~\refp{2}, we can assume that $\max\{\alp_i\colon i\in[0,4]\}<0.5$, and then by Proposition~\refp{3} we can further assume that there is at most one index $i\in[0,4]$ with $\alp_i>0.4$; that is, at most one of the five inequalities $\alp_i\le 0.4\ (i\in[0,4])$ fails, and even the failing one holds in the relaxed form $\alp_i<0.5$. We show that this set of assumptions is inconsistent with~\refe{Rsmall}. To this end, we notice that $$ 5 \Re(\hat1_A(\chi)) = \alp_0+s_1\cos(2\pi/5)+s_2\cos(4\pi/5) $$ where $s_1:=\alp_1+\alp_4$ and $s_2:=\alp_2+\alp_3\le 0.9$. 
Comparing with~\refe{Rsmall}, we get \begin{align*} -\frac{5\alp^2}{1-\alp} &\ge \alp_0+s_1\cos(2\pi/5)+s_2\cos(4\pi/5) \\ &= \alp_0+s_1\cos(2\pi/5)+(s_2-0.9)\cos(4\pi/5) +0.9\cos(4\pi/5) \\ &\ge \alp_0+s_1\cos(2\pi/5)+(s_2-0.9)\cos(2\pi/5) +0.9\cos(4\pi/5) \\ &\ge \alp_0 + (5\alp-\alp_0-0.9)\cos(2\pi/5) +0.9\cos(4\pi/5) \\ &\ge (5\alp-0.9)\cos(2\pi/5) +0.9\cos(4\pi/5), \end{align*} while the resulting inequality $$ -\frac{5\alp^2}{1-\alp} \ge (5\alp-0.9)\cos(2\pi/5) +0.9\cos(4\pi/5) $$ is easily seen to be wrong for all $\alp\in[0.3,1)$. This completes the proof of Theorem~\reft{main}. \end{proof} \vfill
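The base case of the theorem can also be confirmed exhaustively. The short search below (an illustrative sketch of ours) runs over all subsets of $\Z_5$ and verifies that the only sum-free sets of size at least $2$ (that is, of size exceeding $\frac32\cdot5^{0}$) are the pairs $\{e,-e\}$, in accordance with Theorem~\reft{main} for $n=1$, where $H=\{0\}$.

```python
from itertools import combinations

def is_sum_free(S):
    return all((x + y) % 5 not in S for x in S for y in S)

# All sum-free subsets of Z_5 with |A| >= 2, i.e. |A| > (3/2) * 5^(n-1) for n = 1:
big = [set(S) for r in range(2, 6) for S in combinations(range(5), r)
       if is_sum_free(S)]
print(big)  # [{1, 4}, {2, 3}]: exactly the sets {e, -e} with e != 0
```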
\section{Introduction} Modern astrophysics relies on the inclusion of stellar-feedback processes in order to explain the low star-formation efficiency and baryon fraction observed in galaxies together with the metal enrichment of the circumgalactic and intergalactic media. The most commonly considered form of stellar feedback is the injection of mass, heavy elements, energy, and momentum by supernova (SN) explosions \citep[e.g.][]{Dekel1986}. The energy budget per unit mass of stars formed is dominated by core-collapse SNe \citep[see e.g. figure 10 in the review by][]{Benson2010PhR}. Each SN deposits 1--10 M$_\odot$ of stellar ejecta initially moving at $\sim 10^4$ km s$^{-1}$ (much larger than the sound speed in the surrounding medium, thus leading to a blast wave) with a total kinetic energy of $\sim 10^{51}$ erg. The time evolution of the supernova remnant (SNR) produces a final momentum of a few $\times\, 10^5$ M$_\odot$ km s$^{-1}$, slightly depending on the properties of the environment \citep[e.g.][]{KimOstriker2015}. The characteristic length scale of this phenomenon (say the radius of the dense shell that forms at the outer edge of the SNR between the Sedov--Taylor and the pressure-driven snowplow stages) ranges between a few and a few tens of pc, depending on the ambient density. Multiple events clustered in space and time can build up superbubbles of hot gas in the interstellar medium (ISM) that break through the galactic disc and vent material into the halo \citep[][]{Norman1989} or even beyond. It is widely believed that SNe play a key role in the self-regulation of star formation within galaxies: gravity and cooling cause the gas to reach high densities and turn into stars, thus producing feedback which drives some gas back to lower densities. Finally, SN explosions are thought to govern the small-scale structure of the ISM which comprises multiple `thermal phases' in approximate pressure balance with one another. 
The ISM is often modelled as consisting of cold clouds (which dominate the mass budget) embedded in a hotter volume-filling inter-cloud medium. The physical conditions of the multi-phase ISM are determined by mass exchanges between the phases and the energy injection due to SNe \citep[][]{McKee1977}. In particular, the hot phase cools down slowly through the evaporation of the cold clouds. Several lines of reasoning suggest that additional forms of stellar feedback should be included in galaxy-formation models \citep[e.g.][]{Hopkins2011, Brook2012, Stinson2012} although it is challenging to differentiate this need from the limitations of the sub-grid models for SN feedback and star formation \citep{Rosdahl2017, Smith2019}. Beyond SN explosions, there are several physical phenomena through which stellar feedback could affect the ISM, namely: proto-stellar jets, radiation pressure, photo-ionisation, photo-electric heating, stellar winds and cosmic-ray acceleration at SNRs. These processes take place on rather small scales (ranging from a fraction of a pc to 10 pc) and can in principle play an important role in dispersing the gas after the onset of star formation. In particular, pre-SN or `early' feedback is thought to strengthen the impact of the succeeding SN blast wave \citep[e.g. ][]{Agertz2013, Stinson2013, Geen2015, Hopkins2018} by removing gas from the birth giant molecular cloud (GMC). This scenario is supported by observations showing that gas is cleared out of the star-formation site before the first SN explosion \citep{Barnes2020} and that GMCs are dispersed within \SI{3}{\Myr} after the emergence of unembedded high-mass stars \citep{Chevance2020}. 
Due to computational cost, high-resolution simulations of stellar feedback within the ISM often concentrate on only one or two processes, namely individual and clustered SNe \citep{Gatto2015, IffrigHennebelle2015, KimOstriker2015, WalchNaab2015, Girichidis2016, Kortgen2016}, SNe and photoionisation \citep{Geen2016}, SNe and stellar winds \citep{Rogers2013, Gatto2017}, photoionisation \citep{DaleBonnell2011, Haid2019, Bending2020}, photoionisation and radiation pressure \citep{Howard2016, Kim2016, Ali2018, Kim2018}, photoionisation and winds \citep{Dale2014, Haid2018}, radiation pressure \citep{SkinnerOstriker2015}, winds and radiation pressure \citep{SilichTenorioTagle2013}. However, some recent simulations account for stellar winds, photoionisation and SNe \citep{Geen2015, Lucas2020, Rathjen2021}. \citet{Stinson2013} developed an approximate numerical scheme to account for the collective effect of early stellar feedback in low-resolution simulations of galaxy formation. In this case, 10 per cent of the UV luminosity of a stellar population is injected as thermal energy before any SN events take place. Consequently, the gas immediately heats up to temperatures $T > 10^6$ K and then rapidly cools down to $10^4$ K thus creating a lower density medium than in the absence of early feedback. This broadly mimics the formation of an HII region and effectively prevents star formation in the regions immediately surrounding young stellar clusters. On the other hand, \citet{Hopkins2011} developed a kinetic-feedback scheme to deposit the momentum imparted by radiation, SNe, and stellar winds in higher-resolution simulations. This is a sub-grid model which aims to describe physics taking place within giant molecular clouds and stellar clusters. This effort eventually led to the Feedback In Realistic Environments (FIRE) project \citep{Hopkins2014} and its updated version FIRE-2 \citep{Hopkins2018}. 
In parallel, \citet[][see also \citealt{AgertzKravtsov2015}]{Agertz2013} presented a sub-grid model for stellar feedback which takes into account the time-dependent injection of energy, momentum, mass and heavy elements from radiation pressure, stellar winds, and type II and Ia supernovae into the surrounding ISM. Undertaking a similar endeavour, \citet{Marinacci2019} presented the Stars and MUltiphase Gas in GaLaxiEs (SMUGGLE) model, which was later combined with a radiative-transfer scheme in \citet{Kannan2019}. Recent simulations of individual galaxies based on these schemes seem to self-consistently generate prominent galactic fountains and sustain inefficient (feedback-regulated) star formation for long time scales, in agreement with observations \citep[e.g.][]{Hu2016, Wetzel2016, Hu2017, LiBryan2017, Hopkins2018, Emerick2019, Lahen2019, Wheeler2019, Agertz2020, Emerick2020, Lahen2020, LiLi2020, LiTonnesen2020, Gutcke2021}. In this work, we concentrate on the mechanical impact from stellar winds and neglect other forms of early feedback (e.g. radiative). Massive stars show radiation-driven outflows where material escapes the stellar surface with velocities of $\sim 1000$ km s$^{-1}$ and mass-loss rates of $\sim 10^{-6}$ M$_\odot$ yr$^{-1}$ \citep[see][for a recent review]{vink2021r}. The energy injected into the ISM by winds over the lifetime of a massive star can be comparable to the mechanical energy of the subsequent SN explosion. Theoretical models and numerical simulations suggest that SN explosions should always take place within wind-blown cavities surrounded by dense shells (with radii $>10$ pc) that have been seeded with nuclear-processed material \citep{TenorioTagle1990, TenorioTagle1991, Rozyczka1993, SmithRosen2003, Dwarkadas2005, Dwarkadas2007, ToalaArthur2011, Geen2015}. However, in stellar clusters, the fast stellar winds collide with each other and with the clumpy ISM, producing shocks that heat the gas up. 
In this complex configuration, the wind energy can be lost via several channels \citep[e.g.][]{Rosen2014} and it is unclear whether stellar winds can drive the bulk motion of a cool, dense super shell surrounding the whole star-forming region. Recent simulations give contrasting answers. For instance, \citet{Rey-Raposo2017} find that winds act as an effective source of kinetic and thermal energy for the ISM, while \citet{Lancaster2021} show that the bulk of the wind energy is lost due to turbulent mixing followed by radiative cooling. This loss, however, might be inhibited by magnetic fields \citep{Rosen2021}. What is certain is that the presence of a wind-driven bubble dramatically increases the fraction of SN energy which is retained by the ISM in the form of kinetic energy \citep{Rogers2013, Fierlinger2016}. In fact, the SN energy leaks through the chimneys (low-density regions) dug by the winds. On the other hand, studies that investigate the impact of ionising radiation from massive stars on the ISM find a limited impact from stellar winds \citep[e.g.][]{Dale2014, Ngoumou2015, Geen2021}. An important caveat worth mentioning here is that the various studies cited above adopt different approximations to model the small-scale structure of the ISM and therefore cannot always be directly compared. Most simulations of galaxy formation \citep[e.g. ][]{Agertz2013, Hopkins2018} rely on \textsc{Starburst99} \citep{Leitherer1999, Vazquez2005, Leitherer2010, Leitherer2014} to derive the mechanical input of winds from single stars. In order to bracket the distribution of rotation velocities in stellar populations, the latest version of \textsc{Starburst99} includes the Geneva 2012/13 stellar models with two rotation velocities (either zero or 40 per cent of the break-up velocity on the zero-age main sequence) and two metallicity values. 
However, this is a rather limited set of evolutionary models covering a narrow parameter range, which might not be fully representative of the rotational velocity distribution of stars and its metallicity-dependence. We use the state-of-the-art open-source stellar-evolution code Modules for Experiments in Stellar Astrophysics \citep[\textsc{Mesa},][]{Paxton2011,Paxton2013,Paxton2015,Paxton2018,Paxton2019} in order to investigate a broader range of rotational velocities and metallicities. Moreover, since very massive stars have been observed in the Tarantula Nebula \citep[e.g.][]{Doran2013} in the Large Magellanic Cloud (LMC), we consider stellar models with initial masses up to nearly 160 M$_\odot$ (a factor of 1.3 larger than in \textsc{Starburst99}). Finally, we account for the fact that the majority of stars are born in binary systems \citep{Sana2012,Sana2013}. The observational evidence for such a large binary fraction led to a paradigm change in our understanding of stellar evolution. When one (or both) of the stars in a binary system fills its Roche lobe, a phase of mass transfer takes place. Material is exchanged between the companions through the first Lagrangian point or lost by the system. The outermost layers of the donor are stripped off and eventually accreted on to the secondary star. This significantly alters the masses and spectroscopic appearances of the stars and generates evolutionary sequences otherwise unattainable in a single-star scenario \citep[e.g.][]{demarco2017}, influencing the mass-loss and rotation rates of stars. In some cases, the interaction and mass exchange can be unstable, leading to merger events. Although numerous uncertainties still exist regarding the modelling of binary systems, it is becoming increasingly clear that more realistic estimates of stellar feedback cannot ignore the impact of stellar multiplicity. 
This also applies to radiative feedback, as interacting binaries enhance the production of hydrogen- and helium-ionizing photons \citep[e.g.][]{Eldridge2017,gotberg2017} and harden the spectra of a stellar population \citep{gotberg2019}. Of particular interest are stripped helium stars, i.e. massive helium stars produced by binary interaction, which emit the majority of their light at wavelengths shorter than the Lyman limit \citep{Stanway2016,gotberg2019} on timescales beyond a few Myr. As a first sample application, we perform a series of zoom-in cosmological simulations following the formation of the central galaxy in a dark-matter halo of mass $M=1.8\times 10^{11}$ M$_\odot$ at redshift $z=3$. By considering different feedback models, we investigate the impact of stellar winds from rotating stellar models and binary systems on the resulting galaxy. We obtain objects with a stellar mass of a few $\times \,10^9$ M$_\odot$ in line with those routinely observed in deep optical and infrared imaging surveys \citep[see e.g. figure 1 in][]{Grazian2015}. Note that, in the standard cosmological model, main-progenitor haloes of the selected mass at $z=3$ end up, on average, within haloes with mass $M\simeq 10^{12}$ M$_\odot$ at the present time \citep[e.g.][]{Wechsler2002}. In this sense, our study is also relevant to Milky Way-like galaxies at $z=0$. The paper is structured as follows. In Section~\ref{sec:wind_feedback} (and in the appendices), we introduce our suite of stellar evolutionary models and derive the mechanical and chemical yields of stellar winds as a function of metallicity. In Section~\ref{sec:sim_fdbk_over}, we briefly review the implementations of stellar feedback in simulations and describe our own. The specifics of our cosmological simulations are given in Section~\ref{sec:numerical_methods} while, in Section~\ref{sec:results}, we study the properties of the simulated galaxies. Our main results and conclusions are listed in Section~\ref{sec:Summary}. 
\section{Stellar evolution models} \label{sec:wind_feedback} \begin{figure*} \centering \includegraphics[width=1\textwidth]{collection_galaxy/Stellar/Energy_wind_0.0040_s_c.pdf} \caption{ Total kinetic energy (black solid line) injected by winds into the ISM during the lifetime of a stellar model with initial mass $M_\mathrm{ini}$ and metallicity $Z=0.004$. Different panels refer to different rotation velocities as indicated by the labels. In order to distinguish the relative contributions from different evolutionary stages (see footnote~\ref{footdef}), we shade the area between the black curve and the bottom axis using different colours. The vertical extent of each colour indicates the fraction of energy released during each phase using a linear scale (e.g. a contribution of 50 per cent covers half of the distance between the bottom axis and the black curve). } \label{fig:Methods_Single_Initialmass_Energy} \end{figure*} We use the stellar evolution code \textsc{Mesa} to compute different sets of models. For the mass-loss by stellar winds, we adopt the so-called `Dutch' prescription based on \citet{Vink2001} for OB stars, \citet{Nugis2000} for the Wolf-Rayet regime, and \citet{deJager1988} for late-type stars. For convection, we adopt the mixing-length theory \citep{Bohm-Vitense1958,Henyey1965} with a mixing-length parameter of $2$ in the framework of the MLT++ scheme. This is motivated by the numerical difficulties commonly found in inflated stellar envelopes \citep{Sanyal2015}. For the overshooting, semi-convection, and thermohaline-mixing parameters, we assume the values of $0.345$, $1$, and $1$, respectively \citep[similar to][]{Brott2011}. Any remaining numerical parameters are set as in \citet{Marchant2016}. We neglect magnetic fields. 
\subsection{Wind feedback from single stars} \label{sec:wind_feedback_single} Our grid of models for single stars spans a mass range of $\log(M/\si{\Msun})=$ \num{0.8} to \num{2.2} in steps of $\log(M/\si{\Msun})=0.1$, where $\log$ is short for $\log\id{10}$. We consider eight different metallicities, namely \numlist{0.0001;0.0004;0.0007;0.001;0.002;0.004;0.008;0.02} and vary the initial surface rotation velocities, $\upsilon_\mathrm{ini}$, between $0$ and $600$~\si{\kilo\meter\per\second} in intervals of \SI{100}{\kilo\meter\per\second}. All our models are computed until core-helium exhaustion. For a stellar model with mass $M$, radius $R$, luminosity $L$, surface hydrogen fraction $X$, metallicity $Z$, mass-loss rate $\dot{M}$, surface rotation velocity $\upsilon\id{rot}$ and critical rotation velocity $\upsilon\id{crit}$ (determined by \textsc{Mesa}), we estimate the rate of kinetic energy ejected in the form of winds using \citep{Leitherer1992,Puls2008} \begin{equation} \dot{E}\id{k}=\frac{1}{2}\,\dot{M} \upsilon_\infty^2 \label{eq:Stellar_energy} \end{equation} with the terminal wind velocity \begin{equation} \upsilon_\infty=d\,\sqrt{\frac{2\,G\,M}{R}\,(1-\Gamma\id{es}) }\,f\id{rot} \, \left(\frac{Z}{0.02}\right)^{0.13} \label{eq:Stellar_terminal_wind_velocity} \end{equation} where $d$ is the factor relating the terminal and escape velocities \citep[$2.6$ for OB and $1.6$ for classical Wolf-Rayet (cWR) and helium stars,][]{Abbott1978,GrafenerVink2013}, $\Gamma\id{es}$ denotes the electron-scattering Eddington factor \begin{equation} \Gamma\id{es}=\frac{0.2(1+X) L}{4\pi c G M}, \end{equation} $G$ is the gravitational constant, $c$ the speed of light and \begin{equation} f\id{rot}=\sqrt{1-\left(\frac{\upsilon\id{rot}}{\upsilon\id{crit}}\right)^2} \label{eq:Stellar_rotation_damping} \end{equation} \citep{Puls2008}. 
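To make the prescription above concrete, the following sketch evaluates the formulas for a single illustrative snapshot. The stellar parameters (a non-rotating \M{30} star with $R=10$ R$_\odot$, $L=10^5$ L$_\odot$, $X=0.7$, $Z=0.02$, and $\dot M=10^{-6}$ \si{\Msun\per\year}) are representative round numbers of our own choosing, not values taken from a specific \textsc{Mesa} track.

```python
import math

# Evaluate the wind-feedback formulas for one illustrative O-star snapshot.
G, c = 6.674e-8, 2.998e10                  # gravitational constant, speed of light (cgs)
Msun, Rsun, Lsun, yr = 1.989e33, 6.957e10, 3.828e33, 3.156e7

M, R, L = 30 * Msun, 10 * Rsun, 1e5 * Lsun
X, Z = 0.70, 0.02                          # surface hydrogen fraction, metallicity
Mdot = 1e-6 * Msun / yr                    # mass-loss rate in g/s
d = 2.6                                    # terminal-to-escape velocity ratio (OB stars)
v_rot, v_crit = 0.0, 1.0                   # non-rotating model: f_rot = 1

Gamma_es = 0.2 * (1 + X) * L / (4 * math.pi * c * G * M)   # Eddington factor
f_rot = math.sqrt(1 - (v_rot / v_crit) ** 2)
v_inf = d * math.sqrt(2 * G * M / R * (1 - Gamma_es)) * f_rot * (Z / 0.02) ** 0.13
E_dot = 0.5 * Mdot * v_inf ** 2            # kinetic-energy injection rate, erg/s

print(f"v_inf = {v_inf / 1e5:.0f} km/s, E_dot = {E_dot:.2e} erg/s")
```

For these inputs the terminal velocity comes out in the few-thousand \si{\kilo\meter\per\second} range quoted above for OB stars, and integrating the energy rate over a few Myr gives a total within an order of magnitude of the $\sim$\SI{e51}{erg} reached by the most massive models in Fig.~\ref{fig:Methods_Single_Initialmass_Energy}.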
Fig.~\ref{fig:Methods_Single_Initialmass_Energy} shows the kinetic energy $E_\mathrm{k}$ of the stellar wind integrated over the lifetime of the stellar models as a function of the initial mass $M_\mathrm{ini}$ for all the simulated rotation velocities and for $Z=0.004$. As expected, $E_\mathrm{k}$ increases with $M_\mathrm{ini}$. Stellar models with $M_\mathrm{ini}\approx\;$\M{100} eject around \SI{e51}{erg}, nearly three orders of magnitude more than stars with $M_\mathrm{ini}\approx\;$\M{8}. Stellar rotation leads to higher values of $E_\mathrm{k}$. For instance, all the models with $\upsilon_\mathrm{ini}=\;$\SI{600}{\kilo\meter\per\second} and $M_\mathrm{ini}>\;$\M{40} eject more than \SI{e51}{erg} in stellar winds. In Fig.~\ref{fig:Methods_Single_Initialmass_Energy}, we use colours to highlight the relative contributions of different evolutionary stages\footnote{\label{footdef}We differentiate between the evolutionary phases based on the position of the models in the Hertzsprung–Russell diagram (HRD) and the optical depth of their winds. We identify stellar models in the upper right corner of the HRD as BSG/LBV stars and cool (i.e. with effective temperature $T_{\rm eff} \leq 10^4\,$K), lower luminosity stellar models as RSG. For the other types we separate according to their position relative to the zero-age main sequence (ZAMS). Stellar models hotter than the ZAMS are either classified as helium stars or cWR stars, where cWR stars have optically thick winds (i.e. $\tau > 2/3$). Stellar models cooler than the ZAMS are either OB or WNL stars, where WNL stars have optically thick winds. } to $E_\mathrm{k}$, namely OB dwarfs (OB), red supergiants (RSG), blue supergiant/luminous blue variables (BSG/LBV), WNL stars (WNL), helium stars (He-star) and cWR stars. The corresponding discussion of the mass yields is provided in appendix~\ref{sec:Appendix_massyield}. 
For non-rotating models with $M_\mathrm{ini}\lesssim\;$\M{40}, a large fraction of the wind energy is ejected during the main-sequence, when the stellar model would appear spectroscopically as an OB dwarf, with terminal wind velocities of the order of \SI{1000}{\kilo\meter\per\second} and mass-loss rates in the range \SIrange{e-8}{e-5}{\Msun\per\year} \citep{vink2021r}. Stars in this mass range evolve past their main-sequence into red giants and supergiants. However, despite the relatively large mass-loss rates, the contribution from the post-main-sequence evolution accounts for only up to $\sim 10$ per cent of the total, due to the much slower winds of this phase \citep[of about \SI{10}{\kilo\meter\per\second},][]{deJager1988,Smith2014}. On the other hand, stellar models with $M_\mathrm{ini}>\;$\M{40} lose their H-rich envelope and evolve into cWR stars displaying dense and optically thick winds, with mass-loss rates of $\approx\;$\SIrange{e-5}{e-4}{\Msun\per\year} and terminal velocities of $\approx\;$\SI{2000}{\kilo\meter\per\second} \citep{Nugis2000,Crowther2007,Smith2014}. Although the WR phase lasts for $\approx 10$ per cent of the stellar lifetime for models with $M_\mathrm{ini}\approx\M{60}$, it accounts for about half of $E_\mathrm{k}$. At even higher masses ($M_\mathrm{ini}\approx \M{100}$), the WR phenomenon already occurs during the main-sequence evolution, i.e. the stellar models correspond to WNL (or WNH) stars \citep{Crowther2007,Smith2014}. The relative importance of the WR stage (starting $\approx \SI{e6}{\year}$ after zero age) grows for $M_\mathrm{ini}\gtrsim\M{40}$, while the blue supergiants (including a potential LBV phase) only contribute a few per cent of the total mechanical yield. Rotation makes stellar cores larger while reducing the extent of the H-rich envelopes, thus producing more massive post-main-sequence helium/WR stars.
In consequence, both the WR contribution and $E_\mathrm{k}$ increase at fixed initial mass (especially for $M_\mathrm{ini}\approx\;$\M{40}-\M{50}). Mass-loss rates are also enhanced, although with lower terminal velocities. \subsection{Wind feedback from binary stars} \label{sec:wind_feedback_binary} \begin{figure*} \centering \includegraphics[width=1\textwidth]{collection_galaxy/Stellar/Energy_wind_0.0040_b.pdf} \caption{As in Fig.~\ref{fig:Methods_Single_Initialmass_Energy} but for binary systems with a fixed mass ratio of 0.6 and different periods (indicated by the labels). } \label{fig:Methods_Binary_Initialmass_Energy} \end{figure*} It is well known that most stars form in binary systems \citep[e.g.][]{Sana2012} and this dramatically affects their evolutionary path \citep{Podsiadlowski1992,VanBever2000,deMink2013,Mandel2016,Marchant2016,Langer2020}. However, many aspects, ranging from the way stars in binaries interact \citep[including common-envelope phases,][]{Paczynski1976proceeding,Ivanova2013} to the intrinsic fraction of binary systems at different metallicities, are still poorly understood. Nonetheless, we attempt to investigate the impact of binaries on the expected mechanical and chemical wind feedback of stellar populations. We compute binary stellar models with different metallicities (\numlist{0.0001;0.0004;0.0008;0.004;0.008;0.02}). For simplicity, we assume a fixed binary mass ratio of \num{0.6} and cover the same initial-mass range for the primary star as we did for single stars, although this time we use steps of $\log(M/\si{\Msun})=0.2$. We consider four different values for the orbital period, namely $P= 10^{0.6}, 10^{1}, 10^{2}$ and $10^3$ days. Finally, we take into account the fact that, in binary systems, some of the kinetic energy of the stellar winds is dissipated due to the interaction between the winds generated by the primary and secondary stars \citep{Usov1991,Stevens1992}.
We base our estimate on the momentum balance between the stellar winds by adopting the reduction factor \begin{align} f\id{ww}= \begin{cases} 1-\frac{\pi^2}{16}\sqrt{\frac{\dot{M}\id{s} \upsilon\id{inf,s}}{\dot{M}\id{p} \upsilon\id{inf,p}}}\quad &\text{for } \dot{M}\id{p} \upsilon\id{inf,p} > \dot{M}\id{s} \upsilon\id{inf,s},\\ 1-\frac{\pi^2}{16}\sqrt{\frac{\dot{M}\id{p} \upsilon\id{inf,p}}{\dot{M}\id{s} \upsilon\id{inf,s}}}\quad &\text{otherwise}, \end{cases} \label{eq:Stellar_energy_binary_damp} \end{align} \citep{DeBecker2013} where the indices p and s refer to the primary and secondary star, respectively. The maximum attenuation ($f\id{ww}=0.38$) is obtained when the two stars produce winds with equal momenta. Once the primary star reaches core-helium exhaustion, we follow the remaining evolution of the secondary as if it were a single star. In small corners of the investigated parameter space, our evolutionary models suggest that a merger of the companions should take place, and the calculation is inevitably stopped as, presently, no stellar-evolution code is able to model such merger events \citep[see however][]{Glebbeek2013,Schneider2016}. In these cases, we interpolate through our grid of models. In Fig.~\ref{fig:Methods_Binary_Initialmass_Energy}, we show $E_\mathrm{k}$ as a function of $M_\mathrm{ini}$ for the primary and secondary models. Barring the most massive stars, both the primaries and the secondaries return more kinetic energy via their stellar winds compared to the single non-rotating case. The largest contribution to $E_\mathrm{k}$ comes from helium and WR stars. In fact, all the binary models experience mass-transfer and the primaries lose their outer H-rich envelope. They thus achieve effective temperatures of about \SIrange{5e4}{e5}{K}, becoming either stripped He-stars with optically thin winds or cWR stars, with much larger terminal wind velocities than late-type stellar models.
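Both branches of equation~(\ref{eq:Stellar_energy_binary_damp}) collapse to a single expression by taking the ratio of the weaker to the stronger wind momentum; a minimal sketch:

```python
import math

def wind_wind_reduction(mdot_p, vinf_p, mdot_s, vinf_s):
    """Kinetic-energy reduction factor f_ww from the momentum balance
    of the two winds; mdot and vinf in any consistent units."""
    mom_p = mdot_p * vinf_p                 # primary wind momentum rate
    mom_s = mdot_s * vinf_s                 # secondary wind momentum rate
    # ratio of the weaker to the stronger wind momentum (covers both cases)
    eta = min(mom_p, mom_s) / max(mom_p, mom_s)
    return 1.0 - (math.pi ** 2 / 16.0) * math.sqrt(eta)
```

For winds of equal momenta the factor reaches its minimum, $1-\pi^2/16\approx 0.38$, matching the maximum attenuation quoted above.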
The mass-gaining secondary models also show larger $E_\mathrm{k}$ compared to single star models of the same mass, particularly for the lowest considered initial masses and the shortest periods (i.e. those undergoing mass-transfer during the main-sequence phase). The reason is twofold: mass accretion onto the secondary models leads to higher masses and luminosities compared to the single star case, and a significant fraction of the mass stripped from the primary component of the binary systems is not accreted and leaves the system as a wind from the secondary model during its OB phase (given that mass-transfer takes place almost always during the main-sequence phase of the secondary stellar models), contributing to the kinetic stellar feedback (see also appendix \ref{sec:Appendix_massyield}). \subsection{Feedback from a stellar population} \label{sec:fdbk_stel-pop} In order to model the energy injection into the ISM by a simple coeval population of single stars, we use the parameterisation for the initial mass function (IMF) introduced by \citet{Kroupa2001}. We consider stars within the mass range $10^{-1}\leq M \leq 10^{2.2}$ M$_\odot$ and account for the SN explosion and stellar winds of the models with $M>10^{0.9}$ M$_\odot$, but we assume that stars with $M>40$ M$_\odot$ collapse directly to a black hole at the end of their lifetime without releasing energy into the ISM. For consistency with previous work, we associate an energy of \SI{e51}{erg} with every SN explosion \citep[e.g. ][]{Scannapieco2012, Agertz2013, Crain2015}, although this simplification does not account for the variety of SNe seen in nature. We make sure that this energy is released into the ISM when a massive star reaches the end of its lifetime (i.e. the injection time is different for each stellar model). We estimate the ejected mass by computing the difference between the final mass of the stellar model and the average remnant mass derived in \citet{Sukhbold2016}. We consider three cases.
In the first one, we only use single non-rotating stellar models while, in the second one, we combine single rotating models according to an empirically derived distribution of initial rotation velocities which depends on metallicity (see appendix~\ref{sec:Appendix_rotation}). Finally, our third option also accounts for binary stars. In this case, we assume that 70 per cent of the stellar mass is in binary systems and that the remaining 30 per cent is contributed by single stars. For the binaries, we apply the Kroupa IMF to the mean mass in each system and assume a flat orbital-period distribution in log-space (between $\log (P/\mathrm{day})=0.3$ and 3.3) across all considered metallicities. This is consistent with observations in both the Galaxy and the LMC \citep[][]{Kobulnicky2014,Almeida2017}. Fig.~\ref{fig:Methods_Time_Energy} shows the time dependence of the cumulative kinetic energy ejected by one solar mass of coeval stellar evolutionary models (with $Z=0.004$) in the form of winds (see appendix~\ref{sec:Appendix_massyield} for the corresponding discussion of the mass yields). The top panel refers to single rotating stellar models while binary systems are considered in the bottom one. In the first \SI{2.5}{\Myr}, a few $\times\;$\SI{e47}{\erg\per\Msun} are ejected by stellar models in the OB stage. Subsequently, the most massive stellar models reach the WR phase, leading to a steep increase in the total energy ejected. Overall, the cWR stage contributes nearly 60 per cent of the total ejected energy (although only the most massive stars experience it). For single stars (binaries), 90 (65) per cent of the wind energy is ejected before the onset of the first SN explosion, which takes place when the stellar population has an age of \SI{5}{\Myr}. It is worth mentioning that the binary systems show more prominent contributions from the post-main-sequence helium star models. 
For instance, stripped helium stars originating from binary interactions start playing a relevant role after \SI{10}{\Myr} and provide roughly 10 per cent of the total energy. \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/Stellar/Energy_0.0040_attached.pdf} \caption{Cumulative kinetic energy ejected in the form of winds by a coeval simple stellar population with metallicity $Z=0.004$. Plotted is the energy per unit stellar mass as a function of the population age. The top and bottom panels refer to populations of single and binary stars, respectively. Colour shading is as in Fig.~\ref{fig:Methods_Single_Initialmass_Energy}. } \label{fig:Methods_Time_Energy} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/Stellar/all_final.pdf} \caption{Mechanical and chemical yields of winds emitted by different stellar populations (see labels) within the first \SI{30}{\Myr} as a function of absolute metallicity. From top to bottom, we show the (time-integrated) ejected energy per unit stellar mass, the ejected mass fraction and the corresponding metal fraction. For comparison, the corresponding yields of SNe are shown with dashed lines (assuming the metal yield is 10 per cent). Also shown are the yields of three different tracks from \textsc{Starburst99}: Geneva standard \citep[SB99 G,][]{Schaller1992}, v00 \citep[SB99 Gv0,][tracks with $\upsilon_\mathrm{ini}=0$]{Ekstrom2012} and v40 \citep[SB99 Gv40,][tracks with $\upsilon_\mathrm{ini}=0.4\upsilon\id{crit}$] {Ekstrom2012}. } \label{fig:Methods_final} \end{figure} So far we have only discussed the stellar models with $Z=0.004$. In Fig.~\ref{fig:Methods_final}, we present the metallicity dependence of the kinetic energy ($E_\mathrm{p}$, top panel), mass ($M_\mathrm{p}$, middle panel), and metals ($M_{\mathrm{Z, p}}$, bottom panel) ejected from a coeval population per unit stellar mass. 
Single non-rotating stars show the strongest metallicity dependence of $E_\mathrm{p}$ and $M_\mathrm{Z, p}$ while the inclusion of rotating and binary stars leads to a remarkably shallower relation. The reason is twofold: (i) the probability density function of the initial rotational velocity peaks at higher velocities with decreasing $Z$ (thus driving larger rates of mass-loss by stellar winds) and (ii) binary interaction leads to the formation of stripped helium and WR stars also when envelope self-stripping by stellar winds is not sufficient, i.e. at low $Z$. Note that, at low metallicity, single non-rotating stars eject nearly one order of magnitude less energy and three orders of magnitude less metals than a more realistic population composed of a mix of rotating and binary stars. This difference could play an important role in modelling the early phases of galaxy formation. Regarding the integrated mass loss quantified by $M_\mathrm{p}$, we note that all model populations show a similar metallicity dependence, although with systematically higher mass yields from the rotating and binary models, by approximately a factor of two. We compare our results for a coeval stellar population including a combination of binary systems and rotating single stars with the results from \textsc{Starburst99}, which are derived adopting the Geneva evolutionary tracks for single stars. We use the same IMF, but the maximum initial mass in the Geneva set (\M{120}) is lower than in our set. For the non-rotating single stars, the ejected energy per unit stellar mass we derive is consistent with that from \textsc{Starburst99}, with an approximate metallicity dependence of $E\id{p} \propto Z$. On the other hand, the mix of binary systems and rotating single-star models shows a shallower relation with $E\id{p} \propto Z^{0.65}$. \subsection{Discussion and a caveat} \label{sec:fdbk_discussion} Stars inject kinetic energy and mass into the surrounding medium during their whole lifetime.
However, a fraction of this energy is likely dissipated in the nearby circumstellar regions \citep[e.g.,][]{GarciaSeguraMacLow1996,GarciaSeguraLanger1996} due to the variable wind velocity over the evolution of a single star \citep{Vink2001}. For instance, the slow and dense RSG or LBV outflows swept by the faster wind during the following WR phase can lead to a significant fraction of the kinetic energy being lost at scales smaller than the minimum resolution of our simulations of galaxy formation ($\sim \SI{40}{pc}$, see Section~\ref{sec:numerical_methods}). Moreover, in a stellar cluster, some energy will also be dissipated in the shocks forming between colliding stellar winds and between the winds and the clumpy ISM. As mentioned in the introduction, different authors reach opposite conclusions regarding the extent of the dissipation \citep{Rey-Raposo2017, Lancaster2021, Rosen2021}. In the absence of a consensus, in this work, we inject the whole energy released by the winds and supernovae into the ISM and solve the equations of fluid dynamics to determine gas flows on scales of tens of pc. In this sense, our study quantifies the maximum effect that could possibly be driven by stellar winds. It is also important to stress that, despite the large observational and theoretical efforts, mass-loss prescriptions in stellar models are still uncertain\footnote{This is mostly due to the presence of inhomogeneities, called clumping, that affect the line diagnostics \citep{Moffat1988,Puls2008,Sundqvist2013}.} by about a factor of three at Galactic metallicity \citep[][and references therein]{Smith2014,vink2021r}. Uncertainties are even more severe at lower metallicities, where the lack of observational constraints and the low abundance of the metals that are responsible for the radiatively driven stellar winds do not allow for fully reliable estimates.
This is particularly the case for the cWR stars which, as shown in Fig.~\ref{fig:Methods_Single_Initialmass_Energy}, produce a large amount of mechanical feedback. These uncertainties present a major challenge in determining the energy and momentum budget that a stellar population injects into the ISM in the form of winds. \section{Simulating stellar feedback} \label{sec:sim_fdbk_over} \subsection{The state of the art} Different methods have been used to include SN feedback in numerical hydrodynamic simulations of galaxy formation. In cosmological runs that cover large volumes and achieve a spatial resolution of $\simeq 0.2-1$ kpc (insufficient to reveal individual SNRs and the complex multi-phase structure of the ISM), SN feedback was originally implemented as a single injection of thermal energy and mass from a simple stellar population. It turns out, however, that the energy is deposited in too large a volume or mass. Consequently, the bulk of the injected energy is immediately radiated away (without having much mechanical impact on the ISM) due to the high densities of the star-forming regions \citep[e.g.][]{Katz1992}. In order to prevent this `overcooling problem', some ad hoc shortcuts have been adopted. One possibility is to artificially prevent the heated gas from cooling for a time comparable with the local dynamical time scale ($\sim30$ Myr) so as to convert part of the deposited energy into actual gas kinetic energy and mimic the production of hot bubbles (possibly driving outflows) generated by the combination of blast waves from multiple SN explosions \citep{Gerritsen1997PhDT, GerritsenIcke1997, Thacker2000, SommerLarsen2003, Keres2005, Stinson2006, Governato2007}. The rationale behind this method is that the volume-filling cavities of low-density gas in the multi-phase ISM are not resolved and thus radiative losses are overestimated in the simulations.
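The bookkeeping behind such delayed-cooling schemes reduces to a single per-cell test; the sketch below is purely illustrative (hypothetical cell records, not the implementation of any of the cited codes):

```python
def cooling_allowed(t_now, t_inj, t_delay=30.0):
    """True if radiative cooling is enabled for a gas element at time
    t_now [Myr]; t_inj is the time at which the element last received
    feedback energy (None if it never did). Cooling stays disabled for
    t_delay [Myr] after each injection."""
    return t_inj is None or (t_now - t_inj) >= t_delay
```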
Alternatively, one can implement a `kinetic feedback' scheme in which the gas elements are imparted some outwardly directed momentum (and are decoupled from hydrodynamic forces until they leave the galaxy) to simulate the launching of a galactic wind \citep[e.g.][]{NavarroWhite1993,MihosHernquist1994, SpringelHernquist2003, Oppenheimer2010, Vogelsberger2013, Dave2017, Valentini2017, Pillepich2018}. This can be implemented in many different ways and generally requires the introduction of free parameters. \citet[][see also \citealt{Mori1997}, \citealt{Gnedin1998}]{Dubois2008}, for instance, impose around each SN event a spherical Sedov blast-wave profile for density, momentum and total energy, with a radius equal to a fixed physical scale resolved with a few computational elements. This radius should approximately match the size of super-bubbles blown by multiple clustered SN explosions (hundreds of pc) and is obviously much larger than individual SNRs. Note that this radius also sets the injection scale for turbulence in the simulated ISM. Kinetic feedback schemes, however, do not properly account for the thermal state of the ISM as they neglect the hot phase produced by SNe. Approximate sub-grid models are thus used to determine the `effective pressure' of the multi-phase ISM as a function of the coarse-grained density of the gas in the simulations \citep{Yepes1997, HultmanPharasyn1999, SpringelHernquist2003, SchayeDallaVecchia2008, Murante2010}. Basically, the use of an effective equation of state prevents the dense gas from cooling down to arbitrarily small temperatures and artificially fragmenting. 
On the opposite extreme, non-cosmological simulations of either an idealised ISM or individual molecular clouds and parts of disc galaxies have investigated the impact of individual SNe on their surroundings with sub-pc spatial resolution \citep[e.g.][]{Thornton1998, Creasey2013, Gatto2015, IffrigHennebelle2015, WalchNaab2015, Walch2015, KimOstriker2015, Girichidis2016, Simpson2016, KimOstriker2017, Smith2021, Hirai2021}. The emerging picture is that most of the energy injected by a SN into the ISM is rapidly thermalised and radiated away. However, a fraction ranging from a few to ten per cent is later found as kinetic energy of the ambient medium, with some dependence on the local density and the time at which the retained energy is estimated. In the last decade or so, it has become computationally feasible to simulate individual galaxies with a spatial resolution of a few tens of pc which partially reveals the turbulent and multi-phase structure of the ISM. At this resolution, the largest sites of star formation (where giant molecular clouds come into existence) can be localised in the simulations although their internal structure cannot be probed. \citet{CeverinoKlypin2009} showed that, with a spatial resolution of $\sim 50$ pc, it is possible to drive galactic winds without artificially delaying gas cooling after injecting SN energy. This, however, requires that some SNe explode outside of the dense regions in which they formed due to OB-runaway stars. Several studies devised optimal strategies for simulating SNRs and dealing with the over-cooling problem \citep[][but see also \citealt{Hopkins2014} and \citealt{KimmCen2014}]{KimOstriker2015, Martizzi2015}. If the radius of the shell forming at the end of the Sedov-Taylor stage (which is sometimes called the cooling radius) is resolved with a sufficient number of elements ($\sim 10$), then one should inject $10^{51}$ erg of thermal energy and let the hydrodynamic solver track the buildup of momentum.
Otherwise, if the ambient density is too high for the achieved resolution (and thus the shell radius too small), one should directly inject the full momentum generated during the Sedov-Taylor phase. The latter approximation fails to catch the impact of the hot gas (and, possibly, underestimates galactic winds) but anyway drives turbulence in the warm and cold phases of the ISM and allows for self-regulated star formation. \subsection{Our implementation} We adopt two different numerical schemes to simulate SN feedback. In the first one (dubbed T, short for `thermal feedback'), the mass, metals and energy ejected by SNe are deposited at the location of stellar particles with age $t\id{SN}$. The SN energy is used to alter the thermal budget of the surrounding ISM and gas cooling is switched off as in the standard \textsc{RAMSES} implementation with a characteristic timescale of 20 Myr. In the second scheme (dubbed M, short for `mixed thermal-kinetic feedback'), the ejecta are distributed within a sphere of radius $r\id{Sed} = 150$ pc and the SN energy is partitioned between the thermal energy of the gas (corresponding to 70 per cent of the total) and a kinetic term accounting for the bulk motion of the ISM within $r\id{Sed}$ \citep{Dubois2008}. Mass, momentum and kinetic energy are distributed within $r\id{Sed}$ as in a spherically-symmetric Sedov blast wave. We assume a unitary mass-loading factor, i.e. that the gas mass entrained by the SN explosion is equal to that of the stellar particle. Delayed cooling is implemented as in the T model. We use the blast-wave model also to describe the combined effect of stellar winds and SNe. In this case, however, mass, energy and momentum are continuously injected into the ISM by young stellar particles. The corresponding rates are obtained by interpolating the tables we derived in Section~\ref{sec:wind_feedback}.
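The global energy budget of the M scheme can be sketched as follows (illustrative only; in the actual scheme mass, momentum and energy are distributed within $r\id{Sed}$ according to the Sedov profile):

```python
import math

def partition_sn_energy(e_tot, f_th=0.7):
    """Split the injected energy of the M scheme into a thermal part
    (70 per cent of the total by default) and a kinetic part."""
    return f_th * e_tot, (1.0 - f_th) * e_tot

def bulk_speed(e_kin, m_entrained):
    """Bulk velocity of the entrained gas given its kinetic energy;
    with a unitary mass-loading factor, m_entrained equals the mass
    of the stellar particle (cgs units)."""
    return math.sqrt(2.0 * e_kin / m_entrained)
```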
\section{Numerical methods} \label{sec:numerical_methods} As a first sample application of our stellar models to the field of galaxy formation and evolution, we investigate the impact of stellar winds on the structure of a high-redshift galaxy. To this end, we use the adaptive-mesh-refinement (AMR) code \textsc{Ramses} \citep{Teyssier2002} to perform several cosmological zoom-in simulations of a sub-$L_*$ galaxy at $z=3$. Each simulation starts from the same initial conditions but adopts different feedback models. \subsection{Initial conditions and refinement strategy} \label{sec:ics} We consider a flat $\Lambda$CDM cosmological model \citep{Planck2020} and simulate the formation of a galaxy within a cubic periodic box of comoving side $L_\mathrm{box}= 6 ~h^{-1}\,\mathrm{Mpc}$. Initial conditions (IC) are generated at $z=99$ with the \textsc{Music} code \citep{Music2011}. We set up zoom-in simulations following a multi-step procedure. We first generate IC with a uniform resolution using $2^{l\id{ini}}$ elements along each spatial dimension (with $l\id{ini}=7$) and run a DM-only simulation until $z=3$. After identifying haloes with the \textsc{Amiga Halo Finder} code \citep[\textsc{ahf},][]{Gill2004,Knollmann2009}, we pick a halo with a virial mass of $M\id{vir} \sim 10^{11} M_{\sun}$ and a relatively quiescent late mass-accretion history (the virial radius $R\id{vir}$ encloses a mean density of $200\,\rho\id{crit}$, with $\rho\id{crit}$ being the critical density for closure of the Universe). At this point, we use the zoom-in technique to re-simulate the selected halo at higher spatial resolution. In brief, all the simulation particles found within $3R\id{vir}$ from the halo centre at $z=3$ are traced back to the IC and the corresponding volume is resampled at a higher resolution. The final configuration includes several nested levels with different spatial resolutions. 
We investigate discreteness effects and numerical convergence (see Sec.~\ref{sec:results_globalproperties} and \ref{sec:comp_vs_obs.}) by generating ICs with two different maximum levels of refinement ($l\id{ini}=9$ and 10) corresponding to different mass resolutions for the dark-matter component ($m_\mathrm{DM}=1.8 \times 10^5$ and $2.2 \times 10^4$ M$_\odot$, respectively). In the remainder of this paper, we refer to the simulations obtained in the two cases as `low-' and `high-resolution'. During run time, the AMR technique refines (and coarsens) the grid structure used to solve the fluid equations for the gas and the Poisson equation for the gravitational potential based on pre-determined criteria. We adopt a quasi-Lagrangian scheme which is uniquely based on mass density. A cell is split if it contains more than eight dark-matter particles or a total baryonic mass (stars and gas) corresponding to the same density contrast. In order to prevent runaway refinement early on, triggering a new level is only allowed after the simulation has reached a specific cosmic time so as to match the refinement strategy of the corresponding DM-only case (which we always run first). This strategy makes sure that the resolution of the grid in physical units remains nearly constant (in a stepwise sense). For all simulations, the AMR algorithm adds six levels of refinement until $z=3$, corresponding to a nominal spatial resolution of $68$ and $34$ physical pc for the low- and high-resolution simulations, respectively. In order to facilitate the interpretation of the results, we save snapshots every 5 Myr for the simulations with $l\id{ini}=10$ (10 Myr for those with $l\id{ini}=9$). \subsection{Gas physics and star formation} \label{sec:sf} For the gas, we assume an equation of state with polytropic index $\gamma =5/3$.
In order to avoid spurious fragmentation, we artificially add thermal pressure where needed in order to resolve the Jeans length with four grid elements or more \citep{Truelove1997,Teyssier+10}. The gas is ionised and heated up by the default time-dependent uniform cosmic UV background in \textsc{Ramses}, which is exponentially suppressed where the physical number density of gas particles, $n$, exceeds $0.01\, \mathrm{cm}^{-3}$ to approximate self-shielding \citep[e.g.][]{Tajiri-Umemura1998}. Our runs include gas cooling from H, He and metals. The conversion of `cold' and dense gas into stars is modelled with a Poisson process in which the average follows the relation \citep{Schmidt1959} \begin{equation} \dot{\rho}_\star=\frac{\epsilon_\star}{t\id{ff}}\,\rho\id{gas}\,, \ \ \ \ \text{if}\ T<\SI{2e4}{\kelvin}\ \ \text{and} \ \ n>n_\star\,, \label{eq:Theory_sf} \end{equation} which relates the star-formation-rate (SFR) density $\dot{\rho}_\star$ and the gas density $\rho\id{gas}$ in terms of the star-formation efficiency $\epsilon_\star= 0.01$ and the free-fall time of the gas $t\id{ff}=\sqrt{3\pi /(32G\rho\id{gas})}$ \citep[e.g.][]{Agertz2011,Perret2014,Kretschmer2020}. We set the density threshold for star formation to $n_\star=10\,\mathrm{cm}^{-3}$ and $20\,\mathrm{cm}^{-3}$ in the low- and high-resolution simulations, respectively.
\begin{table} \caption{Main properties of the simulation suite.} \label{tab:Method_Simulations} \centering \begin{tabular}{ccccccc} \hline Name & Feedback & $t\id{SN}$ & Winds& $l\id{ini}/l\id{max}$& $\Delta x$ & $n_\star$ \\ & & Myr & & & \si{\pc}& \si{\pcm}\\ \hline S9T & Thermal& 10 & No &\num{9}/\num{15} & \num{68} & \num{10} \\ S9M & Mixed& 10 & No &\num{9}/\num{15} & \num{68} & \num{10} \\ E9S & Mixed& Variable & No &\num{9}/\num{15} & \num{68}& \num{10} \\ E9O & Mixed& Variable& Non-rot &\num{9}/\num{15} & \num{68} & \num{10} \\ E9R & Mixed& Variable & Rot&\num{9}/\num{15} & \num{68}& \num{10} \\ E9B & Mixed& Variable & Rot+Bin& \num{9}/\num{15} & \num{68}& \num{10} \\ \hline S10M & Mixed& 10 & No &\num{10}/\num{16}&\num{34}& \num{20} \\ E10R & Mixed& Variable & Rot &\num{10}/\num{16} & \num{34}& \num{20} \\ E10B & Mixed& Variable& Rot+Bin&\num{10}/\num{16} & \num{34}& \num{20} \\ \hline \end{tabular} \end{table} \subsection{Stellar feedback} \label{sec:sfdk} We consider two sets of simulations. Namely, those adopting a standard SN feedback model (name starting with S) and those based on our stellar tracks (name starting with E, short for early feedback). Each stellar particle in the simulations represents a coeval stellar population. In the standard feedback model, we consider type II SNe originating from these populations. A single occurrence takes place $t\id{SN}= 10$ Myr \citep[]{Agertz2011, Ocvirk2020} after the birth of the stellar particle. The total mass and energy ejected by SNe from a stellar population with mass $M\id{pop}$ are $M\id{ej}=0.1\,M\id{pop}$ and $E\id{ej}=(M\id{ej}/\SI{10}{M_\odot})\, \SI{e51}{erg}$ \citep{Dubois2008,Agertz2011,Ocvirk2020} which is equivalent to assuming that every SN event injects (on average) $10^{51}$ erg and \SI{10}{M_\odot} into the ISM. The corresponding metal yield is 10 per cent \citep[see eq. 4 in][]{Perret2014}.
In the E-simulations, instead, SN events do not all take place 10 Myr after the birth of a stellar particle. On the contrary, mass, metals and energy are injected into the ISM at all times reflecting the lifetime of the stellar models presented in Section~\ref{sec:wind_feedback} and the initial mass function by \citet{Kroupa2001}. In this case, the masses ejected by SNe are obtained from the stellar tracks (see Section~\ref{sec:fdbk_stel-pop}) while we still assume that each SN event releases $10^{51}$ erg and that the metal yield is 10 per cent. In addition, the contribution generated by stellar winds is also taken into account. A list of all simulations and their distinguishing features is presented in Table~\ref{tab:Method_Simulations}. The naming convention is as follows: as previously mentioned, the first letter (S or E) indicates whether a standard SN-only model or the more-sophisticated early-feedback scheme is used. This is followed by a number which gives the maximum level of refinement in the IC (9 or 10). Finally, the last letter gives further details about the adopted feedback model. For the S simulations, T and M stand for thermal feedback and mixed thermal-kinetic feedback, respectively. On the other hand, for the E simulations (which all use the blast-wave model) there are four possibilities. The letter S indicates that only SN feedback is considered while the letters O, R and B refer to SNe and winds from non-rotating single stars, rotating single stars, and a mix of rotating single stars and binaries, respectively. \section{Results} \label{sec:results} \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/E10B/Environment_big_xy.png} \includegraphics[width=1\columnwidth]{collection_galaxy/E10B/Environment_big_color.png} \caption{Projected map of the gas distribution in the E10B run at $z=3$. Each pixel is colour coded according to the maximum density along the line of sight within a cube of side length 400 kpc.
In the bottom zoomed-in panel, the halo and the galaxy radii are indicated with a grey and a black circle, respectively. } \label{fig:Results_Images_Evironment} \end{figure} We now present the results of the simulations. In what follows, we assume a solar metallicity of 2 per cent by mass (i.e. $\si{\Zsun}=0.02$) and all distances are given in physical units unless explicitly stated otherwise. The virial mass and radius of the re-simulated DM halo at $z=3$ are nearly the same in all simulations, namely $M\id{vir}=1.8 \times 10^{11}$ M$_\odot$ and $R\id{vir}=43$ kpc. We conventionally define the central galaxy as the collection of gas and stars enclosed within a radius of $R\id{gal}=0.1\,R\id{vir}$ and located at the centre of the DM halo \citep[e.g.][]{Scannapieco2012}. In order to give a visual impression of the environment surrounding the galaxy, in Fig.~\ref{fig:Results_Images_Evironment}, we show a projected map of the gas density for the run E10B at $z=3$. The top panel has a side length of 400 kpc and illustrates the area of the intergalactic medium in which the DM halo resides while the bottom panel zooms in on the central region (here, $R\id{vir}$ and $R\id{gal}$ are highlighted with a grey and a black circle, respectively). The intricate habitat of the galaxy within $R\id{vir}$ is characterised by a nearly planar gas distribution veined with dense filaments feeding the central object and its multiple companions (which, in all probability, will eventually merge with the main galaxy). \begin{table} \caption{Properties of the simulated galaxies at $z=3$.
} \label{tab:gal_properties} \centering \begin{tabular}{cccccc} \hline Name & $M_\star$ & $M\id{gas}$ & SFR & $Z\id{star}$ & $Z\id{gas}$ \\ & \num{e9} \si{\Msun} & \num{e10} \si{\Msun} & \si{\Msun\per\year} &$\si{\Zsun}$ & $\si{\Zsun}$ \\ \hline S9T & \num{5.84} & \num{0.93} & \num{9.34} & \num{0.169} & \num{0.248}\\ S9M & \num{2.40} & \num{1.37} & \num{4.02} & \num{0.058} & \num{0.092}\\ E9S & \num{1.90} & \num{1.41} & \num{3.17} & \num{0.047} & \num{0.065} \\ E9O & \num{0.96} & \num{1.47} & \num{1.38} & \num{0.030} & \num{0.043}\\ E9R & \num{0.97} & \num{1.49} & \num{1.32} & \num{0.033} & \num{0.046}\\ E9B & \num{0.97} & \num{1.47} & \num{1.31} & \num{0.022} & \num{0.031}\\ \hline S10M & \num{3.80} & \num{1.25} & \num{5.11} & \num{0.087} & \num{0.142}\\ E10R & \num{1.09} & \num{1.59} & \num{1.48} & \num{0.034} & \num{0.049}\\ E10B & \num{1.16} & \num{1.55} & \num{1.49} & \num{0.024} & \num{0.035}\\ \hline \end{tabular} \end{table} \begin{figure*} \includegraphics[height=0.15\textheight]{collection_galaxy/Galaxy_Images/219_den_xyxz_5_nearest_0.07_all_scale.png} \includegraphics[height=0.15\textheight]{collection_galaxy/Galaxy_Images/219_den_xyxz_5_nearest_0.07_all_scale_col.png} \includegraphics[height=0.15\textheight]{collection_galaxy/Galaxy_Images/00219_PosRot_both_c_den_all.png} \hspace{-0.12cm} \includegraphics[height=0.15\textheight]{collection_galaxy/Galaxy_Images/00219_PosRot_both_c_den_all_col.png} \caption[Gaseous (top panels) and stellar (bottom panels) density components images for the low-resolution runs at $z=3$.] {Face-on and edge-on false-colour images of the gas (top row) and stellar (bottom row) distributions in the low-resolutions simulations at $z=3$. Shown are the maximum gas density along the line of sight and the star volume density at the location of the stellar particles. Note the different length scales used for gas and stars as indicated by the white yardsticks. 
} \label{fig:Results_Images_low} \end{figure*} \subsection{Global properties and morphology} \label{sec:results_globalproperties} The main properties of the simulated galaxies at $z=3$ are listed in Table~\ref{tab:gal_properties}. These include the stellar mass ($M_*$), the gas mass ($M\id{gas}$), the average star-formation rate (SFR) over the past \SI{200}{\Myr} as well as the mass-weighted mean metallicity of the stars ($Z\id{star}$) and of the gas ($Z\id{gas}$), both in solar units. As a first step, we compare the different SN-only runs. Looking at the low-resolution simulations reveals that the thermal feedback scheme (S9T) is rather inefficient as it produces values of $M_*$, $Z\id{star}$ and $Z\id{gas}$ that are nearly three times larger than those obtained with the two schemes based on the blast-wave model (S9M and E9S). On the other hand, considering that massive stars have different lifetimes (E9S) causes a 20 per cent reduction in $M_*$ and $Z\id{star}$ with respect to S9M. Accounting for stellar winds substantially reduces $M_*$, the SFR and the metallicities (all by factors ranging between 2 and 3.5) while it does not leave an imprint on $M\id{gas}$. In fact, it is to be expected that winds make stellar feedback more efficient (i) by injecting non-negligible amounts of energy and momentum into the ISM and (ii) by lowering the gas density around stellar particles before SN explosions take place. However, the different stellar-wind models (O, R, B) appear to generate galaxies with very similar global properties although accounting for rotation and binaries substantially increases the instantaneous energy injection rate. This unexpected behaviour arises from the fact that the winds of single non-rotating stars already provide enough energy early on to raise the gas temperature in the simulations above the threshold at which star formation is inhibited \citep{Stinson2013} -- see equation~(\ref{eq:Theory_sf}). 
Due to this threshold effect, the SFR of the galaxies does not change much when additional energy is injected by rotating and binary stars. Basically, when winds are accounted for, the formation of stellar particles in the spatial and temporal proximity of recently born ones is reduced considerably with respect to the simulations that only consider SN feedback. Another thing worth mentioning is that the O, R and B wind models generate galaxies with similar values of $Z\id{star}$ and $Z\id{gas}$ although the metal yields presented in Fig.~\ref{fig:Methods_final} can differ by orders of magnitude. This stems from the fact that the metal injection is dominated by SNe. Nevertheless, $Z\id{star}$ and $Z\id{gas}$ are nearly 30 per cent lower in the E9B and E10B simulations compared to the corresponding O and R runs. On the one hand, the metallicities of our galaxies at $z=3$ are in very good agreement with those found in other simulations for objects with similar stellar masses and using an early-feedback scheme \citep[e.g.][]{Ma2016}. On the other hand, they appear to be low compared to estimates based on observational data \citep[e.g.][]{Sommariva2012,Arabsalmani2018} which, however, are known to suffer from uncertain calibrations. False-colour images of the simulated galaxies at $z=3$ are displayed in Figs.~\ref{fig:Results_Images_low} and~\ref{fig:Results_Images} for the low- and high-resolution sets, respectively. In all cases, the reference system has been rotated based on the angular momentum of the stars to present the face-on and edge-on projections. The images in the top rows (orange tones) show the maximum gas density along each line of sight while those in the bottom rows (blue tones) show the projected positions of the individual stellar particles (colour coded according to their local density determined with a spherical cubic-spline kernel). Generally, the galaxies assume the shape of oblate ellipsoids. 
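The reduction factors quoted above can be recomputed directly from the stellar masses and metallicities listed in Table~\ref{tab:gal_properties}; a minimal sketch with the low-resolution table values hard-coded:

```python
# Stellar masses (in 1e9 Msun) and stellar metallicities (in Zsun),
# copied from Table tab:gal_properties for the low-resolution runs.
m_star = {"S9T": 5.84, "S9M": 2.40, "E9S": 1.90, "E9O": 0.96, "E9R": 0.97, "E9B": 0.97}
z_star = {"S9T": 0.169, "S9M": 0.058, "E9S": 0.047, "E9O": 0.030, "E9R": 0.033, "E9B": 0.022}

# S9T (thermal feedback) versus the blast-wave runs: roughly a factor of three.
ratio_mass = m_star["S9T"] / m_star["E9S"]       # ~3.1
ratio_met = z_star["S9T"] / z_star["S9M"]        # ~2.9

# E9S (SN-only with stellar lifetimes) versus S9M: ~20 per cent lower M_*.
reduction = 1.0 - m_star["E9S"] / m_star["S9M"]  # ~0.21

# Adding winds (E9O/R/B) lowers M_* by a further factor of ~2 relative to E9S.
wind_factor = m_star["E9S"] / m_star["E9O"]      # ~2.0
```

The same arithmetic applied to the SFR and gas-metallicity columns yields the factors between 2 and 3.5 mentioned in the text.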
The gas distribution is more extended than the stellar one and shows many filamentary features in the outskirts resulting either from interactions with satellite systems or large-scale flows. Most of the SN-only models (S9T, S10M and E9S) present dense gaseous and stellar concentrations at their centres, while those accounting for stellar winds are more uniform and flattened (disc-like). \subsection{Stellar mass vs halo mass} \label{sec:comp_vs_obs.} The stellar mass--halo mass relation (SMHMR) provides a convenient way to test whether our simulations are compatible with observations or not. This semi-empirical relation is obtained by matching DM haloes from N-body simulations to estimates for the stellar mass of galaxies in a catalogue under the assumption that $M_*$ scales monotonically with the halo mass. In Fig.~\ref{fig:Results_SMHM}, we show SMHMRs obtained at $z=3$ by different authors \citep[smooth solid lines,][]{Moster2013,Behroozi2013,Behroozi2019} together with their uncertainty (shaded regions). These estimates consistently show that the conversion of baryons into stars is most efficient within haloes with a mass of $\sim 10^{12}$ M$_\odot$ where $M_*/M\id{vir}$ is of the order of a per cent. In order to compare these results with our simulations we extrapolate the fits in the literature to lower halo masses (dashed lines) for which data are not available. The wiggly lines on the left-hand side of Fig.~\ref{fig:Results_SMHM} show the trajectories of our simulated galaxies in the $M\id{vir}$-$M_*$ plane within the redshift range $9>z\geq 3$. Time runs from left to right and the coloured circles highlight the end point at $z=3$. All models (with the exception of S9T) are very close in the $M\id{vir}$-$M_*$ plane at $z=9$ but they separate more and more with time. A rather sharp decrease in the stellar-to-halo mass ratio is caused at $z \approx 5.2$ by a major-merger event which increases the halo mass. 
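For reference, the stellar-to-halo mass ratios of the high-resolution runs at $z=3$ follow directly from the stellar masses in Table~\ref{tab:gal_properties} and the quoted virial mass; a short sketch:

```python
# Halo virial mass at z=3 (Msun), nearly identical in all runs.
M_vir = 1.8e11
# Stellar masses (Msun) of the high-resolution runs from Table tab:gal_properties.
m_star = {"S10M": 3.80e9, "E10R": 1.09e9, "E10B": 1.16e9}

ratios = {name: m / M_vir for name, m in m_star.items()}
# The SN-only run S10M converts more than 2 per cent of the halo mass into
# stars, while the wind runs stay below one per cent.
```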
By comparing the position of the circles to the fits from the literature, we conclude that the SN-only models are unable to efficiently regulate their SF via feedback and produce too many stars. On the other hand, including winds brings the simulations into rather good agreement with the SMHMR. This holds true for both the low- and high-resolution runs and suggests that our numerical set-up provides converged global properties for the simulated galaxies. Therefore, we will only consider the high-resolution runs in the remainder of this paper. \begin{figure} \centering \includegraphics[height=0.15\textheight]{collection_galaxy/Galaxy_Images/419_den_xyxz_5_nearest_0.03_all_scale.png} \includegraphics[width=1\columnwidth]{collection_galaxy/Galaxy_Images/419_den_xyxz_5_nearest_0.03_all_scale_col.png} \includegraphics[height=0.15\textheight]{collection_galaxy/Galaxy_Images/00419_PosRot_both_c_den_all.png} \includegraphics[width=1\columnwidth]{collection_galaxy/Galaxy_Images/00419_PosRot_both_c_den_all_col.png} \caption[Galactic images for the high-resolution set at $z=3$. The gas (upper rows) and stellar (lower rows) distributions are colour coded by their respective densities] {As in Fig.~\ref{fig:Results_Images_low} but for the high-resolution simulations. } \label{fig:Results_Images} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/SMHM_r.pdf} \caption[Stellar mass -- halo mass relation at $z=3$.] {The trajectories of the simulated galaxies in the $M\id{vir}$-$M_*$ plane during the redshift interval $9>z\geq 3$ (wiggly lines with the end point indicated by a solid circle) are compared with the SMHMR at $z=3$ as determined by \citet[][B13]{Behroozi2013}, \citet[][M13]{Moster2013} and \citet[][B19]{Behroozi2019}. Note that all trajectories extracted from simulations that account for stellar winds overlap almost perfectly. 
} \label{fig:Results_SMHM} \end{figure} \subsection{Stellar profiles} \label{sec:gas_str_profiles} \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/profiles/info_00419_surface_star_density.pdf} \caption[Stellar surface density at $z=3$] {Face-on stellar surface-density profiles for the high-resolution runs at $z=3$. } \label{fig:Results_Surface_density_Stars} \end{figure} Early stellar feedback plays a key role in shaping galaxies as it keeps the gas hot and creates pressure support which prevents low-angular-momentum material from reaching the central regions \citep{Stinson2013}. The net effect is that smaller bulges are assembled. The (face-on) projected stellar surface-density profiles of the three galaxies simulated at high resolution (Fig.~\ref{fig:Results_Surface_density_Stars}) show that S10M is denser than E10R and E10B at all radii, while the runs including winds present nearly identical profiles. Towards the galactic centre, S10M shows a conspicuous increase in the stellar density while the E-simulations exhibit a nearly constant-density core. This difference is even more marked in the outskirts of the galaxy where S10M is $\sim10$ times denser. It is worth mentioning that the profiles rapidly decline beyond 1 kpc, thus indicating that the stellar distributions are much more compact than $R\id{gal}$. The energy injection from winds and SNe into the ISM is isotropic, implying that, in a disc galaxy, feedback plays an important role in moulding the vertical surface-density profiles of the stars. This quantity is analysed in the top panel of Fig.~\ref{fig:Results_vertical_profiles} (where we use a rectangular window with transverse size $2R_\textrm{g}$). 
Apart from the different overall normalisation, the stellar distribution in the S10M model is less flattened than in the E-counterparts (see also Fig.~\ref{fig:Results_Images}) and no clear transition between a disc-like structure and its surrounding halo can be noticed (while this is present at $|z\id{H}|\approx 0.6$~kpc in E10R and E10B). \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/Vertical profiles/info_00419_height.pdf} \caption[Vertical profiles of the gas and stars] {Edge-on surface-density profiles for the stars (top) and gas (bottom) in the high-resolution simulations at $z=3$. } \label{fig:Results_vertical_profiles} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/profiles/info_00419_star_both_7.0_sd.pdf} \caption[Spherical stellar profiles at $z=3$] {Spherical radial profiles of the velocity anisotropy parameter (top) and of the mass-weighted stellar age (bottom) at $z=3$. } \label{fig:Results_spherical_profiles_star} \end{figure} In order to measure the kinematic state of the stellar orbits, we consider the velocity anisotropy parameter, \begin{equation} \label{eq:beta-paramter} \beta=1-\frac{\sigma^2\id{Tan}}{2\,\sigma^2\id{Rad}} \,, \end{equation} where $\sigma\id{Tan}$ and $\sigma\id{Rad}$ denote the tangential and radial velocity dispersions, respectively. If all orbits in a system are purely radial passing through the galactic centre, then $\beta=1$. If, instead, they are all circular, then $\beta \to -\infty$. A system with an isotropic velocity distribution has $\beta=0$. In the top panel of Fig.~\ref{fig:Results_spherical_profiles_star}, we show the radial profiles of $\beta$ for our high-resolution runs. The shapes of the profiles look very similar in the different simulations but $\beta$ is more biased towards radial orbits in the S10M run, at least within the central 2 kpc. 
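The limiting cases of the anisotropy parameter in equation~(\ref{eq:beta-paramter}) can be verified with a few lines of code (a generic sketch, not tied to the simulation data):

```python
import math

def beta(sigma_tan, sigma_rad):
    """Velocity anisotropy parameter: beta = 1 - sigma_tan^2 / (2 sigma_rad^2)."""
    return 1.0 - sigma_tan**2 / (2.0 * sigma_rad**2)

# Purely radial orbits: no tangential dispersion -> beta = 1.
assert beta(0.0, 100.0) == 1.0
# Isotropic velocities: sigma_tan^2 = 2 sigma_rad^2 (two tangential
# components versus one radial one) -> beta = 0.
assert abs(beta(math.sqrt(2.0) * 50.0, 50.0)) < 1e-12
# Tangentially dominated (near-circular) orbits drive beta towards -infinity.
assert beta(100.0, 1.0) < -1000.0
```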
The two E-simulations have almost identical profiles which are nearly isotropic in the centre ($R<0.2$ kpc), show a predominance of circular motion around $R=1$ kpc and are radially biased in the outskirts. The stellar kinematics in the E10R and E10B galaxies thus reveal the presence of a more pronounced disc-like structure than in S10M. On the other hand, all models present a tenuous stellar halo which is also discernible in the $\Sigma_*$-profiles for $R>1$ kpc. The radial mass-weighted age profiles of the stellar populations at $z=3$ are presented in the bottom panel of Fig.~\ref{fig:Results_spherical_profiles_star}. For all galaxies, the mean stellar age increases with $R$ but it turns out that the stars in S10M are on average younger than in E10R and E10B at all radii. The difference is more pronounced in the central regions ($R<\SI{0.2}{\kpc}$) which were actively forming stars shortly before $z=3$ (see also Fig.~\ref{fig:Results_spherical_profiles_gas}). This reflects the higher efficiency of early stellar feedback in regulating local star formation. \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/profiles/info_00419_gas_panels3_7.0_sd.pdf} \caption[Spherical gas profiles at $z=3$] {Spherical radial profiles for different galaxy properties at $z=3$. The top panel shows the mass-density profiles for the gas (solid) and for the recently-formed stars ($t_*<\SI{50}{\Myr}$, dashed). The middle panel reveals the radial dependence of the fractions of cold and hot gas. Finally, the bottom panel displays the average metallicity of the gas. The insets in the top two panels refer to the E10B galaxy at $z=3.11$ and have compressed axes that represent the same range as the extended panels. } \label{fig:Results_spherical_profiles_gas} \end{figure} \subsection{Gas profiles} \label{sec:gas-profiles} It is interesting to investigate how the gas and stellar distributions in the simulated galaxies relate to each other. 
In the bottom panel of Fig.~\ref{fig:Results_vertical_profiles}, we show the edge-on surface-density profiles of the gas at $z=3$. The gas distribution in the E10R and E10B galaxies is more concentrated than in S10M and reaches higher densities at the centre (which compensates for the lower stellar content). The gas profiles of the E-models, however, show an abrupt decline at $|z\id{H}|\approx 0.6$~kpc which can be used to define the edge of the galaxies. We conclude that the E10R and E10B galaxies present a flatter, more disc-like gas distribution than S10M (see also the top panels in Fig.~\ref{fig:Results_Images}). The top panel of Fig.~\ref{fig:Results_spherical_profiles_gas} presents the spherical gas-density profiles (solid lines) for the galaxies at $z=3$ while the middle panel shows the mass fractions of `cold' ($T<2\times10^4$ K, from which stars form in the simulations) and hot ($T>2\times10^4$ K) gas as a function of the distance from the galaxy centre. Remarkably, the E-simulations contain ten times more gas at their centres than the S10M model. In all cases, the gas density stays nearly constant for $R\lesssim\SI{0.6}{\kpc}$ and drops rapidly at larger distances. All the gas in this extended region with uniform density is hot due to the presence of young stars which have recently injected energy into the ISM. Cold gas is only present in the outer regions where little or no star formation took place in the recent past. The galaxies at $z=3$ happen to be in a transient sterilised state: feedback from a recent star-formation episode prevents new star formation (see also Section~\ref{sec:outflows}). In order to demonstrate that this is indeed the case, in the insets of the top and middle panels we show again the profiles for the E10B galaxy but this time we evaluate them at $z=3.11$ (i.e. approximately 80 Myr before $z=3$): the presence of cold and dense gas in the central regions now makes star formation possible. 
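The cold/hot split used above amounts to a mass-weighted fraction at the $2\times10^4$ K threshold; schematically (a sketch with hypothetical cell masses and temperatures):

```python
import numpy as np

T_THRESH = 2.0e4  # K, the cold/hot boundary adopted in the simulations

def cold_fraction(mass, temperature):
    """Mass fraction of gas below the star-formation temperature threshold."""
    mass, temperature = np.asarray(mass), np.asarray(temperature)
    return mass[temperature < T_THRESH].sum() / mass.sum()

# Hypothetical gas cells: three cold, one hot, equal masses -> cold fraction 0.75.
m = np.array([1.0, 1.0, 1.0, 1.0])
T = np.array([5.0e3, 1.0e4, 1.5e4, 1.0e6])
assert cold_fraction(m, T) == 0.75
```

Evaluating this fraction in spherical shells gives the radial dependence shown in the middle panel.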
The bottom panel of Fig.~\ref{fig:Results_spherical_profiles_gas} shows the metallicity profiles of the ISM. As already seen in Table~\ref{tab:gal_properties}, the S10M galaxy is substantially more metal rich than the E-runs (by factors of 2.9 and 4 with respect to E10R and E10B, respectively) but the shape of the metal distribution is similar in all objects with an extended uniform core and a rapidly declining profile for $R\gtrsim\SI{1.5}{\kpc}$. This drop reflects the finite size of the stellar components, the feedback efficiency in expelling metals, and the infall of metal-poor gas from the circumgalactic medium. The lower metal content of the E-galaxies makes gas cooling less efficient and thus halts the formation of new stars for longer times after a burst of star formation takes place. The different normalisation of the metal profiles in E10R and E10B is a direct consequence of the yields shown in Fig.~\ref{fig:Methods_final}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/Outflow/00419_Radius_RV_10_T_E10B.png} \includegraphics[width=0.9\columnwidth]{collection_galaxy/Outflow/00419_Radius_RV_T_col.png} \hspace{-1.3cm} \caption[Phasespace] {Radial phase-space diagrams for the gas in the S10M (top) and E10B (bottom) simulations at $z=3$. The colour coding reflects the gas temperature while the solid and dotted black lines represent the mass-weighted mean radial velocity and the escape velocity, respectively. The vertical dashed grey line indicates the galaxy radius $R\id{gal}$. } \label{fig:Results_Phasespace} \end{figure} \subsection{Outflows and metal enrichment} \label{sec:outflows} In Fig.~\ref{fig:Results_Phasespace}, we present radial phase-space diagrams of the gas component at $z=3$ colour-coded by temperature. Outflows of hot gas with radial velocities exceeding the escape velocity (dotted line) are clearly noticeable in the S10M galaxy (top). 
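The mass-weighted mean radial velocity (solid line) and the inflow/outflow rates discussed below can be estimated from cell data as in the following sketch (hypothetical arrays and bin choices; not the actual \textsc{Ramses} analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical gas-cell data: radius (kpc), radial velocity (km/s), mass (Msun).
r = rng.uniform(0.0, 40.0, 10_000)
v_r = rng.normal(0.0, 50.0, 10_000)
mass = rng.uniform(1e4, 1e5, 10_000)

# Mass-weighted mean radial velocity in radial bins (the solid line).
bins = np.linspace(0.0, 40.0, 41)
idx = np.digitize(r, bins) - 1
v_mean = np.array([np.average(v_r[idx == i], weights=mass[idx == i])
                   for i in range(len(bins) - 1)])

# Inward/outward mass-flow rates through a thin shell at R = 2 kpc:
# Mdot = sum(m_i * v_r,i) / dr over cells in the shell, split by sign of v_r.
R, dr = 2.0, 0.5
shell = np.abs(r - R) < dr / 2.0
KMS_TO_KPC_PER_YR = 1.0227e-9                             # 1 km/s in kpc/yr
flux = mass[shell] * v_r[shell] * KMS_TO_KPC_PER_YR / dr  # Msun/yr per cell
mdot_out = flux[flux > 0].sum()
mdot_in = -flux[flux < 0].sum()
```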
The (mass-weighted) mean gas velocity (solid line) is positive for $R<3.0$ kpc (with a peak value of 100 km s$^{-1}$) and negative at larger radii implying that there is a net inflow of material within $R\id{gal}$ (vertical dashed line). The E10B galaxy (bottom) shows a similar net inflow of material but does not present fast outflowing gas that could escape the system. In order to better understand how early and SN feedback influence galactic outflows and push the gas into the circumgalactic medium, in Fig.~\ref{fig:Results_Outflows}, we compare the inward and outward mass-flow rates calculated at $R=\SI{2}{\kpc}$ with the SFR of the simulated galaxies (averaged over a time span of \SI{10}{\Myr}). The first thing to notice is that star formation is bursty in both galaxies (as expected from a self-regulating process) but the amplitude of the variations is much larger in S10M. The highest peaks of star formation trigger gas outflows that go past \SI{2}{\kpc} (like the one at $z\simeq 4.8$ following a major galaxy merger). These outflow maxima are followed by sudden increases of the inflow rates, suggesting that a galactic fountain is in place. These features are much more prominent in S10M than in E10B. All this is consistent with the picture in which stellar winds heat up the ISM and locally inhibit further star formation thus leading to a more uniform star-formation history. The reduced (and less spatially clustered) SN activity is then unable to launch high-speed galactic winds. This is in line with the recent finding that early stellar feedback suppresses galactic outflows in galaxies hosted by lower-mass haloes than those considered here \citep{Smith2021}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/Flux_2kpc_SFR.pdf} \caption[Flow rates] {The inward and outward gas-mass flow rates measured at $R=2$ kpc in the S10M (top) and E10B (middle) galaxies are compared with the corresponding SFR averaged over \SI{10}{\Myr} (bottom). 
} \label{fig:Results_Outflows} \end{figure} \begin{figure} \centering \includegraphics[width=1\columnwidth]{collection_galaxy/E10B/info_00419_gas_Met_diff_65.1_sd.pdf} \caption[Profiles of metallicity ratio] {Top: spherical radial profiles of the mass-weighted gas metallicity (for the E10B galaxy at $z=3$) obtained by separating the contributions from the metals entrained in stellar winds and from those ejected by SNe. Bottom: ratio between the profiles shown in the top panel. } \label{fig:Results_metallicity_prof} \end{figure} \begin{figure} \centering \includegraphics[width=0.49\columnwidth]{collection_galaxy/E10B/419_met_ratio4_xy_5_nearest_0.03_c.png} \includegraphics[width=0.49\columnwidth]{collection_galaxy/E10B/419_met_ratio4_xy_45_nearest_0.27_c.png} \includegraphics[width=1\columnwidth]{collection_galaxy/E10B/419_met_ratio4_xy_45_nearest_0.27_c_col.png} \caption[Slice of metallicity ratio] {Face-on maps of the ratio between the wind- and SN-generated metallicities for the E10B galaxy (left) and its halo (right) at $z=3$. } \label{fig:Results_metallicity_image} \end{figure} It is worth noticing that also the mass-inflow rates at $R=2$ kpc are lower in E10B than in S10M. This could be for two reasons: (i) since outflows are weaker, there is less material that could fall back and/or (ii) the infall of recycled and pristine gas onto the galaxies is prevented by early feedback which keeps the gas hot. Finally, we investigate if metals ejected by stellar winds follow a different spatial distribution than those ejected by SNe. In a recent theoretical study aiming at explaining the large observed scatter of N/O at low O/H, \citet{Roy2021} conjectured that wind metals are more likely to remain locked up within a low-metallicity dwarf galaxy than SN metals. We test whether this scenario takes place in our simulated galaxies which, however, are hosted by substantially more massive haloes than those analysed in \citet{Roy2021}. 
The spherical radial profiles for the E10B galaxy at $z=3$ shown in Fig.~\ref{fig:Results_metallicity_prof} reveal that nearly 90 per cent of the metals which are found within $R\id{gal}$ (and 80 per cent of those at $R\id{vir}$) have been ejected by SNe. The mean metallicities generated by winds and SNe stay basically constant in the innermost regions (where the stars are also found) and suddenly drop by an order of magnitude at $R\simeq\SI{1.5}{\kpc}$ revealing that the circumgalactic medium is relatively metal poor (the sharp peak around $R=\SI{10}{\kpc}$ is due to a satellite galaxy). Maps of the relative distribution of metals due to winds and SNe (Fig.~\ref{fig:Results_metallicity_image}) confirm that the two types of metals are very well mixed within the galaxy. This reflects the fact that the material emitted by winds does not travel very far from the massive stars and is subsequently swept up by the faster SN ejecta. Beyond the galaxy, the maps are more complex due to the interplay of gas accretion, outflows, and the presence of satellite galaxies (passing by or being ripped apart). Apart from a few localised features, a general trend is noticeable: the relative importance of wind metals slightly increases with $R$, probably due to the $Z$ dependence of the wind-metal ejecta (see Fig.~\ref{fig:Methods_final}). \section{Summary} \label{sec:Summary} Main-sequence and post-main-sequence winds from massive stars provide a continuous injection of kinetic energy, mass, and metals into the ISM. In order to quantify the corresponding yields, we compute different sets of evolutionary tracks using the \textsc{Mesa} code and accounting for the presence of binary systems and of a metallicity-dependent distribution of initial rotational velocities. 
We find that: \begin{enumerate} \item For the most-massive models, the ratio between the kinetic-energy yields of stellar winds and SNe ranges from a few per cent to more than a hundred per cent depending on the input parameters ($Z, \upsilon_\mathrm{ini}$, binarity). Crucially, this energy becomes available on timescales shorter than the free-fall time of a young cluster. \item A stellar population consisting of rotating and binary stars generates substantially stronger mechanical feedback compared to standard models based on single non-rotating stars, especially at low metallicity. In fact, both binaries and rotation significantly flatten the otherwise steep metallicity dependence of the mechanical-energy yield (see Section~\ref{sec:fdbk_stel-pop}). Additionally, the mass and metal yields are also enhanced. \end{enumerate} As a second step, we implement the feedback yields derived from our stellar evolutionary models into \textsc{Ramses} and run a suite of simulations which follow the formation and evolution of a galaxy until redshift $3$ (at which we achieve a nominal spatial resolution of \SI{34}{\pc}). We follow the central galaxy hosted by a dark-matter halo with a final mass of $1.8 \times 10^{11}$ M$_\odot$ and use the same initial conditions to compare the galaxies generated with different versions of our early-feedback scheme (E models) with the galaxy obtained considering SN-only feedback in the standard \textsc{Ramses} implementation (S model). For the E models, we consider three different options: non-rotating single stars, rotating single stars, and a mix of rotating single stars and binary stars. It is important to stress that modelling stellar mass loss requires a number of assumptions and even our detailed evolutionary models bear large uncertainties. The complex interaction of the wind ejecta with the circumstellar material introduces further uncertainties in the calculations (see the discussion in Section~\ref{sec:fdbk_discussion}). 
In the absence of a consensus in the literature regarding the fraction of emitted energy which is dissipated into heat on sub-grid scales, we inject the whole energy released by the winds and supernovae into the ISM and solve the equations of fluid dynamics to determine the gas flows on spatially resolved scales. In this sense, our investigation quantifies the maximum effect that could possibly be driven by stellar winds on galaxy formation. Our main findings are as follows. \begin{enumerate}\setcounter{enumi}{2} \item In the E models, the stellar mass is reduced by a factor of three compared with the S model. This ensures that the stellar-mass-to-halo-mass ratio is consistent with current semi-empirical estimates (see Fig.~\ref{fig:Results_SMHM}). \item The stellar surface-density profile of the E galaxies flattens in the central region, contrary to the outcome of the S model which shows a central cusp (see Fig.~\ref{fig:Results_Surface_density_Stars}). Additionally, the stars in the E galaxies have a lower anisotropy parameter, indicating that the structures are more rotationally supported (see Fig.~\ref{fig:Results_spherical_profiles_star}). \item Accounting for wind feedback leads to a smoother and less bursty SFR, weaker outflows and even reduced accretion flows. \item All the E galaxies have very similar stellar and gas masses. However, those including binary stars turn out to be more metal poor than those with single stars (while no important difference is noticeable between rotating and non-rotating models). A caveat to this is that we neglect the impact of binarity and rotation on the nucleosynthesis of SNe. \item The final spatial distribution of the metals which have been entrained in stellar winds or ejected by SNe is very similar within the galaxy and also in the circumgalactic medium. SN metals account for nearly 90 per cent of the total with a slight decrease in the outermost regions. 
\end{enumerate} \section*{Acknowledgements} We would like to thank R. Teyssier for making the \textsc{Ramses} code publicly available. We also thank J. Mackey and S. Geen for valuable discussions. Some of the figures were produced using the \textsc{YT} package \citep{Turk2011}. This work was carried out within the Collaborative Research Centre 956, sub-project C04, funded by the Deutsche Forschungsgemeinschaft (DFG) – project ID 184018867. We gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). We acknowledge the Max-Planck-Society for providing computing time on the MPG Supercomputer Cobra at the Max Planck Computing and Data Facility. YAF is part of the International Max Planck research school in Astronomy and Astrophysics and guest at the Max-Planck-Institute for Radio Astronomy. \section*{Data availability} The data underlying this article will be shared on reasonable request to the corresponding author. \typeout{} \bibliographystyle{mnras}
\section{Introduction} \label{sec:Intro} Entity Resolution (ER), or the process of identifying different representations of the same real--world entity, has been the subject of more than 70 years of research. Yet, practitioners often opt for \textit{ad hoc} ER approaches mainly due to deficiencies in adapting existing solutions to new and specialized datasets. In practice, adapting an ER solution to a new use--case often translates to (1) mapping the types of features that govern the comparison of two entities and are expected by the adopted solution to the case at hand, i.e., \textit{feature engineering}; (2) identifying local examples of duplicates and non--duplicates, i.e., \textit{data labeling}; and (3) learning a task--specific similarity function that discriminates between duplicates and non--duplicates, i.e., \textit{similarity learning}. Despite the sheer number of ER methods available \cite{elmagarmid-2007}, few solutions manage to reduce the costs incurred by the above triple, costs often paid in active user involvement and long configuration and training times. With respect to (1) and (3) above, recent proposals (e.g., \cite{ebraheem-2018, mudgal-2018}) resort to deep learning techniques \cite{goodfellow-2016} to adapt general--purpose features (e.g., word--embeddings \cite{bengio-2000}) to new ER use--cases and learn a task--specific similarity function \textit{concomitantly}. However, this is done at the expense of (2), since the resulting solutions tend to have a voracious appetite for labeled data (e.g., up to thousands of labeled instances \cite{mudgal-2018}), and at the expense of training time, since such approaches can take up to hours to generalize. With respect to (2) above, recent deep ER--focused active learning proposals (e.g., \cite{kasai-2019}) aim to generate labeled data for merely adapting an existing and already trained entity matching model to the case at hand. 
Therefore, there is limited support for manually labeling the volumes of training data often required by deep learning ER. In this paper, we aim to reduce the costs associated with performing deep learning--based ER in practice by pioneering the use of Variational Auto-Encoders (VAEs) \cite{kingma-2014} for automatically generating entity representations. This allows us to \textit{decouple feature engineering from similarity learning}, perform the former without user supervision (step \textit{1} in Figure \ref{fig:er}), and only rely on supervised learning for the latter (step \textit{2} in Figure \ref{fig:er}). Additionally, we support the data labeling effort for the supervised step through a proposed active learning technique (step \textit{3} in Figure \ref{fig:er}), facilitated by the above--mentioned decoupling. Therefore, our central contribution in this paper is an ER method that learns similarity functions over an unsupervised feature space and we show its cost--effectiveness potential through: \begin{figure}[t] \centering \captionsetup{justification=centering} \includegraphics[width=.7\linewidth]{"./img/er"} \caption{\small Decoupled, cost--effective ER process} \label{fig:er} \end{figure} \begin{itemize}[leftmargin=*, nosep] \item An \textit{unsupervised} and \textit{transferable} representation learning method for producing similarity--preserving feature vectors for tuples (i.e., entities). Unsupervised because it builds on deep generative models (e.g., VAE) to automatically generate the feature vectors. Transferable because the resulting model is reusable without re--training across different ER scenarios and data domains. \item An \textit{adaptable} matching method for learning task--specific similarity functions. Adaptable because it builds on Siamese neural networks \cite{bromley-1993} to fine--tune previously learned tuple representations to better reflect the notion of similarity derived from given training data. 
\item An \textit{active--learning} scheme for assisting the user in labeling supervised matching training instances. Our contribution here is the exploitation of the generative property of VAEs to identify tuple pairs that are informative, diverse, and balanced with respect to the match/non--match classes. \item An empirical evaluation of the above on a multitude of domains, demonstrating their collective potential for ER cost reduction in terms of (i) feature engineering and similarity learning; (ii) training times; and (iii) data labeling efforts. \end{itemize} \section{Background and related work} \label{sec:Relwork} Typically, ER methods for structured data (e.g., \cite{elmagarmid-2007}) include at least a \textit{blocking} and a \textit{matching} step \cite{maskat-2016}. The latter, which is the focus of this paper, involves detailed comparison of candidates efficiently identified by the former, often resorting to rule--based reasoning (e.g., \cite{fan-2009, singh-2017}), classification (e.g., \cite{bilenko-2003, konda-2016, ebraheem-2018}), crowdsourcing (e.g., \cite{wang-2012}), or generative modeling (e.g., \cite{wu-2020}) to do so. In this paper, we focus on \textit{classification--based} matching. In practice, it often relies on string similarities between attribute values of tuples (e.g., \cite{bilenko-2003, konda-2016}). Alternatively, NLP--specific feature engineering methods (e.g., \cite{mikolov-2013, goodfellow-2016}) are employed to extract meaningful numerical features from attribute values. These are then passed to deep learning \cite{goodfellow-2016} classifiers that learn a similarity function consistent with given duplicate/non--duplicate training examples \cite{mudgal-2018, ebraheem-2018, nie-2019}. Our proposal in this paper, \textbf{V}ariational \textbf{A}ctive \textbf{E}ntity \textbf{R}esolution (\textit{VAER}), is a \textit{deep learning ER} solution that focuses on reducing the human--involvement cost of performing ER in practice. 
Specifically, existing supervised ER proposals, such as \textit{DeepER} \cite{ebraheem-2018}, \textit{DeepMatcher} \cite{mudgal-2018}, \textit{Seq2SeqMatcher} \cite{nie-2019}, or \textit{DITTO} \cite{yuliang-20} aim to perform feature engineering and match training simultaneously, using the same training instances. This leads to complex models that require thousands of labeled instances, potentially hours of training \cite{mudgal-2018}, and that are not reusable. Similarly, unsupervised approaches, such as \textit{ZeroER} \cite{wu-2020}, involve costly feature engineering efforts for every new ER task. Conversely, \textit{VAER} decouples the feature engineering and matching tasks, conducting the former in an unsupervised fashion and only optimizing the latter on account of training data. Crucially, this leads to significantly smaller training times for matching and, consequently, enables the use of the resulting model in iterative active learning strategies that seek to assist the user in generating training data. Moreover, the feature engineering task in \textit{VAER} builds on a \textit{representation learning} paradigm \cite{bengio-2013} that enables the transfer of the resulting representation model to multiple different ER tasks, i.e. transfer learning \cite{pan-2010}. In supporting the above characteristics, the techniques used with \textit{VAER} intersect with the following research areas: \smallbreak \noindent\textbf{Variational Auto--Encoders}. VAEs \cite{kingma-2014, rezende-2014} are a type of neural network often used in dimensionality reduction, representation learning and generative learning. Typically, a VAE involves an \textit{encoder} and a \textit{decoder} that are trained simultaneously. The encoder aims to approximate a (lower dimension) probability distribution of the input using variational inference \cite{jordan-1999, beal-2003}. 
The decoder assists the encoder by ensuring that any random sample from the approximated distribution can be used to reconstruct the input. Therefore, VAEs can be seen as unsupervised models, since the labels are the inputs themselves. In this paper, we use a VAE to perform unsupervised entity representation learning. Formal details about our VAE--based approach are further provided in Section \ref{sec:RL}. \smallbreak \noindent\textbf{Active Learning}. AL is a sub--field of machine learning based on the key hypothesis that if a learning algorithm can suggest the data it learns from, it could perform better with less training \cite{burr-2009}. In ER, AL has been used to ease the user's task of labeling training data (e.g., \cite{qian-2017, kasai-2019, meduri-2020}). For example, \cite{meduri-2020} offers a framework for AL in ER with various types of non--deep learning algorithms. With respect to deep learning ER, in \cite{kasai-2019}, new training samples are used to adapt a transferred matching model from another ER use--case to the case at hand. While our approach follows a similar methodology, we do not assume pre--trained matching models and start from very few labeled instances to create an (often weak) initial model that is then iteratively improved based on evidence specific to and dictated by the nature of our entity representations. \section{Unsupervised representation learning} \label{sec:RL} With respect to ER feature engineering, we need to convert each input tuple into a numeric vectorized representation expected by downstream matching tasks. The viability of such representations ultimately determines the effectiveness of the entire ER process. Viable representations compact most of the high--level similarity--salient factors into dense vectors that are close together in their multivariate space for duplicates and far apart for non--duplicates.
In this section, we set out to generate such representations, albeit in the more expressive form of probability distributions, rather than fixed vectors. Here, we emphasize a \textit{first cost--effectiveness property} of our system: we generate tuple representations unconstrained by the need for training data or user decisions with respect to data characteristics that govern the similarity of two entities. \subsection{Entity representation architecture} \label{subsec:Arch} The intuition in this section is that the attribute values of duplicate tuples originate from similar prior distributions that encode the information conveyed by the values. While there is no constraint on the input data distribution itself, we attempt to approximate the prior distributions as Gaussians by using a VAE with \textit{shared parameters} across attributes\footnote{{\small By ``shared'' we mean that the model will generate representations for all attribute values of a tuple simultaneously by processing a 2--d input: \textit{num. attributes} $\times$ \textit{num. features}}}. This distribution type is a constraint on the VAE model, not the input data, and is dictated by the need for analytical interpretation and stability of the training process. One other distribution with similar properties is the von Mises--Fisher \cite{davidson-18}. A sufficiently powerful VAE with non--linearity can map arbitrary distributions to the Gaussian/von Mises--Fisher and back, so from a theoretical viewpoint, the only constraint for the choice of latent distribution is given by the VAE training process, i.e., the smoothness of the latent space \cite{kingma-2014}. Broadly, given a tuple with $m$ attribute values denoted $\{A_1, \ldots A_m\}$, for each $A_i$ we want to approximate a distribution over some random variable which encodes both \textit{morphological} (i.e., syntactic form of words) and \textit{semantic} (i.e., natural language meaning) factors.
The first step towards this goal is to map individual attribute values to dense vectors, which we call \textit{Intermediate Representations} ($IRs$), capable of capturing the similarity between close attribute values. Then, we proceed to approximating a distribution over $IRs$ that will allow us to probabilistically reason about their similarity. Assuming for now the existence of $IRs$, Figure \ref{fig:arch} illustrates the overall neural architecture of the entity representation model proposed in this paper. \begin{figure}[t] \centering \includegraphics[width=.75\linewidth]{"./img/representation"} \caption{\small Proposed entity representation model architecture} \label{fig:arch} \end{figure} \noindent\textbf{Attr}. The attribute values layer where we treat each attribute value independently. Consequently, tuple comparison at matching time can be more granular, i.e., comparing attribute representations pair--wise. Furthermore, attribute--level weighted matching schemes can also be employed. \smallbreak \noindent$\boldsymbol{IR}$. The Intermediate Representation ($IR$) layer where attribute values are transformed into initial vectorized representations that encode semantic and morphological factors. These representations are the inputs to the VAE and are further described in the next subsection. \smallbreak \noindent\textbf{Encoder}. The encoding layer that takes a collection of $IRs$ as input, passes them through one or more \textit{dense} neural layers with \textit{non--linear} activation functions, e.g., rectified linear functions (ReLU), and approximates a latent multivariate Gaussian distribution with diagonal covariance, $\mathcal{N}(\mu, \sigma)$, for each $IR$. The mean and covariance diagonal parameters of distributions produced by the \textit{Encoder}, $\{(\mu_1, \sigma_1), \dots$ $(\mu_m, \sigma_m)\}$, one for each attribute value, denote the desired entity representations used in downstream tasks to identify duplicates. 
In other words, each tuple will be represented by a collection of $(\mu, \sigma)$ pairs and the comparison of two tuples will be performed attribute--wise by comparing the corresponding distributions. \smallbreak \noindent\textbf{Sampling}. This layer performs ancestral sampling from each $\mathcal{N}(\mu, \sigma)$ following a procedure specific to VAEs known as the \textit{reparameterization trick} \cite{kingma-2014}. This allows training a VAE by means of backpropagation with (stochastic) gradient descent \cite{kushner-2003}, which requires the model to be deterministic. Moreover, this step confers a generative property to the representation model: given an attribute value $v$, each sample $z$ generated from the corresponding $\mathcal{N}(\mu_v, \sigma_v)$ can faithfully encode the latent characteristics of $v$. \smallbreak \noindent\textbf{Decoder}. The decoding layer that reverses the encoding to reconstruct the original input (i.e., $IRs$), conditioned by latent variables $z$ randomly sampled from $\mathcal{N}(\mu, \sigma)$. \smallbreak \noindent$\boldsymbol{\widehat{IR}}$. The last layer of the architecture denotes the reconstructed $IRs$ that, ideally, are as close as possible to the original $IRs$. The difference between the input $IRs$ and the output $\widehat{IRs}$ represents one of the minimizing objectives used in training the model, as described in Section \ref{subsec:learning}. Given a collection of $IRs$, the components described above are trained in tandem. Before describing this training process, we first discuss how we obtain $IRs$ and their importance. \subsection{Intermediate representations of attributes} \label{subsec:IR} Deep--learning models, including VAEs, operate on numerical inputs, i.e., feature vectors. One possible approach to obtain such representations is, similarly to \cite{ebraheem-2018}, \cite{mudgal-2018} or \cite{yuliang-20}, to use an RNN-- \cite{ebraheem-2018} or BERT--based neural architecture \cite{devlin-19}.
However, such an approach would require considerable training resources and, more importantly, would bound the resulting representations of tuples to the current data domain and ER task, since they rely on processing the word vocabulary of all input tuples. The resulting representations would then be hard to reuse in other ER tasks. In this paper, we rely on a simple alternative that generates initial attribute value vectorized representations, i.e., \textit{intermediate representations ($IRs$)} that are \textit{similarity--preserving} and independent from the type of neural architecture used for matching. Such $IRs$ are vectors of numbers that can be independently generated using methods such as Latent Semantic Analysis (\textit{LSA}) \cite{dumais-2004}, word--embedding models \cite{pennington-2014, mikolov-2013} (\textit{W2V}), BERT pre--trained embeddings \cite{Wolf-20}, or relational embeddings specialized for data integration tasks such as ER \cite{cappuzzo-20} (\textit{EmbDI}). In practice, $IR$ generation involves construing each attribute value of a given table as a sentence and: \begin{itemize}[leftmargin=*, nosep] \item \textit{LSA}: consider the corpus of all such sentences to model its latent topics and generate $IRs$ using known topic modeling methods (e.g., \cite{dumais-2004,blei-2003}). \item \textit{W2V}: pass each word of each sentence through a pre--trained word--embedding model and generate $IRs$ by averaging the resulting embeddings at sentence--level. \item \textit{BERT:} pass each sentence through a pre--trained BERT model \cite{Wolf-20} and construe the result as the $IR$ for the input sentence (i.e., attribute value). \item \textit{EmbDI:} consider the corpus of all sentences to train a data integration embedding model following \cite{cappuzzo-20} and follow the \textit{W2V} method to obtain $IRs$. \end{itemize} $IRs$ are important because they encode morphological and semantic information of values. 
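As a concrete illustration, the \textit{W2V} option above amounts to an embedding lookup followed by sentence--level averaging. A minimal sketch in Python follows; the tiny 4--dimensional embedding table is a hypothetical stand--in for a real pre--trained word--embedding model:

```python
import numpy as np

# Hypothetical 4-d word embeddings; a real setup would load a
# pre-trained model (e.g., word2vec or GloVe) instead.
EMBEDDINGS = {
    "charlie":  np.array([0.1, 0.3, -0.2, 0.5]),
    "brown":    np.array([0.4, -0.1, 0.0, 0.2]),
    "coldplay": np.array([-0.3, 0.2, 0.6, 0.1]),
}
OOV = np.zeros(4)  # fallback vector for out-of-vocabulary words

def w2v_ir(attribute_value: str) -> np.ndarray:
    """Construe the attribute value as a sentence and average the
    per-word embeddings to obtain its intermediate representation."""
    words = attribute_value.lower().split()
    return np.mean([EMBEDDINGS.get(w, OOV) for w in words], axis=0)

ir = w2v_ir("Charlie Brown")  # mean of the two word vectors
```

The same interface applies to the \textit{LSA}, \textit{BERT} and \textit{EmbDI} variants: each maps an attribute--value string to a fixed--length vector.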
Without their use, we would have to rely on the VAE architecture to learn numerical representations of words using high--dimensional deep \textit{Embedding} layers. Such architecture types are common with \textit{DeepMatcher} or \textit{DITTO}. In this paper, we show that \textit{VAER} is general enough so that it can incorporate the types of evidence commonly used with other ER systems (e.g., \textit{W2V} in \textit{DeepMatcher} and \textit{BERT} in \textit{DITTO}), while further contributing to the cost reductions of the ER process by (i) reducing the dimensionality of the VAE; (ii) speeding up the VAE training process, e.g., through transfer learning, as described in Section \ref{subsec:transf}; and (iii) reducing the generalization requirements, since part of the similarity correlations between words aimed to be conveyed by the final representations are already caught by $IRs$. \subsection{Learning entity representations: training the VAE} \label{subsec:learning} $IRs$ can themselves act as entity representations. However, they are deterministic, fixed vectors in a potentially irregular latent space with reduced control over how close duplicates end up. In practice, $IRs$ tend to be effective for clean and structured data and less so when data is dirty. The model from Figure \ref{fig:arch} attempts to address such limitations by learning a probabilistic model over $IRs$. Specifically, it encodes a given input, $IR$ in our case, as a probabilistic latent variable $z$ that captures the high--level information of the input, and then decodes that input (or a close version of it) from $z$. The objective of the encoding--decoding process is to minimize the error between the input and the output \cite{kingma-2014}. 
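The encode--sample--decode round trip described above, including the reparameterization trick applied in the \textit{Sampling} layer, can be sketched numerically as follows; the layer sizes and randomly initialised weights are illustrative stand--ins for a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3  # IR dimension and latent dimension (illustrative)

# Randomly initialised encoder/decoder weights stand in for trained ones.
W_mu = rng.normal(size=(k, d))
W_logvar = rng.normal(size=(k, d))
W_dec = rng.normal(size=(d, k))

def encode(ir):
    """Map an IR to the (mu, sigma) parameters of q(z|IR)."""
    mu = W_mu @ ir
    sigma = np.exp(0.5 * (W_logvar @ ir))  # predict log-variance for stability
    return mu, sigma

def sample(mu, sigma):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    which keeps the sampling step differentiable w.r.t. mu and sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + sigma * eps

def decode(z):
    """Reconstruct an approximate IR from a latent sample."""
    return W_dec @ z

ir = rng.normal(size=d)
mu, sigma = encode(ir)
ir_hat = decode(sample(mu, sigma))
```

In the full model, one such $(\mu, \sigma)$ pair is produced per attribute value of the tuple.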
More formally, given a collection of $n$ entities with $m$ attributes and each entity represented by the $m$ $IRs$ of its attribute values, $\{\{IR^1_1, \ldots IR^1_m\}, \ldots$ $\{IR^n_1, \ldots IR^n_m\}\}$, we consider each $IR$ to be a random variable generated by some random process involving a lower--dimension latent variable $z$ drawn from some prior distribution $p(z)$ that conveys high--level similarity--salient factors of $IR$. We want to infer the characteristics of $z$, given $IR$. In other words, we want to compute a posterior distribution $p(z|IR)$, given by $p(z|IR) = \frac{p(IR|z)p(z)}{p(IR)}$. Computing the denominator $p(IR) = \int p(IR|z)p(z)dz$ is intractable due to the multidimensionality of $IR$ and $z$ \cite{kingma-2014}. Alternatively, $p(z|IR)$ can be approximated through \textit{variational inference} \cite{beal-2003} by another tractable (e.g., Gaussian) distribution $q(z|IR)$ \cite{kingma-2014}. In practice, this translates to minimizing the Kullback–Leibler (KL) divergence between $q(z|IR)$ and $p(z|IR)$, which can be achieved by maximizing: \begin{equation} \label{eq:obj} \small \mathbb{E}_{q(z|IR)}log(p(IR|z)) - KL(q(z|IR)||p(z)) \end{equation} \noindent where the first term represents the \textit{expected log--likelihood} of faithfully reconstructing $IR$ given some $z$ from $q(z|IR)$, and the second term is the KL divergence between our approximated distribution $q(z|IR)$ and the true prior. Consistently with our initial assumption, Equation \ref{eq:obj} ensures that $q(z|IR)$ describes a distribution of faithful latent representations of $IR$, so that $IR$ can be reconstructed from its samples, and that any latent representation $z$ comes from a similar distribution to the assumed prior $p(z)$.
In practice, by fixing $p(z) = \mathcal{N}(0, I)$ (i.e., the standard normal distribution of mean $0$ and diagonal unit covariance), the VAE enforces a regular geometry on the latent space so that similar data lead to similar latent variables \cite{bowman-2016}. From a practical perspective, we can construct the inference model described above into a neural network model where a function $\phi: \mathbb{R}^d \rightarrow \mathbb{R}^k$, i.e., our \textit{Encoder} from Figure \ref{fig:arch}, maps a $d$--dimensional input, $IR$, to two $k$--dimensional variables, $\mu$ and $\sigma$, denoting the parameters of a latent Gaussian distribution, $q_{\phi}(z|IR)$. Additionally, a second function, $\theta: \mathbb{R}^k \rightarrow \mathbb{R}^d$, i.e., our \textit{Decoder} from Figure \ref{fig:arch}, ensures that any latent variable $z$ sampled from $q_{\phi}(z|IR)$ can be used to produce an approximate reconstruction of $IR$, (i.e., $\widehat{IR}$). The loss function minimized in learning the parameters of $\phi$ and $\theta$ follows the relation from Equation \ref{eq:obj}, extended to include all $m$ $IRs$ (i.e., attribute values) of a tuple: \begin{equation} \label{eq:real_obj} \small \begin{split} L_{(\phi, \theta)}(IR, \widehat{IR}) & = \sum_{i=1}^m \underset{q_{\phi}(z_i|IR_i)}{\mathbb{E}}[log(p_{\theta}(IR_i|z_i))] \\ & - \sum_{i=1}^m KL(q_{\phi}(z_i|IR_i)||\mathcal{N}(0, I)) \end{split} \end{equation} Intuitively, by fitting the inputs to Gaussian distributions, the VAE learns representations of attribute values not as single points, but as ellipsoidal regions in the latent space, forcing the representations to continuously fill this space. Specifically, the mean $\mu$ of the latent distribution controls where the encoding of an input should be centered around, while the diagonal covariance $\sigma$ controls how far from the center the encoding can vary. 
As decoder inputs are generated at random from anywhere inside this distribution (recall the \textit{Sampling} layer of Figure \ref{fig:arch}), the decoder is exposed to a range of variations of the encoding of the same input during training. The decoder, therefore, learns that not only does a single point in the latent space refer to an attribute value, but so do all nearby points, i.e., accounting for variation and uncertainty across the attribute values of duplicates. \subsection{Representation model transferability} \label{subsec:transf} The architecture from Figure \ref{fig:arch} brings a \textit{second cost--reduction characteristic} to our overall approach: the representation model trained during one ER task can be reused in other ER tasks, therefore eliminating the need for representation learning. In other words, the architecture from Figure \ref{fig:arch} allows for transfer learning \cite{pan-2010}. This is by virtue of the variational inference and the use of $IRs$ as inputs. Concretely, because the model from Figure \ref{fig:arch} operates on numerical $IRs$, its output distributions are \textit{independent from domain--specific aspects}, such as the domain vocabularies. Consequently, once the parameters of the VAE are trained, any new $IR$ with \textit{similar dimensionality} to the one required by the transferred model architecture can be accurately encoded, regardless of the $IR$'s data domain. However, these new $IRs$ have to already convey similarity signals since the pre--trained $VAE$ can only amplify existing ones. In practice, using $IRs$ of the types discussed in Section \ref{subsec:IR} satisfies this requirement. It follows that the transferability property is \textit{domain--agnostic} and can eliminate the need for feature engineering from new ER tasks while minimizing the training--time costs, since representation learning accounts for most of the training time needs.
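To make the training objective concrete, the loss below is the negated objective of Equation \ref{eq:real_obj} for a single attribute: the squared reconstruction error is a common stand--in for the expected log--likelihood term under a Gaussian decoder, and the KL term uses its closed form against the $\mathcal{N}(0, I)$ prior:

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) )."""
    return 0.5 * np.sum(mu**2 + sigma**2 - np.log(sigma**2) - 1.0)

def vae_loss(ir, ir_hat, mu, sigma):
    """Per-attribute negative ELBO: squared reconstruction error
    (standing in for -E[log p(IR|z)] under a Gaussian decoder)
    plus the KL regulariser pulling q(z|IR) towards the prior."""
    reconstruction = np.sum((ir - ir_hat) ** 2)
    return reconstruction + kl_to_standard_normal(mu, sigma)

# The KL term vanishes when q(z|IR) already equals the prior N(0, I).
kl_at_prior = kl_to_standard_normal(np.zeros(3), np.ones(3))
```

In the full model, this quantity is summed over the $m$ attributes of a tuple, as in Equation \ref{eq:real_obj}.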
\section{Supervised matching in the latent space} \label{sec:Matching} The transferability property is complemented by an adaptability property. Specifically, the representations produced by the model from Figure \ref{fig:arch} are lenient with respect to small variations in the attribute values of two duplicate tuples. However, more significant discrepancies between duplicates that are not reflected in the $IRs$ can lead to far--apart latent distributions, especially if the representation model has been transferred from another use--case. Moreover, since the representation learning step is unsupervised, the notion of similarity between tuples conveyed by the learned representations may not be consistent with the real intended notion. For instance, consider the example from Table \ref{tab:dupex}. Both entities denote the same song by the same artist released as part of two different albums. Whether or not the two tuples are duplicates depends on the use--case, and the unsupervised representations may not reflect that decision. There is, therefore, a need for (i) adjusting the entity representations to cover more significant discrepancies between duplicates, and (ii) aligning the notion of similarity conveyed by the representations with the use--case intent, i.e., similarity learning. In this section, we describe our supervised deep learning proposal for addressing these requirements. \begin{table}[t] \centering \footnotesize \caption{\small Duplicate candidate songs example} \begin{tabular}{|c|c|c|c|} \hline Song & Artist & Album & Year \\ \hline \hline Charlie Brown & Coldplay & Mylo Xyloto & 2011 \\ \hline Charlie Brown & Coldplay & GRAMMY Nominees & 2013 \\ \hline \end{tabular} \normalsize \label{tab:dupex} \end{table} \subsection{Matching architecture} \label{subsec:m_arch} In Section \ref{sec:RL} we focused on a \textit{generative} task of producing similarity--preserving entity representations.
We now describe a \textit{discriminative} task that builds on the generative model to learn a similarity measure between representations and to perform ER matching. More specifically, given a set of tuple pairs $(s, t)$, each with $m$ attribute values $\{A^s_1, \ldots A^s_m\}$ and $\{A^t_1, \ldots A^t_m\}$, and corresponding duplicate/non--duplicate labels, we perform supervised training of a Siamese neural network (i.e., a class of neural networks that contain two or more identical sub--networks) \cite{bromley-1993, neculoiu-2016}. Our proposed architecture is illustrated in Figure \ref{fig:m_arch}. \begin{figure}[t] \centering \includegraphics[width=.7\linewidth]{"./img/matching"} \caption{\small Proposed matching model architecture} \label{fig:m_arch} \end{figure} \noindent\textbf{Attr} and $\boldsymbol{IR}$. These layers correspond to the first two layers from Figure \ref{fig:arch}: each attribute value of the input tuples is mapped to an $IR$, which is then passed to the next layer. \smallbreak \noindent\textbf{Encoder}. This layer uses two variational encoders, similar to the one from Figure \ref{fig:arch}. Both encoders share the same weights, initialized with the trained values of the variational encoder in Figure \ref{fig:arch}. Parameter updating is mirrored across both encoders during training, and each encoder generates an entity representation, $\{(\mu_1, \sigma_1), \dots (\mu_m, \sigma_m)\}$, corresponding to its input tuple. The purpose of this layer is to improve the weights transferred from the variational encoder of the representation model on account of training data. \smallbreak \noindent\textbf{Distance}. Modeling the latent space using probability distributions allows us to reason about the similarity of $s$ and $t$ in terms of the distance between their corresponding distributions.
Two examples of metrics that can be used to quantify such distance between Gaussian distributions are Wasserstein \cite{mallasto-2017} and Mahalanobis \cite{gallego-2013}. These have previously been used in learning sentence similarity in NLP (e.g., \cite{deudon-2018}). In practice, both distances showed similar effectiveness, so we are only discussing the former here. Intuitively, the $d$--Wasserstein distance quantifies the minimal cost of transporting the unit mass of one probability measure into the unit mass of another probability measure, when the cost is given by an $L^d$ distance \cite{mallasto-2017}. In our case, we consider $d=2$ and the squared $2$--Wasserstein distance ($W^2_2$) between two $k$--dimensional diagonal Gaussian distributions, $p$ and $q$, is given by Equation \ref{eq:wasserstein}. \begin{equation} \label{eq:wasserstein} \small W^2_2(p, q) = \sum^{k}_{i=1} (\mu^p_i - \mu^q_i)^2 + (\sigma^p_i - \sigma^q_i)^2 \end{equation} Returning to the \textit{Distance} layer in Figure \ref{fig:m_arch}, from the two inputs $\{(\mu^s_1, \sigma^s_1), \dots (\mu^s_m, \sigma^s_m)\}$ and $\{(\mu^t_1, \sigma^t_1), \dots (\mu^t_m, \sigma^t_m)\}$, we compute $m$ attribute--wise Wasserstein distance vectors $\vec{d}^{(s,t)} = (\mu^s - \mu^t)^2 + (\sigma^s - \sigma^t)^2$. Then, we concatenate all $m$ distance vectors and pass the result to the next layer. \smallbreak \noindent\textbf{Matching}. This layer performs ER matching in the form of a binary classification task. The classifier consists of a two--layer Multi Layer Perceptron (MLP) with non--linear activation functions that, given the $m$ concatenated Wasserstein distance vectors, predicts a match/non--match label. The purpose of this layer is to discriminate between duplicates and non--duplicates. 
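As a minimal sketch, Equation \ref{eq:wasserstein} and the attribute--wise computation of the \textit{Distance} layer translate directly into a few lines of numpy; the $(\mu, \sigma)$ pairs below are hypothetical encoder outputs:

```python
import numpy as np

def w2_squared(mu_p, sigma_p, mu_q, sigma_q):
    """Squared 2-Wasserstein distance between two k-dimensional
    diagonal Gaussians (Equation 3)."""
    return np.sum((mu_p - mu_q) ** 2 + (sigma_p - sigma_q) ** 2)

def distance_vectors(reps_s, reps_t):
    """Distance-layer computation: one element-wise
    (mu_s - mu_t)^2 + (sigma_s - sigma_t)^2 vector per attribute,
    concatenated into the matching classifier's input."""
    return np.concatenate([
        (mu_s - mu_t) ** 2 + (sig_s - sig_t) ** 2
        for (mu_s, sig_s), (mu_t, sig_t) in zip(reps_s, reps_t)
    ])

# Identical representations yield zero distance.
mu, sigma = np.array([0.5, -1.0]), np.array([0.2, 0.3])
zero_dist = w2_squared(mu, sigma, mu, sigma)
```

Summing each attribute's distance vector recovers the scalar $W^2_2$ of Equation \ref{eq:wasserstein}; the \textit{Matching} layer instead consumes the unsummed, concatenated vectors.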
In practice, at inference time there may be many tuple pairs to match; although the distance computation and matching overheads are negligible compared to the training--time costs of \textit{VAER}, they could be further reduced by a blocking strategy (e.g., as in \cite{ebraheem-2018}) that filters out clear non--duplicates. We describe how such a step could be adapted to our distribution--based representations in Section \ref{subsec:rl_exp}. \subsection{Learning entity similarity: training the Siamese network} The training process of the proposed matching model involves two optimization objectives: (1) minimize the $W^2_2$ distance between the representations of duplicates and maximize it for non--duplicates, i.e., improving the \textit{Encoder} layer, and (2) minimize the classification error of the binary classifier, i.e., training the \textit{Matching} layer. These two objectives are optimized simultaneously. Firstly, with respect to (1), we further improve the initial weights of the Siamese encoder heads on account of training data so that they are consistent with more complex cases of tuple similarity, infeasible to cover by the unsupervised approach alone. Secondly, in relation to (2), we optimize the parameters of the binary classifier such that the resulting model is effective in discriminating between duplicate and non--duplicate tuples.
To this end, we use a contrastive loss function \cite{neculoiu-2016} defined by Equation \ref{eq:contrastive}: \begin{equation} \label{eq:contrastive} \small \begin{split} L(s, t) & = ylog(p_{\gamma}(y|d^{(s,t)})) + (1-y)log(1-p_{\gamma}(y|d^{(s,t)})) \\ & + \frac{1}{m}\sum_{i=1}^m [xW^2_2(q_{\phi}(A^s_i), q_{\phi}(A^t_i)) \\ & + (1-x)\max (0, M - W^2_2(q_{\phi}(A^s_i), q_{\phi}(A^t_i)))] \end{split} \end{equation} \noindent where $y$ is the predicted class for a pair of tuples $(s, t)$, $x$ is the true class, $p_{\gamma}(y)$ is the predicted probability of $y$, $\gamma$ are the parameters of the binary classifier, $d^{(s,t)}$ is the distance vector from the \textit{Distance} layer in Figure \ref{fig:m_arch}, $q_{\phi}(A^j_i)$ is the Gaussian distribution approximated for attribute $A^j_i$ by the encoding layer, and $\phi$ are the parameters of the encoders. $M$ is a margin hyperparameter that controls how the encoders' weights are adjusted in light of the training data. The first term in Equation \ref{eq:contrastive}, i.e., the cross--entropy of the prediction, covers objective (2) from above, while the second term covers objective (1). The function of the margin $M$ is that, when the representations produced for a negative pair are distant enough, no effort is wasted on maximizing that distance, so further training can focus on more difficult pairs. The cost benefits of the feature and similarity learning approaches described in Sections \ref{sec:RL} and \ref{sec:Matching} are now clear: given two input tables (as in Figure \ref{fig:er}), the unsupervised representation learning model from Figure \ref{fig:arch} relieves the user of deciding on the features that govern the tuple comparison process, and of providing training data to generate those features.
The potential loss in feature expressiveness determined by the unsupervised nature of the representation model, or by the use of a transferred representation model, is attenuated by the Siamese matching model from Figure \ref{fig:m_arch} that adjusts the parameters of the representation encoder in light of training data. This adjustment is also efficient, since most of the optimization work has already been done by the unsupervised training process. \section{Active learning in the latent space} \label{sec:AL} Deep learning solutions are often characterized by the need for a significant number of training instances (e.g., up to thousands \cite{mudgal-2018}). Active learning (AL) \cite{burr-2009} has traditionally been proposed to support the manual labeling of such volumes where an initial pool of labeled instances, $\mathcal{L}$, and a much larger pool of unlabeled instances, $\mathcal{U}$, are assumed. Then, the user is tasked with labeling one or more instances from $\mathcal{U}$ to iteratively update $\mathcal{L}$. The crux of an AL strategy is the sampling method used to choose instances from $\mathcal{U}$ to be passed to the user for labeling. Many such strategies proposed for various learning algorithms (e.g., \cite{meduri-2020}) often degenerate to random sampling or to sampling from a single class when used with highly stochastic models such as neural networks \cite{beygelzimer-2010, ash-2020}. Furthermore, the computational overhead of training deep neural networks precludes approaches that expect model retraining over many iterations. In this section, we propose an AL sampling strategy facilitated by the decoupling of feature and matching learning tasks and by VAE's latent space. This strategy is characterized by three important properties: \begin{itemize}[leftmargin=*, nosep] \item \textbf{Class balance}: ensures that the samples presented to the user for labeling represent both matches and non--matches and, thus, avoids class imbalance problems. 
\item \textbf{Informativeness}: ensures that the samples presented to the user for labeling maximize the information gain and, thus, potentially speed up the matcher generalization. \item \textbf{Diversity}: ensures that the samples selected for user labeling cover a diversified range and, thus, prevents overfitting. \end{itemize} Here we emphasize a \textit{third cost--effectiveness property} of our proposal: decoupling the matching model reduces training times to an extent that enables iterative training over multiple active learning iterations using a proposed sampling strategy that eases the manual task of labeling data, as we now describe. \subsection{Bootstrapping for initial training data} Contrary to previous deep ER--specific AL approaches (e.g., \cite{kasai-2019}), in this paper we aim to automatically create the initial pool of labeled instances $\mathcal{L}$, factored as two disjoint subsets, $\mathcal{L^+} \cup \mathcal{L^-} = \mathcal{L}$, for matches and non--matches, respectively. To this end, we use Algorithm \ref{alg:al_boot} that relies on the latent space, modeled as described in Section \ref{sec:RL}, to identify the nearest and the furthest entity representations to act as initial positive and negative examples, respectively. \begin{algorithm}[t] \small \caption{\small AL bootstrapping} \begin{flushleft} \textbf{Input}: Tuples $T$, repres. model $\phi$, num. 
neighbours $k$ \\ \textbf{Output}: Pos/Neg/Unlabeled tuple pairs $\mathcal{L}^+$/$\mathcal{L}^-$/$\mathcal{U}$ \end{flushleft} \begin{algorithmic}[1] \Function{ALBootstrap}{} \State $\mathcal{U} \gets \{\}$ \State $R \gets \phi.\mathsf{predict}(T)$ \State $I \gets \mathsf{lsh\_index}(R)$ \ForAll{$t_i \in T\ and\ r_i \in R$} \State $N = I.\mathsf{lookup(r_i, k)}$ \ForAll{$n_j \in N$} \State $\mathcal{U} \gets \mathcal{U} \cup \{(t_i, n_j)\}$ \EndFor \EndFor \State $W_{min} = \min\limits_{(s, t) \in \mathcal{U}} W^2_2(\phi(s), \phi(t))$ \State $W_{max} = \max\limits_{(s, t) \in \mathcal{U}} W^2_2(\phi(s), \phi(t))$ \State $\mathcal{L}^+ \gets \{(s, t)| (s, t) \in \mathcal{U}, W^2_2(\phi(s), \phi(t)) \approx W_{min}\}$ \State $\mathcal{L}^- \gets \{(s, t)| (s, t) \in \mathcal{U}, W^2_2(\phi(s), \phi(t)) \approx W_{max}\}$ \State \Return $\mathcal{L}^+, \mathcal{L}^-$, $\mathcal{U} \setminus (\mathcal{L}^+ \cup \mathcal{L}^-)$ \EndFunction \end{algorithmic} \label{alg:al_boot} \end{algorithm} Specifically, given a collection of input tuples, $T$, and a representation model $\phi$, trained as described in Section \ref{sec:RL}, we first generate the pool of unlabeled candidates, $\mathcal{U}$, by performing nearest--neighbour search, e.g., using Locality Sensitive Hashing \cite{indyk-1998} with Euclidean distance \cite{datar-2004}, (Lines 3--10), i.e., each candidate is a pair of neighboring tuples $(s, t)$ that may or may not be duplicates. Note that the use of LSH based on Euclidean distance is possible because, looking back to Equation \ref{eq:wasserstein}, we observe that the $W_2$ distance of two $k$--dimensional Gaussian distributions $p=\mathcal{N}(\mu^p, \sigma^p)$ and $q=\mathcal{N}(\mu^q, \sigma^q)$ is \textit{positively correlated} with the squared Euclidean distance of their means, given by $\sum^{k}_{i=1} (\mu^p_i - \mu^q_i)^2$. 
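To make the seed selection of Algorithm \ref{alg:al_boot} (\textit{Lines 11--14}) concrete, the following is a minimal Python sketch. It assumes the closed form $W^2_2(p, q) = \lVert\mu^p - \mu^q\rVert^2_2 + \lVert\sigma^p - \sigma^q\rVert^2_2$ for diagonal Gaussians as our reading of Equation \ref{eq:wasserstein}; the \texttt{reprs} mapping and the function names are illustrative, not part of our implementation:

```python
def w2_squared(mu_p, sigma_p, mu_q, sigma_q):
    """Squared 2-Wasserstein distance between diagonal Gaussians.

    Assumes the closed form ||mu_p - mu_q||^2 + ||sigma_p - sigma_q||^2,
    which is positively correlated with the squared Euclidean distance
    of the means used by the LSH index.
    """
    return (sum((a - b) ** 2 for a, b in zip(mu_p, mu_q))
            + sum((a - b) ** 2 for a, b in zip(sigma_p, sigma_q)))

def euclidean_squared(mu_p, mu_q):
    """Squared Euclidean distance of the means: the LSH surrogate."""
    return sum((a - b) ** 2 for a, b in zip(mu_p, mu_q))

def bootstrap_seeds(pairs, reprs):
    """Pick initial positive/negative seeds from candidate pairs.

    `pairs` is a list of (s, t) tuple ids; `reprs` maps an id to a
    (mu, sigma) vector pair produced by the encoder (hypothetical
    interface). Returns the W2-closest pair as a positive seed and
    the W2-furthest pair as a negative seed, mirroring the min/max
    threshold selection of the bootstrapping algorithm.
    """
    dist = {p: w2_squared(*reprs[p[0]], *reprs[p[1]]) for p in pairs}
    return min(dist, key=dist.get), max(dist, key=dist.get)
```

Because the $W_2$ ordering of near-identical pairs is driven by the mean term, an index built on the $\mu$ vectors alone retrieves essentially the same neighbours.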
In other words, duplicate tuples that have $W_2$--close representations in the latent space will have Euclidean--close means as well. This observation allows us to use the Euclidean distance as a surrogate for finding similarity candidates and employ LSH algorithms to efficiently find them. Next, we compute the minimum and maximum distances over all candidate pairs in $\mathcal{U}$ using Equation \ref{eq:wasserstein} and use them as approximate thresholds for initial positive and negative samples (\textit{Lines 11, 12}). The sets of positive (i.e., $\mathcal{L}^+$) and negative (i.e., $\mathcal{L}^-$) samples are then given by the pairs whose distances are close to the minimum and maximum distances, respectively (\textit{Lines 13, 14}). The intuition behind Algorithm \ref{alg:al_boot} is that very similar tuples will end up with very small distances between their representations and we can automatically choose them as initial positive samples. Similarly, tuples that are very far apart can provide initial negative samples. \subsection{Sampling for AL in the latent space} Having obtained initial $\mathcal{L}^+$, $\mathcal{L}^-$ and $\mathcal{U}$, we now describe how we aim to iteratively improve these sets through a sampling strategy that exhibits the three properties mentioned at the beginning of this section. \subsubsection{Class balance} Given the unlabeled pool $\mathcal{U}$, in reality only very few of its instances are positives, while the vast majority are negatives. We therefore need to ensure sampling from both categories. We do so by treating positive and negative candidates separately, discriminating between them using the matching model $\gamma$, trained on $\mathcal{L}^+ \cup \mathcal{L}^-$ as described in Section \ref{sec:Matching}.
Specifically, instead of sampling from $\mathcal{U}$, we sample from each $\mathcal{U}^+ = \{(s, t)|(s, t) \in \mathcal{U}, p_{\gamma}(1|(s, t)) > 0.5\}$ and $\mathcal{U}^- = \{(s, t)|(s, t) \in \mathcal{U}, p_{\gamma}(1|(s,t)) \leq 0.5\}$, where $p_{\gamma}(1|(s,t))$ is the probability that a given sample $(s, t)$ is a positive under the current matching model $\gamma$ for that iteration. \subsubsection{Informativeness} Given the unlabeled pool $\mathcal{U}$, AL theory hypothesizes that there is a sub--set in $\mathcal{U}$ of minimal size that offers maximum generalization power to a matching model. We therefore need to get as close as possible to this sub--set in order to maximize the information gain of the resulting model. We do so by using one of the most common measures for the amount of information carried by a potential training sample: the \textit{entropy} of the conditional probability under the model being improved \cite{burr-2009}. More formally, given $\gamma$ and $\mathcal{U} = \mathcal{U}^+ \cup \mathcal{U}^-$, we can use the entropy measure, given by Equation \ref{eq:entropy} for the binary case, to choose the most informative instances in $\mathcal{U}$ to be labeled by the user. \begin{equation} \label{eq:entropy} \small \begin{split} H_{\gamma}(s, t) &= -p_{\gamma}(y|(s,t))\log(p_{\gamma}(y|(s,t))) \\ &- (1 - p_{\gamma}(y|(s,t)))\log(1 - p_{\gamma}(y|(s,t))) \end{split} \end{equation} \noindent where $y$ is the predicted class for a tuple pair $(s, t) \in \mathcal{U}$ and $p_{\gamma}(y|(s,t))$ is the predicted probability of $y$ under the current model $\gamma$. Intuitively, $H_{\gamma}(s, t)$ is high for pairs of uncertain tuples whose probability of being duplicate/non--duplicate is close to $0.5$, and low otherwise. Therefore, in our sampling strategy, those $(s, t)$ from $\mathcal{U}$ that have a high entropy are most informative to the model.
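As an illustration, the binary entropy of Equation \ref{eq:entropy} and the class--balanced split of $\mathcal{U}$ can be sketched as follows; \texttt{p\_gamma} stands in for the trained matcher's positive probability, and the function names are illustrative:

```python
import math

def binary_entropy(p, eps=1e-12):
    """Binary entropy of a predicted positive probability p.

    Peaks at p = 0.5 (the most uncertain pair) and vanishes as p
    approaches 0 or 1.
    """
    p = min(max(p, eps), 1.0 - eps)  # guard against log(0)
    return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def class_balanced_split(candidates, p_gamma):
    """Split unlabeled pairs into predicted positives U+ and negatives U-.

    `p_gamma` is a callable returning the positive probability of a
    pair (a stand-in for the trained Siamese matching model gamma).
    """
    u_pos = [c for c in candidates if p_gamma(c) > 0.5]
    u_neg = [c for c in candidates if p_gamma(c) <= 0.5]
    return u_pos, u_neg
```

Sampling high-entropy pairs from $\mathcal{U}^+$ and $\mathcal{U}^-$ separately keeps the labeled set balanced while still focusing on the decision boundary.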
\subsubsection{Diversity} Given the unlabeled pool $\mathcal{U}$, in addition to entropy, we also have to consider the distance between the tuple representations of each instance. This is because treating positives and negatives separately is often not enough to ensure class balance, i.e., many of the predicted positives with high entropy are in reality negatives. We therefore sample positive candidates that have a high positive probability and a distance between their tuple representations similar to the distances associated with samples from $\mathcal{L}^+$. However, most of the initial $\mathcal{L}^+$ constituents have very small distances between their tuples. Consequently, the Wasserstein vectors computed in the \textit{Distance} layer of Figure \ref{fig:m_arch} will be similar, if not identical. We therefore need to ensure that the AL sampling strategy chooses more diverse positive candidates. We do so by considering the entire distribution of distances between tuple representations of members of $\mathcal{L}^+$. More specifically, given a set of duplicates $\mathcal{L}^+$ and a representation model $\phi$, we repeatedly sample from each $\phi(s)$ and $\phi(t)$, with $(s, t) \in \mathcal{L}^+$, using the \textit{Sampling} step from Figure \ref{fig:arch}, i.e., the VAE's \textit{reparameterization trick} \cite{kingma-2014}, to obtain a distribution of possible Euclidean distances $D^+$ between duplicate representations, as shown in Equation \ref{eq:dist_dist}. \begin{equation} \label{eq:dist_dist} \small \begin{split} D^+ &= \{\left\Vert z_i^s - z_i^t \right\Vert_{2} \mid (s, t) \in \mathcal{L}^+; \\ & z_i^s \in \mathsf{sampling}(\phi(s)); z_i^t \in \mathsf{sampling}(\phi(t))\} \end{split} \end{equation} Recall that, given a trained variational encoder $\phi$, every sample from the approximated distribution for a tuple $t$ produced by $\phi(t)$ can act as a viable encoding from which the input can be decoded.
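A minimal sketch of building $D^+$ via the reparameterization trick and estimating $\widehat{f}^+(d)$ with a Gaussian KDE is given below; the \texttt{phi} mapping, the bandwidth value, and the function names are illustrative assumptions rather than our exact implementation:

```python
import math
import random

def sample_z(mu, sigma, rng):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1)."""
    return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mu, sigma)]

def distance_distribution(duplicates, phi, n_samples=1000, seed=0):
    """Build D+: Euclidean distances between sampled encodings of
    known duplicate pairs. `phi` maps a tuple id to its (mu, sigma)
    pair (hypothetical interface)."""
    rng = random.Random(seed)
    dists = []
    for s, t in duplicates:
        for _ in range(n_samples):
            zs = sample_z(*phi[s], rng)
            zt = sample_z(*phi[t], rng)
            dists.append(math.dist(zs, zt))
    return dists

def kde(dists, bandwidth=0.1):
    """Gaussian kernel density estimate over D+: returns f^+(d)."""
    n = len(dists)
    norm = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    def f(d):
        return norm * sum(math.exp(-0.5 * ((d - x) / bandwidth) ** 2)
                          for x in dists)
    return f
```

In practice a library estimator with automatic bandwidth selection would replace the hand-rolled \texttt{kde}; the sketch only fixes the interface used by the sampling strategy.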
Therefore, by repeatedly sampling from each $\phi(s)$ and $\phi(t)$, with $(s, t) \in \mathcal{L}^+$, Equation \ref{eq:dist_dist} produces a distribution of possible distances between duplicate representations\footnote{{\small We need to repeatedly sample, e.g., 1000 samples for each tuple, because $\mathcal{L}^+$ alone often contains insufficient samples to accurately estimate the distance distribution.}}. Having obtained $D^+$, we can employ Kernel Density Estimators (KDE) \cite{silverman-1986} using Gaussian kernels to estimate a univariate probability density function, $\widehat{f}^+(d)$, over all distances $d \in \mathcal{D^+}$. Then, $\widehat{f}^+(d)$ can be applied on the distances between tuple representations of each unlabeled instance from $\mathcal{U}$ to identify the ones that are most likely to be positives, and use this information in addition to entropy. \subsubsection{Balanced, informative and diverse sampling} Algorithm \ref{alg:AL} unifies all the steps discussed in this section. Specifically, once the initial sets of unlabeled/labeled samples have been generated (i.e., \textit{Line 2}), the initial matcher trained (i.e., \textit{Line 3}), and the initial positive probability density function estimated (i.e., \textit{Line 4}), we iteratively proceed as follows: \begin{algorithm}[t] \small \caption{\small AL for ER} \begin{flushleft} \textbf{Input}: Tuples $T$, repres. model $\phi$, num. iterations $I$, num. 
neighbours $k$\\ \textbf{Output}: Matching model $\gamma$ \end{flushleft} \begin{algorithmic}[1] \Function{AL}{} \State $\mathcal{L}^+, \mathcal{L}^-, \mathcal{U} \gets \textproc{ALBootstrap}(T, \phi, k)$ \State $\gamma \gets \mathsf{train}(\mathcal{L}^+, \mathcal{L}^-)$ \State $\widehat{f}^+(d) \gets \mathsf{KDE}(\mathcal{L}^+)$ \ForAll{$i \in I$} \State $c^+ \gets \operatorname*{arg\,min}_{(s, t) \in \mathcal{U}^+} H_{\gamma}(s, t) \times \frac{1}{\widehat{f}^+(d(s,t))}$ \State $c^- \gets \operatorname*{arg\,min}_{(s, t) \in \mathcal{U}^-} H_{\gamma}(s, t) \times \widehat{f}^+(d(s,t))$ \State $u^+ \gets \operatorname*{arg\,min}_{(s, t) \in \mathcal{U}^+} \frac{1}{H_{\gamma}(s, t)} \times \widehat{f}^+(d(s,t))$ \State $u^- \gets \operatorname*{arg\,min}_{(s, t) \in \mathcal{U}^-} \frac{1}{H_{\gamma}(s, t)} \times \frac{1}{\widehat{f}^+(d(s,t))}$ \State $\mathsf{label}(c^+);\ \mathsf{label}(c^-);\ \mathsf{label}(u^+);\ \mathsf{label}(u^-)$ \State $\mathcal{L}^+ \gets \mathcal{L}^+ \cup \{c^+, u^+\};\ \mathcal{L}^- \gets \mathcal{L}^- \cup \{c^-, u^-\}$ \State $\mathcal{U} \gets \mathcal{U} \setminus (\mathcal{L}^+ \cup \mathcal{L}^-)$ \State $\gamma \gets \mathsf{train}(\mathcal{L}^+, \mathcal{L}^-)$ \State $\widehat{f}^+(d) \gets \mathsf{KDE}(\mathcal{L}^+)$ \EndFor \State \Return $\gamma$ \EndFunction \end{algorithmic} \label{alg:AL} \end{algorithm} \noindent \textbf{Certain positives}. We identify duplicate candidates with low entropy and high distance likelihood as certain positives (i.e., \textit{Line 6}). Intuitively, these are instances characterized by a high positive probability under the current model and a distance value between their representations \textit{similar} to the distances associated with $\mathcal{L}^+$. \noindent \textbf{Certain negatives}. We identify non--duplicate candidates with low entropy and low distance likelihood as certain negatives (i.e., \textit{Line 7}).
Intuitively, these are instances characterized by a high negative probability under the current model and a distance value between their representations \textit{dissimilar} to the distances associated with $\mathcal{L}^+$. \noindent \textbf{Uncertain positives}. We identify duplicate candidates with high entropy and low distance likelihood as uncertain positives (i.e., \textit{Line 8}). Intuitively, these are instances characterized by a low positive probability under the current model (although higher than $0.5$) and a distance value between their representations \textit{dissimilar} to the distances associated with $\mathcal{L}^+$. \noindent \textbf{Uncertain negatives}. We identify non--duplicate candidates with high entropy and high distance likelihood as uncertain negatives (i.e., \textit{Line 9}). Intuitively, these are instances characterized by a low negative probability under the current model (although higher than $0.5$) and a distance value between their representations \textit{similar} to the distances associated with $\mathcal{L}^+$. Algorithm \ref{alg:AL} identifies two types of samples for each class, \textit{viz.} certain and uncertain. Uncertain samples have high informative value for the model since they are close to the decision boundary (i.e., high entropy) and have surprising distances between their tuple representations given their predicted class. Conversely, the purpose of certain samples is to prevent overfitting to the selected uncertain instances. Finally, in Algorithm \ref{alg:AL}, user involvement is only required at \textit{Line 10} and, although Algorithm \ref{alg:AL} assumes sampling one instance of each type per iteration, in practice the algorithm can easily be extended to perform batch sampling by choosing the top-$k$ instances at each of \textit{Lines 6,7,8,9}. 
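The four selections at \textit{Lines 6--9} of Algorithm \ref{alg:AL} can be sketched as follows, with \texttt{entropy} and \texttt{density} standing in for $H_{\gamma}$ and for $\widehat{f}^+$ evaluated on a pair's representation distance (illustrative interfaces, not our exact implementation):

```python
def select_samples(u_pos, u_neg, entropy, density):
    """One sampling step of the AL loop.

    `entropy(pair)` plays the role of H_gamma and `density(pair)` the
    role of the KDE estimate f^+ on the pair's representation distance
    (both assumed nonzero here). Returns the certain positive, certain
    negative, uncertain positive and uncertain negative picks.
    """
    # certain positive: low entropy, high f^+  -> minimize H / f^+
    c_pos = min(u_pos, key=lambda p: entropy(p) / density(p))
    # certain negative: low entropy, low f^+   -> minimize H * f^+
    c_neg = min(u_neg, key=lambda p: entropy(p) * density(p))
    # uncertain positive: high entropy, low f^+ -> minimize f^+ / H
    unc_pos = min(u_pos, key=lambda p: density(p) / entropy(p))
    # uncertain negative: high entropy, high f^+ -> minimize 1 / (H * f^+)
    unc_neg = min(u_neg, key=lambda p: 1.0 / (entropy(p) * density(p)))
    return c_pos, c_neg, unc_pos, unc_neg
```

Batch sampling is then just a matter of replacing each \texttt{min} with a sort that keeps the top-$k$ pairs per criterion.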
\section{Evaluation} \label{sec:Evaluation} In this paper we set out to decrease the cost associated with ER tasks, and approached this by decoupling feature learning from matching tasks. In this section, we empirically show that this decoupling contributes to the cost reduction desideratum without compromising effectiveness. \subsection{Experimental setup} \label{subsec:setup} \subsubsection{Datasets} We conduct experiments on nine datasets from eight domains. Table \ref{tab:data} shows an overview of the evaluation data used in this section. Each domain (i.e., \textbf{Domain} column) presents two tables (of \textbf{Card.} cardinality) between which we aim to perform ER, with the same \textbf{Arity} and aligned attributes. Each domain also comes with a training set with duplicate and non--duplicate example pairs (of \textbf{Training} size), and a similar, albeit smaller, test set (of \textbf{Test} size). Datasets marked with $^\dag$ are \textit{clean} with few missing values. Datasets marked with $^\ddag$ are \textit{noisy} and more challenging to perform ER on due to their many missing values and unstructured attributes, e.g., product descriptions. Finally, the first seven domains have been previously used in ER benchmarks\footnote{Public datasets, together with their training/test instances, available at \url{www.github.com/anhaidgroup/deepmatcher/blob/master/Datasets.md}} (e.g., \cite{ebraheem-2018, mudgal-2018}), while the last two are private datasets from \textit{Peak AI} with data about clothing products and person contacts. 
\begin{table}[t] \centering \footnotesize \caption{\small Datasets used in the experiments} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Domain}& \textbf{Card.} & \textbf{Arity} & \textbf{Training} & \textbf{Test} \\ \hline Restaurants$^\dag$ & 533/331 & 6 & 567 & 189 \\ \hline Citations 1$^\dag$ & 2616/2294 & 4 & 7417 & 2473 \\ \hline Citations 2$^\dag$ & 2612/64263 & 4 & 17223 & 5742 \\ \hline Cosmetics$^\ddag$ & 11026/6443 & 3 & 327 & 81 \\ \hline Software$^\ddag$ & 1363/3226 & 3 & 6874 & 2293 \\ \hline Music$^\ddag$ & 6907/55923 & 8 & 321 & 109 \\ \hline Beer$^\ddag$ & 4345/3000 & 4 & 268 & 91 \\ \hline Stocks$^\ddag$ & 2768/21863 & 8 & 4472 & 1117 \\ \hline CRM$^\dag$ & 5742/9683 & 12 & 440 & 220 \\ \hline \end{tabular} \label{tab:data} \end{table} \subsubsection{Baselines and reported measures} We build our evaluation\footnote{All experiments have been run using \textit{PyTorch} on a \textit{Python 3 Google Compute Engine Backend} with \textit{12 GB RAM} and \textit{GPU acceleration}.} around \textit{Precision (P)}, \textit{Recall (R)} and \textit{F--measure (F1)}, measured on the test datasets. For the purposes of computing these measures, we define as a true positive \textit{(tp)} any pair of tuples marked as a duplicate in both the test set and the evaluated results; as a false positive \textit{(fp)} any pair of tuples marked as a non--duplicate in the test set and as a duplicate in the evaluated results; and as a false negative \textit{(fn)} any pair of tuples marked as a duplicate in the test set and as a non--duplicate in the evaluated results. Then, $P=\frac{tp}{tp+fp};\ R=\frac{tp}{tp+fn};\ F1=\frac{2 \times P \times R}{P + R}$. For evaluating the representation learning task we consider simple LSH--based top-$k$ nearest neighbour \cite{datar-2004} baselines using \textit{LSA}--, \textit{word2vec (W2V)}--, \textit{BERT}--, and \textit{EmbDI}--generated tuple representations.
We compare each of these against a top-$k$ nearest neighbour approach that uses \textit{VAER} representations generated from \textit{LSA}--, \textit{word2vec (W2V)}--, \textit{BERT}--, and \textit{EmbDI}--based $IR$s, respectively. Here, we aim to show the generality of our approach and the effectiveness of the representation learning task over different types of $IR$s. For evaluating the matching task, we consider \textit{DeepER} \cite{ebraheem-2018}, \textit{DeepMatcher} \cite{mudgal-2018}, and \textit{DITTO} \cite{yuliang-20} as state--of--the--art supervised ER systems. These systems, while highly effective, do not allow for transferability or practical training times for reasons mentioned in Section \ref{sec:Relwork}. We aim to demonstrate \textit{VAER}'s superiority in these respects, while keeping similar levels of effectiveness. Other ER proposals, such as \textit{Magellan} \cite{konda-2016} or \textit{ZeroER} \cite{wu-2020}, are not considered in this paper since they do not rely on deep learning and have already been extensively compared against deep learning approaches (e.g., \cite{mudgal-2018}, \cite{wu-2020}). \subsubsection{VAER configuration} The most important hyper--parameters are listed in Table \ref{tab:params}. The values have been obtained through hyper--parameter optimization using a validation set that comes with the evaluation datasets. The number of neighbours $K$ and the margin $M$ of the matching loss function are data dependent. However, the values shown in the table led to good results for all the domains used in the evaluation. The baselines use the default configurations from their available implementations. \begin{table}[t] \centering \footnotesize \caption{\small Hyperparameters of VAER} \begin{tabular}{|c|c|c|} \hline \textbf{Component} & \textbf{Parameter} & \textbf{Value} \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Repr.
\\ learning\end{tabular}} & VAE hidden dimension & 200 \\ \cline{2-3} & VAE latent dimension & 100 \\ \hline Matching & Margin M & .5 \\ \hline AL & Samples/iteration & 10 \\ \hline AL & Top neighbours K & 10 \\ \hline \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Repr. learning \&\\ matching\end{tabular}} & Optimizer & Adam \\ \cline{2-3} & Learning rate & 0.001 \\ \hline \end{tabular} \normalsize \label{tab:params} \end{table} \subsection{Representation learning experiments} \label{subsec:rl_exp} \begin{table*}[t] \centering \footnotesize \caption{\small \textit{VAER} representation learning P/R/F1 showing consistency across all $IR$ types} \begin{tabular}{|l|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|p{.2cm}|} \hline \textbf{Domain} & \multicolumn{6}{c|}{\textbf{LSA/VAER$^{LSA}$}} & \multicolumn{6}{c|}{\textbf{W2V/VAER$^{W2V}$}} & \multicolumn{6}{c|}{\textbf{BERT/VAER$^{BERT}$}} & \multicolumn{6}{c|}{\textbf{EmbDI/VAER$^{EmbDI}$}} \\ \hline & \multicolumn{2}{c|}{\textbf{P}} & \multicolumn{2}{c|}{\textbf{R}} & \multicolumn{2}{c|}{\textbf{F1}} & \multicolumn{2}{c|}{\textbf{P}} & \multicolumn{2}{c|}{\textbf{R}} & \multicolumn{2}{c|}{\textbf{F1}} & \multicolumn{2}{c|}{\textbf{P}} & \multicolumn{2}{c|}{\textbf{R}} & \multicolumn{2}{c|}{\textbf{F1}} & \multicolumn{2}{c|}{\textbf{P}} & \multicolumn{2}{c|}{\textbf{R}} & \multicolumn{2}{c|}{\textbf{F1}} \\ \hline Rest. &.17&.17&\textbf{1}&\textbf{1}&.29&.29&.31&.23&.95&\textbf{1}&.47&.37&.26&.24&.95&\textbf{1}&.4&.41&.23&.23&\textbf{1}&\textbf{1}&.37&.37 \\ \hline Cit. 1 &.49&.51&.98&\textbf{1}&.64&.68&.57&.56&.38&\textbf{.98}&.46&.72&.49&.53&.98&\textbf{1}&.65&.69&.5&.47&.89&\textbf{1}&.65&.64 \\ \hline Cit. 2 &.6&.67&.89&\textbf{.91}&.7&.77&.75&.77&.51&\textbf{.82}&.6&.8&.61&.75&.64&\textbf{.83}&.63&.79&.59&.7&\textbf{.94}&.93&.72&.8 \\ \hline Cosm. 
&.65&.68&\textbf{.85}&.83&.74&.76&.74&.65&.84&\textbf{.89}&.78&.76&.65&.78&.7&\textbf{.78}&.67&.78&.66&.75&.14&\textbf{.25}&.24&.35 \\ \hline Soft. &.21&.25&.72&\textbf{.79}&.33&.39&.22&.23&\textbf{.83}&.8&.35&.36&.26&.29&.6&\textbf{.68}&.37&.41&.28&.28&\textbf{.94}&.93&.43&.43 \\ \hline Music &.58&.65&.77&\textbf{.82}&.66&.73&.6&.62&.84&\textbf{.85}&.69&.71&.7&.68&.87&\textbf{.93}&.77&.79&.72&.66&.29&\textbf{.86}&.42&.75 \\ \hline Beer &.44&.48&.84&\textbf{.86}&.58&.62&.44&.5&\textbf{.84}&.8&.58&.62&.47&.57&.78&\textbf{.79}&.59&.67&.7&.64&.91&\textbf{1}&.78&.79 \\ \hline Stocks &1&1&.79&\textbf{.82}&.88&.9&1&1&.35&\textbf{.45}&.54&.62&1&1&.64&\textbf{.7}&.78&.82&1&.99&.23&\textbf{.77}&.54&.86 \\ \hline CRM &1&.97&.68&\textbf{.81}&.79&.89&.98&.97&\textbf{.9}&.85&.94&.92&.96&.98&.56&\textbf{.8}&.71&.88&1&.8&\textbf{1}&.88&.1&.84 \\ \hline \end{tabular} \label{tab:repr} \end{table*} In this subsection, we evaluate the similarity--preserving nature of our entity representations in unsupervised settings. Concretely, we perform LSH top-$K$ nearest--neighbour search, with $K =10$, on $IRs$ generated using each of the techniques from Section \ref{subsec:IR}. We compare the results against a similar nearest--neighbour search on representations generated by the encoding layer of Figure \ref{fig:arch} with inputs of corresponding types. We note that such an approach can also act as a \textit{blocking} step in an end--to--end ER process (similar to \cite{ebraheem-2018}) and, therefore, aim for \textit{high recall} because missed duplicates at this step would be unrecoverable by matching. 
Table \ref{tab:repr} shows the values for precision, recall and F1 score \textit{@ K=10}, for $IR$ nearest--neighbour search\footnote{For each tuple pair in the test set, we measure the effectiveness against the top-$10$ most similar neighbours of either of the two tuples in the pair.} (i.e., left--hand--side values) compared against the results of a \textit{VAER} tuple representations nearest--neighbour search with $IR$ inputs of the corresponding type (i.e., right--hand--side values). Since each representation returned by the variational encoder is a $(\mu, \sigma)$ vector--pair, we perform the search on $\mu$ vectors and reorder the results according to the $W^2_2$ distance from Eq. \ref{eq:wasserstein} to include the $\sigma$ vectors as well. Overall, the results show the consistency and the potential of VAE encodings to improve the effectiveness of an unsupervised $IR$--based ER task across all types of $IRs$. \textit{LSA} seems to be the most robust $IR$--type choice, with \textit{BERT} and \textit{EmbDI} closely following. In the case of \textit{Cosmetics}, there are many similar entities that only diverge in one attribute, e.g., \textit{color}. The representations of such tuples tend to be very similar, especially when generated using \textit{EmbDI}, which leads to lower recall values. The lowest recall value, i.e., for \textit{Software}, is determined by noisy and missing data. However, \textit{EmbDI} copes well with such data inconsistencies and this behavior is preserved after VAE encoding. \begin{figure}[t] \centering \includegraphics[width=.75\linewidth]{"./img/recall_k"} \caption{\small $VAER^{LSA}$'s recall@$K$ as $K$ increases} \label{fig:r_k} \end{figure} The effectiveness of the nearest--neighbor search depends on the value of $K$. Figure \ref{fig:r_k} shows that most of the $VAER^{LSA}$ cases that did not already achieve good recall in Table \ref{tab:repr} (i.e., the last six domains) can be improved by increasing $K$. 
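The two--stage search described above, i.e., nearest--neighbour retrieval on the $\mu$ vectors followed by reordering on $W^2_2$, can be sketched as follows, again assuming the diagonal--Gaussian closed form for $W^2_2$ as our reading of Equation \ref{eq:wasserstein} (function names are illustrative):

```python
def w2sq(p, q):
    """Assumed closed-form squared 2-Wasserstein distance between
    diagonal Gaussians given as (mu, sigma) vector pairs."""
    (mu_p, sg_p), (mu_q, sg_q) = p, q
    return (sum((a - b) ** 2 for a, b in zip(mu_p, mu_q))
            + sum((a - b) ** 2 for a, b in zip(sg_p, sg_q)))

def rerank_by_w2(query, candidates, k=10):
    """Second search stage: LSH retrieves neighbours by mu vectors
    alone; here the sigma vectors are folded back in by sorting the
    retrieved candidates on the full W2^2 distance."""
    return sorted(candidates, key=lambda c: w2sq(query, c))[:k]
```

The reranking step only touches the (small) candidate set returned by the index, so its cost is negligible compared to the initial retrieval.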
The experiments from this subsection show that VAE--generated tuple representations are similarity--preserving and that the proposed representation learning method is \textit{cost--effective}, in the sense that it relieves the user of deciding on the features that govern the similarity between representations, and robust across multiple types of $IR$. \subsection{Supervised matching experiments} \label{subsec:m_exp} In this section, we evaluate the supervised matching task, hypothesizing that \textit{our matching model is more efficient than the baselines}, e.g., \textit{DeepER} (DER), \textit{DeepMatcher} (DM), and \textit{DITTO}, \textit{without compromising effectiveness}. \begin{table*}[t] \centering \footnotesize \caption{\small Similarity learning results showing \textit{VAER}'s matching effectiveness} \begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{Domain} & \multicolumn{3}{c|}{\textbf{VAER}$^{LSA}$} & \multicolumn{3}{c|}{\textbf{DER}} & \multicolumn{3}{c|}{\textbf{DM}} & \multicolumn{3}{c|}{\textbf{DITTO}} \\ \hline & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline Rest. &1&.97&\textbf{.99}&.95&1&.97&.95&1&.97&1&.95&.97 \\ \hline Cit. 1 &.97&1&\textbf{.99}&.96&.99&.97&.96&.99&.97&1&.99&.99 \\ \hline Cit. 2 &.9&.9&.9&.9&.92&.91&.94&.94&\textbf{.94}&.97&.86&.91 \\ \hline Cosm. &.87&.94&\textbf{.91}&.83&.96&.89&.89&.92&.9&.91&.81&.86 \\ \hline Soft.
&.62&.64&.63&.62&.62&.62&.59&.64&.62&.72&.71&\textbf{.71} \\ \hline Music &.86&.86&.86&.78&.9&.83&.95&.81&\textbf{.88}&.78&1&.87 \\ \hline Beer &.75&.85&.8&.59&.92&.72&.63&.85&.72&.72&.92&\textbf{.81} \\ \hline Stocks &.99&.99&.99&1&1&\textbf{1}&.99&.99&.99&.99&.98&.98 \\ \hline CRM &.97&.99&\textbf{.99}&.96&.94&.95&.98&.97&.97&.94&.98&.96 \\ \hline \end{tabular} \label{tab:match} \end{table*} Table \ref{tab:match} shows the precision, recall and F1 score of the model from Figure \ref{fig:m_arch} using \textit{LSA} $IRs$, \textit{DeepER}, \textit{DeepMatcher}, and \textit{DITTO}, each trained on the given training samples for each of the domains in Table \ref{tab:data}. We have empirically chosen \textit{LSA} $IRs$ due to their better performance compared to other $IR$ types. However, the differences are marginal, e.g., on average, the F1--score difference between \textit{LSA} and \textit{W2V}/\textit{BERT} was $0.06$. The highlighted values denote cases where \textit{VAER} performs better than its competitors. Conversely, in the cases where one of the competitors achieves better results, e.g., \textit{Citations 2} and \textit{Stocks}, the differences are minimal. \textit{Software} is a case of particular interest because it proved problematic for all ER solutions evaluated. This is because its tables contain only three columns, one of which is numerical, one contains software product descriptions that are challenging to compare, and the third presents many missing values. \begin{table}[t] \centering \footnotesize \caption{\small Training times (s)} \begin{tabular}{|l|c|c|c|c|c|} \hline \textbf{Domain} & \multicolumn{2}{c|}{\textbf{VAER}$^{LSA}$} & \textbf{DER} & \textbf{DM} & \textbf{DITTO} \\ \hline & \textbf{Repr.} & \textbf{Match} &\textbf{Match}&\textbf{Match}&\textbf{Match} \\ \hline Rest. &\textbf{4.37}&\textbf{2.5}&84.5&258.79&93.51 \\ \hline Cit. 1 &\textbf{23.5}&\textbf{10.14}&549.65&1022.31&100.94 \\ \hline Cit.
2 &\textbf{127.84}&\textbf{23.6}&1145.57&2318.89&1523.93 \\ \hline Cosm. &83.1&1.73&\textbf{33.88}&103.12&84.17 \\ \hline Soft. &\textbf{21.95}&\textbf{19.43}&552.26&986.07&679.47 \\ \hline Music &335.32&1.4&\textbf{62.28}&160.15&64.18 \\ \hline Beer &57.29&4.61&\textbf{33.61}&58.76&59.96 \\ \hline Stocks &\textbf{182.29}&\textbf{17.29}&836.94&1509.49&436.85 \\ \hline CRM &81.31&1.88&\textbf{40.23}&121.76&85.83 \\ \hline \end{tabular} \label{tab:ttime} \end{table} We can conclude from Table \ref{tab:match} that our proposed Siamese matching model \textit{achieves state--of--the--art effectiveness levels}. Additionally, the decoupling of feature learning and matching can \textit{lead to a significant decrease in training time} for some of the evaluated domains, as shown in Table \ref{tab:ttime}. The table compares the combined training times of $VAER^{LSA}$'s representation and matching models\footnote{Using other types of $IRs$ yields similar results, with only marginal differences.} against \textit{DeepER}, \textit{DeepMatcher}, and \textit{DITTO}. For five out of nine cases highlighted in Table \ref{tab:ttime}, \textit{VAER} requires orders of magnitude lower training times. This is by virtue of $IRs$ that impose reduced input dimensions, and by virtue of the simple architectures of the representation and matching models. The other four cases, \textit{Cosmetics}, \textit{Music}, \textit{Beer} and \textit{CRM}, are domains with many tuples to match and reduced amounts of training data. This suggests that \textit{VAER}'s representation training time is dominated by the size of the input tables, while \textit{VAER}'s matching training time, similarly to the baselines, is dominated by the size of the training set. In practice, training representations on large input tables can be accelerated by training on just a sample of all tuples. Alternatively, transfer learning can be employed, as we show in the next experiment.
\subsection{Transferability experiments} \label{subsec:trf_exp} \begin{table}[t] \centering \footnotesize \caption{\small Recall/F1 score with local/transferred repr. models.} \begin{tabular}{|l|c|c|c|c|c|c|} \hline \textbf{Domain} & \multicolumn{3}{c|}{\textbf{Repr. recall@$K$}} & \multicolumn{3}{c|}{\textbf{Matching F1}} \\ \hline & \textbf{Local} & \textbf{Transf.} & $\Delta$ & \textbf{Local} & \textbf{Transf.} & $\Delta$ \\ \hline Rest. &1&1&0&.97&.96&-.01 \\ \hline Cit. 1 &.99&1&+.01&.99&.97&-.02 \\ \hline Cit. 2 &.91&.91&0&.9&.9&0 \\ \hline Cosm. &.83&.83&0&.86&.85&-.01 \\ \hline Soft. &.8&.79&-.01&.59&.57&-.02 \\ \hline Music &.79&.75&-.04&.8&.78&-.02 \\ \hline Beer &.86&.86&0&.79&.77&-.02 \\ \hline Stocks &.79&.79&0&.95&.97&+.02 \\ \hline CRM &.81&.84&+.03&.97&.98&+.01 \\ \hline \end{tabular} \label{tab:transf_r_f} \end{table} \begin{table*}[t] \centering \footnotesize \caption{\small Active Learning results showing data labeling cost--reductions.} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|cc} \cline{2-10} & \multicolumn{3}{c|}{$\boldsymbol{Precision}$} & \multicolumn{3}{c|}{$\boldsymbol{Recall}$} & \multicolumn{3}{c|}{$\boldsymbol{F1}$} & & \\ \hline \multicolumn{1}{|c|}{\textbf{Domain}} & \textbf{Bootstrap} & \textbf{A250} & \textbf{Full} & \textbf{Bootstrap} & \textbf{A250} & \textbf{Full} & \textbf{Bootstrap} & \textbf{A250} & \textbf{Full} & \multicolumn{1}{c|}{\textbf{F1 \%}} & \multicolumn{1}{c|}{\textbf{Training \%}} \\ \hline \multicolumn{1}{|c|}{Rest.} & .73 & 1 & .94 & .6 & 1 & 1 & .65 & 1 & .97 & \multicolumn{1}{c|}{\textbf{103\%}} & \multicolumn{1}{c|}{\textbf{44\%}} \\ \hline \multicolumn{1}{|c|}{Cit. 1} & .96 & .95 & .97 & .84 & .97 & 1 & .89 & .95 & .99 & \multicolumn{1}{c|}{\textbf{96\%}} & \multicolumn{1}{c|}{\textbf{3.3\%}} \\ \hline \multicolumn{1}{|c|}{Cit.
2} & .9 & .7 & .9 & .33 & .8 & .9 & .48 & .74 & .9 & \multicolumn{1}{c|}{82\%} & \multicolumn{1}{c|}{1.4\%} \\ \hline \multicolumn{1}{|c|}{Cosm.$^\dag$} & .67 & .8 & .87 & .91 & .85 & .94 & .77 & .82 & .91 & \multicolumn{1}{c|}{\textbf{90\%}} & \multicolumn{1}{c|}{76\%} \\ \hline \multicolumn{1}{|c|}{Soft.} & .25 & .56 & .62 & .41 & .38 & .64 & .31 & .45 & .63 & \multicolumn{1}{c|}{71\%} & \multicolumn{1}{c|}{3.6\%} \\ \hline \multicolumn{1}{|c|}{Music} & .46 & .8 & .86 & .63 & .83 & .86 & .53 & .81 & .86 & \multicolumn{1}{c|}{\textbf{94\%}} & \multicolumn{1}{c|}{\textbf{76\%}} \\ \hline \multicolumn{1}{|c|}{Beer$^\dag$} & .51 & .71 & .75 & .55 & .73 & .85 & .52 & .71 & .8 & \multicolumn{1}{c|}{89\%} & \multicolumn{1}{c|}{92\%} \\ \hline \multicolumn{1}{|c|}{Stocks$^\dag$} & .99 & .95 & .99 & .83 & .85 & .99 & .90 & .89 & .99 & \multicolumn{1}{c|}{\textbf{90\%}} & \multicolumn{1}{c|}{\textbf{5.5\%}} \\ \hline \multicolumn{1}{|c|}{CRM} & .83 & .78 & .97 & .63 & .88 & .99 & .71 & .82 & .98 & \multicolumn{1}{c|}{84\%} & \multicolumn{1}{c|}{56\%} \\ \hline \end{tabular} \label{tab:al} \end{table*} Recall that by decoupling the feature and matching learning tasks, \textit{VAER} enables the transferability of the representation model to other use--cases. In this experiment, we test our hypothesis that \textit{transferability decreases the training--time costs without impacting effectiveness}. We support this hypothesis first by observing that, when representation models are transferred from other ER cases, the representation learning times from Table \ref{tab:ttime} are no longer required, making the final training time--cost of \textit{VAER} dependent only on the matching model and, therefore, \textit{drastically smaller than the baselines' requirements}.
Further, in Table \ref{tab:transf_r_f}, we show that the representation learning recall $@K=10$ and the matching F1 scores \textit{remain mostly unchanged} when the representation model is transferred from another domain. More specifically, we first train a \textit{VAER}$^{LSA}$ representation model on all tuples of the \textit{Citations 2} domain (call it the \textit{transferred representation model}). Similarly, we train eight other \textit{VAER}$^{LSA}$ representation models, one on each of the remaining eight domains (call these the \textit{local representation models}). Then, we measure the recall obtained when performing unsupervised ER (i.e., similar to the experiment from Section \ref{subsec:rl_exp}) on each of the eight domains using the transferred model from \textit{Citations 2} and using the corresponding local representation model. Similarly, we measure the \textit{F1}--score of the matching task when the siamese encoders from Figure \ref{fig:m_arch} are initialized with the parameters of the transferred model and with the parameters of the corresponding local model. Note that the transferability case restricts the input tables to have the same arity, $a$, expected by the transferred model. Therefore, when the tested dataset has a higher arity we use the first $a$ columns, and we pad datasets with lower arity with empty columns. This is why the values corresponding to the local models are different from the ones in Tables \ref{tab:repr} and \ref{tab:match}. In practice, a blend of duplicate and non--duplicate tuples from multiple past ER use--cases can be used to create a robust transferable representation model. Overall, this experiment shows the domain--agnostic nature of the representation learning task, i.e., the representation model can be transferred from other ER tasks with marginal impact on the quality of the representations or the matching effectiveness (see the $\Delta$ columns in Table \ref{tab:transf_r_f}).
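The arity alignment described above (truncate to the first $a$ columns, pad shorter tuples with empty columns) amounts to the following; the helper name is ours, not part of \textit{VAER}:

```python
def align_arity(rows, a, pad=""):
    # Truncate each tuple to its first `a` attributes, or pad with empty
    # attribute values when the source table has lower arity than `a`.
    return [tuple(r)[:a] + (pad,) * max(0, a - len(r)) for r in rows]
```

This keeps every tuple at exactly the arity the transferred representation model expects, at the cost of discarding or blanking some attributes.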
This property offers a reliable opportunity for using previous representation knowledge and minimizing the ER training time--cost. \subsection{Active learning experiments} The practical training times that \textit{VAER}'s matching model exhibits in Table \ref{tab:ttime} enable its use in AL schemes where it can be iteratively improved on account of new user--labeled training samples. In this section, we evaluate the cost reduction potential with respect to data labeling of such an AL scheme, i.e., the one from Section \ref{sec:AL}, hypothesizing that \textit{by actively labeling training samples for the supervised matcher we can still achieve practical effectiveness with less training data.} Table \ref{tab:al} shows the values for the precision, recall, and F1 scores obtained using three types of matching models: a \textbf{Bootstrap} matching model trained only on data resulting from Algorithm \ref{alg:al_boot}, an \textbf{A250} matching model resulting from Algorithm \ref{alg:AL} with 250 actively labeled samples, and a \textbf{Full} matching model trained on all available training data (from Table \ref{tab:data}). The last two columns denote the percentage of the Full model's F1 score that the A250 model achieves and the percentage of the Full training data size that 250 samples represent. Domains marked with $^\dag$ signal cases where the positive samples generated by Algorithm \ref{alg:al_boot} contained false positives that had to be manually removed. On average, Algorithm \ref{alg:al_boot} generated $15$ positives and $15$ negatives (i.e., a balanced initial training set). Table \ref{tab:al} highlights examples of cases achieving $90\%$ or more F1 score with fewer actively labeled samples than provided in the training set. Additionally, cases that have a low Bootstrap precision/recall, e.g., \textit{Software}, \textit{Beer} etc., have low diversity in their bootstrap positive/negative instances.
This is expected, since Algorithm \ref{alg:al_boot} retrieves the positives/negatives with the lowest/highest distances between their tuples. Cases that show a significant recall increase from Bootstrap to A250, e.g. \textit{Restaurants}, \textit{Citations 2}, \textit{Beer}, confirm the importance of the diversity property for positive instances. Finally, in only one case, \textit{Beer}, is the A250 F1 score percentage smaller than the training size percentage. Here, $250$ labeled samples achieved $89\%$ of the F1 score obtained with the $269$ samples from the given training set. This is explained by the more diverse positive instances present in the training data. \begin{figure}[t] \centering \includegraphics[width=.75\linewidth]{"./img/al_f1"} \caption{\small Active learning F1 score} \label{fig:al_f1} \end{figure} Overall, the last two columns of Table \ref{tab:al} show that our proposed VAE--based AL strategy \textit{leads to reductions in costs associated with data labeling, while achieving practical effectiveness}. However, the effort required to achieve the Full F1 score with actively labeled data is use--case dependent. Consider Figure \ref{fig:al_f1}, where the F1 scores obtained with 250 samples remain unchanged even after 100 additional samples across most of the domains. Continuing the AL iterations for the \textit{Citations 2} domain led to achieving $90\%$ of the Full F1 score with 1050 additional actively labeled samples (i.e., for a total of $7.5\%$ of the training set size). Conversely, for the \textit{Software} domain, the same percentage was achieved with just 400 additional samples (i.e., $9.4\%$ of the training set size). \section{Conclusions} In this paper we set out to decrease the cost of performing ER with a deep learning model.
We identified three requirements associated with ER in practice that generate user--involvement and time costs: (i) the need for features that capture the similarity between duplicates; (ii) the need for duplicate/non--duplicate example data; and (iii) the need to learn a task--specific discriminative similarity function. We approached the cost--reduction desiderata by decoupling the feature and similarity learning tasks. This decoupling, facilitated by the use of VAEs for the former, allowed us to perform \textit{unsupervised} feature engineering, therefore addressing the cost associated with (i); \textit{fast} supervised matching that \textit{adapts} the unsupervised feature space to the ER case at hand, therefore addressing the cost associated with (iii); and active learning on the matching model, therefore addressing the cost associated with (ii). In addition, we showed how \textit{transferring} a representation model across ER use--cases and data domains can further minimize the cost associated with (i). Lastly, we showed that empirical evaluation supports the fulfillment of our cost--reduction desiderata in practice. \section{Acknowledgments} Research supported by a joint collaboration between The University of Manchester, Peak AI Ltd. and Innovate UK as part of a Knowledge Transfer Partnership (KTP11540). \bibliographystyle{IEEEtran}
\section{Introduction} The study of vortices and their existence, stability and dynamical properties has been a central theme of study in the area of Bose-Einstein condensates (BECs) \cite{pethick,stringari}. In particular, the remarkable experiments illustrating the generation of vortices \cite{vort1,vort2,vort3} and of very robust lattices thereof \cite{latt1,latt2,latt3} have stirred a tremendous amount of activity in this area in the past few years, that has by now been summarized in various reviews and books; see for example \cite{pismen,fetter,our,book,berloff1,ournew}. Much of this activity has been centered around the robustness of vortex structures in the context of the mean-field dynamics of the BECs (which are controllably accurately described by a nonlinear Schr{\"o}dinger (NLS) equation) in the presence of many of the potentials that are relevant to the trapping of atomic BECs including parabolic traps \cite{pethick,stringari} and periodic optical lattice ones \cite{konotop,morsch}. Particularly, the latter context of optical lattice potentials is quite interesting, as it has been suggested that vortices (for example of topological charge $S=1$) will be unstable when centered at a minimum of the lattice potential \cite{jpb}, an instability that it would be interesting to understand in more detail. On the other hand, the BECs in the presence of periodic potentials have been argued to be well-approximated by models of the discrete nonlinear Schr{\"o}dinger (DNLS) type (i.e., resembling the finite-difference discretization of the continuum equation) \cite{TS,konotop2,us,usnew}. In that regard, to understand the existence and stability properties of vortices in the presence of periodic potentials, it would be interesting to analyze the discrete analog of the relevant NLS equation. 
This is also interesting from a different perspective in this BEC context, namely that if finite-difference schemes are employed to analyze the properties of the continuum equation, it is useful to be aware of features introduced by virtue of the discretization. However, it should be stressed that this is not a problem of restricted importance in the context of quantum fluids; it is also of particular interest in nonlinear optics where two-dimensional optical waveguide arrays have been recently systematically constructed e.g. in fused silica in the form of square lattices \cite{lederer1,lederer2} (and, more recently of even more complex hexagonal lattices \cite{lederer3}), whereby discrete solitons can be excited. By analogy to their one-dimensional counterparts of discrete dark solitons, which have been created in defocusing waveguide arrays with the photovoltaic nonlinearity \cite{kip}, we expect that it should be possible to excite discrete dark vortices in defocusing two-dimensional waveguide arrays. An especially interesting feature of dark solitons that was observed initially in \cite{johansson} (see also \cite{fitrakis}) is that on-site discrete dark solitons are stable for sufficiently coarse lattices, but they become destabilized beyond a certain coupling strength among adjacent lattice sites and remain so until the continuum limit where they are again restabilized (as the point spectrum eigenvalue that contributes to the instability becomes zero due to the restoration of the translational invariance in the continuum problem) \cite{johansson,fitrakis}. It is therefore of interest to examine if the instability mechanisms of discrete defocusing vortices are of this same type or are potentially different and how the relevant stability picture is modified as a function of the inter-site coupling strength. 
It is this problem of the existence, stability and continuation of the vortex structures as a function of coupling strength that we examine in the present work. We consider, in particular, a two-dimensional discrete non-linear Schr\"odinger equation \begin{equation}\label{eq:dyn} i\frac{d\psi_{n,m}}{dt}-|\psi_{n,m}|^2\psi_{n,m}+\epsilon \Delta \psi_{n,m}=0, \end{equation} where $\Delta \psi_{n,m}= \psi_{n+1,m}+\psi_{n-1,m}+\psi_{n,m+1}+\psi_{n,m-1}-4 \psi_{n,m}$ is the discrete Laplacian. We study the {\em defocusing} case when $\epsilon >0$. In that case, equation (\ref{eq:dyn}) is denoted as discrete Gross-Pitaevskii equation in analogy with its continuum counterpart \cite{pethick,stringari,Berloff}. We look for time-periodic solutions with frequency $\omega$. Using the ansatz $\psi_{n,m}(t)= \sqrt{\omega}\, \phi_{n,m} e^{-i \omega t}$, we obtain \begin{equation} \label{eq:stat} C\, \Delta \phi_{n,m} + (1-|\phi_{n,m}|^2) \phi_{n,m}=0, \end{equation} where we have set $C = \epsilon / \omega$. The coupling parameter $C>0$ determines the strength of discreteness effects. The limit $C \rightarrow +\infty$ corresponds to the continuum (stationary) Gross-Pitaevskii equation: \begin{equation} \label{eq:statcont} \frac{\partial^2\phi}{\partial x^2}+\frac{\partial^2\phi}{\partial y^2}+ (1-|\phi |^2) \phi=0. \end{equation} The case $C \rightarrow 0$ corresponds to the so-called anti-continuum (AC) limit \cite{MA94}. When equation (\ref{eq:stat}) is considered on an infinite lattice $\mathbb{Z}^2$, we look for solutions satisfying $|\phi_{n,m}|\rightarrow1$ when $(n,m)\rightarrow\infty$, for which $\phi_{n,m}$ vanishes at one lattice site, e.g. at $(n,m)=(0,0)$. Such solutions are denoted as discrete vortices, or ``dark'' vortex solitons. 
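As a quick numerical sanity check of equation (\ref{eq:stat}), note that the uniform background $\phi_{n,m}\equiv 1$ is an exact solution for any $C$. A minimal sketch of the residual evaluation (using periodic rather than the fixed-end boundary conditions employed later, purely for brevity):

```python
import numpy as np

def discrete_laplacian(phi):
    # Five-point discrete Laplacian; np.roll imposes periodic boundaries,
    # a simplification of the fixed-end conditions used in the text.
    return (np.roll(phi, 1, axis=0) + np.roll(phi, -1, axis=0)
            + np.roll(phi, 1, axis=1) + np.roll(phi, -1, axis=1) - 4 * phi)

def stationary_residual(phi, C):
    # Left-hand side of C*Lap(phi) + (1 - |phi|^2) phi = 0.
    return C * discrete_laplacian(phi) + (1 - np.abs(phi)**2) * phi
```

A vortex solution computed below should drive this residual to zero (up to the Newton tolerance) while vanishing at the central site.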
If one counterclockwise turn along any path $\mbox{Max}(|n|,|m|)=\rho$ around the vortex center changes the argument of $\phi_{n,m}$ by $2 \pi S$ ($S\in \mathbb{Z}$), then the vortex is said to have a topological charge (or vorticity) equal to $S$. In this paper we numerically investigate the existence and stability of such solutions on a finite lattice of size $N\times N$, $N$ being large; our analysis is performed as a function of the lattice coupling parameter $C$, and we illustrate how to perform the relevant continuations both from the continuum limit and, more importantly, from the AC limit (section 2). We mainly focus on numerically computing vortex solutions with vorticity $S=1$ and $S=2$ (section 3). In section 4, we also obtain such solutions in the presence of an external harmonic trap (the latter is typically present in BEC experiments). Finally, section 5 presents our conclusions and some future directions of potential interest. \section{Numerical method} \label{num_method} We compute vortex solutions of (\ref{eq:stat}) using the Newton method and a continuation with respect to $C$. The path-following can be initiated either near the continuum limit (for $C$ large) or at the anti-continuum limit $C=0$, since in both cases one is able to construct a suitable initial guess for the Newton method. For relatively high $C$, a suitable initial condition for a vortex with topological charge $S$ is obtained with a Pad\'e approximation developed for the continuum limit in \cite{Berloff}.
We set $\phi_{n,m}=\rho_{n,m}e^{iS\alpha_{n,m}}$, where \begin{equation} \rho_{n,m}=\sqrt{\frac{r_{n,m}^{2S}(a_1+a_2r_{n,m}^2)} {1+b_1r_{n,m}^2+a_2r_{n,m}^{2S+2}}}, \ \ \ \ r_{n,m}=\sqrt{n^2+m^2} \end{equation} ($a_1=11/32$, $a_2=a_1/12$, $b_1=1/3$, see reference \cite{Berloff}), $$ \alpha_{n,m}= \left\{ \begin{array}{ll} \arctan(m/n) + \frac{3\pi}{2} \, &\mbox{ for } n\geq 1, \\ \arctan(m/n) + \frac{\pi}{2} \, &\mbox{ for } n\leq -1, \\ \frac{\pi}{2}\, (1-\mbox{sign}(m)) &\mbox{ for } n=0. \end{array} \right. $$ Once a vortex is found for a given $C$, the solution can be continued by increasing or decreasing $C$. Although this method was found to be efficient, it remains limited to single vortex solutions having explicit continuum approximations. Moreover, when the Newton method is applied to continue these solutions near $C=0$, the Jacobian matrix becomes ill-conditioned (and non-invertible for $C=0$) and the iteration does not converge. In what follows we introduce a different method having a wider applicability, and for which the above mentioned singularity is removed. We consider a finite $N\times N$ lattice with $(n,m) \in \Gamma = \{-M,\ldots ,M\}^2$ ($N=2M+1$), equipped with fixed-end boundary conditions given below. We set $\phi_{n,m} = R_{n,m} \, e^{i \theta_{n,m} }$ and note $R = (R_{n,m})_{n,m}$, $\theta = (\theta_{n,m})_{n,m}$. 
One obtains the equivalent problem \begin{equation} \label{acm2} R_{n,m}\, (1- R_{n,m}^2)+ C \, f(R,\theta )_{n,m} =0, \end{equation} \begin{equation} \label{acp} C \, g(R,\theta )_{n,m}=0, \end{equation} where $f(R,\theta ) = \mbox{Re }[\, e^{-i\theta }\, \Delta (R\, e^{i\theta } )\, ]$ and $g(R,\theta ) = \mbox{Im }[\, e^{-i\theta }\, \Delta (R\, e^{i\theta } )\, ]$ can be rewritten \vspace{1cm} \begin{eqnarray*} f(R,\theta )_{n,m} & = & R_{n+1,m} \cos{(\theta_{n+1,m}-\theta_{n,m})} + R_{n-1,m} \cos{(\theta_{n,m}-\theta_{n-1,m})} -4 R_{n,m} \\ & & +R_{n,m+1} \cos{(\theta_{n,m+1}-\theta_{n,m})} + R_{n,m-1} \cos{(\theta_{n,m}-\theta_{n,m-1})}, \end{eqnarray*} \begin{eqnarray*} g(R,\theta )_{n,m} & = & R_{n+1,m} \sin{(\theta_{n+1,m}-\theta_{n,m})} - R_{n-1,m} \sin{(\theta_{n,m}-\theta_{n-1,m})} \\ & & +R_{n,m+1} \sin{(\theta_{n,m+1}-\theta_{n,m})} - R_{n,m-1} \sin{(\theta_{n,m}-\theta_{n,m-1})}. \end{eqnarray*} Now we divide equation (\ref{acp}) by $C$ (this eliminates the above-mentioned degeneracy at $C=0$) and consider equation (\ref{acm2}) coupled to \begin{equation} \label{acp2} g(R,\theta )_{n,m}=0. \end{equation} System (\ref{acm2}), (\ref{acp2}) is supplemented by the boundary conditions \begin{eqnarray} \label{rinf} R_{n,m} = 1 & \mbox{ for }& \mbox{Max}(|n|,|m|)=M, \\ \label{tinf} \theta_{n,m} = \theta_{n,m}^\infty & \mbox{ for }& \mbox{Max}(|n|,|m|)=M. \end{eqnarray} The prescribed value $\theta_{n,m}^\infty$ of the angles on the boundary will depend on the type of vortex solution we look for. In particular, we use the boundary conditions $\theta_{n,m}^\infty = S \alpha_{n,m}$ for a single vortex with topological charge $S$ centered at $(n,m)=(0,0)$. For $C=0$, a single vortex at $(n,m)=(0,0)$ corresponds to fixing $R_{0,0}=0$ and $R_{n,m}=1$ everywhere else. 
Equation (\ref{acp2}) yields in that case \begin{equation} \label{aclp1} \begin{array}{l} \sin{(\theta_{n+1,m}-\theta_{n,m})}-\sin{(\theta_{n,m}-\theta_{n-1,m})} \\ +\sin{(\theta_{n,m+1}-\theta_{n,m})}-\sin{(\theta_{n,m}-\theta_{n,m-1})} =0, \\ (n,m)\in \Gamma \setminus \{ \, (0,\pm 1),\, (\pm 1,0)\, ,\, (\pm M, m)\, ,\, (n, \pm M)\, \} \end{array} \end{equation} supplemented by the following four relations at $(n,m)= (0,\pm 1),\, (\pm 1,0)$ \begin{eqnarray} \label{aclp2} \sin{(\theta_{1,\pm 1}-\theta_{0, \pm 1})}-\sin{(\theta_{0,\pm 1}-\theta_{-1,\pm 1})} +\sin{(\theta_{0,\pm 2}-\theta_{0,\pm 1})} &=&0, \\ \label{aclp3} \sin{(\theta_{\pm 2,0}-\theta_{\pm 1,0})}+\sin{(\theta_{\pm 1,1}-\theta_{\pm 1,0})} -\sin{(\theta_{\pm 1,0}-\theta_{\pm 1,-1})} &=&0. \end{eqnarray} For a vortex with topological charge $S=1$, solutions of (\ref{tinf})-(\ref{aclp3}) are computed by the Newton method, starting from the initial guess $\theta_{n,m} = \alpha_{n,m}$. The symmetries of the problem allow one to reduce the size of the computational domain by a factor of four. Indeed, one can take $(n,m) \in \{ 0,\ldots ,M\}^2$ with the boundary conditions $\theta_{0,m} = \alpha_{0,m}$, $\theta_{n,0} = \alpha_{n,0}$. Solutions on the whole lattice $\Gamma$ have the symmetries \begin{equation} \label{symtheta} \theta_{n,-m} = \pi- \theta_{n,m} \, [2\pi], \ \ \ \ \theta_{-n,m} = - \theta_{n,m}\, [2\pi] . \end{equation} These conditions make (\ref{aclp1}) automatically satisfied at $(n,m)=(0,0)$ ($\theta_{0,0}$ need not be specified). Afterwards, the corresponding solution of (\ref{acm2}), (\ref{acp2})-(\ref{tinf}) can be continued to $C > 0$ by the Newton method, yielding a solution $\phi_{n,m}=R_{n,m}e^{i\theta_{n,m}}$ of (\ref{eq:stat}) (see section \ref{numres}). For higher topological charges, the initial guess $\tilde\phi_{n,m}=R_{n,m}e^{iS\theta_{n,m}}$ can be used to compute a vortex solution of (\ref{eq:stat}) by the Newton method. This is done in section \ref{numres} also for $S=2$.
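The continuation strategy is the same as for the one-dimensional dark soliton repeatedly invoked below. As an illustration (not the paper's 2D solver), a Newton iteration with a finite-difference Jacobian continues the on-site 1D dark soliton, the 1D analog of (\ref{eq:stat}), from the AC limit:

```python
import numpy as np

def newton(F, x, tol=1e-8, maxit=50):
    # Newton iteration with a forward-difference Jacobian.
    for _ in range(maxit):
        Fx = F(x)
        if np.max(np.abs(Fx)) < tol:
            return x
        J = np.empty((x.size, x.size))
        h = 1e-7
        for j in range(x.size):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - Fx) / h
        x = x - np.linalg.solve(J, Fx)
    raise RuntimeError("Newton iteration did not converge")

# 1D analog: C (u_{j+1} + u_{j-1} - 2 u_j) + (1 - u_j^2) u_j = 0,
# with fixed-end boundary values u = -1 (left) and u = +1 (right).
M = 10

def residual(u, C):
    full = np.concatenate(([-1.0], u, [1.0]))
    lap = full[2:] + full[:-2] - 2.0 * full[1:-1]
    return C * lap + (1.0 - u**2) * u

# AC-limit seed: u_j = sign(j), vanishing on the central site.
u = np.sign(np.arange(-M, M + 1)).astype(float)
for C in np.arange(0.0, 0.21, 0.05):
    u = newton(lambda x: residual(x, C), u)
```

Each small step in $C$ reuses the previous solution as the initial guess, exactly as in the 2D continuation described above.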
All these continuations are performed with a $10^{-8}$ accuracy. \section{\label{numres}Numerical computation of single vortices} In this section we analyze the existence and stability of discrete vortices centered on a single site, as a function of the coupling strength $C$ for fixed-end boundary conditions. The stability of the discrete vortex solitons is studied assuming small perturbations in the form of $\delta\psi_{n,m}=\exp(-i t)[p_{n,m}\exp(-i\lambda t)+q_{n,m}\exp(i\lambda^* t)]$, with the onset of instability indicated by the emergence of $\mathrm{Im}(\lambda)\neq 0$; $\lambda$ in this setting denotes the perturbation eigenfrequency. Note that it is sufficient to consider the case $\omega=1$ for stability computations, because this case can always be recovered by rescaling time. Figure \ref{fig:AC1} compares the computed angles $\theta_{n,m}$ with the seed angles $\alpha_{n,m}$ for fixed-end boundary conditions and $N=81$. The most significant differences arise close to the vortex center. This figure also shows the dependence on $N$ of the difference between the angles $\theta$ for a given domain size $N$ and for a larger domain of size $N+10$. This is done through $||\theta_{n,m}^{N}-\theta_{n,m}^{N+10}||$ where $||\cdot||$ is the $\infty$-norm, and $\theta_{n,m}^{N}$ represent the angles at a given lattice size $N$. The main contribution of this norm corresponds to the boundary sites. On the other hand, the decrease of this norm as a function of $N$ originates from the convergence of the configuration to an asymptotic form. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{AC1.eps} & \includegraphics[width=7cm]{AC2.eps} \end{tabular} \caption{(Left panel) The spatial profile of the difference between the computed angles and the seed angles in an $81\times81$ lattice at the AC-limit. (Right panel) Dependence of $||\theta_{n,m}^{N}-\theta_{n,m}^{N+10}||_{\infty}$ with respect to the lattice size $N$.
In both cases, the lattice has fixed-end boundary conditions.} \label{fig:AC1} \end{center} \end{figure} Figure \ref{fig:power} shows the complementary norm of the $S=1$ and $S=2$ vortices, which is defined as \cite{MHM07}: \begin{equation} P=\sum_n\sum_m\left(|\phi_{\infty}|^2-|\phi_{n,m}|^2\right) \end{equation} with $|\phi_{\infty}|^2$ being the background density; in our case, $|\phi_{\infty}|^2=1$. As can be observed in the figure, vortices with $S=1$ and $S=2$ can be continued for couplings up to O$(1)$ and presumably for all $C$\footnote{In fact, vortices have been continued at least up to $C=10$ without any convergence problems, and their existence in the continuum limit suggests that it should be, in principle, possible to identify such structures for arbitrarily large values of $C$.}. It should be mentioned in passing that the method has also been successfully used to perform continuation in the vicinity of the anti-continuum limit, even for higher-charge vortices such as $S=3$. Notice also that all the considered solutions are ``black'' solitons, i.e., the vortex center has amplitude $R_{0,0}=0$. \begin{figure} \begin{center} \includegraphics[width=7cm]{norms.eps} \caption{Dependence of the complementary norm on the coupling strength $C$ for $S=1$ and $S=2$.} \label{fig:power} \end{center} \end{figure} Figures \ref{fig:S1} and \ref{fig:S2} show, for $S=1$ and $S=2$ vortices, respectively, the profile $|\psi_{n,m}|^2=|\phi_{n,m}|^2=R_{n,m}^2$, the angles $\theta_{n,m}$, the spectral plane of the stability eigenfrequencies and a comparison with the angles $\alpha_{n,m}$. In all cases, $C=0.2$ is shown, which corresponds to unstable vortices. \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{S1a.eps} & \includegraphics[width=7cm]{S1b.eps} \\ \includegraphics[width=7cm]{S1c.eps} & \includegraphics[width=7cm]{S1d.eps} \\ \end{tabular} \caption{Vortex soliton with $S=1$ and $C=0.2$.
(Top left panel) density profile; (top right panel) angular dependence; (bottom left panel) spectral plane of stability eigenfrequencies [recall that the presence of eigenfrequencies with non-vanishing imaginary part denotes instability]; (bottom right panel) comparison of the vortex angles with $\alpha_{n,m}$.} \label{fig:S1} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{S2a.eps} & \includegraphics[width=7cm]{S2b.eps} \\ \includegraphics[width=7cm]{S2c.eps} & \includegraphics[width=7cm]{S2d.eps} \\ \end{tabular} \caption{Same as Fig. \ref{fig:S1} but for $S=2$.} \label{fig:S2} \end{center} \end{figure} The vortices with $S=1$ and $S=2$ are, respectively, stable for $C<C_{cr}\approx 0.0395$ and $C<C_{cr}\approx0.0425$. This instability, highlighted in the case of the $S=1$ vortex in Fig. \ref{fig:S1stab}, can be rationalized by analogy with the corresponding stability calculations in the case of dark solitons \cite{johansson}. In particular, the relevant linearization problem can be written in the form: \begin{eqnarray} \lambda \left( \begin{array}{c} p_{n,m} \\ q_{n,m}^{\star} \end{array} \right) = \left( \begin{array}{cc} 2 |\phi_{n,m}|^2 -1 - C \Delta & \phi_{n,m}^2 \\ -(\phi_{n,m}^2)^{\star} & 1 -2 |\phi_{n,m}|^2 + C \Delta \end{array} \right) \left( \begin{array}{c} p_{n,m} \\ q_{n,m}^{\star} \end{array} \right). \label{ceq4b} \end{eqnarray} However, by analogy to the corresponding 1d problem, the symmetry and the high spatial localization of the localized eigenvector at low coupling render it a good approximation to write for the relevant perturbations that $\Delta p_{n,m} \approx -4 p_{n,m}$ (and similarly for $q$), by virtue of which it can be extracted that the relevant eigenfrequency is $\lambda \approx 1-4 C$.
This leading-order prediction (as a function of $C$) for the internal (``translational'') mode frequency is based on the anti-symmetry of both the real and the imaginary parts of the vortex configuration around its central site, in analogy with the anti-symmetry of the on-site dark soliton around its central site in the 1d analog of the problem \cite{johansson,fitrakis}. This feature (whose continuation to the $C \rightarrow \infty$ limit leads to a zero-frequency mode due to the translational invariance of the underlying continuum model) is an example of the ``negative energy'' modes that both dark solitons (see e.g., \cite{giorgo} and references therein) and vortices (see e.g. \cite{pu}) are well-known to possess (due to the fact that, although stationary, they are not ground states of the respective 1d and 2d systems). On the other hand, by analogy to the one-dimensional calculation, it is straightforward to compute the dispersion relation characterizing the eigenfrequencies of the continuous spectrum (using $\{p_{n,m},q_{n,m}^{\star}\} = \{P,Q^{\star}\}\exp[i (k_n n + k_m m)]$, deriving a $2 \times 2$ homogeneous linear system for $P$ and $Q$ and demanding that its determinant be zero) as extending through the interval $\lambda \in [-\sqrt{64 C^2 + 16 C}, \sqrt{64 C^2 + 16 C}]$. Therefore, the collision of the point spectrum (negative energy) eigenvalue with the band edge of the continuous spectrum yields a prediction for the critical point of $C_{cr} \approx (2 \sqrt{3}-3)/12 \approx 0.0387$, in good agreement with the corresponding numerical result above. At $C=C_{cr}$ the system experiences a Hamiltonian Hopf bifurcation. In consequence, there exists an eigenvalue quartet $\{\lambda,\lambda^*,-\lambda,-\lambda^*\}$.
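The quoted critical coupling follows from equating the internal mode frequency with the band edge, $1-4C = \sqrt{64 C^2 + 16 C}$; squaring both sides gives $48 C^2 + 24 C - 1 = 0$, whose positive root reproduces the stated value. A one-line check:

```python
import numpy as np

# Positive root of 48 C^2 + 24 C - 1 = 0, from 1 - 4C = sqrt(64 C^2 + 16 C).
C_cr = max(np.roots([48.0, 24.0, -1.0]))
assert abs(C_cr - (2.0 * np.sqrt(3.0) - 3.0) / 12.0) < 1e-12
print(round(C_cr, 4))  # 0.0387
```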
When $C$ increases, a cascade of Hopf bifurcations takes place due to the interaction of a localized mode with extended modes, as was observed for one-dimensional dark solitons \cite{johansson} (see also \cite{MA98,AACR02}, which illustrate the appearance of this phenomenon in Klein--Gordon lattices). This cascade implies the existence of stability windows between inverse Hopf bifurcations and direct Hopf bifurcations. For $S=1$ vortices, each one of the bifurcations takes place at decreasing $|{\rm Re}(\lambda)|$ as $C$ grows, and, in consequence, the bifurcations cease at a given value of $C$, once $|{\rm Re}(\lambda)|$ of the localized mode becomes smaller than the lowest extended mode frequency [however, in the infinite domain limit, this eventual restabilization would only take place in the limit $C \rightarrow \infty$]. This fact is illustrated in Fig. \ref{fig:S1stab}. A similar plot for the case of the $S=2$ vortex is shown in Fig. \ref{fig:S2stab}. When the lattice size tends to infinity ($N\rightarrow\infty$), the linear mode band extends from zero to infinity and becomes dense; thus, these stabilization windows should disappear in this limit. To illustrate this point, we have considered lattices of up to $201 \times 201$ sites for the $S=1$ and $S=2$ vortices and have shown the growth rate of the corresponding instabilities in Fig. \ref{fig:stab}. The maximum growth rate (i.e., the largest imaginary part of the stability eigenfrequencies) takes place at $C\approx0.115$ for both $S=1$ and $S=2$, with ${\rm Im}(\lambda)\approx0.0845$ (0.0782) for $S=1$ ($S=2$). \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{fS1real1.eps} & \includegraphics[width=7cm]{fS1real2.eps} \end{tabular} \caption{Real part of the stability eigenfrequencies for $S=1$. The panels show zooms of two different regions.
Dashed lines correspond to the predicted eigenvalues $\lambda \approx 1-4 C$ and $\lambda \approx\sqrt{64 C^2 + 16 C}$.} \label{fig:S1stab} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{fS2real1.eps} & \includegraphics[width=7cm]{fS2real2.eps} \end{tabular} \caption{Real part of the stability eigenfrequencies for $S=2$. The panels show zooms of two different regions.} \label{fig:S2stab} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{fS1imag.eps} & \includegraphics[width=7cm]{fS2imag.eps} \end{tabular} \caption{Imaginary part of the stability eigenfrequencies for $S=1$ (left panel) and $S=2$ (right panel), as a function of the coupling strength $C$. This corresponds to the growth rate of the corresponding instability.} \label{fig:stab} \end{center} \end{figure} \section{Harmonic Trap} In this section, we consider the effect of introducing a harmonic trap. Thus, Eq. (\ref{eq:stat}) is modified: \begin{equation}\label{eq:stattrap} C\, \Delta \phi_{n,m} + (1-|\phi_{n,m}|^2-V_{n,m}) \phi_{n,m}=0, \end{equation} with the parabolic potential of the form\footnote{The factor $1/C$ appears when discretizing the continuum equation given that $C=1/h^2$, where $h$ is the lattice spacing. In particular, we have used $r=\sqrt{x^2+y^2}=h\sqrt{m^2+n^2}=hr_{n,m}=r_{n,m}/\sqrt{C}$.} \begin{equation} V_{n,m}=\frac{1}{C}\Omega^2r_{n,m}^2\ . \end{equation} Fig. \ref{fig:TrapS1} shows a typical example of such a discrete vortex structure in the presence of an external trapping potential. The method presented in Section \ref{num_method} is again used and converges unhindered by the presence of the magnetic trap.
Notice that to include the trapping effect of the potential, we only modify the initial guess proposed in Section \ref{num_method} through multiplying it by the so-called Thomas-Fermi profile of $\sqrt{\max(0,1-V_{n,m})}$ \cite{pethick,stringari}; the resulting guess converges even for small values of $C$ (such as the one used in Fig. \ref{fig:TrapS1}). These vortices can be continued up to $C\rightarrow\infty$ and will converge to the corresponding continuum trapped vortices (for a recent discussion of such vortices in the presence of external potentials see e.g. \cite{klaw}). The stability of such structures is also examined in Figs. \ref{fig:stabMT1} and \ref{fig:stabMT2}. The sole type of instability observed is an oscillatory one, with alternating windows of destabilization and restabilization. However, since the harmonic trap is well-known \cite{pethick,stringari} to discretize the spectrum of excitations, these windows of instability/restabilization are ``true'' ones (due to collisions of the ``negative energy'' mode of the vortex with the point spectrum of the background), rather than artificial ones (caused by the finite size of the computational domain). In fact, in this case, the maximum imaginary part of the eigenvalues does not depend on the number of grid points used (provided that the domain ``encompasses'' the harmonically trapped vortex). For high enough $C$, the charge $S=1$ vortex is always found to stabilize \cite{pu,klaw}. It is interesting to also note that although the fundamental destabilization scenario indicated by the right panel of Fig. \ref{fig:stabMT1} has very strong parallels with its untrapped analog, the left panel of the figure indicates multiple additional collisions for smaller values of $C$. The negative Krein sign of the translational eigenvalue, discussed previously, suggests that these collisions should also result in oscillatory instabilities, although this is not discernible in the left column of Fig. \ref{fig:stabMT1}. 
A relevant clarification to this apparent paradox is provided by Fig. \ref{fig:stabMT2} which clearly illustrates that the oscillatory instabilities do indeed arise but, in fact, emerge and disappear (the latter through inverse Hopf bifurcations) over very tiny parametric intervals of $C$ (and are, thus, apparently invisible over the scale of Fig. \ref{fig:stabMT1}). \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{MTS1a.eps} & \includegraphics[width=7cm]{MTS1b.eps} \end{tabular} \caption{Vortex soliton with $S=1$ and $C=0.5$ in a harmonic trap with $\Omega=0.1$. (Left panel) Density profile; (right panel) angular dependence.} \label{fig:TrapS1} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{MTS1real.eps} & \includegraphics[width=7cm]{MTS1imag.eps} \end{tabular} \caption{Real part (left panel) and imaginary part (right panel) of the stability eigenfrequencies for $S=1$ as a function of the coupling strength $C$ and a harmonic trap with $\Omega=0.1$.} \label{fig:stabMT1} \end{center} \end{figure} \begin{figure} \begin{center} \begin{tabular}{cc} \includegraphics[width=7cm]{MTS1realzoom.eps} & \includegraphics[width=7cm]{MTS1imagzoom.eps} \end{tabular} \caption{Same as Fig. \ref{fig:stabMT1} with a zoom around the first bifurcation. From this figure, it is clear that there is a Hopf bifurcation that destabilizes the vortex and an inverse Hopf bifurcation. This pair of bifurcations takes place in a small interval of $C$ with a length of approximately $3\times10^{-6}$.} \label{fig:stabMT2} \end{center} \end{figure} \section{Conclusions and Future Directions} In the present paper, we examined the discrete analog of continuum defocusing vortices, which are perhaps the prototypical coherent structure in the two-dimensional nonlinear Schr{\"o}dinger equation.
We illustrated how to systematically obtain such structures through an appropriate continuation of the amplitude and phase profiles from the anti-continuum limit, and also discussed how to perform such a continuation from the continuum limit (at least for single-core vortices). Such a continuation as a function of the coupling strength revealed significant analogies between these defocusing discrete vortices and their 1d analog of the discrete dark solitons, which are stable from coupling $C=0$ up to a critical coupling and are subsequently unstable for all higher couplings up to $C \rightarrow \infty$ (when they become restabilized). Something similar was observed and quantified in the case of discrete vortices. In addition to the most fundamental structures of topological charge $S=1$, structures of higher charge such as $S=2$ were obtained by similar means. A natural topic for a more detailed future study arising from the present work concerns the understanding of multi-vortex bound states and their stability properties, as well as their detailed continuation as a function of the coupling and eventual disappearance as the coupling becomes sufficiently large. Another possible direction would be to examine such defocusing vortices in multi-component models (in analogy e.g., to the bright discrete vortices of \cite{pelinovsky2d}; see also references therein). There it would be of interest to study the similarities and differences of bound states of the same charge versus ones of, say, opposite charges. For these more demanding computations (as well as possibly ones associated with the 3d version of the present model \cite{our3d}), more intensive numerical computations will be needed, which may be aided by a parallel implementation \cite{Faustino}. Such studies are currently in progress and will be reported in a future publication. \section*{Acknowledgments} We acknowledge Faustino Palmero for his help with the implementation of the numerical routines.
PGK gratefully acknowledges support from NSF-CAREER, NSF-DMS-0505663, NSF-0619492 and from the Alexander von Humboldt Foundation through a Research Fellowship.
\section{Introduction: Long-Range Spin-Glasses} Spin-glass systems \cite{EdwardsAnderson}, composed of frozen randomly distributed competing interactions, such as ferromagnetic and antiferromagnetic interactions or, more recently \cite{Caglar1, Caglar2, Caglar3}, left- and right-chiral (i.e., helical \cite{Ostlund,Surface3}) interactions, exhibit phases with distinctive spin-glass order. A prime characteristic of the spin-glass phase is the chaotic behavior \cite{McKayChaos,McKayChaos2,BerkerMcKay,Hartford,ZZhu,Katzgraber3,Fernandez,Fernandez2,Eldan,Wang2,Parisi3} of the effective temperature under scale change, which also implies major changes of the macroscopic properties under minor changes of an external parameter such as temperature \cite{Aral}. In this study, we consider the spin-glass system of Ising spins on a three-dimensional $(d=3)$ hierarchical lattice \cite{BerkerOstlund,Kaufman1,Kaufman2}, with the inclusion of long-range interactions \cite{Hinczewski,percolation,Jiang}. We study, in turn, ferromagnetic and spin-glass long-range interactions. Much qualitatively new behavior emerges from the inclusion of these long-range interactions. Refs. \cite{Derevyagin2,Chio,Teplyaev,Myshlyavtsev,Derevyagin,Shrock,Monthus,Sariyer} are recent works using exactly soluble hierarchical models. \begin{figure}[ht!] \centering \includegraphics[scale=0.29]{ConstantLRb.eps} \caption{Calculated phase diagrams of the Ising spin glass with long-range ferromagnetic interaction $K$ in $d=3$. In the left panel, the phase diagram that starts leftmost is for $K=0$, no long-range interaction, and is the standard spin-glass phase diagram with ferromagnetic-antiferromagnetic symmetry about the $p=0.5$ line. The ferromagnetic and antiferromagnetic phases are marked respectively as F and A. Between these phases, there are the spin-glass and disordered phases, respectively at low and high temperature.
In the next phase diagram to the right in the left panel, for long-range ferromagnetic interaction $K=0.01453$, the phase diagram is slightly deformed and loses the ferromagnetic-antiferromagnetic symmetry. For $K>0$, the disordered phase is replaced by a Berezinskii-Kosterlitz-Thouless (BKT) phase with algebraic order. At $K=0.01453$, the BKT phase precipitously disappears, by the renormalization-group mechanism of the peninsular Potts flows, explained in the text and in Fig. 3. For $K>0.01453$, there is a direct phase transition between the ferromagnetic and antiferromagnetic phases, as seen for $K=0.05$, the rightmost phase diagram in the left panel of this figure. In the right panel of this figure, the evolution of this phase diagram is seen from the phase diagrams for $K=0.05,0.1,0.4,0.8$, from top to bottom.} \end{figure} Our model, with nearest-neighbor spin-glass interactions and long-range ferromagnetic or spin-glass interactions, is defined by the Hamiltonian \begin{equation} -\beta \mathcal{H}=\sum_{\langle ij \rangle} J_{ij} s_i s_j \,+\sum_{LR} K_{ij} s_i s_j \,, \end{equation} where $\beta=1/kT$, $s_i = \pm1$ at each site $i$ of the lattice, and the sum $\langle ij \rangle$ is over all pairs of nearest-neighbor sites. The bond $J_{ij}$ is ferromagnetic $+J>0$ or antiferromagnetic $-J$ with probabilities $1-p$ and $p$, respectively. The long-range interaction $LR$ is between all spin pairs beyond the first neighbors. We have studied the two cases where, for all further-neighbor spin pairs, the long-range interaction is (a) ferromagnetic $K_{ij}=K>0$ or (b) frozen ferromagnetic or antiferromagnetic $K_{ij}=\pm K$ with equal probability, namely a spin-glass interaction. By symmetry, under a simple reflection of the phase diagrams about the $p=0.5$ line (which is meaningful, as the phase diagrams become asymmetric), case (a) is equivalent to an antiferromagnetic interaction $K_{ij}=-K<0$ for all further-neighbor spin pairs.
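The two quenched distributions just described can be sampled directly. A minimal sketch (the function names and sample sizes are ours, for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bonds(n_bonds, J, p):
    """Nearest-neighbor bonds: +J with probability 1-p, -J with probability p."""
    return rng.choice([J, -J], size=n_bonds, p=[1.0 - p, p])

def sample_long_range(n_pairs, K, spin_glass=False):
    """Case (a): uniform ferromagnetic K; case (b): +/-K with equal probability."""
    if spin_glass:
        return rng.choice([K, -K], size=n_pairs)
    return np.full(n_pairs, K)

J_nn = sample_bonds(4000, J=1.0, p=0.3)                 # quenched nearest-neighbor bonds
K_lr = sample_long_range(4000, K=0.4, spin_glass=True)  # case (b) long-range bonds
```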
As seen in Fig. 1, the introduction of long-range interaction qualitatively affects the system, introducing a new phase (the BKT phase) and a new mechanism of phase collapse (the peninsular Potts flow renormalization-group mechanism). \begin{figure}[ht!] \centering \includegraphics[scale=0.22]{BondImage.eps} \caption{The hierarchical lattice is constructed by the repeated self-imbedding of a graph into a bond. \cite{BerkerOstlund} The dashed line in the graph represents the long-range interaction. The renormalization-group exact solution proceeds in the opposite direction: Summation over the spins on the internal sites (full circles) of the graph gives the renormalized interaction $J'$ between the spins on the external sites (open circles) as a function of the interactions $J$ and $K$ on the graph, namely the recursion relation $J'=J'(\{J_{ij}\},K)$, where $\{J_{ij}\}$ are the nearest-neighbor interactions, in general with different values, inside the graph.} \end{figure} \section{Construction of the Hierarchical Lattice and Renormalization-Group Exact Solution} The construction of the hierarchical lattice is explained in Fig. 2. The number (in this case 27) of nearest-neighbor interactions replacing a single nearest-neighbor interaction gives the dimensionality as $b^d$, where $b$ is the length-rescaling factor, namely the number of bonds in the shortest path between the external sites of the graph. In the present case, $b=3$ and therefore $d=3$. The renormalization-group transformation is effected by expressing the nearest-neighbor interaction as a $2 \times 2$ transfer matrix, $T_{ij}(s_i,s_j)= e^{E_{ij}(s_i,s_j)}$, where the energy $E_{ij}(s_i,s_j)$ is initially as given in the first term of Eq.(1). For each renormalization-group trajectory, initially 4000 unrenormalized transfer matrices $\{T_{ij}\}$ are generated randomly from the double-delta distribution characterized by the probability $p$ as explained above.
In each consecutive renormalization-group transformation, a new (renormalized) set of 4000 transfer matrices $\{T'_{ij}\}$ is generated, using the recursion relation explained in Fig. 2 and in (A)-(G) below, randomly choosing each of the $b^d$ unrenormalized transfer matrices $T_{ij}$ inside the graph from the 4000 transfer matrices generated from the previous renormalization-group transformation. Thus, a renormalization-group flow of the quenched probability distribution of the interactions \cite{AndelmanBerker} is obtained. The generation of a set of renormalized transfer matrices is broken into binary steps \cite{Ilker1,Ilker2,Ilker3} that accomplish the dictate of Fig. 2: (A) First, the starting set of transfer matrices is combined with itself, by randomly choosing two transfer matrices, $\mathbf{T^{(1)}}$ and $\mathbf{T^{(2)}}$, from the set and multiplying matrix elements at each position, $T^{(1)}_{ij}*T^{(2)}_{ij}$, thus obtaining a new transfer matrix. 4000 such new matrices are generated. (B) The set generated in (A) is combined with itself, using the procedure described in (A). (C) The set generated in (B) is combined with itself, using the procedure described in (A). (D) The set generated in (C) is combined with the initial set used in (A), using the procedure described in (A). This completes the combination of $b^{d-1}=9$ parallel bonds shown in each bubble in Fig. 2. (E) The set generated in (D) is combined with itself, by randomly choosing two transfer matrices, $\mathbf{T^{(1)}}$ and $\mathbf{T^{(2)}}$, from the set and matrix multiplying, $\mathbf{T^{(1)} \cdot T^{(2)}}$. (F) The set generated in (E) is combined with the initial set used in (E), using the procedure described in (E). This completes the elimination of the internal sites in Fig. 2 by decimation. (G) The anti-diagonals of each transfer matrix in the set are multiplied by $e^{-2K}$.
This completes the renormalization-group transformation, obtaining the set of 4000 renormalized transfer matrices $\{\mathbf{T'}\}$ from the set of 4000 unrenormalized transfer matrices $\{\mathbf{T}\}$. This renormalization-group transformation is repeated many times to obtain a renormalization-group trajectory of the quenched probability distribution. With no loss of generality, each time that a transfer matrix is constructed as described in the previous paragraphs, the matrix elements are divided by the largest element, so that eventually all matrix elements are between 0 and 1, inclusive. This allows the repetition of the renormalization-group transformation as many times as necessary (in practice, thousands of times) without running into numerical overflow problems, needed for the determination of thermodynamic phase sinks, runaway exponents, and the Lyapunov exponents of chaos. For trajectories starting at $(J,K,p)$ in the ferromagnetic phase, all transfer matrices in the set asymptotically renormalize to 1 in the diagonals and 0 in the anti-diagonals. For trajectories starting at $(J,K,p)$ in the antiferromagnetic phase, all transfer matrices in the set asymptotically renormalize to 0 in the diagonals and 1 in the anti-diagonals. For trajectories starting at $(J,K,p)$ in the spin-glass phase, all transfer matrices in the set asymptotically renormalize to 1 in the diagonals or anti-diagonals randomly, simultaneously with 0 in the anti-diagonals or diagonals. For trajectories starting in the algebraically ordered BKT phase, all transfer matrices in the set asymptotically renormalize to 1 in the diagonals and to a value between 1 and 0 in the anti-diagonals, continuously varying based on the initial $(J,K,p)$ of the trajectory. For the trajectories starting in the disordered phase, all transfer matrices in the set renormalize to 1 in the diagonals and anti-diagonals.
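Steps (A)-(G) can be turned into code directly. The sketch below is illustrative (pool size, couplings, and random pairing are our assumptions, not the paper's production code): element-wise products implement the $b^{d-1}=9$ parallel bonds, matrix products the $b=3$ bonds in series, the anti-diagonal factor the long-range bond, and each resulting matrix is normalized by its largest element:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pool(M, J, p):
    """Pool of M 2x2 transfer matrices T(s,s') = exp(J s s'),
    with J -> -J (antiferromagnetic) with probability p."""
    signs = rng.choice([1.0, -1.0], size=M, p=[1 - p, p])
    pool = np.empty((M, 2, 2))
    for k, sg in enumerate(signs):
        Jk = sg * J
        pool[k] = np.exp([[Jk, -Jk], [-Jk, Jk]])
    return pool

def pick_pairs(a, b):
    """Randomly pair matrices drawn from pools a and b."""
    i = rng.integers(0, len(a), size=len(a))
    j = rng.integers(0, len(b), size=len(b))
    return a[i], b[j]

def rg_step(pool, K):
    # (A)-(D): 9 bonds in parallel -> element-wise products
    x = pool
    for _ in range(3):               # (A), (B), (C): 2, 4, 8 parallel bonds
        u, v = pick_pairs(x, x)
        x = u * v
    u, v = pick_pairs(x, pool)       # (D): 8 + 1 = 9 parallel bonds
    x = u * v
    # (E)-(F): 3 decimations in series -> matrix products
    u, v = pick_pairs(x, x)          # (E): 2 in series
    y = u @ v
    u, v = pick_pairs(y, x)          # (F): 2 + 1 = 3 in series
    y = u @ v
    # (G): long-range bond multiplies the anti-diagonals by exp(-2K)
    y[:, 0, 1] *= np.exp(-2 * K)
    y[:, 1, 0] *= np.exp(-2 * K)
    # normalize each matrix by its largest element to avoid overflow
    y /= y.max(axis=(1, 2), keepdims=True)
    return y

pool = random_pool(4000, J=5.0, p=0.0)   # deep in the ferromagnetic region
for _ in range(20):
    pool = rg_step(pool, K=0.0)
```

Started deep in the ferromagnetic region ($p=0$, large $J$), repeated steps drive the pool to the ferromagnetic sink, with 1 on the diagonals and 0 on the anti-diagonals, as described above.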
Phase boundaries in $(J,K,p)$ are obtained by numerically determining the boundaries of these different asymptotic behaviors. \begin{figure}[ht!] \centering \includegraphics[scale=0.23]{PottsFlowsF.eps} \caption{The peninsular Potts renormalization-group flow mechanism and the precipitous phase diagram. In the lower left panel, the upper line gives the eigenvalue exponents $y$ for the phase transitions from the antiferromagnetic phase. The positive part of the lower curve gives the eigenvalue exponents for the phase transitions between the algebraically ordered and ferromagnetic phases. The phase diagram on the right is calculated at constant temperature $J^{-1} = 0.1$. Because of the Potts-peninsular mechanism, explained in Sec. III, part of the phase boundary between the ferromagnetic and BKT phases should be and is calculated to be vertical here.} \end{figure} \section{Potts-Peninsular Renormalization-Group Mechanism and Precipitous Phase Diagram} Quenched randomness amplifies in renormalization-group trajectories starting in the spin-glass phase and shows chaotic rescaling behavior. Quenched randomness deamplifies in renormalization-group trajectories starting in the four other phases. In this non-random limit, the recursion relation constructed in the previous section becomes \begin{equation} J' = \tanh^{-1}\{[\tanh(9J)]^3\} + K \,. \end{equation} Solving Eq.(2) for $J'=J \equiv J^*$ gives the fixed point interactions $J^*$ as a function of $K$, shown in the upper right panel of Fig. 3. Taking the derivative of Eq.(2) at the fixed point, \begin{equation} \frac {dJ'}{dJ} = \frac {27[\tanh(9J)]^2}{1+[\tanh(9J)]^2+[\tanh(9J)]^4} = b^y \,, \end{equation} the eigenvalue exponents $y$ at the fixed point are obtained. These are shown in the lower left panel of Fig. 3. The peninsular Potts renormalization-group flow mechanism and the precipitous phase diagram are given in Fig. 3.
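The fixed points of Eq. (2) and the eigenvalue exponents of Eq. (3) are easy to reproduce numerically. The sketch below is an illustrative implementation of ours (a sign-change scan refined by bisection); for $K=0$ it recovers a stable fixed point at $J^*=0$ and an unstable pair at $\pm J_c$:

```python
import numpy as np

def J_renorm(J, K):
    """Eq. (2): J' = arctanh(tanh(9J)^3) + K."""
    return np.arctanh(np.tanh(9.0 * J) ** 3) + K

def y_exponent(J, b=3.0):
    """Eq. (3): dJ'/dJ = 27 t / (1 + t + t^2), t = tanh^2(9J); y = log_b(dJ'/dJ)."""
    t = np.tanh(9.0 * J) ** 2
    return np.log(27.0 * t / (1.0 + t + t ** 2)) / np.log(b)

def fixed_points(K, Jmin=-1.0, Jmax=1.0, n=20001):
    """Scan f(J) = J'(J) - J for sign changes; refine each root by bisection."""
    J = np.linspace(Jmin, Jmax, n)
    f = J_renorm(J, K) - J
    roots = []
    for i in range(n - 1):
        if f[i] == 0.0 or f[i] * f[i + 1] < 0.0:
            a, b_ = J[i], J[i + 1]
            for _ in range(60):
                mid = 0.5 * (a + b_)
                if (J_renorm(a, K) - a) * (J_renorm(mid, K) - mid) <= 0.0:
                    b_ = mid
                else:
                    a = mid
            roots.append(0.5 * (a + b_))
    return roots

fps = fixed_points(0.0)   # K = 0: stable point at J = 0, unstable pair at +/- J_c
```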
The upper left panel shows the lines of fixed points as a function of the long-range interaction $K$, calculated from Eq.(2). This calculation is done in the non-random limit, to which all renormalization-group trajectories from phases outside the spin-glass phase flow. In this upper left panel, the lower curve is the fixed line, unstable to the renormalization-group flows, giving the phase boundary between the antiferromagnetic phase and, for $K<0.01453$ where the upper flows hit the stable branch of the peninsula, the BKT phase and, for $K>0.01453$ where the upper flows miss the peninsula beyond its tip, the ferromagnetic phase. Therefore, the BKT phase precipitously disappears at $K=0.01453$. Due to this catastrophic changeover \cite{Thom}, in Fig. 3, part of the phase boundary between the ferromagnetic and BKT phases should be and is calculated to be vertical. In the lower left panel of Fig. 3, the lower branch of the peninsula is a fixed line, stable to the renormalization-group flows, constituting the sink of the algebraically ordered BKT phase. The upper branch of the peninsula is a fixed line, unstable to the flows, giving the phase transition between the BKT phase and the ferromagnetic phase. The renormalization-group flows are indicated with the arrows. The flows at the upper and lower edges of the panel proceed to $J=+\infty$ and $J=-\infty$, constituting the sinks of the ferromagnetic and antiferromagnetic phases, respectively. The unstable fixed lines give the phase transitions. As seen in the lower left panel of Fig. 3, the eigenvalue exponent $y$ and therefore the critical exponents (e.g., the correlation-length critical exponent $\nu$) vary continuously along the phase boundaries.
This peninsular renormalization-group flow mechanism has previously been seen only in Potts models in $d$ dimensions, realizing the changeover from second- to first-order phase transitions of the Potts models.\cite{spinS7,Nienhuis1,Nienhuis2,AndelmanPotts0,AndelmanPotts1,Nienhuis3} \begin{figure}[ht!] \centering \includegraphics[scale=0.2]{ConstantLyapunovB.eps} \caption{The chaotic renormalization-group trajectory of the interaction $J_{ij}$ at a given location $\langle ij \rangle$, for various long-range interactions $K$. The calculated Lyapunov exponents $\lambda$ are also given and increase with ferromagnetic long-range interaction $K$. The calculated runaway exponent is $y_R=0.24$, showing simultaneous strong-chaos and strong-coupling behaviors.} \end{figure} \section{Asymmetric Phase Diagrams with Algebraically Ordered Berezinskii-Kosterlitz-Thouless Phase} Calculated phase diagrams of the Ising spin glass with long-range ferromagnetic interaction $K$ in $d=3$ are shown in Fig. 1. In the left panel, the phase diagram that starts leftmost is for $K=0$, no long-range interaction, and is the standard spin-glass phase diagram with ferromagnetic-antiferromagnetic symmetry about the $p=0.5$ line. The ferromagnetic and antiferromagnetic phases are marked respectively as F and A. Between these phases, there are the spin-glass and disordered phases, respectively at low and high temperature. In the next phase diagram to the right, for long-range ferromagnetic interaction $K=0.01453$, the phase diagram is slightly deformed and loses the ferromagnetic-antiferromagnetic symmetry. For $K>0$, the disordered phase is replaced by a Berezinskii-Kosterlitz-Thouless (BKT) phase with algebraic order. This phase has algebraic order, since its sink line continuously varies and is at non-zero and non-infinite interactions. In general, the correlation length at a fixed point is either zero or infinite, due to the scale-free nature of this point.
In the present case, the zero option is eliminated by the fixed-point interactions being non-zero and non-infinite. Therefore, the BKT attractive fixed line (phase sink) and all points flowing to it under the renormalization group have infinite correlation length and algebraic order.\cite{Kosterlitz,Jose,BerkerNelson,BerkerKadanoff1,BerkerKadanoff2} At $K=0.01453$, the BKT phase precipitously disappears, by the renormalization-group mechanism of the peninsular Potts flows, explained in Sec. III and in Fig. 3. Thus, our phase diagram calculations (Fig. 1) with global renormalization-group flows exactly yield and confirm the peninsular tip obtained from the fixed-point calculation using Eq. (2) (Fig. 3). For $K>0.01453$, there is a direct phase transition between the ferromagnetic and antiferromagnetic phases, as seen for $K=0.05$, the rightmost phase diagram in the left panel of Fig. 1. In the right panel of Fig. 1, the evolution of this phase diagram is seen from the phase diagrams for $K=0.05,0.1,0.4,0.8$, from top to bottom. \section{Chaos Continuously Varying within the Spin-Glass Phase: Lyapunov Exponent and Runaway Exponent} The spin-glass phase is a phase that is induced by competing quenched randomness and does not otherwise exist. The competing interactions can be ferromagnetic versus antiferromagnetic, as here, or left- and right-chiral interactions. A distinctive characteristic of the spin-glass phase is chaos under scale change \cite{McKayChaos}. In the present work, the asymptotic chaotic trajectory continuously varies quantitatively with the long-range interaction $K$. The asymptotically chaotic renormalization-group trajectories starting within the spin-glass phase are shown for various values of the long-range interaction $K$ in Fig. 4, where, for each $K$, the consecutively renormalized (combining with neighboring interactions) values at a given location $\langle ij \rangle$ are followed.
The strength of chaos is measured by the Lyapunov exponent \cite{Collet,Hilborn} \begin{equation} \lambda = \lim _{n\rightarrow\infty} \frac{1}{n} \sum_{k=0}^{n-1} \ln \Big|\frac {dx_{k+1}}{dx_k}\Big|\,, \end{equation} where $x_k = J_{ij}/\overline{J}$ at step $k$ of the renormalization-group trajectory and $\overline{J}$ is the average of the absolute value of the interactions in the quenched random distribution. We calculate the Lyapunov exponents by discarding the first 1000 renormalization-group steps (to eliminate crossover from initial conditions to asymptotic behavior) and then using the next 9000 steps. For a given $K$ value, the initial $(J,p)$ values do not matter, as long as they are within the spin-glass phase. In the absence of long-range interaction, $K=0$, the Lyapunov exponent is calculated to be $\lambda = 1.93$, as in previous work \cite{Ilker2,ArtunBerker}. With increasing long-range ferromagnetic interaction, the Lyapunov exponent and therefore chaos increase, to the value of $\lambda = 1.99$ for $K=0.8$. In addition to chaos, the renormalization-group trajectories show asymptotic strong coupling behavior, \begin{equation} \overline{J'} = b^{y_R}\, \overline{J}\,, \end{equation} where $y_R >0$ is the runaway exponent \cite{Demirtas}. Again using 9000 renormalization-group steps after discarding 1000 steps, we find $y_R =0.24$ for all values of $K$. In fact, $y_R =0.24$ was also found previously for all values of the spin $s$ \cite{ArtunBerker}. \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{RandomPD.eps} \caption{Calculated phase diagrams of the Ising spin glass with long-range spin-glass interaction $\pm K$ in $d=3$. From top to bottom, the phase diagrams are for $K=0.1,0.4,0.8$. The ferromagnetic and antiferromagnetic phases are marked respectively as F and A. Between these phases, for $K=0.1$, there are the weak-coupling and strong-coupling spin-glass phases, respectively at high and low temperature. 
The weak-coupling spin-glass phase occurs for $0 < K < 0.1883$ and abruptly disappears at $K = 0.1883$ by the Potts renormalization-group flow mechanism generalized to quenched random interactions, namely by the unstable fixed distribution of the phase boundary between the two spin-glass phases and the stable fixed distribution sink of the weak-coupling spin-glass phase (Fig. 6) merging and annihilating. Thus, for $K > 0.1883$, only the strong-coupling spin-glass phase occurs between the ferromagnetic and antiferromagnetic phases.} \end{figure} \begin{figure}[ht!] \centering \includegraphics[scale=0.25]{birlesikhistogramD.eps} \caption{Fixed distributions for the Ising spin glass with long-range spin-glass interaction $\pm K$ in $d=3$. The left and right columns are for $K=0.1$ and 0.1780 respectively. The top row gives the stable fixed distribution, i.e., sink, for the weak-coupling spin-glass phase. The bottom row gives the stable fixed distribution, i.e., sink, for the strong-coupling spin-glass phase. The middle row gives the unstable fixed distribution for the phase transition between the weak- and strong-coupling spin-glass phases. At the very top are the Lyapunov exponents for the weak-coupling sinks. At the very bottom are the Lyapunov exponents for the strong-coupling sinks.} \end{figure} \section{Long-Range Spin-Glass Interactions and Spin-Glass-to-Spin-Glass Phase Transitions} Calculated phase diagrams of the Ising spin glass with long-range spin-glass interaction $\pm K$ in $d=3$ are given in Fig. 5. From top to bottom, the phase diagrams are for $K=0.1,0.4,0.8$. The ferromagnetic and antiferromagnetic phases are marked respectively as F and A. Between these phases, for $K=0.1$, there are the weak-coupling and strong-coupling spin-glass phases, respectively at high and low temperature. Fixed distributions and chaos for these two spin-glasses with long-range spin-glass interaction $\pm K$ in $d=3$ are given in Fig. 6. 
The left and right columns are for $K=0.1$ and 0.1883, respectively. The top row gives the stable fixed distribution, i.e., sink, for the weak-coupling spin-glass phase. The bottom row gives the stable fixed distribution, i.e., sink, for the strong-coupling spin-glass phase. The middle row gives the unstable fixed distribution for the phase transition between the weak- and strong-coupling spin-glass phases. As $K$ is increased, the stable sink fixed distribution for the weak-coupling spin-glass phase and the unstable fixed distribution for the phase transition approach each other, perforce becoming identical (note the similarity of the two distributions on the right top and bottom of Fig. 6, as compared with the left side), and annihilate each other, clearing the way for the renormalization-group flows to the strong-coupling spin-glass sink. The weak-coupling spin-glass phase disappears and is replaced by the extended strong-coupling spin-glass phase, as seen for $K=0.4$ and 0.8 in Fig. 5. This abrupt phase-diagram change and its renormalization-group mechanism are the generalization to quenched random systems of the stable-unstable fixed-point annihilation (Fig. 3) of the Potts peninsular flow mechanism. At the very top and bottom are the Lyapunov exponents, measuring chaos, for the weak-coupling and strong-coupling spin-glass phases, respectively. Amazingly, as measured by the Lyapunov exponents, the weak-coupling spin-glass phase is more chaotic than the strong-coupling spin-glass phase. We have also calculated phase diagrams of the Ising spin glass with decaying long-range spin-glass interaction $\pm K/r$, where $r$ is the separation between the spins in units of the nearest-neighbor separation in the original unrenormalized lattice. As seen in Fig. 7, as $K$ is increased from 0, the strong-coupling spin-glass phase fully broadens, becoming an intermediate phase between the ferromagnetic (antiferromagnetic) and disordered phases, and finally wholly replaces the disordered phase.
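The Lyapunov-exponent estimate of Eq. (4), with a discarded transient followed by a long average of the log-derivative along the trajectory, can be sketched on any one-dimensional map. The example below uses the logistic map $x \mapsto rx(1-x)$ at $r=4$ as an illustrative stand-in, whose exponent is known to be $\ln 2$; it is not the spin-glass recursion itself:

```python
import math

def lyapunov(f, df, x0, n_transient=1000, n_steps=9000):
    """Eq. (4): discard a transient, then average ln|df/dx| along the orbit,
    as done in the text (1000 discarded steps, 9000 used)."""
    x = x0
    for _ in range(n_transient):
        x = f(x)
    total = 0.0
    for _ in range(n_steps):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / n_steps

r = 4.0
lam = lyapunov(lambda x: r * x * (1.0 - x),    # logistic map
               lambda x: r * (1.0 - 2.0 * x),  # its derivative
               x0=0.2)
# lam should come out close to ln 2
```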
\begin{figure}[ht!] \centering \includegraphics[scale=0.17]{decayingB.eps} \caption{Calculated phase diagrams of the Ising spin glass with decaying long-range spin-glass interaction $\pm K/r$, where $r$ is the separation between the spins in units of the nearest-neighbor separation in the original unrenormalized lattice. The ferromagnetic (F), antiferromagnetic (A), strong-coupling spin-glass (SG), and disordered (D) phases are marked. As $K$ is increased from 0, the strong-coupling spin-glass phase fully broadens, becoming an intermediate phase between the ferromagnetic (antiferromagnetic) and disordered phases $(K=0.45)$, and finally wholly replaces the disordered phase $(K=0.80)$.} \end{figure} \section{Conclusion} We have seen that the introduction, to the spin-glass system, of long-range ferromagnetic or spin-glass interactions reveals a plethora of new phenomena: new phases, spin-glass-to-spin-glass phase transitions, algebraic order, continuously varying runaway and non-runaway chaos, Potts-peninsular renormalization-group flows and precipitous phase diagrams, and fixed-distribution annihilation. Spin glasses are clearly a rich repository of complex-system behaviors. \begin{acknowledgments} Support by the Academy of Sciences of Turkey (T\"UBA) is gratefully acknowledged. \end{acknowledgments}
\section{Randomization of states by cellular automaton} \label{sect:ca:rand} \subsection{Errorless randomization} \label{sect:ca:rand:el} Usually, distributed representations in collective-state computing use i.i.d. random vectors. Similarly, we start with i.i.d. random vectors for short seeds. However, in contrast to the conventional approach, we are going to expand the dimensionality of the seed vectors via CA90 with boundary conditions. An important question for expansion is: what are the limits of CA90 in terms of producing randomness? One very useful empirical tool for answering this question is calculation of degrees of freedom from the statistics of normalized Hamming distances between binary vectors (see, e.g.,~\cite{Daugman2003} for an example). Given that $p_h$ denotes the average normalized Hamming distance and $\sigma_h$ denotes its standard deviation, the degrees of freedom are calculated as: \noindent \begin{equation} F= p_h(1-p_h)/\sigma_h^2. \label{eq:deg:fre} \end{equation} \noindent Due to the randomization properties of CA90, we expect that after a certain number of steps it will produce new degrees of freedom. To be able to compare different grid sizes, we will report the degrees of freedom normalized by the grid size, i.e., $F/N$. In other words, if a single step of CA90 increased $F$ by $N$ (best case), the normalized value would increase by $1$. Fig.~\ref{fig:ca:period:example} presents the normalized degrees of freedom measured for $5,000$ steps of CA90 for different grid sizes using the same values as in~\cite{Wolfram} (p. 259). From the figure we can draw several interesting observations. First, for all of the considered grid sizes, the degrees of freedom grow linearly at the beginning (following the reference dashed black line, which indicates the degrees of freedom in random binary vectors of the corresponding size). At some point, however, the degrees of freedom reach a maximum value and start to saturate, as we would expect.
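Eq.~(\ref{eq:deg:fre}) is simple to evaluate in practice. The sketch below (illustrative sizes; our code, not the paper's) estimates $F$ from pairwise normalized Hamming distances; for i.i.d. random binary vectors the estimate should come out close to the dimension $N$:

```python
import numpy as np

rng = np.random.default_rng(1)

def degrees_of_freedom(vectors):
    """Eq. (F = p_h (1 - p_h) / sigma_h^2), with p_h and sigma_h the mean and
    standard deviation of pairwise normalized Hamming distances."""
    M, N = vectors.shape
    iu = np.triu_indices(M, k=1)
    d = (vectors[:, None, :] != vectors[None, :, :]).mean(axis=2)[iu]
    p_h = d.mean()
    return p_h * (1.0 - p_h) / d.var()

V = rng.integers(0, 2, size=(200, 64))   # 200 i.i.d. random 64-dimensional vectors
F = degrees_of_freedom(V)                # expected to be close to N = 64
```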
We are interested in the period of linear growth, and we call this the randomization period. Second, we see that larger grid sizes typically have longer randomization periods. For example, the longest randomization period of $2,047$ steps was observed for $N=23$ (but this is not the largest grid size). Third, the randomization periods of odd grid sizes are always longer than those of the even ones. For example, the randomization periods for $N=22$ and $N=24$ were only $31$ and $7$, respectively (cf. $2,047$ for $N=23$). Thus, there is a dependency between $N$ and the randomization period, but, despite the above observations, there is not a clear general pattern connecting the grid size to the length of the randomization period. The good news, however, is that the length of the randomization period is closely related to the length of periodic cycles (denoted as $\Pi_N$) in CA90 discovered in~\cite{MartinPropertiesCA1984}. In short, the irregular behaviour of randomization periods and periodic cycles is a consequence of their dependence on number-theoretical properties of $N$;~\cite{MartinPropertiesCA1984} provides the following characterization of periodic cycles $\Pi_N$ in CA90: \begin{itemize} \item For CA90 with $N$ of the form $2^j$, $\Pi_N=1$; \item For CA90 with $N$ even but not of the form $2^j$, $\Pi_N=2\Pi_{N/2}$; \item For CA90 with $N$ odd, $\Pi_N \mid \Pi_N^*=2^{sord_N(2)}-1$, where $sord_N(2)$ is the multiplicative ``sub-order'' function of 2 modulo $N$, defined as the least integer $j$ such that $2^j \equiv \pm 1 \pmod N$. \end{itemize} \begin{figure}[tb] \centering \includegraphics[width=1.0\columnwidth]{CA_errors_37_500} \caption{ The normalized Hamming distance between the original and noisy vectors obtained from short seeds for $N=37$ during the first $256$ steps of CA90 evolution. The reported values were averaged over $500$ simulations where both initial short seeds and errors were chosen at random. Note the logarithmic scales of the axes.
} \label{fig:ber:ca:cur} \end{figure}
Fig.~\ref{fig:ca:periods} presents the empirically measured randomization periods as well as the analytically calculated periodic cycles $\Pi_N^*$~\cite{MartinPropertiesCA1984} for grid sizes in the range $[9, 46]$. First, we see that when $N$ is odd, the randomization period equals the periodic cycle. The only exception is $N=37$, which is just the first case where $\Pi_N = \Pi_N^*/3$. Second, when $N$ is of the form $2^j$, the randomization period does not equal one because CA90 still produces activity for $2^{j-1}$ steps, which increases the degrees of freedom. In fact, the randomization period in this case is $2^{j-2}-1$. Third, when $N$ is even but not of the form $2^j$, CA90 produces $\Pi_N$ unique grid states, but the patterns of Hamming distances between the states evolved from two random initial short seeds start to repeat after $\Pi_N/2$ steps and thus contribute no new degrees of freedom. Therefore, the randomization period is half the periodic cycle. Aggregating these points, with respect to the randomization period of CA90, we have the following:
\begin{itemize}
\item For CA90 with $N$ of the form $2^j$, the randomization period is $2^{j-2}-1$;
\item For CA90 with $N$ even but not of the form $2^j$, the randomization period is $\Pi_{N/2}$;
\item For CA90 with $N$ odd, the randomization period is a divisor of $\Pi_N^*=2^{\mathrm{sord}_N(2)}-1$.
\end{itemize}
\subsection{The effect of noise in the short seed}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{Error_evolution}
\caption{
The evolution of CA90 for $65$ steps, $N=37$; the initial state includes one active cell, which can be thought of as introducing one bit flip to some random initial short seed. All steps of the form $2^j$ are highlighted by red rectangles.
}
\label{fig:ca:1error}
\end{figure}
In the previous section, we have seen how CA90 can be used to expand initial short seeds into longer randomized representations.
This property could be utilized by a collective-state system for efficient communication by exchanging only short seeds and expanding them with CA90. In such a scenario, one might expect errors when communicating the short seeds; therefore, it is important to understand how the evolution of an expanded distributed representation is affected by errors in the initial short seed. We address this issue by observing the empirical behavior for $N=37$ and the first $256$ steps of CA90 evolution. Fig.~\ref{fig:ber:ca:cur} reports the averaged normalized Hamming distance between an errorless short seed and a noisy version of it, where either $2$ (dash-dot line), $4$ (solid line), or $8$ (dashed line) bits were flipped randomly. The results reported are for the corresponding states of the grid at a given step, not for the concatenated states. This shows that even a single step of CA90 increases the normalized Hamming distance between the evolved states. For example, when $4$ bits were flipped, the normalized Hamming distance between the seeds was $4/37\approx0.11$, while after a single step of CA90 it increased to almost $0.2$, i.e., it almost doubled. Further, we observe that the normalized Hamming distance never drops below its value after the first step. What is very interesting is that the distances induced by errors change in a predictable manner. We see that the errors reset to the lowest possible value at regular intervals, namely at CA90 steps of the form $2^j$. This behavior of CA90 suggests that we can mitigate the impact of errors when expanding short seeds. In order to minimize the distance between the errorless evolutions and their noisy versions, one should only use the CA90 expansion in steps of the form $2^j$, which places additional limits on the possible dimensionality expansion.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{BER_after_CA}
\caption{
The expected BER for CA90 steps of the form $2^j$ against the BER in the short seeds.
The solid line shows the analytical calculation while the dashed line was measured empirically. The empirical results were averaged over $10$ simulations.
}
\label{fig:ber:ca:tot}
\end{figure}
To understand the observed cyclic behavior of CA90, we examine the case when the initial state of the grid includes only one active cell. Due to the fact that CA90 is additive, the active cell can be interpreted as one bit flip of error introduced to some random initial short seed. Fig.~\ref{fig:ca:1error} demonstrates the evolution of the considered configuration for the first $65$ steps. Red rectangles in Fig.~\ref{fig:ca:1error} highlight the steps of the form $2^j$. At these steps there are only $2$ active cells. The behavior of the configuration with the single active cell explains both why for a small number of bit flips in Fig.~\ref{fig:ber:ca:cur} the normalized Hamming distance approximately doubled after the first step and why the distance resets at every step of the form $2^j$. Given the Bit Error Rate (BER, i.e., the fraction of flipped bits) in the short seed ($p_{bf}$), we can calculate the BER after CA90 expansion (denoted as $p_{CA}$) for steps of the form $2^j$ as follows:
\noindent
\begin{equation}
p_{CA}= 2 p_{bf}^2 (1-p_{bf}) + 2 p_{bf} (1-p_{bf})^2 = 2p_{bf}(1-p_{bf}).
\label{eq:ber:ca}
\end{equation}
\noindent
The intuition here is that due to the local interactions of the CA, it is enough to consider only the cases in Fig.~\ref{fig:rule90}. In particular, we are only interested in the cases that result in active cells at the next step. There are only 4 such cases: two with two active cells and two with one active cell; these are enumerated in (\ref{eq:ber:ca}). Fig.~\ref{fig:ber:ca:tot} plots the analytical $p_{CA}$ according to (\ref{eq:ber:ca}) against the empirical one obtained in numerical simulations; we see that the curves match.
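As a quick sanity check of (\ref{eq:ber:ca}) and of the reset behavior at steps of the form $2^j$, consider the following sketch (illustrative Python; the grid sizes and error rate are arbitrary choices of ours):

```python
import random

def ca90_step(x):
    """One CA90 step with cyclic boundary: new state = XOR of the two neighbors."""
    n = len(x)
    return [x[(i - 1) % n] ^ x[(i + 1) % n] for i in range(n)]

random.seed(1)
N, p_bf = 20001, 0.1
seed = [random.randint(0, 1) for _ in range(N)]
noisy = [b ^ (random.random() < p_bf) for b in seed]

# after a single step, the measured BER approaches p_CA = 2 * p_bf * (1 - p_bf)
ber_after = sum(a != b for a, b in zip(ca90_step(seed), ca90_step(noisy))) / N
p_ca = 2 * p_bf * (1 - p_bf)

# a single bit flip (one active cell in the error pattern, which evolves
# independently of the seed because CA90 is additive) spreads out and
# collapses back to exactly two active cells at steps of the form 2^j
err = [0] * 37
err[18] = 1
active = {}
for step in range(1, 17):
    err = ca90_step(err)
    active[step] = sum(err)
# active[1], active[2], active[4], active[8], active[16] all equal 2
```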
\section{Concepts}
\label{sect:concepts}
\subsection{Collective-state computing}
\label{sect:vsas}
As explained in the introduction, collective-state computing subsumes numerous types of network computations that employ distributed representation schemes based on i.i.d. random vectors. One type is VSA or hyperdimensional computing~\cite{PlateTr, Gallant13, HDNP17}, which has been proposed in cognitive neuroscience as a formalism for symbolic reasoning with distributed representations. Recently, the VSA formalism has been used to formulate other types of collective-state computing models, such as RC~\cite{FradyTheory2018}, compressed sensing~\cite{FradySDR2020}, and randomly connected feed-forward neural networks~\cite{KleykoDensityEncoding2020}. Following this lead, we will formulate the types of collective-state computing that are used in Section~\ref{sect:VSAs:rand} to study the CA90 expansion. VSAs are defined for different spaces (e.g., real or complex), but here we focus on VSAs with dense binary~\cite{KanervaFully1997} or bipolar~\cite{MAP} vectors, where the similarity between vectors is measured by the normalized Hamming distance (denoted as $d_h$) for binary vectors or by the dot product (denoted as $d_d$) for bipolar ones. The VSA formalism will be introduced as we go.
\subsubsection{Item memory with nearest neighbor search}
A common feature in collective-state computing is that a set of basic concepts/symbols\footnote{Examples of the basic concepts are distinct features in machine learning problems~\cite{Rasanen2015tr, KleykoHolographic2017} or distinct symbols in data structures~\cite{JoshiNgrams2016, HD_FSA, YerxaUCBHD_FSA2018, PashchenkoSubstring2020, KleykoPermuted2016}.} is defined and assigned i.i.d. random high-dimensional atomic vectors.
In VSAs, these atomic vectors are stored in the so-called item memory (denoted as $\textbf{H}$), which in its simplest form is a matrix whose size depends on the dimensionality of the vectors (denoted as $K$) and the number of symbols (denoted as $D$). The item memory $\textbf{H}$ enables associative or content-based search: given a (possibly noisy) query vector $\textbf{q}$, the memory returns the best match using the nearest neighbor search:
\noindent
\begin{equation}
\underset{i}{\argmin} (d_h(\textbf{H}_i, \textbf{q})).
\label{eq:nn:search}
\end{equation}
\noindent
The search returns the index of the vector that is closest to the query in terms of the similarity metric (e.g., the Hamming distance as in (\ref{eq:nn:search})). In VSAs and, implicitly, in most types of collective-state computing, this content-based search is required for selecting and error-correcting results of computations that, in noisy form, have been produced by dynamical transformations of the collective state.
\subsubsection{Memory buffer}
\label{sect:mem:bef}
RC, as in echo state networks~\cite{ESN02}, liquid state machines~\cite{LSM02}, and state-dependent computation~\cite{buonomano2009state}, is a prominent example of collective-state computing. In these models the dynamics of a recurrent network is used to memorize or buffer the structure of a spatio-temporal input pattern. In essence, the memory buffer accumulates the input history over time into a compound vector. The recurrent network dynamics attaches time stamps to input arriving at different times, which can later be used to analyze or recover the temporal structure from the compound vector describing the present network state. It has recently been shown how the memory buffer task can be analyzed using the VSA formalism~\cite{FradyTheory2018}, which builds on earlier VSA proposals of the memory buffer task under the name trajectory association task~\cite{PlateBook}.
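Returning to the item memory, the nearest neighbor search in (\ref{eq:nn:search}), which the recall stage described below also relies on, can be sketched in a few lines (illustrative Python; the dimensionality, alphabet size, and noise level are arbitrary choices of ours):

```python
import random

def hamming(a, b):
    """Normalized Hamming distance between two binary vectors."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def nearest_neighbor(item_memory, query):
    """Index of the stored vector closest to the query, as in eq. (nn:search)."""
    return min(range(len(item_memory)),
               key=lambda i: hamming(item_memory[i], query))

random.seed(2)
K, D = 1000, 26  # dimensionality and number of symbols
H = [[random.randint(0, 1) for _ in range(K)] for _ in range(D)]

# a noisy query: a stored vector with 30% of its bits flipped;
# the search still recovers the correct index with high probability
query = [b ^ (random.random() < 0.3) for b in H[7]]
```

Because unrelated random vectors sit at distance $\approx 0.5$ while the noisy query stays at $\approx 0.3$ from its source, the search is reliable despite substantial noise.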
Here we use a simplified variant of the echo state network~\cite{LukoseviciusPracticalESN2012}, called the integer echo state network~\cite{KleykointESN2017}. The memory buffer involves the item memory and two other VSA operations: permutation and bundling. The item memory contains a random binary/bipolar vector assigned to each symbol from a dictionary of symbols of size $D$. A fixed permutation (rotation) of the elements of the vector (denoted as $\rho$)\footnote{The cyclic shift is used frequently due to its simplicity.} is used to represent the position of a symbol in the input sequence. In other words, the permutation operation is used as a systematic transformation of a symbol as a function of its serial position. For example, a symbol \textit{a} represented by $\textbf{a}$ is associated with its position $i$ in the sequence by the result of the permutation (denoted as $\textbf{r}$) as:
\noindent
\begin{equation}
\label{eq:perm}
\textbf{r} = \rho^i( \textbf{a}),
\end{equation}
\noindent
where $\rho^i(*)$ denotes that some fixed permutation $\rho()$ has been applied $i$ times. The bundling operation forms a linear superposition of several vectors, which in some form is present in all collective-state computing models. Its simplest realization is element-wise addition. However, with element-wise addition the values of the result grow without bound; therefore, it is practical to limit the values of the result. In general, the normalization function applied to the result of the superposition is denoted as $f_n(*)$.\footnote{In the case of dense binary VSAs, the arithmetic sum-vector of two or more vectors is thresholded back to a binary vector using the majority rule/sum (denoted as $f_m(*)$), where ties are broken at random.
}
The vector $\textbf{x}$ resulting from the bundling of several vectors, e.g.,
\noindent
\begin{equation}
\label{eq:bundle}
\textbf{x} = f_n(\textbf{a}+\textbf{b} + \textbf{c}),
\end{equation}
\noindent
is similar to each of the bundled vectors, which allows storing information as a superposition of multiple vectors~\cite{FradyTheory2018}. Therefore, in the context of the memory buffer, the bundling operation is used to update the buffer with new symbols. The memory buffer task involves two stages: memorization and recall, which are done in discrete time steps. At the memorization stage, at every time step $t$ we add a vector $\mathbf{H}_{\textbf{s}(t)}$ representing the symbol $\textbf{s}(t)$ from the sequence $\textbf{s}$ to the current memory buffer $\textbf{x}(t)$, which is formed as:
\noindent
\begin{equation}
\textbf{x}(t)= f_n ( \rho(\textbf{x}(t-1)) + \mathbf{H}_{\textbf{s}(t)} ),
\label{eq:buffer}
\end{equation}
\noindent
where $\textbf{x}(t-1)$ is the previous state of the buffer. Note that the symbol added $d$ steps ago is represented in the buffer as $\rho^{d-1}( \mathbf{H}_{\textbf{s}(t-d)})$. At the recall stage, at every time step we use $\textbf{x}(t)$ to retrieve the prediction of the delayed symbol stored $d$ steps ago ($\hat{\textbf{s}}(t-d)$) using the readout matrix ($\textbf{W}^{d}$) for a particular $d$ via the nearest neighbor search:
\noindent
\begin{equation}
\hat{\textbf{s}}(t-d)= \underset{i}{\argmax} ( d_d( \textbf{W}_i^{d}, \textbf{x}(t))).
\label{eq:recall:text}
\end{equation}
\noindent
Due to the use of a normalization function $f_n(*)$, the memory buffer possesses a recency effect; therefore, the average accuracy of the recall is higher for smaller values of the delay. There are several approaches to form the readout matrix and choose a normalization function. Please see Appendix~\ref{sect:intesn} for additional details.
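To make the two stages concrete, here is a minimal sketch in Python with bipolar vectors, where for simplicity $f_n$ is taken as the identity (the parameter values are arbitrary choices of ours). The readout row for symbol $i$ at delay $d$ is taken as $\rho^{d-1}(\mathbf{H}_i)$, matching the representation of the symbol in the buffer:

```python
import random

def rho(v, k=1):
    """Cyclic shift (permutation) of a vector by k positions."""
    return v[-k:] + v[:-k] if k else v

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(3)
K, D = 2048, 8  # dimensionality and alphabet size
H = [[random.choice((-1, 1)) for _ in range(K)] for _ in range(D)]
sequence = [random.randrange(D) for _ in range(10)]

# memorization: x(t) = f_n(rho(x(t-1)) + H_s(t)); here f_n is the identity
x = [0] * K
for s in sequence:
    x = [xi + hi for xi, hi in zip(rho(x), H[s])]

# recall of the symbol stored d steps ago via nearest neighbor (dot product)
def recall(x, d):
    return max(range(D), key=lambda i: dot(rho(H[i], d - 1), x))

recovered = [recall(x, d) for d in range(1, 6)]
# recovered equals the last five stored symbols, most recent first
```

With a normalization such as clipping, the behavior is qualitatively the same but with a stronger recency effect.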
\subsubsection{Factorization with resonator network}
\label{sect:fac:rn}
General symbolic manipulations with VSA require one other operation, in addition to bundling, permutation, and the item memory. The representation of an association of two or more symbols, such as a role-filler pair, is achieved by a binding operation, which associates several vectors (e.g., $\textbf{x}$ and $\textbf{y}$) together and produces another vector (denoted as $\textbf{z}$) of the same dimensionality:
\noindent
\begin{equation}
\label{eq:bind}
\textbf{z} = \textbf{x} \oplus \textbf{y},
\end{equation}
\noindent
where the notation $\oplus$ denotes the element-wise XOR used for binding in dense binary VSAs. While bundling leads to a vector that is correlated with each of its components, in binding the resulting vector $\textbf{z}$ is pseudo-orthogonal to the component vectors. Another important property of binding is that it is conditionally invertible. Given all but one of the components, one can simply compute the representation of the unknown factor from the binding, e.g., $\textbf{z} \oplus \textbf{x} = \textbf{x} \oplus \textbf{x} \oplus \textbf{y} = \textbf{y}$. If none of the factors are given, but all are contained in the item memory, the unbinding operation is still feasible but becomes a combinatorial search problem, whose complexity grows exponentially with the number of factors. This problem often occurs in symbolic manipulation problems, for example, in finding the position of a given item in a tree structure~\cite{ResPart1}. Let us assume that each component (factor; denoted as $\mathbf{f}_i$) comes from a separate item memory ($\prescript{1}{}{\mathbf{H}}, \prescript{2}{}{\mathbf{H}}, ...$), called a factor item memory; a general example of a vector with four factors is:
\noindent
\begin{equation}
\label{eq:factors}
\mathbf{f}_1 \oplus \mathbf{f}_2 \oplus \mathbf{f}_3 \oplus \mathbf{f}_4.
\end{equation}
\noindent
Recent work~\cite{KentResonatorNetworks2019} proposes an elegant mechanism called the resonator network to address the challenge of factoring. In a nutshell, the resonator network~\cite{KentResonatorNetworks2019} is a novel recurrent neural network design that uses VSA principles to solve combinatorial optimization problems. To factor the components from the input vector $\mathbf{f}_1 \oplus \mathbf{f}_2 \oplus \mathbf{f}_3 \oplus \mathbf{f}_4$ representing the binding of several vectors, the resonator network uses several populations of units, $\mathbf{\hat{f}}_1(t), \mathbf{\hat{f}}_2(t), ...$, each of which tries to infer a particular factor from the input vector. Each population, called a resonator, communicates with the input vector and all other neighboring populations to invert the input vector using the following dynamics:
\noindent
\begin{equation}
\begin{split}
&\mathbf{\hat{f}}_1(t+1)= f_n \Big( \prescript{1}{}{\mathbf{H}} \prescript{1}{}{\mathbf{H}}^\intercal (\textbf{z} \oplus \mathbf{\hat{f}}_2(t) \oplus \mathbf{\hat{f}}_3(t) \oplus \mathbf{\hat{f}}_4(t) ) \Big) \\
&\mathbf{\hat{f}}_2(t+1)= f_n \Big( \prescript{2}{}{\mathbf{H}} \prescript{2}{}{\mathbf{H}}^\intercal (\textbf{z} \oplus \mathbf{\hat{f}}_1(t) \oplus \mathbf{\hat{f}}_3(t) \oplus \mathbf{\hat{f}}_4(t) ) \Big) \\
&\mathbf{\hat{f}}_3(t+1)= f_n \Big( \prescript{3}{}{\mathbf{H}} \prescript{3}{}{\mathbf{H}}^\intercal (\textbf{z} \oplus \mathbf{\hat{f}}_1(t) \oplus \mathbf{\hat{f}}_2(t) \oplus \mathbf{\hat{f}}_4(t) ) \Big) \\
&\mathbf{\hat{f}}_4(t+1)= f_n \Big( \prescript{4}{}{\mathbf{H}} \prescript{4}{}{\mathbf{H}}^\intercal (\textbf{z} \oplus \mathbf{\hat{f}}_1(t) \oplus \mathbf{\hat{f}}_2(t) \oplus \mathbf{\hat{f}}_3(t) ) \Big) \\
\end{split}
\label{eqn:resnet:text}
\end{equation}
\noindent
Note that the process is iterative and progresses in discrete time steps, $t$.
In essence, at time $t$ each resonator $\mathbf{\hat{f}}_i(t)$ can hold multiple weighted guesses for a vector from each factor item memory through the VSA principle of superposition, which is used for the bundling operation. Each resonator also uses the current guesses for the factors from the other resonators. These guesses from the other resonators are used to invert the input vector and infer the factor of interest in the given resonator. The principle of superposition allows a population to hold multiple estimates of factor identity and test them simultaneously. The cost of superposition is crosstalk noise. The inference step is, thus, noisy when many guesses are tested at once. However, the next step is to use the factor item memory $\prescript{i}{}{\mathbf{H}}$ to remove the extraneous guesses that do not fit. Thus, the guess $\mathbf{\hat{f}}_i$ for each factor is cleaned up by constraining the resonator activity only to the allowed atomic vectors stored in $\prescript{i}{}{\mathbf{H}}$. Finally, a regularization step (denoted as $f_n(*)$) is needed. Successive iterations of this inference and clean-up procedure (\ref{eqn:resnet:text}) eliminate the noise as the factors become identified and find their place in the input vector. When the factors are fully identified, the resonator network reaches a stable equilibrium and the factors can be read out from the stable activity pattern. Please refer to Appendix~\ref{sect:resnet} for additional motivation and explanation of the resonator network.
\subsection{Cellular automata-based expansion}
\label{sect:ca}
A CA is a discrete computational model consisting of a regular grid of $N$ cells~\cite{Wolfram}. Each cell can be in one of a finite number of states (the elementary CA is binary). The states of the cells evolve in discrete time steps according to some fixed rule. In the elementary CA, the new state of a cell at the next step depends on its current state and the states of its immediate neighbors.
Despite the seeming simplicity of the system, amongst the elementary CAs there are rules (e.g., rule 110) that make the CA dynamics operate at the edge of chaos and that were proven to be Turing complete~\cite{Cook2004}. In the scope of this article, we consider another rule -- rule 90 (CA90) -- as it possesses several properties highly relevant for collective-state computing.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{CA90}
\caption{The assignment of new states for a center cell when the CA uses rule 90. A hollow cell corresponds to state zero while a shaded cell marks state one.
}
\label{fig:rule90}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{Scheme}
\caption{
The basic scheme for expanding distributed representations with CA90 from some initial short seed. The dimensionality of the seed $N$ equals the size of the CA grid.
}
\label{fig:CA90:expand}
\end{figure}
In the elementary CA only three cells are involved in a computation step, and for a CA with binary states there are in total $2^3=8$ possible input combinations. For each input combination there are two possible assignments of the output cell, which makes in total $2^8 =256$ possible assignments, each of which defines a rule. Fig.~\ref{fig:rule90} shows all input combinations and the corresponding assignments of output states for CA90. CA90 assigns the next state of a central cell based on the previous states of its neighbors. In particular, the new state is the result of the XOR operation on the states of the neighboring cells. This is particularly attractive because the implementation of CA90 can be easily vectorized and implemented in hardware (especially when working with cyclic boundary conditions\footnote{Cyclic boundary condition means that the first and the last cells in the grid are considered to be neighbors.}).
For example, if at time step $t$ vector $\textbf{x}(t)$ describes the states of the CA grid, then the grid state at $t+1$ is computed as:
\noindent
\begin{equation}
\label{eq:ca90}
\textbf{x}(t+1)= \rho^{+1}(\textbf{x}(t)) \oplus \rho^{-1}(\textbf{x}(t)),
\end{equation}
\noindent
where $\rho^{\{+1, -1\}}$ denotes the cyclic shift to the right or left by one. Since in the context of this study we use CA90 for the purpose of randomization, we will call the state of the grid $\textbf{x}(0)$ at the beginning of the computation an initial short seed. It is worth pointing out that CA90 formulated as in (\ref{eq:ca90}) is a sequence of VSA operations~\cite{Gayler03}. Given the vector $\textbf{x}$ as an argument, by performing two rotations ($\rho^{+1}(\textbf{x})$ and $\rho^{-1}(\textbf{x})$) and then binding the results of the rotations together ($\rho^{+1}(\textbf{x}) \oplus \rho^{-1}(\textbf{x})$), we implement CA90. The core idea of this paper is to use CA90 to generate a distributed representation of expanded dimensionality that can be used within the context of collective-state computing. This expansion must have certain randomization properties and be robust to perturbations. Fig.~\ref{fig:CA90:expand} presents the basic idea of obtaining an expanded dense binary distributed representation from a short initial seed. In essence, the seed is used to initialize the CA grid. Once initialized, CA90 computations are applied for $L$ steps. At every step of the evolution, the state of the grid provides a new burst of $N$ bits, which can be either used on the fly (without memorization) to make the necessary manipulations and then erased, or concatenated (with memorization) to the previous states if the distributed representation should be re-materialized explicitly. In any case, the dimensionality of the expanded representation is $K=NL$.
\subsection{CA90 and VSAs}
Section~\ref{sect:discussion} will present the joint use of RC, VSAs, and CA90 expansion.
Amongst the related works discussed,~\cite{KleykoBrainsCA2017} is the most relevant, as it uses the randomization property of CA90. In particular, this work identified the following useful properties of CA90 for VSAs:
\begin{enumerate}
\item Random projection;
\item Preservation of the binding operation;
\item Preservation of the cyclic shift.
\end{enumerate}
By random projection we mean that when CA90 is initialized with a random state $\textbf{x}(0)$ ($p_1 \approx p_0 \approx 0.5$), which should be seen as an initial short seed, its evolved state at step $t$ is a vector $\textbf{x}(t)$ of the same size and density. Moreover, during the randomization period (see Section~\ref{sect:ca:rand:el}), $\textbf{x}(t)$ is dissimilar to the initial short seed $\textbf{x}(0)$, i.e., $d_h(\textbf{x}(0),\textbf{x}(t)) \approx 0.5$, as well as to the other states in the evolution of the seed.
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{Examles_of_periods}
\caption{
The normalized degrees of freedom for different values of the grid size of CA90. The evolution of CA90 is reported for $5,000$ steps. The number of short seeds in the item memory was fixed to $100$. The reported values were averaged over $100$ simulations randomizing initial short seeds. Note the logarithmic scales of the axes.
}
\label{fig:ca:period:example}
\end{figure}
The preservation of the binding operation refers to the fact that if a seed $\textbf{c}(0)$ is the result of the binding of two other seeds, $\textbf{c}(0)= \textbf{a}(0) \oplus \textbf{b}(0)$, then after $t$ computational steps of CA90, the evolved state $\textbf{c}(t)$ can be obtained by binding the evolved states of the initial seeds $\textbf{a}(0)$ and $\textbf{b}(0)$ used to form $\textbf{c}(0)$, i.e., $\textbf{c}(t)= \textbf{a}(t) \oplus \textbf{b}(t)$.
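Both the expansion procedure and the preservation of binding follow from the additivity (linearity over GF(2)) of CA90 and are easy to verify in a short sketch (illustrative Python; the grid size and number of steps are arbitrary choices of ours):

```python
import random

def ca90_step(x):
    """CA90 with cyclic boundary: x(t+1) = rho^{+1}(x) XOR rho^{-1}(x)."""
    n = len(x)
    return [x[(i - 1) % n] ^ x[(i + 1) % n] for i in range(n)]

def expand(seed, L):
    """Concatenate L successive grid states into a K = N*L representation."""
    out, x = [], list(seed)
    for _ in range(L):
        out.extend(x)
        x = ca90_step(x)
    return out

random.seed(4)
N, L = 37, 8
a0 = [random.randint(0, 1) for _ in range(N)]
b0 = [random.randint(0, 1) for _ in range(N)]
c0 = [x ^ y for x, y in zip(a0, b0)]  # binding of the two seeds

# CA90 distributes over binding: evolving c0 equals binding the evolutions
at, bt, ct = a0, b0, c0
for _ in range(10):
    at, bt, ct = ca90_step(at), ca90_step(bt), ca90_step(ct)
assert ct == [x ^ y for x, y in zip(at, bt)]
```

The preservation of the cyclic shift discussed next can be verified in exactly the same way, by comparing the evolution of a shifted seed with the shifted evolution of the original seed.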
\begin{figure}[tb]
\centering
\includegraphics[width=1.0\columnwidth]{Periods}
\caption{
The empirically measured randomization period (blue) and the analytical periodic cycles~\cite{MartinPropertiesCA1984} (red) for grid sizes in the range $[9, 46]$. Note the logarithmic scale of the \textit{y}-axis.
}
\label{fig:ca:periods}
\end{figure}
Finally, CA90 computations preserve a special case of the permutation operation -- the cyclic shift by $i$ cells. Suppose $\textbf{d}(0)=\rho^i(\textbf{a}(0))$ is an initial seed. After $t$ computational steps of CA90, the cyclic shift of the evolved seed $\textbf{a}(t)$ by $i$ cells equals the evolved shifted seed $\textbf{d}(t)$, so that $d_h(\textbf{d}(t), \rho^i(\textbf{a}(t)))=0$.
\section{Discussion}
\label{sect:discussion}
\subsection{Summary of our results}
The use of CA computations for the generation of random numbers is not new; for instance, a seminal work~\cite{WolframRandom1986} proposed to generate random sequences with CA. Numerous studies on building CA-based pseudo-random number generators followed, e.g.,~\cite{SantoroSearch2007}; see~\cite{DascaluCARandomization2018} for a recent overview of this work, some of which specifically investigates CA90. Here, we focused on how collective-state computing can benefit from the randomness produced by CA90. Our results are based on a key observation that collective-state computing typically relies on high-dimensional random representations, which have to be generated and stored in advance, e.g., to be accessible for nearest neighbor searches at compute time. In many models, the representations are dense binary random patterns. Rather than storing the full random representations in the item memory, we proposed to store just short seed patterns, and to use CA90 for re-materializing the full random representations when required. The usage of CA90-expanded high-dimensional representations was demonstrated in the context of RC and VSAs.
Our results provided empirical evidence that the expansion of representations on demand (the re-materialization solution) is functionally equivalent to storing the full i.i.d. random representations in the item memory (the memorization solution). Specifically, we have shown that the randomization period of CA90 is closely connected to its periodic cycle length and depends on the size of the grid. We provided the exact relation between the grid size and the length of the randomization period. The general trend is that larger grid sizes yield longer randomization periods. However, the period length depends on number-theoretic properties of the grid size integer. In general, odd-numbered grid sizes have longer randomization periods than even-numbered ones. In particular, one should avoid grid sizes that are powers of two ($2^j$), as they have the shortest randomization periods. The longest periods are obtained when the size of the grid is a prime number. Thus, given a memory constraint, it is best to choose the largest prime within the constraint. We have also demonstrated that it is possible to use the expansion even in the presence of errors in the short seed patterns. Unfortunately, CA90 introduces additional errors to the ones present in the seed pattern, so the error rate after CA90 increases. The distribution of the introduced errors is, however, not uniform -- some steps introduce more errors than others. We have shown that the fewest errors (cf. Fig.~\ref{fig:ber:ca:cur}) are introduced at CA90 steps that are powers of two ($2^j$). Thus, in order to minimize the errors in the expanded representation one should use only steps of the form $2^j$. This, of course, limits the possibilities for expansion, as practically not that many steps of the form $2^j$ can be computed (e.g., we used up to $20$ in the experiments).
\subsection{Related work}
\subsubsection{Combining reservoir computing, vector symbolic architectures and cellular automata}
It has been demonstrated recently~\cite{FradyTheory2018, KleykointESN2017} that echo state networks~\cite{LukoseviciusPracticalESN2012}, an instance of RC, can be formulated in the VSA formalism. CA were first introduced to RC and VSA models for expanding low-dimensional feature representations into high-dimensional representations to improve classification~\cite{YilmazMachine2015, YilmazSymbolic2015}. Due to the local interactions in CA, the evolution of the initial state (i.e., the low-dimensional representation) over several steps produces a richer and higher-dimensional representation of features, while preserving similarity. This method was applied to activation patterns from neural networks~\cite{YilmazMachine2015} and to manually extracted features~\cite{KarvonenFPGA_CA_HD2019}. The expanded representations were able to improve classification results for natural~\cite{YilmazMachine2015} and medical~\cite{KleykoModality2017} images. Similar to these works, our approach also employs CA to expand the dimensionality of representations. However, we have applied the expansion not to feature vectors, but just to i.i.d. random seed patterns. All we need is the property of CA90 that the resulting high-dimensional vectors are still pseudo-orthogonal. In our study, similarity preservation is only required if the random seed patterns contain errors. Our work is most directly inspired by~\cite{SchmuckHardwareOptimizations2019} and~\cite{KleykoBrainsCA2017}, which both used CA to expand the item memory with short~\cite{SchmuckHardwareOptimizations2019} or long~\cite{KleykoBrainsCA2017, McDonaldReplicationCA2019} i.i.d. random seed patterns. In~\cite{SchmuckHardwareOptimizations2019} the expansion was done with the CA30 rule, which is known to exhibit chaotic behaviour. Here, as in~\cite{KleykoBrainsCA2017}, we used the CA90 rule.
For collective-state computing, CA90 has the great advantage that it distributes over the binding and the cyclic shift operations. We have seen this advantage in action when studying the resonator network in Section~\ref{sec:res:resnet:el}. Since CA90 distributes over the binding operation, it was possible to expand the collective state (i.e., the input vector (\ref{eq:factors}) with factors) on demand during the factorization. Going beyond~\cite{KleykoBrainsCA2017, McDonaldReplicationCA2019}, we also systematically explored the randomization properties of CA90, such as the length of the randomization period. Moreover, none of the previous work has studied the randomization behaviour of CA90 in the presence of errors in the initial seed.
\subsubsection{Other computation methods that use randomness}
A complementary approach to computing with randomness is sampling-based computation~\cite{orban2016neural}. This approach differs fundamentally from collective-state computing, which exploits a concentration-of-mass phenomenon of random patterns making them pseudo-orthogonal. Once generated, a fixed set of random patterns can serve as unique and well-distinguishable identifiers for handling variables and values during compute time. In contrast, in sampling-based computation each compute step produces independent randomness to provide good mixing properties. Good mixing ensures that even a small set of samples is representative of the entire probability distribution and, therefore, constitutes a compact, faithful representation (ch. 29 in~\cite{mackay2003information}). We should add that the ``frozen'' randomness in VSA can be used to form different types of compact representations of probability distributions. For example, a combination of binding and bundling can constitute compact representations of large histograms describing the $n$-gram statistics of languages~\cite{JoshiNgrams2016}.
The advantage of such a representation is that it is a vector of the same fixed dimension as the atomic random patterns, largely independent of the number of non-zero entries in the histogram. \subsection{Future work} \subsubsection{Potential for hardware implementation} The space-time or memory-computation tradeoff introduced by the inclusion of CA can be used to optimize the implementation of collective-state computations in hardware. Of course, this optimization depends on the context of a computational problem and a particular hardware platform, which is outside the scope of this article. The optimized hardware implementation of the models we described raises another interesting topic for future research: the question of how to parallelize the computation of CA90. While the implementation of CA90 in FPGA is quite straightforward, see equation (\ref{eq:ca90}), the implementation with neural networks and on neuromorphic hardware~\cite{Loihi18} is still an open problem. \subsubsection{Integration of CA computations in neural associative memories} Another interesting future direction is to investigate how associative memories~\cite{SurveyAM17} can trade off synaptic memory with neural computation implementing the CA. Such CA-based approaches could be compared to other suggestions in the literature for how to replace memory by computation, e.g.,~\cite{knoblauch2010zip}. \section{Integer echo state network} \label{sect:intesn} The integer echo state network (intESN) has been proposed in~\cite{KleykointESN2017} as a lightweight alternative to the conventional ESN~\cite{LukoseviciusPracticalESN2012}. For the sake of simplicity, we introduce it here using the memory buffer task from Section~\ref{sec:mem:buf}. The memory buffer task~\cite{FradyTheory2018,PlateBook} has two stages: memorization and recall. At the memorization stage, at every timestep $m$ the intESN stores a symbol $\textbf{s}(m)$ from the sequence of symbols $\textbf{s}$ to be memorized.
The number of unique symbols (i.e., the alphabet size) is denoted as $D$. The symbols are represented using $K$-dimensional random bipolar dense vectors stored in the item memory $\mathbf{H} \in \{-1,1\}^{K \times D}$. Thus, at every timestep $m$ the intESN is presented with the corresponding $K$-dimensional vector $\mathbf{H}_{\textbf{s}(m)}$, which is added to the hidden layer (i.e., the reservoir storing the collective state) of the intESN ($\textbf{x} \in \mathbb{Z}^{K \times 1}$). The state of the hidden layer at timestep $m$ (denoted as $\textbf{x}(m)$) is updated as follows: \noindent \begin{equation} \textbf{x}(m)= f_\kappa ( \rho(\textbf{x}(m-1)) + \mathbf{H}_{\textbf{s}(m)} ), \label{eq:intESN} \end{equation} \noindent where $\textbf{x}(m-1)$ is the previous state of the hidden layer at timestep $m-1$; $\rho$ denotes the permutation operation (e.g., cyclic shift to the right), which in the intESN acts as a simple variant of a recurrent connection matrix; and $f_\kappa (*)$ is a clipping function defined as: \noindent \begin{equation} f_\kappa (x) = \begin{cases} -\kappa & x \leq -\kappa \\ x & -\kappa < x < \kappa, \\ \kappa & x \geq \kappa \end{cases} \label{eq:clipping} \end{equation} \noindent where $\kappa$ is a configurable threshold parameter. In the scope of the intESN, the clipping function acts as a nonlinear activation function, which keeps the values of the hidden layer in a limited range determined by $\kappa$. In practice, the value of $\kappa$ regulates the recency effect of the intESN. At the recall stage, the intESN uses the current collective state in its hidden layer $\textbf{x}(m)$ as the query vector to retrieve the symbol stored $d$ steps ago, where $d$ denotes the delay. The recall is done by using the readout matrix for a particular $d$, which contains one $K$-dimensional vector for each symbol.
The readout matrix is denoted as $\textbf{W}^{d} \in \mathbb{R}^{D \times K}$ and the recall is done as: \noindent \begin{equation} \hat{\textbf{s}}(m-d)=\argmax ( \textbf{W}^{d} \textbf{x}(m) ), \label{eq:recall} \end{equation} \noindent where $\argmax(\cdot)$ returns the symbol with the highest postsynaptic sum among the output neurons for the chosen delay $d$ and the given collective state $\textbf{x}(m)$. There are several approaches to forming the readout matrix. In the experiments, we used the approach most common in ESNs, that is, obtaining $\textbf{W}^{d}$ by solving a linear regression on a given training sequence and the corresponding states of the hidden layer. \section{Introduction} \label{sect:intro} Collective-state computing is an emerging paradigm of computing that leverages interactions of nodes or neurons in a highly interconnected network~\cite{CsabaCoupled2020}. This paradigm was first proposed in the context of neural networks and neuroscience for exploiting the parallelism of complex network dynamics to perform challenging computations. The classical examples include reservoir computing (RC)~\cite{LSM02, ESN02, RodanMinimumESN2011, FradyTheory2018} for buffering spatio-temporal inputs, and attractor networks for associative memory~\cite{hopfield1982neural} and optimization~\cite{hopfield1985neural}. In addition, many other models can be regarded as collective-state computing, such as random projection~\cite{AchlioptasDatabase2003}, compressed sensing~\cite{donoho2006, AminiDeterministic2011}, randomly connected feedforward neural networks~\cite{RVFLorig, KleykoDensityEncoding2020}, and vector symbolic architectures (VSAs)~\cite{PlateTr, Rachkovskij2001, Kanerva09}. Interestingly, these diverse computational models share a fundamental commonality -- they all include an initialization phase in which high-dimensional i.i.d. random vectors or matrices are generated that have to be memorized.
In different models, these memorized random vectors serve a similar purpose: to represent inputs and variables that need to be manipulated as distributed patterns across all neurons. The collective state is the (linear) superposition of these distributed representations. Decoding a particular variable from the collective state can be achieved by a linear projection onto the corresponding representation vector. Since high-dimensional random vectors are pseudo-orthogonal, the decoding interference is rather small, even if the collective state contains many variables\footnote{Although decoding by projection would work even better for exactly orthogonal distributed patterns, such a coding scheme is less desirable: i.i.d. random generation of distributed patterns is computationally much cheaper and does not pose a hard limit on the number of encoded variables to be smaller than or equal to the number of nodes or neurons.}. In contrast, if representations of different variables are not random, but contain correlations or statistical dependencies, then the interference becomes large when decoding, and collective-state computing ceases to work. In order to achieve near orthogonality and low decoding interference, a large dimension of the random vectors is essential. When implementing a collective-state computing system in hardware (e.g., in Field Programmable Gate Arrays, FPGA), the memory requirements are typically a major bottleneck for scaling the system~\cite{SchmuckHardwareOptimizations2019}. It seems counterintuitive to spend a large amount of memory just to store random vectors. Thus, our key question is whether collective-state computing can be achieved without memorizing the full array of random vectors. Instead of memorization, can memory requirements be traded off for computation? Cellular automata (CA) \cite{Wolfram} are simple discrete computations capable of producing complex random behavior, such as Conway's Game of Life~\cite{GardnerGameOfLife1970}.
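As a concrete illustration (my own sketch, not code from the article), the CA90 rule studied below updates each cell to the XOR of its two neighbors; on a cyclic grid this map is linear over GF(2), which is why it distributes over XOR-based binding. A minimal version, assuming cyclic boundary conditions:

```python
import numpy as np

def ca90_step(state: np.ndarray) -> np.ndarray:
    """One step of elementary cellular automaton rule 90 on a cyclic
    grid: every cell becomes the XOR of its two neighbors."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def ca90_expand(seed: np.ndarray, steps: int) -> np.ndarray:
    """Expand a short binary seed pattern by concatenating the seed
    with its successive CA90 states."""
    states, s = [seed], seed
    for _ in range(steps):
        s = ca90_step(s)
        states.append(s)
    return np.concatenate(states)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=37)
b = rng.integers(0, 2, size=37)

# Rule 90 is linear over GF(2), so it distributes over XOR binding:
assert np.array_equal(ca90_step(a ^ b), ca90_step(a) ^ ca90_step(b))
# A 37-bit seed expanded for 20 steps yields a 777-dimensional vector.
assert ca90_expand(a, 20).size == 37 * 21
```

Because the map is linear, the expansion of a binding of two seeds equals the binding of their expansions, which is what permits the on-demand expansion of bound collective states discussed later in the article.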
Here we study the randomization behavior of an elementary cellular automaton with rule 90 (CA90) for generating distributed representations for collective-state computing. We demonstrate in the context of RC and VSAs that collective-state computing at full performance is possible by storing only short random seed patterns, which are then expanded ``on the fly'' to the full required dimension by running CA90. This work is partially inspired by~\cite{YilmazSymbolic2015}, which proposed that RC, VSAs, and CA can benefit from each other by expanding low-dimensional representations via CA computations into high-dimensional representations that are then used in RC and VSA models. The specific contributions of this article are: \noindent \begin{itemize} \item Characterization of the relation between the length of the randomization period of CA90 and the size of its grid; \item Analysis of the similarity between CA90 expanded representations in the case when the seed pattern contains errors; \item Experimental evidence that for RC and VSAs the CA90 expanded representations are functionally equivalent to the representations obtained from a standard pseudo-random number generator. \end{itemize} \noindent The article is structured as follows. The main concepts used in this study are presented in Section~\ref{sect:concepts}. The effect of randomization of states by CA90 is described in Section~\ref{sect:ca:rand}. The use of RC and VSAs with the CA90 expanded representations is reported in Section~\ref{sect:VSAs:rand}. The findings and their relation to the related work are discussed in Section~\ref{sect:discussion}. \section{Resonator network} \label{sect:resnet} Recall that in Section~\ref{sect:vsas} when introducing the binding operation (\ref{eq:bind}) only a pair of vectors was bound: $\textbf{x} \oplus \textbf{y}$.
There are, however, practical cases when several vectors should be bound together, e.g., $\textbf{z} = \textbf{v} \oplus \textbf{w} \oplus\textbf{x} \oplus \textbf{y}$. A concrete example where this design is used is the representation of $n$-grams by vectors~\cite{JoshiNgrams2016}. Here we assume that each component (factor; denoted as $\mathbf{f}_i$) comes from a separate item memory ($\mathbf{H}_1, \mathbf{H}_2, ...$), called a factor item memory; i.e., a general example of a vector with four factors is $\mathbf{f}_1 \oplus \mathbf{f}_2 \oplus \mathbf{f}_3 \oplus \mathbf{f}_4$. Note that factoring one of the components from a vector representing the binding of two vectors is a relatively simple task since, e.g., $\textbf{x} \oplus \textbf{y} \oplus \textbf{x} = \textbf{y}$. This task becomes much more complex (the complexity grows exponentially) when several vectors are bound together. However, recent work~\cite{KentResonatorNetworks2019, ResPart1} proposed an elegant mechanism called the resonator network to address this factorization challenge. In a nutshell, the resonator network~\cite{ResPart1} is a novel recurrent neural network design that uses VSA principles to solve combinatorial optimization problems. To factor the components from a vector $\textbf{z}$ representing the binding of several vectors, the resonator network uses several populations of units, $\mathbf{\hat{f}}_1(t), \mathbf{\hat{f}}_2(t), ...$, each of which tries to infer a particular factor from the input vector. Each population, called a resonator, communicates with the input vector and all other neighboring populations to invert $\textbf{z}$. In essence, each resonator can hold multiple weighted guesses for a vector from each factor item memory through the VSA principle of superposition, which is used for the bundling operation. Each resonator then also receives guesses for factors from other resonators.
These guesses from the other resonators are used to invert the input vector and infer the factor of interest in the given resonator. The principle of superposition allows a population to hold multiple estimates of factor identity and test them simultaneously. The cost of superposition is crosstalk noise. The inference step is, thus, noisy when many guesses are tested at once. The next step is, therefore, to use the item memory $\mathbf{H}_i$ to remove the extraneous guesses that do not fit. Thus, the guess $\mathbf{\hat{f}}_i$ for each factor is cleaned up by constraining the resonator activity only to the allowed atomic vectors stored in $\mathbf{H}_i$. The main dynamics of the resonator network are best described by the iterative update process. The resonator dynamics can be solved by following the VSA operations. Through VSA operations, the representation of an input vector $\textbf{z}$ can be approximately inverted. Thus, to factorize $\textbf{z}$, we can construct an equation for each factor as follows: \noindent \begin{equation} \begin{split} &\mathbf{\hat{f}}_1(t+1)= f \Big( \mathbf{H}_1 \mathbf{H}_1^T (\textbf{z} \oplus \mathbf{\hat{f}}_2(t) \oplus \mathbf{\hat{f}}_3(t) \oplus \mathbf{\hat{f}}_4(t) ) \Big) \\ &\mathbf{\hat{f}}_2(t+1)= f \Big( \mathbf{H}_2 \mathbf{H}_2^T (\textbf{z} \oplus \mathbf{\hat{f}}_1(t) \oplus \mathbf{\hat{f}}_3(t) \oplus \mathbf{\hat{f}}_4(t) ) \Big) \\ &\mathbf{\hat{f}}_3(t+1)= f \Big( \mathbf{H}_3 \mathbf{H}_3^T (\textbf{z} \oplus \mathbf{\hat{f}}_1(t) \oplus \mathbf{\hat{f}}_2(t) \oplus \mathbf{\hat{f}}_4(t) ) \Big) \\ &\mathbf{\hat{f}}_4(t+1)= f \Big( \mathbf{H}_4 \mathbf{H}_4^T (\textbf{z} \oplus \mathbf{\hat{f}}_1(t) \oplus \mathbf{\hat{f}}_2(t) \oplus \mathbf{\hat{f}}_3(t) ) \Big) \\ \end{split} \label{eqn:resnet} \end{equation} \noindent Note that according to (\ref{eqn:resnet}), to infer, e.g., the second factor, we also need to know the identities of the first, third, and fourth factors, which are not known in advance.
Therefore, the resonator network relies on an iterative approach where the guesses for all factors are inferred simultaneously. Thus, the vectors $\mathbf{\hat{f}}_1$, $\mathbf{\hat{f}}_3$ and $\mathbf{\hat{f}}_4$ can be thought of as holding multiple guesses for their factors in superposition. Only one of these guesses will be correct, however. Thus, the interaction of the incorrect guesses with the inference procedure will result in additional terms that act as crosstalk noise. VSAs are often faced with such noisy inference procedures, and they use the item memory to remove the noise. This is simply a comparison between the inference of $\mathbf{\hat{f}}_2$ and all the possible atomic vectors, which are stored in the item memory $\mathbf{H}_2$ for this factor. Another way of understanding this clean-up step is that the resonator network is constrained to the span of the vectors stored in the item memory. Finally, a regularization step (denoted as $f(*)$) is needed, which can be either normalization or bipolarization. Successive iterations of this inference and clean-up procedure (\ref{eqn:resnet}) eliminate the noise as the factors become identified and find their place in the input vector. When the factors are fully identified, the resonator network reaches a stable equilibrium and the factors can be read out from the stable activity pattern. When running the resonator network, at first, the system appears to bounce around stochastically, with the inferences being dominated by the crosstalk noise. Eventually (in a regime within the ``operational capacity''~\cite{ResPart1}), the system has a moment of insight and rapidly homes in on the solution, and the dynamics converge to a stable equilibrium. Once it converges, the factor corresponding to the atomic vector with the largest similarity to the final state of each estimator is taken as the output.
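To make the update (\ref{eqn:resnet}) concrete, here is a minimal sketch for the bipolar case, where binding is elementwise multiplication (so each vector is its own inverse) and the regularization $f$ is bipolarization via the sign function. All names and parameter values are illustrative assumptions, not taken from the article:

```python
import numpy as np

def resonator(z, H_list, iters=200):
    """Iteratively factor z = f_1 * f_2 * ... (elementwise product of
    bipolar vectors), given one codebook H_i (K x D_i) per factor.
    Returns the index of the inferred atomic vector for each factor."""
    F = len(H_list)
    # Initialize every estimate as the superposition of its codebook.
    est = [np.where(H.sum(axis=1) >= 0, 1, -1) for H in H_list]
    for _ in range(iters):
        new = []
        for i, H in enumerate(H_list):
            # Unbind the other current estimates from the input ...
            others = np.prod([est[j] for j in range(F) if j != i], axis=0)
            guess = z * others
            # ... then clean up by projecting onto the codebook span
            # and bipolarize (ties broken towards +1).
            a = H @ (H.T @ guess)
            new.append(np.where(a >= 0, 1, -1))
        if all(np.array_equal(n, e) for n, e in zip(new, est)):
            break  # reached a stable equilibrium
        est = new
    return [int(np.argmax(H.T @ e)) for H, e in zip(H_list, est)]

rng = np.random.default_rng(1)
K, D = 1000, 8  # illustrative dimension and codebook size
H_list = [rng.choice([-1, 1], size=(K, D)) for _ in range(3)]
idx = [int(rng.integers(D)) for _ in range(3)]
z = np.prod([H[:, i] for H, i in zip(H_list, idx)], axis=0)
recovered = resonator(z, H_list)
```

For a search space this small relative to the dimension ($8^3 = 512$ combinations at $K=1000$), the iteration reliably converges to the correct factorization; performance degrades as the problem size approaches the operational capacity, as described in the text.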
\section{Experimental demonstration of CA90 expansion for collective-state computing} \label{sect:VSAs:rand} \begin{figure}[tb] \centering \includegraphics[width=1.0\columnwidth]{item_memory_CA} \caption{ The usage of i.i.d. random vectors versus the CA90 expanded representations in the item memory. The figure reports the accuracy of the nearest neighbor search where a query was a noisy version of one of the vectors stored in the item memory. The noise was introduced in the form of bit flips. Three different values of Bit Error Rate were simulated ($\{0.30, 0.35, 0.40\}$). The dimensionality of the initial short seeds was set to $N=23$. The size of the item memory was set to $100$. The reported values were averaged over $1,000$ simulations randomizing initial short seeds, random item memories, and noise added to queries. } \label{fig:item:mem:CA} \end{figure} This section focuses on using CA90 expansion for RC and VSAs. In several scenarios, we provide empirical evidence that expanded vectors obtained via CA90 computations are functionally equivalent to i.i.d. random vectors.\footnote{ In this article, by random vectors we mean vectors generated with the use of a standard pseudo-random number generator. Strictly speaking, they should be called pseudo-random vectors, but the term random is used to contrast them with the vectors obtained via CA90 computations. } \subsection{Nearest neighbor search in item memories} One potential application of CA90 expansion of vectors is the ``on the fly'' generation of item memories as used in RC and VSAs. An item memory is used to decode the output of a collective-state computation, where often the nearest neighbor to a query vector within the item memory is to be found. As mentioned before, when there is noise in the query vector, the outcome of the search may not always be correct. Therefore, we explored the accuracy of the nearest neighbor search when the query vector was significantly distorted by noise.
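The nearest neighbor search underlying these experiments can be sketched as follows (a self-contained illustration with hypothetical parameter values; in the CA90 variant, the stored vectors would be expanded from short seeds rather than drawn directly at random):

```python
import numpy as np

rng = np.random.default_rng(2)
K, D, ber = 8192, 100, 0.35  # dimension, item memory size, bit error rate

# Item memory: D random binary vectors of dimension K.
item_memory = rng.integers(0, 2, size=(D, K))

# Query: one stored vector distorted by i.i.d. bit flips.
target = int(rng.integers(D))
flips = (rng.random(K) < ber).astype(item_memory.dtype)
query = item_memory[target] ^ flips

# Nearest neighbor decoding by Hamming distance.
distances = np.count_nonzero(item_memory != query, axis=1)
answer = int(np.argmin(distances))
```

Even at a bit error rate of $0.35$, the expected distance to the correct vector (about $0.35K$) lies far below the distance to unrelated pseudo-orthogonal vectors (about $0.5K$), so the search succeeds with high probability; accuracy only degrades once the dimension becomes small relative to the noise, which is the regime probed in Fig.~\ref{fig:item:mem:CA}.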
We compare two item memories: one with i.i.d. random vectors, and the other with CA90 expanded vectors where only the initial short seeds ($N=23$) were i.i.d. random. Fig.~\ref{fig:item:mem:CA} reports the accuracy results of simulation experiments. The item memory with vectors based on CA90 expansion demonstrated the same accuracy as the item memory with fully i.i.d. random vectors. \subsection{Memory buffer} \label{sec:mem:buf} \begin{figure}[tb] \centering \includegraphics[width=1.0\columnwidth]{CA_mem_buf} \caption{ The usage of i.i.d. random vectors versus the CA90 expanded representations in the memory buffer task; $D=27$ in the experiments. The figure reports the accuracy of the correct recall of symbols for three different values of delay ($\{5, 10, 15\}$). The dimensionality of the initial short seeds was set to $N=37$. The reported values were averaged over $10$ simulations randomizing initial short seeds, random item memories, and traces of symbols to be memorized. } \label{fig:mem:buf:CA} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=1.95\columnwidth]{Resonator_100_300_wide} \caption{ The usage of random vectors versus CA90 expanded representations in the resonator network. The left column reports the average accuracies while the right column reports the average number of iterations until convergence. The maximal number of iterations was set to $500$. The dimensionality of the initial short seeds varied between $\{100, 200, 300\}$. The evolution of CA90 is reported for the first $100$ steps. The size of an individual item memory varied between $\{8, 16, 32\}$. The number of factors was fixed to four. The reported values were averaged over $100$ simulations randomizing initial short seeds. } \label{fig:res:error:less} \end{figure*} Next, we demonstrate the use of CA90 expanded representations in the memory buffer task described in Section~\ref{sect:intesn}.
In these experiments, we measured the accuracy of recall from the memory buffer when the item memory was created from initial short seeds ($N=37$) by concatenating the results of CA90 computations for several steps, $K=NL$. As a benchmark, we used an item memory with i.i.d. random vectors of matching dimensionality. Three different values of delay were considered: $\{5, 10, 15\}$. The results are reported in Fig.~\ref{fig:mem:buf:CA}. As expected, we observe that increasing the dimensionality of the memory buffer increased the accuracy of the recall. The main point, however, is that the memory buffer made from CA90 expanded representations demonstrates the same accuracy as the memory buffer made from i.i.d. random vectors. \subsection{Resonator network factoring in the error-free case} \label{sec:res:resnet:el} To further assess the quality of vectors obtained via CA90 computations, we also examined their use in the resonator network~\cite{ResPart1}. See Section~\ref{sect:fac:rn} and Appendix~\ref{sect:resnet} for details of the resonator network. It is important to emphasize that, due to the preservation of the binding operation by CA90, multiple aspects of the resonator network can benefit from CA90 expansion. Both the composite input vector and the factor item memories do not have to be memorized explicitly, but rather can be expanded from seeds. The factorization process is computed at each level of CA90 expansion, with the vector dimensions increasing by $N$ for each CA step. The outputs of the resonator network are collected and compared to the ground truth, and averaged over many randomized simulation experiments. Fig.~\ref{fig:res:error:less} presents the average accuracies (left column) and the average number of iterations until convergence (right column) for three different dimensionalities of initial short seeds $\{100, 200, 300\}$ and three sizes of factor item memories $\{8, 16, 32\}$. The number of factors was set to four.
The simulations considered the first $100$ steps of CA90, which was less than the randomization period ($1,023$) of the shortest seed ($N=100$). The performance of the resonator network was as expected. For a given dimensionality of short seed and item memory size, the average accuracy increased with the number of CA90 steps -- as practically this means using vectors of larger dimensionality. The number of iterations, in contrast, decreased for larger vectors. Importantly, there was no notable difference between CA90 expanded representations and i.i.d. random vectors in the performance of the resonator network, both in terms of the accuracy and the number of iterations. This further confirms that it is reasonable to use CA90 expanded representations in order to trade off memory for computation. \begin{figure*}[tb] \centering \includegraphics[width=2.0\columnwidth]{CA_errors_37_39} \caption{ The usage of the CA90 expanded representations in the resonator network in the case when the initial short seed might contain errors. The upper panels report the case for $N=37$ while the lower panels correspond to $N=39$. The noise was introduced in the form of bit flips. The number of bit flips was in the range $[0, 5]$ with step $1$. The legends show the corresponding Bit Error Rates relative to $N$. The solid lines depict the resonator network with $4$ factors while the dashed lines depict the resonator network with $3$ factors. Columns correspond to different amounts of information carried by a vector, which was determined by the size of the item memory for one factor. The sizes of item memories for resonator networks with three and four factors were set to approximately match each other in terms of the amount of information. The reported values were averaged over $100$ simulations randomizing initial short seeds as well as introduced bit flips.
} \label{fig:res:errors} \end{figure*} \subsection{Resonator network factoring in the case of errors} In order to examine the capabilities of CA90 expanded representations when the initial short seeds are subject to errors, we performed simulations for two dimensionalities of initial short seeds ($N=37$ and $N=39$). Similar to the experiments in Fig.~\ref{fig:res:error:less}, we used the resonator network to reconstruct a randomly chosen combination of factors. The difference was that some bit flips were added to the initial short seed (i.e., the vector to be factored), where the number of bit flips was in the range $[0, 5]$ with step $1$. The results are reported in Fig.~\ref{fig:res:errors}. To minimize the noise introduced by CA90 computations, we only used steps (\textit{x}-axis in Fig.~\ref{fig:res:errors}) of the form $2^j$ (cf. Fig.~\ref{fig:ca:1error}) to expand the dimensionality. Columns in Fig.~\ref{fig:res:errors} correspond to different amounts of information carried by the vector to be factored. The experiments were done for two configurations of the resonator network: with $3$ factors (dashed lines) and with $4$ factors (solid lines). Clearly, under the same conditions, the resonator network with $3$ factors outperforms the one with $4$ factors. This observation is in line with the expected behavior of the resonator network. It should be noted, however, that the resonator network with $3$ factors requires larger item memories in order to store the same amount of information. For example, for $16.00$ bits, in the case of $4$ factors the size of an individual item memory was $16$, while in the case of $3$ factors it was $40$; i.e., the resonator network with $3$ factors required about $2.5$ times more memory. Thus, the use of a resonator network with fewer factors results in better performance but requires more memory to be allocated.
We also see that even in the absence of errors (BER=0.00) the accuracy is not perfect when a vector carries a lot of information, because we are limited by the capacity of the resonator network -- which does fail at factorization when the size of the factorization problem is too large. For example, for $16.00$ bits none of the expanded dimensionalities reached perfect accuracy, as opposed to the other two cases. Naturally, the inclusion of errors hurts the accuracy, but the performance degradation is gradual. When comparing the performance of the resonator networks for the expanded vectors using all $21$ CA90 steps, we made a counterintuitive observation: the performance for $N=37$ is better despite the shorter vectors and higher Bit Error Rates (BER). Recall from Fig.~\ref{fig:ca:periods} that the chosen grid sizes have different randomization periods: $87,381$ and $4,095$ for $N=37$ and $N=39$, respectively. The longer randomization period for $N=37$ means that the use of $N=37$ provides more randomness for a large number of CA90 steps. This is the main reason for the counterintuitive observation that the use of the shorter seed at higher BER resulted in better performance. When taking into account only the steps of the form $2^j$, these randomization periods allow for about $16.41$ and $12.00$ such doubling steps ($\log_2$ of the periods), respectively. These are exactly the values at which the performance of the resonator networks starts to saturate, since concatenating additional dimensions after the randomization period stops adding extra randomness.
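The randomization periods quoted above can be checked empirically with a brute-force cycle-finding sketch (my own illustration, assuming a cyclic grid as in the rest of the article; the trajectory may pass through a transient before entering its cycle, which the hash map below accounts for):

```python
import numpy as np

def ca90_step(state):
    # Rule 90 on a cyclic grid: each cell becomes the XOR of its neighbors.
    return np.roll(state, 1) ^ np.roll(state, -1)

def randomization_period(seed, max_steps=200_000):
    """Length of the cycle the CA90 trajectory eventually enters,
    found by recording when each state is first visited."""
    first_seen = {}
    s = seed.copy()
    for t in range(max_steps):
        key = s.tobytes()
        if key in first_seen:
            return t - first_seen[key]
        first_seen[key] = t
        s = ca90_step(s)
    return None  # no cycle found within max_steps

rng = np.random.default_rng(3)
period = randomization_period(rng.integers(0, 2, size=37, dtype=np.uint8))
```

For grid sizes that are powers of two, rule 90 is nilpotent and every trajectory collapses to the all-zero fixed point (cycle length $1$), whereas for a size such as $N=37$ the cycle length of a random nonzero seed divides, and typically attains, the period of $87,381$ steps reported in the text.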
\section{Introduction} \label{sec:intro} Consensus provides a fundamental building block for developing reliable distributed systems~\cite{guerraoui:1997,guerraoui:2000,guerraoui:2001}. Motivated by the increasing interest in {\em wireless} distributed systems, in this paper we prove new upper and lower bounds for the consensus problem in wireless networks. \paragraph{The Abstract MAC Layer.} Consensus bounds are dependent on the model in which they are established. Accordingly, we must take care in selecting our model for studying the wireless version of this problem. Most existing work on distributed algorithms for wireless networks assumes low-level synchronous models that require algorithms to deal directly with link-layer issues such as signal fading and channel contention. Some of these models use topology graphs to determine message behavior (c.f.,~\cite{baryehuda:1987,jurdzinski:2002,kowalski:2005,moscibroda:2005,czumaj:2006,gasieniec:2007}) while others use signal strength calculations (c.f.,~\cite{moscibroda:2006,moscibroda:2007,goussevskaia:2009,halldorsson:2012b,jurdzinski:2013:random,daum:2013}). These models are well-suited for asking basic science questions about the capabilities of wireless communication. They are not necessarily appropriate, however, for developing algorithms meant for deployment, as real wireless systems typically require an algorithm to operate on top of a general-purpose MAC layer which is hard to bypass and enables many key network functions such as managing co-existence. Motivated by this reality, in this paper we adopt the {\em abstract MAC layer} approach~\cite{kuhn:2011abstract}, in which we model the basic guarantees provided by most existing wireless MAC layers---if you broadcast a message it will eventually be delivered with acknowledgment to nearby nodes in the network---while leveraging a non-deterministic message scheduler to allow for unpredictability---there is no bound on when messages are delivered or in what order.
The goal with this approach is to describe and analyze algorithms at a level of abstraction that makes it easy to subsequently implement theory results in real systems while still preserving their formally analyzed properties. (See Section~\ref{sec:model} for a detailed model definition and motivation.) \paragraph{Results.} We begin with lower bounds. In Section~\ref{sec:lower:crash}, we generalize the oft-cited result on the impossibility of deterministic consensus with a single process failure~\cite{flp} from the asynchronous message passing model to our abstract MAC layer model. (See Section~\ref{sec:model} for details on how these two models differ.) The main difficulty in this generalization is the new assumption in our model that senders receive acknowledgments at some point after a given broadcast completes. To overcome this difficulty, we are forced to narrow our valency definitions to focus on a restricted class of schedulers. Having established this impossibility, we proceed in this paper assuming no crash failures. Noting that wireless network deployments are often {\em ad hoc}, we next focus on determining how much {\em a priori} information about the network is required to solve deterministic consensus in our model. We start, in Section~\ref{sec:lower:unique}, by proving that consensus is impossible without unique ids, even if nodes know the size and diameter of the network. We then prove, in Section~\ref{sec:lower:n}, that even with unique ids (and knowledge of the diameter), consensus is impossible in multihop networks if nodes do not know the network size. Finally, we prove that any solution to consensus in our model requires $\Omega(D\cdot F_{ack})$ time, where $D$ is the diameter of the underlying network topology and $F_{ack}$ is the maximum message delivery delay (a value unknown to the nodes in the network).
All three bounds leverage partitioning arguments that rely on carefully-constructed worst-case network topologies and message scheduler behavior for which the relevant network knowledge assumptions do not break symmetry. We then turn our attention to matching these lower bounds with a pair of new deterministic consensus algorithms. We begin, in Section~\ref{sec:upper:single}, with a new algorithm that guarantees to solve consensus in single hop networks in an optimal $O(F_{ack})$ time, even without advance knowledge of the network size or participants (this opens up a gap with the asynchronous message passing model, where consensus is impossible under such assumptions~\cite{abboud:2008}). This algorithm uses a two-phase structure. The key insight is that nodes wait to decide after their second phase broadcast until they have also heard this broadcast from a set of important {\em witnesses}. We then present, in Section~\ref{sec:upper:multihop}, the {\em wireless PAXOS} ({wPAXOS}) algorithm, which guarantees to solve consensus in multihop topologies of diameter $D$ in an optimal $O(D\cdot F_{ack})$ time. This algorithm assumes unique ids and knowledge of $n$ (as required by our lower bounds\footnote{Our algorithm still works even if provided only {\em good enough} knowledge of $n$ to recognize a majority. This does not contradict our lower bound as the lower bound assumes {\em no} knowledge of $n$.}), but no other advance knowledge of the network or participants. The wPAXOS algorithm combines the high-level logic of the PAXOS consensus algorithm~\cite{paxos} with a collection of support services that efficiently disseminate proposals and aggregate responses. 
We note that if the PAXOS (or similar consensus algorithm) logic is combined with a basic flooding algorithm, the result would be an $O(n\cdot F_{ack})$ time complexity, as bottlenecks are possible where $\Omega(n)$ value and id pairs must be sent by a single node only able to fit $O(1)$ such pairs in each message. To reduce this time complexity to an optimal $O(D\cdot F_{ack})$, we implement eventually stable shortest-path routing trees and show that they allow fast aggregation once stabilized, while preserving safety at all times. These stabilizing support services and their analysis represent the main contribution of this algorithm. One could, for example, replace the PAXOS logic running atop these services with something simpler (since we have unique ids and knowledge of $n$, and no crash failures, we could, for example, simply gather all values at all nodes). We choose PAXOS mainly for performance reasons, as it only depends on a majority of nodes to make progress, and is therefore not slowed if a small portion of the network is delayed. \paragraph{Related Work.} Consensus provides a fundamental building block for reliable distributed computing~\cite{guerraoui:1997,guerraoui:2000,guerraoui:2001}. It is particularly well-studied in asynchronous models~\cite{paxos,schiper:1997,mostefaoui:1999,aguilera:2000}, where deterministic solutions are impossible with even a single crash failure~\cite{flp}. Most existing distributed algorithm results for the wireless setting assume low-level models. Though consensus has been studied in such models (e.g.,~\cite{chockler:2005}), most efforts in the low-level setting focus on reliable communication problems such as broadcast (see~\cite{peleg:2007} for a good survey).
The abstract MAC layer approach to modeling wireless networks is introduced in~\cite{kuhn:2009} (later expanded to a journal version~\cite{kuhn:2011abstract}), and has been subsequently used to study several different problems~\cite{cornejo2009neighbor,khabbazian:2010,khabbazian:2011,cornejo2014reliable}. This paper, however, is the first to consider consensus in the abstract MAC layer context. Other researchers have also studied consensus in wireless networks at higher levels of abstraction. Vollset and Ezhilchelvan~\cite{vollset:2005}, and Alekeish and Ezhilchelvan~\cite{alekeish:2012}, study consensus in a variant of the asynchronous message passing model where pairwise channels come and go dynamically---capturing some behavior of {mobile} wireless networks. Their correctness results depend on detailed liveness guarantees that bound the allowable channel changes. Wu et~al.~\cite{wu:2009} use the standard asynchronous message passing model (with unreliable failure detectors~\cite{chandra:1996}) as a stand-in for a wireless network, focusing on how to reduce message complexity (an important metric in a resource-bounded wireless setting) in solving consensus. Finally, we note that a key focus in this paper is understanding the importance of network information in solving consensus, a topic previously studied in the classical models. Ruppert~\cite{ruppert2007anonymous}, and Bonnet and Raynal~\cite{bonnet2010anonymous}, for example, study the amount of extra power needed (in terms of shared objects and failure detection, respectively) to solve wait-free consensus in {\em anonymous} versions of the standard models. Attiya et~al.~\cite{attiya2002computing} describe consensus solutions for shared memory systems without failures or unique ids. In this paper, by contrast, we prove consensus impossible without failures or unique ids. These results do not contradict, however, as we assume multihop message passing-style networks. 
A series of papers~\cite{cavin:2004,greve:2007,alchieri:2008}, starting with the work of Cavin et~al.~\cite{cavin:2004}, study the related problem of {\em consensus with unknown participants} (CUPs), where nodes are only allowed to communicate with other nodes whose identities have been provided by a {\em participant detector} formalism. Results on the CUPs problem focus on the structure of the knowledge from such detectors required for consensus (e.g., if we create a graph with a directed edge indicating participant knowledge, then the resulting graph must satisfy certain connectivity properties). Closer to our own model is the work of Abboud~et~al.~\cite{abboud:2008}, which studies single hop networks in which participants are {\em a priori} unknown, but nodes do have a reliable broadcast primitive. They prove consensus is impossible in single hop networks under these assumptions without knowledge of network size. In Section~\ref{sec:upper:single}, we describe an algorithm in our model that {\em does} solve consensus under these assumptions: opening a gap between these two models. \iffalse Another contribution of this paper is that it expands our understanding of the impact of network knowledge on the solvability of consensus with no crash failures in the standard asynchronous model. A similar investigation was begun by Cavin et~al.~\cite{cavin:2004} who introduce the problem of {\em consensus with unknown participants} (CUPs). They studied a variant of the asynchronous model where nodes must learn of each other before they can communicate. This learning is facilitated by a {\em participant detector} abstraction that provides information to each node about some other nodes in the system. 
Greve and Tixeuil~\cite{greve:2007} extended the CUPs problem to include crash failures, identifying a trade-off between the amount of participant information provided and how much synchrony information (captured by the failure detector formalism) is needed to solve consensus. Alchieri~et~al.~\cite{alchieri:2008} further extended the problem to include byzantine faults. Perhaps the work in this vein most similar to ours is due to Abboud~et~al.~\cite{abboud:2008}. As in our paper, the authors consider the broadcast variant of the asynchronous model. They prove that in this setting, for a clique network topology, consensus is impossible, even without crash failures, if node identities are not known in advance. In our paper, we show consensus is solvable in this setting, but once again becomes impossible if we assume the topology is not single hop. Our model differs from theirs in our assumption of an acknowledgment when a message broadcast is complete. Furthermore, our multihop impossibility holds even if $n$ and $D$ are known. \fi \section{Model and Problem} \label{sec:model} For simplicity, in the following we sometimes call our model {\em the} {abstract MAC layer} model. We emphasize, however, that there is no single abstract MAC layer model, but instead many variants that share the same basic assumptions of acknowledged local broadcast and an arbitrary scheduler.
The major differences between our model and the standard asynchronous message passing model are that: (1) we assume local broadcast instead of point-to-point communication; (2) senders receive an acknowledgment at some point after their broadcast completes (this acknowledgment captures the time at which the underlying link layer is done broadcasting its current message; e.g., after its slot in a TDMA schedule arrives or its CSMA algorithm finally detected a clear channel); and (3) we care about assumptions regarding nodes' knowledge of the network, as wireless networks are often deployed in an ad hoc manner where such information may be unknown to nodes. \paragraph{Model Details.} To formalize our abstract MAC layer model, fix a graph $G=(V,E)$, with the set $V$ describing the $|V|=n$ wireless devices in the network (called {\em nodes} in the following), and the edges in $E$ describing nodes within reliable communication range. In this model, nodes communicate with a local reliable (but not necessarily atomic\footnote{Local broadcast in wireless networks is not necessarily an atomic operation, as effects such as the hidden terminal problem might lead some neighbors of a sender to receive a message before other neighbors.}) broadcast primitive that guarantees to eventually deliver messages to a node's neighbors in $G$. At some point after a broadcast completes (see below), a node receives an ack. If a node attempts to broadcast additional messages before receiving an ack for the current message, those extra messages are discarded. To formalize the message delivery guarantees, fix some execution $\alpha$ of a deterministic algorithm in our model. To simplify definitions, assume w.l.o.g. that messages are unique. Let $\pi$ be the event in $\alpha$ where some node $u$ calls {\em broadcast}$(m)$, and $\pi'$ be the subsequent ack returned to $u$.
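As an illustration of the broadcast/ack discipline just described, the following Python sketch (a hypothetical interface of our own, not part of the formal model) captures the rule that a message handed to the layer waits for its ack, and that additional broadcasts issued in the meantime are discarded:

```python
class AbstractMACLayer:
    """Illustrative sketch of the acknowledged local broadcast interface.
    The (adversarial) scheduler decides when receives and acks actually fire;
    here we model only the node-side rule on pending broadcasts."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.pending = None  # message awaiting its ack, if any

    def broadcast(self, m):
        if self.pending is not None:
            return False  # discarded: the current message has not been acked yet
        self.pending = m
        return True

    def deliver_ack(self):
        # Called by the scheduler within F_ack time of the broadcast;
        # frees the node to hand the layer its next message.
        m, self.pending = self.pending, None
        return m
```

The scheduler-side guarantee (every neighbor receives $m$ between the broadcast and its ack, within time $F_{ack}$) is stated formally in the text that follows.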
Our abstract MAC layer model guarantees that in the interval from $\pi$ to $\pi'$ in $\alpha$, every non-faulty neighbor of $u$ in $G$ receives $m$, and these are the only receive events for $m$ in $\alpha$ (this is where we leverage message uniqueness in our definition). We associate each message scheduler with an unknown (to the nodes) but finite value $F_{ack}$ that bounds the maximum delay it is allowed between a broadcast and a corresponding acknowledgment. This property induces some notion of fairness: the scheduler must eventually allow each broadcast to finish. To simplify timing, we assume local non-communication steps take no time. That is, all non-determinism is captured in the message receive and ack scheduling. We note that in some definitions of abstract MAC layer models (see~\cite{kuhn:2011abstract}), a second timing parameter, $F_{prog}$, is introduced to bound the time for a node to receive {\em some} message when one or more neighbors are broadcasting. We omit this parameter in this study as it is used mainly for refining time complexity analysis, while we are concerned here more with safety properties. Refining our upper bound results in a model that includes this second parameter remains useful future work. We also note that some definitions of the abstract MAC layer assume a second topology graph consisting of {\em unreliable} links that sometimes deliver messages and sometimes do not. We omit this second graph in this analysis, which strengthens our lower bounds. Optimizing our multihop upper bound to work in the presence of such links, however, is left as an open question. In some results that follow, we consider {\em crash} failures (a node halts for the remainder of the execution). The decision to crash a node and the timing of the crash is determined by the scheduler and can happen in the middle of a broadcast (i.e., after some neighbors have received the message but not all). We call a node {\em non-faulty} (equiv.
{\em correct}) with respect to a given execution if it does not crash. For our upper bounds, we restrict the message size to contain at most a constant number of unique ids. For a given topology graph $G$, we use $D$ to describe its diameter. Finally, for integer $i>0$, let $[i]=\{1,2,...,i\}$. \paragraph{The Consensus Problem.} To better understand the power of our abstract MAC layer model we explore upper and lower bounds for the standard binary consensus problem. In more detail, each node begins an execution with an initial value from $\{0,1\}$. Every node has the ability to perform a single irrevocable {\em decide} action for a value in $\{0,1\}$. To solve consensus, an algorithm must guarantee the following three properties: (1) {\em agreement}: no two nodes decide different values; (2) {\em validity}: if a node decides value $v$, then some node had $v$ as its initial value; and (3) {\em termination}: every non-faulty process eventually decides. By focusing on binary consensus, as opposed to the more general definition that assumes an arbitrary value set, we strengthen the lower bounds that form the core of this paper. Generalizing our upper bounds to arbitrary value sets in an efficient manner (e.g., a solution more efficient than agreeing on the bits of a general value, one by one, using binary consensus) is non-trivial and remains an open problem. \section{Lower Bounds} \label{sec:lower} We begin by exploring the fundamental limits of our abstract MAC layer model with respect to the consensus problem. In Section~\ref{sec:upper}, we provide matching upper bounds. In the following, we defer some proofs to the appendix for the sake of clarity and concision. \subsection{Consensus is Impossible with Crash Failures} \label{sec:lower:crash} In this section we prove consensus is impossible in our model in the presence of even a single crash failure. To achieve the strongest possible bound we assume a clique topology.
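The three consensus properties defined above can be phrased as a simple predicate over a finished execution. The sketch below (a hypothetical helper of our own, assuming for simplicity that no node crashes, so termination means every node decided) makes the definitions concrete:

```python
def check_consensus(initial, decisions):
    """Check binary consensus on a finished execution.
    `initial` maps node -> input value in {0,1};
    `decisions` maps each node that decided -> its decided value."""
    # Agreement: no two nodes decide different values.
    agreement = len(set(decisions.values())) <= 1
    # Validity: every decided value was some node's initial value.
    validity = all(v in initial.values() for v in decisions.values())
    # Termination (no crashes assumed): every node decided.
    termination = set(decisions) == set(initial)
    return agreement and validity and termination
```

For example, an execution in which every node decides $1$ is valid as long as at least one node started with $1$.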
Our proof generalizes the FLP~\cite{flp} result to hold in our stronger setting where nodes now have acknowledgments.\footnote{The version of the FLP proof that we directly generalize here is the cleaned up and condensed version that appeared in the subsequent textbook of Lynch~\cite{lynch:1996}.} \paragraph{Preliminaries.} For this proof, assume w.l.o.g. that nodes always send messages; i.e., on receiving an {ack} for their current message they immediately begin sending a new message. We define a {\em step} of a node $u$ to be either: (a) a node $v\neq u$ receiving $u$'s current message; or (b) $u$ receiving an {ack} for its current message (at which point its algorithm advances to sending a new message). We call a step of type (a) from above {\em valid} with respect to the execution so far if the node $v$ receiving $u$'s message has not previously received that message {\em and} all non-crashed nodes smaller than $v$ (by some fixed but arbitrary ordering) have already received $u$'s message. We call a step of type (b) {\em valid} with respect to the execution so far if every non-crashed neighbor of $u$ has received its current message in a previous step. When we consider executions that consist only of valid steps we are, in effect, restricting our attention to a particular type of well-behaved message scheduler. We call an execution fragment (equiv. prefix) $\alpha$ of a consensus algorithm {\em bivalent} if there is some extension of valid steps that leads to nodes deciding $0$, and some extension of valid steps that leads to nodes deciding $1$. By contrast, we call an execution $\alpha$ {\em univalent} if every extension of valid steps from $\alpha$ that leads to a decision leads to the same decision.\footnote{Notice, not every extension need lead to a decision. 
If, for example, the extension does not give steps to two or more nodes, then this is equivalent to two or more nodes crashing---a circumstance for which we do not expect a $1$-fault tolerant algorithm to necessarily terminate.} If this decision is $0$ (resp. $1$), we also say that $\alpha$ is {\em $0$-valent} (resp. {\em $1$-valent}). In the following, we use the notation $\alpha \cdot s$, for execution fragment $\alpha$ and step $s$, to describe the extension of $\alpha$ by $s$. \paragraph{Result.} Fix some algorithm ${\cal A}$. Assume for the sake of contradiction that ${\cal A}$ guarantees to solve consensus in this setting with up to $1$ crash failure. \iffalse \begin{lemma} There exists a bivalent initial configuration of ${\cal A}$. \label{lem:1} \end{lemma} \fi The key to generalizing the FLP impossibility to our model is the following lemma, which reproves the main argument of this classical result in a new way that leverages our model-specific constraints. \begin{lemma} Fix some bivalent execution fragment $\alpha$ of ${\cal A}$ and some process $u$. There exists a finite extension $\alpha'$ of $\alpha$ such that $\alpha'\cdot s_u$ is bivalent, where $s_u$ is a valid step of $u$ with respect to $\alpha'$. \label{lem:2} \end{lemma} \begin{proof} Assume for contradiction that this property does not hold for some $\alpha$ and $u$. It follows that for every finite extension $\alpha'$ of $\alpha$ of valid steps, if we extend $\alpha'$ by a single additional valid step of $u$, the execution is univalent. Let $s_u$ be the next valid step for $u$ after $\alpha$ (by our definition of valid, $s_u$ is well-defined). We start by considering $\alpha\cdot s_u$. By assumption, this fragment is univalent. Assume, w.l.o.g. that $\alpha\cdot s_u$ is $0$-valent (the argument below is symmetric for the case where $\alpha\cdot s_u$ is instead $1$-valent).
Because $\alpha$ is bivalent, however, there is some other extension $\alpha''$ consisting of valid steps that is $1$-valent. We now move step by step in $\alpha''$, starting from $\alpha$, until our growing fragment becomes $1$-valent. Assume there are $k\geq 1$ steps in this extension. Label the $k$ intermediate execution fragments from $\alpha$ to $\alpha''$: $\alpha_1, \alpha_2,...,\alpha_k$, where $\alpha_k$ is where the fragment becomes $1$-valent. To simplify notation, let $\alpha_0 = \alpha$. For $0< i< k$, we know $\alpha_i$ is bivalent, so, by our contradiction assumption, the step between $\alpha_{i-1}$ and $\alpha_i$ cannot be $s_u$. Let $s^*$ be the step between $\alpha_{k-1}$ and $\alpha_k$. (Notice that it {\em is} possible that $s^* = s_u$, as the execution is no longer bivalent after this final step.) By our contradiction assumption, we know that for each $i \in \{0,...,k-1\}$, $\alpha_i \cdot s_u$ is univalent. We also know that $\alpha_0\cdot s_u$ is $0$-valent. It follows that there must exist some $\hat i \in \{0,...,k-1\}$, such that $\alpha_{\hat i}\cdot s_u$ is $0$-valent and $\alpha_{\hat i} \cdot s_v \cdot s_u$ is $1$-valent, where $s_v$ is the next valid step of some node $v\neq u$. Notice this holds whether or not $s^* = s_u$ (if $s^* = s_u$ then $\alpha_k$ is a fragment ending with $s_u$ that we know to be $1$-valent, otherwise, $\alpha_k \cdot s_u$ is this fragment). We have found a fragment $\alpha^*$, therefore, where $\alpha^* \cdot s_u$ is $0$-valent but $\alpha^* \cdot s_v \cdot s_u$ is $1$-valent. We now perform a case analysis on $s_v$ and $s_u$ to show that all possible cases lead to a contradiction. In the following, to simplify notation, let $\beta_0 = \alpha^* \cdot s_u$ and $\beta_1 = \alpha^* \cdot s_v \cdot s_u$. {\em Case $1$:} Both steps affect the same node $w$.
It is possible that $w$ is $u$ or $v$ (e.g., if $s_u$ is an acknowledgment and $s_v$ is $u$ receiving $v$'s message); it is also possible that $w$ is not $u$ or $v$ (e.g., if $s_u$ and $s_v$ are both receives at some third node $w$). We note that $w$ (and only $w$) can distinguish between $\beta_0$ and $\beta_1$. Imagine, however, that we extend $\beta_1$ such that every node {\em except for $w$} keeps taking valid steps. All non-$w$ nodes must eventually decide, as this is equivalent to a fair execution where $w$ crashes after $\beta_1$, and $w$ is the only node to crash---a setting where termination demands decision. By our valency assumption, these nodes must decide $1$. Now imagine that we extend $\beta_0$ with the exact same steps. For all non-$w$ nodes these two executions are indistinguishable, so they will once again decide $1$. We assumed, however, that $\beta_0$ was $0$-valent: a contradiction. {\em Case $2$:} The steps affect two different nodes. In this case, it is clear that no node can distinguish between $\beta_0\cdot s_v$ and $\beta_1$. We can, therefore, apply the same style of indistinguishability argument as in case $1$, except in this case we can allow all nodes to continue to take steps. \iffalse {\em Case $1$:} One of the steps is an acknowledgment. Assume that step $s_u$ is the acknowledgement (the other case is symmetric). If $s_v$ is $u$ receiving $v$'s message, then $u$ can distinguish between $\alpha_0$ and $\alpha_1$. Imagine, however, that we extend $\alpha_1$ such that every node {\em except for $u$} keeps taking valid steps. All non-$u$ nodes must eventually decide (as this is equivalent to a fair execution where $u$ crashes after $\alpha_1$, and $u$ is the only node to crash---a setting where termination demands decision). By our valency assumption, these nodes must decide $1$. Now imagine that we extend $\alpha_0$ with the exact same steps.
For all non-$u$ nodes these two executions are indistinguishable, so they will once again decide $1$. We assumed, however, that $\alpha_0$ was $0$-valent: A contradiction. Notice that if $s_v$ delivers a message to a node other than $u$, than the above indistinguishability argument holds, even if we do not crash $u$. In this case, the only node that can possibly differentiate between $\alpha_1 = \alpha^{*} \cdot s_v \cdot s_u$ and $\alpha_0 = \alpha^{*} \cdot s_u \cdot s_v$ is the node $w\in \{v,u\}$, that received the acknowledgment. Imagine that we extend $\alpha_1$ such that every node except for $w$ keeps taking valid steps. All non-$w$ nodes must eventually decide (as this is equivalent to a fair execution where $w$ crashes after $\alpha_1$, and $w$ is the only node to crash---a setting where termination demands decision). By our valency assumption, these nodes must decide $1$. Now imagine that we extend $\alpha_0$ with the exact same steps. For all non-$w$ nodes these two executions are indistinguishable, so they will once again decide $1$. We assumed, however, that $\alpha_0$ was $0$-valent. A contradiction. {\em Case $2$:} Both steps are receives but the receives happen at two different nodes. Let $x$ and $y$ be the two nodes that received messages in these steps. There is no way for $x$ and $y$ to determine the order in which they received these messages. In more detail, we can apply a similar indistinguishability argument as used in Case $1$, but now allowing all nodes to keep taking valid steps, and we end up with a similar contradiction. {\em Case $3$:} Both steps are receives that occur at the same node $w$. In this case, only $w$ can tell the difference between $\alpha_0$ and $\alpha_1$. We apply the exact same indistinguishability argument as Case $1$ to generate a contradiction. \fi \end{proof} \noindent We now leverage Lemma~\ref{lem:2} to prove our main theorem. 
\begin{theorem} There does not exist a deterministic algorithm that guarantees to solve consensus in a single hop network in our abstract MAC layer model with a single crash failure. \label{thm:singlehop} \end{theorem} \begin{proof} Assume for contradiction such an algorithm exists. Call it ${\cal A}$. Using the standard argument, we first establish the existence of a bivalent initial configuration of ${\cal A}$ (e.g., Lemma $2$ from~\cite{flp}). Starting from this configuration, we keep applying Lemma~\ref{lem:2}, rotating through all $n$ nodes in round robin order, to extend the execution in a way that keeps it bivalent. Because we rotate through all nodes when applying Lemma~\ref{lem:2}, the resulting execution is fair in the sense that all nodes keep taking steps. The termination property of consensus requires that nodes eventually decide. By agreement, once any node decides, the execution becomes univalent. By Lemma~\ref{lem:2}, however, our execution remains bivalent, so no node ever decides. This violates termination and therefore contradicts the assumption that ${\cal A}$ solves consensus. \end{proof} \begin{figure}[!t] \centering \begin{minipage}{.39\textwidth} \centering \xymatrix@=.7em { & & & & & \\ &*+[F--]{\text{\scriptsize (gadget copy)}} & & {\bullet}_{q} \ar@{-}[r] \ar@{-}[ll] & *+[F--]{C} & \\ & & & {\bullet}_{c} \ar@{-}[u] & & \\ & {\bullet} _{a_1} \ar@{-}[r] \ar@{-}[urr] & {\bullet}_{a^{+}_2} \ar@{-}[r] & {\bullet}_{a^{+}_3} \ar@{-}[r] \ar@{-}[u]& {\bullet}_{a^{+}_4} \ar@{-}[ul] & \\ & {\bullet} _{a_2} \ar@{-}[u]& & & & \\ & ... & & & & \\ & {\bullet}_{a_{d-1}} & & & & \\ {\bullet}_{a^{*}_1} \ar@{-}[ur] & {\bullet}_{a^{*}_2} \ar@{-}[u] & ...
& {\bullet}_{a^{*}_k} \ar@{-}[ull]& {\bullet}_{a_d}\ar@{-}[ulll] & % } \end{minipage}% \begin{minipage}{.61\textwidth} \centering \xymatrix@=.7em { & {\bullet}_{c_1} & & & &{\bullet}_{c_2} & & & & {\bullet}_{c_3} & & &\\ & {\bullet} \ar@{-}[u] \ar@{-}[r] & {\bullet} \ar@{-}[r] & {\bullet}\ar@{-}[r] \ar@{-}@/^/[urrrrrr]& {\bullet}\ar@{-}[ur] & {\bullet}\ar@{-}[u]\ar@{-}[r] & {\bullet}\ar@{-}[r] & {\bullet}\ar@{-}[r] \ar@{-}[ullllll] & {\bullet} \ar@{-}[ur] & {\bullet}\ar@{-}[u] \ar@{-}[r] & {\bullet} \ar@{-}[r] & {\bullet} \ar@{-}[r] \ar@{-}[ullllll] & {\bullet} \ar@{-}@/_2pc/[ulllllllllll]\\ & & & & & & & & & & & & \\ & & & & & & & & & & & & \\ & & & & & & & & & & & & \\ & *+[F--]{L_1} \ar@{-}[uuuu] & & & & *+[F--]{L_2} \ar@{-}[uuuu] & & & & *+[F--]{L_3} \ar@{-}[uuuu] & & & } \end{minipage} \caption{In {\bf Network A} (left) the $a$ nodes plus $c$ combine to comprise a {\em gadget}. The bridge node $q$ is connected to two copies of this gadget at their $c$ nodes. It is also connected to all nodes in a clique $C$ used to adjust the total network size. In {\bf Network B} (right) the sub-graphs $L_1$, $L_2$, and $L_3$, are each a copy of the sub-graph of the gadget of Network $A$ consisting of nodes at $a_2$ and below in the diagram (i.e., nodes labelled $a_i$ for $i>1$, as well as the $a^*_j$ nodes.) $L_1$, $L_2$, and $L_3$, in other words, connect to the $a_1$ node of the Network $A$ gadget.} \label{fig:uid} \end{figure} \subsection{Consensus is Impossible without Unique Ids} \label{sec:lower:unique} Having proved that consensus is impossible with crash failures, we consider the conditions under which it remains impossible {\em without} crash failures. Recall, in wireless networks, unlike wired networks, the network configuration might be {ad hoc}, preventing nodes from having full {\em a priori} information on the participants. Accordingly, in this section and the next we explore the network information required to solve consensus. 
We start here by investigating the importance of unique ids. We call an algorithm that {\em does not} use unique ids an {\em anonymous} algorithm. We prove below that consensus is impossible with anonymous algorithms, even if nodes know the network size and diameter. We then provide a corollary that extends this result to the standard asynchronous network model. To the best of our knowledge, this is the first result on the necessity of unique ids for consensus in multihop message passing networks. \paragraph{Result.} To prove our main theorem we leverage an indistinguishability result based on the network topologies shown in Figure~\ref{fig:uid}. Due to the careful construction of these networks, we cannot prove our impossibility holds for all $n$ and $D$ (network $B$, for example, requires that $n$ be divisible by $3$). We can, however, prove that for every sufficiently large (even) $D$ and $n$, the problem is impossible for $D$ and some $n' = \Theta(n)$. \begin{theorem} There exists a constant integer $c\geq 1$, such that for every even diameter $D\geq 4$ and network size $n \geq D$, there exists an $n' \in \{n,...,c\cdot n\}$, such that no anonymous algorithm ${\cal A}$ guarantees to solve consensus in our abstract MAC layer model in all networks of diameter $D$ and size $n'$. \label{thm:nounique} \end{theorem} \noindent Given some $D$ and $n$ that satisfy the theorem constraints, let $k$ be the smallest integer $k\geq 0$ such that $3(\frac{D-2}{2} +k) + 12 \geq n$. Set $n' = 3(\frac{D-2}{2} + k) + 12$. Consider networks $A$ and $B$ from Figure~\ref{fig:uid}, instantiated with $d=\frac{D-2}{2}$ and $k$ set to the value fixed above in defining $n'$. In the case of network $A$, set the clique $C$ to contain enough nodes to bring the total count in that network to equal the number of nodes in $B$. The following claim follows from the structure of these networks and our definitions of $k$ and $d$. 
\begin{claim} Networks $A$ and $B$, instantiated with the values described above, have size $n'$ and diameter $D$. \label{claim:nounique:1} \end{claim} \noindent We define the {\em synchronous scheduler} in our model to be a message scheduler that delivers messages in lock step rounds. That is, it delivers all nodes' current message to all recipients, then provides all nodes with an {ack}, and then moves on to the next batch of messages. Furthermore, we assume no global time (or, equivalently, some small amount of time that we can define as needed in the below proof) passes between these {\em synchronous steps.} Fix some consensus algorithm ${\cal A}$ that does not use unique ids. For $b\in \{0,1\}$, let $\alpha_B^b$ be the execution of ${\cal A}$ in network $B$ (see the right network in Figure~\ref{fig:uid}) with all nodes starting with initial value $b$ and message behaviors scheduled by the synchronous scheduler. The following lemma follows directly from the definition of consensus and the fairness of the synchronous scheduler. \begin{lemma} There exists some $t\geq 0$ such that for $b\in \{0,1\}$, $\alpha_B^b$ terminates by synchronous step $t$ with all nodes deciding $b$. \label{lem:nounique:1} \end{lemma} Next, let $\alpha_A$ be an execution of ${\cal A}$ in network $A$ (see the left network in Figure~\ref{fig:uid}) defined as follows: (1) all nodes in one gadget start with initial value $0$ (call these nodes $A_0$), all nodes in the other copy of the gadget start with initial value $1$ (call these nodes $A_1$); (2) the bridge node $q$ and the nodes in component $C$ start with arbitrary initial values; and (3) we fix the scheduler to schedule the steps of $A_0$ and $A_1$ like the synchronous scheduler for $t$ steps (for the $t$ fixed in Lemma~\ref{lem:nounique:1}), while delaying any message from node $q$ being delivered until after these $t$ steps are complete. After this point, the scheduler can behave as the synchronous scheduler for the full network.
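Because the synchronous scheduler is just lock-step rounds, such executions are easy to simulate. The sketch below uses a hypothetical interface of our own: `outgoing` maps a node's state to its current message, and `transition` consumes the round's inbox, modeling the point at which all acks fire and nodes move to their next message.

```python
def synchronous_run(adj, states, outgoing, transition, rounds):
    """Simulate the synchronous scheduler: in each lock-step round, every node's
    current message is delivered to all of its neighbors, after which every node
    receives its ack and computes its next state (and hence its next message)."""
    for _ in range(rounds):
        msgs = {u: outgoing(states[u]) for u in adj}                # broadcasts
        inbox = {u: [msgs[v] for v in adj[u]] for u in adj}         # deliveries
        states = {u: transition(states[u], inbox[u]) for u in adj}  # acks fire
    return states
```

For instance, flooding the maximum initial value over a four-node line under this scheduler stabilizes within diameter-many rounds.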
The key argument in our proof is that a node in $A_b$ cannot distinguish itself during the first $t$ steps of $\alpha_A$ from the same node in $\alpha_B^b$. Intuitively, this follows because the network in $B$ is carefully constructed to be symmetric, so nodes cannot tell if they are communicating with one copy of the network $A$ gadget or multiple copies. To formalize this argument, we introduce some notation that relates network $A$ to $B$. Notice that network $B$ consists of three copies of the gadget from network $A$ (with some of the edges from the connector node copies swapped to interconnect the copies). For each node $u$ in a network $A$ gadget, therefore, we can define $S_u$ to be the set containing the three nodes in network $B$ that correspond to $u$: that is, the nodes in $u$'s position in the three gadget copies of $B$. For example, consider node $c$ in the network $A$ gadget shown in Figure~\ref{fig:uid}. By our above definition, $S_c = \{c_1, c_2, c_3\}$. We can now formalize our indistinguishability. \begin{lemma} Fix some $b\in \{0,1\}$ and some node $u$ in $A_b$. The first $t$ steps of $u$ in $\alpha_A$ are indistinguishable from the first $t$ steps of the three nodes in $S_u$ in $\alpha_B^b$. \label{lem:nounique:2} \end{lemma} \begin{proof} We begin by noting the following property of our networks that follows directly from its structure (in the following, we use the notation $N_{A_b}$ to indicate the neighbor function of the subgraph of network $A$ consisting only of the nodes in $A_b$): (*) {\em Fix any $u\in A_b$ and $u'\in S_u$. For every $v\in N_{A_b}(u)$, $u'$ is connected to exactly one node in $S_v$. There are no other edges adjacent to $u'$ in $B$.} We now leverage property (*) in proving the following induction argument, which itself directly implies our lemma statement. The below induction is on the number of synchronous steps in the $\alpha$ executions.
{\em Hypothesis:} $\forall u\in A_b$, $0 \leq r \leq t$: after $r$ steps, $u$ in $\alpha_A$ has the same state as the nodes in $S_u$ in $\alpha_B^b$. {\em Basis ($r=0$):} Because we assume no unique ids and the same initial values for all relevant nodes, the hypothesis is trivially true after $0$ steps. {\em Step:} Assume the hypothesis holds through some step $r, 0 \leq r < t$. We will now show it holds for step $r+1$. By our hypothesis, for each $w\in A_b$, the nodes in $S_w$ will send the same message as $w$ during step $r+1$ (as this message is generated deterministically by the nodes' state after step $r$). Now consider a particular $u\in A_b$ and a particular copy $u' \in S_u$ in network $B$. By property (*), for each node $v\in N_{A_b}(u)$ that sends a message to $u$ in $r+1$, $u'$ is connected to a single node in $S_v$. By our above argument, this node in $S_v$ will send the same message to $u'$ as $v$ sends to $u$. Furthermore, (*) establishes that there are no other edges to $u'$ that will deliver messages at this point. It follows that $u'$ will receive the same message set in $r+1$ in $\alpha_B^b$ as $u$ receives in $r+1$ in $\alpha_A$. They will end $r+1$, therefore, in the same state. \end{proof} \noindent We now leverage Lemma~\ref{lem:nounique:2} to prove our main theorem. \begin{proof}[Proof of Theorem~\ref{thm:nounique}.] Assume for contradiction that there exists an anonymous algorithm ${\cal A}$ that guarantees to solve consensus for a diameter $D$ and network size $n'$ specified to be impossible by the theorem statement. Fix some nodes $u\in A_0$ and $v\in A_1$ such that $u$ and $v$ are in the same position in their respective gadgets in network $A$. Fix some $w\in S_u = S_v$. By Lemma~\ref{lem:nounique:1}, $w$ decides $0$ within $t$ steps of $\alpha_B^0$. Combining this observation with Lemma~\ref{lem:nounique:2}, applied to $u$ and $b=0$, it follows that $u$ will decide $0$ in $\alpha_A$. 
By agreement, it follows that all nodes must decide $0$ in $\alpha_A$---including $v$. We can, however, apply this same argument to $\alpha_B^1$, $v$ and $b=1$, to determine that $v$ decides $1$ in $\alpha_A$. A contradiction. \end{proof} \noindent We conclude with a corollary for the standard asynchronous model that follows from the fact that our model is strictly stronger and this result concerns a lower bound. \begin{corollary} There exists a constant integer $c\geq 1$, such that for every even diameter $D\geq 4$ and network size $n \geq D$, there exists an $n' \in \{n,...,c\cdot n\}$, such that no anonymous algorithm ${\cal A}$ guarantees to solve consensus in the asynchronous network model with broadcast communication and no advance knowledge of the network topology, in all networks of diameter $D$ and size $n'$. \end{corollary} \subsection{Consensus is Impossible without Knowledge of $n$} \label{sec:lower:n} \begin{wrapfigure}{r}{0.3\textwidth} \centerline{ \xymatrix @=1em { L_D^1 & L_{D-1} & L_D^2 \\ \bullet \ar@{-}[d] \ar@{-}[r] & \bullet \ar@{-}[d] & \bullet \ar@{-}[d] \ar@{-}[l] \\ \bullet \ar@{-}[ur] & \bullet & \bullet \ar@{-}[ul] \\ ... & ... & ... \\ \bullet \ar@{-}[d] \ar@{-}[ruuu] & \bullet \ar@{-}[d] & \bullet \ar@{-}[d] \ar@{-}[luuu] \\ \bullet \ar@{-}[d] \ar@{-}[ruuuu] & \bullet & \bullet \ar@{-}[d] \ar@{-}[luuuu] \\ \bullet \ar@{-}[ruuuuu] & & \bullet \ar@{-}[luuuuu] } } \caption{The $K_D$ network. Note: $L_{D-1}$ contains $D$ nodes.} \label{fig:kd} \end{wrapfigure} In Section~\ref{sec:lower:unique}, we proved that consensus in our model requires unique ids. Here we prove that even with unique ids and knowledge of $D$, nodes still need knowledge of $n$ to solve the problem (in multihop networks). Our strategy for proving this theorem is an indistinguishability argument of a similar style to that used in Section~\ref{sec:lower:unique}. In more detail, consider network $K_D$ with diameter $D$ shown in Figure~\ref{fig:kd}.
Imagine that we start the $D+1$ nodes in sub-graph $L_D^1$ (resp. $L_D^2$) with initial value $0$ (resp. $1$). If we delay message delivery long enough between $L_{D-1}$ and its neighbors in $K_D$, the nodes in $L_D^i$ cannot distinguish between being partitioned in $K_D$ and executing by themselves in their own line network. Knowledge of the diameter does not distinguish these cases. To formalize this argument, we first assume w.l.o.g. that nodes continually send messages. In the following, fix some consensus algorithm ${\cal A}$. Let $L_d$, for integer $d\geq 1$, be the network graph consisting of $d+1$ nodes in a line. Let $\alpha^b_d$, for $b\in \{0,1\}$ and some integer $d\geq 1$, be the execution of ${\cal A}$ in $L_d$, where all nodes begin with initial value $b$ and message behavior is scheduled by the synchronous scheduler (defined in Section~\ref{sec:lower:unique}). The following lemma follows from the validity and termination properties of consensus. \begin{lemma} There exists some integer $t\geq 0$, such that for every $b\in \{0,1\}$ and $d\geq 1$, $\alpha^b_d$ terminates after $t$ synchronous steps with all nodes deciding $b$. \label{lem:uniqueanddiameter:1} \end{lemma} \noindent For a given diameter $D>1$, we define the network graph $K_D$ to consist of two copies of $L_D$ (call these $L_D^1$ and $L_D^2$) and the line $L_{D-1}$, with an edge added from every node in $L_D^1$ and $L_D^2$ to some fixed endpoint of the $L_{D-1}$ line. Notice that by construction, $K_D$ has diameter $D$. (See Figure~\ref{fig:kd}.) Next, we define the {\em semi-synchronous scheduler}, in the context of network graph $K_D$, to be a message scheduler that delivers messages amongst nodes in $L_D^1$ and amongst nodes in $L_D^2$, in the same manner as the synchronous scheduler for $t$ synchronous steps (for the $t$ provided by Lemma~\ref{lem:uniqueanddiameter:1}).
During this period, the semi-synchronous scheduler does {\em not} deliver any messages from the endpoint of the $L_{D-1}$ line to nodes in $L_D^1$ or $L_D^2$. After this period, it behaves the same as the synchronous scheduler. Let $\beta_D$ be the execution of ${\cal A}$ in $K_D$ with: (1) all nodes in $L_D^1$ starting with initial value $0$; (2) all nodes in $L_D^2$ starting with initial value $1$; (3) all nodes in $L_{D-1}$ starting with arbitrary initial values; and (4) the semi-synchronous scheduler controlling message actions. With these definitions established, we can prove our main theorem. \begin{theorem} For every $D>1$, no algorithm ${\cal A}$ guarantees to solve consensus in our abstract MAC layer model in all networks of diameter $D$. \label{thm:uniqueanddiameter} \end{theorem} \begin{proof} Assume for contradiction that ${\cal A}$ guarantees to solve consensus in all networks of diameter $D$, for some fixed $D>1$. By the definition of the semi-synchronous scheduler, it is straightforward to see that $\beta_D$ is indistinguishable from $\alpha^0_D$ for nodes in $L_D^1$, and indistinguishable from $\alpha^1_D$ for nodes in $L_D^2$, for the first $t$ synchronous steps. Combining Lemma~\ref{lem:uniqueanddiameter:1} with our indistinguishability, we note that nodes in $L_D^1$ will decide $0$ in $\beta_D$ while nodes in $L_D^2$ will decide $1$. Therefore, ${\cal A}$ does not satisfy agreement in $\beta_D$. We constructed $K_D$, however, so that it has a diameter of $D$. Therefore, ${\cal A}$ guarantees to solve consensus (and thus satisfy agreement) in this network. A contradiction. \end{proof} \subsection{Consensus Requires $\Omega(D\cdot F_{ack})$ Time} \label{sec:lower:time} The preceding lower bounds all concerned computability. For the sake of completeness, we conclude by considering complexity. The $\Omega(D\cdot F_{ack})$ time bound claimed below is established by a partitioning argument. \begin{theorem} No algorithm can guarantee to solve consensus in our abstract MAC layer model in less than $\lfloor \frac{D}{2} \rfloor F_{ack}$ time. \label{thm:time} \end{theorem} \begin{proof} Fix some $D$. Consider a line of diameter $D$ consisting of nodes $u_1,u_2,...,u_{D+1}$, arranged in that order. Consider an execution of a consensus algorithm in this network with a variant of the synchronous scheduler from Section~\ref{sec:lower:unique} that delays the maximum $F_{ack}$ time between each synchronous step. In $\lfloor \frac{D}{2} \rfloor F_{ack}$ time, the endpoints cannot hear from beyond their nearest half of the line. If we assume they must decide by this deadline, we can apply a standard partitioning argument to create an agreement violation.
In particular, if one half starts with initial value $0$ and the other with initial value $1$, the endpoint of the first half must decide $0$ and the endpoint of the second must decide $1$ (by indistinguishability and validity), creating an agreement violation. \end{proof} \section{Upper Bounds} \label{sec:upper} In Section~\ref{sec:lower}, we proved fundamental limits on solving consensus in our model. In this section, we prove these bounds optimal with matching upper bounds. We consider both {\em single hop} (i.e., the network graph is a clique) and {\em multihop} (i.e., the network graph is an arbitrary connected graph) networks. Due to the impossibility result from Section~\ref{sec:lower:crash}, we assume no crash failures in the following. \subsection{Consensus in Single Hop Networks} \label{sec:upper:single} Here we describe an algorithm---{\em two-phase consensus}---that guarantees to solve consensus in single hop network topologies in an optimal $O(F_{ack})$ time. It assumes unique ids but does not require knowledge of $n$. This establishes a separation from the standard broadcast asynchronous model (which does not include acknowledgments), where consensus is known to be impossible under these conditions~\cite{abboud:2008}.
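To make the single-hop setting concrete, the following toy simulation runs the two-phase strategy described in this subsection under one admissible scheduler: broadcasts are serialized, and each broadcast is assumed to be delivered to every node atomically at the moment of its ack. All names are our own; this is a sanity-check sketch, not the model itself. It exhaustively checks agreement and validity over all such schedules for three nodes.

```python
from itertools import permutations, product

def run_two_phase(init, schedule):
    """Simulate two-phase consensus in a single-hop network, assuming each
    broadcast is delivered to every node atomically at the moment of its
    ack (one admissible scheduler).  schedule lists (node, phase) ack
    events; each node's phase-1 event precedes its phase-2 event."""
    nodes = list(init)
    R = {u: set() for u in nodes}      # every message u has received so far
    status, witness, decision = {}, {}, {}

    def try_finish(u):
        # A bivalent node decides once it holds a phase-2 message from
        # every node in its witness list: 0 if any decided(0) was seen,
        # otherwise the default value 1.
        if u in decision or u not in witness:
            return
        have = {i for (t, i, s) in R[u] if t == "p2"}
        if witness[u] <= have:
            dec0 = any(t == "p2" and s == ("decided", 0) for (t, i, s) in R[u])
            decision[u] = 0 if dec0 else 1

    for (u, phase) in schedule:
        m = ("p1", u, init[u]) if phase == 1 else ("p2", u, status[u])
        for v in nodes:                # single hop: everyone receives m
            R[v].add(m)
        if phase == 1:                 # ack received: choose status
            biv = any((t == "p1" and s != init[u]) or
                      (t == "p2" and s == "bivalent") for (t, i, s) in R[u])
            status[u] = "bivalent" if biv else ("decided", init[u])
        elif status[u] != "bivalent":  # phase-2 ack with a decided status
            decision[u] = init[u]
        else:                          # phase-2 ack: build the witness list
            witness[u] = {i for (t, i, s) in R[u]}
        for v in nodes:
            try_finish(v)
    return decision

# Exhaustively check agreement and validity over every serialized schedule
# and every assignment of initial values for three nodes.
nodes = [1, 2, 3]
events = [(u, p) for u in nodes for p in (1, 2)]
for vals in product([0, 1], repeat=3):
    init = dict(zip(nodes, vals))
    for sched in permutations(events):
        if any(sched.index((u, 1)) > sched.index((u, 2)) for u in nodes):
            continue
        decision = run_two_phase(init, sched)
        assert len(decision) == 3
        assert len(set(decision.values())) == 1       # agreement
        assert set(decision.values()) <= set(vals)    # validity
```

The exhaustive loop covers only one scheduler family; the proof below handles all admissible schedules.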
\begin{wrapfigure}{L}{0.5\textwidth} \begin{minipage}{0.5\textwidth} \begin{algorithm}[H] \caption{Two-Phase Consensus (for node $u$)} \begin{algorithmic}[1] \scriptsize % % \State $v \gets$ initial value from $\{0,1\}$ \State $R_1 \gets \{\langle \text{phase 1}, \id{u}, v \rangle\}$ \State \Comment{Phase $1$} \State {\bf broadcast}$(\langle \text{phase 1}, \id{u}, v \rangle)$ \While{waiting for $ack$} \State add {\bf received} messages to $R_1$ \EndWhile \If{$\langle \text{phase 1,*,} 1-v \rangle\in R_1$ or $\langle \text{phase 2,*, bivalent} \rangle\in R_1$} \State $status\gets$ bivalent \Else \State $status \gets$ decided$(v)$ \EndIf \State \Comment{Phase 2} \State {\bf broadcast}$(\langle \text{phase 2}, \id{u}, status \rangle)$ \State $R_2 \gets \{\langle \text{phase 2}, \id{u}, status \rangle\}$ \While{waiting for $ack$} \State add {\bf received} messages to $R_2$ \EndWhile \State $W \gets$ every unique id in $R_1$ and $R_2$ \Comment{Witness list created} \While{$\exists id\in W$ s.t. $\langle \text{phase 2, id,*} \rangle \notin R_1 \cup R_2$} \State add {\bf received} phase $2$ messages to $R_2$ \EndWhile \If{$\langle \text{phase 2, *, decided}(0) \rangle \in R_2$} \State {\bf decide} 0 \Else \State {\bf decide} 1 \EndIf \end{algorithmic} \label{alg:singlehop} \end{algorithm} \end{minipage} \end{wrapfigure} The pseudocode for our algorithm is presented in Algorithm~\ref{alg:singlehop}. Here we summarize its operation: Each node $u$ executes two {\em phases}. At the beginning of the first phase, $u$ broadcasts its unique id and its initial value $v_u\in \{0,1\}$. Node $u$ considers its first phase complete once it receives an acknowledgment for its first broadcast. At this point, $u$ will choose its {\em status}. If $u$ has seen evidence of a different initial value in the system by this point (i.e., it sees a phase $1$ message for a different value or a {\em bivalent} phase $2$ message), it sets its status to {\em bivalent}. 
Otherwise, it sets it to {\em decided $v_u$}. Node $u$ now begins phase $2$ by broadcasting its status and id. Once $u$ finishes this phase $2$ broadcast, it has two possibilities. If its status is {\em decided}, then it can decide its initial value and terminate. Otherwise, it constructs a {\em witness set} $W_u$, consisting of every node it has heard from so far in the execution. It waits until it has received a phase $2$ message from {\em every} node in $W_u$. At this point, if the set contains any message of the form {\em decided $v_w$}, then it decides $v_w$. Otherwise, it decides the default value $1$. We now establish the correctness of this strategy. \begin{theorem} The two-phase consensus algorithm solves consensus in $O(F_{ack})$ time in our abstract MAC layer model in single hop networks with unique ids. \label{thm:single:upper} \end{theorem} \begin{proof} Validity and termination are straightforward to establish. We turn our attention, therefore, to agreement. If no node ends up with {\em status = decided$(0)$}, then $1$ is the only possible decision value. The interesting case, therefore, is when some node $u$ does set {\em status $\gets$ decided$(0)$} after its phase $1$ broadcast completes. Let $S$ be the subset of nodes that began with initial value $1$ (if any). By assumption, $u$'s phase $1$ broadcast completed before that of any node in $S$, as, otherwise, $u$ would have seen evidence of a $1$ before setting $status$, preventing it from choosing {\em decided}$(0)$. It follows that every node in $S$ must set $status$ to {\em bivalent.} We know, therefore, that it is impossible to have both {\em decided}$(1)$ and {\em decided}$(0)$ in the system. We are left to show that if there is {\em decided}$(0)$ in the system, then all nodes end up deciding $0$. As before, let $u$ be a node with status {\em decided}$(0)$. Now let $v$ be a node with status {\em bivalent}. We consider two cases concerning $u$ and $v$'s interaction.
In the first case, assume $v$ receives a message from $u$ before $v$ finishes its phase $2$ broadcast. It follows that $u$ will be placed in $v$'s witness list, $W$. The algorithm now requires $v$ to wait for $u$'s phase $2$ broadcast before deciding. It will therefore see that $u$ has a status of {\em decided}$(0)$, requiring $v$ to decide $0$. In the second case, $v$ does not receive a message from $u$ before $v$ finishes its phase $2$ broadcast. Accordingly, $u$ is not in $v$'s witness set $W$. This might be problematic as it could allow $v$ to decide before it sees a {\em decided}$(0)$ message. Fortunately, we can show this second case cannot happen. If $v$ had not heard {\em any} message from $u$ by the time it finished its phase $2$ broadcast, it follows that $u$ receives this broadcast before $u$ finishes its own phase $1$ broadcast. But $v$'s phase $2$ broadcast has a {\em bivalent} status. By the algorithm, this would prevent $u$ from setting its status to {\em decided}$(0)$---contradicting our assumption that $u$ has a {\em decided} status. \end{proof} \subsection{Consensus in Multihop Networks} \label{sec:upper:multihop} We now describe a consensus algorithm for the multihop setting that guarantees to solve consensus in $O(D\cdot F_{ack})$ time. It assumes unique ids and knowledge of $n$ (as required by the lower bounds of Section~\ref{sec:lower}), but makes no additional assumptions about the participants or network topology.
Notice, this solution does not {\em replace} the single hop algorithm of Section~\ref{sec:upper:single}, as this previous algorithm: (1) is simpler; (2) has a small constant in its time complexity (i.e., $2$); and (3) does not require knowledge of $n$.\footnote{This lack of knowledge of $n$ does not violate the lower bound of Section~\ref{sec:lower:n}, as this lower bound requires the use of a multihop network topology.} Our strategy for solving consensus in this setting is to leverage the logic of the PAXOS consensus algorithm~\cite{paxos,paxos-simple}. This algorithm was designed and analyzed for the asynchronous network model with bounded crash failures. Here we apply the logic to our wireless model with no crash failures. The main difficulty we face in this effort is that nodes do not know the topology of the network or the identity of the other participants in advance. To overcome these issues, we connect the PAXOS logic with a collection of sub-routines we call {\em services}, which are responsible for efficiently delivering messages, electing the leaders needed by PAXOS for liveness, and telling the proposers when to generate new proposal numbers. We call this combination of PAXOS logic with our model-specific services {\em wireless PAXOS} (wPAXOS). If we were satisfied with a non-optimal $O(n\cdot F_{ack})$ time complexity, the communication services could be implemented with a simple flooding logic (the first time you see a message, re-broadcast), and the leader election service could simply return the largest id seen so far. To obtain an optimal $O(D\cdot F_{ack})$ time complexity, however, requires a more intricate solution. In particular, when a proposer is waiting to hear from a majority of acceptors, we cannot afford for it to receive each response individually (as each message can only hold a constant number of unique ids, this approach would require a proposer to receive $\Theta(n)$ messages).
Our solution is to instead have nodes execute a distributed Bellman-Ford style iterative refinement strategy to establish shortest-path routing trees rooted at potential leaders. We design this service such that once the leader election service stabilizes, a tree rooted at this leader will complete soon after (if it is not already completed). These trees are then used to safely {\em aggregate} responses from acceptors: a strategy that leverages the fact that PAXOS only requires the total {\em count} of a given response type, not the individual responses.\footnote{A technicality here is that these responses sometimes include prior proposals; we handle this issue by simply maintaining in aggregated responses the prior proposal---if any---with the largest proposal number of those being aggregated.} This aggregation strategy reduces the time to gather responses (after stabilization) from $O(n\cdot F_{ack})$ to $O(D\cdot F_{ack})$. The final optimization needed to adapt PAXOS to our model is the change service. We need the eventual leader to generate proposals {\em after} the leader election and tree services stabilize, so it can reap the benefits of efficient acceptor response aggregation. At the same time, however, the leader cannot generate {\em too many} new proposals after this point, or each new proposal may delay the previous one. The key property of our change service is that it guarantees that the leader will generate $\Theta(1)$ new proposals after stabilization (assuming there is no decision yet in the network).
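The aggregation step can be sketched as follows. This is a simplified, assumption-laden toy with hypothetical helper names and a fixed tree, not the actual message-driven service: same-type responses are merged into counts as they move up a routing tree, keeping only the largest prior proposal among those merged.

```python
# Toy sketch of tree-based response aggregation (hypothetical names; the
# real wPAXOS service is message-driven and the tree may still be changing).

def merge_prior(a, b):
    """Largest of two optional prior-proposal numbers (None = absent)."""
    if a is None:
        return b
    if b is None:
        return a
    return max(a, b)

def aggregate_up(parent, responses):
    """parent[v]: v's tree parent (the root maps to itself).
    responses[v]: (response_type, prior_proposal or None) generated at v.
    Returns {response_type: (count, largest_prior)} as seen at the root."""
    def depth(v):
        return 0 if parent[v] == v else 1 + depth(parent[v])
    inbox = {v: {} for v in parent}
    for v, (rtype, prior) in responses.items():
        inbox[v][rtype] = (1, prior)
    # Process deepest nodes first so children forward before their parents.
    for v in sorted(parent, key=depth, reverse=True):
        if parent[v] == v:
            continue
        for rtype, (cnt, prior) in inbox[v].items():
            c0, p0 = inbox[parent[v]].get(rtype, (0, None))
            inbox[parent[v]][rtype] = (c0 + cnt, merge_prior(p0, prior))
    root = next(v for v in parent if parent[v] == v)
    return inbox[root]

# A leader L with a small tree under it: five accepts (one carrying a prior
# proposal numbered 7) and one reject reach L as aggregate counts rather
# than six individual responses.
parent = {"L": "L", "a": "L", "b": "L", "c": "a", "d": "a", "e": "b"}
responses = {"L": ("accept", None), "a": ("accept", None),
             "b": ("accept", None), "c": ("accept", 7),
             "d": ("accept", None), "e": ("reject", None)}
assert aggregate_up(parent, responses) == {"accept": (5, 7), "reject": (1, None)}
```

The merge keeps exactly the information PAXOS needs: a count per response type, plus the largest prior proposal, matching the technicality noted in the footnote.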
\begin{figure} \noindent\begin{minipage}{0.48\textwidth} \begin{algorithm}[H] \caption{Leader Election Service (for node $u$)} \label{alg:paxos:leader} \begin{algorithmic}[1] \scriptsize \Procedure {On Initialization}{} \State $\Omega_u \gets \id{u}$ \State {UpdateQ}$(\langle leader, \id{u}\rangle)$ \EndProcedure % \vspace{3mm} \Procedure{Receive}{$\langle leader, id \rangle$} \If{$id > \Omega_u$} \State $\Omega_u \gets id$ \State {UpdateQ}$(\langle leader, id\rangle)$ \EndIf \EndProcedure % \vspace{3mm} % \Procedure {UpdateQ}{$\langle leader, id \rangle$} \State {\bf empty} leader queue and {\bf enqueue} $\langle leader, id \rangle$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Change Service (for node $u$)} \label{alg:paxos:change} \begin{algorithmic}[1] \scriptsize % \Procedure {On Initialization}{} \State $lastChange \gets -\infty$ \EndProcedure % \vspace{3mm} \Procedure {OnChange}{} \Comment{$\Omega_u$ or $dist_u$ updated.} \State $lastChange \gets$ time\_stamp() \State UpdateQ$(\langle change, lastChange,\id{u} \rangle)$ \EndProcedure % \vspace{3mm} % \Procedure{Receive}{$\langle change, t, id \rangle$} \If{$t>lastChange$} \State $lastChange \gets t$ \State {UpdateQ}$(\langle change, t, id\rangle)$ \EndIf \EndProcedure % \vspace{3mm} % \Procedure {UpdateQ}{$\langle change, t, id \rangle$} \State {\bf empty} the change queue then {\bf enqueue} $\langle change, t, id \rangle$ \If{$\Omega_u = \id{u}$} \State GenerateNewPAXOSProposal() \EndIf \EndProcedure % \end{algorithmic} \end{algorithm} \end{minipage} \noindent\begin{minipage}{0.48\textwidth} \begin{algorithm}[H] \caption{Tree Building Service (for node $u$)} \label{alg:tree} \begin{algorithmic}[1] \scriptsize \Procedure {On Initialization}{} \State $\forall v\neq u: dist[\id{v}] \gets \infty$ and $parent[\id{v}] \gets \bot$\ \State $dist[\id{u}] \gets 0$ and $parent[\id{u}] \gets \id{u}$ \State UpdateQ$(\langle search, \id{u}, 1\rangle)$ \EndProcedure % \vspace{3mm} 
\Procedure{Receive}{$m = \langle search, id, h \rangle$} \If{$h < dist[id]$} \State $dist[id] \gets h$ \State $parent[id] \gets m.sender$ \State {UpdateQ}$(\langle search, id, h+1 \rangle)$ \EndIf \EndProcedure % \vspace{3mm} % \Procedure {UpdateQ}{$\langle search, id, h \rangle$} \State {\bf enqueue} $\langle search, id, h \rangle$ on tree queue \State {\bf discard} any message for $id$ with hop count $h' > h$ \State {\bf move} message (if any) with id $\Omega_u$ to front of tree queue \EndProcedure % \vspace{3mm} % \Procedure{OnLeaderChange}{} \Comment{Called when $\Omega_u$ changes} \State {\bf move} message (if any) with id $\Omega_u$ to front of tree queue \EndProcedure % \end{algorithmic} \end{algorithm} \begin{algorithm}[H] \caption{Broadcast Service (for node $u$)} \label{alg:paxos:bcast} \begin{algorithmic}[1] \scriptsize \While{true} \State {\bf wait} for at least one queue from \State $\ \ \ $ $\{tree, leader, change\}$ to become non-empty \State {\bf dequeue} a message from each non-empty queue and \State $\ \ \ $ combine into one message $m$. \State {\bf broadcast}$(m)$ then wait for $ack$ \EndWhile \end{algorithmic} \end{algorithm} \end{minipage} \caption{Support services used by wPAXOS. Notice, the broadcast service schedules message broadcasts from the queues maintained by the other three services. We assume that when a message is received it is deconstructed into its constituent service messages, which are then passed to the {\tt receive} procedure of the relevant service. This basic receive logic is omitted in the above.} \label{fig:paxos} \end{figure} \subsubsection{Algorithm} Our algorithmic strategy is to implement the logic of the classic PAXOS asynchronous agreement algorithm~\cite{paxos,paxos-simple} in our abstract MAC layer model. To do so, we implement and analyze a collection of {\em services} that can be connected to the high-level PAXOS logic to run it in our model.
These services are responsible for disseminating proposer messages and acceptor responses (in an efficient manner), as well as notifying the high-level PAXOS logic when to start over with a new proposal number. They also provide the leader election service needed for liveness. As mentioned, we call this combination of the high-level PAXOS logic with our model-specific support services {\em wireless PAXOS} (wPAXOS). We note that if we did not care about time complexity, our services could disseminate messages and responses with simple flooding. To achieve an optimal $O(D\cdot F_{ack})$ time, however, requires a more complicated strategy. In more detail, our services build, in a distributed fashion, an eventually stabilized shortest-path tree rooted at the eventual leader in the network. Once stabilized, acceptors can efficiently send their responses to the leader by aggregating common response types as they are routed up the tree. We proceed below by first describing these support services, and then describing how to connect them to the standard PAXOS logic. We conclude by proving the needed safety and liveness properties for the resulting combined algorithm. In the following, we assume the reader is already familiar with the PAXOS algorithm (if not, see~\cite{paxos-simple}), and focus on the new model-specific service algorithms and how PAXOS uses them. \paragraph{Services.} Our wPAXOS algorithm requires four support services (see Figure~\ref{fig:paxos} for the pseudocode). The first three, {\em leader election}, {\em change}, and {\em tree building}, each maintain a message queue. The fourth, {\em broadcast}, is a straightforward loop that takes messages from the front of these queues, combines them, and then broadcasts the combined message---allowing the algorithm to multiplex multiple services on the same channel.
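Of these services, the tree-building rule is the subtlest. As a warm-up, here is a centralized toy of the Bellman-Ford style relaxation it performs (our own minimal representation; the real service is message-driven, queue-based, and prioritizes the current leader's messages):

```python
# Centralized toy of the tree service's Bellman-Ford style relaxation
# (hypothetical names): repeatedly relax edges until dist/parent stabilize
# to a shortest-path tree rooted at the given source.

def build_tree(adj, root):
    dist = {v: float("inf") for v in adj}
    parent = {v: None for v in adj}
    dist[root], parent[root] = 0, root
    changed = True
    while changed:                      # at most |V| - 1 relaxation sweeps
        changed = False
        for u in adj:
            for v in adj[u]:
                if dist[u] + 1 < dist[v]:
                    dist[v], parent[v] = dist[u] + 1, u
                    changed = True
    return dist, parent

# A 4-cycle plus a pendant node: shortest-path distances from node 0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2, 4], 4: [3]}
dist, parent = build_tree(adj, 0)
assert dist == {0: 0, 1: 1, 2: 2, 3: 1, 4: 2}
assert parent[4] == 3 and parent[0] == 0
```

In the distributed service, each relaxation is triggered by receiving a {\tt search} message with a smaller hop count, as in Algorithm~\ref{alg:tree}.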
In the following, therefore, we talk about the messages of the leader election, change, and tree building services as if they were using a dedicated channel. {\em Leader Election.} This service maintains a local variable $\Omega_u$, containing an id of some node in the network, at each node $u$. The goal of this service is to eventually stabilize these variables to the same id network-wide. It is implemented with standard flooding logic. {\em Tree Building.} This service attempts to maintain in the network, for each node $v$, a shortest-path tree rooted at $v$. The protocol runs a Bellman-Ford style iterative refinement procedure for establishing these trees, with the important optimization that the messages corresponding to the current leader get priority in each node's tree service queue. This optimization ensures that soon after the network stabilizes to a single leader, a tree rooted at that leader is completed. {\em Change.} This service is responsible for notifying PAXOS proposers when they should start a new proposal number. The goal for this service is to ensure that the eventual leader $u_{\ell}$ starts a new proposal after the other two services have stabilized: that is, after the leader election service has stabilized to $u_{\ell}$ across the network, and the tree rooted at $u_{\ell}$ is complete. Proposals generated after this point, we will show, are processed efficiently. The service must also guarantee, however, not to generate {\em too many} changes after this point, as each new proposal can delay termination. \paragraph{Connecting PAXOS Logic to Support Services.} We now describe how to combine the standard PAXOS logic with our model-specific support services to form the wPAXOS algorithm. Recall that in PAXOS there are two roles: {\em proposers} and {\em acceptors}.\footnote{There is sometimes a third role considered, called {\em learners}, responsible for determining when a decision has been made.
As is common, however, we collapse this role with the role of the proposer. When a proposer determines it can decide a value---i.e., because a proposal for this value was accepted by a majority of acceptors---it does so and floods this decision to the rest of the network.} In wPAXOS all nodes play both roles. We describe below how both roles interact with our services. {\em Proposers.} Proposers generate {\em prepare} and {\em propose} messages (the latter sometimes called {\em accept} messages), associated with a given proposal number. A proposal number is a pair consisting of a $tag$ (initially $0$) and the node's id; pairs are compared lexicographically. A key detail in any PAXOS deployment is the conditions under which a proposer chooses a new proposal number and starts over the proposal process. In wPAXOS, a proposer starts over when its change service locally calls {\em GenerateNewPAXOSProposal}(). At this point it increases its $tag$ to be $1$ larger than the largest tag it has previously seen or used. If this new proposal number is rejected at either stage, {\em and} the proposer learns of a larger proposal number in the system during this process (i.e., because an acceptor appended a previous message to its rejection), {\em and} it is still the leader according to its leader election service, it increases its tag number and attempts a new proposal. If this new proposal also fails, it waits for the change service before trying again. In other words, a proposer will only try up to $2$ proposal numbers for each time it is notified by the change service to generate a new proposal. To disseminate the {\em prepare} and {\em propose} messages generated by proposers, we assume a simple flooding algorithm (due to its simplicity, we do not show this pseudocode in Figure~\ref{fig:paxos}): if you see a proposer message from $u$ for the first time, add it to your queue of proposer messages to rebroadcast.
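This flooding rule admits a compact sketch. Assuming synchronous rounds and a toy graph representation (our own names, not part of wPAXOS), rebroadcast-on-first-sight delivers a message to every node within $D$ rounds on a diameter-$D$ graph:

```python
# Minimal sketch of rebroadcast-on-first-sight flooding under an assumed
# synchronous round structure: count the rounds until every node has seen
# a message originating at src.

def flood_rounds(adj, src):
    seen, frontier, rounds = {src}, {src}, 0
    while len(seen) < len(adj):
        # One round: every first-time receiver from the previous round
        # rebroadcasts; only new receivers join the next frontier.
        frontier = {v for u in frontier for v in adj[u] if v not in seen}
        seen |= frontier
        rounds += 1
    return rounds

# A line of five nodes has diameter 4: flooding from an endpoint takes 4
# rounds, and from the middle node only 2.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert flood_rounds(line, 0) == 4
assert flood_rounds(line, 2) == 2
```

In the actual model each "round" costs up to $F_{ack}$ time, which is where the $O(D\cdot F_{ack})$ dissemination cost comes from.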
To prevent old proposer messages from delaying new proposer messages, we have each proposer maintain the following invariant regarding the message queue used in this flooding: at all times, the queue (1) contains only messages from the current leader; and (2) contains only messages associated with the largest proposal number seen so far from that leader. {\em Acceptors.} We now consider the acceptors. When acceptor $v$ generates a response to a {\em prepare} or {\em propose} message from proposer $u$, $v$ labels the response with the destination $parent[\id{u}]$ then adds it to its acceptor broadcast queue. This message is then treated like a unicast message: even though it will be broadcast (as this is the only communication primitive provided by our model), it will be ignored by any node except $parent[\id{u}]$. Of course, if $v=u$ then the message can skip the queue and simply be passed over to the proposer logic. To gain efficiency, we have acceptors aggregate messages in their acceptor queue when possible. In more detail, if at any point an acceptor $v$ has in its queue multiple responses of the same type (positive or negative) to the same proposer message, to be sent to the same $parent$, it can combine them into a single {\em aggregated} message. This single message retains the type of the responses as well as the proposal number the responses reference, but replaces the individual responses with a count. We can combine aggregated messages with other aggregated messages and/or non-aggregated messages in the same way (i.e., given two aggregated messages with counts $k_1$ and $k_2$, respectively, we can combine them into an aggregated message with count $k_1 + k_2$). Notice, PAXOS sometimes has acceptors respond to a {\em prepare} message with a previous proposal (i.e., a combination of a proposal number and value).
When aggregating multiple messages of this type (i.e., containing previous proposals), we keep only the previous proposal with the largest proposal number. We also assume PAXOS implements the standard optimization that has acceptors, when rejecting a message, append the larger proposal number to which they are currently committed. We aggregate these previous proposals in the same way we do with positive responses to prepare messages---by maintaining only the largest among those in the messages we are aggregating. Finally, as with the proposers, the acceptors keep their message queue up to date by maintaining the following invariant: at all times, the queue (1) contains only responses to propositions from the current leader; and (2) contains only responses associated with the largest proposal number seen so far from that leader. {\em Deciding.} When a proposer learns it can decide a value $val$ (i.e., because it has learned that at least a majority of acceptors replied with {\em accept} to a proposal containing $val$), it will decide $val$ then flood a {\em decide}$(val)$ message. On receiving this message, a node will decide $val$. \subsubsection{Analysis} \label{sec:paxos:analysis} We now prove that our wPAXOS algorithm solves consensus in $O(D\cdot F_{ack})$ time---matching the relevant lower bounds. In the following, let a {\em message step} (often abbreviated below as just ``step'') be an event in an execution where a message or ack is received. Notice, because we consider deterministic algorithms and assume that local computation does not require any time, the behavior of an algorithm in our model is entirely described by a sequence of message steps. Let a {\em proposition} be a combination of a proposer, a proposer message type (i.e., either {\em prepare} or {\em propose}), and a proposal number.
For a fixed execution of wPAXOS: let $a(p)$, for a given proposition $p$, be the number of acceptors that generate an affirmative response to proposition $p$; and let $c(p)$ be the total count of affirmative responses for proposition $p$ received by the originator of $p$. Similarly, let $c(p,s)$, for step $s$, be the total count of affirmative responses to $p$ received by the end of step $s$. \paragraph{Safety.} In the standard asynchronous network model, PAXOS takes for granted that proposers properly count the relevant responses to their propositions, as the model reliably delivers each response, labeled with the acceptor that generated it. In wPAXOS, however, we aggregate responses using shortest-path trees that might change throughout the execution. Before we can leverage the standard safety arguments for PAXOS, we must first show that this aggregation in dynamic trees does not compromise the integrity of the proposer response counts. \begin{lemma} Fix some execution of wPAXOS. Let $p$ be a proposition generated by $u$ in this execution. It follows that $c(p) \leq a(p)$. \label{lem:paxos:1} \end{lemma} \begin{proof} In this proof, for a fixed execution, proposition $p$, node $v$, and step $s$, we define $q(p,v,s)$ to be the sum of the following values: (1) the total count of affirmative responses to proposition $p$ in node $v$'s acceptor message queue after step $s$; (2) the total count of the affirmative responses to $p$ in the message $v$ is in the process of sending to some $w$ after step $s$, assuming that $w$ has not yet received the message at this point; and (3) $1$, if the acceptor at $v$ will generate an affirmative response to $p$ at some later step $s' > s$ in the execution. Similarly, let $Q(p,s) = \sum_{v\in V} q(p,v,s)$. Fix some proposition $p$ with originator $u$.
To prove our lemma it is sufficient to prove the following invariant: \begin{center} {\em (*) For every step $s\geq 0$: $Q(p,s) + c(p,s) \leq a(p)$.} \end{center} We prove (*) by induction on $s$. The {\em basis} ($s=0$) of this induction is trivial: by definition, $Q(p,0) = a(p)$ and $c(p,0) = 0$. For the {\em inductive step} we assume (*) holds through step $s$. There are three cases for step $s+1$ that are relevant to our hypothesis. {\em Case $1$:} Assume $s+1$ has some acceptor $v\neq u$ receive a message that causes it to generate a response to $p$. This action removes the quantity $1$ contributed to $q(p,v,s)$ by element (3) of the definition of $q$ and transfers it either into the messages in $v$'s queue or, if $v$ is not already in the process of sending a message when $s+1$ occurs, directly into a message being sent by $v$. In either case, this value of $1$ now shows up in $q(p,v,s+1)$ from element (1) {\em or} (2) of its definition. It follows that $Q(p,s+1) = Q(p,s)$ and $c(p,s+1) = c(p,s)$. {\em Case $2$:} Assume $s+1$ has some acceptor $w\neq u$ receive a message $m$ from $v$ that contains affirmative responses to $p$ {\em and} $v$ addressed the message to $w$. In this case, the count of affirmative responses contained in this message is moved unchanged from $q(p,v,s)$ to $q(p,w,s+1)$. Once again, $Q(p,s+1) = Q(p,s)$ and $c(p,s+1)=c(p,s)$. {\em Case $3$:} Assume step $s+1$ has $u$ receive a message with $k$ affirmative responses to $p$ that is addressed to $u$. In this case, $u$ adds $k$ to its count and discards the message. It follows that $Q(p,s+1) = Q(p,s) - k$ and $c(p,s+1) = c(p,s) + k$. In all three cases, $Q(p,s+1) + c(p,s+1) \leq Q(p,s) + c(p,s) \leq a(p)$ (and any step that discards queued responses can only decrease $Q$), so the invariant is maintained. Because $Q(p,s) \geq 0$ for every $s$, the invariant implies $c(p) \leq a(p)$, as needed. \end{proof} The lemma below comes from~\cite{paxos-simple}, where Lamport shows that proving it provides agreement. If we can prove this lemma, in other words, then we can apply the arguments of~\cite{paxos-simple} to establish agreement. Notice that our proof leverages Lemma~\ref{lem:paxos:1}. \begin{lemma} Fix an execution of wPAXOS.
Assume proposer $u$ generates a proposal with value $val$ and proposal number $x$. It follows that there exists a set $S$ consisting of a majority of acceptors such that either: (a) no acceptor in $S$ has accepted a proposal with number smaller than $x$, or (b) $val$ is the value of the highest-numbered proposal among all proposals with numbers less than $x$ accepted by the acceptors in $S$. \label{lem:paxos:2} \end{lemma} \begin{proof} Fix some execution $\alpha$. Let $p'$ be a {\em propose} proposition generated by $u$ for proposal number $x$. Let $p$ be the {\em prepare} proposition from $u$ with this same number $x$ that must, by the definition of the algorithm, precede $p'$. Let $A$ be the set of acceptors that commit to $p$. Let $s$ be the step at which $c(p,s)$ becomes greater than $n/2$ for the first time. We define a directed graph $G(\alpha,p,s) = (V,E')$ as follows: add $(w,v)$ to $E'$ iff at some step $s' \leq s$ in $\alpha$, acceptor $w$ sent a message containing a positive response to $p$ to $v$. Let $A_u \subseteq A$ be the subset of $A$ whose members have a path to $u$ in $G(\alpha,p,s)$. Consider the case where at least one node in $A_u$ had previously committed to a proposal number $\hat x$ when $p$ arrived. We know $\hat x < x$, as this node subsequently committed to $x$. Let $x_{max} \geq \hat x$ be the largest such proposal number of a previous commitment and $val_{max}$ be the corresponding value. Let $P$ be the set of previous proposals that $u$ receives in response to $p$. We argue that $(x_{max}, val_{max})$ is in $P$ and has the largest proposal number of any proposal in $P$. If this is true, it follows that $u$ will adopt $val_{max}$ for its proposal $p'$, as needed by the lemma. To show this is true, let $v_{max}$ be an acceptor in $A_u$ that previously committed to that proposal. By the definition of $G(\alpha,p,s)$, there is a causal path of messages on which this proposal could travel to $u$.
The only way it could be discarded before arriving at $u$ is if at some point a larger proposal in response to $p$ joins the path from $v_{max}$ to $u$ {\em before} the $x_{max}$ proposal has been sent forward. By assumption, however, no process in $A_u$ has a larger proposal number. It follows that this larger proposal must have originated from some $v' \notin A_u$. However, if this previous proposal can intercept the path from $v_{max}$ to $u$ before $v_{max}$'s messages have passed that point, then, by the definition of $G(\alpha,p,s)$, $v'$ must have a path to $u$ in $G(\alpha,p,s)$, and therefore must be in $A_u$, a contradiction. We have now shown that property (a) or (b) holds for $A_u$. To prove the lemma, we are left to show that $A_u$ must contain a majority of acceptors. Assume for the sake of contradiction that $|A_u| \leq n/2$. At step $s$, however, proposer $u$ has $c(p,s) > n/2$. By the definition of $G(\alpha,p,s)$, there is no causal path of messages being sent and received from nodes in $V\setminus A_u$ to $u$ that arrives at $u$ before it counts a majority of acceptances. Therefore, the execution $\alpha'$ in which {\em no} node in $V \setminus A_u$ commits to $p$ is indistinguishable from $\alpha$ with respect to $u$ through step $s$. In $\alpha'$, however, $|A| \leq n/2$ and therefore $a(p) \leq n/2$. In this same execution, we also know $c(p) > n/2$. The result: in $\alpha'$, $c(p) > a(p)$. This contradicts Lemma~\ref{lem:paxos:1}. \end{proof} Part of the difficulty tackled by wPAXOS is efficiently disseminating responses from acceptors even though the messages are of bounded size (i.e., can only hold $O(1)$ ids). If message size were instead unbounded, we could simply flood every response we have seen so far. Given this restriction, however, we must make sure that the proposal numbers used by proposers in wPAXOS do not grow too large.
In particular, we show below that these proposal numbers always remain small enough to be represented by $O(\log{n})$ bits (messages must be at least this large by the assumption that they can hold a constant number of ids in a network of size $n$). \begin{lemma} There exists a constant $k$ such that the tags used in proposal numbers of wPAXOS are bounded by $O(n^k)$. \label{lem:paxos:3} \end{lemma} \begin{proof} Each node $u$ can locally observe only $O(n^2)$ change events: its leader variable can take only $n$ possible values, and these values strictly increase; and for each $v$, $dist[v]$ at $u$ can take only $n$ values, and it strictly decreases. Each change event can lead to at most $2$ new proposal numbers at each node in the network (recall: the algorithm restricts a proposer to attempting at most $2$ new numbers in response to a given call of {\em GenerateNewPAXOSProposal}). We also note that there are $n$ total nodes and the tags in proposal numbers are increased to be only $1$ larger than the largest tag previously seen. A straightforward induction argument bounds the largest possible tag size as polynomial in $n$, as needed. \end{proof} \paragraph{Liveness.} We now establish that all nodes in wPAXOS decide by time $O(D\cdot F_{ack})$. The main idea in this proof is to consider the behavior of the algorithm after the tree and leader election service stabilize. We show this stabilization occurs in $O(D\cdot F_{ack})$ time, and that after this point the leader will continue to generate proposals and will reach a decision within an additional $O(D\cdot F_{ack})$ time. \begin{lemma} Fix some execution of wPAXOS. Every node decides in $O(D\cdot F_{ack})$ time. \label{lem:paxos:4} \end{lemma} \begin{proof} Let $\hat s$ be the step that generates the {\em final} change in this execution. Let $t(\hat s)$ be the global time at which this step occurs.
Repurposing a term from the study of partially synchronous models, we call $t(\hat s)$ the {\em global stabilization time} (GST), as it defines a time by which point: (a) the whole network has stabilized to the same leader, $\ell$; and (b) the tree defined by $parent[\ell]$ pointers in the network has stabilized to a shortest-path spanning tree rooted at $\ell$. Consider $\ell$'s behavior after the GST. We know that $\hat s$ generates a change message. Because this change message has a timestamp as large as (or larger than) that of every other change message in the execution, every node will receive a change message with this timestamp for the first time somewhere in the interval $[t(\hat s), t(\hat s) + O(D\cdot F_{ack})]$. This is the last change message any node will pass on to its {\em UpdateQ} procedure in the change service. This is significant, because it tells us that $\ell$ will generate a new proposition at some point on or after the GST, but not {\em too much} after. Moreover, this is the last time the change service will cause $\ell$ to generate a proposition. We now bound the efficiency of $\ell$'s propositions after the GST. Notice that after the GST only messages from $\ell$ can be stored in proposer queues. When $\ell$ generates a new proposition message with a larger proposal number than any previous such message, it will necessarily propagate quickly---reaching a majority of acceptors in $O(D\cdot F_{ack})$ time. We can show that acceptor responses return to $\ell$ within the same bound, as every such response is at most distance $D$ from $\ell$ in the shortest-path tree, and can be delayed by at most $O(F_{ack})$ time at each hop (if multiple responses arrive at the same node, they will simply be aggregated into one response). Now we consider the fate of $\ell$'s propositions after the GST. We know from above that it generates a new proposition with a new proposal number on or after the GST.
If $\ell$ receives enough commits to its prepare message for this proposal number, then it will go on to gather enough accepts to decide (as no other proposer, after the GST, can have its proposals considered). If it fails to gather enough commits, it will move on to the next proposal after counting a majority of the acceptors rejecting its prepare. By the algorithm, however, it will learn the largest proposal number to which some member of this set has previously committed. Its subsequent proposal number will be larger, and therefore this same majority will end up sending it enough commits to move on toward a decision. It follows that a constant number of propositions is sufficient for $\ell$ to decide. To tie up the final loose ends, we note that once $\ell$ decides, all nodes decide within an additional $O(D\cdot F_{ack})$ time, and that the GST is bounded by $O(D\cdot F_{ack})$ as well: it takes this long for the leader election service to stabilize, after which the tree rooted at the leader stabilizes within an additional $O(D\cdot F_{ack})$ time. \end{proof} \paragraph{Pulling Together the Pieces.} We are now ready to combine our key lemmas to prove the final correctness of wPAXOS. \begin{theorem} The wPAXOS algorithm solves consensus in $O(D\cdot F_{ack})$ time in our abstract MAC layer model in any connected network topology, where $D$ is the network diameter and nodes have unique ids and knowledge of the network size. \label{thm:paxos} \end{theorem} \begin{proof} The {\em validity} property is trivial. The {\em termination} property, as well as the $O(D\cdot F_{ack})$ bound on termination, follows directly from Lemma~\ref{lem:paxos:4}. To satisfy agreement, we combine Lemma~\ref{lem:paxos:2} with the standard argument of~\cite{paxos-simple}. \end{proof} \section{Conclusion} Consensus is a key primitive in designing reliable distributed systems.
Motivated by this reality and the increasing interest in wireless distributed systems, in this paper we studied the consensus problem in a wireless setting. In particular, we proved new upper and lower bounds for consensus in an abstract MAC layer model---narrowing the gap between theoretical results and real deployments. We proved that (deterministic) consensus is impossible with crash failures, and that without crash failures, it requires unique ids and knowledge of the network size. We also established a lower bound on the time complexity of any consensus solution. We then presented two new consensus algorithms---one optimized for single-hop networks and one that works in general multihop (connected) networks. In terms of future work, there are three clear next steps that would help advance this research direction. The first is to consider consensus in an abstract MAC layer model that includes unreliable links in addition to reliable links. The second is to consider what additional formalisms might allow deterministic consensus solutions to circumvent the impossibility concerning crash failures. In the classical distributed systems setting, failure detectors were used for this purpose. In the wireless world, where, among other things, we do not always assume {\em a priori} knowledge of the participants, this might not be the most natural formalism to deploy. The third direction is to consider randomized algorithms, which might provide better performance and the possibility of circumventing our crash failure, unique id, and/or network size knowledge lower bounds. \bibliographystyle{plain}
\section{Introduction} \IEEEPARstart{A}{nderson} localization (AL) is the absence of diffusive wave transport in highly disordered scattering media~\cite{Anderson1,Anderson1980,Abrahams-50-book,Lagendijk-Physics-Today-2009,Abrahams-Scaling-Theory,Stone,sheng2006introduction}. It is broadly applicable to the quantum mechanical wave function of an electron described by the Schr\"odinger equation~\cite{Anderson1,Thouless-1974,Wegner1976,Soukoulis-1999}, matter waves and Bose-Einstein condensates~\cite{matter-waves-2008,roati2008anderson,kondov2011three}, quantum fields such as photons in various quantum optical systems~\cite{quantum-fields-2010,Lahini-Quantum-Correlation-2010,Lahini-HBT-2011,Abouraddy-entangled-2012}, as well as classical wave phenomena including acoustics~\cite{ultrasound-1990,acoustic-PRL-1990}, elastics~\cite{elastics-Nat-Phys-2009}, electromagnetics~\cite{John-EM-abs-mobility-edge-1984,dalichaouch1991microwave,Chabanov-microwave-2000,El-Dardiry-microwave-2012}, and optics~\cite{Anderson2,John-photon-localization-1987,John-Physics-Today-1991,SegevNaturePhotonicsReview,Mafi-AOP-2015}. Among all classical wave systems, optics is uniquely positioned for studies of AL phenomena because of the diverse set of possibilities to construct the disordered background potential and the availability of robust tools to experiment and probe the localization phenomena~\cite{storzer2006observation,wiersma1997localization,yannopapas2003anderson,aegerter2007observation}, including its behavior in the presence of nonlinearity~\cite{Lahini-1D-AL-2008,fishman2012nonlinear,mafi-NL-ArXiv-2017,Mafi-Marco-PRL-Migrating-NL-2014,Mafi-Marco-APL-self-focusing-2014}. There have been many attempts over the years to observe AL of light in a three-dimensional (3D) disordered optical medium, including some recent demonstrations~\cite{sperling2013direct,vatnik2017anderson,choi2018anderson}. 
However, because the large refractive index contrasts required for 3D Anderson localization are generally accompanied by considerable losses in optics, and because it is not easy to differentiate between the exponential decay of the optical field associated with loss and the exponential decay of the field due to AL, 3D AL of light remains a subject of active, on-going research. In this Review, our focus is on the transverse Anderson localization (TAL) of light in a waveguide-like structure. In TAL structures, the dielectric constant is uniform along the direction of the propagation of light, similar to a conventional optical waveguide, and the disorder in the dielectric constant resides in the (one or two) transverse dimension(s). An optical field that is launched in the longitudinal direction tends to remain localized in the disordered transverse dimension(s) but propagates freely, as if in an optical waveguide, in the longitudinal direction. TAL appears to be ubiquitous in any transversely disordered waveguide, as long as the disorder is sufficiently strong that the transverse physical dimensions of the waveguide are larger than the transverse localization length and the waveguide remains uniform in the longitudinal direction. In the following, after a brief historical survey of the origins of TAL, we will present recent progress, especially as related to TAL in disordered optical fibers, with particular emphasis on applications to image transport and incoherent illumination. \section{Brief Survey on the Origins of TAL} Transverse Anderson localization in a quasi-2D optical system was first proposed in a pair of visionary theoretical papers by Abdullaev \textit {et al}.~\cite{transverse-Abdullaev} in 1980 and De~Raedt \textit {et al}.~\cite{transverse-DeRaedt} in 1989.
\begin{figure}[htp] \centering \includegraphics[width=0.9\columnwidth]{abd-array.png} \caption {A conceptual sketch of the 2D randomized array of coupled optical fibers is shown, as proposed by Abdullaev \textit {et al}.~\cite{transverse-Abdullaev}, to observe the TAL of light.} \label{fig:abd-array} \end{figure} The structure proposed by Abdullaev \textit {et al}.~\cite{transverse-Abdullaev} is sketched in Fig.~\ref{fig:abd-array}, consisting of a two-dimensional (2D) array of coupled optical fibers with slightly different and randomly distributed physical parameters, e.g., different radii. Therefore, the propagation constants of the guided modes supported by the optical fibers are randomly distributed. Because the individual fibers are evanescently coupled, light is expected to tunnel from one fiber to another. However, the efficiency of the optical tunneling between neighboring fibers is reduced because the propagation constants of the modes are generally different due to the randomness~\cite{Saleh-Teich}. Therefore, if the light is coupled initially in one optical fiber, it does not spread out as efficiently to other fibers and the amplitude of the field, on average, decays exponentially in the transverse dimensions. This localization is readily observable if the nominal transverse decay length (localization radius) is smaller than the transverse dimensions of the system. Of course, the localization radius can generally be made smaller by increasing the randomness. \begin{figure}[htp] \centering \includegraphics[width=0.9\columnwidth]{rae-array.png} \caption {A conceptual sketch of the transversely random and longitudinally invariant dielectric medium for the observation of the TAL of light is shown, as proposed by De~Raedt \textit {et al}.~\cite{transverse-DeRaedt}. The refractive index is invariant in the longitudinal direction. 
In the transverse plane, the refractive index is pixelated into tiny squares, and the refractive index of each pixel is randomly selected to be $n_1$ or $n_2$ with equal probabilities. } \label{fig:rae-array} \end{figure} The structure proposed by De~Raedt \textit {et al}.~\cite{transverse-DeRaedt} is sketched in Fig.~\ref{fig:rae-array}, consisting of an optical fiber-like structure, whose refractive index profile is invariant in the longitudinal direction. In the transverse plane, the refractive index is pixelated into tiny squares, where the edge length of each square is on the order of the wavelength of the light. The refractive index of each pixel is randomly selected to be $n_1$ or $n_2$ with equal probabilities. De~Raedt \textit {et al}. showed that an optical field launched in the longitudinal direction tends to remain localized in the transverse plane due to the transverse scattering: the amplitude of the field, on average, decays exponentially in the transverse dimensions while the field propagates freely in the longitudinal direction. The localization radius can generally be reduced by increasing the refractive index contrast $\Delta n=|n_2-n_1|$. One of the earliest experimental attempts to observe TAL was carried out by Pertsch \textit {et al}.~\cite{Pertsch}, who investigated light propagation in a disordered 2D array of mutually coupled optical fibers, similar to the structure proposed by Abdullaev \textit {et al}.~\cite{transverse-Abdullaev}. They made interesting observations in the nonlinear regime, where they showed that, for high excitation power, diffusive spreading is arrested by the focusing nonlinearity. However, the disorder was not sufficiently large in their structure to result in the observation of TAL. In other words, the localization radius in their structure appears to have been larger than the transverse dimensions of their 2D array.
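A transversely disordered profile of the type proposed by De~Raedt \textit {et al}. (Fig.~\ref{fig:rae-array}) is easy to generate numerically. The following minimal sketch is our illustration, with placeholder grid dimensions; the index values $n_1=1.0$ and $n_2=1.46$ match the glass-air example discussed later in this Review.

```python
import random

def random_index_profile(nx, ny, n1=1.0, n2=1.46, seed=None):
    """Transversely disordered, longitudinally invariant index profile:
    each transverse pixel is n1 or n2 with equal probability, and the
    same profile applies at every longitudinal position z."""
    rng = random.Random(seed)
    return [[rng.choice((n1, n2)) for _ in range(nx)] for _ in range(ny)]

# One realization of the disorder on a 64 x 64 transverse grid.
profile = random_index_profile(64, 64, seed=1)
```

In a beam-propagation simulation, this single transverse realization would be reused at every longitudinal step, reflecting the longitudinal invariance of the structure.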
The first successful attempt at observing TAL was reported by Schwartz \textit {et al}.~\cite{Schwartz2007} and was performed in a photorefractive crystal. An intense laser beam was used to write the transversely disordered and longitudinally invariant refractive index profiles in a photorefractive crystal, and another laser probe beam was used to investigate the transverse localization behavior. Their experiment allowed them to vary the disorder level in a controlled fashion, by adjusting the laser illumination of the photorefractive crystal, to observe the onset of the transverse localization and the changes in the localization radius as a function of the disorder level. Because the variations in the refractive index of the random sites were on the order of $10^{-4}$, the localization radius was observed to be considerably larger than the wavelength of the light. Over the next few years after the pioneering demonstration by Schwartz \textit {et al}.~\cite{Schwartz2007}, several theoretical and experimental efforts in one-dimensional (1D) disordered lattices demonstrated and further explored various aspects of the TAL phenomena, including the impact of the Kerr nonlinearity and boundary effects~\cite{Lahini-1D-AL-2008,Szameit-boundary-2010,Martin-1D-AL-2011,Kartashov-NL-2012}. These efforts eventually led to the development of the TAL optical fibers that will be discussed in the rest of the Review. \section{TAL in Disordered Optical Fibers} The first demonstration of TAL in an optical fiber was reported in 2012 by Karbasi \textit {et al}.~\cite{Mafi-Salman-OL-2012}. The structure used by Karbasi \textit {et al}. is shown in Fig.~\ref{fig:karbasi-polymer}, which is similar to the design proposed by De~Raedt \textit {et al}.~\cite{transverse-DeRaedt}.
The optical fiber was fabricated by the stack-and-draw method from a low-index component, polymethyl methacrylate (PMMA) with a refractive index of 1.49, and a high-index component, polystyrene (PS) with a refractive index of 1.59. 40,000~pieces of PMMA and 40,000~pieces of PS fibers were randomly mixed~\cite{Mafi-Salman-JOVE-2013}, fused together, and redrawn to a fiber with a nearly square profile and an approximate side-width of 250\textmu m, as shown in the left panel of Fig.~\ref{fig:karbasi-polymer}. The right panel shows a zoomed-in scanning electron microscope~(SEM) image of an approximately 24\textmu m-wide region on the tip of the fiber after exposing the tip to an ethanol solvent to dissolve the PMMA. The typical random feature size in the structure shown in Fig.~\ref{fig:karbasi-polymer} is around 0.9\textmu m. Karbasi \textit {et al}. demonstrated that when the light was launched into the disordered fiber from a small-core single-mode optical fiber, the beam went through a brief initial expansion (transverse diffusion), but the expansion was arrested after propagating for $\sim$2cm, beyond which TAL took over. The mean effective beam radius for the 100 measured near-field beam intensity profiles was calculated to be $\xi_{\rm avg}=31$\textmu m, with a standard deviation of $\sigma_{\xi}=14$\textmu m. TAL was observed in samples as long as 60cm, but the large variations in the thickness of the optical fiber in the draw process hindered the observation of TAL in longer samples. Furthermore, it was observed that when the input beam was scanned across the input facet, the output beam followed the transverse position of the incoming beam~\cite{Mafi-Salman-OPEX-2012}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{karbasi-polymer.png} \caption {Cross section of the polymer disordered fiber from Ref.~\cite{Mafi-Salman-OL-2012} is shown with a nearly square profile and an approximate side-width of 250\textmu m in the left panel.
A zoomed-in SEM image of a 24\textmu m-wide region on the tip of the fiber, exposed to a solvent to differentiate between the PMMA and PS polymer components, is shown in the right panel. Feature sizes are around 0.9\textmu m, and darker regions are PMMA. Reprinted/Reproduced with permission from Optics Letters, 2012~\cite{Mafi-Salman-OL-2012}, and the Optical Society of America.} \label{fig:karbasi-polymer} \end{figure} Subsequently, further detailed analyses of the disordered fibers of Ref.~\cite{Mafi-Salman-OL-2012} were conducted in Ref.~\cite{Mafi-Salman-OPEX-2012} to explore the effect of the refractive index contrast, fill-fraction, and random site size on the localization radius. It was shown that, at least for $\Delta n\le 0.5$, a larger index contrast results in a stronger AL and a smaller localization radius. However, the jury is still out for larger values of the index contrast, especially if $\Delta n$ becomes so large that the vectorial nature of the optical field must be taken into account~\cite{Skipetrov}. The optimal value of the fill-fraction, defined as the fraction of the low-index polymer relative to the total, was shown to be 50\%, resulting in the strongest transverse scattering. It is notable that the optimal 50\% value is below the percolation threshold (59.27\%) of a square lattice; therefore, the host material with the higher refractive index remains generally connected in the long range, making the AL non-trivial, i.e., it is not merely due to disconnected clusters of the higher-index material. Initial studies on the impact of the random site size showed that the edge length of each square pixel must ideally be equal to half the wavelength of the light. However, this observation was contradicted in later studies, as will be discussed in more detail in subsection~\ref{sec:optimal}.
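Effective beam radius statistics such as those quoted earlier for this fiber ($\xi_{\rm avg}=31$\textmu m over 100 near-field profiles) are computed from measured intensity distributions. The sketch below uses a standard second-moment definition about the intensity centroid; this is one common choice, and the exact estimator used in Ref.~\cite{Mafi-Salman-OL-2012} may differ.

```python
import math

def effective_beam_radius(intensity, dx=1.0):
    """Second-moment effective radius of a 2D intensity profile:
    xi = sqrt(<r^2>) about the intensity centroid, in units of dx.
    Assumes the total intensity is nonzero."""
    total = sum(v for row in intensity for v in row)
    xc = sum(x * v for row in intensity for x, v in enumerate(row)) / total
    yc = sum(y * v for y, row in enumerate(intensity) for v in row) / total
    var = sum(((x - xc) ** 2 + (y - yc) ** 2) * v
              for y, row in enumerate(intensity)
              for x, v in enumerate(row)) / total
    return dx * math.sqrt(var)
```

Averaging this quantity over an ensemble of measured output profiles yields $\xi_{\rm avg}$ and its standard deviation $\sigma_{\xi}$.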
Another important observation made in Ref.~\cite{Mafi-Salman-OPEX-2012} was that the statistical distribution of the mode field diameters follows a nearly Poisson-like distribution, i.e., a stronger TAL that leads to a smaller average mode field diameter also reduces the mode-to-mode diameter variations; therefore, a stronger TAL leads to a stronger uniformity in the supported modes across the disordered fiber. These observations resulted in the understanding that a stronger AL, especially using higher index contrast components, is warranted, which eventually led to the development of a glass-air disordered fiber structure. The first observation of TAL in a silica fiber was reported by Karbasi \textit {et al}.~\cite{Mafi-Salman-OMEX-2012} in~2012. The glass-air disordered fiber used for this work was drawn at Clemson University, where the preform was made from ``satin quartz'' (Heraeus Quartz), which is a porous artisan glass. The airholes in the porous glass were drawn into air channels; therefore, the structure resembled the design proposed by De~Raedt \textit {et al}.~\cite{transverse-DeRaedt}, where $n_1=1.0$ and $n_2=1.46$. The cross-sectional SEM image of this fiber is shown in the left panel of Fig.~\ref{fig:SEM-Magnified-2}, and a zoomed-in SEM image is shown in the right panel. The light-gray background matrix is glass, and the random black dots represent the airholes. The total diameter of the disordered glass-air fiber was measured to be 250\textmu m. The diameters of the airholes varied between 0.2\textmu m and 5.5\textmu m. We note that the fill-fraction of the airholes in this fiber ranged from nearly 2\% in the center of the fiber to approximately 7\% near the edges; therefore, TAL was only observed near the periphery of the fiber.
This caused a bit of debate, considering the perceived delocalizing impact of the boundaries in disordered TAL systems~\cite{Szameit-boundary-2010,Jovic-boundary-2011,Molina-boundary-2011,Naether-boundary-2012}, which was subsequently addressed in Ref.~\cite{Mafi-Behnam-Boundary-OC-2016}, as will be discussed in more detail in subsection~\ref{sec:boundary}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{SEM-Magnified-2.png} \caption {SEM image of the glass optical fiber with random airholes reported in Ref.~\cite{Mafi-Salman-OMEX-2012} is shown in the left panel. A zoomed-in SEM image of the same fiber is shown in the right panel. Reprinted/Reproduced with permission from Optical Material Express, 2012~\cite{Mafi-Salman-OMEX-2012}, and the Optical Society of America.} \label{fig:SEM-Magnified-2} \end{figure} In 2014, there was another successful attempt at observing TAL in an air-glass optical fiber, by Chen and Li~\cite{chen2014observing} at Corning Incorporated. They fabricated random air-line fibers with approximately 150, 250 and 350\,\textmu m diameters and observed TAL with a significantly lower air fill-fraction than those reported in Ref.~\cite{Mafi-Salman-OMEX-2012}. This can be attributed to the far-subwavelength size of the transverse scattering centers and a higher scattering center density (air-line density) than in the fiber studied in Ref.~\cite{Mafi-Salman-OMEX-2012}. There have since been other successful attempts at observing TAL in glass-based fibers, such as the air-silica random fiber structure by Zhao \textit {et al}.~\cite{ZHAO:17,zhao2018image} at CREOL, University of Central Florida, and the all-solid tellurite-glass optical fiber fabricated by Tuan \textit {et al}.~\cite{tong2018characterization} at the Toyota Technological Institute in Japan. These recent reports will be discussed in more detail in section~\ref{sec:imaging}.
\subsection{Optimal pixel size and wavelength dependence of TAL} \label{sec:optimal} The issues of the optimal pixel size and the wavelength dependence of TAL were initially explored in Ref.~\cite{Mafi-Salman-OPEX-2012}. For the structure proposed by De~Raedt \textit {et al}.~\cite{transverse-DeRaedt}, because Maxwell's equations are scale invariant, {\em increasing the pixel size while keeping the wavelength fixed} can be trivially mapped to {\em decreasing the wavelength while keeping the pixel size fixed}. It was initially claimed that a shorter wavelength (or equivalently a larger pixel size at a given wavelength) decreased the localization radius (in units of the pixel size)~\cite{Mafi-Salman-OPEX-2012}. However, further analysis and subsequent work in 1D~\cite{Mafi-Salman-Modal-JOSAB-2013} hinted that the optimum value of the pixel size is around half the free-space wavelength, at least for a refractive index on the order of 1.5. However, more recent experimental evidence and simulations in Ref.~\cite{Mafi-Schirmacher-PRL-2018} have cast doubt on this observation. Schirmacher \textit {et al}.~\cite{Mafi-Schirmacher-PRL-2018} argued that the average localization radius shows no dependence on the wavelength (over a reasonable range). They attributed the observation of the wavelength dependence in the simulations presented in Ref.~\cite{Mafi-Salman-OPEX-2012} to the omission of a term proportional to the gradient of the dielectric permittivity, which is common in finite difference simulations of optical fibers but not acceptable for TAL fibers. They also noted that the large error bars in the experiments performed in Ref.~\cite{Mafi-Salman-OPEX-2012} may have been behind the disagreements in the experimental observations; however, the simulations in Ref.~\cite{Mafi-Salman-Modal-JOSAB-2013} correctly took the permittivity gradient term into account, and still showed some wavelength dependence.
Therefore, the issue is not entirely settled, and part of the disagreement may reside in the different ways the averaging is performed. As of now, the jury is still out, and this issue needs to be explored in further detail. \subsection{TAL near the disordered fiber boundaries} \label{sec:boundary} TAL of light near the boundaries was discussed theoretically in Refs.~\cite{Jovic-boundary-2011,Molina-boundary-2011} and experimentally in Refs.~\cite{Szameit-boundary-2010,Naether-boundary-2012}. These studies originally reported a delocalizing effect near the boundaries of 1D and 2D random lattice waveguides. These reports appeared to be in contrast with the experimental observations of Karbasi \textit {et al}.~\cite{Mafi-Salman-OMEX-2012}, who found that strong localization happened near the outer boundary of the glass-air disordered fiber while no trace of localization was observed in the central regions. The disagreement was explained in Ref.~\cite{Mafi-Salman-OMEX-2012} by the non-uniform distribution of disorder in the fiber: the disorder was measured to be much stronger near the outer boundary of the fiber, which resulted in a stronger localization in that region. However, Abaie \textit {et al}. later performed a detailed analysis in Ref.~\cite{Mafi-Behnam-Boundary-OC-2016} and showed that the perceived suppression of localization near the boundaries is due to a lower mode density near the boundaries compared with the bulk, while the average decay rate of the tail of localized modes is the same near the boundaries as in the bulk. Therefore, on average, it is less probable to excite a localized mode near the boundaries; however, once such a mode is excited, it decays with the same exponential rate as any other localized mode.
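For readers who want a concrete handle on how localization lengths are extracted from ensemble averages in the 1D studies cited above (e.g.\ Ref.~\cite{Mafi-Salman-Modal-JOSAB-2013}), the following self-contained sketch (our own toy model with illustrative parameters, not the simulation machinery of the cited works) estimates the localization length $\xi$ of a random binary dielectric stack from the self-averaging decay $\langle\ln T\rangle \approx -2L/\xi$ of the intensity transmission $T$:

```python
import numpy as np

def layer_matrix(n, d, k0):
    """Characteristic matrix of one dielectric layer at normal incidence."""
    delta = k0 * n * d                      # phase thickness of the layer
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmission(indices, d, wavelength):
    """Intensity transmission of a layer stack surrounded by air (n = 1)."""
    k0 = 2.0 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n in indices:
        M = layer_matrix(n, d, k0) @ M
    B, C = M @ np.array([1.0, 1.0])         # fields for unit output in air
    t = 2.0 / (B + C)                       # amplitude transmission
    return abs(t) ** 2

def localization_length(wavelength, n_hi=1.5, d=0.25, n_layers=400,
                        n_trials=200, seed=0):
    """Estimate xi from the self-averaging decay <ln T> ~ -2 L / xi."""
    rng = np.random.default_rng(seed)
    ln_T = [np.log(transmission(rng.choice([1.0, n_hi], size=n_layers),
                                d, wavelength))
            for _ in range(n_trials)]
    return -2.0 * n_layers * d / np.mean(ln_T)
```

All lengths are in units of the layer (pixel) size; sweeping `wavelength` in such a toy model is the 1D analogue of the pixel-size/wavelength trade-off discussed in subsection~\ref{sec:optimal}.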
\section{Image Transport and Illumination using Disordered Optical Fibers} \label{sec:imaging} Once the localized beam propagation was verified in highly disordered multi-component polymer and air-glass fibers, the next natural step was to explore the possibility of beam multiplexing in these fibers. This was reported in Ref.~\cite{Mafi-Salman-Multiple-Beam-2013}, where Karbasi \textit {et al}. investigated the simultaneous propagation of multiple beams in a disordered TAL fiber. Moreover, it was shown that the multiple-beam propagation was quite robust to macro-bending: even a tight bending radius in the range of 2--4\,mm did not result in any notable beam drift, and the multi-beam structure remained intact. In Fig.~\ref{fig:five-beams}, we show an example of the multi-beam propagation in the disordered polymer of Ref.~\cite{Mafi-Salman-OL-2012} for a propagation distance of 5cm at 405nm wavelength. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{five-beams.png} \caption {The image shows the transport of five multiplexed beams (computational) in the disordered polymer of Ref.~\cite{Mafi-Salman-OL-2012} for a propagation distance of 5cm at 405nm wavelength, where no interference between the beams is observed in the output. The green background area represents the fiber area with a side-width of 250\textmu m. Reprinted/Reproduced with permission from Optics Express, 2012~\cite{Mafi-Salman-Multiple-Beam-2013}, and the Optical Society of America.} \label{fig:five-beams} \end{figure} Motivated by the successful demonstration of beam multiplexing, Karbasi \textit {et al}.~\cite{Mafi-Salman-Image-2013} compared the quality of image transport in a 1D waveguide with a periodic structure to the image transport in a disordered waveguide. The periodic waveguide was meant as a 1D prototype of a coherent fiber optic bundle that is commonly used for imaging applications. It was shown that increased disorder improved the quality of the image transport.
In a subsequent study reported in Ref.~\cite{Mafi-Salman-Nature-2014}, Karbasi \textit {et al}. explored image propagation in the TAL polymer fiber of Ref.~\cite{Mafi-Salman-OL-2012}. They showed that the image transport quality was comparable with or better than some of the best commercial multicore imaging fibers, with less pixelation and higher contrast. Figure~\ref{fig:image-transport-2} shows an example of a transported image in the form of numbers from a section of the 1951 U.S. Air Force resolution test chart through the disordered fiber. The test target, a section of which is shown in the right panel of Fig.~\ref{fig:image-transport-2}, is in the form of a stencil in which numbers and lines are carved; it was butt-coupled to the hand-polished input facet of the fiber and was illuminated by white light. The near-field output was projected onto a CCD camera with a 40$\times$ microscope objective. The minimum resolution of the images is determined by the width of the point-spread function of the disordered optical fiber imaging system (localization radius), which was calculated to be smaller than 10\textmu m at 405nm wavelength~\cite{Mafi-Salman-OPEX-2012}. The perceived image quality of the transported images was quantified by the mean structural similarity index (MSSIM), which verified that a disordered TAL fiber can transport images of higher quality than conventional coherent fiber bundles. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{image-transport-2.png} \caption {Transported images of numbers ``4'' and ``6'' through a 5cm piece of a disordered fiber are shown, together with a section of the 1951 U.S. Air Force resolution test chart (1951-AFTT) used in the image transport experiment. The images are 120\textmu m long.
Details can be obtained in Reference~\cite{Mafi-Salman-Image-2013}.} \label{fig:image-transport-2} \end{figure} Another notable work using the disordered polymer fibers of Ref.~\cite{Mafi-Salman-OL-2012} was the demonstration by Leonetti \textit {et al}.~\cite{Mafi-Marco-information-2016} of propagating reconfigurable localized optical patterns in the fiber to encode up to 6 bits of information in disorder-induced high transmission channels, even using a small number of photon counts. This effort highlighted the potential application of these fibers in quantum information processing in spatially multiplexed configurations. The first successful image transport in an air-silica random fiber structure was reported in 2018 by Zhao \textit {et al}.~\cite{ZHAO:17,zhao2018image}, where the disordered fiber featured a 28.5\% air-filling fraction in the structured region, and low attenuation below 1dB per meter at visible wavelengths. High-quality optical image transfer through 90 cm-long fibers was reported in these disordered fibers. In a more recent attempt, Zhao \textit {et al}.~\cite{zhao2018deep} applied deep learning techniques to improve the quality of the image transport in these fibers. Their system has the unique property that training performed on a straight fiber can be used for high-fidelity reconstruction of images transported through either a straight or a bent fiber, making retraining for different bending configurations unnecessary. This report is a considerable advancement compared with previous demonstrations of image transport in multimode fiber, such as the report by Choi \textit {et al}.~\cite{Choi-multimode}, where the computed transmission matrix had to be recalculated for any bending or twisting of the fiber, making the method slow and computationally very challenging.
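The MSSIM metric used to quantify the transported images averages a local structural-similarity index over image windows. As a rough, self-contained illustration of the underlying quantity (a single-window sketch under the standard SSIM stabilization constants, not the exact implementation of the cited works):

```python
import numpy as np

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window structural similarity between two equal-size images.

    MSSIM averages this quantity over many local windows; here one
    window covers the whole image for brevity.
    """
    c1 = (k1 * data_range) ** 2             # luminance stabilizer
    c2 = (k2 * data_range) ** 2             # contrast stabilizer
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

The index equals 1 for identical images and decreases as structure is degraded, which is why it is a natural figure of merit for comparing fiber-transported images against the input stencil.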
Most recently, Tuan \textit {et al}.~\cite{tong2018characterization} reported the fabrication of the first all-solid tellurite optical glass rod with a transversely disordered refractive index profile and a refractive index contrast of $\Delta n=0.095$ to study TAL of light and near-infrared (NIR) optical image transport. Experiments performed at the NIR optical wavelength of 1.55\textmu m confirmed TAL in this structure, and the images transported over a 10cm length of the disordered fiber showed high contrast and high brightness. Last but not least, we would like to highlight the work led by Thomas P. Seward III of Corning Incorporated in the 1970s on phase-separated glasses that resulted in random elongated needle-like structures after drawing~\cite{seward1974elongation,seward1977some}. The fiber-like glass rods were successfully used for image transport and most likely operated based on the TAL principles discussed here. \subsection{Mode-area probability density function and scaling} Scaling properties of TAL structures can provide a wealth of information on their physical properties~\cite{Anderson1980,Abrahams-Scaling-Theory,Wegner1976,Stone,Pichard1981-1,Pichard1981-2,Pichard1986-1,Pichard1986-2,Aegerter:07}. We briefly discussed the issues related to the optimal pixel size and the imaging wavelength in TAL fibers in section~\ref{sec:optimal}. We argued that some of the discrepancies might reside in the different ways the averaging is performed. There is one more critical issue that must be considered before making broad-reaching conclusions. The traditional study of TAL is based on launching a beam and analyzing its propagation along the waveguide. It is hard to ensure that the results are not biased by the choice of the initial launch condition.
To address this issue, Abaie \textit {et al}.~\cite{Mafi-Behnam-Scaling-PRB-2016,Mafi-Abaie-OL-2018} performed detailed studies on quasi-1D and quasi-2D (fiber-like) TAL structures using the modal method and calculated the mode-area (MA) probability density function (PDF) for these structures. The MA-PDF encompasses all the relevant statistical information on TAL; it relies solely on the physics of the disordered system and the physical nature of the propagating wave and is independent of the beam properties of the external excitation. For their analysis in Ref.~\cite{Mafi-Abaie-OL-2018}, Abaie \textit {et al}. used a quasi-2D structure that was based on the random fiber design proposed by De~Raedt \textit {et al}.~\cite{transverse-DeRaedt}. Although Refs.~\cite{Mafi-Behnam-Scaling-PRB-2016,Mafi-Abaie-OL-2018} provide a wealth of information on the inner workings of the TAL behavior, especially when it comes to differentiating between the localized and extended modes and the best strategies for optimizing the waveguide, they have yet to be adequately leveraged to address the discrepancies discussed in section~\ref{sec:optimal}. A key observation reported in Refs.~\cite{Mafi-Behnam-Scaling-PRB-2016,Mafi-Abaie-OL-2018} was that the MA-PDF can be reliably computed from structures with substantially smaller transverse dimensions than the size of the practical waveguides used in experiments. In fact, it was shown that the shape of the MA-PDF rapidly converges to a terminal form as a function of the transverse dimensions of the waveguide. This notable scaling behavior of the MA-PDF is of immense practical importance in the design and optimization of TAL-based waveguides, because one can obtain all the useful TAL information from disordered waveguides with smaller dimensions, hence substantially reducing the computational cost.
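As an illustration of the kind of quantity entering the MA-PDF, one common convention (used here as an assumption; the cited papers state their own definitions) is the inverse-participation-ratio effective area $A_{\rm eff}=\left(\int |\psi|^2\,dA\right)^2/\int |\psi|^4\,dA$, which is small for localized modes and of the order of the waveguide area for extended ones:

```python
import numpy as np

def mode_area(psi, h):
    """Effective mode area A = (sum I)^2 / sum I^2 (I = |psi|^2), for a
    transverse mode sampled on a square grid with cell size h."""
    I = np.abs(psi) ** 2
    return (I.sum() * h ** 2) ** 2 / ((I ** 2).sum() * h ** 2)

# Compare an extended (uniform) mode with a localized (Gaussian) one on
# the same computational window; all sizes are illustrative.
h, N = 0.1, 400                               # grid step; window is 40 x 40
x = (np.arange(N) - N / 2) * h
X, Y = np.meshgrid(x, x)
extended = np.ones((N, N))                    # area ~ window area
localized = np.exp(-(X ** 2 + Y ** 2) / 2.0)  # continuum area = 2*pi
```

Histogramming `mode_area` over the guided modes of many disorder realizations is what produces an MA-PDF of the type analyzed in the cited scaling studies.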
\subsection{Spatial coherence and illumination} Although image transport has so far been the main focus of the research efforts on TAL fibers, control of spatial coherence and illumination are also potentially viable areas that are due for further exploration. In particular, Refs.~\cite{Mafi-Marco-singlemode-2017,Mafi-Behnam-Optica-2018} recently pointed out that the presence of a strong disorder in TAL fibers can be exploited to obtain high-quality wavefronts. Abaie \textit {et al}.~\cite{Mafi-Behnam-Optica-2018} showed, in agreement with the theoretical analysis of Ref.~\cite{Mafi-Abaie-OL-2018}, that a considerable number of the guided modes have low M$^2$ values. These high-quality modes are distributed across the transverse profile of the disordered fiber and can be excited without requiring sophisticated spatial light modulators at the input facet. Alternatively, when the input light is coupled to the entire transverse area of a TAL fiber, the output is spatially incoherent. Therefore, by proper coupling of the light, without sophisticated spatial modulators, it is possible to access a range of spatial coherence properties in these fibers. Of particular importance is the possibility of using part of the transverse structure of the fiber to guide spatially incoherent light to illuminate a scene and the other parts of the fiber to transport the images back in an endoscopic setting. The possibility of generating incoherent but directional broadband light was also highlighted in Ref.~\cite{Mafi-Behnam-Random-Laser-2017} in a TAL random laser setup. \section{Future directions and conclusions} Disordered optical fibers demonstrate many novel physical properties, mainly driven by the possibility of localized beam transport over the entire cross section of the optical fiber. Therefore, they should be designed to support highly localized modes with small localization radii, in order to have a narrow point-spread function for high-resolution image transport.
Localized modes must also be uniformly distributed over the transverse cross section of the fiber, and the mode-to-mode variations in the localization radius must be minimized to improve the image transport uniformity. At present, these requirements translate into efforts in fabricating fibers with highly disordered transverse refractive index profiles with small transverse index correlations. In order to improve the distances over which high-quality images can be transported, it is imperative to ensure a high degree of longitudinal uniformity in these fibers. We anticipate that over the next few years, an expanded selection of materials and improved fabrication methods will enhance and expand the reach of these fibers, both in physical properties and in practical applications. \section*{Acknowledgment} A. Mafi acknowledges support by Grant Number 1807857 from the National Science Foundation (NSF). J. Ballato acknowledges support by Grant Number 1808232 from the NSF. \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction} The overall behavior of geometric quantities such as the systole, the diameter, the eigenvalues of the Laplacian, the Cheeger constant \textit{etc}., for all closed hyperbolic surfaces of a given genus $g$, is a classical object of study. While there are many results and conjectures about the maximal/minimal values of these quantities, as functions on the moduli space $\mathcal{M}_g$, Mirzakhani initiated a new approach to the subject in \cite{Mirz13}: based on her celebrated thesis works \cite{Mirz07,Mirz07-int}, she obtained asymptotic results on certain statistical information of these quantities, viewed as random variables with respect to the probability measure $\mathop{\rm Prob}\nolimits_{\rm WP}^g$ on $\mathcal{M}_g$ given by the Weil-Petersson metric. One may see the book \cite{Wolpert-book} of Wolpert and the recent survey \cite{Wright-tour} of Wright for more details. \subsection{Separating systole} It was shown in \cite[Corollary 4.3]{Mirz13} that the expectation value of $\frac{1}{\ell_{\mathop{\rm sys}}(\cdot)}$ on $\mathcal{M}_g$ is bounded from above and below by two positive constants independent of $g$, where $\ell_{\mathop{\rm sys}}(\cdot)$ is the \emph{systole} defined as $$ \ell_{\mathop{\rm sys}}(X):=\min\big\{\ell_\gamma(X)\,;\, \text{$\gamma\subset X$ is a simple closed geodesic}\big\}. $$ The systole function is always bounded on $\mathcal{M}_g$. Meanwhile, Mirzakhani \cite{Mirz13} also proved a result on the \emph{separating systole} defined as $$ \ell_{\mathop{\rm sys}}^{\rm sep}(X):=\min\big\{\ell_\gamma(X)\,;\, \text{$\gamma\subset X$ is a separating simple closed geodesic}\big\}, $$ which implies that $\ell_{\mathop{\rm sys}}^{\rm sep}$ behaves drastically differently from $\ell_{\mathop{\rm sys}}$.
The separating systole function is unbounded on $\mathcal{M}_g$; indeed, if a closed hyperbolic surface of genus $g$ carries a pants decomposition consisting of arbitrarily short non-separating closed geodesics, then the classical Collar Lemma (\textit{e.g.\@ } see \cite{Kee74}) implies that the length of any separating closed geodesic is arbitrarily large, because such a geodesic always has nonempty intersection with some curve in the pants decomposition (a closed geodesic crossing a closed geodesic of length $\varepsilon$ has length at least $2\mathop{\rm arcsinh}\big(1/\sinh(\varepsilon/2)\big)$, which tends to infinity as $\varepsilon\to 0$). In this paper we study the asymptotic behavior of shortest simple separating closed geodesics as the genus $g$ goes to infinity. First we recall the following result. One may also see \cite[Theorem 4.2]{Mirz10} of Mirzakhani's 2010 ICM report for a weaker version. \begin{thm*}[{\bf{Mirzakhani}, \cite[Theorem 4.4]{Mirz13}}] Let $0<a<2$. Then \[\mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \ell_{\mathop{\rm sys}}^{\rm sep}(X)<a \log g \right)=O\left( \frac{(\log g)^3 g^{\frac{a}{2}}}{g} \right).\] \end{thm*} \noindent This result in particular implies that for any $\epsilon>0$, $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \ell_{\mathop{\rm sys}}^{\rm sep}(X)> (2-\epsilon)\log g \right)=1. $$ Let $\omega:\{2,3,\cdots\}\to\mathbb{R}^{>0}$ be any function satisfying \begin{equation} \label{eq-omega} \lim \limits_{g\to \infty}\omega(g)= +\infty \ \textit{and} \ \lim \limits_{g\to \infty}\frac{\omega(g)}{\log\log g} = 0. \end{equation} The main part of this article is devoted to showing \begin{theorem}\label{main} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Consider the following two conditions defined for all $X\in\mathcal{M}_g$: \begin{itemize} \item[(a).] \label{item_main1} $|\ell_{\mathop{\rm sys}}^{\rm sep}(X)-(2\log g - 4\log \log g)| \leq \omega(g)$; \item[(b).] \label{item_main2} $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is achieved by a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$.
\end{itemize} Then we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \textit{$X$ satisfies $(a)$ and $(b)$} \right)=1. $$ \end{theorem} \noindent The result in particular implies that for any $\epsilon>0$, $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ (2-\epsilon)\log g< \ell_{\mathop{\rm sys}}^{\rm sep}(X)<2\log g \right)=1. $$ \begin{rem*} We remark that the seemingly cumbersome upper and lower bounds $2\log g-4\log \log g\pm\omega(g)$ of $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ in the theorem above are related to the expected number of multi-geodesics of length less than $L$ on $X\in\mathcal{M}_g$ bounding a one-handle or a three-holed sphere, which is roughly $\frac{L^2e^\frac{L}{2}}{g}$. Indeed, plugging $L=2\log g - 4\log \log g\pm\omega(g)$ into this count gives roughly $4e^{\pm\frac{\omega(g)}{2}}$, which tends to infinity for the upper bound and to $0$ for the lower bound. One may see the remark following Lemma \ref{E[N]} for more details. \end{rem*} In the subsequent subsections we discuss several applications of Theorem \ref{main} or its proof. \subsection{Long half collar and extremal length} A \emph{collar} of a simple closed geodesic $\gamma$ in a closed hyperbolic surface is an embedded symmetric hyperbolic cylinder centered at $\gamma$, bounded by two equidistant curves from $\gamma$, whereas a \emph{half-collar} of $\gamma$ is an embedded hyperbolic cylinder bounded by one equidistant curve along with $\gamma$ itself. If $X\in \mathcal{M}_g$ has an arbitrarily short separating systolic curve $\gamma$, the width of the maximal collar of $\gamma$ can be arbitrarily large. Conversely, if $X\in \mathcal{M}_g$ has an arbitrarily long separating systolic curve $\gamma$, the width of the maximal (half-)collar of $\gamma$ can be arbitrarily close to $0$ because the area of $X$ is fixed. As an application of Theorem \ref{main}, we show that as $g$ goes to infinity, asymptotically at a generic point $X\in \mathcal{M}_g$ there is an arbitrarily long half-collar around a separating systolic curve.
More precisely, \begin{theorem}\label{thm:half collar} Given any $\epsilon>0$, consider the following conditions defined for all $X\in \mathcal{M}_g$: \begin{itemize} \item[(c).] $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is achieved by a simple closed geodesic $\gamma$ separating $X$ into $S_{1,1}\cup S_{g-1,1}$; \item[(d).] There is a half-collar around $\gamma$ in the $S_{g-1,1}$-part of $X$ of width $\frac{1}{2}\log g-\left(\frac{3}{2}+\epsilon\right)\log\log g$. \end{itemize} Then we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \textit{$X$ satisfies $(c)$ and $(d)$} \right)=1. $$ \end{theorem} \noindent Note that one cannot replace ``half-collar'' by ``collar'' in the above theorem. In fact, since a geodesic $\gamma\subset X$ realizing $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is arbitrarily long and bounds a one-handle at a generic point $X\in \mathcal{M}_g$ by Theorem \ref{main}, the maximal embedded half-collar about $\gamma$ in the one-handle must be arbitrarily thin because the area of a one-handle is fixed. The theory of \emph{extremal length} was developed by Ahlfors and Beurling (\textit{e.g.\@ } see \cite[Chapter 4]{Ahlfors-ci}). One may also see \cite[Section 3]{Kerck80} of Kerckhoff for its deep connection to the geometry of the Teichm\"uller space. In this paper we deduce from Theorem \ref{thm:half collar} a consequence about \emph{extremal lengths} of separating curves. Let $\rm Ext_\gamma(X)$ denote the extremal length of the family of rectifiable closed curves on $X$ homotopic to $\gamma$ (see the precise definition in Subsection \ref{subsec-el}), and let $\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)$ denote the \emph{separating extremal length systole} of $X$, which is defined as the infimum of $\rm Ext_\gamma(X)$ over all separating simple closed geodesics $\gamma$ on $X$.
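For orientation, the comparison between half-collar widths and extremal lengths rests on the following classical model computation (standard material, e.g.\ \cite[Chapter 4]{Ahlfors-ci}, recalled here as a sketch rather than a result of this paper):

```latex
% Classical model computation: extremal length of the core-curve family
% of a round annulus $A_R=\{z\in\mathbb{C}\,;\,1<|z|<R\}$.
\[
\mathrm{Mod}(A_R)=\frac{\log R}{2\pi},
\qquad
\mathrm{Ext}(\Gamma_{A_R})=\frac{1}{\mathrm{Mod}(A_R)}=\frac{2\pi}{\log R},
\]
% where $\Gamma_{A_R}$ denotes the family of closed curves in $A_R$
% separating its two boundary components. Hence a wide embedded annulus
% (large modulus) around a closed geodesic $\gamma\subset X$ forces
% $\mathrm{Ext}_\gamma(X)$ to be small.
```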
It is known by Maskit \cite{Maskit} that $\ell_\gamma(X)\leq\pi\rm Ext_\gamma(X)$, hence $\ell_{\mathop{\rm sys}}^{\rm sep}(X)\leq\pi\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)$. Conversely, as an application of Theorem \ref{thm:half collar} we show that \begin{theorem}\label{cor:extremal} For any $\epsilon>0$, we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \frac{\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)}{\ell_{\mathop{\rm sys}}^{\rm sep}(X)}< \frac{2+\epsilon}{\pi} \right)=1. $$ As a consequence of this result and Theorem \ref{main}, we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \frac{(2-\epsilon)}{\pi}\log g< \rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)< \frac{(4+\epsilon)}{\pi}\log g \right)=1. $$ \end{theorem} \begin{rem*} For $X\in \mathcal{M}_g$, the \emph{extremal length systole} $\rm Ext_{sys}(X)$ of $X$ is defined as the infimum of $\rm Ext_\gamma(X)$ over all simple closed geodesics $\gamma$ on $X$. For any systolic curve $\gamma\subset X$, it is known that the collar of $\gamma$ has width bounded from below by a uniform positive constant independent of $g$. By Maskit \cite{Maskit} and Buser-Sarnak \cite{BS94} it is not hard to see that $\rm Ext_{sys}(X)$ is uniformly comparable to $\ell_{sys}(X)$. Thus, as the genus $g$ goes to infinity, the asymptotic behavior of $\rm Ext_{sys}(\cdot)$ on $\mathcal{M}_g$ is similar to that of $\ell_{sys}(\cdot)$ on $\mathcal{M}_g$, which was studied by Mirzakhani \cite{Mirz13} and Mirzakhani-Petri \cite{MP19}. One may see their works for more details. \end{rem*} \subsection{Shortest separating closed multi-geodesics} The union of disjoint non-separating simple closed curves may also separate a closed surface. The following geometric quantity was used by Schoen-Wolpert-Yau \cite{SWY80} to study the eigenvalues of the Laplacian operator on hyperbolic surfaces.
\begin{def*} For any $X\in \mathcal{M}_g$, we define $$ \mathcal{L}_1(X):=\min\left\{\ell_\gamma(X)\ ; \ \parbox[l]{5.5cm}{ $\gamma=\gamma_1+\cdots+\gamma_k$ is a simple closed multi-geodesic separating $X$ }\right\}. $$ \end{def*} \noindent As a byproduct of the proof of Theorem \ref{main}, we show the following similar result on $\mathcal{L}_1(\cdot)$. \begin{theorem}\label{cor L1} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Consider the following two conditions defined for all $X\in\mathcal{M}_g$: \begin{itemize} \item[(e).] $|\mathcal{L}_1(X)-(2\log g - 4\log \log g)| \leq \omega(g)$; \item[(f).] $\mathcal{L}_1(X)$ is achieved by either a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$ or three simple closed geodesics separating $X$ into $S_{0,3}\cup S_{g-2,3}$. \end{itemize} Then we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \textit{$X$ satisfies $(e)$ and $(f)$} \right)=1. $$ \end{theorem} Now we consider the expectation value of $\mathcal{L}_1(\cdot)$ over $\mathcal{M}_g$. In contrast with the unboundedness of $\ell_{\mathop{\rm sys}}^{\rm sep}(\cdot)$ on $\mathcal{M}_g$, we first show that $\sup_{X\in \mathcal{M}_g}\mathcal{L}_1(X)\leq C \log g$ for some universal constant $C>0$ independent of $g$ (see Proposition \ref{L1-upp}). Then we apply Theorem \ref{cor L1} to show that \begin{theorem}\label{cor E[L1]} The expectation value $\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]$ of $\mathcal{L}_1(\cdot)$ on $\mathcal{M}_g$ satisfies \begin{equation*} \lim_{g\rightarrow\infty}\frac{\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]}{\log g} = 2. \end{equation*} \end{theorem} As another byproduct of the proof of Theorem \ref{main} we show the following useful property. First we make the following definition generalizing $\mathcal{L}_1(\cdot)$.
\begin{def*} For any integer $m\in [1,g-1]$ and $X\in \mathcal{M}_g$, we define \[\mathcal{L}_{1,m}(X):=\min_{\Gamma} \ell_{\Gamma}(X)\] where the minimum runs over all simple closed multi-geodesics $\Gamma$ separating $X$ into $S_{g_1,k}\cup S_{g_2,k}$ with $|\chi(S_{g_1,k})|\geq |\chi(S_{g_2,k})|\geq m.$ \end{def*} \noindent By definition we know that \[\mathcal{L}_{1,1}(X)=\mathcal{L}_{1}(X)\] and \[\mathcal{L}_{1,m-1}(X)\leq \mathcal{L}_{1,m}(X), \quad \forall m \in [2,g-1].\] \begin{proposition}\label{lower bound for chi geq 2} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Then we have that for any fixed $m\geq 1$ independent of $g$, \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in\mathcal{M}_g ;\ \mathcal{L}_{1,m}(X) \geq 2m\log g - (6m-2)\log\log g -\omega(g)\right) = 1. \end{equation*} \end{proposition} \noindent If $m=1$, this is part of Theorem \ref{cor L1}. \begin{rem*} As in \cite{Mirz13}, for all $1\leq m\leq g-1$ the \emph{$m$-th geometric Cheeger constant} $H_m(X)$ of $X$ is defined as \[H_m(X):= \inf \limits_{\gamma}\frac{\ell_{\gamma}(X)}{2\pi m}\] where $\gamma$ is a simple closed multi-geodesic on $X$ with $X\setminus \gamma=X_1\cup X_2$, and $X_1$ and $X_2$ are connected subsurfaces of $X$ such that $|\chi(X_1)|=m\leq |\chi(X_2)|$. The \emph{geometric Cheeger constant} $H(X)$ of $X$ is defined as \[H(X):=\min \limits_{1\leq m\leq g-1}H_m(X).\] \noindent Mirzakhani in \cite{Mirz13} showed that \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g \left(X\in \mathcal{M}_g; \ H(X)> \frac{\log 2}{2\pi} \right)=1.\] \noindent As a direct consequence of Theorem \ref{cor L1}, we obtain the following result on the asymptotic behavior of the first geometric Cheeger constant $H_1(\cdot)$ on $\mathcal{M}_g$.
\begin{corollary} For any $\epsilon>0$, we have \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g \left(X\in \mathcal{M}_g; \ (1-\epsilon)\cdot \frac{\log g}{\pi}< H_1(X)< \frac{\log g}{\pi}\right)=1.\] \end{corollary} \noindent It would also be interesting to study the asymptotic behavior of $H_m(\cdot)$ on $\mathcal{M}_g$ for $2\leq m \leq (g-1)$ as $g$ goes to infinity. One may see the last section for more discussion. \end{rem*} \begin{rem*} Similarly to the definition of $\mathcal{L}_1(\cdot)$, for all $1\leq i \leq (2g-3)$ and $X\in \mathcal{M}_g$, define $$ \mathcal{L}_i(X):=\min\left\{\ell_\gamma(X)\ ; \ \parbox[l]{5.5cm}{ $\gamma=\gamma_1+\cdots+\gamma_k$ is a simple closed multi-geodesic separating $X$ into $(i+1)$ pieces. }\right\}. $$ Let $0=\lambda_0(X)<\lambda_1(X)\leq \cdots \leq \lambda_i(X) \leq \cdots \to \infty$ be the eigenvalues of the Laplacian operator of $X$, listed in increasing order. Schoen-Wolpert-Yau in \cite{SWY80} showed that for all $1\leq i \leq (2g-3)$, there exist two positive constants $\alpha_i(g)$ and $\beta_i(g)$ depending on $g$ such that \[\alpha_i(g)\leq \frac{\lambda_i(X)}{\mathcal{L}_i(X)}\leq \beta_i(g).\] \noindent Recently it was shown by the second and third named authors in \cite{WX18,WX19} that as the genus $g$ goes to infinity, the constant $\alpha_1(g)$ can be optimally chosen to be $\frac{1}{g^2}$ up to multiplication by a uniform positive constant. It is known by Cheng \cite{Cheng75} that $\limsup \limits_{g\to \infty}\sup_{X\in \mathcal{M}_g}\lambda_1(X)\leq \frac{1}{4}$.
Mirzakhani \cite[Page 269]{Mirz13} showed that \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g \left(X\in \mathcal{M}_g; \ \lambda_1(X)>0.002\right)=1.\] Thus, combining these results with Theorem \ref{cor L1}, we have \begin{corollary} For any $\epsilon>0$, we have \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g \left(X\in \mathcal{M}_g; \ \frac{0.001}{\log g}< \frac{\lambda_1(X)}{\mathcal{L}_1(X)}< (1+\epsilon)\cdot \frac{0.125}{\log g}\right)=1.\] \end{corollary} \end{rem*} \subsection*{Plan of the paper.} In Sections \ref{section preliminaries}, \ref{section union}, \ref{section wp volume} and \ref{section McShane identity}, we review the background, introduce some notations, and prove a few technical lemmas. We then prove the lower bound part and the upper bound part of Theorems \ref{main} and \ref{cor L1} in Sections \ref{section lower bound} and \ref{section upper bound}, respectively. In Section \ref{sec:half collar}, we prove Theorems \ref{thm:half collar} and \ref{cor:extremal}. Theorem \ref{cor E[L1]} will be proved in Section \ref{section exp-L1}. We pose several open questions in Section \ref{section questions}. \subsection*{Acknowledgements.} The authors would like to thank Jeffrey Brock, Curtis McMullen, and Michael Wolf for their interest and comments on this paper. The second named author is supported by a grant from Tsinghua University. \setcounter{tocdepth}{1} \tableofcontents \section{Preliminaries}\label{section preliminaries} In this section, we set our notation and review the relevant background material on the moduli space of Riemann surfaces, the Weil-Petersson metric, and Mirzakhani's Integration Formula. \subsection{Weil-Petersson metric.} \label{sec:wp background} We denote by $S_{g,n}$ an oriented surface of genus $g$ with $n$ punctures or boundaries, where $2g+n\geq 3$.
Then the Uniformization theorem implies that the surface $S_{g,n}$ admits hyperbolic metrics of constant curvature $-1$. We let $\mathcal{T}_{g,n}$ be the Teichm\"uller space of surfaces of genus $g$ with $n$ punctures or boundaries, which we consider as the space of hyperbolic surfaces $X=(S_{g,n},\sigma(z)|dz|^2)$ up to the action of the group $\Diff_0(S_{g,n})$ of diffeomorphisms isotopic to the identity. The tangent space $T_X\mathcal{T}_{g,n}$ at a point $X=(S_{g,n},\sigma(z)|dz|^2)$ is identified with the space of finite area {\it harmonic Beltrami differentials} on $X$, i.e. forms on $X$ expressible as $\mu=\overline{\psi}/\sigma$ where $\psi \in Q(X)$ is a holomorphic quadratic differential on $X$. Let $z=x+iy$ and $dA=\sigma(z)dxdy$ be the volume form. The \textit{Weil-Petersson metric} is the Hermitian metric on $\mathcal{T}_{g,n}$ arising from the \textit{Petersson scalar product} \begin{equation} \left<\varphi,\psi \right>= \int_X \frac{\varphi \cdot \overline{\psi}}{\sigma^2}dA\nonumber \end{equation} via duality. We will concern ourselves primarily with its Riemannian part $g_{WP}$. Throughout this paper we denote by $\Teich(S_{g,n})$ the Teichm\"uller space endowed with the Weil-Petersson metric. By definition it is easy to see that the mapping class group $\mathop{\rm Mod}_{g,n}:=\Diff^+(S_{g,n})/\Diff_0(S_{g,n})$ acts on $\Teich(S_{g,n})$ by isometries. Thus, the Weil-Petersson metric descends to a metric, also called the Weil-Petersson metric, on the moduli space of Riemann surfaces $\mathcal{M}_{g,n}$, which is defined as $\mathcal{T}_{g,n}/\mathop{\rm Mod}_{g,n}$. Throughout this paper we also denote by $\mathcal{M}_{g,n}$ the moduli space endowed with the Weil-Petersson metric and write $\mathcal{M}_g = \mathcal{M}_{g,0}$ for simplicity.
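For later reference, we record the standard dimension count (classical material, consistent with the Fenchel-Nielsen coordinates recalled in the next subsection):

```latex
\[
\dim_{\mathbb{C}}\mathcal{T}_{g,n}=3g-3+n,
\]
% i.e. real dimension $6g-6+2n$, matching the $3g+n-3$ length--twist
% pairs $(\ell_{\alpha_i},\tau_{\alpha_i})$ associated with a pants
% decomposition of $S_{g,n}$.
```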
Given $\boldsymbol{L}=(L_1,\cdots, L_n)\in\mathbb{R}_{\geq0}^n$, the weighted Teichm\"uller space $\calT_{g,n}(\boldsymbol{L})$ parametrizes hyperbolic surfaces $X$ marked by $S_{g,n}$ such that for each $i=1,\cdots,n$, \begin{itemize} \item if $L_i=0$, the $i^\text{th}$ puncture of $X$ is a cusp; \item if $L_i>0$, one can attach a circle to the $i^\text{th}$ puncture of $X$ to form a geodesic boundary loop of length $L_i$. \end{itemize} The weighted moduli space $\mathcal{M}_{g,n}(\boldsymbol{L}):=\calT_{g,n}(\boldsymbol{L})/\mathop{\rm Mod}_{g,n}$ then parametrizes such unmarked surfaces. The Weil-Petersson volume form is also well-defined on $\mathcal{M}_{g,n}(\boldsymbol{L})$ and its total volume, denoted by $V_{g,n}(\boldsymbol{L})$, is finite. \subsection{The Fenchel-Nielsen coordinates} Recall that for any surface $S_{g,n}$, a \emph{pants decomposition} $\mathcal{P}$ of $S_{g,n}$ is a set of $(3g+n-3)$ disjoint simple closed curves $\{\alpha_i\}_{i=1}^{3g+n-3}$ such that the complement $S_{g,n}\setminus \cup_{i=1}^{3g+n-3}\alpha_i$ of $S_{g,n}$ is a disjoint union of three-holed spheres. For each $\alpha_i \in \mathcal{P}$, there are two natural functions on $\mathcal{T}_{g,n}$: the (positive) geodesic length function $\ell_{\alpha_i}(X)$ and the (real-valued) twist function $\tau_{\alpha_i}(X)$ along $\alpha_i$. Associated to $\mathcal{P}$, the \emph{Fenchel-Nielsen coordinates}, given by $X \mapsto (\ell_{\alpha_i}(X),\tau_{\alpha_i}(X))_{i=1}^{3g+n-3}$, form a global coordinate system for $\mathcal{T}_{g,n}$. Wolpert \cite{Wolpert82} showed that the Weil-Petersson \ symplectic structure takes a remarkably simple form in Fenchel-Nielsen coordinates. 
More precisely, \begin{theorem}[Wolpert]\label{wol-wp} The Weil-Petersson \ symplectic form $\omega_{\WP}$ on $\mathcal{T}_{g,n}$ is given by \[\omega_{\WP}=\sum_{i=1}^{3g+n-3}d\ell_{\alpha_i}\wedge d\tau_{\alpha_i}.\] \end{theorem} In the sequel, we mainly work with the \emph{Weil-Petersson volume form} $$ d\mathit{vol}_{\WP}:=\tfrac{1}{(3g+n-3)!}\underbrace{\omega_{\WP}\wedge\cdots\wedge\omega_{\WP}}_{\text{$3g+n-3$ copies}}~. $$ It is a $\mathop{\rm Mod}_{g,n}$-invariant measure on $\calT_{g,n}$, hence is the lift of a measure on $\mathcal{M}_{g,n}$, which we still denote by $d\mathit{vol}_{\WP}$. The total volume of $\mathcal{M}_{g,n}$ is known to be finite and is denoted by $V_{g,n}$. Our main objects of study are geometric quantities on $\mathcal{M}_g$. Following \cite{Mirz13}, we view such a quantity $f:\mathcal{M}_g\to\mathbb{R}$ as a random variable on $\mathcal{M}_g$ with respect to the probability measure $\mathop{\rm Prob}\nolimits_{\rm WP}^g$ defined by normalizing $d\mathit{vol}_{\WP}$, and let $\mathbb{E}_{\rm WP}^g[f]$ denote the expectation. Namely, $$ \mathop{\rm Prob}\nolimits_{\rm WP}^g(\mathcal{A}):=\frac{1}{V_g}\int_{\mathcal{M}_g}\mathbf{1}_{\mathcal{A}}dX,\quad \mathbb{E}_{\rm WP}^g[f]:=\frac{1}{V_g}\int_{\mathcal{M}_g}f(X)dX, $$ where $\mathcal{A}\subset\mathcal{M}_g$ is any Borel subset, $\mathbf{1}_\mathcal{A}:\mathcal{M}_g\to\{0,1\}$ is its characteristic function, and we always write $d\mathit{vol}_{\WP}(X)$ as $dX$ for short. In this paper, we view certain geometric quantities as random variables on $\mathcal{M}_g$, and study their asymptotic behavior as $g\to \infty$. One may also see \cite{DGZZ20-multi, GMST19, GPY11, MT20, MP19} for other related topics. \subsection{Mirzakhani's Integration Formula} In \cite{Mirz07}, Mirzakhani gave a formula to integrate geometric functions over moduli spaces, which is an essential tool in the study of random surfaces with respect to the Weil-Petersson metric. 
In the same paper, combining this formula with her generalized McShane identity, she calculated the volumes of moduli spaces. In \cite{Mirz13}, applying this formula, she obtained probabilistic estimates for various geometric quantities. Here we give the version stated in \cite{Mirz13}, which is a little more general than the one in \cite{Mirz07}. Given a homotopy class of a closed curve $\gamma$ on a topological surface $S_{g,n}$ and $X\in\calT_{g,n}$, we denote by $\ell_\gamma(X)$ the hyperbolic length of the unique closed geodesic in the homotopy class $\gamma$ on $X$. We also write $\ell(\gamma)$ for simplicity if we do not need to emphasize the surface $X$. Let $\Gamma=(\gamma_1,\cdots,\gamma_k)$ be an ordered $k$-tuple where the $\gamma_i$'s are distinct disjoint homotopy classes of nontrivial, non-peripheral, simple closed curves on $S_{g,n}$. We consider the orbit containing $\Gamma$ under the $\mathop{\rm Mod}_{g,n}$ action \begin{equation*} \mathcal O_{\Gamma} = \{(h\cdot\gamma_1,\cdots,h\cdot\gamma_k) ; h\in\mathop{\rm Mod}\nolimits_{g,n}\}. \end{equation*} Given a function $F:\mathbb{R}^k_{\geq0} \rightarrow \mathbb{R}_{\geq0}$ we may define a function on $\mathcal{M}_{g,n}$ \begin{eqnarray*} F^\Gamma:\mathcal{M}_{g,n} &\rightarrow& \mathbb{R} \\ X &\mapsto& \sum_{(\alpha_1,\cdots,\alpha_k)\in \mathcal O_\Gamma} F(\ell_{\alpha_1}(X),\cdots,\ell_{\alpha_k}(X)). \end{eqnarray*} \begin{rem*} Although $\ell_\gamma(\cdot)$ is only defined on $\calT_{g,n}$, the function $F^\Gamma(\cdot)$ is well-defined on $\mathcal{M}_{g,n}$. \end{rem*} Assume $S_{g,n}-\cup\gamma_j = \cup_{i=1}^s S_{g_i,n_i}$. 
For any given $\boldsymbol{x}=(x_1,\cdots,x_k)\in \mathbb{R}^k_{\geq0}$, we consider the moduli space $\mathcal{M}(S_{g,n}(\Gamma); \ell_{\Gamma}=\boldsymbol{x})$ of hyperbolic Riemann surfaces (possibly disconnected) homeomorphic to $S_{g,n}-\cup\gamma_j$ with $\ell_{\gamma_i^1} = \ell_{\gamma_i^2} =x_i$ for $i=1,\cdots,k$, where $\gamma_i^1$ and $\gamma_i^2$ are the two boundary components of $S_{g,n}-\cup\gamma_j$ given by $\gamma_i$. We consider the volume \begin{equation*} V_{g,n}(\Gamma,\boldsymbol{x}) = \mathop{\rm Vol}\nolimits_{WP}\big(\mathcal{M}(S_{g,n}(\Gamma); \ell_{\Gamma}=\boldsymbol{x})\big). \end{equation*} In general, \begin{equation*} V_{g,n}(\Gamma,\boldsymbol{x}) = \prod_{i=1}^s V_{g_i,n_i}(\boldsymbol{x}^{(i)}) \end{equation*} where $\boldsymbol{x}^{(i)}$ is the list of those coordinates $x_j$ of $\boldsymbol{x}$ such that $\gamma_j$ is a boundary component of $S_{g_i,n_i}$. Here $V_{g_i,n_i}(\boldsymbol{x}^{(i)})$ denotes the Weil-Petersson volume of the moduli space $\mathcal{M}_{g_i,n_i}(\boldsymbol{x}^{(i)})$. Mirzakhani used Theorem \ref{wol-wp} of Wolpert to obtain the following integration formula. \begin{theorem}\cite[Theorem 7.1]{Mirz07} or \cite[Theorem 2.2]{Mirz13} \label{Mirz int formula} For any $\Gamma=(\gamma_1,\cdots,\gamma_k)$, the integral of $F^\Gamma$ over $\mathcal{M}_{g,n}$ with respect to the Weil-Petersson metric is given by \begin{equation*} \int_{\mathcal{M}_{g,n}} F^\Gamma(X)dX = 2^{-M(\Gamma)}\int_{\mathbb{R}^k_{\geq0}} F(x_1,\cdots,x_k)V_{g,n}(\Gamma,\boldsymbol{x}) \boldsymbol{x}\cdot d\boldsymbol{x} \end{equation*} where $\boldsymbol{x}\cdot d\boldsymbol{x} = x_1\cdots x_k dx_1\wedge\cdots\wedge dx_k$ and \begin{equation*} M(\Gamma) = \# \{i\ ;\ \gamma_i \ \text{separates off a one-handle from} \ S_{g,n} \}. 
\end{equation*} \end{theorem} \begin{rem*} Given an unordered multi-curve $\gamma=\sum_{i=1}^k c_i \gamma_i$ where the $\gamma_i$'s are distinct disjoint homotopy classes of nontrivial, non-peripheral, simple closed curves on $S_{g,n}$, when $F$ is a symmetric function, we can define \begin{eqnarray*} F_\gamma:\mathcal{M}_{g,n} &\rightarrow& \mathbb{R} \\ X &\mapsto& \sum_{\sum_{i=1}^k c_i\alpha_i \in \mathop{\rm Mod}_{g,n}\cdot \gamma} F(c_1\ell_{\alpha_1}(X),\cdots,c_k\ell_{\alpha_k}(X)). \end{eqnarray*} \noindent It is easy to check that \begin{equation*} F^\Gamma(X) = |\mathop{\rm Sym}(\gamma)| \cdot F_\gamma(X) \end{equation*} where $\Gamma=(c_1\gamma_1,\cdots,c_k\gamma_k)$ and $\mathop{\rm Sym}(\gamma)$ is the symmetry group of $\gamma$ defined by \begin{equation*} \mathop{\rm Sym}(\gamma) = \mathop{\rm Stab}(\gamma) / \cap_{i=1}^k \mathop{\rm Stab}(\gamma_i). \end{equation*} \noindent In fact, we will mostly consider the integration of $F_\gamma$ in this paper. \end{rem*} \subsection{Counting functions} In this subsection we introduce some notation that will be used throughout the paper. On a topological surface $S_{g,n}$ with $\chi(S_{g,n})=2-2g-n<0$, let $\gamma=\sum_{i=1}^k \gamma_i$ be a simple closed multi-curve where the $\gamma_i$'s are disjoint homotopy classes of nontrivial, non-peripheral, simple closed curves on $S_{g,n}$. For any $X\in \calT_{g,n}$, we define \begin{equation*} \ell_\gamma(X) := \sum_{i=1}^k \ell_{\gamma_i}(X) \end{equation*} where $\ell_{\gamma_i}(X)$ is the length of the unique closed geodesic in the homotopy class $\gamma_i$ on $X$. We also write $\ell(\gamma)$ for simplicity if we do not need to specify the surface $X$. Consider the orbit containing $\gamma$ under the mapping class group $\mathop{\rm Mod}_{g,n}$ action \begin{equation*} \mathcal O_{\gamma} = \{h\cdot\gamma ;\ h\in\mathop{\rm Mod}\nolimits_{g,n}\} \end{equation*} where $h\cdot \gamma = h\cdot \sum_{i=1}^k\gamma_i = \sum_{i=1}^k h\cdot\gamma_i$. 
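For example, when $\gamma$ is a single non-separating simple closed curve on $S_{g,n}$, the change of coordinates principle implies that the orbit $\mathcal O_{\gamma}$ is precisely the set of all non-separating simple closed curves on $S_{g,n}$. 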
For any $X\in\calT_{g,n}$ and $L>0$, we can define the counting function \begin{equation*} N_\gamma(X,L) := \# \{\alpha\in \mathcal O_{\gamma} ;\ \ell_{\alpha}(X)\leq L \}. \end{equation*} Moreover, although $\ell_\gamma(\cdot)$ is only defined on $\calT_{g,n}$, the counting function $N_\gamma(\cdot,L)$ is well-defined on $\mathcal{M}_{g,n}$. Note that the orbit $\mathcal{O}_\gamma$ of a simple closed multi-curve $\gamma$ is determined by the topology of $S_{g,n}-\gamma$. We also use the following notation for some special types of $\gamma$. When $\alpha$ consists of $n_0$ simple closed curves separating $S_g$ into $S_{g_0,n_0} \cup S_{g-g_0-n_0+1,n_0}$ (\textit{e.g.\@ } see Figure \ref{figure:counting} for the case that $n_0=1$ and $g_0=1$), we write \begin{equation*} N_{g_0,n_0}(X,L) := N_\alpha(X,L). \end{equation*} When $\gamma$ consists of $n_0$ simple closed curves separating $S_g$ into $q+1$ pieces $S_{g_0,n_0} \cup S_{g_1,n_1} \cup S_{g_2,n_2} \cup \cdots \cup S_{g_q,n_q}$ with $n_1+\cdots+n_q=n_0$ and $g_0+g_1+\cdots+g_q+n_0-q=g$ (\textit{e.g.\@ } see Figure \ref{figure:counting}), we write \begin{equation*} N_{g_0,n_0}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L) := N_\gamma(X,L). \end{equation*} In particular, $N_{g_0,n_0}(X,L)= N_{g_0,n_0}^{(g-g_0-n_0+1,n_0)}(X,L)$. \begin{figure}[h] \centering \includegraphics[width=8.2cm]{counting-0.pdf} \caption{} \label{figure:counting} \end{figure} We will also use some other, more specialized counting functions in the paper; they will be introduced when required. \section{Union of two subsurfaces with geodesic boundaries}\label{section union} In this section, we present some hyperbolic-geometric constructions and lemmas used in Section \ref{section upper bound} below. \begin{con*} Fix a closed hyperbolic surface $X\in\mathcal{M}_g$ and let $X_1, X_2$ be two distinct connected, compact subsurfaces of $X$ with geodesic boundaries, such that $X_1\cap X_2\neq\emptyset$ and neither of them contains the other. 
Then the union $X_1\cup X_2$ is a subsurface whose boundary is only piecewise geodesic. We can construct from it a new surface $\tilde X_{12}$, with geodesic boundary, by deforming each of its boundary components $\xi\subset\partial(X_1\cup X_2)$ as follows: \begin{itemize} \item if $\xi$ is homotopically trivial, we attach the disc bounded by $\xi$ to $X_1\cup X_2$; \item otherwise, we deform $X_1\cup X_2$ by shrinking $\xi$ to the unique simple closed geodesic homotopic to it. \end{itemize} \noindent Note that $\tilde{X}_{12}$ may not be a compact subsurface of $X$ because it is possible that two different boundary components $\xi\subset\partial(X_1\cup X_2)$ shrink to the same simple closed geodesic $\gamma$ from the two sides of $\gamma$; in this case, where a simple closed geodesic appears twice in $\partial \tilde X_{12}$, we glue the two copies together to obtain a compact subsurface of $X$ denoted by $X_{12}$. \end{con*} \noindent By the construction above, $X_{12}\subset X$ is a compact subsurface with geodesic boundary. Clearly we have \begin{equation} \chi(X_{12}) = \chi(\tilde X_{12}). \end{equation} As for the boundary length, we have \begin{equation} \ell(\partial X_{12}) \leq \ell(\partial \tilde X_{12}) \leq \ell(\partial X_1) + \ell(\partial X_2). \end{equation} We will mainly apply this construction to the situation where $X_1$ and $X_2$ are both one-handles (that is, of type $S_{1,1}$). So we introduce the following notation for later use: \begin{definition*}\label{def U} Suppose $X\in\mathcal{M}_g$. For a simple closed geodesic $\alpha\subset X$ bounding a one-handle, let $X_\alpha$ denote the one-handle bounded by $\alpha$. For two such geodesics $\alpha,\beta$ with $\alpha\neq\beta$, $\alpha\cap\beta\neq\emptyset$, let $X_{\alpha\beta}$ denote the subsurface $X_{12}$ of $X$ constructed above for $X_1=X_\alpha$ and $X_2=X_\beta$. See Figure \ref{figure_examples}. 
\end{definition*} \begin{figure}[ht] \centering \includegraphics[width=11cm]{examples2.pdf} \caption{Examples of $(\alpha,\beta)$ and $X_{\alpha\beta}$. The one-handle $X_\beta$ is colored. $X_{\alpha\beta}$ is of type $S_{1,2}$ in the first example and of type $S_{2,1}$ in both examples on the right.} \label{figure_examples} \end{figure} \begin{rem*} The first example in Figure \ref{figure_examples} illustrates the case where $\beta$ is obtained from $\alpha$ by an $n$-fold Dehn twist along another simple closed curve ($n=3$ in the figure). In this case, $X_{\alpha\beta}$ is always of type $S_{1,2}$. Note that $X_\beta\setminus X_\alpha$ is a disjoint union of strips homotopic to each other in this case. So one can construct a pair $(\alpha,\beta)$ with $|\chi(X_{\alpha\beta})|$ arbitrarily large by modifying these strips so that they are no longer homotopic. \end{rem*} We now return to the general case and establish a basic property of $X_{12}$: \begin{lemma}\label{alpha sbs U} Let $X_1$, $X_2$ and $X_{12}$ be as above. Then we have \begin{equation*} X_1 \cup X_2 \subset X_{12}, \end{equation*} and the complement $X_{12} \setminus (X_1 \cup X_2)$ is a disjoint union of topological discs and cylinders. \end{lemma} \begin{proof} We begin with the observation that $X_0:=X_1\cup X_2$ is a subsurface of $X$ with \emph{concave} piecewise geodesic boundary, where the concavity means that for each junction point $p\in\partial X_0$ of two geodesic pieces of $\partial X_0$, the inner angle $\angle_pX_0$ of $X_0$ at $p$ is greater than $\pi$ (see Figure \ref{figure:crown1}). This is because $\angle_pX_0$ is formed by overlapping the two $\pi$-angles given by $X_1$ and $X_2$ at $p$. 
\begin{figure}[ht] \centering \includegraphics[width=4cm]{crown1.pdf} \caption{} \label{figure:crown1} \end{figure} By the construction of $X_{12}$, in order to prove the required statements, we only need to show that if $\xi$ is a component of $\partial X_0$ which is homotopically nontrivial and consists of at least two geodesic pieces, then $\xi$ and the simple closed geodesic $\xi'$ homotopic to $\xi$ together bound an annulus outside of $X_0$, as Figure \ref{figure:crown1} shows. Suppose by contradiction that $\xi$ violates this property. Then we are in one of the following cases: \textbf{Case 1. $\xi'$ is contained in $X_0\setminus\xi$} (see Figure \ref{figure:crown2}). \begin{figure}[ht] \centering \includegraphics[width=4cm]{crown2.pdf} \caption{} \label{figure:crown2} \end{figure} Applying the Gauss-Bonnet formula to the annulus $A$ bounded by $\xi$ and $\xi'$ in this case, we get $$ -\mathop{\rm Area}(A)+\sum_{p\in J(\xi)}(\pi-\angle_pX_0)=2\pi\chi(A)=0, $$ where $J(\xi)$ denotes the set of junction points of the geodesic pieces of $\xi$. This is a contradiction because the LHS is negative. \textbf{Case 2.} Otherwise, we have $\xi'\cap\xi\neq\emptyset$. In this case, $\xi'$ contains an arc $\xi'_0$ in $X_0\setminus\xi$ joining two points $q_1,q_2$ of $\xi$ (see Figure \ref{figure:crown2}). These two points separate $\xi$ into two arcs and one of them, denoted by $\xi_0$, bounds a disc $D$ together with $\xi_0'$ because $\xi'$ is homotopic to $\xi$. Applying Gauss-Bonnet to $D$, we get $$ -\mathop{\rm Area}(D)+(\pi-\angle_{q_1}D)+(\pi-\angle_{q_2}D)+\sum_{p\in J(\xi_0)}(\pi-\angle_pX_0)=2\pi\chi(D)=2\pi, $$ which also leads to a contradiction: each of the terms $(\pi-\angle_{q_1}D)$ and $(\pi-\angle_{q_2}D)$ is less than $\pi$, while the remaining terms on the left-hand side are negative by concavity, so the LHS is strictly less than $2\pi$. The proof is complete. \end{proof} We proceed to give bounds on the Euler characteristic of $X_{12}$: \begin{lemma}\label{lemma chiU12} Let $X_1$, $X_2$ and $X_{12}$ be as above. 
Then we have \begin{equation*} |\chi(X_{12})| \geq 1+\max\{|\chi(X_1)|,|\chi(X_2)|\} \end{equation*} and \begin{equation*} |\chi(X_{12})| \leq |\chi(X_1)| + |\chi(X_2)|+\frac{\ell(\partial X_1)+\ell(\partial X_2)}{2\pi}. \end{equation*} \end{lemma} \begin{proof} By the Gauss-Bonnet formula and the assumption that neither $X_1$ nor $X_2$ contains the other, we have \begin{align*} |\chi(X_{12})|&=\frac{1}{2\pi}\mathop{\rm Area}(X_{12}) \\ &>\frac{1}{2\pi}\max\{\mathop{\rm Area}(X_1), \mathop{\rm Area}(X_2) \} \\ &=\max\{|\chi(X_1)|,|\chi(X_2)|\}, \end{align*} which is equivalent to the required lower bound of $|\chi(X_{12})|$ because Euler characteristics are integers. To prove the upper bound, let $\xi_1,\cdots,\xi_r$ be the boundary components of $X_1\cup X_2$ which are piecewise geodesics with at least two pieces. Let $I$ denote the set of indices $i\in\{1,\cdots,r\}$ such that $\xi_i$ is homotopically trivial, $J$ denote the set of indices $j\in\{1,\cdots,r\}$ such that $\xi_j$ is homotopic to a component of $\partial X_{12}$, and $K$ denote the set of indices $k\in\{1,\cdots,r\}$ such that $\xi_k$ is homotopic to a geodesic in the interior of $X_{12}$ (recall that two boundary simple closed geodesics in $\tilde X_{12}$ may be glued together into a single one in $X_{12}$). By Lemma \ref{alpha sbs U}, $X_{12} \setminus (X_1 \cup X_2)$ is a disjoint union of topological discs $\{D_i\}_{i\in I}$, cylinders $\{C_j\}_{j\in J}$ and cylinders $\{C'_{p}\}_{p\in P}$, where $\partial D_i$ is exactly $\xi_i$, $\partial C_j$ is the union of $\xi_j$ and some boundary component of $X_{12}$, and $\partial C'_{p}$ is the union of two elements $\xi_{k_p^1}$ and $\xi_{k_p^2}$ of $\{\xi_k\}_{k\in K}$. Each element of $\{\xi_1,\cdots,\xi_r\}$ appears in $\{\partial D_i\}_{i\in I}$ or $\{\partial C_j\}_{j\in J}$ or $\{\partial C'_{p}\}_{p\in P}$ exactly once. 
By the isoperimetric inequality for topological discs and cylinders on hyperbolic surfaces (\textit{e.g.\@ } see \cite{Buser10} or \cite{WX18}), we have \begin{equation*} \mathop{\rm Area}(D_i) \leq \ell(\partial D_i)=\ell(\xi_i), \end{equation*} \begin{equation*} 2\mathop{\rm Area}(C_j) = \mathop{\rm Area}(2C_j)\leq \ell(\partial (2C_j)) = 2\ell(\xi_j), \end{equation*} \begin{equation*} \mathop{\rm Area}(C'_p) \leq \ell(\partial C'_p)=\ell(\xi_{k_p^1})+\ell(\xi_{k_p^2}), \end{equation*} where $2C_j$ denotes the double of $C_j$ along its geodesic boundary component in $\partial X_{12}$. Therefore, \begin{align*} \mathop{\rm Area}(X_{12}) &= \mathop{\rm Area}(X_1 \cup X_2) +\sum_{i\in I}\mathop{\rm Area}(D_i)+\sum_{j\in J}\mathop{\rm Area}(C_j)+\sum_{p\in P}\mathop{\rm Area}(C'_p) \\ &\leq \mathop{\rm Area}(X_1 \cup X_2) +\ell(\xi_1)+\cdots+\ell(\xi_r) \\ &\leq \mathop{\rm Area}(X_1) + \mathop{\rm Area}(X_2) + \ell(\partial (X_1 \cup X_2))\\ &\leq \mathop{\rm Area}(X_1) + \mathop{\rm Area}(X_2) + \ell(\partial X_1)+\ell(\partial X_2). \end{align*} This gives the required upper bound of $|\chi(X_{12})|$ again by Gauss-Bonnet. \end{proof} In the case where $X_1$ and $X_2$ are one-handles, Lemma \ref{lemma chiU12} implies: \begin{lemma}\label{area U small} On $X\in\mathcal{M}_g$, let $\alpha,\beta$ be two simple closed geodesics bounding one-handles with $\ell(\alpha)\leq L, \ell(\beta)\leq L$ and $\alpha\neq\beta$, $\alpha\cap\beta\neq\emptyset$. Then we have \begin{enumerate}[label=(\arabic*)] \item\label{item:U1} The genus of $X_{\alpha\beta}$ is at least $1$, and the Euler characteristic $\chi(X_{\alpha\beta})$ satisfies \[2 \leq |\chi(X_{\alpha\beta})| \leq \frac{1}{\pi}L+2.\] \item\label{item:U2} If $|\chi(X_{\alpha\beta})|=2$ and $g\geq3$, then $X_{\alpha\beta}$ is of type $S_{1,2}$. \end{enumerate} \end{lemma} \begin{proof} Statement \ref{item:U1} follows from Lemma \ref{lemma chiU12}. 
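Explicitly, since $|\chi(X_\alpha)|=|\chi(X_\beta)|=1$ and $\ell(\partial X_\alpha)=\ell(\alpha)\leq L$, $\ell(\partial X_\beta)=\ell(\beta)\leq L$, Lemma \ref{lemma chiU12} gives \[ 2=1+\max\{1,1\}\leq|\chi(X_{\alpha\beta})|\leq 1+1+\frac{L+L}{2\pi}=\frac{1}{\pi}L+2, \] and the genus of $X_{\alpha\beta}$ is at least $1$ since $X_{\alpha\beta}$ contains the one-handle $X_\alpha$ by Lemma \ref{alpha sbs U}. 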
Statement \ref{item:U2} holds because the only surfaces $S_{g_0,n_0}$ such that $|\chi(S_{g_0,n_0})|=2$ and $g_0\geq1$ are $S_{1,2}$ and $S_{2,0}$, whereas $X$ cannot have a subsurface of type $S_{2,0}$ if $g\geq3$. \end{proof} Finally, we show that in the case where $X_{\alpha\beta}$ is of type $S_{1,2}$, under some additional assumptions, one can reduce $X_{\alpha\beta}$ to a $4$-holed sphere: \begin{lemma}\label{lemma:4holded} On $X\in\mathcal{M}_g$, let $\alpha,\beta$ be two simple closed geodesics bounding one-handles such that \begin{itemize} \item $\alpha\neq\beta$, $\alpha\cap\beta\neq\emptyset$; \item $X_{\alpha\beta}$ is of type $S_{1,2}$; \item $\ell(\alpha)\leq L$, $\ell(\beta)\leq L$; \item $\ell(\partial X_{\alpha\beta})\geq\frac{5}{3}L$. \end{itemize} Then $\alpha$ and $\beta$ have exactly $4$ intersection points, and the intersection $\mathring{X_\alpha}\cap \mathring{X_\beta}$ (where the ``$\circ$" superscript denotes the interior) contains a unique simple closed geodesic $\delta$ (see Figure \ref{a new geodesic in S12}). \end{lemma} Note that since a one-handle with geodesic boundary is cut by any simple closed geodesic in its interior into a pair-of-pants, the geodesic $\delta$ given by the lemma cuts $X_{\alpha\beta}$ into a $4$-holed sphere containing both $\alpha$ and $\beta$. \begin{figure}[h] \centering \includegraphics[width=5cm]{a-new-geodesic-in-S12.pdf} \caption{Simple closed geodesic $\delta\subset \mathring{X_\alpha}\cap \mathring{X_\beta}$.} \label{a new geodesic in S12} \end{figure} \begin{proof} We first show $$\#(\alpha\cap\beta)=4.$$ Since $\alpha$ and $\beta$ are intersecting simple closed geodesics representing the zero homology class, $\#(\alpha\cap\beta)$ is a positive even number. Moreover, we have $\#(\alpha\cap\beta)\neq 2$. 
If not, then $\beta\cap X_\alpha$ would be a single simple geodesic arc splitting $X_\alpha$ into at least two pieces (namely, the connected components of $X_\alpha\cap \mathring{X_\beta}$ and $X_\alpha\setminus X_\beta$), which is a contradiction because a simple geodesic arc in a one-handle joining boundary points cannot separate the one-handle. Thus, if $\#(\alpha\cap\beta)\neq4$, then we have $\#(\alpha\cap\beta)\geq6$, so that $\beta\setminus \mathring{X_\alpha}$ consists of at least $3$ segments. The shortest two, which we denote by $\beta_1$ and $\beta_2$, would have total length $$ \ell(\beta_1)+\ell(\beta_2)\leq\tfrac{2}{3}\ell(\beta)\leq \tfrac{2}{3}L. $$ Since $\beta_1$ and $\beta_2$ are disjoint geodesic arcs in the pair of pants $X_{\alpha\beta}\setminus \mathring{X_\alpha}$ with endpoints in the same boundary component $\alpha$, they must be homotopic to each other relative to $\alpha$, and $\partial X_{\alpha\beta}$ is homotopic to the two closed piecewise geodesics formed by $\beta_1$, $\beta_2$ together with two disjoint segments of $\alpha$. So we have $$ \ell(\partial X_{\alpha\beta})<\ell(\alpha)+\ell(\beta_1)+\ell(\beta_2)\leq L+\tfrac{2}{3}L=\tfrac{5}{3}L, $$ contradicting the assumption $\ell(\partial X_{\alpha\beta})\geq\frac{5}{3}L$. This proves $\#(\alpha\cap\beta)=4$. As a consequence, $\beta$ is split by $\alpha$ into $4$ segments. Since the two segments $\beta_1,\beta_2\subset X_{\alpha\beta}\setminus \mathring{X_\alpha}$ considered above are homotopic, $X_\beta\setminus \mathring{X_\alpha}$ is homeomorphic to a disk. We now consider the other two segments, which are contained in $X_\alpha$, and denote them by $\beta_1',\beta_2'$. 
It is a basic fact that given a one-handle $Y$ with geodesic boundary, for any disjoint simple geodesic arcs $a_1,a_2\subset Y$ with endpoints in $\partial Y$, we have: \begin{itemize} \item If $a_1$ and $a_2$ are homotopic relative to $\partial Y$, then they cut $Y$ into two pieces, namely a topological cylinder and a topological disk; \item Otherwise, $a_1$ and $a_2$ cut $Y$ open into a topological disk. \end{itemize} Now since $\beta_1'$ and $\beta_2'$ separate $X_\alpha$, they must belong to the first case. Thus, among the two pieces of $X_\alpha$ split out by $\beta$, namely $X_\alpha\cap X_\beta$ and $X_\alpha\setminus \mathring{X_\beta}$, one is a cylinder and the other is a disk. But we have shown above that $X_\beta\setminus \mathring{X_\alpha}$ is a disk, and the argument implies $X_\alpha\setminus \mathring{X_\beta}$ is a disk as well if we switch the roles of $\alpha$ and $\beta$. Therefore, we conclude that $X_\alpha\cap X_\beta$ is a cylinder as shown in Figure \ref{a new geodesic in S12}. This cylinder contains a unique simple closed geodesic $\delta$, namely the one homotopic to its boundary loops. Moreover, $\delta$ is in the interior of the cylinder since it is contained in both $X_\alpha$ and $X_\beta$. The proof is complete. \end{proof} \begin{rem*} By construction, $\alpha\cup\beta$ is always homotopic to $\partial X_{\alpha \beta}\cup2\delta$ where $2\delta$ means two copies of $\delta$. We will use this simple observation later in Subsection \ref{proof of third tend to 0}. \end{rem*} \begin{rem*}\label{remark:topology} The second statement of Lemma \ref{lemma:4holded} actually holds true for any intersecting pair $(\alpha,\beta)$ of simple closed geodesics bounding one-handles such that $X_{\alpha\beta}$ is of type $S_{1,2}$ (\textit{c.f.\@ } the first example in Figure \ref{figure_examples}). The proof is more complicated and not necessary for our purpose. 
\end{rem*} \section{Weil-Petersson volume}\label{section wp volume} In this section we give some results on the Weil-Petersson \ volumes of moduli spaces. All of these are either known or are generalizations of known results. We denote by $V_{g,n}(x_1,\cdots,x_n)$ the Weil-Petersson \ volume of $\mathcal{M}_{g,n}(x_1,\cdots,x_n)$ and set $V_{g,n}= V_{g,n}(0,\cdots,0)$. One may also see \cite{Agg20, Grus01, LX14, Mirz07, Mirz07-int, Mirz13, MZ15, Penner92, ST01, Zograf08} for the asymptotic behavior of $V_{g,n}$ and its deep connection to the intersection theory of $\mathcal{M}_{g,n}$. First we recall several results of Mirzakhani and her coauthors. \begin{theorem}\label{Mirz vol lemma 0} \begin{enumerate} \item \cite[Theorem 1.1]{Mirz07} The volume $V_{g,n}(x_1,\cdots,x_n)$ is a polynomial in $x_1^2,\cdots,x_n^2$ of degree $3g-3+n$. Namely we have \begin{equation*} V_{g,n}(x_1,\cdots,x_n) = \sum_{\alpha; |\alpha|\leq 3g-3+n} C_\alpha \cdot x^{2\alpha} \end{equation*} where $C_\alpha>0$ lies in $\pi^{6g-6+2n-|2\alpha|} \cdot \mathbb Q$. Here $\alpha=(\alpha_1,\cdots,\alpha_n)$ is a multi-index and $|\alpha|=\alpha_1+\cdots+\alpha_n$, $x^{2\alpha}= x_1^{2\alpha_1}\cdots x_n^{2\alpha_n}$. \item \cite[Table 1]{Mirz07} \begin{equation*} V_{0,3}(x,y,z) = 1, \end{equation*} \begin{equation*} V_{1,1}(x) = \frac{1}{24}(x^2+4\pi^2). \end{equation*} \end{enumerate} \end{theorem} \begin{lemma}\label{Mirz vol lemma 1} \begin{enumerate} \item \cite[Lemma 3.2]{Mirz13} \begin{equation*} V_{g,n} \leq V_{g,n}(x_1,\cdots,x_n) \leq e^{\frac{x_1+\cdots+x_n}{2}} V_{g,n}. \end{equation*} \item \cite[Lemma 3.2]{Mirz13} For any $g,n\geq 0$ \begin{equation*} V_{g-1,n+4} \leq V_{g,n+2} \end{equation*} and \begin{equation*} b_0\leq \frac{V_{g,n+1}}{(2g-2+n)V_{g,n}} \leq b_1 \end{equation*} for some universal constants $b_0,b_1>0$ independent of $g,n$. 
\item \cite[Theorem 3.5]{Mirz13} For fixed $n\geq 0$, as $g\rightarrow \infty$ we have \begin{equation*} \frac{V_{g,n+1}}{2g V_{g,n}} = 4\pi^2 + O(\frac{1}{g}), \end{equation*} \begin{equation*} \frac{V_{g,n}}{V_{g-1,n+2}} = 1 + O(\frac{1}{g}). \end{equation*} Here the implied constants depend on $n$ and are independent of $g$. \end{enumerate} \end{lemma} \begin{rem*} For Part $(3)$, one may also see the following Theorem \ref{MZ vol thm} of Mirzakhani-Zograf. \end{rem*} \begin{lemma}\cite[Corollary 3.7]{Mirz13} \label{Mirz vol lemma 2} For fixed $b,k,r\geq 0$ and $C<C_0= 2\log 2$, \begin{equation*} \sum_{\begin{array}{c} g_1+g_2=g+1-k \\ r+1\leq g_1\leq g_2 \end{array}} e^{Cg_1} \cdot g_1^b \cdot V_{g_1,k} \cdot V_{g_2,k} \asymp \frac{V_g}{g^{2r+k}} \end{equation*} as $g\rightarrow\infty$. The implied constants depend on $b,k,r,C$ and are independent of $g$. Here $A\asymp B$ means $c_1 A\leq B\leq c_2 A$ for two constants $c_1,c_2>0$ independent of $g$. \end{lemma} \begin{theorem}\cite[Theorem 1.2]{MZ15} \label{MZ vol thm} There exists a universal constant $\alpha>0$ such that for any given $n\geq0$, \begin{equation*} V_{g,n} = \alpha \frac{1}{\sqrt{g}} (2g-3+n)! (4\pi^2)^{2g-3+n} \big(1+O(\frac{1}{g}) \big) \end{equation*} as $g\rightarrow\infty$. The implied constant depends on $n$ and is independent of $g$. \end{theorem} \begin{rem*} It is conjectured by Zograf in \cite{Zograf08} that $\alpha=\frac{1}{\sqrt\pi}$, which is still open. \end{rem*} The following result is motivated by \cite[Proposition 3.1]{MP19} where the error term in the lower bound is different. \begin{lemma} \label{MP vol lemma} Let $g,n\geq 1$ and $x_1,\cdots,x_n\geq 0$. Then there exists a constant $c=c(n)>0$ independent of $g,x_1,\cdots,x_n$ such that \begin{equation*} \prod_{i=1}^n \frac{\sinh(x_i/2)}{x_i/2} \big(1- c(n)\frac{\sum_{i=1}^n x_i^2}{g}\big) \leq \frac{V_{g,n} (x_1,\cdots,x_n)}{ V_{g,n}} \leq \prod_{i=1}^n \frac{\sinh(x_i/2)}{x_i/2}. 
\end{equation*} \end{lemma} \begin{proof} By Theorem \ref{Mirz vol lemma 0} we know that $V_{g,n} (2x_1,\cdots,2x_n)$ is a polynomial in $x_1^2,\cdots,x_n^2$ of degree $3g-3+n$. As in \cite[(3.1)]{Mirz13} we write \begin{equation*} V_{g,n} (2x_1,\cdots,2x_n) = \sum_{|\boldsymbol{d}| \leq 3g-3+n} [\tau_{d_1},\cdots,\tau_{d_n}]_{g,n} \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \end{equation*} where $\boldsymbol{d} = (d_1,\cdots,d_n)$ with $d_i\geq 0$ and $|\boldsymbol{d}| = d_1+\cdots+d_n$. In \cite[page 286]{Mirz13}, Mirzakhani gave the following bound for $[\tau_{d_1},\cdots,\tau_{d_n}]_{g,n}$: given $n\geq1$, we have \begin{equation*} 0\leq 1- \frac{[\tau_{d_1},\cdots,\tau_{d_n}]_{g,n} }{V_{g,n}} \leq c_0 \frac{|\boldsymbol{d}|^2}{g} \end{equation*} where $c_0$ is a constant depending only on $n$ and independent of $g,\boldsymbol{d}$. So we have \begin{equation*} \frac{V_{g,n}(2x_1,\cdots,2x_n)}{V_{g,n}} \leq \sum_{|\boldsymbol{d}| \leq 3g-3+n} \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \end{equation*} and \begin{eqnarray*} \frac{V_{g,n}(2x_1,\cdots,2x_n)}{V_{g,n}} &&\geq \sum_{|\boldsymbol{d}| \leq 3g-3+n} \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \\ & & - \frac{c_0}{g} \sum_{|\boldsymbol{d}| \leq 3g-3+n} |\boldsymbol{d}|^2\frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!}. \end{eqnarray*} Recall that \begin{equation*} \prod_{i=1}^n \frac{\sinh(x_i)}{x_i} = \sum_{d_1,\cdots,d_n =0}^\infty \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!}. \end{equation*} So we get the upper bound. That is, \begin{equation*} \frac{V_{g,n}(2x_1,\cdots,2x_n)}{V_{g,n}} \leq \prod_{i=1}^n \frac{\sinh(x_i)}{x_i}. \end{equation*} For the lower bound, first we have \begin{equation*} (x_1^2+\cdots+x_n^2)\prod_{i=1}^n \frac{\sinh(x_i)}{x_i} = \sum_{d_1,\cdots,d_n =0}^\infty \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} (\sum_{i=1}^n 2d_i(2d_i+1)). 
\end{equation*} Then by the Cauchy-Schwarz inequality we have \begin{equation*} \sum_{|\boldsymbol{d}| \leq 3g-3+n} |\boldsymbol{d}|^2\frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \leq \frac{n}{4}(x_1^2+\cdots+x_n^2)\prod_{i=1}^n \frac{\sinh(x_i)}{x_i}. \end{equation*} Recall that Stirling's formula says that $$k!\sim \sqrt{2\pi k} (\frac{k}{e})^k$$ which implies that for large $k>0$, $$k!\geq (\frac{k}{e})^k.$$ Hence, we have \begin{eqnarray*} && \sum_{|\boldsymbol{d}| > 3g-3+n} \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \\ &\leq& \sum_{k>3g-3+n} \frac{1}{k!} \sum_{|\boldsymbol{d}| =k} \frac{k!}{d_1!\cdots d_n!}(x_1^2)^{d_1}\cdots(x_n^2)^{d_n} \\ &=& \sum_{k>3g-3+n} \frac{1}{k!} (x_1^2+\cdots+x_n^2)^k \\ &\leq& \sum_{k>3g-3+n} \big( \frac{e\cdot (x_1^2+\cdots+x_n^2)}{k} \big)^k. \end{eqnarray*} If $\frac{e\cdot (x_1^2+\cdots+x_n^2)}{3g-2+n} \leq 0.5$, we have \begin{eqnarray*} \sum_{|\boldsymbol{d}| > 3g-3+n} \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} &\leq& 2 \big( \frac{e\cdot (x_1^2+\cdots+x_n^2)}{3g-2+n} \big)^{3g-2+n} \\ &\leq& 4 \frac{x_1^2+\cdots+x_n^2}{g} \prod_{i=1}^n \frac{\sinh(x_i)}{x_i}. \end{eqnarray*} Hence, when $\frac{e\cdot (x_1^2+\cdots+x_n^2)}{3g-2+n} \leq 0.5$, we get \begin{eqnarray*} \frac{V_{g,n}(2x_1,\cdots,2x_n)}{V_{g,n}} &\geq& \prod_{i=1}^n \frac{\sinh(x_i)}{x_i} - \sum_{|\boldsymbol{d}| > 3g-3+n} \frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \\ & & - \frac{c_0}{g} \sum_{|\boldsymbol{d}| \leq 3g-3+n} |\boldsymbol{d}|^2\frac{x_1^{2d_1}}{(2d_1+1)!}\cdots\frac{x_n^{2d_n}}{(2d_n+1)!} \\ &\geq& \prod_{i=1}^n \frac{\sinh(x_i)}{x_i} \big( 1-(\frac{n}{4}c_0 + 4)\frac{x_1^2+\cdots+x_n^2}{g} \big). \end{eqnarray*} \noindent If $\frac{e(x_1^2+\cdots+x_n^2)}{3g-2+n} > 0.5$, then $e \cdot \frac{x_1^2+\cdots+x_n^2}{g}>1$ and the lower bound is trivial in this case. 
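Indeed, in this case $\frac{x_1^2+\cdots+x_n^2}{g} > \frac{3g-2+n}{2eg} \geq \frac{1}{e}$ since $3g-2+n\geq 2g$ for $g,n\geq 1$, so $$1-(\frac{n}{4}c_0 + 4)\frac{x_1^2+\cdots+x_n^2}{g} \leq 1-\frac{4}{e}<0,$$ while $\frac{V_{g,n}(2x_1,\cdots,2x_n)}{V_{g,n}}>0$. 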
\end{proof} \begin{rem*} In the proof above, \begin{enumerate} \item for the lower bound, the $x_i$'s may depend on $g$ but $n$ is independent of $g$ as $g\rightarrow\infty$; \item for the upper bound, both the $x_i$'s and $n$ may depend on $g$ as $g\rightarrow\infty$. \end{enumerate} \end{rem*} One may observe from Lemmas \ref{Mirz vol lemma 1} and \ref{Mirz vol lemma 2} and Theorem \ref{MZ vol thm} that the asymptotic behavior of $V_{g,n}$ is related to the quantity $|\chi(S_{g,n})|=|2g-2+n|$. Therefore, we introduce the following notation, where the subscript $r\geq1$ represents this quantity: $$ W_{r}:= \begin{cases} V_{\frac{r}{2}+1}&\text{if $r$ is even},\\[5pt] V_{\frac{r+1}{2},1}&\text{if $r$ is odd}. \end{cases} $$ Now we provide the following properties of $W_r$ which will be applied later. \begin{lemma}\label{Wr-prop} \begin{enumerate} \item For any $g,n\geq 0$, we have $$V_{g,n} \leq c \cdot W_{2g-2+n}$$ for some universal constant $c>0$. \item For any $r\geq1$ and $m_0\leq \frac{1}{2}r$, we have $$\sum_{m=m_0}^{[\frac{r}{2}]} W_m W_{r-m} \leq c(m_0) \frac{1}{r^{m_0}}W_r$$ for some constant $c(m_0)>0$ only depending on $m_0$. \end{enumerate} \end{lemma} \begin{proof} For $(1)$, first by Part $(2)$ of Lemma \ref{Mirz vol lemma 1} we know that there exists a pair $(g',n')$ with $0\leq n'\leq 3$ and $2g'-2+n'=2g-2+n$ such that $$V_{g,n} \leq V_{g',n'}.$$ Again by Part $(3)$ of Lemma \ref{Mirz vol lemma 1} or Theorem \ref{MZ vol thm} we know that there is a universal constant $c>0$ such that $$V_{g',2}\leq c V_{g'+1} \quad \text{and} \quad V_{g',3}\leq cV_{g'+1,1}.$$ So for odd $n>0$ we have $$V_{g,n}\leq V_{g',n'}\leq c V_{g+\frac{n-1}{2},1} = c W_{2g-2+n},$$ and for even $n\geq 0$ we also have $$V_{g,n}\leq V_{g',n'}\leq c V_{g+\frac{n}{2}} = c W_{2g-2+n}$$ which completes the proof of $(1)$.\\ For $(2)$, we only show it for the case that both $m_0$ and $r$ are odd. The proofs of the other cases are similar. We leave them as an exercise to the reader.
First by Part $(3)$ of Lemma \ref{Mirz vol lemma 1}, there is a universal constant $c>0$ such that for odd $m$, $$W_{m}\leq c\frac{1}{m} V_{\frac{m+3}{2}}.$$ Recall that Part $(3)$ of Lemma \ref{Mirz vol lemma 1} implies that for some universal constant $c'>0$, \[\frac{V_{g+1}}{V_{g,1}}\leq c'\cdot g.\] Then it follows by Lemma \ref{Mirz vol lemma 2} that there exist two constants $c'(m_0),c(m_0)>0$ only depending on $m_0$ such that \begin{eqnarray*} \sum_{m=m_0}^{[\frac{r}{2}]} W_{m} W_{r-m} &\leq& \sum_{\tiny\begin{array}{c}m=m_0+1 \\ m\ \text{even}\end{array}}^{[\frac{r}{2}]} \frac{c}{r-m} V_{\frac{m}{2}+1} V_{\frac{r-m+3}{2}} + \sum_{\tiny\begin{array}{c}m=m_0 \\ m\ \text{odd}\end{array}}^{[\frac{r}{2}]} \frac{c}{m} V_{\frac{m+3}{2}} V_{\frac{r-m}{2}+1} \\ &\leq& \frac{c}{r} \sum_{k=\frac{m_0+3}{2}}^{[\frac{r}{4}]+1} V_{k} V_{\frac{r+5}{2}-k} + \frac{c}{m_0} \sum_{k=\frac{m_0+3}{2}}^{[\frac{r}{4}]+1} V_{k} V_{\frac{r+5}{2}-k} \\ &\leq& c'(m_0)\frac{1}{r^{m_0+1}} V_{\frac{r+3}{2}} \\ &\leq& c(m_0)\frac{1}{r^{m_0}} V_{\frac{r+1}{2},1} \\ &=& c(m_0)\frac{1}{r^{m_0}} W_r. \end{eqnarray*} The proof is complete. \end{proof} The following lemma is a generalization of \cite[Lemma 3.2]{MP19} and \cite[Lemma 6.3]{GMST19}. Here we allow the $n_i$'s and $q$ to depend on $g$ as $g\to \infty$. \begin{lemma}\label{sum vol lemma} Assume $q\geq 1$, $n_1,\cdots,n_q\geq 0$, $r\geq2$. Then there exist two universal constants $c,D>0$ such that \begin{equation*} \sum_{\{g_i\}} V_{g_1,n_1}\cdots V_{g_q,n_q} \leq c \big(\frac{D}{r}\big)^{q-1} W_r \end{equation*} where the sum is taken over all $\{g_i\}_{i=1}^q \subset \mathbb{N}$ such that $2g_i-2+n_i \geq 1$ for all $i=1,\cdots,q$, and $\sum_{i=1}^q (2g_i-2+n_i) = r$. \end{lemma} \begin{proof} Given a $\{g_i\}$ in the summation, let $g_i'\geq0$ and $0\leq n_i'\leq 3$ be such that $2g_i'-2+n_i'=2g_i-2+n_i$ for each $i$.
By Lemma \ref{Mirz vol lemma 1} we know that $$V_{g_i,n_i} \leq V_{g'_i,n'_i}.$$ And by Theorem \ref{MZ vol thm}, we have \begin{align*} V_{g'_i,n'_i}&\leq \alpha_0 \frac{\sqrt2}{\sqrt{2g'_i-3+n'_i}} (2g'_i-3+n'_i)! (4\pi^2)^{2g'_i-3+n'_i}\\ &=\alpha_0 \frac{\sqrt2}{\sqrt{2g_i-3+n_i}} (2g_i-3+n_i)! (4\pi^2)^{2g_i-3+n_i} \end{align*} and $$ W_r\geq \alpha_1 \frac{\sqrt2}{\sqrt{r-1}} (r-1)! (4\pi^2)^{r-1} $$ for universal constants $\alpha_0 > \alpha_1 >0$. Recall that Stirling's formula says that as $k\rightarrow\infty$, $$k!\sim \sqrt{2\pi k} (\frac{k}{e})^k.$$ So there exist two universal constants $a_0>a_1>0$ such that \begin{equation*} a_1 \sqrt{2\pi } (\frac{k}{e})^k\leq \frac{k!}{\sqrt{k}} \leq a_0 \sqrt{2\pi } (\frac{k}{e})^k. \end{equation*} \noindent Now we have \begin{eqnarray}\label{mp-eq-1} & & \frac{\sum_{\{g_i\}} V_{g_1,n_1}\cdots V_{g_q,n_q}}{W_r} \\ &\leq& \frac{\sum_{\{g_i\}} \prod_{i=1}^q 2\sqrt{\pi}a_0\alpha_0 (\frac{2g_i-3+n_i}{e})^{2g_i-3+n_i} (4\pi^2)^{2g_i-3+n_i}} {2\sqrt{\pi}a_1\alpha_1 (\frac{r-1}{e})^{r-1} (4\pi^2)^{r-1}} \nonumber\\ &=& \frac{1}{2\sqrt{\pi}a_1\alpha_1\frac{e}{4\pi^2}} (2\sqrt{\pi}a_0\alpha_0\frac{e}{4\pi^2})^q \frac{\sum_{\{g_i\}} \prod_{i=1}^q (2g_i-3+n_i)^{2g_i-3+n_i}} {(r-1)^{r-1}}.\nonumber \end{eqnarray} For each $i=1,\cdots,q$, we have $2g_i-3+n_i\geq 0$. Now assume that exactly $j$ of the $(2g_i-3+n_i)$'s are non-zero. The number of such $\{g_i\}$ (such that $\sum_{i=1}^q (2g_i-3+n_i) = r-q$) is bounded from above by \begin{equation*} \left( \begin{array}{c} q \\ j \\ \end{array} \right) \left( \begin{array}{c} r-q-1 \\ j-1 \\ \end{array} \right) \end{equation*} where $\left( \begin{array}{c} q \\ j \\ \end{array} \right) = \frac{q!}{j!(q-j)!}$ is the binomial coefficient. Recall the following elementary fact: if $\sum_{i=1}^j x_i = S$ and $x_i\geq 1$ for all $i$, then $\prod_{i=1}^j x_i^{x_i}$ reaches the maximum value when $j-1$ of the $x_i$'s are $1$.
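For completeness, we sketch a proof of this elementary fact. The function $f(t)=t\log t$ is convex with $f(1)=0$ and increasing derivative, so for any $x,y\geq 1$, \begin{equation*} f(x+y-1)-f(x)=\int_{x}^{x+y-1}f'(t)\,dt \geq \int_{1}^{y}f'(t)\,dt = f(y), \end{equation*} since the interval $[x,x+y-1]$ is the translate of $[1,y]$ by $x-1\geq 0$. Hence $x^xy^y\leq (x+y-1)^{x+y-1}$, and merging the entries pairwise in this way leaves $j-1$ of the $x_i$'s equal to $1$ and the remaining one equal to $S-j+1$.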
As a result, we have \begin{equation*} \prod_{i=1}^j x_i^{x_i} \leq (S-j+1)^{S-j+1}. \end{equation*} \noindent Thus for each such $\{g_i\}$ we have \begin{equation*} \prod_{i=1}^q (2g_i-3+n_i)^{2g_i-3+n_i} \leq 1^1\cdots 1^1\cdot (r-q-j+1)^{r-q-j+1}. \end{equation*} So we have \begin{eqnarray*} & & \frac{\sum_{\{g_i\}} \prod_{i=1}^q (2g_i-3+n_i)^{2g_i-3+n_i}} {(r-1)^{r-1}} \\ & \leq & \frac{1} {(r-1)^{r-1}} \sum_{j=0}^q \left( \begin{array}{c} q \\ j \\ \end{array} \right) \left( \begin{array}{c} r-q -1 \\ j-1 \\ \end{array} \right) (r-q-j+1)^{r-q-j+1} \\ & \leq & \sum_{j=0}^q \left( \begin{array}{c} q \\ j \\ \end{array} \right) \frac{(r-q-1)^{j-1} (r-q-j+1)^{r-q-j+1}} {(r-1)^{r-1}} \\ & \leq & \sum_{j=0}^q \left( \begin{array}{c} q \\ j \\ \end{array} \right) \frac{1} {(r-1)^{q-1}} \\ & = & \frac{2^q} {(r-1)^{q-1}}. \end{eqnarray*} Then combining \eqref{mp-eq-1} we get \begin{equation*} \frac{\sum_{\{g_i\}} V_{g_1,n_1}\cdots V_{g_q,n_q}}{W_r} \leq 2\frac{a_0\alpha_0}{a_1\alpha_1} \big(\frac{a_0\alpha_0\frac{e}{\pi^{3/2}}}{r-1}\big)^{q-1}. \end{equation*} The proof is complete. \end{proof} We close this section with the following useful property. \begin{proposition}\label{1 over gm} Given $m\geq 1$, for any $g\geq m+1$, $q\geq 1$, $n_1,\cdots,n_q\geq 1$, there exists a constant $c(m)>0$ only depending on $m$ such that \begin{equation*} \sum_{\{g_i\}} V_{g_1,n_1}\cdots V_{g_q,n_q} \leq c(m)\frac{1}{g^m} V_g \end{equation*} where the sum is taken over all $\{g_i\}_{i=1}^q \subset \mathbb{N}$ such that $2g_i-2+n_i \geq 1$ for all $i=1,\cdots,q$, and $\sum_{i=1}^q (2g_i-2+n_i) = 2g-2-m$. \end{proposition} \begin{proof} If $g$ is bounded from above, then the nonnegative integers $m,q,n_1,\cdots,n_q$ are all bounded from above, and hence the inequality is trivial. It suffices to show it for large enough $g$. First by Lemma \ref{sum vol lemma} we know that \begin{equation*} \sum_{\{g_i\}} V_{g_1,n_1}\cdots V_{g_q,n_q} \leq c \big(\frac{D}{2g-2-m}\big)^{q-1} W_{2g-2-m}.
\end{equation*} By Part $(3)$ of Lemma \ref{Mirz vol lemma 1} or Theorem \ref{MZ vol thm} we know that \[\frac{V_g}{V_{g-1}}\asymp g^2 \quad \text{and} \quad \frac{V_{g,1}}{V_g}\asymp g. \] This implies that there exists a constant $c'(m)>0$ only depending on $m$ such that \begin{equation*} W_{2g-2-m} \leq c'(m) \frac{1}{g^m} V_g. \end{equation*} Therefore, we have that for large enough $g>0$, \begin{eqnarray*} \sum_{\{g_i\}} V_{g_1,n_1}\cdots V_{g_q,n_q} &\leq& c'(m) (\frac{D}{g})^{q-1} \frac{1}{g^m} V_g\\ & \leq& c(m) \frac{1}{g^m} V_g \end{eqnarray*} for some constant $c(m)>0$ only depending on $m$. The proof is complete. \end{proof} \section{Mirzakhani's generalized McShane identity}\label{section McShane identity} In \cite{Mirz07} Mirzakhani generalized McShane's identity \cite{McS98} as follows, and then calculated the Weil-Petersson volume of moduli spaces by applying her integration formula (see Theorem \ref{Mirz int formula}). \begin{theorem}\cite[Theorem 1.3]{Mirz07} \label{McShane id} For $X\in\mathcal{M}_{g,n}(L_1,\cdots,L_n)$ with $n$ geodesic boundaries $\beta_1,\cdots,\beta_n$ of lengths $L_1,\cdots,L_n$, we have \begin{equation*} \sum_{\{\gamma_1,\gamma_2\}} \mathcal D(L_1, \ell(\gamma_1), \ell(\gamma_2)) + \sum_{i=2}^n \sum_\gamma \mathcal R(L_1,L_i,\ell(\gamma)) = L_1 \end{equation*} where the first sum is over all unordered pairs of simple closed geodesics $\{\gamma_1, \gamma_2\}$ bounding a pair of pants with $\beta_1$, and the second sum is over all simple closed geodesics $\gamma$ bounding a pair of pants with $\beta_1$ and $\beta_i$. Here $\mathcal D$ and $\mathcal R$ are given by \begin{equation*} \mathcal D(x,y,z) = 2\log\big( \frac{e^{\frac{x}{2}}+e^{\frac{y+z}{2}}}{e^{\frac{-x}{2}}+e^{\frac{y+z}{2}}} \big), \end{equation*} \begin{equation*} \mathcal R(x,y,z) = x - \log\big( \frac{\cosh(\frac{y}{2})+\cosh(\frac{x+z}{2})}{\cosh(\frac{y}{2})+\cosh(\frac{x-z}{2})} \big).
\end{equation*} \end{theorem} We will use this identity in subsection \ref{proof of third tend to 0} to control the number of certain types of closed geodesics in a surface. Here we provide the following elementary properties of $\mathcal D(x,y,z)$ and $\mathcal R(x,y,z)$. \begin{lemma} \label{estimation R,D} Assume that $x,y,z> 0$. Then the following properties hold. \begin{enumerate} \item $\mathcal R(x,y,z)\geq 0$ and $\mathcal D(x,y,z)\geq0$. \item $\mathcal R(x,y,z)$ is decreasing with respect to $z$ and increasing with respect to $y$. $\mathcal D(x,y,z)$ is decreasing with respect to $y$ and $z$ and increasing with respect to $x$. \item We have \begin{equation*} \frac{x}{\mathcal R(x,y,z)} \leq 100(1+x)(1+e^{\frac{z}{2}}e^{-\frac{x+y}{2}}), \end{equation*} and \begin{equation*} \frac{x}{\mathcal D(x,y,z)} \leq 100(1+x)(1+e^{\frac{y+z}{2}}e^{-\frac{x}{2}}). \end{equation*} Moreover, if $x+y>z$, we have \begin{equation*} \frac{x}{\mathcal R(x,y,z)} \leq 500+ 500\frac{x}{x+y-z}. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} Part $(1)$ is easy to check. Indeed, $\mathcal D$ and $\mathcal R$ as given in \cite{Mirz07} are lengths of certain segments for $x,y,z>0$.\\ For Part $(2)$, a direct computation shows that \begin{eqnarray*} && \frac{d}{dz}\big( \frac{\cosh(\frac{y}{2})+\cosh(\frac{x+z}{2})}{\cosh(\frac{y}{2})+\cosh(\frac{x-z}{2})} \big) \\ &=& \frac{\frac12 \sinh\frac{x+z}{2}(\cosh\frac{y}{2}+\cosh\frac{x-z}{2}) + \frac12 \sinh\frac{x-z}{2}(\cosh\frac{y}{2}+\cosh\frac{x+z}{2}) } {(\cosh(\frac{y}{2})+\cosh(\frac{x-z}{2}))^2} \\ &=& \frac{\sinh\frac{x}{2}\cosh\frac{z}{2}\cosh\frac{y}{2} + \frac12 \sinh x} {(\cosh(\frac{y}{2})+\cosh(\frac{x-z}{2}))^2} \\ &>& 0 \end{eqnarray*} where we have used the elementary identities \begin{eqnarray*} \sinh(a+b)=\sinh a\cosh b + \cosh a\sinh b, \\ \sinh a+\sinh b = 2\sinh\frac{a+b}{2}\cosh\frac{a-b}{2}. \end{eqnarray*} So $\mathcal R(x,y,z)$ is decreasing with respect to $z$.
The other parts of $(2)$ are obvious.\\ For Part $(3)$, first as for $\mathcal R(x,y,z)$ we have \begin{eqnarray}\label{estimation R,D--R geq} \mathcal R(x,y,z) &=& \log\big( e^x \frac{\cosh(\frac{y}{2})+\cosh(\frac{x-z}{2})}{\cosh(\frac{y}{2})+\cosh(\frac{x+z}{2})} \big) \\ &=& \log\big( e^x \frac{e^{\frac{y}{2}}+e^{\frac{-y}{2}} + e^{\frac{x-z}{2}}+e^{\frac{z-x}{2}}} {e^{\frac{y}{2}}+e^{\frac{-y}{2}} + e^{\frac{x+z}{2}}+e^{\frac{-x-z}{2}}} \big) \nonumber\\ &=& \log\big( e^x \frac{e^y e^{\frac{x+z}{2}}+e^{\frac{x+z}{2}} + e^x e^{\frac{y}{2}}+e^z e^{\frac{y}{2}}} {e^y e^{\frac{x+z}{2}}+e^{\frac{x+z}{2}} + e^{x+z} e^{\frac{y}{2}}+e^{\frac{y}{2}}} \big) \nonumber\\ &=& \log\big( 1+ \frac{(e^x -1)(e^y +1)e^{\frac{x+z}{2}} + (e^{2x}-1)e^{\frac{y}{2}}} {e^y e^{\frac{x+z}{2}}+e^{\frac{x+z}{2}} + e^{x+z} e^{\frac{y}{2}}+e^{\frac{y}{2}}} \big) \nonumber\\ &\geq& \log\big( 1+ \frac{(e^x -1)(e^y +1)e^{\frac{x+z}{2}} } {e^y e^{\frac{x+z}{2}}+e^{\frac{x+z}{2}} + 2e^{x+z} e^{\frac{y}{2}}} \big) \nonumber\\ &=& \log\big( 1+ \frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{2e^{\frac{y}{2}}}{e^y +1} } \big) \nonumber\\ &=& \log\big( 1+ \frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}} } \big). \nonumber \end{eqnarray} Then we treat the following cases separately: \textbf{Case 1: $\frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}} } \geq 1$}. Then we have $e^x\geq 2$ and by \eqref{estimation R,D--R geq} \begin{equation}\label{estimation R,D--x/R case 1} \frac{x}{\mathcal R(x,y,z)} \leq \frac{x}{\log 2} \leq 2x. \end{equation} \textbf{Case 2: $\frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}} }< 1$.} Recall that $\log(1+t)\geq \frac{t}{2}$ for $0<t\leq 1$. 
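This standard bound follows from concavity: on $[0,1]$ the concave function $t\mapsto\log(1+t)$ lies above its chord through $(0,0)$ and $(1,\log 2)$, so \begin{equation*} \log(1+t)\geq t\log 2\geq \frac{t}{2}, \qquad 0<t\leq 1. \end{equation*}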
Then by \eqref{estimation R,D--R geq} we have \begin{eqnarray}\label{estimation R,D--x/R case 2} \frac{x}{\mathcal R(x,y,z)} &\leq& \frac{x}{\frac{1}{2} \frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}}} } \\ &=& \frac{2x}{e^x-1}+ e^{\frac{z}{2}}\frac{x}{\sinh\frac{x}{2}}\frac{1}{\cosh \frac{y}{2}} \nonumber \\ &\leq& 2 + e^{\frac{z}{2}}\frac{x}{\sinh\frac{x}{2}}\frac{1}{\cosh \frac{y}{2}} \nonumber\\ &\leq& 2 + 100(1+x) e^{\frac{z}{2}}e^{-\frac{x+y}{2}}.\nonumber \end{eqnarray} So combining \eqref{estimation R,D--x/R case 1} and \eqref{estimation R,D--x/R case 2} we have \begin{equation*} \frac{x}{\mathcal R(x,y,z)} \leq 100(1+x)(1+e^{\frac{z}{2}}e^{-\frac{x+y}{2}}). \end{equation*} Now assume $x+y>z$ and consider the following subcases of Case 1. \textbf{Case 1a:} $\frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}} } \geq 1$ (which implies $e^x\geq 2$) and $1\geq e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}}$. Then by \eqref{estimation R,D--R geq} we have \begin{eqnarray}\label{estimation R,D--x/R moreover case 1.1} \frac{x}{\mathcal R(x,y,z)} &\leq& \frac{x}{\log\big( \frac{e^x+1}{2} \big) } \\ &\leq& 100.\nonumber \end{eqnarray} \textbf{Case 1b:} $\frac{e^x -1}{1 + e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}} } \geq 1$ (which implies $e^x\geq 2$) and $e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}}\geq1$. Then by \eqref{estimation R,D--R geq} we have \begin{eqnarray}\label{estimation R,D--x/R moreover case 1.2} \frac{x}{\mathcal R(x,y,z)} &\leq& \frac{x}{\log\big(1+ \frac{e^x-1}{2e^{\frac{x+z}{2}} \frac{1}{\cosh \frac{y}{2}}} \big) }\\ &\leq& \frac{x}{\log\big(1+ \frac{e^x-1}{4e^x} e^{\frac{x+y-z}{2}} \big) } \nonumber\\ &\leq& 100 \frac{x}{x+y-z} \nonumber \end{eqnarray} where in the last inequality we apply the elementary inequality $1+ae^{x}\geq e^{ax}$ for any $a\in (0,1)$ and $x>0$. 
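This elementary inequality is a direct consequence of the convexity of the exponential function: for any $a\in(0,1)$ and $x>0$, \begin{equation*} e^{ax}=e^{a\cdot x+(1-a)\cdot 0}\leq ae^{x}+(1-a)e^{0}\leq 1+ae^{x}. \end{equation*}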
So combining \eqref{estimation R,D--x/R case 2}, \eqref{estimation R,D--x/R moreover case 1.1} and \eqref{estimation R,D--x/R moreover case 1.2} we have \begin{eqnarray*} \frac{x}{\mathcal R(x,y,z)} &\leq& 100+100 \frac{x}{x+y-z} + 2 + 100(1+x) e^{\frac{z}{2}}e^{-\frac{x+y}{2}} \\ &\leq& 202+ 200\frac{x}{x+y-z}. \end{eqnarray*} As for $\mathcal D(x,y,z)$, we have \begin{equation*} \mathcal D(x,y,z) = 2\log\big( 1+\frac{2\sinh\frac{x}{2}} {e^{-\frac{x}{2}}+e^{\frac{y+z}{2}}} \big). \end{equation*} We again treat two cases. \textbf{Case 1: $\frac{2\sinh\frac{x}{2}} {e^{-\frac{x}{2}}+e^{\frac{y+z}{2}}}\leq 1$.} Then by the fact that $\log(1+t)\geq \frac{t}{2}$ for $0<t\leq 1$, we have \begin{eqnarray}\label{estimation R,D--x/D case 1} \frac{x}{\mathcal D(x,y,z)} &\leq& \frac{x}{2\cdot \frac{1}{2}\frac{2\sinh\frac{x}{2}} {e^{-\frac{x}{2}}+e^{\frac{y+z}{2}}}}\\ &=& xe^{-\frac{x}{2}}\frac{1}{2\sinh\frac{x}{2}} + xe^{\frac{y+z}{2}}\frac{1}{2\sinh\frac{x}{2}}. \nonumber \end{eqnarray} \textbf{Case 2: $\frac{2\sinh\frac{x}{2}} {e^{-\frac{x}{2}}+e^{\frac{y+z}{2}}}> 1$.} Then we have \begin{equation}\label{estimation R,D--x/D case 2} \frac{x}{\mathcal D(x,y,z)} \leq \frac{1}{2\log2}x. \end{equation} So combining \eqref{estimation R,D--x/D case 1} and \eqref{estimation R,D--x/D case 2} we have \begin{equation*} \frac{x}{\mathcal D(x,y,z)} \leq 100(1+x)(1+e^{\frac{y+z}{2}}e^{-\frac{x}{2}}). \end{equation*} (Here one may verify the inequality above by treating the cases $0<x\leq 1$ and $x>1$ separately.) The proof is complete. \end{proof} \section{Lower bound} \label{section lower bound} In this section, we will show the relatively simple part of Theorems \ref{main} and \ref{cor L1}, namely the lower bound. More precisely, we show that \begin{proposition}\label{prop lower bound} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}.
Then we have \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \ell_{\mathop{\rm sys}}^{\rm sep}(X) \geq 2\log g - 4\log \log g - \omega(g) \big) = 1 \end{equation*} and \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_1(X) \geq 2\log g - 4\log \log g - \omega(g) \big) = 1. \end{equation*} \end{proposition} \noindent Since $\ell_{\mathop{\rm sys}}^{\rm sep}(X) \geq \mathcal{L}_1(X)$, it suffices to prove the second limit. We follow the method in \cite[Section 4.3]{Mirz13} for this part. Let $L>0$ and assume $\mathcal{L}_1(X) \leq L$. Then there exists a simple closed multi-geodesic of length $\leq L$ separating $X$ into $S_{g_0,k}\cup S_{g-g_0-k+1,k}$ for some $(g_0,k)$ with $|\chi(S_{g_0,k})|\leq \frac{1}{2}|\chi(S_g)|=g-1$. That is, \begin{equation*} \sum_{(g_0,k);\ 1\leq 2g_0-2+k\leq g-1} N_{g_0,k}(X,L)\geq 1. \end{equation*} So we have \begin{eqnarray}\label{prob(L1 leq L) leq} &&\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_1(X) \leq L \big) \\ &&\leq \mathop{\rm Prob}\nolimits_{\rm WP}^g\big( \sum_{(g_0,k); \ 1\leq 2g_0-2+k\leq g-1} N_{g_0,k}(X,L)\geq 1 \big) \nonumber\\ &&\leq \sum_{(g_0,k);\ 1\leq 2g_0-2+k\leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]. \nonumber \end{eqnarray} \noindent By Mirzakhani's Integration Formula (see Theorem \ref{Mirz int formula}), we have \begin{eqnarray}\label{E[N_g0,k]} && \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] =\frac{1}{V_g} \frac{2^{-M}}{|\mathop{\rm Sym}|} \int_{\mathbb{R}^k_{\geq 0}} \mathbf 1_{[0,L]}(x_1+\cdots+x_k)\\ && \times V_{g_0,k}(x_1,\cdots,x_k) V_{g-g_0-k+1,k}(x_1,\cdots,x_k) x_1\cdots x_k dx_1\cdots dx_k \nonumber \end{eqnarray} where $|\mathop{\rm Sym}|=k!$, $M=1$ if $(g_0,k)=(1,1)$ and $M=0$ otherwise. Then we split the proof of Proposition \ref{prop lower bound} by calculating the quantity $\mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]$ for three different cases. 
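Here the second inequality in \eqref{prob(L1 leq L) leq} is the first moment (Markov) bound applied to the nonnegative random variable $\sum_{(g_0,k)} N_{g_0,k}(X,L)$: \begin{equation*} \mathop{\rm Prob}\nolimits_{\rm WP}^g\Big( \sum_{(g_0,k)} N_{g_0,k}(X,L)\geq 1 \Big) \leq \mathbb{E}_{\rm WP}^g\Big[\sum_{(g_0,k)} N_{g_0,k}(X,L)\Big] = \sum_{(g_0,k)} \mathbb{E}_{\rm WP}^g\big[N_{g_0,k}(X,L)\big], \end{equation*} where the last equality is the linearity of expectation.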
\begin{lemma}\label{E[N]} For $(g_0,k)=(1,1)$ or $(0,3)$, we have \begin{equation*} \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)] = \frac{1}{192\pi^2} L^2 e^{\frac{L}{2}} \frac{1}{g} \big(1+O(\frac{1}{g})\big) \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \end{equation*} and \begin{equation*} \mathbb{E}_{\rm WP}^g[N_{0,3}(X,L)] = \frac{1}{48\pi^2} L^2 e^{\frac{L}{2}} \frac{1}{g} \big(1+O(\frac{1}{g})\big) \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big). \end{equation*} Here the implied constants are independent of $L$ and $g$. \end{lemma} \begin{proof} First we consider the case $(g_0,k)=(1,1)$. By Theorem \ref{Mirz vol lemma 0}, Lemmas \ref{Mirz vol lemma 1} and \ref{MP vol lemma}, and Equation \eqref{E[N_g0,k]}, we have \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]&=&\frac{1}{V_g} \frac{1}{2} \int_0^L V_{1,1}(x)V_{g-1,1}(x)xdx \\ &=& \frac{1}{2V_g}\int_0^L\frac{1}{24}(x^2+4\pi^2)x\frac{\sinh(x/2)}{x/2} dx \times V_{g-1,1} \big(1+O(\frac{L^2}{g})\big) \\ &=& \frac{1}{24} L^2 e^{\frac{L}{2}} \frac{V_{g-1,1}}{V_g} \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \\ &=& \frac{1}{192\pi^2} L^2 e^{\frac{L}{2}} \frac{1}{g} \big(1+O(\frac{1}{g})\big) \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big). \end{eqnarray*} Similarly for $(g_0,k)=(0,3)$, we have \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[N_{0,3}(X,L)] &=& \frac{1}{V_g} \frac{1}{3!} \int_{0\leq x+y+z \leq L} \frac{\sinh(x/2)}{x/2}\frac{\sinh(y/2)}{y/2}\frac{\sinh(z/2)}{z/2} \\ & & xyz dxdydz \times V_{g-2,3} \big(1+O(\frac{L^2}{g})\big) \\ &=& \frac{1}{6} L^2 e^{\frac{L}{2}} \frac{V_{g-2,3}}{V_g} \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \\ &=& \frac{1}{48\pi^2} L^2 e^{\frac{L}{2}} \frac{1}{g} \big(1+O(\frac{1}{g})\big) \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big). \end{eqnarray*} The proof is complete.
\end{proof} \begin{rem*}\label{remark:comes from} The dominating term $L^2e^\frac{L}{2}\frac{1}{g}$ in both expressions in Lemma \ref{E[N]} is where the upper and lower bounds $2\log g-4\log \log g\pm\omega(g)$ in Theorems \ref{main} and \ref{cor L1} come from. In fact, a function $L(g)$ in the variable $g\in\{2,3,\cdots\}$ has the form $2\log g-4\log\log g+\omega(g)$, with $\omega(g)$ satisfying the assumption of Theorem \ref{main}, if and only if $$ \lim_{g\to\infty}L(g)^2e^\frac{L(g)}{2}\frac{1}{g}=+\infty,\quad L(g)^2e^\frac{L(g)}{2}\frac{1}{g}=O\big((\log g)^\epsilon\big) $$ for any $\epsilon>0$. Similarly, $L(g)$ has the form $2\log g-4\log\log g-\omega(g)$ if and only if $\lim_{g\to\infty}L(g)^2e^\frac{L(g)}{2}\frac{1}{g}=0$ and, for any $C,\epsilon>0$, $L(g)^2e^\frac{L(g)}{2}\frac{1}{g}\geq C(\log g)^{-\epsilon}$ when $g$ is large enough. \end{rem*} \begin{lemma}\label{sum chi=m E[N]} For any given positive integer $m$, there exists a constant $c(m)>0$ independent of $L$ and $g$ such that \begin{equation*} \sum_{|\chi(S_{g_0,k})| = m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] \leq c(m) (1+L^{3m-1})e^{\frac{L}{2}}\frac{1}{g^m} \end{equation*} where the summation is taken over all possibilities on $(g_0,k)$'s satisfying $g_0\geq 0$, $k\geq 1$, and $$|\chi(S_{g_0,k})| = m.$$ \end{lemma} \begin{proof} Assume $2g_0-2+k=|\chi(S_{g_0,k})| = m$. Then both $g_0$ and $k$ are bounded from above by $m+2$. By Theorem \ref{Mirz vol lemma 0} of Mirzakhani we know that $V_{g_0,k}(x_1,\cdots,x_k)$ is a polynomial of degree $6g_0-6+2k$ with coefficients bounded by some constant only depending on $m$. So when $0\leq x_1+\cdots+x_k\leq L$ we have \begin{equation*} V_{g_0,k}(x_1,\cdots,x_k) \leq c'(m) (1+L^{6g_0-6+2k}) \end{equation*} for some constant $c'(m)>0$ only depending on $m$.
Then by Lemma \ref{Mirz vol lemma 1}, \ref{MP vol lemma} and Equation \eqref{E[N_g0,k]} we have \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] &\leq& \frac{1}{V_g} \int_{0\leq x_1+\cdots+x_k \leq L} c'(m) (1+L^{6g_0-6+2k}) \\ & & \frac{\sinh(x_1/2)}{x_1/2}\cdots\frac{\sinh(x_k/2)}{x_k/2} x_1\cdots x_k dx_1\cdots dx_k V_{g-g_0-k+1,k} \\ &\leq& c'(m) (1+L^{6g_0-7+3k}) e^{\frac{L}{2}} \frac{V_{g-g_0-k+1,k}}{V_g} \\ &\leq& c'(m) (1+L^{3m-1}) e^{\frac{L}{2}} \frac{1}{g^m} . \end{eqnarray*} Since $g_0,k$ are bounded above by $m+2$, there exists a constant $c(m)>0$ only depending on $m$ such that \begin{equation*} \sum_{|\chi(S_{g_0,k})| = m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] \leq c(m) (1+L^{3m-1})e^{\frac{L}{2}}\frac{1}{g^m} \end{equation*} as desired. \end{proof} \begin{lemma}\label{sum chi geq m E[N]} For any given positive integer $m$, there exists a constant $c(m)>0$ independent of $L$ and $g$ such that \begin{equation*} \sum_{m\leq|\chi(S_{g_0,k})| \leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] \leq c(m) e^{2L}\frac{1}{g^m} \end{equation*} where the summation is taken over all possibilities on $(g_0,k)$'s satisfying $g_0\geq 0$, $k\geq 1$, and $$m\leq |\chi(S_{g_0,k})| \leq g-1.$$ \end{lemma} \begin{proof} First by Part $(1)$ of Lemma \ref{Mirz vol lemma 1} we know that \begin{equation*} V_{g_0,k}(x_1,\cdots,x_k)\leq e^{\frac{x_1+\cdots+x_k}{2}}V_{g_0,k} \end{equation*} and \begin{equation*} V_{g-g_0-k+1,k}(x_1,\cdots,x_k)\leq e^{\frac{x_1+\cdots+x_k}{2}}V_{g-g_0-k+1,k}. \end{equation*} \noindent Then by \eqref{E[N_g0,k]} we have \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] &\leq& \frac{1}{V_g} \frac{1}{k!} \int_{0\leq \sum x_i \leq L} e^{x_1+\cdots+x_k} x_1\cdots x_k dx_1\cdots dx_k V_{g_0,k} V_{g-g_0-k+1,k} \\ &\leq& \frac{1}{k!} \frac{V_{g_0,k} V_{g-g_0-k+1,k}}{V_g} e^L \int_{0\leq \sum x_i \leq L} x_1\cdots x_k dx_1\cdots dx_k \\ &=& \frac{1}{k!} \frac{L^{2k}}{(2k)!} e^L \frac{V_{g_0,k} V_{g-g_0-k+1,k}}{V_g}. 
\end{eqnarray*} \noindent Recall that Part (2) of Lemma \ref{Mirz vol lemma 1} says that for any $g,n\geq 0$ \[V_{g-1,n+4}\leq V_{g,n+2}.\] So we have $$V_{g_0,k} \leq V_{g_0+\frac{k-k'}{2},k'} \quad \text{and} \quad V_{g-g_0-k+1,k} \leq V_{g-g_0-k+1+\frac{k-k'}{2},k'}$$ where $k' \in \{1,2,3\}$ with even $k-k'\geq0$. For any fixed integer $k>0$, we consider the summation over $g_0$ with $m\leq|\chi(S_{g_0,k})| \leq g-1$. By Lemma \ref{Mirz vol lemma 2} we have \begin{equation*} \sum_{g_0;m\leq |\chi(S_{g_0,k})| \leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] \leq c(m) \frac{1}{k!} \frac{L^{2k}}{(2k)!} e^L \frac{1}{g^{m}} \end{equation*} for some constant $c(m)>0$ only depending on $m$. Then the total summation satisfies that \begin{eqnarray*} \sum_{m\leq |\chi(S_{g_0,k})| \leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] &\leq& \sum_{k\geq 1} c(m) \frac{1}{k!} \frac{L^{2k}}{(2k)!} e^L \frac{1}{g^{m}} \\ &\leq& c(m) e^{2L} \frac{1}{g^{m}} \end{eqnarray*} because $\sum_{k\geq 1} \frac{1}{k!} \frac{L^{2k}}{(2k)!} \leq e^L.$ The proof is complete. \end{proof} Now we are ready to prove Proposition \ref{prop lower bound}. \begin{proof} [Proof of Proposition \ref{prop lower bound}] First by Equation \eqref{prob(L1 leq L) leq} and Lemmas \ref{E[N]}, \ref{sum chi=m E[N]} and \ref{sum chi geq m E[N]}, we have that for large $g>0$, \begin{eqnarray*} &&\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_1(X) \leq L \big) \leq \sum_{(g_0,k);1\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] \\ &&=\left(\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]+\mathbb{E}_{\rm WP}^g[N_{0,3}(X,L)]\right)\\ &&+ \sum_{m=2}^{10} \sum_{|\chi(S_{g_0,k})| = m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]+\sum_{11\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]\\ &&\leq c L^2 e^{\frac{L}{2}}\frac{1}{g} + \sum_{m=2}^{10} cL^{3m-1}e^{\frac{L}{2}}\frac{1}{g^m} + ce^{2L}\frac{1}{g^{11}} \end{eqnarray*} for some uniform constant $c>0$.
Now for $$L= 2\log g - 4\log \log g - \omega(g),$$ we have \[L^2 e^{\frac{L}{2}}\frac{1}{g}=O(e^{-\frac{\omega(g)}{2}}),\] \[\sum_{m=2}^{10} L^{3m-1}e^{\frac{L}{2}}\frac{1}{g^m}=O(\frac{(\log g)^{29}}{g}),\] and \[ \frac{e^{2L}}{g^{11}}=O(\frac{1}{g^7}).\] Recall that $\omega(g)\to \infty$ as $g\to \infty$. Hence we get \begin{eqnarray*} \lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_1(X) \leq 2\log g - 4\log \log g - \omega(g) \big)=0 \end{eqnarray*} which implies that \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_1(X) \geq 2\log g - 4\log \log g - \omega(g) \big)=1\] as desired. \end{proof} Actually the argument above also leads to Proposition \ref{lower bound for chi geq 2}, which will be applied later. First we recall the following definition generalizing $\mathcal{L}_1$ in the Introduction. For any integer $m\in [1,g-1]$ and $X\in \mathcal{M}_g$, \[\mathcal{L}_{1,m}(X):=\min_{\Gamma} \ell_{\Gamma}(X)\] where the minimum runs over all simple closed multi-geodesics $\Gamma$ separating $X$ into $S_{g_1,k}\cup S_{g_2,k}$ with \[|\chi(S_{g_1,k})|\geq |\chi(S_{g_2,k})|\geq m.\] Now we are ready to prove Proposition \ref{lower bound for chi geq 2}. \begin{proposition}[=Proposition \ref{lower bound for chi geq 2}] \label{lower bound for chi geq 2-1} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Then we have that for any fixed $m\geq 1$ independent of $g$, \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in\mathcal{M}_g ;\ \mathcal{L}_{1,m}(X) \geq 2m\log g - (6m-2)\log\log g -\omega(g)\right) = 1. \end{equation*} \end{proposition} \begin{proof} The proof is almost the same as the proof of Proposition \ref{prop lower bound}. 
First we have that for large $g>0$, \begin{eqnarray*} &&\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_{1,m}(X) \leq L \big) \leq \sum_{(g_0,k);m\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)] \\ &&=\sum_{|\chi(S_{g_0,k})| = m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]+ \sum_{m+1\leq |\chi(S_{g_0,k})|\leq 10m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]\\ &&+\sum_{10m+1\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]. \end{eqnarray*} \noindent Now for $$L= 2m\log g - (6m-2)\log \log g - \omega(g),$$ by Lemmas \ref{E[N]}, \ref{sum chi=m E[N]} and \ref{sum chi geq m E[N]} we have \[\sum_{|\chi(S_{g_0,k})| = m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]=O(L^{3m-1} e^{\frac{L}{2}}\frac{1}{g^m})=O(e^{-\frac{\omega(g)}{2}}),\] \[\sum_{m+1\leq |\chi(S_{g_0,k})|\leq 10m} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]=O(\sum_{j=m+1}^{10m} L^{3j-1}e^{\frac{L}{2}}\frac{1}{g^j})=O(\frac{(\log g)^{30m-1}}{g}),\] and \[\sum_{10m+1\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[N_{g_0,k}(X,L)]= O(\frac{e^{2L}}{g^{10m+1}})=O(\frac{1}{g^{6m+1}}).\] Recall that $\omega(g)\to \infty$ as $g\to \infty$. Hence we get \begin{eqnarray*} \lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_{1,m}(X) \leq 2m\log g - (6m-2)\log \log g - \omega(g) \big)=0 \end{eqnarray*} which implies that \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \mathcal{L}_{1,m}(X) \geq 2m\log g - (6m-2)\log \log g - \omega(g) \big)=1\] as desired. \end{proof} \begin{rem*} The $m=1$ case of Proposition \ref{lower bound for chi geq 2} is exactly Proposition \ref{prop lower bound}. \end{rem*} \section{Upper bound}\label{section upper bound} In this section, we will show the upper bound in Theorems \ref{main} and \ref{cor L1}. We begin with the following definition. \begin{def*} Assume $\omega(g)$ is a function satisfying \eqref{eq-omega}.
For any $X\in \mathcal{M}_g$, we say $X\in \mathcal A(\omega(g))$ if there exists a simple closed geodesic $\gamma$ on $X$ such that \begin{enumerate} \item $\gamma$ separates $X$ into $S_{1,1}\cup S_{g-1,1}$; \item the length $\ell_{\gamma}(X) \leq 2\log g- 4\log \log g +\omega(g)$. \end{enumerate} \end{def*} Now we are ready to state the upper bound of Theorem \ref{main}, which is also the essential part of this paper. \begin{theorem}\label{prop upper bound} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Then we have \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in \mathcal{M}_g; \ X\in\mathcal A(\omega(g)) \big)= 1. \end{equation*} \end{theorem} \subsection{Proofs of Theorems \ref{main} and \ref{cor L1}} Before proving Theorem \ref{prop upper bound}, we finish the proofs of Theorems \ref{main} and \ref{cor L1}. \begin{theorem}[=Theorem \ref{main}]\label{main-1} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Consider the following two conditions defined for all $X\in\mathcal{M}_g$: \begin{itemize} \item[(a).] \label{item_main1} $|\ell_{\mathop{\rm sys}}^{\rm sep}(X)-(2\log g - 4\log \log g)| \leq \omega(g)$; \item[(b).] \label{item_main2} $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is achieved by a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$. \end{itemize} Then we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \textit{$X$ satisfies $(a)$ and $(b)$} \right)=1. $$ \end{theorem} \begin{proof} Taking $m=2$ in Proposition \ref{lower bound for chi geq 2}, we get \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ;\ \mathcal{L}_{1,2}(X)> 3.9\log g\big)=1.
\end{equation*} Set \[\mathcal A'(\omega(g)):=\{X\in \mathcal{M}_g; \ \ell_{\mathop{\rm sys}}^{\rm sep}(X) \geq 2\log g - 4\log \log g - \omega(g) \}\] and \[\mathcal A''(g):=\{X\in \mathcal{M}_g; \ \mathcal{L}_{1,2}(X)> 3.9\log g \}.\] Then for any $X\in \mathcal A(\omega(g))\cap \mathcal A''(g)$ and large enough $g>0$, the quantity $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is realized by a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$. For any $X\in \mathcal A(\omega(g))\cap \mathcal A'(\omega(g))$, we have \[|\ell_{\mathop{\rm sys}}^{\rm sep}(X)-(2\log g -4\log \log g)|\leq \omega(g).\] Thus, it follows by Proposition \ref{prop lower bound}, Proposition \ref{lower bound for chi geq 2} for $m=2$ and Theorem \ref{prop upper bound} that as $g\to \infty$, $\mathop{\rm Prob}\nolimits_{\rm WP}^g\left(\mathcal A(\omega(g))\right)$, $\mathop{\rm Prob}\nolimits_{\rm WP}^g\left(\mathcal A'(\omega(g))\right)$ and $\mathop{\rm Prob}\nolimits_{\rm WP}^g\left(\mathcal A''(g)\right)$ all tend to $1$. Therefore, we have \[\lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \ X\in \mathcal A(\omega(g))\cap \mathcal A'(\omega(g))\cap \mathcal A''(g) \big)=1.\] The proof is complete. \end{proof} \begin{theorem}[=Theorem \ref{cor L1}]\label{cor L1-1} Let $\omega(g)$ be a function satisfying \eqref{eq-omega}. Consider the following two conditions defined for all $X\in\mathcal{M}_g$: \begin{itemize} \item[(e).] $|\mathcal{L}_1(X)-(2\log g - 4\log \log g)| \leq \omega(g)$; \item[(f).] $\mathcal{L}_1(X)$ is achieved by either a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$ or three simple closed geodesics separating $X$ into $S_{0,3}\cup S_{g-2,3}$. \end{itemize} Then we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \textit{$X$ satisfies $(e)$ and $(f)$} \right)=1. $$ \end{theorem} \begin{proof} The proof is similar to the proof of Theorem \ref{main}.
Taking $m=2$ in Proposition \ref{lower bound for chi geq 2}, we get \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ;\ \mathcal{L}_{1,2}(X)> 3.9\log g\big)=1. \end{equation*} Set \[\mathcal A'(\omega(g)):=\{X\in \mathcal{M}_g; \ \mathcal{L}_1(X) \geq 2\log g - 4\log \log g - \omega(g) \}\] and \[\mathcal A''(g):=\{X\in \mathcal{M}_g; \ \mathcal{L}_{1,2}(X)> 3.9\log g \}.\] Then for any $X\in \mathcal A(\omega(g))\cap \mathcal A''(g)$ and large enough $g>0$, the quantity $\mathcal L_1(X)$ is realized by either a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$ or three simple closed geodesics separating $X$ into $S_{0,3}\cup S_{g-2,3}$. For any $X\in \mathcal A(\omega(g))\cap \mathcal A'(\omega(g))$, we have \[|\mathcal L_1(X)-(2\log g -4\log \log g)|\leq \omega(g).\] Thus, it follows by Proposition \ref{prop lower bound}, Proposition \ref{lower bound for chi geq 2} for $m=2$ and Theorem \ref{prop upper bound} that \[\lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \ X\in \mathcal A(\omega(g))\cap \mathcal A'(\omega(g))\cap \mathcal A''(g) \big)=1.\] The proof is complete. \end{proof} \begin{rem*} It is interesting to study whether $\mathcal{L}_1(X)$ is realized just by a simple closed geodesic separating $X$ into $S_{1,1}\cup S_{g-1,1}$ at a generic point $X\in \mathcal{M}_g$. Or does the following limit hold: \[\lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ; \ \mathcal{L}_1(X)=\ell_{\mathop{\rm sys}}^{\rm sep}(X) \big)=1?\] \end{rem*} Set \begin{equation} L=L(g)=2\log g -4\log \log g +\omega(g) \end{equation} where $\omega(g)$ is given as above in \eqref{eq-omega}. In the following arguments we always assume that $g$ is large enough.
So $L$ is also large enough.\\ In order to prove Theorem \ref{prop upper bound}, it suffices to show that \begin{equation} \label{N(1,1)=0} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g ;\ N_{1,1}(X,L)=0 \big)= 0. \end{equation} For each $X\in\mathcal{M}_g$, we denote by $\mathcal{N}_{1,1}(X,L)$ the set of simple closed geodesics on $X$ which separate $X$ into $S_{1,1}\cup S_{g-1,1}$ and have length $\leq L$. Then $$N_{1,1}(X,L) = \# \mathcal{N}_{1,1}(X,L).$$ Instead of $\mathcal{N}_{1,1}(X,L)$, we consider the subset $\mathcal{N}^*_{1,1}(X,L)$ which is defined as follows. \begin{def*}\label{N*1,1} \begin{equation*} \mathcal{N}^*_{1,1}(X,L):= \left\{ \alpha\in\mathcal{N}_{1,1}(X,L)\ ; \ \parbox[l]{6.5cm}{$\forall \alpha\neq\gamma\in\mathcal{N}_{1,1}(X,L)$, either $\alpha \cap \gamma =\emptyset$ or $X_{\alpha\gamma}$ is of type $S_{1,2}$}\right\} \end{equation*} and \begin{equation*} N^*_{1,1}(X,L) := \#\mathcal{N}^*_{1,1}(X,L) \end{equation*} where $X_{\alpha\gamma}$ is defined in Section \ref{section union}. \end{def*} \noindent Since $N^*_{1,1}(X,L)\leq N_{1,1}(X,L)$, we clearly have that \begin{equation*} \mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in \mathcal{M}_g;\ N_{1,1}(X,L)=0 \big) \leq \mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in \mathcal{M}_g;\ N^*_{1,1}(X,L)=0 \big). \end{equation*} We will show the following limit, which implies \eqref{N(1,1)=0}: \begin{equation}\label{N*1,1=0} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in \mathcal{M}_g;\ N^*_{1,1}(X,L)=0 \big)= 0. \end{equation} \begin{rem*} The purpose of studying $N^*_{1,1}(X,L)$ instead of $N_{1,1}(X,L)$ is to simplify certain estimates. Actually the following method also works for $N_{1,1}(X,L)$ after adding more detailed discussion.
\end{rem*} \subsection{Bounding probability by expectation} For any nonnegative integer-valued random variable $N$, by the Cauchy-Schwarz inequality we have $$ \mathbb E[N]^2=\mathbb E\big[N\cdot\mathbf{1}_{\{N>0\}}\big]^2\leq \mathbb E[N^2]\cdot\mathbb E \big[\mathbf{1}_{\{N>0\}}^2\big]=\mathbb E[N^2] \cdot\mathbb{P}(N>0). $$ So we have $$\mathbb P(N>0)\geq \frac{\mathbb E[N]^2}{\mathbb E[N^2]}.$$ Then since the variance $\mathop{\rm Var} [N] = \mathbb E[N^2] - \mathbb E[N]^2$ is nonnegative, we have $$ \mathbb{P}(N=0)\leq \frac{\mathbb E[N^2]-\mathbb E[N]^2}{\mathbb E[N^2]} \leq \frac{\mathbb E[N^2] -\mathbb E[N]^2}{\mathbb E[N]^2}. $$ Applying this to $N^*_{1,1}(X,L)$, we get \begin{equation}\label{prob(N*=0) leq} \mathop{\rm Prob}\nolimits_{\rm WP}^g\big(N^*_{1,1}(X,L)=0 \big) \leq \frac{\mathbb{E}_{\rm WP}^g[(N^*_{1,1}(X,L))^2] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}. \end{equation} In order to control the RHS above, the most essential part is to study $(N^*_{1,1}(X,L))^2$. We decompose it into three different parts as follows. \begin{def*} \begin{equation*} \mathcal Y^*(X,L):= \left\{ (\alpha,\beta)\in\mathcal{N}^*_{1,1}(X,L)\times\mathcal{N}^*_{1,1}(X,L)\ ;\ \alpha\neq \beta, \alpha\cap\beta=\emptyset \right\}, \end{equation*} \begin{equation*} \mathcal Z^*(X,L):= \left\{ (\alpha,\beta)\in\mathcal{N}^*_{1,1}(X,L)\times\mathcal{N}^*_{1,1}(X,L)\ ;\ \alpha\neq \beta, \alpha\cap\beta\neq\emptyset \right\}. \end{equation*} Denote \begin{equation*} Y^*(X,L) := \#\mathcal Y^*(X,L), \end{equation*} \begin{equation*} Z^*(X,L) := \#\mathcal Z^*(X,L).
\end{equation*} Then we have $$N^*_{1,1}(X,L)^2=N^*_{1,1}(X,L)+ Y^*(X,L) +Z^*(X,L).$$ \end{def*} \noindent Inserting this decomposition into the RHS of \eqref{prob(N*=0) leq}, we get \begin{eqnarray}\label{prob(N*=0) leq 3 parts} &&\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(N^*_{1,1}(X,L)=0 \big)\leq \frac{1}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]}\\ && + \frac{\mathbb{E}_{\rm WP}^g[Y^*(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}+ \frac{\mathbb{E}_{\rm WP}^g[Z^*(X,L)]}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}.\nonumber \end{eqnarray} In the following subsections we will show that each of the three terms on the RHS of \eqref{prob(N*=0) leq 3 parts} above goes to $0$ as $g\to \infty$ for $L=L(g)=2\log g-4\log\log g+\omega(g)$, which in particular implies Theorem \ref{prop upper bound}. More precisely, \begin{equation}\label{tend to 0 (1)}\tag{A} \lim_{g\rightarrow\infty}\frac{1}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]} = 0, \end{equation} \begin{equation}\label{tend to 0 (2)}\tag{B} \lim_{g\rightarrow\infty}\frac{\mathbb{E}_{\rm WP}^g[Y^*(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2} = 0 \end{equation} and \begin{equation}\label{tend to 0 (3)}\tag{C} \lim_{g\rightarrow\infty}\frac{\mathbb{E}_{\rm WP}^g[Z^*(X,L)]}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2} = 0. \end{equation} \begin{rem*} The proofs of \eqref{tend to 0 (1)} and \eqref{tend to 0 (2)} are similar: we use $\mathbb{E}_{\rm WP}^g[N_{1,1}]$ (resp. $\mathbb{E}_{\rm WP}^g[Y]$) to approximate $\mathbb{E}_{\rm WP}^g[N^*_{1,1}]$ (resp. $\mathbb{E}_{\rm WP}^g[Y^*]$), where $Y(X,L)$ will be defined later. For the proof of \eqref{tend to 0 (3)}, we will control the number of certain types of simple closed geodesics by using Mirzakhani's generalized McShane identity. \end{rem*} We will prove \eqref{tend to 0 (1)}, \eqref{tend to 0 (2)} and \eqref{tend to 0 (3)} in the following subsections.
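The second-moment bound $\mathbb{P}(N=0)\leq (\mathbb E[N^2]-\mathbb E[N]^2)/\mathbb E[N]^2$ and the pointwise decomposition $N^2=N+\#\{\text{ordered distinct pairs}\}$ used above are standard; as a sanity check (illustrative only, not part of the proof), both can be verified numerically on a toy model where $N$ is a sum of independent Bernoulli indicators, a hypothetical stand-in for the geodesic count $N^*_{1,1}(X,L)$:

```python
import random

random.seed(0)

# Toy model (illustrative only): N counts successes among independent
# Bernoulli indicators; all parameters below are arbitrary choices.
n_items, p, trials = 20, 0.1, 100000

EN = EN2 = P0 = 0.0
for _ in range(trials):
    N = sum(1 for _ in range(n_items) if random.random() < p)
    # Diagonal/off-diagonal decomposition: N^2 = N + #{ordered pairs (a,b), a != b}.
    assert N * N == N + N * (N - 1)
    EN += N
    EN2 += N * N
    P0 += (N == 0)
EN, EN2, P0 = EN / trials, EN2 / trials, P0 / trials

# Second-moment method: P(N = 0) <= (E[N^2] - E[N]^2) / E[N]^2.
assert P0 <= (EN2 - EN ** 2) / EN ** 2
```

Here $\mathbb E[N]=2$ and $\mathbb P(N=0)=0.9^{20}\approx 0.12$, well below the second-moment bound $0.45$; in the text the same mechanism is applied with the off-diagonal pairs further split into the disjoint part $Y^*$ and the intersecting part $Z^*$.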
\subsection{Proof of \eqref{tend to 0 (1)}} Recall that $L=L(g)=2\log g -4\log \log g +\omega(g)$ goes to $\infty$ as $g\to \infty$. By Lemma \ref{E[N]} we have $$\lim \limits_{g\to \infty}\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]=\infty.$$ We will show that $\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]$ is close to $\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]$ for large $g>0$. More precisely, \begin{proposition}\label{N-N*} With the notation as above, we have \begin{equation*} \lim_{g\rightarrow\infty} \big(\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)] \big)=0. \end{equation*} In particular, Equation \eqref{tend to 0 (1)} holds. \end{proposition} We split the proof into several parts. We always assume that $g>0$ is large enough. By definition, $N_{1,1}(X,L) - N^*_{1,1}(X,L) \geq0$. Assume that $$\gamma\in\mathcal{N}_{1,1}(X,L) \setminus \mathcal{N}^*_{1,1}(X,L).$$ By definition of $N^*_{1,1}(X,L)$ and Lemma \ref{area U small} we know that there exists a simple closed geodesic $\alpha\in \mathcal{N}_{1,1}(X,L)$ with $\alpha\neq\gamma$ such that $$\gamma\cap\alpha \neq \emptyset \quad \text{and} \quad |\chi(X_{\gamma\alpha})| \geq 3.$$ Assume that $X_{\gamma\alpha}$ is of type $S_{g_0,k}$. Then $\partial X_{\gamma\alpha}$ is a simple closed multi-geodesic that splits off an $S_{g_0,k}$ from $X$. By Lemma \ref{area U small} we know that $$g_0\geq 1 \quad \text{and} \quad 3\leq 2g_0-2+k\leq g-1.$$ Moreover, we have \begin{equation*} \ell(\partial X_{\gamma\alpha}) \leq \ell(\alpha) +\ell(\gamma) \leq 2L.
\end{equation*} Note that by Lemma \ref{alpha sbs U} we have $$X_\gamma\subset X_{\gamma\alpha}.$$ Now we define a counting function as follows: \begin{def*} Define the counting function $\hat{N}_{g_0,n_0}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L_1,L_2)$ to be the number of pairs $(\gamma_1,\gamma_2)$ satisfying \begin{itemize} \item $\gamma_2$ is a simple closed multi-geodesic in $X$ consisting of $n_0$ geodesics that split off an $S_{g_0,n_0}$ from $X$, and its complement $X\setminus S_{g_0,n_0}$ consists of $q$ components $S_{g_1,n_1},\cdots,S_{g_q,n_q}$ for some $q\geq 1$; \item $\gamma_1$ is a simple closed geodesic in that $S_{g_0,n_0}$ and splits off a one-handle from that $S_{g_0,n_0}$; \item $\ell(\gamma_1)\leq L_1$ and $\ell(\gamma_2)\leq L_2$. \end{itemize} (see Figure \ref{figure:def hat N}). \end{def*} \begin{figure}[h] \centering \includegraphics[width=8.2cm]{counting.pdf} \caption{The counting function $\hat{N}_{g_0,n_0}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L_1,L_2)$.} \label{figure:def hat N} \end{figure} Since the map \begin{equation*} \gamma \mapsto (\gamma,\partial X_{\gamma\alpha}) \end{equation*} is injective and $\gamma\cap \partial X_{\gamma\alpha} = \emptyset$, we have \begin{equation}\label{N-N* leq sum N Gamma} N_{1,1}(X,L) - N^*_{1,1}(X,L) \leq \sum \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) \end{equation} where the summation is taken over all possible $(g_0,k)$, $q\geq 1$, and $(g_1,n_1),\cdots,(g_q,n_q)$ such that \begin{itemize} \item $g_0\geq 1$, $3\leq 2g_0-2+k \leq g-1$; \item $n_i\geq 1$, $2g_i-2+n_i \geq 1$, $\forall 1\leq i\leq q$; \item $n_1+\cdots+n_q = k$, $g_0+g_1+\cdots+g_q + k-q =g$.
\end{itemize} For such a counting function, by Mirzakhani's Integration Formula (see Theorem \ref{Mirz int formula}), we have \begin{eqnarray*} & & \int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) dX \\ = & & \frac{2^{-M}}{|\mathop{\rm Sym}|} \int_{\mathbb{R}_{\geq0}^{k+1}} \mathbf 1_{[0,L]}(y)\mathbf 1_{[0,2L]}\big(\sum_{i=1}^q(x_{i,1} + \cdots +x_{i,n_i})\big) \\ & & V_{1,1}(y) V_{g_0-1,k+1}(y,x_{1,1},\cdots,x_{q,n_q}) \\ & & V_{g_1,n_1}(x_{1,1},\cdots,x_{1,n_1})\cdots V_{g_q,n_q}(x_{q,1},\cdots,x_{q,n_q}) \\ & & y x_{1,1}\cdots x_{q,n_q} dy dx_{1,1}\cdots dx_{q,n_q}. \end{eqnarray*} From Theorem \ref{Mirz vol lemma 0} of Mirzakhani we know that \begin{equation*} V_{1,1}(y) = \frac{1}{24}(y^2 + 4\pi^2). \end{equation*} It is clear that $2^{-M} \leq 1$ and the symmetry factor is \begin{equation*} |\mathop{\rm Sym}| = n_1!\cdots n_q!. \end{equation*} By Lemma \ref{MP vol lemma}, we have \begin{equation*} V_{g,n}(x_1,\cdots,x_n) \leq \prod_{i=1}^n \frac{\sinh (x_i/2)}{x_i/2} V_{g,n}, \end{equation*} and we also have that for $x>0$, \begin{equation*} \frac{\sinh (x/2)}{x/2} \leq \frac{e^{x/2}}{x}. \end{equation*} Set the condition \begin{equation*} Cond:=\{ 0\leq y\leq L, \ 0\leq x_{i,j}, \ \sum_{i=1}^q\sum_{j=1}^{n_i} x_{i,j} \leq 2L \}. \end{equation*} \noindent Putting all these estimates together, we get \begin{eqnarray}\label{E[N Gamma] leq} &&\int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) dX \leq \frac{1}{n_1!\cdots n_q!} V_{g_1,n_1}\cdots V_{g_q,n_q} \\ & & \times\int_{Cond} \big( \frac{1}{24}(y^2 + 4\pi^2)y e^{(x_{1,1}+\cdots+x_{q,n_q})/2} V_{g_0-1,k+1}(y,x_{1,1},\cdots,x_{q,n_q})\big) \nonumber\\ & & dy dx_{1,1}\cdots dx_{q,n_q} . \nonumber \end{eqnarray} Next we control the summation $\sum \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)]$ for two different cases, and then combine them to obtain Proposition \ref{N-N*}.
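As an aside, the elementary inequality $\frac{\sinh(x/2)}{x/2}\leq \frac{e^{x/2}}{x}$ used above follows from $\sinh(x/2)=\frac{e^{x/2}-e^{-x/2}}{2}\leq \frac{1}{2}e^{x/2}$; in fact the ratio of the two sides equals $1-e^{-x}$, so the bound is sharp as $x\to\infty$. A short numerical check (illustrative only, not part of the argument):

```python
import math

# Check sinh(x/2)/(x/2) <= e^{x/2}/x for x > 0, the bound used to
# control the volume polynomial V_{g,n}(x_1,...,x_n) by V_{g,n}.
for k in range(1, 2001):
    x = k / 100.0  # sample x in (0, 20]
    lhs = math.sinh(x / 2) / (x / 2)
    rhs = math.exp(x / 2) / x
    assert lhs <= rhs

# The ratio lhs/rhs equals 1 - e^{-x} exactly, confirming sharpness:
x = 20.0
ratio = (math.sinh(x / 2) / (x / 2)) / (math.exp(x / 2) / x)
assert abs(ratio - (1 - math.exp(-x))) < 1e-12
```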
\begin{lemma}\label{sum E[N Gamma] for chi=m} Let $m\geq 2$ be an integer independent of $g$. Then there exists a constant $c(m)>0$ only depending on $m$ such that \begin{equation*} \sum_{|\chi(S_{g_0,k})|=m} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)] \leq c(m) (1+L^{3m-1}) e^L \frac{1}{g^m} \end{equation*} where the summation is taken over all possible $(g_0,k)$, $q\geq 1$ and $(g_1,n_1),\cdots,(g_q,n_q)$ such that $g_0\geq 1$ and $2g_0-2+k=m$. \end{lemma} \begin{proof} Since $|\chi(S_{g_0,k})|=m$, the nonnegative integers $g_0,k,q,n_1,\cdots,n_q$ are all bounded from above by a constant only depending on $m$. By Theorem \ref{Mirz vol lemma 0} of Mirzakhani, we know that $V_{g_0-1,k+1}(y,x_{1,1},\cdots,x_{q,n_q})$ is a polynomial of degree $6g_0-10+2k$. Thus there exists a constant $c_1(m)>0$ only depending on $m$ such that \begin{equation} \label{V-upper-1-1} V_{g_0-1,k+1}(y,x_{1,1},\cdots,x_{q,n_q})\leq c_1(m)(1+L^{6g_0-10+2k}). \end{equation} For the integral on the RHS of \eqref{E[N Gamma] leq}, there exists a uniform constant $c>0$, and two constants $c'(m), c''(m)>0$ only depending on $m$ such that \begin{eqnarray*} & & \int_{Cond} \frac{1}{24}(y^2 + 4\pi^2)y e^{(x_{1,1}+\cdots+x_{q,n_q})/2} dydx_{1,1}\cdots dx_{q,n_q} \\ &\leq& c \cdot (1+L^4) \int_{Cond} e^{(x_{1,1}+\cdots +x_{q,n_q})/2} dx_{1,1}\cdots dx_{q,n_q} \\ &\leq& c'(m) (1+L^4) (1+L^{\sum_{i=1}^q n_i-1}) e^L \\ &\leq& c''(m) (1+L^{k+3}) e^L. \end{eqnarray*} This together with \eqref{E[N Gamma] leq} and \eqref{V-upper-1-1} implies that there exists a constant $c'''(m)>0$ only depending on $m$ such that \begin{eqnarray*} \int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) &\leq& c'''(m) (1+L^{6g_0-7+3k}) e^L V_{g_1,n_1}\cdots V_{g_q,n_q} \\ &=& c'''(m) (1+L^{3m-1}) e^L V_{g_1,n_1}\cdots V_{g_q,n_q}.
\end{eqnarray*} \noindent By Proposition \ref{1 over gm} we know that there exists a constant $c_2(m)>0$ only depending on $m$ such that \begin{eqnarray*} \sum_{g_1,\cdots,g_q} V_{g_1,n_1}\cdots V_{g_q,n_q}\leq c_2(m) \frac{1}{g^{m}} V_g. \end{eqnarray*} \noindent So there exists a constant $c_3(m)>0$ only depending on $m$ such that \begin{equation*} \sum_{g_1,\cdots,g_q} \int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) \leq c_3(m) (1+L^{3m-1}) e^L \frac{1}{g^{m}} V_g. \end{equation*} Recall that the nonnegative integers $g_0,k,q,n_1,\cdots,n_q$ are all bounded from above by a constant only depending on $m$. Therefore there exists a constant $c(m)>0$ only depending on $m$ such that \begin{equation*} \sum_{|\chi(S_{g_0,k})| =m} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)] \leq c(m) (1+L^{3m-1}) e^L \frac{1}{g^m}. \end{equation*} This completes the proof. \end{proof} \begin{lemma}\label{sum E[N Gamma] for chi geq m} Let $m\geq 2$ be an integer independent of $g$. Then there exists a constant $c(m)>0$ only depending on $m$ such that \begin{equation*} \sum_{m+1\leq|\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)] \leq c(m) (1+L^3) e^{\frac{9}{2}L} \frac{1}{g^m} \end{equation*} where the summation is taken over all possible $(g_0,k)$, $q\geq 1$ and $(g_1,n_1),\cdots,(g_q,n_q)$ such that $g_0\geq 1$ and $m+1\leq 2g_0-2+k\leq g-1$. \end{lemma} \begin{proof} First by Lemma \ref{Mirz vol lemma 1} we know that \begin{eqnarray*} V_{g_0-1,k+1}(y,x_{1,1},\cdots,x_{q,n_q}) &\leq& \left(e^{y/2}\cdot \prod_{i=1}^{q} \prod_{j=1}^{n_i} e^{x_{i,j}/2}\right)\cdot V_{g_0-1,k+1}\\ &=&\left(e^{y/2}\cdot e^{\sum x_{i,j}/2}\right)\cdot V_{g_0-1,k+1}.
\end{eqnarray*} Then by \eqref{E[N Gamma] leq} we have \begin{eqnarray*} \int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) & \leq & \frac{1}{n_1!\cdots n_q!} V_{g_0-1,k+1} V_{g_1,n_1}\cdots V_{g_q,n_q} \\ & & \int_{Cond} \frac{y}{24}(y^2 + 4\pi^2)e^{y/2} e^{\sum x_{i,j}} \\ & & dy dx_{1,1}\cdots dx_{q,n_q} . \end{eqnarray*} For the integral on the {\rm RHS} above, there exists a universal constant $c>0$ such that for large enough $g$ and $L$, \begin{eqnarray*} & &\int_{Cond} \frac{y}{24}(y^2 + 4\pi^2)e^{y/2} e^{\sum x_{i,j}} dy dx_{1,1}\cdots dx_{q,n_q} \\ & = & \int_0^L \frac{y}{24}(y^2 + 4\pi^2)e^{y/2}dy \\ & & \int_{\sum x_{i,j}\leq 2L, x_{i,j}\geq0} e^{\sum x_{i,j}} dx_{1,1}\cdots dx_{q,n_q} \\ & \leq & c (1+L^3) e^{L/2} e^{2L} \int_{\sum x_{i,j}\leq 2L, x_{i,j}\geq0} dx_{1,1}\cdots dx_{q,n_q} \\ & = & c (1+L^3) e^{\frac{5}{2}L} \frac{(2L)^{k}}{k!} . \end{eqnarray*} So we have \begin{equation*} \int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) \leq c (1+L^3) e^{\frac{5}{2}L} \frac{(2L)^{k}}{k!n_1!\cdots n_q!} V_{g_0-1,k+1} V_{g_1,n_1}\cdots V_{g_q,n_q}. \end{equation*} \noindent Similarly to the proof of Lemma \ref{sum E[N Gamma] for chi=m}, it follows by Lemma \ref{sum vol lemma} that \begin{equation*} \sum_{g_1,\cdots,g_q} V_{g_1,n_1}\cdots V_{g_q,n_q} \leq c \big(\frac{D}{2g-2g_0-k}\big)^{q-1} W_{2g-2g_0-k}. \end{equation*} Recall that for fixed $k$, we always have \begin{equation*} \sum_{n_1+\cdots+n_q=k,\ n_i\geq0} \frac{k!}{n_1!\cdots n_q!} = q^{k}.
\end{equation*} So we have that for large enough $g>0$, \begin{eqnarray*} & & \sum_{(g_0,k)} \sum_q\sum_{n_1,\cdots,n_q}\sum_{g_1,\cdots,g_q} \int_{\mathcal{M}_g} \hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L) \\ &\leq & \sum_{(g_0,k)} \sum_q c(1+L^3) e^{\frac{5}{2}L} (\frac{D}{g})^{q-1} \frac{(2L)^{k}}{k!}\frac{q^{k}}{k!} V_{g_0-1,k+1} W_{2g-2g_0-k} \\ & \leq & \sum_{(g_0,k)} \sum_q c(1+L^3) e^{\frac{5}{2}L} (\frac{D}{g})^{q-1} e^{2L} e^q V_{g_0-1,k+1} W_{2g-2g_0-k} \\ & \leq & \sum_{(g_0,k)} c(1+L^3) e^{\frac{9}{2}L} V_{g_0-1,k+1} W_{2g-2g_0-k}. \end{eqnarray*} \noindent Recall that Part $(1)$ of Lemma \ref{Wr-prop} tells us that $V_{g,n}\leq c W_{2g-2+n}$ for a universal constant $c>0$. Then it follows by Part $(2)$ of Lemma \ref{Wr-prop} that there exist two constants $c'(m),c(m)>0$ only depending on $m$ such that \begin{eqnarray*} &&(\sum_{m+1\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)])\cdot V_g\\ &&\leq \sum_{k} \sum_{g_0:\ m+1\leq 2g_0-2+k\leq g-1} c (1+L^3) e^{\frac{9}{2}L} V_{g_0-1,k+1} W_{2g-2g_0-k}\\ &&\leq \sum_{k} \sum_{g_0:\ m+1\leq 2g_0-2+k\leq g-1} c (1+L^3) e^{\frac{9}{2}L} W_{2g_0-3+k} W_{2g-2g_0-k}\\ &&= \sum_{k} \sum_{g_0:\ m\leq 2g_0-3+k\leq g-2} c (1+L^3) e^{\frac{9}{2}L} W_{2g_0-3+k} W_{2g-2g_0-k}\\ &&\leq \sum_{k} c'(m) (1+L^3) e^{\frac{9}{2}L} \frac{1}{g^{m}} W_{2g-3} \\ &&= \sum_{k} c'(m) (1+L^3) e^{\frac{9}{2}L} \frac{V_{g-1,1}}{g^{m}} \\ &&\leq c(m) (1+L^3) e^{\frac{9}{2}L} \frac{1}{g^{m}} V_g \end{eqnarray*} where in the last inequality we apply the facts that $k\leq g-1$ and $V_{g}\asymp gV_{g-1,1}$ (see Parts $(2)$ and $(3)$ of Lemma \ref{Mirz vol lemma 1}). That is, \[\sum_{m+1\leq |\chi(S_{g_0,k})|\leq g-1} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)] \leq c(m) (1+L^3) e^{\frac{9}{2}L} \frac{1}{g^{m}}.\] The proof is complete. \end{proof} Now we are ready to prove Proposition \ref{N-N*}.
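Before turning to that proof, we record a quick numerical verification (illustrative only) of the multinomial identity $\sum_{n_1+\cdots+n_q=k,\ n_i\geq 0} \frac{k!}{n_1!\cdots n_q!}=q^k$ used in the preceding estimate; it is the multinomial theorem evaluated at $x_1=\cdots=x_q=1$:

```python
from itertools import product
from math import factorial

# Sum of k!/(n_1! ... n_q!) over all (n_1,...,n_q) with n_i >= 0 and
# n_1 + ... + n_q = k; the multinomial theorem gives q^k.
def multinomial_sum(q, k):
    total = 0
    for ns in product(range(k + 1), repeat=q):
        if sum(ns) == k:
            coef = factorial(k)
            for n in ns:
                coef //= factorial(n)
            total += coef
    return total

for q in range(1, 5):
    for k in range(0, 7):
        assert multinomial_sum(q, k) == q ** k
```

Note that the summation in the text restricts to $n_i\geq 1$, which only makes the sum smaller, so the bound $q^k$ still applies there.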
\begin{proof}[Proof of Proposition \ref{N-N*}] First since $\mathcal{N}^*_{1,1}(X,L) \subset \mathcal{N}_{1,1}(X,L)$, \begin{equation}\label{1 upper bound} \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]\geq 0. \end{equation} It suffices to bound the difference from above. By Equation \eqref{N-N* leq sum N Gamma} and Lemmas \ref{sum E[N Gamma] for chi=m} and \ref{sum E[N Gamma] for chi geq m}, we have \begin{eqnarray*} && \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]-\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)] \\ &\leq& \sum \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)] \\ &=& \sum_{3\leq |\chi(S_{g_0,k})|\leq 100} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)]\\ &+&\sum_{|\chi(S_{g_0,k})|>100} \mathbb{E}_{\rm WP}^g[\hat{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,2L)]\\ &\leq& \sum_{m=3}^{100} c(m) (1+L^{3m-1}) e^L \frac{1}{g^m} + c(100) (1+L^3) e^{\frac{9}{2}L} \frac{1}{g^{100}}. \end{eqnarray*} \noindent Recall that $L=L(g)=2\log g -4\log \log g +\omega(g)$. Then $$\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]-\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)] = O\left(\frac{(\log g)^4}{g} e^{\omega(g)}\right) \to 0 \ \text{as}\ g\to\infty.$$ This together with \eqref{1 upper bound} implies that $$\lim_{g\rightarrow\infty} \left(\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]\right) =0.$$ By Lemma \ref{E[N]}, we know that $\lim_{g\rightarrow\infty} \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)] = \infty$. So $$\lim_{g\rightarrow\infty} \frac{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]}{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]}=1.$$ For \eqref{tend to 0 (1)}, as shown above and by Lemma \ref{E[N]} we have \begin{equation*} \frac{1}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]} \sim \frac{1}{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]} \sim \frac{1}{\frac{1}{192\pi^2} L^2 e^{\frac{L}{2}} \frac{1}{g} } =O(e^{-\frac{\omega(g)}{2}}) \rightarrow 0 \end{equation*} as $g\rightarrow\infty$, which proves \eqref{tend to 0 (1)}.
\end{proof} \subsection{Proof of \eqref{tend to 0 (2)}} In this subsection we show \eqref{tend to 0 (2)}, whose proof is similar to that of \eqref{tend to 0 (1)}. First we define \begin{def*} \begin{equation*} \mathcal Y(X,L):= \left\{ (\alpha,\beta)\in\mathcal{N}_{1,1}(X,L)\times\mathcal{N}_{1,1}(X,L)\ ;\ \alpha\neq \beta, \alpha\cap\beta=\emptyset \right\} \end{equation*} and \begin{equation*} Y(X,L) := \#\mathcal Y(X,L) = \sum_{\alpha\neq \beta,\alpha\cap\beta=\emptyset} \mathbf 1_{\mathcal{N}_{1,1}(X,L)}(\alpha) \mathbf 1_{\mathcal{N}_{1,1}(X,L)}(\beta). \end{equation*} \end{def*} \begin{lemma}\label{E[Y]} As $g\to \infty$, we have \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[Y(X,L)] &=& \frac{1}{(192\pi^2)^2} L^4 e^L \frac{1}{g^2} \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \big(1+O(\frac{1}{g})\big) \\ &=& \big(\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]\big)^2 \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \big(1+O(\frac{1}{g})\big). \end{eqnarray*} Here the implied constants are independent of $L$ and $g$. As a consequence, for $L(g):=2\log g-4\log\log g+\omega(g)$, we have $$ \lim_{g\to\infty}\big(\mathbb{E}_{\rm WP}^g[Y(X,L(g))]-\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L(g))]^2\big)=0. $$ \end{lemma} \begin{proof} By Mirzakhani's Integration Formula (see Theorem \ref{Mirz int formula}), \begin{eqnarray*} &&\int_{\mathcal{M}_g}Y(X,L) dX \\ &=& 2^{-M}\int_{\mathbb{R}_{\geq 0}^2} \mathbf 1_{[0,L]}(x) \mathbf 1_{[0,L]}(y) V_{1,1}(x)V_{1,1}(y)V_{g-2,2}(x,y)xydxdy \\ &=& \frac{1}{4} \int_{[0,L]^2} \frac{1}{24}x(x^2 + 4\pi^2) \frac{1}{24}y(y^2 + 4\pi^2) V_{g-2,2}(x,y)dxdy. \end{eqnarray*} (The pair $(\alpha,\beta)$ is ordered, so there is no $|\mathop{\rm Sym}|$ factor here.)
\noindent By Lemma \ref{Mirz vol lemma 1} we know that \[\frac{V_{g-2,2}}{V_g}=\frac{1}{(8\pi^2 g)^2}\left(1+O\left(\frac{1}{g}\right)\right).\] Thus, it follows by Lemma \ref{MP vol lemma} that \begin{eqnarray*} V_{g-2,2}(x,y) &=& \frac{\sinh(x/2)}{x/2} \frac{\sinh(y/2)}{y/2} V_{g-2,2} \big(1+O(\frac{L^2}{g})\big) \\ &=& \frac{\sinh(x/2)}{x/2} \frac{\sinh(y/2)}{y/2} \frac{1}{64\pi^4 g^2} V_{g} \big(1+O(\frac{L^2}{g})\big) \big(1+O(\frac{1}{g})\big). \end{eqnarray*} So we have \begin{equation*} \mathbb{E}_{\rm WP}^g[Y(X,L)] = \frac{1}{(192\pi^2)^2} L^4 e^L \frac{1}{g^2} \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \big(1+O(\frac{1}{g})\big). \end{equation*} By Lemma \ref{E[N]} we have \begin{equation*} \mathbb{E}_{\rm WP}^g[Y(X,L)]= \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2 \big(1+O(\frac{1}{L})\big) \big(1+O(\frac{L^2}{g})\big) \big(1+O(\frac{1}{g})\big). \end{equation*} The proof is complete. \end{proof} Recall that $L=L(g)=2\log g -4\log \log g +\omega(g)$. Lemma \ref{E[Y]} implies that as $g\rightarrow\infty$, \begin{equation}\label{Y-X/X} \frac{\mathbb{E}_{\rm WP}^g[Y(X,L)] - \mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2} = O\big(\frac{1}{L}+\frac{L^2}{g}+\frac{1}{g} \big) \rightarrow 0. \end{equation} We will show that $\mathbb{E}_{\rm WP}^g[Y^*]$ is an approximation of $\mathbb{E}_{\rm WP}^g[Y]$. More precisely, \begin{proposition}\label{E[Y]-E[Y*]} With the notation as above, we have \begin{equation*} \lim_{g\rightarrow\infty} (\mathbb{E}_{\rm WP}^g[Y(X,L)] - \mathbb{E}_{\rm WP}^g[Y^*(X,L)])=0. \end{equation*} Moreover, Equation \eqref{tend to 0 (2)} holds. \end{proposition} \begin{proof} First by definition of $Y$ and $Y^*$ we know that \begin{equation*} Y^*(X,L) \leq Y(X,L). \end{equation*} So we have \begin{equation}\label{1-Y upper bound} \mathbb{E}_{\rm WP}^g[Y(X,L)] - \mathbb{E}_{\rm WP}^g[Y^*(X,L)] \geq 0. \end{equation} It suffices to bound the difference from above.
The proof is similar to the proof of Proposition \ref{N-N*}. For any ordered pair $(\alpha,\beta) \in \mathcal Y(X,L)\setminus \mathcal Y^*(X,L)$, we have $\alpha\in\mathcal{N}_{1,1}(X,L) \setminus \mathcal{N}^*_{1,1}(X,L)$ or $\beta\in\mathcal{N}_{1,1}(X,L) \setminus \mathcal{N}^*_{1,1}(X,L)$. Without loss of generality we assume $\alpha\in\mathcal{N}_{1,1}(X,L) \setminus \mathcal{N}^*_{1,1}(X,L)$. Then by definition of $\mathcal{N}_{1,1}(X,L)$ and $\mathcal{N}^*_{1,1}(X,L)$, it follows by Lemma \ref{area U small} that there exists a simple closed geodesic $\alpha'\in \mathcal{N}_{1,1}(X,L)$ with $\alpha'\neq\alpha$ such that $$\alpha\cap\alpha' \neq \emptyset \quad \text{and} \quad |\chi(X_{\alpha\alpha'})| \geq 3.$$ The relation between $X_\beta$ and $X_{\alpha\alpha'}$ falls into the following three cases (see Figure \ref{figure:cases}). \begin{figure}[h] \includegraphics[width=12cm]{cases.pdf} \caption{Relation between $X_\beta$ and $X_{\alpha\alpha'}$ in the three cases.} \label{figure:cases} \end{figure} \textbf{Case 1. $X_\beta \subset X_{\alpha\alpha'}$.} \\ For this case we have $$\beta \cap \partial X_{\alpha\alpha'} = \emptyset.$$ ($\beta$ cannot be a component of $\partial X_{\alpha\alpha'}$ since $X_\beta$ is a one-handle and $X_{\alpha\alpha'}$ is not.) So $\alpha,\beta$ and $\partial X_{\alpha\alpha'}$ are pairwise disjoint. Assume $X_{\alpha\alpha'}$ is of type $S_{g_0,k}$. Note that $X_\alpha,X_\beta$ are two disjoint one-handles in $X_{\alpha\alpha'}$, so $g_0\geq 2$. By Lemma \ref{area U small}, we have \begin{equation*} 3 \leq |\chi(X_{\alpha\alpha'})| \leq g-1, \end{equation*} and \begin{equation*} \ell(\alpha)\leq L, \ \ell(\beta)\leq L,\ \ell(\partial X_{\alpha\alpha'}) \leq 2L.
\end{equation*} Similar to what we have done in the proof of Proposition \ref{N-N*}, we define a counting function as follows: \begin{def*} Define the counting function $\dot{N}_{g_0,n_0}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L_1,L_2,L_3)$ to be the number of triples $(\gamma_1,\gamma_2,\gamma_3)$ satisfying \begin{itemize} \item $\gamma_3$ is a simple closed multi-geodesic in $X$ consisting of $n_0$ geodesics that split off an $S_{g_0,n_0}$ from $X$ and the complement $X\setminus S_{g_0,n_0}$ consists of $q$ components $S_{g_1,n_1},\cdots,S_{g_q,n_q}$ for some $q\geq 1$; \item $\gamma_1$ and $\gamma_2$ are two disjoint simple closed geodesics in that $S_{g_0,n_0}$, and split off two disjoint one-handles in that $S_{g_0,n_0}$; \item $\ell(\gamma_1)\leq L_1$, $\ell(\gamma_2)\leq L_2$, $\ell(\gamma_3)\leq L_3$. \end{itemize} \end{def*} Since the map \begin{equation*} (\alpha,\beta) \mapsto (\alpha,\beta,\partial X_{\alpha\alpha'}) \end{equation*} is injective, the number of pairs $(\alpha,\beta) \in \mathcal Y(X,L)\setminus \mathcal Y^*(X,L)$ satisfying Case $1$ is bounded from above by \begin{equation}\label{Y-Y* case 1 leq} Q_1 := \sum \dot{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,L,2L) \end{equation} where the summation is taken over all possible $(g_0,k)$, $q\geq 1$ and $(g_1,n_1),\cdots,(g_q,n_q)$ such that \begin{itemize} \item $g_0\geq 2$, $3\leq 2g_0-2+k \leq g-1$; \item $n_i\geq 1$, $2g_i-2+n_i \geq 1$, $\forall 1\leq i\leq q$; \item $n_1+\cdots+n_q = k$, $g_0+g_1+\cdots+g_q + k-q =g$. \end{itemize} \textbf{Case 2: $X_\beta \cap X_{\alpha\alpha'} = \emptyset$.}\\ For this case we have that $\alpha,\beta$ and $\partial X_{\alpha\alpha'}$ are pairwise disjoint. Assume $X_{\alpha\alpha'}$ is of type $S_{g_0,k}$. By Lemma \ref{area U small}, we have \begin{equation*} g_0\geq 1,\ \ 3 \leq |\chi(X_{\alpha\alpha'})| \leq g-1, \end{equation*} and \begin{equation*} \ell(\alpha)\leq L, \ \ell(\beta)\leq L,\ \ell(\partial X_{\alpha\alpha'}) \leq 2L.
\end{equation*} Similar to what we have done in the proof of Proposition \ref{N-N*}, we define a counting function as follows: \begin{def*} Define the counting function $\ddot{N}_{g_0,n_0}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L_1,L_2,L_3)$ to be the number of triples $(\gamma_1,\gamma_2,\gamma_3)$ satisfying \begin{itemize} \item $\gamma_3$ is a simple closed multi-geodesic in $X$ consisting of $n_0$ geodesics that split off an $S_{g_0,n_0}$ from $X$ and the complement $X\setminus S_{g_0,n_0}$ consists of $q$ components $S_{g_1,n_1},\cdots,S_{g_q,n_q}$ for some $q\geq 1$; \item $\gamma_1$ is a simple closed geodesic in that $S_{g_0,n_0}$, and splits off a one-handle in that $S_{g_0,n_0}$; \item $\gamma_2$ is a simple closed geodesic in that $S_{g_1,n_1}$, and splits off a one-handle in that $S_{g_1,n_1}$; \item $\ell(\gamma_1)\leq L_1$, $\ell(\gamma_2)\leq L_2$, $\ell(\gamma_3)\leq L_3$. \end{itemize} \end{def*} Since the map \begin{equation*} (\alpha,\beta) \mapsto (\alpha,\beta,\partial X_{\alpha\alpha'}) \end{equation*} is injective, the number of pairs $(\alpha,\beta) \in \mathcal Y(X,L)\setminus \mathcal Y^*(X,L)$ satisfying Case $2$ is bounded from above by \begin{equation}\label{Y-Y* case 2 leq} Q_2 := \sum \ddot{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,L,2L) \end{equation} where the summation is taken over all possible $(g_0,k)$, $q\geq 1$ and $(g_1,n_1),\cdots,(g_q,n_q)$ such that \begin{itemize} \item $g_0\geq 1$, $3\leq 2g_0-2+k \leq g-1$; \item $n_i\geq 1$, $2g_i-2+n_i \geq 1$, $\forall 1\leq i\leq q$; \item $n_1+\cdots+n_q = k$, $g_0+g_1+\cdots+g_q + k-q =g$; \item $g_1\geq 1$. \end{itemize} \textbf{Case 3: $X_\beta \cap X_{\alpha\alpha'} \neq \emptyset$ and $X_\beta \nsubseteq X_{\alpha\alpha'}$.}\\ For this case we have that $\beta$ and $\partial X_{\alpha\alpha'}$ are not disjoint.
We consider the subsurface with geodesic boundary $X_{\alpha\alpha'\beta}\subset X$ constructed from $X_{\alpha\alpha'}$ and $X_\beta$ in the way described in Section \ref{section union} (\textit{i.e.\@ } $X_1=X_{\alpha\alpha'}$, $X_2=X_\beta$ and $X_{12}=X_{\alpha\alpha'\beta}$ in the notation of Section \ref{section union}). Then $\alpha,\beta$ and $\partial X_{\alpha\alpha'\beta}$ are pairwise disjoint. Assume $X_{\alpha\alpha'\beta}$ is of type $S_{g_0,k}$. Note that $X_\alpha,X_\beta$ are two disjoint one-handles in $X_{\alpha\alpha'\beta}$, so $g_0\geq 2$. Recall that $L=L(g)=2\log g -4\log \log g +\omega(g)$. Thus, by Lemma \ref{area U small} we have that for large enough $g>0$, \begin{equation*} 3 \leq |\chi(X_{\alpha\alpha'})| \leq \tfrac{1}{2}g, \end{equation*} and \begin{equation*} \ell(\partial X_{\alpha\alpha'}) \leq 2L. \end{equation*} Then again by Lemma \ref{lemma chiU12}, we have for large enough $g>0$, \begin{equation*} 4 \leq |\chi(X_{\alpha\alpha'\beta})| \leq g-1, \end{equation*} and \begin{equation*} \ell(\alpha)\leq L, \ \ell(\beta)\leq L,\ \ell(\partial X_{\alpha\alpha'\beta}) \leq 3L. \end{equation*} Since the map \begin{equation*} (\alpha,\beta) \mapsto (\alpha,\beta,\partial X_{\alpha\alpha'\beta}) \end{equation*} is injective, the number of pairs $(\alpha,\beta) \in \mathcal Y(X,L)\setminus \mathcal Y^*(X,L)$ satisfying Case $3$ is bounded from above by \begin{equation}\label{Y-Y* case 3 leq} Q_3 := \sum \dot{N}_{g_0,k}^{(g_1,n_1),\cdots,(g_q,n_q)}(X,L,L,3L) \end{equation} where the summation is taken over all possible $(g_0,k)$, $q\geq 1$ and $(g_1,n_1),\cdots,(g_q,n_q)$ such that \begin{itemize} \item $g_0\geq 2$, $4\leq 2g_0-2+k \leq g-1$; \item $n_i\geq 1$, $2g_i-2+n_i \geq 1$, $\forall 1\leq i\leq q$; \item $n_1+\cdots+n_q = k$, $g_0+g_1+\cdots+g_q + k-q =g$.
\end{itemize} Then by the discussion above, we have \begin{equation}\label{Y-Y* leq} Y(X,L)-Y^*(X,L) \leq 2(Q_1+Q_2+Q_3) \end{equation} where the coefficient $2$ accounts for the fact that we assumed $\alpha\in \mathcal{N}_{1,1}(X,L)\setminus\mathcal{N}^*_{1,1}(X,L)$; indeed, if $\beta\in \mathcal{N}_{1,1}(X,L)\setminus\mathcal{N}^*_{1,1}(X,L)$ and $\alpha \in \mathcal{N}^*_{1,1}(X,L)$, one obtains the same upper bound $(Q_1+Q_2+Q_3)$. \\ We then have the following estimates for $\mathbb{E}_{\rm WP}^g[Q_1], \mathbb{E}_{\rm WP}^g[Q_2]$ and $\mathbb{E}_{\rm WP}^g[Q_3]$, whose proofs are exactly the same as those of Lemmas \ref{sum E[N Gamma] for chi=m} and \ref{sum E[N Gamma] for chi geq m}; we omit the details. For $L=L(g)=2\log g -4\log \log g +\omega(g)$, we have that as $g\to \infty$, \begin{equation}\label{E[Q1]} \mathbb{E}_{\rm WP}^g[Q_1] \leq c L^{8} e^L \frac{1}{g^3} = O(\frac{(\log g)^4}{g}e^{\omega(g)}) \to 0, \end{equation} \begin{equation}\label{E[Q2]} \mathbb{E}_{\rm WP}^g[Q_2] \leq c L^{10} e^\frac{3L}{2} \frac{1}{g^4} = O(\frac{(\log g)^4}{g}e^{\frac{3}{2}\omega(g)}) \to 0, \end{equation} and \begin{equation}\label{E[Q3]} \mathbb{E}_{\rm WP}^g[Q_3] \leq c L^{11} e^\frac{3L}{2} \frac{1}{g^4} = O(\frac{(\log g)^5}{g}e^{\frac{3}{2}\omega(g)}) \to 0. \end{equation} Therefore we have that as $g\to \infty$, \begin{equation}\label{Y-Y*-0} 0\leq \mathbb{E}_{\rm WP}^g[Y(X,L)]-\mathbb{E}_{\rm WP}^g[Y^*(X,L)] = O(\frac{(\log g)^5}{g}e^{\frac{3}{2}\omega(g)}) \to 0.
\end{equation} For \eqref{tend to 0 (2)}, we first rewrite \begin{eqnarray*} && \frac{\mathbb{E}_{\rm WP}^g[Y^*(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2} \\ &&= \frac{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2} \times \frac{\mathbb{E}_{\rm WP}^g[Y(X,L)]}{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2}\\ &&\times \big( 1-\frac{\mathbb{E}_{\rm WP}^g[Y(X,L)]-\mathbb{E}_{\rm WP}^g[Y^*(X,L)]}{\mathbb{E}_{\rm WP}^g[Y(X,L)]} \big) -1. \end{eqnarray*} \noindent It follows from Lemmas \ref{E[N]}, \ref{N-N*} and \ref{E[Y]} that \[\lim \limits_{g\to\infty}\frac{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}=1 \quad\text{and}\quad \lim \limits_{g\to\infty} \frac{\mathbb{E}_{\rm WP}^g[Y(X,L)]}{\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)]^2}=1.\] This, together with \eqref{Y-Y*-0}, implies that \begin{eqnarray*} \lim \limits_{g\to \infty} \frac{\mathbb{E}_{\rm WP}^g[Y^*(X,L)] - \mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}=0. \end{eqnarray*} The proof of \eqref{tend to 0 (2)} is complete. \end{proof} \subsection{Proof of \eqref{tend to 0 (3)}} \label{proof of third tend to 0} In this subsection we show \eqref{tend to 0 (3)}, applying Mirzakhani's generalized McShane identity to certain counting problems on $S_{0,4}$ and $S_{1,2}$. The main result of this subsection is as follows. \begin{proposition}\label{E[Z]/E[N]} With the notations as above, there exists a universal constant $c>0$ such that \begin{equation*} \mathbb{E}_{\rm WP}^g[Z^*(X,L)]\leq c L^3 e^L \frac{1}{g^2}. \end{equation*} Moreover, Equation \eqref{tend to 0 (3)} holds. \end{proposition} Consider an ordered pair $(\alpha,\beta)\in \mathcal Z^*(X,L)$, that is, $\alpha,\beta\in \mathcal{N}^*_{1,1}(X,L)$ with $\alpha\neq \beta$ and $\alpha\cap\beta\neq\emptyset$. By definition of $\mathcal{N}^*_{1,1}$, we know that $X_{\alpha\beta}$ is of type $S_{1,2}$.
Unfortunately, one cannot apply Mirzakhani's Integration Formula (see Theorem \ref{Mirz int formula}) to the pair $(\alpha,\beta,\partial X_{\alpha\beta})$ because $\alpha\cap\beta\neq\emptyset$. We consider the following map \begin{equation*} (\alpha,\beta) \mapsto (\alpha,\partial X_{\alpha\beta}). \end{equation*} This map may not be injective; however, one can control its multiplicity. To do this, it suffices to count the number of such $\beta$'s of length $\leq L$ in a given $S_{1,2}$ with geodesic boundary. We also need such a counting result for a given $S_{0,4}$, for technical reasons. More precisely, we have the following lemma. \begin{lemma}\label{multi in S12 S04} \begin{enumerate} \item Consider a hyperbolic surface of type $S_{1,2}$ with geodesic boundaries $\gamma_1,\gamma_2$ of lengths $L_1,L_2$ respectively. Let $\#_{1,2}(L)$ be the number of simple closed geodesics in this surface of length $\leq L$ bounding a pair of pants with $\gamma_1$ and $\gamma_2$. Then \begin{equation*} \#_{1,2}(L) \leq \frac{L_1}{\mathcal R (L_1,L_2,L)} \end{equation*} where $\mathcal R(x,y,z)$ is the function given in Mirzakhani's generalized McShane identity (see Theorem \ref{McShane id}). \item Consider a hyperbolic surface of type $S_{0,4}$ with geodesic boundaries $\gamma_1,\gamma_2,\gamma_3,\gamma_4$ of lengths $L_1,L_2,L_3,L_4$ respectively. Let $\#_{0,4}(L)$ be the number of simple closed geodesics in this surface of length $\leq L$ bounding a pair of pants with $\gamma_1$ and $\gamma_2$. Then \begin{equation*} \#_{0,4}(L) \leq \frac{L_1}{\mathcal R (L_1,L_2,L)}. \end{equation*} \end{enumerate} \end{lemma} \begin{proof} We only show $(1)$; the proof of $(2)$ is similar. By Lemma \ref{estimation R,D} we know that $\mathcal D>0$ and $\mathcal R>0$.
So by Mirzakhani's generalized McShane identity (see Theorem \ref{McShane id}) we have \begin{eqnarray*} L_1 &=& \sum_{\{\alpha_1,\alpha_2\}} \mathcal D(L_1, \ell(\alpha_1), \ell(\alpha_2)) + \sum_{i=2}^n \sum_\gamma \mathcal R(L_1,L_i,\ell(\gamma)) \\ &\geq& \sum_{\gamma'} \mathcal R(L_1,L_2,\ell(\gamma')) \end{eqnarray*} where $\gamma'$ ranges over all simple closed geodesics of length $\leq L$ bounding a pair of pants together with $\gamma_1\cup\gamma_2$. By Lemma \ref{estimation R,D} we know that $\mathcal R(x,y,z)$ is decreasing with respect to $z$. Thus, \begin{equation*} L_1 \geq \#_{1,2}(L) \cdot \mathcal R(L_1,L_2,L). \end{equation*} The proof is complete. \end{proof} \begin{rem*} If one counts closed geodesics instead of simple closed geodesics, it follows by \cite[Lemma 6.6.4]{Buser10} that \begin{equation*} \#_{1,2}(L)\leq c e^L \quad \text{and} \quad \#_{0,4}(L) \leq c e^L \end{equation*} where $c>0$ is a universal constant. By Lemma \ref{estimation R,D}, the lemma above gives the following bounds: \begin{equation*} \#_{1,2}(L)\leq c(L_1,L_2) e^{\frac{L}{2}} \quad \text{and} \quad \#_{0,4}(L) \leq c(L_1,L_2) e^{\frac{L}{2}} \end{equation*} where $c(L_1,L_2)>0$ is a constant depending on $L_1$ and $L_2$. These bounds are not optimal. By \cite{Mirz08}, the optimal result one can expect is that \begin{equation*} \#_{1,2}(L) \leq c_1 L^4 \quad \text{and} \quad \#_{0,4}(L) \leq c_2 L^2 \end{equation*} as $L\rightarrow\infty$ for some $c_1,c_2>0$ depending on the given hyperbolic surfaces. However, it is not easy to give explicit expressions for $c_1$ and $c_2$ depending only on the boundary lengths $L_1,L_2,L_3$ and $L_4$. \end{rem*} Now we return to the proof of Proposition \ref{E[Z]/E[N]}. Note that the complement of $S_{1,2}$ in $X$ has several possibilities: $$X\setminus S_{1,2} = S_{g-2,2} \ \text{or}\ S_{k,1}\cup S_{g-k-1,1}$$ for some $1\leq k\leq \frac{1}{2}(g-1)$.
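As a quick sanity check (not needed for the argument), this list of possibilities is consistent with an Euler-characteristic count, using $\chi(S_{g,n})=2-2g-n$:
\begin{align*}
\chi(X\setminus S_{1,2}) &= \chi(X)-\chi(S_{1,2}) = (2-2g)-(-2) = 4-2g,\\
\chi(S_{g-2,2}) &= 2-2(g-2)-2 = 4-2g,\\
\chi(S_{k,1}\cup S_{g-k-1,1}) &= (1-2k)+(3+2k-2g) = 4-2g.
\end{align*}
The restriction $1\leq k\leq \frac{1}{2}(g-1)$ simply avoids counting the splitting $S_{k,1}\cup S_{g-k-1,1}$ twice by swapping its two components.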
We divide $\mathcal Z^*(X,L)$ into several parts as follows. \begin{def*} \begin{equation*} \mathcal Z^{*0}(X,L):= \left\{ (\alpha,\beta)\in\mathcal Z^*(X,L) \ ;\ X\setminus X_{\alpha\beta} \ \text{is of type}\ S_{g-2,2} \right\} \end{equation*} and for any $1\leq k\leq \frac{1}{2}(g-1)$, \begin{equation*} \mathcal Z^{*k}(X,L):= \left\{ (\alpha,\beta)\in\mathcal Z^*(X,L) \ ;\ X\setminus X_{\alpha\beta} \ \text{is of type}\ S_{k,1}\cup S_{g-k-1,1} \right\}. \end{equation*} \end{def*} On the other hand, recall that for an ordered pair $(\alpha,\beta)\in \mathcal Z^*(X,L)$, since $\ell(\alpha)\leq L$ and $\ell(\beta)\leq L$, we have $$\ell(\partial X_{\alpha\beta}) \leq 2L.$$ We divide $Z^*(X,L)$ into two parts \begin{equation*} Z^*(X,L) = Z_1^*(X,L) +Z_2^*(X,L) \end{equation*} where $Z_1^*(X,L)$ and $Z_2^*(X,L)$ are defined as follows. \begin{def*} \begin{equation*} \mathcal Z_1^*(X,L):= \left\{ (\alpha,\beta)\in\mathcal Z^*(X,L) ;\ \ell(\partial X_{\alpha\beta}) \leq 1.9L \right\}, \end{equation*} \begin{equation*} Z_1^*(X,L) := \#\mathcal Z_1^*(X,L). \end{equation*} \begin{equation*} \mathcal Z_2^*(X,L):= \left\{ (\alpha,\beta)\in\mathcal Z^*(X,L);\ \ell(\partial X_{\alpha\beta}) > 1.9L \right\}, \end{equation*} \begin{equation*} Z_2^*(X,L) := \#\mathcal Z_2^*(X,L). \end{equation*} For $i=1,2$ and $1\leq k\leq \frac{1}{2}(g-1)$, we also define \begin{equation*} \mathcal Z_i^{*0}(X,L):= \mathcal Z^{*0}(X,L)\cap \mathcal Z_i^*(X,L), \end{equation*} \begin{equation*} Z_i^{*0}(X,L) := \#\mathcal Z_i^{*0}(X,L), \end{equation*} and \begin{equation*} \mathcal Z_i^{*k}(X,L):= \mathcal Z^{*k}(X,L)\cap \mathcal Z_i^*(X,L), \end{equation*} \begin{equation*} Z_i^{*k}(X,L) := \#\mathcal Z_i^{*k}(X,L). \end{equation*} \end{def*} \begin{rem*} The value 1.9 is not crucial and can be replaced by any number in the interval $(\frac{5}{3},2)$, where $\frac{5}{3}$ comes from Lemma \ref{lemma:4holded}.
\end{rem*} We divide the proof of Proposition \ref{E[Z]/E[N]} into the following two lemmas. \begin{lemma}\label{E[Z1*]} Let $L=L(g)=2\log g -4\log \log g +\omega(g)$ as before. Then we have as $g\to \infty$, \begin{equation*} \mathbb{E}_{\rm WP}^g[Z_1^*(X,L)] \leq c L^6 e^{0.95L} \frac{1}{g^2} \end{equation*} for a universal constant $c>0$. \end{lemma} \begin{proof} By Lemma \ref{multi in S12 S04} we have \begin{eqnarray*} Z_1^{*0}(X,L) &=& \sum_{\mbox{\tiny $\begin{array}{c} \alpha\neq \beta, \alpha\cap\beta \neq \emptyset, \\ \ell(\partial X_{\alpha\beta})\leq 1.9L, \\ X\setminus X_{\alpha\beta} = S_{g-2,2} \end{array}$} } \mathbf 1_{\mathcal{N}^*_{1,1}(X,L)}(\alpha) \mathbf 1_{\mathcal{N}^*_{1,1}(X,L)}(\beta) \\ &\leq& \sum_{(\alpha,\gamma_1,\gamma_2)} \mathbf 1_{[0,L]}(\ell(\alpha)) \mathbf 1_{[0,1.9L]}(\ell(\gamma_1)+\ell(\gamma_2)) \#_{1,2}(\gamma_1,\gamma_2,L) \\ &\leq& \sum_{(\alpha,\gamma_1,\gamma_2)} \mathbf 1_{[0,L]}(\ell(\alpha)) \mathbf 1_{[0,1.9L]}(\ell(\gamma_1)+\ell(\gamma_2)) \frac{\ell(\gamma_1)}{\mathcal R (\ell(\gamma_1),\ell(\gamma_2),L)} \end{eqnarray*} where the sum is taken over all ordered triples of simple closed geodesics $(\alpha,\gamma_1,\gamma_2)$ such that the union $\gamma_1\cup\gamma_2$ splits off an $S_{1,2}$ with complement $S_{g-2,2}$, and $\alpha$ splits off a one-handle in that $S_{1,2}$ (see the LHS of Figure \ref{figure:multicurves}).
Similarly, for all $1\leq k\leq \frac{1}{2}(g-1)$ we have \begin{equation*} Z_1^{*k}(X,L) \leq \sum_{(\alpha,\gamma_1,\gamma_2)} \mathbf 1_{[0,L]}(\ell(\alpha)) \mathbf 1_{[0,1.9L]}(\ell(\gamma_1)+\ell(\gamma_2)) \frac{\ell(\gamma_1)}{\mathcal R (\ell(\gamma_1),\ell(\gamma_2),L)} \end{equation*} where the sum is taken over all ordered triples of simple closed geodesics $(\alpha,\gamma_1,\gamma_2)$ such that the union $\gamma_1\cup\gamma_2$ splits off an $S_{1,2}$ with complement $S_{k,1}\cup S_{g-k-1,1}$, and $\alpha$ splits off a one-handle in that $S_{1,2}$ (see the RHS of Figure \ref{figure:multicurves}). Then one may apply Mirzakhani's Integration Formula (see Theorem \ref{Mirz int formula}) to get \begin{eqnarray*} \int_{\mathcal{M}_g} Z_1^{*0}(X,L) dX &\leq& \int_{0\leq z\leq L} \int_{0\leq x+y\leq 1.9L; x,y\geq 0} \frac{x}{\mathcal R (x,y,L)} \\ & & V_{1,1}(z) V_{0,3}(x,y,z) V_{g-2,2}(x,y) xyz dxdydz \end{eqnarray*} and \begin{eqnarray*} \int_{\mathcal{M}_g} Z_1^{*k}(X,L) dX &\leq& \int_{0\leq z\leq L} \int_{0\leq x+y\leq 1.9L; x,y\geq 0} \frac{x}{\mathcal R (x,y,L)} \\ & & V_{1,1}(z) V_{0,3}(x,y,z) V_{k,1}(x) V_{g-k-1,1}(y) xyz dxdydz \end{eqnarray*} for all $1\leq k\leq \frac{1}{2}(g-1)$. \noindent By Theorem \ref{Mirz vol lemma 0} we know that \begin{equation*} V_{1,1}(z) = \frac{1}{24}(z^2+4\pi^2)\quad \text{and} \quad V_{0,3}(x,y,z)=1. \end{equation*} \noindent By Lemma \ref{MP vol lemma} we know that \begin{equation*} V_{g-2,2}(x,y) \leq \frac{\sinh(\frac{x}{2})\sinh(\frac{y}{2})}{\frac{x}{2}\frac{y}{2}} V_{g-2,2} \leq \frac{e^{\frac{x+y}{2}}}{xy} V_{g-2,2}, \end{equation*} \begin{equation*} V_{k,1}(x) \leq \frac{\sinh(\frac{x}{2})}{\frac{x}{2}} V_{k,1} \leq \frac{e^{\frac{x}{2}}}{x} V_{k,1}, \end{equation*} \begin{equation*} V_{g-k-1,1}(y) \leq \frac{\sinh(\frac{y}{2})}{\frac{y}{2}} V_{g-k-1,1} \leq \frac{e^{\frac{y}{2}}}{y} V_{g-k-1,1}.
\end{equation*} \noindent By Lemma \ref{estimation R,D} we know that \begin{equation*} \frac{x}{\mathcal R (x,y,L)} \leq 100(1+x)(1+e^{-\frac{x+y}{2}}e^{\frac{L}{2}}). \end{equation*} Putting all the inequalities above together, we have \begin{eqnarray*} \int_{\mathcal{M}_g} Z_1^*(X,L) dX &=& \int_{\mathcal{M}_g} \big( Z_1^{*0}(X,L) + \sum_{1\leq k\leq\frac{1}{2}(g-1)} Z_1^{*k}(X,L) \big) dX\\ &\leq& \frac{100}{24} \big( V_{g-2,2} + \sum_{1\leq k\leq\frac{1}{2}(g-1)} V_{k,1}V_{g-k-1,1} \big) \\ & & \int_{0\leq z\leq L} \int_{0\leq x+y\leq 1.9L; x,y\geq 0} \\ & & z(z^2+4\pi^2) e^{\frac{x+y}{2}} (1+x)(1+e^{-\frac{x+y}{2}}e^{\frac{L}{2}}) dxdydz \\ &\leq& c\cdot \big((1+L^6) e^{0.95L} + (1+L^7) e^{\frac{L}{2}}\big) \\ & & \big( V_{g-2,2} + \sum_{1\leq k\leq\frac{1}{2}(g-1)} V_{k,1}V_{g-k-1,1} \big) \end{eqnarray*} for some universal constant $c>0$. By Lemmas \ref{Mirz vol lemma 1} and \ref{Mirz vol lemma 2} we know that \begin{equation*} V_{g-2,2}=\frac{1}{(8\pi^2 g)^2}V_g (1+O(\frac{1}{g})) \end{equation*} and \begin{equation*} \sum_{1\leq k\leq\frac{1}{2}(g-1)} V_{k,1}V_{g-k-1,1} =O\left( \frac{V_g}{g^3}\right). \end{equation*} So as $g\to \infty$, we have \begin{equation*} \mathbb{E}_{\rm WP}^g[Z_1^*(X,L)] \leq c L^6 e^{0.95L} \frac{1}{g^2} \end{equation*} for some universal constant $c>0$. The proof is complete. \end{proof} \begin{figure}[h] \centering \includegraphics[width=12.5cm]{multicurves.pdf} \caption{The multi-geodesics used in the counting: $\gamma_1\cup\gamma_2$ splits off an $S_{1,2}$ with complement $S_{g-2,2}$ (LHS) or $S_{k,1}\cup S_{g-k-1,1}$ (RHS), and $\alpha$ splits off a one-handle in that $S_{1,2}$.} \label{figure:multicurves} \end{figure} Now we estimate the expectation of $Z_2^*(X,L)$. \begin{lemma}\label{E[Z2*]} Let $L=L(g)=2\log g -4\log \log g +\omega(g)$ as before. Then we have as $g\to \infty$, \begin{equation*} \mathbb{E}_{\rm WP}^g[Z_2^*(X,L)] \leq c L^3 e^L \frac{1}{g^2} \end{equation*} for a universal constant $c>0$. \end{lemma} \begin{proof} Assume that $(\alpha,\beta)\in \mathcal Z_2^*(X,L)$, and denote by $\gamma_1$ and $\gamma_2$ the two simple closed geodesics forming the boundary of $X_{\alpha\beta}$.
Then we have \begin{equation*} 1.9L<\ell(\gamma_1)+\ell(\gamma_2)\leq 2L. \end{equation*} \noindent By Lemma \ref{lemma:4holded} we know that $\alpha$ and $\beta$ have exactly 4 intersection points, and the intersection $X_\alpha \cap X_\beta$ contains a simple closed geodesic $\delta$ which is disjoint from $\alpha,\beta,\gamma_1,\gamma_2$ (see Figure \ref{a new geodesic in S12}). Since $\alpha\cup\beta$ is homotopic to $\gamma_1\cup\gamma_2\cup2\delta$ (see the remark after Lemma \ref{lemma:4holded}), we have \begin{equation*} \ell(\gamma_1)+\ell(\gamma_2)+2\ell(\delta) \leq \ell(\alpha)+\ell(\beta) \leq 2L. \end{equation*} \noindent Now, similarly to how we dealt with $Z_1^*(X,L)$, by Lemma \ref{multi in S12 S04} we have \begin{eqnarray*} Z_2^{*0}(X,L) &=& \sum_{\mbox{\tiny $\begin{array}{c} \alpha\neq \beta, \alpha\cap\beta \neq \emptyset, \\ 1.9L<\ell(\partial X_{\alpha\beta})\leq 2L, \\ X\setminus X_{\alpha\beta} = S_{g-2,2} \end{array}$} } \mathbf 1_{\mathcal{N}^*_{1,1}(X,L)}(\alpha) \mathbf 1_{\mathcal{N}^*_{1,1}(X,L)}(\beta) \\ &\leq& \sum_{(\alpha,\gamma_1,\gamma_2,\delta)} \mathbf 1_{[1.9L,2L]}(\ell(\gamma_1)+\ell(\gamma_2)) \mathbf 1_{[0,L]}(\ell(\alpha)) \\ & & \mathbf 1_{[0,2L]}(\ell(\gamma_1)+\ell(\gamma_2)+2\ell(\delta)) \cdot \#_{0,4}(\gamma_1,\gamma_2,\delta,\delta,L) \\ &\leq& \sum_{(\alpha,\gamma_1,\gamma_2,\delta)} \mathbf 1_{[1.9L,2L]}(\ell(\gamma_1)+\ell(\gamma_2)) \mathbf 1_{[0,L]}(\ell(\alpha)) \\ & & \mathbf 1_{[0,2L]}(\ell(\gamma_1)+\ell(\gamma_2)+2\ell(\delta)) \frac{\ell(\gamma_1)}{\mathcal R (\ell(\gamma_1),\ell(\gamma_2),L)} \end{eqnarray*} where the sum is taken over all ordered quadruples of simple closed geodesics $(\alpha,\gamma_1,\gamma_2,\delta)$ such that the union $\gamma_1\cup\gamma_2$ splits off an $S_{1,2}$ with complement $S_{g-2,2}$, $\alpha$ splits off a one-handle from that $S_{1,2}$, and $\delta$ is in that one-handle (see the LHS of Figure \ref{figure:multicurves}).
Similarly, for all $1\leq k\leq \frac{1}{2}(g-1)$ we have \begin{eqnarray*} Z_2^{*k}(X,L) &\leq& \sum_{(\alpha,\gamma_1,\gamma_2,\delta)} \mathbf 1_{[1.9L,2L]}(\ell(\gamma_1)+\ell(\gamma_2)) \mathbf 1_{[0,L]}(\ell(\alpha)) \\ & & \mathbf 1_{[0,2L]}(\ell(\gamma_1)+\ell(\gamma_2)+2\ell(\delta)) \frac{\ell(\gamma_1)}{\mathcal R (\ell(\gamma_1),\ell(\gamma_2),L)} \end{eqnarray*} where the sum is taken over all ordered quadruples of simple closed geodesics $(\alpha,\gamma_1,\gamma_2,\delta)$ such that the union $\gamma_1\cup\gamma_2$ splits off an $S_{1,2}$ with complement $S_{k,1}\cup S_{g-k-1,1}$, $\alpha$ splits off a one-handle from that $S_{1,2}$, and $\delta$ is in that one-handle (see the RHS of Figure \ref{figure:multicurves}). When $$L<1.9L<\ell(\gamma_1)+\ell(\gamma_2)\leq 2L,$$ it follows by Lemma \ref{estimation R,D} that \begin{equation*} \frac{\ell(\gamma_1)}{\mathcal R (\ell(\gamma_1),\ell(\gamma_2),L)} \leq 500+500\frac{\ell(\gamma_1)}{0.9L} < 2000. \end{equation*} So we have \begin{equation*} Z_2^{*0}(X,L) \leq 2000\sum_{(\alpha,\gamma_1,\gamma_2,\delta)} \mathbf 1_{[0,L]}(\ell(\alpha)) \mathbf 1_{[1.9L,2L]}(\ell(\gamma_1)+\ell(\gamma_2)+2\ell(\delta)) \end{equation*} and \begin{equation*} Z_2^{*k}(X,L) \leq 2000\sum_{(\alpha,\gamma_1,\gamma_2,\delta)} \mathbf 1_{[0,L]}(\ell(\alpha)) \mathbf 1_{[1.9L,2L]}(\ell(\gamma_1)+\ell(\gamma_2)+2\ell(\delta)). \end{equation*} \noindent By Theorem \ref{Mirz vol lemma 0} we know that \begin{equation*} V_{0,3}(x,y,z)=1.
\end{equation*} \noindent By Lemma \ref{MP vol lemma} we know that \begin{equation*} V_{g-2,2}(x,y) \leq \frac{\sinh(\frac{x}{2})\sinh(\frac{y}{2})}{\frac{x}{2}\frac{y}{2}} V_{g-2,2} \leq \frac{e^{\frac{x+y}{2}}}{xy} V_{g-2,2}. \end{equation*} Then one may apply Mirzakhani's Integration Formula (see Theorem \ref{Mirz int formula}) to get \begin{eqnarray*} && \int_{\mathcal{M}_g} Z_2^{*0}(X,L) dX \\ &\leq& \int_{0\leq z\leq L} \int_{1.9L\leq x+y+2w\leq 2L; x,y,w\geq 0} 2000 \\ & & V_{0,3}(z,w,w) V_{0,3}(x,y,z) V_{g-2,2}(x,y) xyzw dxdydzdw \\ &\leq& 2000 V_{g-2,2} \int_{0\leq z\leq L} \int_{1.9L\leq x+y+2w\leq 2L; x,y,w\geq 0} e^{\frac{x+y}{2}}zwdxdydzdw \\ &\leq& c V_{g-2,2} L^3 e^{L} \end{eqnarray*} for some universal constant $c>0$. Similarly, for all $1\leq k \leq \frac{1}{2}(g-1)$ we have \begin{eqnarray*} && \int_{\mathcal{M}_g} Z_2^{*k}(X,L) dX \\ &\leq& \int_{0\leq z\leq L} \int_{1.9L\leq x+y+2w\leq 2L; x,y,w\geq 0} 2000 \\ & & V_{0,3}(z,w,w) V_{0,3}(x,y,z) V_{k,1}(x) V_{g-k-1,1}(y) xyzw dxdydzdw \\ &\leq& 2000 V_{k,1}V_{g-k-1,1} \int_{0\leq z\leq L} \int_{1.9L\leq x+y+2w\leq 2L; x,y,w\geq 0} e^{\frac{x+y}{2}}zwdxdydzdw \\ &\leq& c V_{k,1}V_{g-k-1,1} L^3 e^{L} \end{eqnarray*} for some universal constant $c>0$. By Lemmas \ref{Mirz vol lemma 1} and \ref{Mirz vol lemma 2} we know that \begin{equation*} V_{g-2,2}=\frac{1}{(8\pi^2 g)^2}V_g (1+O(\frac{1}{g})) \end{equation*} and \begin{equation*} \sum_{1\leq k\leq\frac{1}{2}(g-1)} V_{k,1}V_{g-k-1,1} =O\left( \frac{V_g}{g^3}\right). \end{equation*} Therefore we have \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[Z_2^*(X,L)] &=& \mathbb{E}_{\rm WP}^g[Z_2^{*0}(X,L)] + \sum_{1\leq k\leq\frac{1}{2}(g-1)} \mathbb{E}_{\rm WP}^g[Z_2^{*k}(X,L)] \\ &\leq& c L^3 e^L \cdot \frac{V_{g-2,2} + \sum \limits_{1\leq k\leq\frac{1}{2}(g-1)} V_{k,1}V_{g-k-1,1}}{V_g} \\ &\leq& c L^3 e^L \frac{1}{g^2} \end{eqnarray*} for some universal constant $c>0$. The proof is complete. \end{proof} Now we are ready to prove Proposition \ref{E[Z]/E[N]}.
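Before giving the proof, we record the elementary comparison used to merge the bounds from Lemmas \ref{E[Z1*]} and \ref{E[Z2*]} into a single term (a routine calculus check): the function $L^3e^{-0.05L}$ attains its maximum on $(0,\infty)$ at $L=60$, so
\begin{equation*}
L^6 e^{0.95L} = \left(L^3e^{-0.05L}\right) L^3 e^{L} \leq 60^3e^{-3}\, L^3 e^{L} \qquad \text{for all } L>0,
\end{equation*}
and hence $L^6e^{0.95L}+L^3e^L\leq C\,L^3e^L$ for a universal constant $C>0$.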
\begin{proof} [Proof of Proposition \ref{E[Z]/E[N]}] Recall that $L=L(g)=2\log g -4\log \log g +\omega(g)$. Thus, it follows by Lemmas \ref{E[Z1*]} and \ref{E[Z2*]} that there exists a universal constant $c>0$ such that \begin{eqnarray*} \mathbb{E}_{\rm WP}^g[Z^*(X,L)] &=& \mathbb{E}_{\rm WP}^g[Z_1^*(X,L)]+\mathbb{E}_{\rm WP}^g[Z_2^*(X,L)] \\ &\leq& c \left(L^6e^{0.95L} + L^3 e^L\right) \frac{1}{g^2} \\ &\leq& c L^3 e^L \frac{1}{g^2}. \end{eqnarray*} For \eqref{tend to 0 (3)}, by Lemma \ref{E[N]} we know that as $g\to \infty$, \[\mathbb{E}_{\rm WP}^g[N_{1,1}(X,L)] \sim \frac{1}{192\pi^2} L^2 e^{\frac{L}{2}} \frac{1}{g}.\] Thus, we have \begin{equation*} \frac{\mathbb{E}_{\rm WP}^g[Z^*(X,L)]}{\mathbb{E}_{\rm WP}^g[N^*_{1,1}(X,L)]^2}=O(\frac{1}{L}) \rightarrow 0 \end{equation*} as $g\rightarrow\infty$, which proves \eqref{tend to 0 (3)}. \end{proof} Now we finish the proof of Theorem \ref{prop upper bound}. \begin{proof} [Proof of Theorem \ref{prop upper bound}] By the definition of $N_{1,1}(X,L)$ we have \begin{eqnarray*} &&\lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in \mathcal{M}_g; \ X\in\mathcal A(\omega(g)) \big)=\lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(N_{1,1}(X,L)\geq 1 \big)\\ &&=1-\lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(N_{1,1}(X,L)=0 \big). \end{eqnarray*} \noindent By Propositions \ref{N-N*}, \ref{E[Y]-E[Y*]} and \ref{E[Z]/E[N]} and Equation \eqref{prob(N*=0) leq 3 parts} we have \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(N^*_{1,1}(X,L)=0 \big)=0. \end{equation*} Since $N^*_{1,1}(X,L)\leq N_{1,1}(X,L)$, \begin{equation*} \lim_{g\rightarrow\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(N_{1,1}(X,L)=0 \big)=0. \end{equation*} The proof is complete. \end{proof} \section{Half-collars and separating extremal length systole}\label{sec:half collar} In this section, we prove Theorems \ref{thm:half collar} and \ref{cor:extremal}.
\subsection{Half-collars}\label{subsection half collar} While a \emph{collar} of a simple closed geodesic $\gamma$ on a complete hyperbolic surface means an equidistant neighborhood $U$ of $\gamma$ homeomorphic to a cylinder, we will mainly consider a half of $U$ cut out by $\gamma$, which we call a \emph{half-collar}. More precisely: \begin{definition*} Given $l,w>0$, let $C_{l,w}$ denote the hyperbolic cylinder with boundary as shown in Figure \ref{figure:collar}. \begin{figure}[h] \centering \includegraphics[width=3cm]{collar.pdf} \caption{The cylinder $C_{l,w}$. It has a geodesic boundary component of length $l$, and every point on the other boundary component has distance $w$ from the geodesic component.} \label{figure:collar} \end{figure} Given a hyperbolic surface $X$ and a simple closed geodesic $\gamma\subset X$, a \emph{half-collar} of width $w$ around $\gamma$ is by definition a subsurface $C\subset X$ isometric to $C_{\ell_\gamma(X),w}$ such that $\gamma$ is the geodesic boundary component of $C$. \end{definition*} The following result is standard: \begin{lemma}\label{prop:collar} Let $X$ be a compact hyperbolic surface of type $S_{g,1}$ with geodesic boundary $\gamma:=\partial X\approx\mathbb{S}^1$. Given $w>0$, the following conditions are equivalent: \begin{enumerate}[label=(\roman*)] \item\label{item:collar1} there is no half-collar of width $w$ around $\gamma$; \item\label{item:collar2} there is a simple geodesic arc $a\subset X$ of length $\leq2w$ with endpoints in $\gamma$. \end{enumerate} \end{lemma} \begin{proof} We first prove the implication ``\ref{item:collar2}$\Rightarrow$\ref{item:collar1}'' by showing that if \ref{item:collar1} fails then \ref{item:collar2} fails as well. So suppose there is a half-collar $C$ of width $w$ around $\gamma$ and let $a$ be any geodesic arc with endpoints in $\gamma$.
Since $a$ cannot be entirely contained in $C$ (otherwise it would give rise to a geodesic bigon, which is impossible), there are disjoint sub-arcs $a_1,a_2\subset a$, each of which joins a point of $\gamma$ with a point in the non-geodesic boundary component of $C$. It follows that $\ell(a)>\ell(a_1)+\ell(a_2)\geq 2w$, hence \ref{item:collar2} fails. We have thus shown ``\ref{item:collar2}$\Rightarrow$\ref{item:collar1}''. As for the implication ``\ref{item:collar1}$\Rightarrow$\ref{item:collar2}'', consider the $\epsilon$-neighborhood $U_\epsilon:=\{x\in X\,;\,d(x,\gamma)<\epsilon\}$ of $\gamma$ in $X$. When $\epsilon$ is small enough, the closure $\overline{U}_\epsilon$ is homeomorphic to a cylinder with boundary, hence is a half-collar of width $\epsilon$. As $\epsilon$ grows larger, there is a critical value $\epsilon_0$ at which $\overline{U}_{\epsilon_0}$ ceases to be a cylinder for the first time, characterized by the existence of a point $x_0\in X$ with $d(x_0,\gamma)=\epsilon_0$ such that $\overline{U}_{\epsilon_0}$ touches itself at $x_0$. One can then draw two geodesic segments of length $\epsilon_0$ from $x_0$ to $\gamma$ which fit together to form a simple geodesic arc of length $2\epsilon_0$. Now, Condition \ref{item:collar1} just means $\epsilon_0\leq w$. In this case, the arc that we just constructed implies that \ref{item:collar2} holds. This shows ``\ref{item:collar1}$\Rightarrow$\ref{item:collar2}''. \end{proof} We can now prove Theorem \ref{thm:half collar} by using Theorem \ref{main} and Proposition \ref{lower bound for chi geq 2}. \begin{theorem}[=Theorem \ref{thm:half collar}]\label{thm:half collar-1} Given any $\epsilon>0$, consider the following conditions defined for all $X\in \mathcal{M}_g$: \begin{itemize} \item[(c).] $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is achieved by a simple closed geodesic $\gamma$ separating $X$ into $S_{1,1}\cup S_{g-1,1}$; \item[(d).]
There is a half-collar around $\gamma$ in the $S_{g-1,1}$-part of $X$ with width $\frac{1}{2}\log g-\left(\frac{3}{2}+\epsilon\right)\log\log g$. \end{itemize} Then we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \textit{$X$ satisfies $(c)$ and $(d)$} \right)=1. $$ \end{theorem} \begin{proof} Fix the function $\omega(g)$ as in Theorem \ref{main} and let $\mathcal{A}_g$ and $\mathcal{B}_g$ denote the following subsets of $\mathcal{M}_g$: $$ \mathcal{A}_g:=\left\{X\in\mathcal{M}_g\,;\,\parbox[l]{6.8cm}{$|\ell_{\mathop{\rm sys}}^{\rm sep}(X)-(2\log g - 4\log \log g)| \leq \omega(g)$, and $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ is achieved only by simple closed geodesics bounding one-handles}\right\}, $$ $$ \mathcal{B}_g:=\{X\in\mathcal{M}_g\,;\,\mathcal{L}_{1,2}(X)> 4\log g-10\log\log g-\omega(g)\}. $$ Fix $\epsilon>0$ and suppose $g\geq3$ satisfies \begin{equation}\label{eqn:proof half collar} \omega(g)< 2\epsilon\log\log g. \end{equation} We claim that every $X\in\mathcal{A}_g\cap\mathcal{B}_g$ satisfies the condition stated in Theorem \ref{thm:half collar}. That is, for any simple closed geodesic $\gamma$ achieving $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$, which separates $X$ into $S_{1,1}\cup S_{g-1,1}$ because $X\in\mathcal{A}_g$, there is a half-collar around $\gamma$ in the $S_{g-1,1}$-part of $X$ with width $\frac{1}{2}\log g-\left(\frac{3}{2}+\epsilon\right)\log\log g$. Suppose by contradiction that the claim is false. Then by Lemma \ref{prop:collar}, there exist an $X\in\mathcal{A}_g\cap\mathcal{B}_g$, a simple closed geodesic $\gamma\subset X$ achieving $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$, and a simple geodesic arc $a$ in the $S_{g-1,1}$-part of $X$ with endpoints on $\gamma$, such that $$ \ell(a)\leq\log g-\left(3+2\epsilon\right)\log\log g.
$$ In this situation, there are simple closed geodesics $\gamma_1$ and $\gamma_2$ homotopic to the two closed piecewise geodesics formed by $a$ and the two arcs of $\gamma$ split out by $a$, respectively (see Figure \ref{figure:arc}), \begin{figure}[h] \includegraphics[width=4.3cm]{arc.pdf} \caption{From an arc to a pair-of-pants} \label{figure:arc} \end{figure} such that $\gamma_1$, $\gamma_2$ and $\gamma$ together bound a pair of pants outside of the one-handle $X_\gamma$. Since each of $\gamma_1$ and $\gamma_2$ is shorter than the corresponding closed piecewise geodesic, we have \begin{align} &\ell(\gamma_1)+\ell(\gamma_2)\leq \ell(\gamma)+2\ell(a)=\ell_{\mathop{\rm sys}}^{\rm sep}(X)+2\ell(a)\label{eqn:proof half collar 2}\\ &\leq 2\log g-4\log\log g+\omega(g)+2\big(\log g-(3+2\epsilon)\log\log g\big)\nonumber\\ &=4\log g-(10+4\epsilon)\log\log g+\omega(g).\nonumber \end{align} But on the other hand, by definition of $\mathcal{L}_{1,2}(X)$ (see the definition in the Introduction) and the assumption $X\in\mathcal{B}_g$, we have $$ \ell(\gamma_1)+\ell(\gamma_2)\geq\mathcal{L}_{1,2}(X)> 4\log g-10\log\log g-\omega(g). $$ This leads to a contradiction because by \eqref{eqn:proof half collar}, the lower bound of $\ell(\gamma_1)+\ell(\gamma_2)$ here is greater than the upper bound in \eqref{eqn:proof half collar 2}. We have thus shown the claim. As $g\to\infty$, since $\mathop{\rm Prob}\nolimits_{\rm WP}^g(\mathcal{A}_g)$ and $\mathop{\rm Prob}\nolimits_{\rm WP}^g(\mathcal{B}_g)$ both tend to $1$ by Theorem \ref{main} and Proposition \ref{lower bound for chi geq 2}, we have $\mathop{\rm Prob}\nolimits_{\rm WP}^g(\mathcal{A}_g\cap\mathcal{B}_g)\to 1$ as well. In view of the above claim, this implies the required statement. 
\end{proof} \subsection{Extremal length}\label{subsec-el} Given a Riemann surface $U$ and a set $\Gamma$ of rectifiable curves on $U$, the \emph{extremal length} $\rm Ext_\Gamma(U)$ of $\Gamma$ is defined as (\textit{e.g.\@ } see \cite[Chapter 4]{Ahlfors-ci} and \cite[Section 3]{Kerck80}) $$ \rm Ext_\Gamma(U):=\sup_{\sigma}\frac{\inf_{\alpha\in\Gamma}\ell_\sigma(\alpha)^2}{A_\sigma(U)}, $$ where the supremum is over all Borel-measurable conformal metrics $\sigma$ on $U$, and $\ell_\sigma(\alpha)$ and $A_\sigma(U)$ denote the length of $\alpha$ and the area of $U$ under $\sigma$, respectively. In particular, given a closed hyperbolic surface $X\in\mathcal{M}_g$ and a simple closed geodesic $\gamma\subset X$, we denote $$ \rm Ext_\gamma(X):=\rm Ext_{\Gamma_\gamma}(X) $$ for the set $\Gamma_\gamma$ of all rectifiable closed curves on $X$ homotopic to $\gamma$. We then define the \emph{separating extremal length systole} $\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)$ of $X$ as $$ \rm Ext_{\mathop{\rm sys}}^{\rm sep}(X):=\inf_\gamma\rm Ext_\gamma(X), $$ where the infimum is over all separating simple closed geodesics on $X$. Maskit \cite{Maskit} established some basic relations between the extremal length $\rm Ext_\gamma(X)$ and the hyperbolic length $\ell_\gamma(X)$. The following lemma is a reformulation of \cite[Prop.\@ 1]{Maskit}: \begin{lemma}\label{lemma:extremal} Let $X\in\mathcal{M}_g$. For any simple closed geodesic $\gamma\subset X$, we have $$ \ell_\gamma(X)\leq \pi \rm Ext_\gamma(X). $$ Conversely, if there exists a half-collar around $\gamma$ with width $w$, then $$ \ell_\gamma(X)\geq2\big(\arctan(e^w)-\tfrac{\pi}{4}\big)\rm Ext_\gamma(X). $$ \end{lemma} \begin{proof} The first inequality is exactly Inequality (2) in \cite[Prop.\@ 1]{Maskit}. 
On the other hand, Inequality (1) in \cite[Prop.\@ 1]{Maskit} implies that if we identify the universal cover of $X$ with the upper half-plane $\mathbb{H}^2$ in such a way that $\gamma$ lifts to $\boldsymbol{i}\mathbb{R}_+$, and assume that $\gamma$ has a half-collar $C$ which lifts to $$ \left\{z\,;\,\tfrac{\pi}{2}-\theta\leq \arg(z)\leq \tfrac{\pi}{2}\right\} $$ for some $\theta\in(0,\frac{\pi}{2})$, then $\ell_\gamma(X)\geq \theta\rm Ext_\gamma(X)$. By an elementary hyperbolic-geometric calculation, the width $w$ of $C$ is related to $\theta$ by $\cosh(w)=\frac{1}{\cos\theta}$, which is equivalent to $2\big(\arctan(e^w)-\tfrac{\pi}{4}\big)=\theta$ (this can be seen by using the trigonometric identity $\tan(\phi)+\cot(\phi)=2\csc(2\phi)$). The second required inequality follows. \end{proof} We can now deduce Theorem \ref{cor:extremal} from Theorem \ref{main}. \begin{theorem}\label{cor:extremal-1} Given any $\epsilon>0$, we have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \frac{\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)}{\ell_{\mathop{\rm sys}}^{\rm sep}(X)}< \frac{2+\epsilon}{\pi} \right)=1. $$ As a consequence of Theorem \ref{main}, we also have $$ \lim \limits_{g\to \infty} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in \mathcal{M}_g; \ \frac{(2-\epsilon)}{\pi}\log g< \rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)< \frac{(4+\epsilon)}{\pi}\log g \right)=1. $$ \end{theorem} \begin{proof} Let $\mathcal{A}_g$ denote the subset of $\mathcal{M}_g$ consisting of those $X\in\mathcal{M}_g$ satisfying the conditions in Theorems \ref{main} and \ref{thm:half collar}. The sequence $(\mathcal{A}_g)$ has the property that given any $w>0$, every $X\in\mathcal{A}_g$ with $g$ large enough contains a half-collar of width $w$ around some separating simple closed geodesic $\gamma$ with $\ell_\gamma(X)=\ell_{\mathop{\rm sys}}^{\rm sep}(X)$.
Now fix $\epsilon>0$ and let $w_\epsilon>0$ be large enough such that $$ \frac{1}{2(\arctan(e^{w_\epsilon})-\frac{\pi}{4})}\leq\frac{2+\epsilon}{\pi}. $$ For every $X\in\mathcal{A}_g$ with $g$ large enough, letting $\gamma\subset X$ be the separating simple closed geodesic described in the property above, which achieves $\ell_{\mathop{\rm sys}}^{\rm sep}(X)$ and has a half-collar of width $w_\epsilon$, by Lemma \ref{lemma:extremal} we have \begin{eqnarray*} \frac{\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)}{\ell_{\mathop{\rm sys}}^{\rm sep}(X)}&=&\frac{\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X)}{\ell_\gamma(X)}\\ &\leq&\frac{\rm Ext_\gamma(X)}{\ell_\gamma(X)}\\ &\leq& \frac{1}{2(\arctan(e^{w_\epsilon})-\frac{\pi}{4})}\\ &\leq&\frac{2+\epsilon}{\pi}. \end{eqnarray*} Therefore, the first statement of the theorem follows from Theorem \ref{main}. The second statement is then a consequence of the first statement, the fact that $$ \ell_{\mathop{\rm sys}}^{\rm sep}(X)\leq\pi\rm Ext_{\mathop{\rm sys}}^{\rm sep}(X), $$ which follows from Lemma \ref{lemma:extremal}, and the fact that $$ \lim_{g\to\infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g\big(X\in\mathcal{M}_g\,;\,(2-\epsilon)\log g< \ell_{\mathop{\rm sys}}^{\rm sep}(X)< (2+\epsilon)\log g\big)=1, $$ which also follows from Theorem \ref{main}. The proof is now complete. \end{proof} \section{Expectation value of $\mathcal{L}_1$}\label{section exp-L1} In this section we consider the expectation value of $\mathcal{L}_1$ over $\mathcal{M}_g$ and prove Theorem \ref{cor E[L1]}. In contrast to $\ell_{\mathop{\rm sys}}^{\rm sep}$, which is unbounded on $\mathcal{M}_g$, we first show that $\mathcal{L}_1$ grows at most logarithmically: \begin{proposition}\label{L1-upp} There exists a universal constant $C>0$ independent of $g$ such that $$\sup_{X\in \mathcal{M}_g}\mathcal{L}_1(X)\leq C \log g.$$ \end{proposition} \begin{proof} It suffices to consider the case that $g$ is large.
For any $X\in \mathcal{M}_g$, by \cite[Theorem 1.3]{Sabo08} of Sabourau we know that there exists a separating closed geodesic $\gamma' \subset X$, not necessarily simple, such that \[\ell_{\gamma'}(X)\leq C' \log g\] for some universal constant $C'>0$. If $\gamma'$ is simple, we are done. Now we assume that $\gamma'$ is non-simple and consider the $\varepsilon$-neighborhood $\mathcal{N}_{\varepsilon}(\gamma')$ of $\gamma'$, where $\varepsilon>0$ is small enough such that $\mathcal{N}_{\varepsilon}(\gamma')$ is homotopic to $\gamma'$ in $X$. Now, similarly to the construction in Section \ref{section union}, we obtain a surface $X(\gamma')$ by deforming the boundary $\partial \mathcal{N}_{\varepsilon}(\gamma')$ into simple closed geodesics; if a component $\alpha'$ of the boundary is homotopically trivial, we add the disk bounded by $\alpha'$ into $\mathcal{N}_{\varepsilon}(\gamma')$. (Here if two components of $\partial \mathcal{N}_{\varepsilon}(\gamma')$ deform to the same simple closed geodesic, we do not glue them together, \textit{i.e.\@} one may view $X(\gamma')$ as an open subsurface of $X$.) Since $X(\gamma')$ is freely homotopic to $\mathcal{N}_{\varepsilon}(\gamma')$ in $X$, $X(\gamma')$ is also freely homotopic to $\gamma'$ in $X$. Since $\gamma'$ is the unique closed geodesic representing its free homotopy class and $X(\gamma')\subset X$ is a subsurface with geodesic boundary, \[\gamma'\subset X(\gamma').\] Clearly we have \[\ell(\partial X(\gamma'))\leq 2\ell_{\gamma'}(X)\leq 2C' \log g.\] \noindent So by construction we know that the complement $X(\gamma')\setminus \gamma'= (\sqcup D_i) \sqcup (\sqcup C_j)$, where the subsets are pairwise disjoint, the $D_i$'s are disjoint discs and the $C_j$'s are disjoint cylinders.
By the elementary isoperimetric inequality (\textit{e.g.\@ } see \cite{Buser10, WX18}) we know that \[\mathop{\rm Area}(D_i)\leq \ell(\partial D_i) \quad \text{and} \quad \mathop{\rm Area}(C_j)\leq \ell(\partial C_j).\] Thus, we have \begin{eqnarray*} \mathop{\rm Area}(X(\gamma'))&=&\mathop{\rm Area} (X(\gamma')\setminus \gamma')=\sum(\mathop{\rm Area}(D_i))+\sum(\mathop{\rm Area}(C_j))\\ &\leq & \sum (\ell(\partial D_i))+\sum (\ell(\partial C_j))\\ &\leq & \ell(\partial X(\gamma'))+\ell_{\gamma'}(X) \\ &\leq & 3C'\log g. \end{eqnarray*} Recall that by Gauss-Bonnet we know that $\mathop{\rm Area}(X)=4\pi(g-1)$. So for large $g$, we have that $X(\gamma')$ is a proper subsurface of $X$. Clearly the boundary $\partial X(\gamma')$ consists of finitely many simple closed geodesics which together separate $X$. Hence, we have \[\mathcal{L}_1(X)\leq \ell(\partial X(\gamma'))\leq 2C'\log g,\] which completes the proof by setting $C=2C'>0$. \end{proof} \begin{rem*} Buser-Sarnak in \cite{BS94} showed that for all $g\geq 2$, there exists a hyperbolic surface $X_g\in \mathcal{M}_g$ such that $\ell_{\mathop{\rm sys}}(X_g)\geq K\log g$ for some uniform constant $K>0$ independent of $g$. Together with the proposition above, this implies that $$\sup_{X\in \mathcal{M}_g}\mathcal{L}_1(X)\asymp \log g.$$ \end{rem*} Now we are ready to prove Theorem \ref{cor E[L1]}. \begin{theorem}[=Theorem \ref{cor E[L1]}]\label{cor E[L1]-1} The expectation value $\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]$ of $\mathcal{L}_1(\cdot)$ on $\mathcal{M}_g$ satisfies \begin{equation*} \lim_{g\rightarrow\infty}\frac{\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]}{\log g} = 2.
\end{equation*} \end{theorem} \begin{proof} First we set \[\mathcal{B}(\omega(g))=\{X\in \mathcal{M}_g; \ |\mathcal{L}_1(X)-(2\log g-4 \log \log g)|\leq \omega(g)\}.\] By Theorem \ref{cor L1} we know that \[\lim_{g\rightarrow\infty} \frac{\mathop{\rm Vol}(\mathcal{B}(\omega(g)))}{V_g}=1.\] For the lower bound, we have \begin{eqnarray*} \frac{\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]}{\log g} &\geq & \frac{1}{V_g}\int_{\mathcal{B}(\omega(g))}\frac{\mathcal{L}_1(X) }{\log g}dX \\ &\geq & \frac{2\log g-4 \log \log g-\omega(g)}{\log g}\cdot \frac{\mathop{\rm Vol}(\mathcal{B}(\omega(g)))}{V_g} \end{eqnarray*} which implies that \[\liminf \limits_{g\to \infty}\frac{\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]}{\log g}\geq 2.\] For the upper bound, it follows by Proposition \ref{L1-upp} that \begin{eqnarray*} && \frac{\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]}{\log g} = \frac{1}{V_g}\int_{\mathcal{B}(\omega(g))}\frac{\mathcal{L}_1(X) }{\log g}dX+ \frac{1}{V_g}\int_{\mathcal{M}_g\setminus \mathcal{B}(\omega(g))}\frac{\mathcal{L}_1(X) }{\log g}dX \\ &&\leq \frac{2\log g-4 \log \log g+\omega(g)}{\log g}\cdot \frac{\mathop{\rm Vol}(\mathcal{B}(\omega(g)))}{V_g}+ C \cdot \frac{\mathop{\rm Vol}(\mathcal{M}_g\setminus \mathcal{B}(\omega(g)))}{V_g}. \end{eqnarray*} Letting $g\to \infty$, we get \[\limsup \limits_{g\to \infty}\frac{\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]}{\log g}\leq 2.\] The proof is complete. \end{proof} \section{Further questions}\label{section questions} In this last section we propose several questions related to the results in this article. \subsection{Shortest separating simple closed multi-geodesics} By Theorem \ref{main} we know that on a generic $X\in \mathcal{M}_g$, a separating systolic closed geodesic of $X$ separates $X$ into $S_{1,1}\cup S_{g-1,1}$. However, by Theorem \ref{cor L1} we only know that on a generic $X\in \mathcal{M}_g$, a shortest separating simple closed multi-geodesic of $X$ separates $X$ into either $S_{1,1}\cup S_{g-1,1}$ or $S_{0,3}\cup S_{g-2,3}$.
A natural question is to determine the weights of these two cases. More precisely, \begin{question} On a generic $X\in \mathcal{M}_g$, is $\mathcal{L}_1(X)$ achieved by a separating systole as $g\to \infty$? \end{question} \subsection{Expectation of $\ell_{\mathop{\rm sys}}^{\rm sep}$} Theorem \ref{cor E[L1]} tells us that as $g\to \infty$, the expectation value $\mathbb{E}_{\rm WP}^g[\mathcal{L}_1]$ behaves like $2\log g$. The two ingredients in the proof are Theorem \ref{cor L1} and Proposition \ref{L1-upp}; the latter says that $\sup_{X\in\mathcal{M}_g}\mathcal{L}_1(X)\leq C \log g$ for some universal constant $C>0$. For $\ell_{\mathop{\rm sys}}^{\rm sep}$, although we still have the first ingredient, namely Theorem 1, the second is missing because it is known that $\sup_{X\in \mathcal{M}_g}\ell_{\mathop{\rm sys}}^{\rm sep}(X)=\infty$. So we raise the following question: \begin{question} Does the following limit hold: $$\lim \limits_{g\to \infty}\frac{\mathbb{E}_{\rm WP}^g[\ell_{\mathop{\rm sys}}^{\rm sep}]}{\log g} = 2?$$ \end{question} \subsection{Geometric Cheeger constants} Recall that, as in the Introduction, for all $1\leq m\leq g-1$ the \emph{$m$-th geometric Cheeger constant} $H_m(X)$ of $X$ is defined as \[H_m(X):= \inf \limits_{\gamma}\frac{\ell_{\gamma}(X)}{2\pi m}\] where $\gamma$ is a simple closed multi-geodesic on $X$ with $X\setminus \gamma=X_1\cup X_2$, and $X_1$ and $X_2$ are connected subsurfaces of $X$ such that $|\chi(X_1)|=m\leq |\chi(X_2)|$. As a direct consequence of Theorem \ref{cor L1}, the first geometric Cheeger constant $H_1(\cdot)$ on $\mathcal{M}_g$ asymptotically behaves as \[\lim \limits_{g\to \infty}\mathop{\rm Prob}\nolimits_{\rm WP}^g \left(X\in \mathcal{M}_g; \ (1-\epsilon)\cdot \frac{\log g}{\pi}< H_1(X)< \frac{\log g}{\pi}\right)=1\] for any $\epsilon>0$. A natural question is to study general $H_m$.
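The displayed asymptotic for $H_1$ can be checked directly from Theorem \ref{cor L1}, assuming (as the definitions suggest) that the infimum defining $H_1$ is attained by a shortest separating simple closed multi-geodesic with $|\chi(X_1)|=1$, so that $H_1(X)=\mathcal{L}_1(X)/2\pi$; we record the short computation for convenience:

```latex
% A short verification of the displayed asymptotic for H_1, assuming
% H_1(X) = L_1(X)/(2 pi) as suggested by the definitions above.
On a generic $X\in\mathcal{M}_g$ we have
\[
H_1(X)=\frac{\mathcal{L}_1(X)}{2\pi},
\qquad
|\mathcal{L}_1(X)-(2\log g-4\log\log g)|\leq \omega(g),
\]
hence
\[
\frac{\log g-2\log\log g-\tfrac{1}{2}\omega(g)}{\pi}
\leq H_1(X)\leq
\frac{\log g-2\log\log g+\tfrac{1}{2}\omega(g)}{\pi},
\]
and for any fixed $\epsilon>0$ both bounds lie in
$\left((1-\epsilon)\frac{\log g}{\pi},\ \frac{\log g}{\pi}\right)$
once $g$ is large, since $\omega(g)$ is negligible compared with
$\log\log g$ (cf.\ \eqref{eq-omega}).
```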
\begin{question}\label{ques-hm} For $m\in [1,g-1]$, what is the asymptotic behavior of $H_m(\cdot)$ on $\mathcal{M}_g$ as $g\to \infty$? \end{question} \noindent This question is related to \cite[Problem 10.5]{Wright-tour} of Wright on the asymptotic behavior of the classical Cheeger constant $h(X)$ of $X$, because $H(X):=\min_{1\leq m \leq g-1}H_m(X)$ serves as a natural upper bound for $h(X)$. For fixed $m>0$ independent of $g$, the question above may be reduced to studying the following explicit one: \begin{question}\label{ques-L1m} Let $\omega(g)$ be a function as in \eqref{eq-omega} and let $m>0$ be fixed. Does the following limit hold: as $g\to \infty$, \begin{equation*} \mathop{\rm Prob}\nolimits_{\rm WP}^g\left(X\in\mathcal{M}_g;\ |\mathcal{L}_{1,m}(X) - (2m\log g - (6m-2)\log\log g)| \leq \omega(g)\right)\to 1? \end{equation*} \end{question} \noindent Theorem \ref{cor L1} answers Questions \ref{ques-hm} and \ref{ques-L1m} for $m=1$. By Proposition \ref{lower bound for chi geq 2}, it suffices to study the upper bound. \bibliographystyle{amsalpha}
\section{Introduction} Recently, subsolutions of the Duffin-Kemmer-Petiau (DKP) equations were found and it was shown that the subsolutions fulfill the appropriately projected Dirac equation \cite{Okninski2003,Okninski2004}. On the other hand, massive subsolutions of the Dirac equation were also found and studied \cite{Okninski2007}. In the present paper we demonstrate that subsolutions of the DKP equations and those of the Dirac equation obey the same Dirac equation with some built-in projection operator. This equation was shown to be covariant in our earlier paper \cite{Okninski2007}. We shall refer to this equation as supersymmetric since it has bosonic (spin $0$ and $1$) as well as fermionic (spin $\frac{1}{2}$) degrees of freedom. Some of the results described below were derived earlier but are included for the sake of completeness. The paper is organized as follows. In Section 2 the Dirac as well as the DKP equations are described briefly. The DKP equations for $s=0$ are written as a set of two $3\times 3$ equations in Section 3 (the case $s=1$ leads to analogous equations \cite{Okninski2003}) and it is shown that their solutions fulfill the Dirac equation. In Section 4 subsolutions of the Dirac equation are described. Then in Section 5 it is demonstrated that all these subsolutions obey the Dirac equation with a built-in projection operator, referred to henceforth as the supersymmetric equation. Finally, the supersymmetric equation is written in representation-independent form (its covariance was verified in \cite{Okninski2007}). \section{Relativistic equations} In what follows tensor indices are denoted with Greek letters, $\mu =0,1,2,3$. We shall use the following convention for the Minkowski space-time metric tensor: $g^{\mu \nu }=$ \textrm{diag}$\left( 1,-1,-1,-1\right) $ and we shall always sum over repeated indices. Four-momentum operators are defined in natural units, $c=1$, $\hslash =1$, as $p^{\mu }=i\frac{\partial }{\partial x_{\mu }}$.
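For later use, let us spell out what these conventions give for the scalar $p_{\mu }p^{\mu }$ (a routine index computation, recorded here for convenience):

```latex
% With g^{mu nu} = diag(1,-1,-1,-1), lowering an index flips the sign of
% the spatial components:
\[
p_{0}=p^{0},\qquad p_{j}=-p^{j}\quad (j=1,2,3),
\]
so that
\[
p_{\mu }p^{\mu }=\left( p^{0}\right) ^{2}-\left( p^{1}\right) ^{2}
-\left( p^{2}\right) ^{2}-\left( p^{3}\right) ^{2},
\]
which is the operator appearing in the Klein-Gordon equation below.
```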
\subsection{Dirac equation} The Dirac equation is a relativistic quantum mechanical wave equation formulated by Paul Dirac in 1928, providing a description of elementary spin-$\frac{1}{2}$ particles, such as electrons and quarks, consistent with both the principles of quantum mechanics and the theory of special relativity \cite{Dirac1928}. The Dirac equation is \cite{Bjorken1964, Berestetskii1971, Thaller1992}: \begin{equation} \gamma ^{\mu }p_{\mu }\Psi =m\Psi , \label{Dirac1} \end{equation} where $m$ is the rest mass of the elementary particle. The $\gamma $'s are $4\times 4$ anticommuting Dirac matrices: $\gamma ^{\mu }\gamma ^{\nu }+\gamma ^{\nu }\gamma ^{\mu }=2g^{\mu \nu }I$, where $I$ is a unit matrix. In the spinor representation of the Dirac matrices we have $\gamma ^{0}=\left( \begin{array}{cc} \mathbf{0} & \sigma ^{0} \\ \sigma ^{0} & \mathbf{0} \end{array} \right) $, $\gamma ^{j}=\left( \begin{array}{cc} \mathbf{0} & -\sigma ^{j} \\ \sigma ^{j} & \mathbf{0} \end{array} \right) $, $j=1,2,3$, $\gamma ^{5}=\left( \begin{array}{cc} \sigma ^{0} & \mathbf{0} \\ \mathbf{0} & -\sigma ^{0} \end{array} \right) $. The wave function is a bispinor, i.e. consists of $2$ two-component spinors $\xi $, $\eta $: $\Psi =\left( \begin{array}{c} \xi \\ \eta \end{array} \right) $. \subsection{Duffin-Kemmer-Petiau equations} The DKP equations for spin $0$ and $1$ are written as: \begin{equation} \beta _{\mu }p^{\mu }\Psi =m\Psi , \label{KDP-s0,1} \end{equation} with $5\times 5$ and $10\times 10$ matrices $\beta ^{\mu }$, respectively, which fulfill the following commutation relations \cite{Duffin1938, Kemmer1939}: \begin{equation} \beta ^{\lambda }\beta ^{\mu }\beta ^{\nu }+\beta ^{\nu }\beta ^{\mu }\beta ^{\lambda }=g^{\lambda \mu }\beta ^{\nu }+g^{\nu \mu }\beta ^{\lambda }. \label{algebra-b} \end{equation} In the case of the $5\times5$ (spin-$0$) representation of the $\beta^{\mu}$ matrices, Eq.(\ref{KDP-s0,1}) is equivalent to the following set of equations: \begin{equation} \left.
\begin{array}{ccc} p^{\mu}\psi & = & m\psi^{\mu} \\ p_{\nu}\psi^{\nu} & = & m\psi \end{array} \right\} , \label{KDP-s0-1} \end{equation} if we define $\Psi$ in (\ref{KDP-s0,1}) as: \begin{equation} \Psi=\left( \psi^{\mu},\psi\right) ^{T}=\left( \psi^{0},\psi^{1},\psi ^{2},\psi^{3},\psi\right) ^{T}, \label{wavef-0} \end{equation} where $^{T}$ denotes transposition of a matrix. Let us note that Eq.(\ref{KDP-s0-1}) can be obtained by factorizing second-order derivatives in the Klein-Gordon equation $p_{\mu}p^{\mu}\,\psi=m^{2}\psi$. In the case of the $10\times10$ (spin-$1$) representation of the matrices $\beta^{\mu }$, Eq.(\ref{KDP-s0,1}) reduces to: \begin{equation} \left. \begin{array}{ccc} p^{\mu }\psi ^{\nu }-p^{\nu }\psi ^{\mu } & = & m\psi ^{\mu \nu } \\ p_{\mu }\psi ^{\mu \nu } & = & m\psi ^{\nu } \end{array} \right\} , \label{KDP-s1-1} \end{equation} with the following definition of $\Psi $ in (\ref{KDP-s0,1}): \begin{equation} \Psi =\left( \psi ^{\mu \nu },\psi ^{\lambda }\right) ^{T}=\left( \psi ^{01},\psi ^{02},\psi ^{03},\psi ^{23},\psi ^{31},\psi ^{12},\psi ^{0},\psi ^{1},\psi ^{2},\psi ^{3}\right) ^{T}, \label{wavef-1} \end{equation} where $\psi ^{\lambda }$ are real and $\psi ^{\mu \nu }$ are purely imaginary (in an alternative formulation we have $-\partial ^{\mu }\psi ^{\nu }+\partial ^{\nu }\psi ^{\mu }=m\psi ^{\mu \nu }$, $\partial _{\mu }\psi ^{\mu \nu }=m\psi ^{\nu }$, where $\psi ^{\lambda }$, $\psi ^{\mu \nu }$ are real). Because of the antisymmetry of $\psi ^{\mu \nu }$ we have $p_{\nu }\psi ^{\nu }=0$, which implies the spin-$1$ condition. The set of equations (\ref{KDP-s1-1}) was first written by Proca \cite{Proca1936} and in a different context by Lanczos \cite{Lanczos1929}. More on the rich history of the formalism of Duffin, Kemmer and Petiau can be found in \cite{Bogush2007}.
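To make the earlier remark about the Klein-Gordon equation explicit, one can eliminate the auxiliary fields from the first-order systems above; the computation is one line in each case:

```latex
% Spin 0: substitute the first equation of (\ref{KDP-s0-1}) into the second:
\[
m^{2}\psi =m\,p_{\nu }\psi ^{\nu }=p_{\nu }p^{\nu }\psi .
\]
% Spin 1: apply p_mu to the first equation of (\ref{KDP-s1-1}) and use
% p_nu psi^nu = 0:
\[
m^{2}\psi ^{\nu }=m\,p_{\mu }\psi ^{\mu \nu }
=p_{\mu }p^{\mu }\psi ^{\nu }-p^{\nu }\left( p_{\mu }\psi ^{\mu }\right)
=p_{\mu }p^{\mu }\psi ^{\nu }.
\]
```

Thus each component field obeys the Klein-Gordon equation with mass $m$.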
\section{Splitting the spin-$0$ Duffin-Kemmer-Petiau equations} Four-vectors $\psi ^{\mu }=\left( \psi ^{0},\mathbf{\psi }\right) $ and spinors $\zeta ^{A\dot{B}}$ are related by the formula: \begin{equation} \zeta ^{A\dot{B}}=\left( \sigma ^{0}\psi ^{0}+\mathbf{\sigma }\cdot \mathbf{\psi }\right) ^{A\dot{B}}=\left( \begin{array}{cc} \zeta ^{1\dot{1}} & \zeta ^{1\dot{2}} \\ \zeta ^{2\dot{1}} & \zeta ^{2\dot{2}} \end{array} \right) =\left( \begin{array}{cc} \psi ^{0}+\psi ^{3} & \psi ^{1}-i\psi ^{2} \\ \psi ^{1}+i\psi ^{2} & \psi ^{0}-\psi ^{3} \end{array} \right) , \label{4vector-spinor} \end{equation} where $A,\dot{B}$ number rows and columns, respectively, $\sigma ^{j}$, $j=1,2,3$, are the Pauli matrices, and $\sigma ^{0}$ is the unit matrix. For details of the spinor calculus the reader should consult \cite{Berestetskii1971,MTW1973,Corson1953}. Equations (\ref{KDP-s0-1}) can be written within the spinor formalism as: \begin{equation} \left. \begin{array}{ccc} p^{A\dot{B}}\psi & = & m\psi ^{A\dot{B}} \\ p_{A\dot{B}}\psi ^{A\dot{B}} & = & 2m\psi \end{array} \right\} . \label{KDP-s0-2} \end{equation} It follows from (\ref{KDP-s0-2}) that $mp_{A\dot{B}}\psi ^{A\dot{B}}=p_{A\dot{B}}p^{A\dot{B}}\psi $ and $p_{A\dot{B}}p^{A\dot{B}}\psi =2m^{2}\psi $. Moreover, $p_{A\dot{B}}p^{A\dot{B}}=p_{1\dot{1}}p^{1\dot{1}}+p_{2\dot{1}}p^{2\dot{1}}+p_{1\dot{2}}p^{1\dot{2}}+p_{2\dot{2}}p^{2\dot{2}}=2p_{\mu }p^{\mu }$ and the Klein-Gordon equation $p_{\mu }p^{\mu }\psi =m^{2}\psi $ follows. Let us note that due to the spinor identities $p_{1\dot{1}}p^{1\dot{1}}+p_{2\dot{1}}p^{2\dot{1}}=p_{\mu }p^{\mu }$, $p_{1\dot{2}}p^{1\dot{2}}+p_{2\dot{2}}p^{2\dot{2}}=p_{\mu }p^{\mu }$ we can split the last of equations (\ref{KDP-s0-2}) and write Eqs.(\ref{KDP-s0-2}) as a set of two equations \begin{equation} \left.
\begin{array}{r} p^{1\dot{1}}\psi =m\psi ^{1\dot{1}} \\ p^{2\dot{1}}\psi =m\psi ^{2\dot{1}} \\ p_{1\dot{1}}\psi ^{1\dot{1}}+p_{2\dot{1}}\psi ^{2\dot{1}}=m\psi \end{array} \right\} , \label{const-s0-1} \end{equation} \begin{equation} \left. \begin{array}{r} p^{1\dot{2}}\psi =m\psi ^{1\dot{2}} \\ p^{2\dot{2}}\psi =m\psi ^{2\dot{2}} \\ p_{1\dot{2}}\psi ^{1\dot{2}}+p_{2\dot{2}}\psi ^{2\dot{2}}=m\psi \end{array} \right\} , \label{const-s0-2} \end{equation} each of which describes a particle with mass $m$ (we check this by substituting e.g. $\psi ^{1\dot{1}}$, $\psi ^{2\dot{1}}$ or $\psi ^{1\dot{2}}$, $\psi ^{2\dot{2}}$ into the third equations). Eq. (\ref{KDP-s0-2}) and the set of two equations (\ref{const-s0-1}), (\ref{const-s0-2}) are equivalent. We described equations (\ref{const-s0-1}), (\ref{const-s0-2}) in \cite{Okninski1981,Okninski1982}. From each of equations (\ref{const-s0-1}), (\ref{const-s0-2}) an identity follows: \begin{eqnarray} p^{2\dot{1}}\psi ^{1\dot{1}} &=&p^{1\dot{1}}\psi ^{2\dot{1}}, \label{identities0-a} \\ p^{2\dot{2}}\psi ^{1\dot{2}} &=&p^{1\dot{2}}\psi ^{2\dot{2}}.
\label{identities0-b} \end{eqnarray} Equation (\ref{const-s0-1}) and the identity (\ref{identities0-a}), as well as equation (\ref{const-s0-2}) and the identity (\ref{identities0-b}), can be written in the form of the Dirac equations \begin{equation} \left( \begin{array}{cccc} 0 & 0 & p^{0}+p^{3} & p^{1}-ip^{2} \\ 0 & 0 & p^{1}+ip^{2} & p^{0}-p^{3} \\ p^{0}-p^{3} & -p^{1}+ip^{2} & 0 & 0 \\ -p^{1}-ip^{2} & p^{0}+p^{3} & 0 & 0 \end{array} \right) \left( \begin{array}{c} \psi ^{1\dot{1}} \\ \psi ^{2\dot{1}} \\ \chi \\ 0 \end{array} \right) =m\left( \begin{array}{c} \psi ^{1\dot{1}} \\ \psi ^{2\dot{1}} \\ \chi \\ 0 \end{array} \right) , \label{A-DKP} \end{equation} \begin{equation} \left( \begin{array}{cccc} 0 & 0 & p^{0}-p^{3} & p^{1}+ip^{2} \\ 0 & 0 & p^{1}-ip^{2} & p^{0}+p^{3} \\ p^{0}+p^{3} & -p^{1}-ip^{2} & 0 & 0 \\ -p^{1}+ip^{2} & p^{0}-p^{3} & 0 & 0 \end{array} \right) \left( \begin{array}{c} \psi ^{2\dot{2}} \\ \psi ^{1\dot{2}} \\ \chi \\ 0 \end{array} \right) =m\left( \begin{array}{c} \psi ^{2\dot{2}} \\ \psi ^{1\dot{2}} \\ \chi \\ 0 \end{array} \right) , \label{B-DKP} \end{equation} respectively, with one zero component, where explicit formulae for the spinor $p^{A\dot{B}}$ were used, cf. (\ref{4vector-spinor}). \section{Subsolutions of the Dirac equation} \subsection{Classical subsolutions of the Dirac equation} In the $m=0$ case it is possible to obtain two independent equations for the spinors $\xi$, $\eta$ by application of the projection operators $Q_{\pm}=\frac{1}{2}\left( 1\pm\gamma^{5}\right) $ to Eq.(\ref{Dirac1}), since $\gamma^{5}\overset{df}{=}-i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$ anticommutes with $\gamma^{\mu}p_{\mu}$: \begin{equation} Q_{\pm}\gamma^{\mu}p_{\mu}\Psi=\gamma^{\mu}p_{\mu}\left( Q_{\mp}\Psi\right) =0.
\label{DiracNeutrino} \end{equation} In the spinor representation of the Dirac matrices \cite{Berestetskii1971} we have $\gamma^{5}=\ \mathrm{diag\,}\left( -1,-1,1,1\right) $ and thus $Q_{-}\Psi=\left( \begin{array}{c} \xi \\ 0 \end{array} \right) $, $Q_{+}\Psi=\left( \begin{array}{c} 0 \\ \eta \end{array} \right) $, and separate equations for $\xi$, $\eta$ follow: \begin{subequations} \label{WEYL} \begin{align} \left( p^{0}+\overrightarrow{\sigma}\cdot\overrightarrow{p}\right) \eta & =0, \label{Weyl1} \\ \left( p^{0}-\overrightarrow{\sigma}\cdot\overrightarrow{p}\right) \xi & =0, \label{Weyl2} \end{align} where $\overset{\rightarrow}{\sigma}$ denotes the vector built of the Pauli matrices. Equations (\ref{WEYL}) are known as the Weyl equations and are used to describe massless left-handed and right-handed neutrinos. However, since the experimentally established phenomenon of neutrino oscillations requires non-zero neutrino masses, a theory of massive neutrinos, which can be based on the Dirac equation, is necessary \cite{Zralek1997, Perkins2000,Fukugita2003}. Alternatively, a modification of the Dirac or Weyl equation, called the Majorana equation, is thought to apply to neutrinos. According to Majorana theory the neutrino and antineutrino are identical and neutral \cite{Majorana1937}. Although the Majorana equations can be introduced without any reference to the Dirac equation, they are subsolutions of the Dirac equation \cite{Zralek1997}.
Indeed, demanding in (\ref{Dirac1}) that $\Psi=\mathcal{C}\Psi$, where $\mathcal{C}$ is the charge conjugation operator, $\mathcal{C}\Psi=i\gamma ^{2}\Psi^{\ast}$, we obtain in the spinor representation $\xi=-i\sigma^{2}\eta^{\ast}$, $\eta=i\sigma^{2}\xi^{\ast}$, and the Dirac equation (\ref{Dirac1}) reduces to two separate Majorana equations for two-component spinors: \end{subequations} \begin{subequations} \label{MAJORANA} \begin{align} \left( p^{0}+\overrightarrow{\sigma}\cdot\overrightarrow{p}\right) \eta & =-im\sigma^{2}\eta^{\ast}, \label{Majorana1} \\ \left( p^{0}-\overrightarrow{\sigma}\cdot\overrightarrow{p}\right) \xi & =+im\sigma^{2}\xi^{\ast}. \label{Majorana2} \end{align} It follows from the condition $\Psi =\mathcal{C}\Psi $ that a Majorana particle has zero charge as a built-in condition. The problem whether neutrinos are described by the Dirac equation or the Majorana equations is still open \cite{Zralek1997, Perkins2000, Fukugita2003}. Let us note that the Dirac equations (\ref{Dirac1}) in the spinor representation of the $\gamma ^{\mu }$ matrices can also be separated in the form of second-order equations: \end{subequations} \begin{eqnarray} \left( p^{0}+\overrightarrow{\sigma }\cdot \overrightarrow{p}\right) \left( p^{0}-\overrightarrow{\sigma }\cdot \overrightarrow{p}\right) \xi &=&m^{2}\xi , \label{Dirac2a} \\ \left( p^{0}-\overrightarrow{\sigma }\cdot \overrightarrow{p}\right) \left( p^{0}+\overrightarrow{\sigma }\cdot \overrightarrow{p}\right) \eta &=&m^{2}\eta . \label{Dirac2b} \end{eqnarray} Such equations were used by Feynman and Gell-Mann to describe weak decays in terms of two-component spinors \cite{Feynman1958}. \subsection{Other massive subsolutions of the free Dirac equation} The free Dirac equation (\ref{Dirac1}) in the spinor representation of the $\gamma $ matrices reads: \begin{equation} \left.
\begin{array}{r} \left( p^{0}+p^{3}\right) \eta _{\dot{1}}+\left( p^{1}-ip^{2}\right) \eta _{\dot{2}}=m\xi ^{1} \\ \left( p^{1}+ip^{2}\right) \eta _{\dot{1}}+\left( p^{0}-p^{3}\right) \eta _{\dot{2}}=m\xi ^{2} \\ \left( p^{0}-p^{3}\right) \xi ^{1}+\left( -p^{1}+ip^{2}\right) \xi ^{2}=m\eta _{\dot{1}} \\ \left( -p^{1}-ip^{2}\right) \xi ^{1}+\left( p^{0}+p^{3}\right) \xi ^{2}=m\eta _{\dot{2}} \end{array} \right\} , \label{Dirac2} \end{equation} with $\Psi =\left( \xi ^{1},\xi ^{2},\eta _{\dot{1}},\eta _{\dot{2}}\right) ^{T}$ \cite{Berestetskii1971} (see also \cite{MTW1973,Corson1953} for a full exposition of the spinor formalism). In this subsection we shall investigate other possibilities of finding subsolutions of the Dirac equation in the setting of first-order equations. For $m\neq 0$ we can define new quantities: \begin{subequations} \label{DEF1} \begin{align} \left( p^{0}+p^{3}\right) \eta _{\dot{1}}& =m\xi _{(1)}^{1},\quad \left( p^{1}-ip^{2}\right) \eta _{\dot{2}}=m\xi _{(2)}^{1}, \label{def1} \\ \left( p^{1}+ip^{2}\right) \eta _{\dot{1}}& =m\xi _{(1)}^{2},\quad \left( p^{0}-p^{3}\right) \eta _{\dot{2}}=m\xi _{(2)}^{2}, \label{def2} \end{align} where we have: \end{subequations} \begin{subequations} \label{DEF2} \begin{align} \xi_{(1)}^{1}+\xi_{(2)}^{1} & =\xi^{1}, \label{def3} \\ \xi_{(1)}^{2}+\xi_{(2)}^{2} & =\xi^{2}. \label{def4} \end{align} In spinor notation $\xi_{(1)}^{1}=\psi_{\dot{1}}^{1\dot{1}}$, $\xi_{(2)}^{1}=\psi_{\dot{2}}^{1\dot{2}}$, $\xi_{(1)}^{2}=\psi_{\dot{1}}^{2\dot{1}}$, $\xi_{(2)}^{2}=\psi_{\dot{2}}^{2\dot{2}}$. Equations (\ref{Dirac2}) can be now written as \end{subequations} \begin{equation} \left.
\begin{array}{r} \left( p^{0}+p^{3}\right) \eta _{\dot{1}}=m\xi _{(1)}^{1} \\ \left( p^{1}-ip^{2}\right) \eta _{\dot{2}}=m\xi _{(2)}^{1} \\ \left( p^{1}+ip^{2}\right) \eta _{\dot{1}}=m\xi _{(1)}^{2} \\ \left( p^{0}-p^{3}\right) \eta _{\dot{2}}=m\xi _{(2)}^{2} \\ \left( p^{0}-p^{3}\right) \left( \xi _{(1)}^{1}+\xi _{(2)}^{1}\right) +\left( -p^{1}+ip^{2}\right) \left( \xi _{(1)}^{2}+\xi _{(2)}^{2}\right) =m\eta _{\dot{1}} \\ \left( -p^{1}-ip^{2}\right) \left( \xi _{(1)}^{1}+\xi _{(2)}^{1}\right) +\left( p^{0}+p^{3}\right) \left( \xi _{(1)}^{2}+\xi _{(2)}^{2}\right) =m\eta _{\dot{2}} \end{array} \right\} \label{Dirac3} \end{equation} It follows from Eqs.(\ref{DEF1}) that the following identities hold: \begin{subequations} \label{ID2} \begin{align} \left( p^{1}+ip^{2}\right) \xi _{(1)}^{1}& =\left( p^{0}+p^{3}\right) \xi _{(1)}^{2}, \label{id1a} \\ \left( p^{0}-p^{3}\right) \xi _{(2)}^{1}& =\left( p^{1}-ip^{2}\right) \xi _{(2)}^{2}. \label{id2a} \end{align} Taking into account the identities (\ref{ID2}) we can finally write equations (\ref{Dirac3}) as a system of the following two equations: \end{subequations} \begin{equation} \left. \begin{array}{r} \left( p^{0}+p^{3}\right) \eta_{\dot{1}}=m\xi_{(1)}^{1} \\ \left( p^{1}+ip^{2}\right) \eta_{\dot{1}}=m\xi_{(1)}^{2} \\ \left( p^{0}-p^{3}\right) \xi_{(1)}^{1}+\left( -p^{1}+ip^{2}\right) \xi_{(1)}^{2}=m\eta_{\dot{1}} \end{array} \right\} , \label{constituent1} \end{equation} \begin{equation} \left. \begin{array}{r} \left( p^{1}-ip^{2}\right) \eta_{\dot{2}}=m\xi_{(2)}^{1} \\ \left( p^{0}-p^{3}\right) \eta_{\dot{2}}=m\xi_{(2)}^{2} \\ \left( -p^{1}-ip^{2}\right) \xi_{(2)}^{1}+\left( p^{0}+p^{3}\right) \xi_{(2)}^{2}=m\eta_{\dot{2}} \end{array} \right\} .
\label{constituent2} \end{equation} Due to the identities (\ref{ID2}), equations (\ref{constituent1}), (\ref{constituent2}) can be cast into the form \begin{equation} \left( \begin{array}{cccc} 0 & 0 & p^{0}+p^{3} & p^{1}-ip^{2} \\ 0 & 0 & p^{1}+ip^{2} & p^{0}-p^{3} \\ p^{0}-p^{3} & -p^{1}+ip^{2} & 0 & 0 \\ -p^{1}-ip^{2} & p^{0}+p^{3} & 0 & 0 \end{array} \right) \left( \begin{array}{c} \xi _{(1)}^{1} \\ \xi _{(1)}^{2} \\ \eta _{\dot{1}} \\ 0 \end{array} \right) =m\left( \begin{array}{c} \xi _{(1)}^{1} \\ \xi _{(1)}^{2} \\ \eta _{\dot{1}} \\ 0 \end{array} \right) , \label{A-D} \end{equation} \begin{equation} \left( \begin{array}{cccc} 0 & 0 & p^{0}-p^{3} & p^{1}+ip^{2} \\ 0 & 0 & p^{1}-ip^{2} & p^{0}+p^{3} \\ p^{0}+p^{3} & -p^{1}-ip^{2} & 0 & 0 \\ -p^{1}+ip^{2} & p^{0}-p^{3} & 0 & 0 \end{array} \right) \left( \begin{array}{c} \xi _{(2)}^{2} \\ \xi _{(2)}^{1} \\ \eta _{\dot{2}} \\ 0 \end{array} \right) =m\left( \begin{array}{c} \xi _{(2)}^{2} \\ \xi _{(2)}^{1} \\ \eta _{\dot{2}} \\ 0 \end{array} \right) . \label{B-D} \end{equation} \section{Supersymmetric equations and their symmetries} We shall now interpret the subsolution equations (\ref{A-DKP}), (\ref{B-DKP}) and (\ref{A-D}), (\ref{B-D}). First of all, we note that the pairs of equations (\ref{A-DKP}), (\ref{B-DKP}) and (\ref{A-D}), (\ref{B-D}) are identical in form but have vector and spinor solutions, respectively. We shall thus refer to these equations as supersymmetric equations. We have demonstrated that equations (\ref{A-D}) and (\ref{B-D}) are Lorentz covariant \cite{Okninski2007} and that (\ref{A-DKP}), (\ref{B-DKP}) are charge conjugated one to another \cite{Okninski2004}. Let us consider Eqs.(\ref{A-DKP}), (\ref{A-D}). They can be written as \begin{equation} \gamma ^{\mu }p_{\mu }P_{4}\Psi =mP_{4}\Psi , \label{SUSY1} \end{equation} where $P_{4}$ is the projection operator $P_{4}=$ \textrm{diag}$\left( 1,1,1,0\right) $, in the spinor representation of the Dirac matrices.
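As a consistency check (a routine computation, spelled out here for convenience), the fourth row of (\ref{A-D}) is not an extra condition on the three nonzero components: acting on the column $\left( \xi _{(1)}^{1},\xi _{(1)}^{2},\eta _{\dot{1}},0\right) ^{T}$ it gives

```latex
% Fourth row of (\ref{A-D}): no new condition, just the identity (\ref{id1a}).
\[
\left( -p^{1}-ip^{2}\right) \xi _{(1)}^{1}
+\left( p^{0}+p^{3}\right) \xi _{(1)}^{2}=m\cdot 0=0,
\]
i.e. $\left( p^{1}+ip^{2}\right) \xi _{(1)}^{1}
=\left( p^{0}+p^{3}\right) \xi _{(1)}^{2}$, which is exactly (\ref{id1a});
the fourth row of (\ref{B-D}) reduces to (\ref{id2a}) in the same way.
```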
Incidentally, there are other projection operators which lead to analogous three-component equations, $P_{1}=$\textrm{diag}$\left( 0,1,1,1\right) $, $P_{2}=$\textrm{diag}$\left( 1,0,1,1\right) $, $P_{3}=$ \textrm{diag}$\left( 1,1,0,1\right) $, but we shall need only the operator $P_{4}$. Acting from the left on (\ref{SUSY1}) with $P_{4}$ and $\left( 1-P_{4}\right) $ we obtain two equations: \begin{subequations} \begin{align} P_{4}\left( \gamma ^{\mu }p_{\mu }\right) P_{4}\Psi & =mP_{4}\Psi , \label{SUSY2a} \\ \left( 1-P_{4}\right) \left( \gamma ^{\mu }p_{\mu }\right) P_{4}\Psi & =0. \label{SUSY2b} \end{align} \end{subequations} In the spinor representation of the $\gamma ^{\mu }$ matrices Eq.(\ref{SUSY2a}) is equivalent to (\ref{constituent1}) while Eq.(\ref{SUSY2b}) is equivalent to the identity (\ref{id1a}). Now the projection operator can be written as $P_{4}=\frac{1}{4}\left( 3+\gamma ^{5}-\gamma ^{0}\gamma ^{3}+i\gamma ^{1}\gamma ^{2}\right) $ (and similar formulae can be given for the other projection operators $P_{1},P_{2},P_{3}$; see \cite{Corson1953}, where, however, another convention for the $\gamma ^{\mu }$ matrices was used). It thus follows that the supersymmetric equation (\ref{SUSY1}) is now given in representation-independent form. \section{Discussion} We have shown that subsolutions of the Dirac equation as well as of the DKP equations for spin $0$ (similar subsolutions arise in the DKP theory for spin $1$ \cite{Okninski2003}) obey the Dirac equation with a built-in projection operator (\ref{SUSY1}). Therefore, this covariant equation has bosonic as well as fermionic degrees of freedom and may provide a background for a supersymmetric formalism. Let us note here that interaction can be incorporated into (\ref{SUSY1}) via minimal coupling, $p^{\mu }\rightarrow \pi ^{\mu }=p^{\mu }-eA^{\mu }$, but in the interacting case Eq.(\ref{SUSY1}) is equivalent neither to the Dirac nor to the DKP equations \cite{Okninski2007}. \newpage
\section{Introduction} \subsection{Bases and skew-Hermitian forms over division algebras} A classical result in algebraic number theory, due to Minkowski, asserts that if $R$ is the ring of integers of a number field, then every ideal $I \subset R$ contains an element~$x$ such that the index $[I:Rx]$ is bounded by an explicit multiple of $\sqrt{\disc(R)}$. A similar result can be proved for torsion-free modules of finite rank over the ring of integers of a number field, by combining Minkowski's theorem with the structure theory of finite-rank modules over a Dedekind domain (see \cite[\S 22, Exercise~6]{CR62}). In \cite{MW95}, Masser and Wüstholz generalise this theorem to torsion-free $R$-modules $L$ of finite rank over any order $R$ in a division $\mathbb{Q}$-algebra. This generalisation shows that there is a free $R$-submodule of finite index in~$L$, with index bounded polynomially in terms of $\disc(R)$. The statement is as follows. \begin{theorem} {\cite[Chapter 2, Class Index Lemma]{MW95}} \label{minkowski-general-index} Let $D$ be a division $\mathbb{Q}$-algebra and let $R$ be an order in $D$. Let $L$ be a torsion-free $R$-module of finite rank $m$. Then there exists a left $D$-basis $v_1, \dotsc, v_m$ for $D \otimes_R L$ such that $v_1, \dotsc, v_m$ are in $L$ and $[L:Rv_1 + \dotsb + Rv_m] \leq \abs{\disc(R)}^{m/2}$. \end{theorem} In another direction, if $L$ is a $\mathbb{Z}$-module of finite rank equipped with a positive definite symmetric bilinear form $\psi \colon L \times L \to \mathbb{Z}$, then one can use the classical reduction theory of quadratic forms to find an \textit{orthogonal} basis $v_1, \dotsc, v_m$ for $L \otimes_\mathbb{Z} \mathbb{Q}$ such that $v_1, \dotsc, v_m \in L$ and $[L:\mathbb{Z} v_1 + \dotsb + \mathbb{Z} v_m]$ is bounded by a polynomial in $\abs{\disc(L)}$. A similar result for a $\mathbb{Z}$-module of finite rank equipped with a symplectic form can be found in \cite[Lemma~4.3]{Orr15}.
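To illustrate the kind of bound in question (a toy example, not taken from the literature cited above), take $L = \mathbb{Z}^2$ with the positive definite form $\psi$ whose Gram matrix in the standard basis $e_1, e_2$ is $\fullsmallmatrix{2}{1}{1}{2}$, so that $\abs{\disc(L)} = 3$. Clearing denominators in the Gram--Schmidt process yields the orthogonal vectors $v_1 = e_1$ and $v_2 = -e_1 + 2e_2$, both in $L$, with
\[ [L : \mathbb{Z} v_1 + \mathbb{Z} v_2] = \Bigl| \det \fullsmallmatrix{1}{0}{-1}{2} \Bigr| = 2, \]
which is indeed bounded by a polynomial in $\abs{\disc(L)}$.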
In this paper, we obtain a version of \cref{minkowski-general-index} in which $L$ is equipped with a $(D,\dag)$-skew-Hermitian form and we seek a basis of $D \otimes_R L$ which is weakly symplectic or weakly unitary with respect to this form. (See sections \ref{subsec:skew-hermitian-forms} and~\ref{subsec:unitary-bases} for the definitions of $(D,\dag)$-skew-Hermitian forms and related concepts. Weakly symplectic or weakly unitary bases are the analogues of bases which are orthogonal but not necessarily orthonormal.) This theorem is as follows. \begin{theorem} \label{minkowski-hermitian-perfect} Let $D$ be either a totally real number field or a totally indefinite quaternion algebra over a totally real number field. Let $\dag$ be a positive involution of~$D$. Let $V$ be a left $D$-vector space of dimension~$m$, equipped with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $L$ be a $\mathbb{Z}$-lattice of full rank in~$V$ such that $\Trd_{D/\mathbb{Q}} \psi(L \times L) \subset \mathbb{Z}$. Let $R=\Stab_D(L)$ denote the stabiliser of $L$ in~$D$. Then there exists a $D$-basis $v_1, \dotsc, v_m$ for $V$ such that: \begin{enumerate}[(i)] \item $v_1, \dotsc, v_m \in L$; \item the basis is weakly symplectic (when $D$ is a field) or weakly unitary (when $D$ is a quaternion algebra) with respect to~$\psi$; \item $[L:Rv_1 + \dotsb + Rv_m] \leq \newC{minkowski-main-first} \abs{\disc(R)}^{\newC*} \abs{\disc(L)}^{\newC*}$; \item $\abs{\psi(v_i, v_j)}_D \leq \newC* \abs{\disc(R)}^{\newC*} \abs{\disc(L)}^{\newC{minkowski-main-last}}$ for $1 \leq i, j \leq m$. \end{enumerate} The constants $\refC{minkowski-main-first}, \dotsc, \refC{minkowski-main-last}$ depend only on $m$ and $\dim_\mathbb{Q}(D)$. \end{theorem} Explicit, but not optimal, values for the constants are given in \cref{weakly-unitary-induction}. 
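As a sanity check on \cref{minkowski-hermitian-perfect} in the simplest case, take $D = \mathbb{Q}$ with the trivial (positive) involution, let $V = \mathbb{Q}^2$ with the symplectic form $\psi(x,y) = x_1 y_2 - x_2 y_1$, and let $L = \mathbb{Z}^2$, so that $R = \Stab_D(L) = \mathbb{Z}$ and $\abs{\disc(R)} = \abs{\disc(L)} = 1$. The standard basis $v_1, v_2$ is symplectic (in particular weakly symplectic, in the terminology of section~\ref{subsec:unitary-bases}), lies in $L$, and satisfies $Rv_1 + Rv_2 = L$ and $\abs{\psi(v_i,v_j)}_D \leq 1$, so conclusions (i)--(iv) hold with all indices and values equal to~$1$.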
One could also prove a version of this theorem that bounds the lengths of the vectors $v_i$, in the style of Minkowski's second theorem, but this is stronger than needed for our application, and, in the proof that we know, the constants are exponential instead of polynomial in~$m$. Division $\mathbb{Q}$-algebras with positive involution were classified by Albert into four types (see section~\ref{subsec:albert}). The division algebras treated in \cref{minkowski-hermitian-perfect} are those of types I and~II in Albert's classification. It is likely that this paper's strategy could be adapted to prove \cref{minkowski-hermitian-perfect} for division $\mathbb{Q}$-algebras with positive involution of types III and~IV, as well as a version for Hermitian forms instead of skew-Hermitian forms, although various steps in the argument would require modification. \subsection{Applications to the Zilber--Pink conjecture} \label{subsec:intro-zp-theorem} We will apply \cref{minkowski-hermitian-perfect} to prove certain cases of the Zilber--Pink conjecture on unlikely intersections in the moduli space $\mathcal{A}_g$ of principally polarised abelian varieties of dimension $g$ (which is an example of a Shimura variety), as follows. \begin{theorem} \label{main-theorem-zp} Let $g \geq 3$. Let $\Sigma$ denote the set of points $s \in \mathcal{A}_g(\mathbb{C})$ for which the endomorphism algebra of the associated abelian variety $A_s$ is either a totally real field, other than~$\mathbb{Q}$, or a non-split totally indefinite quaternion algebra over a totally real field. Let $C$ be an irreducible Hodge generic algebraic curve in $\mathcal{A}_g$. If $C$ satisfies \cref{galois-orbits}, then $C \cap \Sigma$ is finite.
\end{theorem} \Cref{galois-orbits}, referred to in \cref{main-theorem-zp}, is a large Galois orbits conjecture, of the sort appearing in many works on unlikely intersections (for example, \cite[Conjecture~2.7]{Ull14}, \cite[Conjecture~8.2]{HP16}, \cite[Conjecture~6.2]{QRTUI}). We can prove this conjecture in some cases and thus establish \cref{main-theorem-zp} unconditionally in those settings, as discussed further below. In \cref{main-theorem-zp}, and throughout this paper, whenever we refer to endomorphisms of an abelian variety, we mean its endomorphisms over an algebraically closed field. The analogous statement to \cref{main-theorem-zp} for $g=2$ was proved in our earlier work \cite{QRTUI}. \Cref{main-theorem-zp} builds on other previous work on the Zilber--Pink conjecture for Shimura varieties, such as \cite{HP12}, \cite{HP16}, \cite{DR18}, \cite{OrrUI}, \cite{ExCM}. The proof of \cref{main-theorem-zp} is ultimately based on the method of Pila and Zannier. The main new ingredient is \cref{minkowski-hermitian-perfect}, as part of the parameter height bound. \subsection{Contextualising Theorem \ref{main-theorem-zp}} \label{subsec:intro-zp-context} Let us recall a general statement of the Zilber--Pink conjecture for Shimura varieties. A \defterm{special subvariety} of a Shimura variety $S$ means an irreducible component of a Shimura subvariety of~$S$. An irreducible subvariety of $S$ is \defterm{Hodge generic} if it is not contained in any special subvariety other than a component of $S$ itself. \pagebreak \begin{conjecture} \label{zilber-pink} \cite[Conjecture~1.3]{pink:generalisation} Let $S$ be a Shimura variety and let $V$ be an irreducible Hodge generic subvariety of $S$. Then the intersection of $V$ with the special subvarieties of $S$ having codimension greater than $\dim V$ is not Zariski dense in $V$. 
\end{conjecture} In order to relate this to \cref{main-theorem-zp}, we introduce a class of special subvarieties of~$\mathcal{A}_g$ which come from endomorphisms of abelian varieties. We recall that $\mathcal{A}_g$ is an irreducible algebraic variety over~$\mathbb{Q}$. For any algebraically closed field $k$ containing $\mathbb{Q}$ and any point $s \in \mathcal{A}_g(k)$, we write $A_s$ for the principally polarised abelian variety over $k$ (defined up to isomorphism) corresponding to the point $s$. For any ring $R$, the set \[ \mathcal{M}_R = \{ s \in \mathcal{A}_g(\mathbb{C}) : \text{there exists an injective homomorphism } R \to \End(A_s) \} \] is a countable union of algebraic subvarieties of $\mathcal{A}_g$. Each irreducible component of $\mathcal{M}_R$ is a special subvariety of $\mathcal{A}_g$. We call a subvariety of $\mathcal{A}_g$ a \defterm{special subvariety of PEL type} if it is an irreducible component of $\mathcal{M}_R$ for some $R$. If $R \neq \mathbb{Z}$, then $\mathcal{M}_R$ is strictly contained in $\mathcal{A}_g$. Hence the set $\Sigma$ defined in \cref{main-theorem-zp} is contained in the union of the proper special subvarieties of PEL type of $\mathcal{A}_g$. Furthermore, according to \cref{codim-pel}, for $g \geq 3$, all proper special subvarieties of PEL type of $\mathcal{A}_g$ have codimension at least~$2$. Thus, \cref{zilber-pink} predicts that the intersection $C \cap \Sigma$ of \cref{main-theorem-zp} should not be Zariski dense in the curve~$C$, that is, it should be finite. For each special subvariety of PEL type $S \subset \mathcal{A}_g$, there is a largest ring $R$ such that $S$ is a component of $\mathcal{M}_R$. We call this ring $R$ the \defterm{generic endomorphism ring} of $S$, and we call $R \otimes_\mathbb{Z} \mathbb{Q}$ the \defterm{generic endomorphism algebra} of $S$. We say that a point $s\in S(\mathbb{C})$ is \defterm{endomorphism generic} if the endomorphism ring of $A_s$ is equal to $R$. 
Note that all points in the complement of countably many proper subvarieties of $S$ are endomorphism generic. We call $S \subset \mathcal{A}_g$ a \defterm{special subvariety of simple PEL type} if it is a special subvariety of PEL type and its generic endomorphism algebra is a division algebra. (Equivalently, $A_s$ is a simple abelian variety for endomorphism generic points $s \in S(\mathbb{C})$.) We call $S$ a \defterm{special subvariety of simple PEL type I or~II} if it is a special subvariety of PEL type whose generic endomorphism ring is a division algebra of type I or~II in the Albert classification (see section~\ref{subsec:albert}). Thus the set $\Sigma$ in \cref{main-theorem-zp} is the union of the endomorphism generic loci of all special subvarieties of simple PEL type I or~II, excluding $\mathcal{A}_g$ itself. \subsection{Large Galois orbits and unconditional cases of Zilber--Pink} \Cref{main-theorem-zp} is conditional on the following large Galois orbits conjecture. \begin{conjecture} \label{galois-orbits} Define $\Sigma \subset \mathcal{A}_g$ as in \cref{main-theorem-zp} and let $C\subset\mathcal{A}_g$ denote an irreducible Hodge generic algebraic curve defined over a finitely generated field $L \subset \mathbb{C}$. Then there exist positive constants $\newC{galois-orbits-mult}$ and $\newC{galois-orbits-exp}$, depending only on $g$, $L$ and $C$, such that, for any point $s \in C \cap \Sigma$, \[ \# \Aut(\mathbb{C}/L) \cdot s \geq \refC{galois-orbits-mult} \abs{\disc(\End(A_s))}^{\refC{galois-orbits-exp}}. \] \end{conjecture} Previous conjectures of this type often used a notion of complexity of special subvarieties. In our conjecture, we are taking the complexity of a special subvariety of PEL type to be the discriminant of its generic endomorphism ring. Using André's G-functions method \cite{And89}, in the form of \cite[Theorem~8.2]{ExCM}, we prove \cref{galois-orbits} in certain cases. 
\begin{theorem}\label{unconditional} Let $g$ be an even positive integer. Let $\Sigma^*$ denote the set of points $s \in \mathcal{A}_g$ for which $\End(A_s) \otimes_\mathbb{Z} \mathbb{Q}$ is a non-split totally indefinite quaternion algebra whose centre is a totally real field of degree $e$ such that $4e$ does not divide $g$. Let $C\subset\mathcal{A}_g$ denote an irreducible Hodge generic algebraic curve defined over a number field. Suppose that the Zariski closure of $C$ in the Baily--Borel compactification of $\mathcal{A}_g$ intersects the zero-dimensional stratum. Then $C$ satisfies \cref{galois-orbits} for $\Sigma^*$ (in the place of $\Sigma$). Hence, $C \cap \Sigma^*$ is finite. \end{theorem} Compared with \cref{galois-orbits}, \cref{unconditional} adds two restrictions: $\Sigma^*$ is defined by a smaller class of endomorphism algebras than $\Sigma$, and there is a condition on the intersection of the Zariski closure of $C$ with the boundary of the Baily--Borel compactification. We recall that the Baily--Borel compactification of the moduli space $\mathcal{A}_g$ is naturally stratified as a disjoint union \[\mathcal{A}_g\sqcup\mathcal{A}_{g-1}\sqcup\cdots\sqcup\mathcal{A}_1\sqcup\mathcal{A}_0\] of locally closed subvarieties. The zero-dimensional stratum is $\mathcal{A}_0$, which is a point. The condition that $C$ intersects the zero-dimensional stratum is equivalent to saying that the associated family of principally polarised abelian varieties degenerates to a torus (this informal statement can be made precise as in \cite[Theorem~1.4]{ExCM}). \subsection{Remark on effectivity} We note that \cref{minkowski-hermitian-perfect} and \cref{EQ-scheme} are effective. As such, the obstructions to effectivity in \cref{unconditional} are (1) its dependence on the (ineffective) Habegger--Pila--Wilkie theorem (as stated in \cite[Theorem 9.1]{DR18}) from o-minimality and (2) the ineffectivity in \cite{QRTUI}, as explained in Remark 4.3 therein.
Obstruction (1) was recently overcome for the Andr\'e--Oort conjecture for non-compact curves in Hilbert modular varieties by Binyamini and Masser \cite{BM:AO} using so-called $Q$-functions. It seems plausible that these techniques could also apply to our setting. \subsection{Outline of the paper} The paper is in two parts. The first part, sections \ref{sec:division-algebras} to~\ref{sec:minkowski-proof}, proves \cref{minkowski-hermitian-perfect}. It deals only with modules over division algebras and skew-Hermitian forms, with no mention of Shimura varieties. The second part, sections \ref{sec:ZP-high-level} to~\ref{cases-of-ZP}, proves \cref{main-theorem-zp}. It applies \cref{minkowski-hermitian-perfect} and other results from the first part to the Zilber--Pink conjecture. In section~\ref{sec:division-algebras}, we introduce terminology around division algebras and their orders, as well as various lemmas used throughout the calculations in sections \ref{sec:skew-hermitian} and~\ref{sec:minkowski-proof}. In section~\ref{sec:skew-hermitian}, we define the notion of a skew-Hermitian form on a module over a division algebra with involution and define several notions of well-behaved bases with respect to a skew-Hermitian form. Section~\ref{sec:minkowski-proof} consists of the proof of \cref{minkowski-hermitian-perfect}, which involves substantial calculations. Section~\ref{sec:ZP-high-level} introduces Shimura data and establishes the basic properties of special subvarieties of simple PEL type I and~II in $\mathcal{A}_g$. It also gives a high-level overview of the strategy used to prove \cref{main-theorem-zp}. This strategy uses \cite[Theorem~1.2]{QRTUI}, which requires as input a group representation with various properties. The required representation is constructed and its properties proved in sections \ref{sec:representation} and~\ref{sec:rep-bound}. The application of \cref{minkowski-hermitian-perfect} is found in section~\ref{sec:rep-bound}. 
Finally, section~\ref{cases-of-ZP} states some slightly stronger versions of \cref{main-theorem-zp,unconditional} and completes their proofs. \subsection{Notation} \label{subsec:notation} We shall use the following notation for matrices. If $A$ and $B$ are square matrices, we will denote by $A\oplus B$ the block diagonal matrix with blocks $A$ (top-left) and $B$ (bottom-right). We will write $A^{\oplus d}$ to denote the block diagonal matrix $A\oplus\cdots\oplus A$ with $A$ appearing $d$ times. We shall write $J_2 = \fullsmallmatrix{0}{1}{-1}{0}$ and $J_n = J_2^{\oplus n/2}$ for each even positive integer~$n$. \subsection*{Acknowledgements} This work was supported by the Engineering and Physical Sciences Research Council [EP/S029613/1 to C.D., EP/T010134/1 to M.O.]. \section{Division algebras} \label{sec:division-algebras} In this section, we introduce the notation and terminology we shall use for division algebras. A key definition is a norm $\abs{\cdot}_D$ on an $\mathbb{R}$-algebra with positive involution. We establish useful properties of this norm and of the discriminants of orders in division algebras. We also include some broader preliminary lemmas, on discriminants of bilinear forms and versions of Minkowski's second theorem. In this paper, our main interest will be in division $\mathbb{Q}$-algebras with positive involution of Albert types I and~II. However, we have stated many of the definitions and results in this section in greater generality, such as for semisimple algebras over any subfield of $\mathbb{R}$. We do this not only because this greater generality is often natural, but also because it is sometimes necessary: we wish to apply the results to $D \otimes_\mathbb{Q} \mathbb{R}$, where $D$ is a division $\mathbb{Q}$-algebra but $D \otimes_\mathbb{Q} \mathbb{R}$ might not be a division algebra.
We have not stated all results at their greatest possible generality, if doing so would require additional complications while not being required for our application. Throughout this section, $k$ denotes a subfield of $\mathbb{R}$. Later in the paper, we will usually use $k = \mathbb{Q}$ or $k = \mathbb{R}$. Whenever we say \defterm{$k$-algebra}, we mean a $k$-algebra of finite dimension. If $V$ is a $k$-vector space or $k$-algebra, then $V_\mathbb{R}$ denotes $V \otimes_k \mathbb{R}$. \subsection{Semisimple algebras, traces and discriminants} \label{subsec:semisimple-algebras} Let $D$ be a semisimple $k$-algebra. Then $D \cong \prod_{i=1}^s D_i$ for some simple $k$-algebras $D_1, \dotsc, D_s$. For each $i$, let $F_i$ be the centre of $D_i$, which is a field. We write $\Trd_{D_i/F_i}$ and $\Nrd_{D_i/F_i}$ for the reduced trace and reduced norm respectively of the central simple algebra $D_i/F_i$. Letting $\Tr_{F_i/\mathbb{Q}}$ and $\Nm_{F_i/\mathbb{Q}}$ denote the trace and norm of finite extensions of fields, we define \[ \Trd_{D/k} = \sum_{i=1}^s \Tr_{F_i/k} \circ \Trd_{D_i/F_i}, \quad \Nrd_{D/k} = \prod_{i=1}^s \Nm_{F_i/k} \circ \Nrd_{D_i/F_i}. \] Note that $\Trd_{D/k}$ and $\Nrd_{D/k}$ are compatible with extension of scalars. By this, we mean that, if $K$ is a field containing $k$ and $D_K = D \otimes_k K$, then $\Trd_{D/k} = \Trd_{D_K/K}|_D$ and similarly for $\Nrd_{D/k}$. This is true even though the simple factors of $D$ might not remain simple after extension of scalars. Note also that $\Trd_{D/k}(ab) = \Trd_{D/k}(ba)$ for all $a, b \in D$. Suppose that $D$ is a simple $k$-algebra and let $F$ be the centre of $D$. Let \[ d = \sqrt{\dim_F(D)} = \Trd_{D/F}(1), \qquad e = [F:k]. \] Then $\dim_k(D) = d^2e$. We will use the notation $F$, $d$, $e$ from this paragraph throughout the paper whenever we talk about simple algebras, without further comment. 
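For example (a standard illustration), if $D = \mathrm{M}_2(k)$, then $F = k$, $d = 2$, $e = 1$ and $\dim_k(D) = 4 = d^2e$. For $a \in D$ we have $\Trd_{D/F}(a) = a_{11} + a_{22}$ and $\Nrd_{D/F}(a) = \det(a)$, whereas the trace and determinant of left multiplication by $a$ on the $4$-dimensional $k$-vector space $D$ (that is, the non-reduced trace and norm) are $2(a_{11} + a_{22})$ and $\det(a)^2$ respectively.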
Note that $\Tr_{D/F}(a) = d\Trd_{D/F}(a)$ and $\Nm_{D/F}(a) = \Nrd_{D/F}(a)^d$ for all $a \in D$, where $\Tr_{D/F}$ and $\Nm_{D/F}$ are the non-reduced trace and norm. \subsection{Division algebras with positive involution} \label{subsec:albert} Let $D$ be a semisimple $k$-algebra. An \defterm{involution} $\dag$ of $D$ means a $k$-linear map $D \to D$ such that $\dag \circ \dag = \mathrm{id}_D$ and $(ab)^\dag = b^\dag a^\dag$ for all $a,b \in D$. For every $a \in D$, we have $\Trd_{D/k}(a^\dag) = \Trd_{D/k}(a)$. Consequently the bilinear form $D \times D \to k$ given by $(a,b) \mapsto \Trd_{D/k}(ab^\dag)$ is symmetric. The involution $\dag$ is said to be \defterm{positive} if this bilinear form is positive definite. Division $\mathbb{Q}$-algebras with positive involution $(D,\dag)$ were classified by Albert into four types, depending on the isomorphism type of $D_\mathbb{R}$ \cite[sec.~21, Theorem~2]{Mum74}. \begin{enumerate}[label=\textbf{Type \Roman*.},align=left,widest*=3,leftmargin=*,itemsep=2pt] \item $D = F$, a totally real number field. The involution is trivial. (In this case $D_\mathbb{R} \cong \mathbb{R}^e$.) \item $D$ is a non-split totally indefinite quaternion algebra over a totally real number field $F$. (Totally indefinite means that $D_\mathbb{R} \cong \mathrm{M}_2(\mathbb{R})^e$.) The involution is of orthogonal type, meaning that after extending scalars to $\mathbb{R}$ it becomes matrix transpose on each copy of $\mathrm{M}_2(\mathbb{R})$. \item $D$ is a totally definite quaternion algebra over a totally real number field~$F$. (Totally definite means that $D_\mathbb{R} \cong \mathbb{H}^e$ where $\mathbb{H}$ is Hamilton's quaternions.) The involution is the canonical involution $a \mapsto \Trd_{D/F}(a) - a$. \item $D$ is a division algebra whose centre is a CM field~$F$. The involution restricts to complex conjugation on $F$. (In this case $D_\mathbb{R} \cong \mathrm{M}_d(\mathbb{C})^e$.) 
\end{enumerate} \subsection{The norm \texorpdfstring{$\abs{\cdot}_D$}{||D}} \label{subsec:norm-D} Let $(D,\dag)$ be a semisimple $k$-algebra with a positive involution. We define a norm $\abs{\cdot}_D$ on $D_\mathbb{R}$ by: \[ \abs{a}_D = \sqrt{\Trd_{D_\mathbb{R}/\mathbb{R}}(aa^\dag)}. \] This is a norm in the sense of a real vector space norm (that is, a length function). Note that $\abs{a^\dag}_D = \abs{a}_D$ for all $a \in D_\mathbb{R}$. The norm $\abs{\cdot}_D$ is induced by the inner product $(a,b) \mapsto \Trd_{D_\mathbb{R}/\mathbb{R}}(ab^\dag)$ on $D_\mathbb{R}$. This inner product (together with an orientation of $D_\mathbb{R}$) also induces a volume form. Whenever we refer to the covolume of a lattice in $D_\mathbb{R}$, we use this volume form. (Note that the covolume is the absolute value of the integral of the volume form over a fundamental domain, so it is independent of the choice of orientation.) If $D$ is a semisimple $k$-algebra, then \[ D_\mathbb{R} \cong \prod_{i=1}^r \mathrm{M}_{s_i}(\mathbb{K}_i) \] where $\mathbb{K}_i = \mathbb{R}$, $\mathbb{C}$ or~$\mathbb{H}$. If $D$ is equipped with a positive involution~$\dag$, then we can choose the isomorphism so that $\dag$ corresponds to conjugate-transpose on each simple factor. Throughout the paper, whenever we choose an isomorphism between $D_\mathbb{R}$ and a product of matrix algebras, we implicitly assume that it has this property. Let $\abs{\cdot}_F$ denote the \defterm{Frobenius norm} on any matrix algebra over $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$: \[ \abs{M}_F^2 = \sum_{j,k=1}^s M_{jk} \overline{M_{jk}}. \] Then, for any $a = (a_1, \dotsc, a_r) \in \prod_i \mathrm{M}_{s_i}(\mathbb{K}_i)$, we have \[ \abs{a}_D^2 = \sum_{i=1}^r \abs{a_i}_F^2. \] The following lemma will be used repeatedly throughout sections \ref{sec:skew-hermitian} and~\ref{sec:minkowski-proof}. \begin{lemma} \label{length-submult} Let $(D, \dag)$ be a semisimple $k$-algebra with positive involution. 
Then $\abs{ab}_D \leq \abs{a}_D \abs{b}_D$ for all $a, b \in D_\mathbb{R}$. \end{lemma} \begin{proof} Identify $D_\mathbb{R}$ with $\prod_{i=1}^r \mathrm{M}_{s_i}(\mathbb{K}_i)$ and write \[ a = (a_1, \dotsc, a_r), \; b = (b_1, \dotsc, b_r) \in \prod_{i=1}^r \mathrm{M}_{s_i}(\mathbb{K}_i). \] Then \[ \abs{ab}_D^2 = \sum_{i=1}^r \length{a_ib_i}_F^2 \leq \sum_{i=1}^r \length{a_i}_F^2 \length{b_i}_F^2 \leq \Bigl( \sum_{i=1}^r \length{a_i}_F^2 \Bigr) \Bigl( \sum_{i=1}^r \length{b_i}_F^2 \Bigr) = \abs{a}_D^2 \abs{b}_D^2. \] This calculation uses the submultiplicativity of the Frobenius norm and the following inequality, valid for all non-negative real numbers $x_1, \dotsc, x_r, y_1, \dotsc, y_r$: \begin{equation} \label{eqn:sum-squares} \sum_{i=1}^r x_iy_i \leq \Bigl( \sum_{i=1}^r x_i \Bigr) \Bigl( \sum_{i=1}^r y_i \Bigr). \end{equation} Here \eqref{eqn:sum-squares} is applied with $x_i = \length{a_i}_F^2$ and $y_i = \length{b_i}_F^2$. Since the Frobenius norm is less well-known over $\mathbb{H}$, we remark that, just as in the real and complex cases, submultiplicativity of the Frobenius norm follows from the Cauchy--Schwarz inequality \[ \Bigl( \sum_{j=1}^s x_j \overline{y}_j \Bigr) \Bigl( \sum_{j=1}^s y_j \overline{x}_j \Bigr) \leq \Bigl( \sum_{j=1}^s x_j \overline{x}_j \Bigr) \Bigl( \sum_{j=1}^s y_j \overline{y}_j \Bigr) \text{ for all } x, y \in \mathbb{K}^s. \] The Cauchy--Schwarz inequality can be proved by considering the discriminant of the quadratic polynomial $\sum_{j=1}^s (x_j t + y_j)(\overline{x}_j t + \overline{y}_j)$, which is non-negative for all $t \in \mathbb{R}$, and then applying the AM-GM inequality to the left hand side. \end{proof} We say that a semisimple $k$-algebra $D$ is \defterm{$\mathbb{R}$-split} if $D_\mathbb{R} \cong \mathrm{M}_d(\mathbb{R})^e$ for some positive integers $d$ and~$e$. Note that a division $\mathbb{Q}$-algebra with positive involution is $\mathbb{R}$-split if and only if it has type I or~II in the Albert classification, and these are the types of algebras that we focus on in this paper.
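For a concrete instance of \cref{length-submult} in the $\mathbb{R}$-split case, identify $D_\mathbb{R}$ with $\mathrm{M}_2(\mathbb{R})$, so that $\dag$ corresponds to matrix transpose and $\abs{\cdot}_D$ is the Frobenius norm, and take $a = b = \fullsmallmatrix{1}{1}{0}{1}$. Then
\[ \abs{a}_D^2 = \Trd_{D_\mathbb{R}/\mathbb{R}}(aa^\dag) = \Trd_{D_\mathbb{R}/\mathbb{R}} \fullsmallmatrix{2}{1}{1}{1} = 3, \qquad \abs{ab}_D^2 = \Bigl| \fullsmallmatrix{1}{2}{0}{1} \Bigr|_F^2 = 6, \]
so the lemma holds here with strict inequality: $\abs{ab}_D = \sqrt{6} < 3 = \abs{a}_D \abs{b}_D$.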
\begin{lemma} \label{Nrd-length-bound} Let $(D, \dag)$ be an $\mathbb{R}$-split semisimple $k$-algebra with positive involution and let $F$ be its centre. Then, for all $a \in D_\mathbb{R}^\times$: \begin{enumerate}[(i)] \item $\abs{\Nrd_{D_\mathbb{R}/F_\mathbb{R}}(a)}_D \leq d^{(1-d)/2} \abs{a}_D^d$; \item $\abs{\Nrd_{D_\mathbb{R}/\mathbb{R}}(a)} \leq (de)^{-de/2} \abs{a}_D^{de}$. \end{enumerate} \end{lemma} \begin{proof} Identify $D_\mathbb{R}$ with $\mathrm{M}_d(\mathbb{R})^e$ and write \[ a = (a_1, \dotsc, a_e). \] For each $i$, the matrix $a_i a_i^t \in \mathrm{M}_d(\mathbb{R})$ is symmetric and positive definite and therefore diagonalisable with positive eigenvalues. Let its eigenvalues be $\lambda_{i1}, \dotsc, \lambda_{id}$. Note that $\abs{a_i}_F^2 = \Tr(a_i a_i^t) = \lambda_{i1} + \dotsb + \lambda_{id}$. By the AM-GM inequality, \begin{equation} \label{eqn:det-ai} \abs{\det(a_i)}^{2/d} = \det(a_i a_i^t)^{1/d} = \bigl( \lambda_{i1} \dotsm \lambda_{id} \bigr)^{1/d} \leq d^{-1} \bigl( \lambda_{i1} + \dotsb + \lambda_{id} \bigr) = d^{-1} \abs{a_i}_F^2. \end{equation} \begin{enumerate}[(i)] \item We have $\Nrd_{D_\mathbb{R}/F_\mathbb{R}}(a) = (\det(a_1) I_d, \dotsc, \det(a_e)I_d)$ where $I_d$ denotes the identity matrix in $\mathrm{M}_d(\mathbb{R})$. Hence \begin{align*} \abs{\Nrd_{D_\mathbb{R}/F_\mathbb{R}}(a)}_D^2 & = \sum_{i=1}^e \abs{\det(a_i) I_d}_F^2 = \sum_{i=1}^e d \abs{\det(a_i)}^2 \\& \leq \sum_{i=1}^e d \bigl( d^{-1} \abs{a_i}_F^2 \bigr)^{d} = d^{1-d} \sum_{i=1}^e \abs{a_i}_F^{2d} \\& \leq d^{1-d} \Bigl( \sum_{i=1}^e \abs{a_i}_F^2 \Bigr)^d = d^{1-d} \abs{a}_D^{2d}. \end{align*} \item Using \eqref{eqn:det-ai} and another application of the AM-GM inequality, \[ \abs{\Nrd_{D_\mathbb{R}/\mathbb{R}}(a)}^{2/de} = \Bigl( \prod_{i=1}^e \abs{\det(a_i)}^{2/d} \Bigr)^{1/e} \leq e^{-1} \sum_{i=1}^e d^{-1} \abs{a_i}_F^2 = (de)^{-1} \abs{a}_D^2.
\qedhere \] \end{enumerate} \end{proof} \subsection{The Hermite constant and Minkowski's theorems} Let $\gamma_n$ denote the Hermite constant for $\mathbb{R}^n$, that is, the smallest positive real number such that the following holds: \textit{For every lattice $L$ in $\mathbb{R}^n$ with the Euclidean norm and volume form, there exists a vector $v \in L$ with $\abs{v} \leq \sqrt{\gamma_n} \covol(L)^{1/n}$.} It is immediate from the definition that $\gamma_n \geq 1$ for all~$n$. As a consequence of Minkowski's theorem on convex bodies, \begin{equation} \label{eqn:minkowski-gamma} \gamma_n \leq 4 \mathcal{V}_n^{-2/n} = \tfrac{4}{\pi} \Gamma(\tfrac{n}{2} + 1)^{2/n} \end{equation} where $\mathcal{V}_n$ denotes the volume of the unit ball in $\mathbb{R}^n$. \begin{lemma} \label{Gamma-bound} For all positive integers $n$, $\Gamma(\tfrac{n}{2} + 1) \leq 2(\tfrac{n}{4})^{n/2}$. \end{lemma} \begin{proof} The proof is by induction on $n$. When $n=1$, we have $\Gamma(\frac{n}{2} + 1) = \Gamma(\tfrac{3}{2}) = \tfrac{1}{2}\sqrt{\pi} < 1$ while $2(\tfrac{n}{4})^{n/2} = 2(\tfrac{1}{4})^{1/2} = 1$. When $n=2$, we have $\Gamma(\frac{n}{2} + 1) = \Gamma(2) = 1$ while $2(\tfrac{n}{4})^{n/2} = 2(\tfrac{1}{2})^1 = 1$. When $n \geq 3$, write $m=n-2 \geq 1$. Using a standard property of the gamma function and by induction, \[ \Gamma(\tfrac{n}{2} + 1) = \tfrac{n}{2}\Gamma(\tfrac{n}{2}) = \tfrac{n}{2}\Gamma(\tfrac{m}{2} + 1) \leq \tfrac{n}{2} \cdot 2(\tfrac{m}{4})^{m/2}. \] Now \[ n^{m/2} = (m+2)^{m/2} \geq m^{m/2} + \tfrac{m}{2} \cdot 2m^{m/2-1} = 2m^{m/2} \] (the inequality is obtained by taking the first two terms of the binomial expansion). Hence we obtain \[ \tfrac{n}{2} \cdot 2(\tfrac{m}{4})^{m/2} \leq \tfrac{n}{2} (\tfrac{n}{4})^{m/2} = 2(\tfrac{n}{4})^{n/2}. 
\qedhere \] \end{proof} Plugging \cref{Gamma-bound} into \eqref{eqn:minkowski-gamma}, we obtain \begin{equation} \label{eqn:hermite-constant-bound} \gamma_n \leq \tfrac{4}{\pi} \cdot 2^{2/n} \cdot \tfrac{n}{4} = 4^{1/n} \cdot \tfrac{n}{\pi}. \end{equation} The inequality \eqref{eqn:hermite-constant-bound} is not optimal for large~$n$, but we have chosen to use this bound because we need a simple inequality valid for all $n \geq 1$ in order to avoid fiddly special cases in \cref{weakly-unitary-induction}. A version of Minkowski's second theorem for the Euclidean norm also holds with the Hermite constant: \begin{theorem} \label{minkowski-2nd} {\cite[Ch.~VIII, Theorem~1]{cassels:geom-of-numbers}} For every lattice $L$ in $\mathbb{R}^n$ with the Euclidean norm and volume form, there exist vectors $e_1, \dotsc, e_n \in L$ which form a basis for $\mathbb{R}^n$ and which satisfy $\abs{e_1}\dotsm\abs{e_n} \leq \gamma_n^{n/2} \covol(L)$. \end{theorem} With some book-keeping, we can obtain a version of \cref{minkowski-2nd} for vector spaces over a division $\mathbb{Q}$-algebra. This is the same method as the proof of a version of Minkowski's second theorem over number fields in \cite[C.2.18]{BG06}. \begin{proposition} \label{D-minkowski} Let $D$ be a division $\mathbb{Q}$-algebra. Let $V$ be a left $D$-vector space of dimension~$m$. Let $L$ be a $\mathbb{Z}$-lattice in~$V$. Let $\abs{\cdot}$ be any norm on $V_\mathbb{R}$ induced by an inner product, and use the associated volume form to define $\covol(L)$. \pagebreak Then there exists a $D$-basis $w_1, \dotsc, w_m$ for $V$ such that: \begin{enumerate}[(i)] \item $w_1, \dotsc, w_m \in L$; \item $\abs{w_1} \abs{w_2} \dotsm \abs{w_m} \leq \gamma_{[D:\mathbb{Q}]m}^{m/2} \covol(L)^{1/[D:\mathbb{Q}]}$. \end{enumerate} \end{proposition} \begin{proof} Let $n = \dim_\mathbb{Q}(V) = [D:\mathbb{Q}]m$. Choose $e_1, \dotsc, e_n \in L$ as in \cref{minkowski-2nd}. 
Order the $e_i$ so that $\abs{e_i} \leq \abs{e_{i+1}}$ for all $i = 1, \dotsc, n-1$. For $i = 1, \dotsc, m$, let $q_i$ denote the smallest positive integer $q$ such that the $D$-span of $e_1, \dotsc, e_q$ has $D$-dimension equal to $i$. Let $w_i = e_{q_i}$. By construction, for each~$i$, the $D$-span of $w_1, \dotsc, w_i$ has $D$-dimension equal to $i$. Hence $w_1, \dotsc, w_m$ is a $D$-basis for $V$. For $1 \leq i \leq m$, the vectors $e_1, \dotsc, e_{q_i-1}$ are contained in a $D$-vector space of $D$-dimension $i-1$, so they are contained in a $\mathbb{Q}$-vector space of $\mathbb{Q}$-dimension at most $[D:\mathbb{Q}](i-1)$. These vectors are $\mathbb{Q}$-linearly independent, so \[ q_i - 1 \leq [D:\mathbb{Q}](i-1). \] Since the lengths $\abs{e_i}$ are increasing, we deduce that \[ \abs{w_i}^{[D:\mathbb{Q}]} \leq \abs{e_{[D:\mathbb{Q}](i-1) + 1}}^{[D:\mathbb{Q}]} \leq \prod_{j=1}^{[D:\mathbb{Q}]} \abs{e_{[D:\mathbb{Q}](i-1) + j}}. \] Hence by \cref{minkowski-2nd}, \[ \prod_{i=1}^m \abs{w_i}^{[D:\mathbb{Q}]} \leq \prod_{i=1}^n \abs{e_i} \leq \gamma_n^{n/2} \covol(L). \qedhere \] \end{proof} Let $D$ be a division $\mathbb{Q}$-algebra, $R$ an order in $D$ and $L$ a torsion-free $R$-module of rank~$m$. Combining \cref{D-minkowski} with \cref{minkowski-2nd} applied to $R$ and Hadamard's inequality, we could prove that there exist $w_1, \dotsc, w_m \in L$ forming a $D$-basis for $D \otimes_R L$ and satisfying $[L : Rw_1 + \dotsb + Rw_m] \leq \newC{D-index-multiplier} \abs{\disc(R)}^{m/2}$. However this method of proof gives a constant $\refC{D-index-multiplier} > 1$, so this is weaker than \cref{minkowski-general-index}. \subsection{Discriminants of bilinear forms} If $\Lambda$ is a $\mathbb{Z}$-module, we write $\Lambda_\mathbb{Q}$ for $\Lambda\otimes_\mathbb{Z}\mathbb{Q}$. 
If $\Lambda$ is free of finite rank and $\phi \colon \Lambda_\mathbb{Q} \times \Lambda_\mathbb{Q} \to \mathbb{Q}$ is a bilinear form, we write $\disc(\Lambda,\phi)$ for the determinant of the matrix $(\phi(e_i,e_j))_{i,j}$ where $\{e_1,\ldots,e_n\}$ is a $\mathbb{Z}$-basis for $\Lambda$ (the determinant is independent of the choice of basis). \begin{lemma} \label{disc-lattice-complement} Let $L$ be a free $\mathbb{Z}$-module of finite rank and let $\phi \colon L \times L \to \mathbb{Z}$ be a non-degenerate bilinear form. Let $M \subset L$ be a $\mathbb{Z}$-submodule such that $\phi_{|M \times M}$ is non-degenerate. Let \[ M^\perp = \{ x \in L : \phi(x,y) = 0 \text{ for all } y \in M \}. \] Then \begin{enumerate}[(i)] \item $[L : M + M^\perp] \leq \abs{\disc(M, \phi)}$; and \item $\abs{\disc(M^\perp, \phi)} \leq \abs{\disc(L, \phi)} \abs{\disc(M, \phi)}$. \end{enumerate} \end{lemma} \begin{proof} Since $\phi_{|M \times M}$ is non-degenerate, $L_\mathbb{Q} = M_\mathbb{Q} \oplus M_\mathbb{Q}^\perp$. Let $\pi \colon L_\mathbb{Q} \to M_\mathbb{Q}$ denote the projection with kernel $M_\mathbb{Q}^\perp$. If $x \in L$ and $\pi(x) \in M$, then $x-\pi(x) \in \ker(\pi) \cap L = M^\perp$. Hence $x \in M + M^\perp$. Conversely, if $x \in M + M^\perp$, it is clear that $\pi(x) \in M$. Thus $\pi^{-1}(M) = M + M^\perp$. Let \[ M^* = \{ x \in M_\mathbb{Q} : \phi(x, y) \in \mathbb{Z} \text{ for all } y \in M \}. \] If $x \in L$, then $\phi(\pi(x),y) = \phi(x,y) \in \mathbb{Z}$ for all $y \in M$ so $\pi(x) \in M^*$. Thus $\pi(L) \subset M^*$. Thus we obtain \[ L/(M + M^\perp) = L/\pi^{-1}(M) \cong \pi(L)/M \subset M^*/M. \] It is well-known that $[M^*:M] = \abs{\disc(M,\phi)}$, so this proves (i). Since $M$ and $M^\perp$ are orthogonal with respect to $\phi$, \begin{align*} \abs{\disc(M, \phi)} \abs{\disc(M^\perp, \phi)} & = \abs{\disc(M + M^\perp, \phi)} \\& = [L:M+M^\perp]^2 \abs{\disc(L, \phi)} \leq \abs{\disc(M, \phi)}^2 \abs{\disc(L, \phi)}. 
\end{align*} Since $\abs{\disc(M, \phi)} \neq 0$, this proves~(ii). \end{proof} \subsection{Orders and discriminants} Let $k = \mathbb{Q}$ or $\mathbb{R}$. If $V$ is a finite-dimensional $k$-vector space, then a \defterm{$\mathbb{Z}$-lattice} in $V$ means a $\mathbb{Z}$-submodule $L \subset V$ such that the natural map $L \otimes_\mathbb{Z} k \to V$ is an isomorphism. Let $D$ be a semisimple $\mathbb{Q}$-algebra. An \defterm{order} in $D$ is a $\mathbb{Z}$-lattice in $D$ which is also a subring. Note that if $V$ is a $D$-vector space and $L$ is a $\mathbb{Z}$-lattice in $V$, then $\Stab_D(L) = \{ a \in D : aL \subset L \}$ is an order in~$D$. (This is proved on \cite[p.~109]{Rei75} when $V=D$, and the proof generalises.) If $R$ is an order in $D$, the \defterm{discriminant} $\disc(R)$ is defined to be the discriminant of the $\mathbb{Q}$-bilinear form $(a,b) \mapsto \Tr_{D/\mathbb{Q}}(ab)$ on $R$, where $\Tr_{D/\mathbb{Q}}$ is the \emph{non-reduced} trace. The trace form of a semisimple algebra is non-degenerate, so $\disc(R) \neq 0$. Furthermore, $\Tr_{D/\mathbb{Q}}(a) \in \mathbb{Z}$ for all $a \in R$, so $\disc(R) \in \mathbb{Z}$. If $D$ is a simple $\mathbb{Q}$-algebra, then $\Trd_{D/\mathbb{Q}}(a) \in \mathbb{Z}$ for all $a \in R$ \cite[Theorem~10.1]{Rei75}. Since $\Tr_{D/\mathbb{Q}} = d \Trd_{D/\mathbb{Q}}$, it follows that $\disc(R) \in d^{d^2e} \mathbb{Z}$ so \begin{equation} \label{eqn:disc-lower-bound} \abs{\disc(R)} \geq d^{d^2e}. \end{equation} Now suppose that $(D,\dag)$ is a simple $\mathbb{Q}$-algebra with a positive involution. According to \cite[Lemma~5.6]{QRTUI}, for any order $R \subset D$, $\abs{\disc(R)}$ is equal to the discriminant of the symmetric bilinear form $(a,b) \mapsto \Tr_{D/\mathbb{Q}}(ab^\dag)$. Consequently, $\abs{\disc(R)}$ is equal to $d^{d^2e}$ multiplied by the discriminant on $R$ of the positive definite bilinear form which induces the norm $\abs{\cdot}_D$. We conclude that \begin{equation} \label{eqn:disc-covol} \abs{\disc(R)} = d^{d^2e}\covol(R)^2.
\end{equation} For an order $R$ in a simple $\mathbb{Q}$-algebra $D$, let $R^*$ denote the dual lattice \[ R^* = \{ a \in D : \Trd_{D/\mathbb{Q}}(ab) \in \mathbb{Z} \text{ for all } b \in R \}. \] \begin{lemma} \label{index-RcapF} Let $D$ be a semisimple $\mathbb{Q}$-algebra and let $R$ be an order in $D$. Let $F$ be the centre of $D$ and let $\mathcal{O}$ be an order in $F$ which contains $R \cap F$. Then \[ [\mathcal{O}:R \cap F]^2 \, \abs{\disc(\mathcal{O} R)} \leq \abs{\disc(R)}. \] \end{lemma} \begin{proof} This follows from the facts $\mathcal{O} + R \subset \mathcal{O} R$ and $[\mathcal{O} + R : R] = [\mathcal{O} : R \cap F]$, together with the formula $\abs{\disc(R)} = [\mathcal{O} + R : R]^2 \abs{\disc(\mathcal{O} + R)}$. \end{proof} \begin{lemma} \label{dual-ideal} Let $D$ be a simple $\mathbb{Q}$-algebra. Let $F$ be the centre of $D$ and let $\mathcal{O}_F$ be the maximal order of~$F$. Let $S$ be an order in $D$ which contains $\mathcal{O}_F$. Define $S^*$ analogously to $R^*$. Then there exists an ideal $I \subset \mathcal{O}_F$ such that $IS^* \subset S$ and \[ \Nm(I) \leq d^{-d^2e} \abs{\disc(S)}. \] \end{lemma} \begin{proof} Let $I = \{ x \in \mathcal{O}_F : xS^* \subset S \}$, that is, the annihilator of the finite $\mathcal{O}_F$-module $S^*/S$. By the structure theorem for finitely generated torsion modules over a Dedekind domain, there is an isomorphism of $\mathcal{O}_F$-modules \[ S^*/S \cong \mathcal{O}_F/I_1 \oplus \mathcal{O}_F/I_2 \oplus \dotsb \oplus \mathcal{O}_F/I_r \] for some $\mathcal{O}_F$-ideals $I_1, I_2, \dotsc, I_r$. We have $I = I_1 \cap I_2 \cap \dotsb \cap I_r \supset I_1 I_2 \dotsm I_r$ and so \[ \Nm(I) \leq \Nm(I_1)\Nm(I_2) \dotsm \Nm(I_r) = [S^*:S]. \] The index $[S^*:S]$ is equal to the absolute value of the discriminant of $S$ with respect to the reduced trace form. Thus $[S^*:S] = d^{-d^2e} \abs{\disc(S)}$. \end{proof} \begin{lemma} \label{conductor-S} Let $D$ be a simple $\mathbb{Q}$-algebra. Let $F$ be the centre of $D$ and let $\mathcal{O}_F$ be the maximal order of~$F$. Let $R$ be an order in $D$.
Let $S = \mathcal{O}_F R$. Let $\mathfrak{c}$ be the conductor of $R \cap F$ (as an order in the number field~$F$). Then \[ \mathfrak{c} S \subset R \quad \text{ and } \quad \mathfrak{c} R^* \subset S^*. \] \end{lemma} \begin{proof} From the definitions of $S$ and $\mathfrak{c}$, \[ \mathfrak{c} S = \mathfrak{c} \mathcal{O}_F R \subset (R \cap F) R \subset R. \] If $c \in \mathfrak{c}$ and $a \in R^*$, then for all $b \in S$ we have \[ \Trd_{D/\mathbb{Q}}((ca)b) = \Trd_{D/\mathbb{Q}}(a(cb)) \in \mathbb{Z} \] because $c$ is in the centre of $D$ and $cb \in \mathfrak{c} S \subset R$. Thus $ca \in S^*$. \end{proof} \begin{lemma} \label{disc-R-S} Let $D$ be a division $\mathbb{Q}$-algebra and let $V$ be a left $D$-vector space of dimension~$m$. Let $L$ be a $\mathbb{Z}$-lattice in $V$ and let $R = \Stab_D(L)$. Let $S = \End_R(L)$ denote the endomorphisms of $L$ commuting with $R$. Then \[ \abs{\disc(S)} \leq \abs{\disc(R)}^{(d^2em+1)m^2}. \] \end{lemma} \begin{proof} By \cref{minkowski-general-index}, there is a $D$-basis $v_1, \dotsc, v_m$ for $V$ such that $v_1, \dotsc, v_m \in L$ and \begin{equation} \label{eqn:R-free-index} [L:Rv_1 + \dotsb + Rv_m] \leq \abs{\disc(R)}^{m/2}. \end{equation} Let $N = [L:Rv_1 + \dotsb + Rv_m]$ and $s = \dim_\mathbb{Q}(\End_D(V)) = d^2em^2$. Using the $D$-basis $v_1, \dotsc, v_m$, we identify $\End_D(V)$ with $\mathrm{M}_m(D^\mathrm{op})$. Note that $\End_R(L)$ and $\mathrm{M}_m(R^\mathrm{op})$ are both $\mathbb{Z}$-lattices in $\End_D(V)$. For every $a \in \mathrm{M}_m(R^\mathrm{op}) \subset \End_D(V)$, we have \[ aNL \subset a(Rv_1 + \dotsb + Rv_m) \subset Rv_1 + \dotsb + Rv_m \subset L. \] Hence $Na \in \End_R(L)$. Thus $N\mathrm{M}_m(R^\mathrm{op}) \subset \End_R(L)$. Therefore \[ \abs{\disc(S)} \leq N^{2s} \abs{\disc(\mathrm{M}_m(R^\mathrm{op}))} = N^{2s} \abs{\disc(R)}^{m^2}. \] Combining this with the bound for~$N$ from \eqref{eqn:R-free-index} proves the lemma.
\end{proof} \subsection{Anti-symmetric elements in division algebras of type~II} If $(D, \dag)$ is a division $\mathbb{Q}$-algebra with involution, we define \[ D^- = \{ a \in D : a^\dag = -a \}. \] If $\psi \colon V \times V \to D$ is a $(D,\dag)$-skew-Hermitian form and $x \in V$, then $\psi(x,x) \in D^-$, so $D^-$ is important for the study of weakly unitary bases. Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of Albert type~II. Choose an isomorphism $D_\mathbb{R} \cong \mathrm{M}_2(\mathbb{R})^e$ (as always, we implicitly assume that $\dag$ corresponds to matrix transpose on each factor). Then $D_\mathbb{R}^-$ consists of those elements of $\mathrm{M}_2(\mathbb{R})^e$ in which all matrices are anti-symmetric. Hence $D_\mathbb{R}^-$ is a free $F_\mathbb{R}$-module of rank $1$, so $D^-$ is a $1$-dimensional $F$-vector space. The following lemma can be proved by calculations in $D_\mathbb{R} \cong \mathrm{M}_2(\mathbb{R})^e$. \begin{lemma} \label{action-on-antisymm} Let $(D, \dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type~II. Let $F$ be the centre of $D$. \begin{enumerate}[(i)] \item If $a, b \in D^-$, then $ab \in F$. \item If $a \in D$ and $b \in D^-$, then $aba^\dag = \Nrd_{D/F}(a)b$. \end{enumerate} \end{lemma} \begin{lemma} \label{small-antisymm-star} Let $(D, \dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type~II. Let $R$ be an order in $D$ and let $\eta$ be a positive integer such that $\eta R^\dag \subset R$. Then there exists $\omega \in D$ such that: \begin{enumerate}[(i)] \item $\omega \in D^- \setminus \{0\}$; \item $\omega R^* \subset R$ and $R^* \omega \subset R$; \item $\abs{\omega}_D \leq 2^{-4} \gamma_e^{1/2} \eta^7 \abs{\disc(R)}^{2/e}$. \end{enumerate} \end{lemma} \begin{proof} Let $F$ be the centre of $D$ and let $\mathcal{O}_F$ be the maximal order of~$F$. Let $\mathfrak{c} \subset \mathcal{O}_F$ be the conductor of the order $R \cap F$.
By \cite[(2)]{DCD00}, we have the following inclusion of ideals in $\mathbb{Z}$: \[ \disc_{F/\mathbb{Q}}(R \cap F) \subseteq \Nm_{F/\mathbb{Q}}(\mathfrak{c}) \disc_{F/\mathbb{Q}}(\mathcal{O}_F). \] This leads to the following inequality of integers: \[ \Nm(\mathfrak{c}) \abs{\disc(\mathcal{O}_F)} \leq \abs{\disc(R \cap F)}. \] Since also $\abs{\disc(R \cap F)} = [\mathcal{O}_F : R \cap F]^2 \abs{\disc(\mathcal{O}_F)}$, we deduce that \[ \Nm(\mathfrak{c}) \leq [\mathcal{O}_F : R \cap F]^2. \] Let $S = \mathcal{O}_F R$ and $S^- = S \cap D^-$. Let $I$ be the ideal of $\mathcal{O}_F$ given by \cref{dual-ideal} applied to $S$. Let $J = \mathfrak{c}^2 I$ (as a product of ideals of $\mathcal{O}_F$). Then by \cref{conductor-S}, \begin{gather*} JSR^* = \mathfrak{c} SI \mathfrak{c} R^* \subset \mathfrak{c} SIS^* \subset \mathfrak{c} SS \subset \mathfrak{c} S \subset R, \\ R^*JS = \mathfrak{c} I \mathfrak{c} R^* S \subset \mathfrak{c} IS^*S \subset \mathfrak{c} SS \subset \mathfrak{c} S \subset R. \end{gather*} Hence if we choose $\omega \in JS \cap D^- \setminus \{0\} = JS^- \setminus \{ 0 \}$, then it will satisfy (i) and~(ii). Since $S^-$ is a non-zero $\mathcal{O}_F$-submodule of an $F$-vector space of dimension~$1$, we can write $S^- = I^-\alpha$ for some ideal $I^- \subset \mathcal{O}_F$ and some $\alpha \in D^-$, then use the multiplicativity of ideal norms in $\mathcal{O}_F$ to conclude that \[ \covol(JS^-) = \Nm(J) \covol(S^-), \] where we measure covolumes in $D_\mathbb{R}^-$ by the volume form associated with the restriction of the inner product $\Trd_{D_\mathbb{R}/\mathbb{R}}(ab^\dag)$. Let $S^+ = \{ a \in S : a^\dag = a \}$. Then $S^+ \cap S^- = \{0\}$. Thus the sum $S^+ + S^-$ is direct. This sum is also orthogonal because, if $a \in S^+$ and $b \in S^-$, then \[ \Trd_{D/\mathbb{Q}}(ab^\dag) = \Trd_{D/\mathbb{Q}}((ab^\dag)^\dag) = \Trd_{D/\mathbb{Q}}(ba^\dag) = -\Trd_{D/\mathbb{Q}}(ab^\dag) \] so $\Trd_{D/\mathbb{Q}}(ab^\dag) = 0$. 
For every $a \in S$, we have $\eta a^\dag \in \eta (\mathcal{O}_F R)^\dag = \mathcal{O}_F \eta R^\dag \subset \mathcal{O}_F R = S$. Hence \[ 2\eta a = (\eta a+\eta a^\dag) + (\eta a-\eta a^\dag) \in S^+ + S^-. \] Thus $2\eta S \subset S^+\oplus S^-$, so \begin{equation} \label{eqn:covolS+S-} \covol(S^+) \covol(S^-) = \covol(S^+ \oplus S^-) \leq \covol(2\eta S) = 2^{4e} \eta^{4e} \covol(S). \end{equation} Here we measure covolumes in both $D_\mathbb{R}^-$ and $S^+ \otimes_{\mathbb{Z}} \mathbb{R}$ by the volume forms associated with the restriction of the inner product $\Trd_{D_\mathbb{R}/\mathbb{R}}(ab^\dag)$. For all $a,b \in S$, $\eta ab^\dag \in S$ and so $\Trd_{D/\mathbb{Q}}(ab^\dag) \in \eta^{-1}\mathbb{Z}$. Consequently $\covol(S^+) \geq \eta^{-\rk_\mathbb{Z}(S^+)} = \eta^{-3e}$ so by \eqref{eqn:disc-covol} applied to $S$ and \eqref{eqn:covolS+S-}, \[ \covol(S^-) \leq \eta^{3e} \cdot 2^{4e} \eta^{4e} \covol(S) = 2^{4e} \eta^{7e} \cdot 2^{-2e} \abs{\disc(S)}^{1/2}. \] Therefore, using \cref{dual-ideal}, \begin{align*} \covol(JS^-) & = \Nm(\mathfrak{c})^2 \Nm(I) \covol(S^-) \\& \leq [\mathcal{O}_F : R \cap F]^4 \cdot 2^{-4e} \abs{\disc(S)} \cdot 2^{2e} \eta^{7e} \abs{\disc(S)}^{1/2} \\& = 2^{-2e} \eta^{7e} [\mathcal{O}_F : R \cap F]^4 \abs{\disc(S)}^{3/2}. \end{align*} Applying \eqref{eqn:disc-lower-bound} to $S$, we see that $\abs{\disc(S)} \geq 2^{4e}$. Using \cref{index-RcapF}, we deduce that \begin{align*} \covol(JS^-) & \leq 2^{-2e} \abs{\disc(S)}^{-1/2} \eta^{7e} [\mathcal{O}_F : R \cap F]^4 \abs{\disc(S)}^2 = 2^{-4e} \eta^{7e} \abs{\disc(R)}^2. \end{align*} Since $JS^-$ is a free $\mathbb{Z}$-module of rank $e$, there exists $\omega \in JS^- \setminus \{0\}$ with \[ \abs{\omega}_D \leq \sqrt{\gamma_e} \covol(JS^-)^{1/e} \leq \sqrt{\gamma_e} \cdot 2^{-4} \eta^7 \abs{\disc(R)}^{2/e}. 
\qedhere \] \end{proof} \section{Skew-Hermitian forms over division algebras} \label{sec:skew-hermitian} In this section, we introduce the notion of a $(D,\dag)$-skew-Hermitian form on a vector space over a division algebra $D$ with an involution, and explain how this is related to skew-symmetric forms over the base field. We define several notions of good behaviour for bases relative to $(D,\dag)$-skew-Hermitian forms, such as symplectic and unitary bases and a weakened version of these notions. Finally we prove the existence of norms on $D$-vector spaces, which we call $D$-norms, which behave well relative to the action of~$D$ and to a $(D,\dag)$-skew-Hermitian form. As in section~\ref{sec:division-algebras}, we are interested in applying the results of this section when $(D, \dag)$ is either a division $\mathbb{Q}$-algebra with a positive involution of type I or~II, or the semisimple $\mathbb{R}$-algebra which arises from such a $\mathbb{Q}$-algebra by extending scalars to~$\mathbb{R}$, but we state the results in greater generality whenever it is convenient. \subsection{Skew-Hermitian forms} \label{subsec:skew-hermitian-forms} Let $k$ be any field. Let $(D, \dag)$ be a semisimple $k$-algebra with an involution. Let $V$ be a left $D$-module. A \defterm{$(D,\dag)$-skew-Hermitian form} on $V$ is a $k$-bilinear map $\psi \colon V \times V \to D$ which satisfies \[ \psi(y,x) = -\psi(x,y)^\dag \text{ and } \psi(ax, by) = a\psi(x,y)b^\dag \] for all $a, b \in D$ and $x,y \in V$. We say that a $(D,\dag)$-skew-Hermitian form $\psi$ is \defterm{non-degenerate} if, for every $x \in V \setminus \{0\}$, there exists $y \in V$ such that $\psi(x,y) \neq 0$. A \defterm{$(D,\dag)$-compatible skew-symmetric form} on $V$ is a skew-symmetric $k$-bilinear map $\phi \colon V \times V \to k$ which satisfies \[ \phi(ax, y) = \phi(x, a^\dag y) \] for all $a \in D$ and $x,y \in V$. 
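To make these definitions concrete, here is a minimal example (added for illustration; it is not used in the sequel): any anti-symmetric element of $D$ gives rise to a $(D,\dag)$-skew-Hermitian form on $V = D$ itself.

```latex
% Illustration (not part of the surrounding development): a skew-Hermitian
% form on V = D built from an anti-symmetric element.
Fix $\omega \in D$ with $\omega^\dag = -\omega$ and define
$\psi(x,y) = x \omega y^\dag$ for $x, y \in D$. Then $\psi$ is $k$-bilinear,
satisfies $\psi(ax, by) = a x \omega y^\dag b^\dag = a \psi(x,y) b^\dag$, and
\[ \psi(y,x) = y \omega x^\dag = -y \omega^\dag x^\dag
     = -(x \omega y^\dag)^\dag = -\psi(x,y)^\dag, \]
so $\psi$ is $(D,\dag)$-skew-Hermitian. If $D$ is a division algebra and
$\omega \neq 0$, then $\psi$ is non-degenerate, because
$\psi(x,x) = x \omega x^\dag \neq 0$ whenever $x \neq 0$.
```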
A pair $(V,\phi)$, where $\phi$ is a $(D,\dag)$-compatible skew-symmetric form, is called a symplectic $(D,\dag)$-module in \cite[section~8]{Mil05}. \begin{lemma} \label{tr-skew-hermitian-form} Let $(D, \dag)$ be a semisimple $k$-algebra with an involution. Let $V$ be a left $D$-module. Then the map $\psi \mapsto {\Trd_{D/k}} \circ \psi$ is a bijection between the set of $(D,\dag)$-skew-Hermitian forms on $V$ and the set of $(D,\dag)$-compatible skew-symmetric forms on~$V$. \end{lemma} \begin{proof} It is clear that, if $\psi$ is a $(D,\dag)$-skew-Hermitian form on $V$, then $\Trd_{D/k} \psi$ is a $(D,\dag)$-compatible skew-symmetric form. Let $\phi$ be a $(D,\dag)$-compatible skew-symmetric form. We shall show that there is a unique $(D,\dag)$-skew-Hermitian form $\psi$ on $V$ such that $\phi = \Trd_{D/k} \psi$. For each $x,y \in V$, define a $k$-linear map $\alpha_{x,y} \colon D \to k$ by $\alpha_{x,y}(a) = \phi(ax, y)$. Because $D$ is a semisimple $k$-algebra, $(a,b) \mapsto \Trd_{D/k}(ab)$ is a non-degenerate bilinear form $D \times D \to k$ \cite[Theorem~9.26]{Rei75}. Hence there exists a unique element $\beta_{x,y} \in D$ such that \[ \alpha_{x,y}(a) = \Trd_{D/k}(a\beta_{x,y}) \text{ for all } a \in D. \] Define $\psi(x,y) = \beta_{x,y}$. Using the uniqueness of the elements $\beta_{x,y}$, it is clear that the resulting function $\psi \colon V \times V \to D$ is $k$-bilinear. If $a, b \in D$ and $x,y \in V$, then \[ \Trd_{D/k}(ab\beta_{x,y}) = \alpha_{x,y}(ab) = \phi(abx, y) = \alpha_{bx,y}(a) = \Trd_{D/k}(a\beta_{bx,y}). \] By uniqueness of $\beta_{bx,y}$, we deduce that $\psi$ is $D$-linear in the first variable. If $a \in D$ and $x, y \in V$, then \begin{align*} \Trd_{D/k}(a\beta_{x,y}) & = \phi(ax, y) = -\phi(a^\dag y, x) = -\Trd_{D/k}(a^\dag \beta_{y, x}) = -\Trd_{D/k}(a\beta_{y,x}^\dag). \end{align*} By uniqueness of $\beta_{x,y}$, we deduce that $\psi(x,y) = -\psi(y,x)^\dag$.
Since $\psi$ is $D$-linear in the first variable and satisfies $\psi(x,y) = -\psi(y,x)^\dag$, it is also $(D,\dag)$-anti-linear in the second variable. Thus it is $(D,\dag)$-skew-Hermitian. \end{proof} \begin{lemma} \label{orthog-complements} Let $(D, \dag)$ be a semisimple $k$-algebra with an involution. Let $V$ be a left $D$-module. Let $\psi \colon V \times V \to D$ be a $(D,\dag)$-skew-Hermitian form and let $\phi = \Trd_{D/k} \psi \colon V \times V \to k$. Let $W \subset V$ be a left $D$-submodule and define \begin{gather*} W_\psi^\perp = \{ x \in V : \psi(w,x) = 0 \text{ for all } w \in W \}, \\ W_\phi^\perp = \{ x \in V : \phi(w,x) = 0 \text{ for all } w \in W \}. \end{gather*} Then $W_\psi^\perp = W_\phi^\perp$. In particular, $W_\phi^\perp$ is a left $D$-submodule of $V$. \end{lemma} \begin{proof} It is clear that $W_\psi^\perp \subset W_\phi^\perp$. If $x \in W_\phi^\perp$ and $w \in W$ then, for all $a \in D$, we have $aw \in W$ and so \[ \Trd_{D/k}(a\psi(w,x)) = \Trd_{D/k}(\psi(aw,x)) = \phi(aw,x) = 0. \] By the non-degeneracy of the reduced trace form, it follows that $\psi(w,x) = 0$, that is, $x \in W_\psi^\perp$. Thus $W_\phi^\perp \subset W_\psi^\perp$. \end{proof} \begin{corollary} \label{tr-non-deg} Let $(D, \dag)$ be a semisimple $k$-algebra with an involution. Let $V$ be a left $D$-module. Let $\psi \colon V \times V \to D$ be a $(D,\dag)$-skew-Hermitian form and let $\phi = \Trd_{D/k} \psi \colon V \times V \to k$. Then $\psi$ is non-degenerate if and only if $\phi$ is non-degenerate. \end{corollary} \begin{proof} Apply \cref{orthog-complements} to $W = V$. \end{proof} \subsection{Weakly symplectic and weakly unitary bases} \label{subsec:unitary-bases} Let $k$ be a field satisfying $\characteristic(k) \neq 2$ and let $(D,\dag)$ be a semisimple $k$-algebra with an involution. Let $V$ be a free left $D$-module and let $\psi \colon V \times V \to D$ be a $(D,\dag)$-skew-Hermitian form.
We will now define special properties relative to $\psi$ which may be possessed by a basis of $V$. The notion of (weakly) symplectic basis is useful when $D$ is a division $\mathbb{Q}$-algebra of type~I or $k^e$, and the notion of (weakly) unitary basis is useful when $D$ is a division $\mathbb{Q}$-algebra of type~II or $\mathrm{M}_2(k)^e$. We say that a $D$-basis $v_1, \dotsc, v_m$ for $V$ is \defterm{weakly symplectic} if $\psi(v_i, v_j) = 0$ for all $i, j$ except when $\{i,j\} = \{2k-1,2k\}$ for some $k \in \mathbb{Z}$. If $\psi$ is non-degenerate, then this implies that $\psi(v_{2k-1}, v_{2k}) \neq 0$ for all~$k$. We say that a $D$-basis $v_1, \dotsc, v_m$ is \defterm{symplectic} if $\psi$ is non-degenerate, the basis is weakly symplectic and furthermore, $\psi(v_{2k-1}, v_{2k}) = 1$ for all~$k$. When $D$ is a field and $\dag=\mathrm{id}$, a $(D,\dag)$-skew-Hermitian form is the same thing as a symplectic form and this definition agrees with the usual definition of symplectic basis. We say that a $D$-basis $v_1, \dotsc, v_m$ is \defterm{weakly unitary} if $\psi(v_i, v_j) = 0$ for all $i, j \in \{ 1, \dotsc, m \}$ such that $i \neq j$. If $\psi$ is non-degenerate, then this implies that $\psi(v_i, v_i) \neq 0$ for all~$i$. For a general division algebra with involution~$(D,\dag)$, there is no canonical choice of a non-zero element of $D^-$, so there is no natural definition of ``unitary basis'' with respect to a $(D,\dag)$-skew-Hermitian form. In the special case $D_0 = \mathrm{M}_d(k)^e$ with $d$ even, let us define \[ \omega_0 = ( J_d, \dotsc, J_d ) \in D_0^- \] where $J_d \in \mathrm{M}_d(k)$ was defined in section~\ref{subsec:notation}. If $V$ is a free left $D_0$-module equipped with a $(D_0,t)$-skew-Hermitian form $\psi_0$, then we say that a left $D_0$-basis $v_1, \dotsc, v_m$ of $V$ is \defterm{unitary} if it is weakly unitary and $\psi_0(v_i, v_i) = \omega_0$ for all $i = 1, \dotsc, m$.
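As a sanity check on the type~II definitions, here is a small worked example (added for concreteness; it is not part of the surrounding argument) in the smallest case $d = 2$, $e = 1$.

```latex
% Illustration (not part of the surrounding development): a unitary basis
% in the case d = 2, e = 1.
Here $D_0 = \mathrm{M}_2(k)$ and $\omega_0 = J_2 = \fullsmallmatrix{0}{1}{-1}{0}$.
On $V = D_0$, the form $\psi_0(x,y) = x J_2 y^t$ is $(D_0,t)$-skew-Hermitian:
\[ \psi_0(y,x) = y J_2 x^t = -y J_2^t x^t = -(x J_2 y^t)^t = -\psi_0(x,y)^t. \]
The one-element basis $v_1 = 1$ is unitary with respect to $\psi_0$: the weak
unitarity condition is vacuous for $m = 1$, and
$\psi_0(v_1, v_1) = J_2 = \omega_0$.
```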
If $(D,\dag)$ is a division $\mathbb{Q}$-algebra with positive involution of type~II, $\alpha \colon (D_{0,\mathbb{R}},t) \to (D_\mathbb{R},\dag)$ is an isomorphism of $\mathbb{R}$-algebras with involution and $V$ is a left $D$-vector space equipped with a $(D,\dag)$-skew-Hermitian form $\psi$, then we say that a left $D_\mathbb{R}$-basis for $V_\mathbb{R}$ is \defterm{$\alpha$-unitary} if it forms a unitary $D_{0,\mathbb{R}}$-basis for $V_\mathbb{R}$ viewed as a $D_{0,\mathbb{R}}$-module via $\alpha$ and equipped with the $(D_{0,\mathbb{R}},t)$-skew-Hermitian form $\alpha^{-1} \circ \psi \colon V_\mathbb{R} \times V_\mathbb{R} \to D_{0,\mathbb{R}}$. The elements $v_i$ of an $\alpha$-unitary basis satisfy $\psi(v_i, v_i) = \alpha(\omega_0)$. As an aside, which will be used in later calculations, we remark that, for any $a \in D_0$, the entries of the matrices which make up $a \omega_0$ are (up to signs) a permutation of the matrix entries making up $a$. Hence \begin{equation} \label{eqn:a-omega0} \abs{a\omega_0}_{D_0} = \abs{a}_{D_0}. \end{equation} The following lemma shows how we can adjust a weakly symplectic or weakly unitary basis to become symplectic or $\alpha$-unitary. Note that it works only over $D_\mathbb{R}$, not over~$D$, because it requires taking square roots. \pagebreak \begin{lemma} \label{semi-orthogonal-normalise} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type I or~II. Let $\alpha \colon (\mathrm{M}_d(\mathbb{R})^e,t) \to (D_\mathbb{R},\dag)$ be an isomorphism of $\mathbb{R}$-algebras with involution. Let $V$ be a left $D$-vector space equipped with a $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $v_1, \dotsc, v_m$ be a left $D$-basis for $V$ which is weakly symplectic (when $D$ has type~I) or weakly unitary (when $D$ has type~II).
Then there exist $s_1, \dotsc, s_m \in D_\mathbb{R}^\times$ such that $s_1^{-1} v_1, \dotsc, s_m^{-1} v_m$ form a symplectic or $\alpha$-unitary $D_\mathbb{R}$-basis for $V_\mathbb{R}$ (according to the type of~$D$) and, for all $i$, \[ \abs{s_i}_D \leq (de)^{1/4} \abs{\psi(v_i, v_j)}_D^{1/2} \] where $j$ is the unique index such that $\psi(v_i, v_j) \neq 0$. \end{lemma} \begin{proof} The proof is in two parts, depending on the type of~$D$. \subsubsection*{Type~I case} For each $k = 1, \dotsc, m/2$, $i=2k-1$ and $j=2k$, let \[ t_k = (de)^{-1/2} \abs{\psi(v_i, v_j)}_D \in \mathbb{R}_{>0}. \] Let $s_i = t_k^{-1/2} \psi(v_i, v_j)$ and $s_j = t_k^{1/2}$. Then \[ \psi(s_i^{-1} v_i, s_j^{-1} v_j) = s_i^{-1} \psi(v_i, v_j) (s_j^{-1})^\dag = 1 \] since $s_j^\dag = s_j$ and $t_k \in \mathbb{R}$ is in the centre of $D_\mathbb{R}$. Furthermore \[ \abs{s_i}_D = t_k^{-1/2} \abs{\psi(v_i, v_j)}_D = (de)^{1/4} \abs{\psi(v_i, v_j)}_D^{1/2} \] while \[ \abs{s_j}_D = t_k^{1/2} \abs{1}_D = (de)^{1/2} t_k^{1/2} = (de)^{1/4} \abs{\psi(v_i, v_j)}_D^{1/2}. \] \subsubsection*{Type~II case} For each~$i$, $\psi(v_i, v_i) \in D^- \setminus\{0\} \subset F_\mathbb{R}^\times \alpha(\omega_0)$. Thus $\psi(v_i, v_i) = t_i\alpha(\omega_0)$ for some $t_i \in F_\mathbb{R}^\times$. Write $\alpha^{-1}(t_i) = (t_{i1}, \dotsc, t_{ie}) \in (\mathbb{R}^\times)^e$. Let $s_i = \alpha(s_{i1}, \dotsc, s_{ie}) \in D_\mathbb{R}^\times$ where $s_{ij} \in \GL_2(\mathbb{R})$ are defined as follows: \begin{align*} s_{ij} &= \fullmatrix{\sqrt{t_{ij}}}{0}{0}{\sqrt{t_{ij}}} \text{ if } t_{ij} \geq 0, \\ s_{ij} &= \fullmatrix{\sqrt{-t_{ij}}}{0}{0}{-\sqrt{-t_{ij}}} \text{ if } t_{ij} < 0. \end{align*} Then \[ \Nrd_{D_\mathbb{R}/F_\mathbb{R}}(s_i) = \alpha(\det(s_{i1}), \dotsc, \det(s_{ie})) = \alpha(t_{i1}, \dotsc, t_{ie}) = t_i. 
\] Hence by \cref{action-on-antisymm}, \[ \psi(s_i^{-1} v_i, s_i^{-1} v_i) = s_i^{-1} \psi(v_i, v_i) (s_i^{-1})^\dag = \Nrd_{D_\mathbb{R}/F_\mathbb{R}}(s_i^{-1}) \psi(v_i, v_i) = \alpha(\omega_0). \] Furthermore, \begin{align*} \abs{s_i}_D^2 & = \sum_{j=1}^e \Tr(s_{ij} s_{ij}^t) = \sum_{j=1}^e 2\abs{t_{ij}} \\& \leq \sqrt{4e \sum_{j=1}^e \abs{t_{ij}}^2} = \sqrt{2e \Trd_{D_\mathbb{R}/\mathbb{R}}(t_i t_i^\dag)} = (2e)^{1/2} \abs{t_i}_D. \end{align*} By \eqref{eqn:a-omega0}, this implies that \[ \abs{s_i}_D^2 \leq (2e)^{1/2} \abs{t_i\alpha(\omega_0)}_D = (de)^{1/2} \abs{\psi(v_i, v_i)}_D. \qedhere \] \end{proof} \begin{lemma} \label{D0-basis} \leavevmode Let $D_0 = \mathrm{M}_d(k)^e$ where $d = 1$ or $2$ and let $t$ denote the involution of $D_0$ which is transpose on each factor. Let $V$ be a free left $D_0$-module and let $\psi_0$ be a non-degenerate $(D_0,t)$-skew-Hermitian form $V \times V \to D_0$. Then there exists a $D_0$-basis $v_1, \dotsc, v_m$ for $V$ and a $k$-basis $a_1, \dotsc, a_{d^2e}$ for $D_0$ with the following properties: \begin{enumerate}[(i)] \item $\{ v_1, \dotsc, v_m \}$ is symplectic with respect to $\psi_0$ if $d=1$ and unitary if $d=2$. \item $\{ a_1, \dotsc, a_{d^2e} \}$ is an orthonormal basis for $D_0$ with respect to $\abs{\cdot}_{D_0}$. \item $\{ a_r v_j : 1 \leq r \leq d^2e, 1 \leq j \leq m \}$ is a symplectic $k$-basis for $V$ with respect to $\Trd_{D_0/k} \psi_0$. \end{enumerate} \end{lemma} \begin{proof} Write $B_0 = \mathrm{M}_d(k)$. Write $F_0$ for the centre of $D_0$, namely $k^e$. Let $u_1, \dotsc, u_e$ denote the standard $k$-basis of $F_0 = k^e$. Let $V_i = u_i V$. Then $V = \bigoplus_{i=1}^e V_i$ and each $V_i$ is a free left $B_0$-module. Because $V$ is a free left $D_0$-module, $\rk_{B_0}(V_1) = \dotsb = \rk_{B_0}(V_e)$. Let $m$ denote this rank.
Because $\psi_0 : V \times V \to D_0$ is $F_0$-bilinear, it takes the form \[ \psi_0((x_1, \dotsc, x_e), (y_1, \dotsc, y_e)) = (\psi_1(x_1, y_1), \dotsc, \psi_e(x_e, y_e)) \text{ for all } x_i, y_i \in V_i, \] where $\psi_i \colon V_i \times V_i \to B_0$ are some non-degenerate $(B_0, t)$-skew-Hermitian forms. Below, we shall prove the lemma with $(D_0, V, \psi_0)$ replaced by $(B_0, V_i, \psi_i)$, yielding a $B_0$-basis $v_{i1}, \dotsc, v_{im}$ for $(V_i, \psi_i)$ and a $k$-basis $b_1, \dotsc, b_{d^2}$ for $B_0$. Then letting $v_j = (v_{1j}, \dotsc, v_{ej})$, we obtain a symplectic or unitary $D_0$-basis for $V$. Furthermore $\{ u_i b_j : 1 \leq i \leq e, 1 \leq j \leq d^2 \}$ forms a $k$-basis for $D_0$ which satisfies (ii) and (iii). Now we prove the lemma for $(B_0, V_i, \psi_i)$, breaking into two cases depending on~$d$. \subsubsection*{Case $d=1$} When $d=1$, $B_0=k$. Each $V_i$ is a $k$-vector space of dimension $m$ and $\psi_i$ is a non-degenerate symplectic form $V_i \times V_i \to k$. By the theory of symplectic forms, there exists a symplectic $k$-basis $\{ v_{i1}, \dotsc, v_{im} \}$ for~$V_i$, proving (i). Choosing $b_1 = 1$ gives an orthonormal $k$-basis of $B_0$ with respect to $\abs{\cdot}_{B_0}$. Since $\Trd_{B_0/k} \psi_i = \psi_i$, the bases $v_{i1}, \dotsc, v_{im}$ and $b_1$ satisfy (iii). \subsubsection*{Case $d=2$, part~(i)} We prove by induction on $m$ that there is a unitary $B_0$-basis $v_{i1}, \dotsc, v_{im}$ using the Gram--Schmidt method. First we claim that there exists $z \in V_i$ such that $\psi_i(z,z) \neq 0$. The values of $\psi_i \colon V_i \times V_i \to B_0$ span a two-sided ideal in $B_0$, which is a simple algebra, so they span all of $B_0$. In particular, we can choose $x,y \in V_i$ such that $\psi_i(x,y)$ is not symmetric, that is, $\psi_i(x,y) + \psi_i(y,x) = \psi_i(x,y) - \psi_i(x,y)^t \neq 0$. Then $\psi_i(x,x)$, $\psi_i(y,y)$ and $\psi_i(x+y,x+y)$ are not all zero.
Choosing $z$ to be one of $x$, $y$ and $x+y$, we obtain $\psi_i(z,z) \neq 0$. Then $\psi_i(z,z) \in B_0^- = k J_d$ so $\psi_i(z,z) = s J_d$ for some $s \in k^\times$. Letting $v_{i1} = \fullsmallmatrix{s^{-1}}{0}{0}{1}z$, we obtain that $\psi_i(v_{i1}, v_{i1}) = J_d$. Let $V_i' = \{ v \in V_i : \psi_i(v_{i1}, v) = 0 \} = \{ v \in V_i : \psi_i(v, v_{i1}) = 0 \}$, which is a left $B_0$-submodule of $V_i$. For every $b \in B_0 \setminus \{0\}$, we have \begin{equation} \label{eqn:aJ} \psi_i(bv_{i1}, v_{i1}) = b\psi_i(v_{i1}, v_{i1}) = bJ_d \neq 0 \end{equation} and so $B_0v_{i1} \cap V_i' = \{ 0 \}$. For every $v \in V_i$, we have \[ v - \psi_i(v, v_{i1}) J_d^{-1} v_{i1} \in V_i'. \] Hence $V_i = B_0v_{i1} \oplus V_i'$ as a direct sum of left $B_0$-modules. By \eqref{eqn:aJ}, $bv_{i1} \neq 0$ for all $b \in B_0 \setminus \{0\}$. Hence $\dim_k(B_0v_{i1}) = 4$ and so $\dim_k(V_i') = 4(m-1)$. Every $B_0$-module whose $k$-dimension is a multiple of 4 is a free $B_0$-module, so $B_0v_{i1}$ and $V_i'$ are free left $B_0$-modules. By induction, there is a unitary $B_0$-basis $v_{i2}, \dotsc, v_{im}$ for $V_i'$. Then $v_{i1}, v_{i2}, \dotsc, v_{im}$ is a unitary $B_0$-basis for $V_i$. \subsubsection*{Case $d=2$, part (ii) and~(iii)} Let \[ b_1 = \fullsmallmatrix{1}{0}{0}{0}, \quad b_2 = \fullsmallmatrix{0}{1}{0}{0}, \quad b_3 = \fullsmallmatrix{0}{0}{1}{0}, \quad b_4 = \fullsmallmatrix{0}{0}{0}{1} \in B_0 = \mathrm{M}_2(k). \] These form an orthonormal $k$-basis for $B_0$ with respect to $\abs{\cdot}_{B_0}$. Since $\psi_i$ is $(B_0,t)$-skew-Hermitian, \[ \psi_i(b_rv_{ij}, b_{r'}v_{ij'}) = b_r\psi_i(v_{ij}, v_{ij'})b_{r'}^t. \] Thus if $j \neq j'$, we obtain $\psi_i(b_rv_{ij}, b_{r'}v_{ij'}) = 0$.
If $j = j'$, then we can calculate \[ \Trd_{B_0/k} \psi_i(b_rv_{ij}, b_{r'}v_{ij}) = \Trd_{\mathrm{M}_2(k)/k}(b_rJ_db_{r'}^t) = \begin{cases} 1 &\text{if } (r,r') = (1,2) \text{ or } (3,4), \\ -1 &\text{if } (r,r') = (2,1) \text{ or } (4,3), \\ 0 &\text{otherwise}. \end{cases} \] Thus the bases $v_{i1}, \dotsc, v_{im}$ and $b_1, \dotsc, b_4$ satisfy (iii) for $(B_0, V_i, \psi_i)$. \end{proof} \subsection{Discriminants and skew-Hermitian forms} The following lemmas are useful for calculating discriminants of skew-Hermitian forms. \begin{lemma} \label{disc-a-form} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with an involution and let $R$ be an order in $D$. Let $r_1, \dotsc, r_{d^2e}$ be a $\mathbb{Z}$-basis for $R$. For $a \in D$, let $T_a \in \mathrm{M}_{d^2e}(\mathbb{Q})$ be the matrix with entries $(T_a)_{ij} = \Trd_{D/\mathbb{Q}}(r_i a r_j^\dag)$. Then \[ \det(T_a) = \pm d^{-d^2e} \disc(R) \Nm_{D/\mathbb{Q}}(a). \] \end{lemma} \begin{proof} Let $M_a \in \mathrm{M}_{d^2e}(\mathbb{Q})$ denote the matrix which represents ``multiplication by $a$ on the right'' with respect to the basis $r_1, \dotsc, r_{d^2e}$. Using the facts that $\Trd_{D/\mathbb{Q}}(xy) = \Trd_{D/\mathbb{Q}}(yx)$ for all $x,y \in D$ and that $\Trd_{D/\mathbb{Q}}$ is $\mathbb{Q}$-linear, \begin{align*} (T_a)_{ij} = \Trd_{D/\mathbb{Q}}(r_i a r_j^\dag) & = \Trd_{D/\mathbb{Q}}(r_j^\dag r_i a) = \Trd_{D/\mathbb{Q}} \bigl( r_j^\dag \sum_{k=1}^{d^2e} (M_a)_{ki} r_k \bigr) \\& = \sum_{k=1}^{d^2e} (M_a)_{ki} \Trd_{D/\mathbb{Q}}(r_k r_j^\dag) = \sum_{k=1}^{d^2e} (M_a)_{ki} (T_1)_{kj}. \end{align*} Thus $T_a = M_a^t T_1$ so \[ \det(T_a) = \det(M_a) \det(T_1) = \Nm_{D/\mathbb{Q}}(a) \det(T_1). \] Now $T_1$ is the Gram matrix of the bilinear form $(x,y) \mapsto d^{-1}\Tr_{D/\mathbb{Q}}(xy^\dag)$ with respect to $r_1, \dotsc, r_{d^2e}$. Hence by \cite[Lemma~5.6]{QRTUI}, $\det(T_1) = \pm d^{-d^2e} \disc(R)$.
\end{proof} The following lemma allows us to calculate the discriminant of $\Trd_{D/\mathbb{Q}} \psi$ on a free $R$-module generated by a weakly symplectic or weakly unitary basis (weakly symplectic or weakly unitary bases with respect to a non-degenerate form automatically satisfy the condition about uniqueness of a permutation~$\sigma$). We have stated the lemma more generally because we shall require it in one additional case: when $m=2$ and the matrix with entries $\psi(v_i, v_j)$ has the form $\fullsmallmatrix{0}{*}{*}{*}$. \begin{lemma} \label{disc-triangular} Let $(D, \dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type I or~II. Let $V$ be a left $D$-vector space with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $R$ be an order in $D$. Let $v_1, \dotsc, v_m$ be a $D$-basis for $V$. Suppose that there is exactly one permutation $\sigma \in S_m$ for which $\psi(v_i, v_{\sigma(i)}) \neq 0$ for all $i = 1, \dotsc, m$. Then \[ \abs{\disc(Rv_1 + \dotsb + Rv_m, \Trd_{D/\mathbb{Q}} \psi)} = d^{-d^2em} \abs{\disc(R)}^m \prod_{i=1}^m \abs{\Nm_{D/\mathbb{Q}}(\psi(v_i, v_{\sigma(i)}))}. \] \end{lemma} \begin{proof} Choose a $\mathbb{Z}$-basis $r_1, \dotsc, r_{d^2e}$ for $R$. Let $A \in \mathrm{M}_n(\mathbb{Q})$ be the Gram matrix of the bilinear form $\Trd_{D/\mathbb{Q}} \psi \colon V \times V \to \mathbb{Q}$ with respect to the $\mathbb{Q}$-basis $r_1 v_1, r_2 v_1, \dotsc, r_{d^2e} v_1, r_1 v_2, \dotsc, r_{d^2e} v_m$ for $V$. Then $A$ is made up of square blocks $B_{ij} \in \mathrm{M}_{d^2e}(\mathbb{Q})$ where $B_{ij}$ is the matrix with entries \[ (B_{ij})_{k\ell} = \Trd_{D/\mathbb{Q}} \psi(r_k v_i, r_\ell v_j) = \Trd_{D/\mathbb{Q}}(r_k\psi(v_i, v_j)r_\ell^\dag). \] In other words, $B_{ij}$ is equal to the matrix $T_{\psi(v_i, v_j)}$ as defined in \cref{disc-a-form}. The only non-zero blocks of $A$ are $B_{i\sigma(i)}$ where $\sigma \in S_m$ is the permutation in the hypothesis of the lemma. 
Hence, using \cref{disc-a-form}, \begin{align*} \disc(Rv_1 + \dotsb + Rv_m, \Trd_{D/\mathbb{Q}} \psi) & = \det(A) = \pm \prod_{i=1}^m \det(B_{i\sigma(i)}) \\& = \pm \prod_{i=1}^m d^{-d^2e} \disc(R) \Nm_{D/\mathbb{Q}}(\psi(v_i, v_{\sigma(i)})). \qedhere \end{align*} \end{proof} \subsection{\texorpdfstring{$D$}{D}-norms} Let $k$ be a subfield of $\mathbb{R}$. Let $(D,\dag)$ be a semisimple $k$-algebra with a positive involution. Let $V$ be a left $D$-module. We say that a function $\abs{\cdot} \colon V_\mathbb{R} \to \mathbb{R}$ is a \defterm{$D$-norm} if it is a norm induced by a positive definite inner product on $V_\mathbb{R}$ and it satisfies the inequality \[ \abs{av} \leq \abs{a}_D \abs{v} \text{ for all } a \in D_\mathbb{R}, v \in V_\mathbb{R}. \] Note that $\abs{\cdot}_D$ is itself a $D$-norm on $D_\mathbb{R}$ thanks to \cref{length-submult}. Let $\psi \colon V \times V \to D$ be a non-degenerate $(D,\dag)$-skew-Hermitian form. We say that a $D$-norm $\abs{\cdot}$ is \defterm{adapted to $\psi$} if it satisfies the following two conditions: \begin{enumerate} \item $\covol(L_1) = 1$ where $L_1 \subset V$ is the $\mathbb{Z}$-module generated by a symplectic $k$-basis for $V$ with respect to $\Trd_{D/k} \psi$. (Note that a symplectic basis always exists since $\Trd_{D/k} \psi$ is a symplectic form over a field. Furthermore, this condition is independent of the choice of symplectic $k$-basis, because the matrix transforming one symplectic basis into another has determinant~$1$.) \item $\abs{\psi(x, y)}_D \leq \abs{x} \abs{y}$ for all $x, y \in V_\mathbb{R}$. \end{enumerate} The following two lemmas demonstrate the significance of condition~(1) and establish the existence of a $D$-norm adapted to~$\psi$. \begin{lemma} \label{covol-disc-lattice} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type I or~II. Let $V$ be a left $D$-vector space with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$.
Let $\abs{\cdot}$ be a $D$-norm on $V_\mathbb{R}$ which satisfies condition~(1) from the definition of ``adapted to $\psi$.'' Let $L$ be a $\mathbb{Z}$-lattice in $V$. Then $\covol(L) = \abs{\disc(L)}^{1/2}$, where we use the volume form associated with $\abs{\cdot}$. \end{lemma} \begin{proof} Choose a symplectic $\mathbb{Q}$-basis $e_1, \dotsc, e_n$ for $V$ with respect to $\Trd_{D/\mathbb{Q}} \psi$ and a $\mathbb{Z}$-basis $f_1, \dotsc, f_n$ for $L$. Let $M$ be the matrix which maps $e_1, \dotsc, e_n$ to $f_1, \dotsc, f_n$. The $\mathbb{Z}$-module generated by $e_1, \dotsc, e_n$ has covolume~$1$ by condition~(1). Hence $\covol(L) = \abs{\det(M)}$. The Gram matrix of $\Trd_{D/\mathbb{Q}} \psi$ with respect to $f_1, \dotsc, f_n$ is equal to $MJM^t$, where $J$ denotes the Gram matrix of the symplectic basis $e_1, \dotsc, e_n$, which satisfies $\det(J) = 1$. So \[ \disc(L) = \det(MJM^t) = \det(M)^2. \qedhere \] \end{proof} \begin{lemma} \label{good-norm} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type I or~II. Let $V$ be a left $D$-vector space of dimension $m$, equipped with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Then there exists a $D$-norm $\abs{\cdot}$ on $V_\mathbb{R}$ which is adapted to $\psi$. \end{lemma} \begin{proof} Identify $D_\mathbb{R}$ with $\mathrm{M}_d(\mathbb{R})^e$ where $d = 1$ or $2$. By \cref{D0-basis}(i), there exists a symplectic or unitary $D_\mathbb{R}$-basis $v_1, \dotsc, v_m$ for $V_\mathbb{R}$, according to the type of $(D,\dag)$. Define the following norm on $V_\mathbb{R}$: \[ \Bigabs{\sum_{i=1}^m x_i v_i} = \sqrt{\sum_{i=1}^m \abs{x_i}_D^2}. \] This is induced by the inner product $\langle \sum_{i=1}^m x_i v_i, \sum_{j=1}^m y_j v_j \rangle = \Trd_{D_\mathbb{R}/\mathbb{R}} \sum_{i=1}^m x_i y_i^\dag$. It is a $D$-norm by \cref{length-submult}. Let $a_1, \dotsc, a_{d^2e}$ be the $\mathbb{R}$-basis for $D_\mathbb{R}$ given by \cref{D0-basis}.
Since $a_1, \dotsc, a_{d^2e}$ is an orthonormal $\mathbb{R}$-basis for $D_\mathbb{R}$ with respect to $\abs{\cdot}_D$, $\{ a_j v_i \}$ is an orthonormal basis for $V_\mathbb{R}$ with respect to $\abs{\cdot}$. Therefore the lattice generated by $\{ a_j v_i \}$ has covolume~$1$. According to \cref{D0-basis}(iii), $\{ a_j v_i \}$ is a symplectic basis for $V_\mathbb{R}$ with respect to $\Trd_{D_\mathbb{R}/\mathbb{R}} \psi$. Thus the norm $\abs{\cdot}$ satisfies condition~(1). By the triangle inequality for $\abs{\cdot}_D$, we have \begin{equation} \label{eqn:psi-triangle-ineq} \Bigabs{\psi \bigl( \sum_{i=1}^m x_i v_i, \sum_{j=1}^m y_j v_j \bigr)}_D \leq \sum_{i=1}^m \sum_{j=1}^m \abs{x_i \psi(v_i, v_j) y_j^\dag}_D. \end{equation} For all $i$, $j$ with $\psi(v_i, v_j) \neq 0$, we have $\psi(v_i, v_j) = \pm 1$ or $\omega_0$, and so by \eqref{eqn:a-omega0}, $\abs{x_i}_D = \abs{x_i \psi(v_i, v_j)}_D$. Hence \begin{equation} \label{eqn:psi-vivj} \abs{x_i \psi(v_i, v_j) y_j^\dag}_D \leq \abs{x_i \psi(v_i, v_j)}_D \abs{y_j^\dag}_D = \abs{x_i}_D \abs{y_j}_D. \end{equation} Let $\sigma \in S_m$ be the permutation such that $\psi(v_i, v_{\sigma(i)}) \neq 0$ (thus if $(D,\dag)$ has type~I, then $\sigma = (1,2)(3,4)(5,6)\dotsm$, while if $(D,\dag)$ has type~II, then $\sigma=\mathrm{id}$). From \eqref{eqn:psi-triangle-ineq} and~\eqref{eqn:psi-vivj}, we obtain \[ \Bigabs{\psi \bigl( \sum_{i=1}^m x_i v_i, \sum_{j=1}^m y_j v_j \bigr)}_D \leq \sum_{i=1}^m \abs{x_i}_D \abs{y_{\sigma(i)}}_D. \] By the Cauchy--Schwarz inequality, we get \[ \Bigabs{\psi \bigl( \sum_{i=1}^m x_i v_i, \sum_{j=1}^m y_j v_j \bigr)}_D \leq \Bigl( \sum_{i=1}^m \abs{x_i}_D^2 \Bigr)^{1/2} \Bigl( \sum_{j=1}^m \abs{y_j}_D^2 \Bigr)^{1/2} = \Bigabs{\sum_{i=1}^m x_iv_i} \, \Bigabs{\sum_{j=1}^m y_jv_j}. \] Thus the norm $\abs{\cdot}$ satisfies condition~(2).
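For the type I model over $\mathbb{R}$ (so $D_\mathbb{R} = \mathbb{R}$, $\abs{\cdot}_D$ the usual absolute value, and $\psi$ represented in the symplectic basis by the standard block-diagonal Gram matrix), the two conditions just verified can be tested numerically. The following is a sketch with $m = 4$, under these model assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Gram matrix of psi in a symplectic basis v_1,...,v_4 (sigma = (1,2)(3,4)).
J = np.array([[0, 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]], dtype=float)

# det(J) = 1, so any two symplectic bases generate lattices of the same
# covolume (condition (1) is independent of the chosen basis).
assert np.isclose(abs(np.linalg.det(J)), 1.0)

# Condition (2): |psi(x, y)| <= |x||y| for the Euclidean norm in this basis.
for _ in range(200):
    x, y = rng.normal(size=4), rng.normal(size=4)
    assert abs(x @ J @ y) <= np.linalg.norm(x) * np.linalg.norm(y) + 1e-9
```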
\end{proof} \section{Proof of Theorem~\ref{minkowski-hermitian-perfect}} \label{sec:minkowski-proof} In this section we prove our main theorem on weakly unitary or symplectic bases with respect to skew-Hermitian forms. The proof is based on the Gram--Schmidt process, following an inductive structure. For technical reasons we may construct either one or two basis vectors at each step of the induction. \Cref{pre-induction} constructs the new basis vector(s) for each induction step, and then \cref{weakly-unitary-induction} consists of calculations to keep track of the bounds during this induction. \subsection{Initial vectors of a weakly symplectic or unitary basis} We would like to begin by choosing $v_1$ to be a shortest non-zero vector of a lattice $L \subset V$ (with respect to a suitable $D$-norm), then inductively choosing a basis for $V^\perp$, the orthogonal complement of $Dv_1$. However if we do this, $\psi(v_1, v_1)$ might be zero (indeed, if $D$ has type~I, then it must be zero) and then $Dv_1 + V^\perp$ is not a direct sum. We will therefore instead choose either \begin{enumerate}[(1)] \item one short vector $v_1 \in V$ such that $\psi(v_1, v_1) \neq 0$; or \item two short vectors $v_1, v_2 \in V$ such that the restriction of $\psi$ to $Dv_1 + Dv_2$ is non-degenerate, and $v_1, v_2$ form a weakly symplectic or weakly unitary basis for $Dv_1 + Dv_2$. \end{enumerate} Let $V^\perp$ denote the orthogonal complement of $v_1$ (in case~(1)) or of $Dv_1 + Dv_2$ (in case~(2)). We will bound the discriminant of $\Trd_{D/\mathbb{Q}} \psi$ restricted to $V^\perp$, and then inductively obtain a weakly symplectic or weakly unitary basis for $V^\perp$. Combining this with $v_1$ and perhaps~$v_2$ gives the basis for~$V$ required to prove \cref{minkowski-hermitian-perfect}. The following lemmas choose $v_1$ and perhaps~$v_2$ satisfying (1) or~(2) above. \begin{lemma} \label{non-zero-permutation} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with an involution.
Let $V$ be a left $D$-vector space, equipped with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $w_1, \dotsc, w_m$ be a $D$-basis for $V$. Then there exists a permutation $\sigma \in S_m$ such that $\psi(w_i, w_{\sigma(i)}) \neq 0$ for all $i = 1, \dotsc, m$. \end{lemma} \begin{proof} If $D$ is a field, then the non-degeneracy of $\psi$ implies that the matrix with entries $\psi(w_i,w_j)$ has non-zero determinant. Then the result is immediate by expressing the determinant as an alternating sum over permutations in $S_m$. When $D$ is non-commutative, we cannot use determinants so we instead use a combinatorial argument (which is also valid in the commutative case). For each subset $S \subset \{ 1, \dotsc, m \}$, let \[ N(S) = \{ j : 1 \leq j \leq m \text{ and there is some } i \in S \text{ satisfying } \psi(w_i, w_j) \neq 0 \}. \] We claim that \begin{equation} \label{hall-marriage-condition} \abs{N(S)} \geq \abs{S}. \end{equation} Indeed, suppose that some subset $S \subset \{ 1, \dotsc, m \}$ does not satisfy \eqref{hall-marriage-condition}. Let $V_S$ denote the left $D$-vector space spanned by $\{ w_i : i \in S \}$. Consider the vectors $v \in V_S$ satisfying $\psi(v, w_j) = 0$ for all $j \in N(S)$. Since \eqref{hall-marriage-condition} is not satisfied, we have imposed $\abs{N(S)} < \abs{S} = \dim_D(V_S)$ left $D$-linear conditions on $v$. Hence there exists a non-zero $v \in V_S$ such that $\psi(v, w_j) = 0$ for every $j \in N(S)$. By the definition of $N(S)$, we also have $\psi(v, w_j) = 0$ for every $j \notin N(S)$, so this $v$ is orthogonal to all of~$V$. This contradicts the non-degeneracy of~$\psi$. Consider a bipartite graph $\Gamma$ with vertices $A_1, \dotsc, A_m, B_1, \dotsc, B_m$ and with an edge $\{A_i,B_j\}$ precisely when $\psi(w_i, w_j) \neq 0$. Then \eqref{hall-marriage-condition} tells us that each subset $S \subset \{ A_1, \dotsc, A_m \}$ has at least $\abs{S}$ neighbours in the bipartite graph.
Therefore by Hall's Marriage Theorem, there is a perfect matching in this bipartite graph. Let $\sigma \in S_m$ be a permutation such that the edges $\{ A_i, B_{\sigma(i)} \}$ form a matching in $\Gamma$. For each $i = 1, \dotsc, m$, since $\{ A_i, B_{\sigma(i)} \}$ is an edge of $\Gamma$, we have $\psi(w_i, w_{\sigma(i)}) \neq 0$. \end{proof} \begin{lemma} \label{short-non-degenerate-vectors} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution. Let $V$ be a left $D$-vector space of dimension $m$, equipped with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $\abs{\cdot}$ be a $D$-norm on $V_\mathbb{R}$. Let $w_1, \dotsc, w_m$ be a $D$-basis for $V$. Then there exist $i,j \in \{ 1, \dotsc, m \}$ satisfying the following conditions: \begin{enumerate}[(i)] \item $\abs{w_i}\abs{w_j} \leq \bigl( \abs{w_1} \abs{w_2} \dotsm \abs{w_m} \bigr)^{2/m}$; \item $\psi(w_i, w_j) \neq 0$; \item if $i \neq j$, then $\psi(w_i,w_i) = 0$. \end{enumerate} \end{lemma} \begin{proof} Let $\sigma$ be a permutation as in \cref{non-zero-permutation}. Choose $k \in \{ 1, \dotsc, m \}$ so that $\abs{w_k} \abs{w_{\sigma(k)}}$ is minimal. Then \[ \abs{w_k} \abs{w_{\sigma(k)}} \leq \bigl( \prod_{i=1}^m \abs{w_i} \abs{w_{\sigma(i)}} \bigr)^{1/m} = \bigl( \prod_{i=1}^m \abs{w_i} \cdot \prod_{j=1}^m \abs{w_j} \bigr)^{1/m} = \bigl( \prod_{i=1}^m \abs{w_i} \bigr)^{2/m}. \] By the choice of $\sigma$, we have $\psi(w_k, w_{\sigma(k)}) \neq 0$. If $\sigma(k) = k$, then $i=j=k$ satisfies the conditions of the lemma. Otherwise choose $i \in \{ k, \sigma(k) \}$ so that $\abs{w_i}$ is minimal. If $\psi(w_i, w_i) \neq 0$, then choosing $j=i$ satisfies the required conditions. If $\psi(w_i, w_i) = 0$, then choose $j$ to be the element of $\{ k, \sigma(k) \}$ which is different from $i$. This $i$ and $j$ satisfy the required conditions. 
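Both lemmas are effective and can be illustrated by direct search: enumerate permutations to find $\sigma$, then minimise $\abs{w_k}\abs{w_{\sigma(k)}}$. The sketch below uses a made-up $3 \times 3$ non-vanishing pattern and made-up lengths; it is illustrative only.

```python
from itertools import permutations
import math

# Hypothetical pattern: nonzero[i][j] is True iff psi(w_i, w_j) != 0.
nonzero = [[False, True, False],
           [True, False, True],
           [False, True, True]]
lengths = [2.0, 1.0, 3.0]                  # |w_1|, |w_2|, |w_3| (made up)
m = len(lengths)

# Lemma (non-zero-permutation): some sigma has psi(w_i, w_sigma(i)) != 0 for all i.
sigmas = [s for s in permutations(range(m))
          if all(nonzero[i][s[i]] for i in range(m))]
assert sigmas

# Lemma (short-non-degenerate-vectors): minimising |w_k||w_sigma(k)| gives a
# pair below the geometric-mean bound (prod |w_i|)^(2/m).
sigma = sigmas[0]
k = min(range(m), key=lambda i: lengths[i] * lengths[sigma[i]])
assert lengths[k] * lengths[sigma[k]] <= math.prod(lengths) ** (2 / m)
```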
\end{proof} In the remainder of this section, whenever we refer to a discriminant other than $\disc(R)$, we mean the discriminant of $\Trd_{D/\mathbb{Q}} \psi$ restricted to the specified $\mathbb{Z}$-module. \begin{lemma} \label{pre-induction} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type I or~II. Let $V$ be a left $D$-vector space with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $L$ be a $\mathbb{Z}$-lattice in $V$ such that $\Trd_{D/\mathbb{Q}} \psi(L \times L) \subset \mathbb{Z}$. Let $R$ be an order which is contained in $\Stab_D(L)$ and let $\eta \in \mathbb{Z}_{>0}$ be a positive integer such that $\eta R^\dag \subset R$. Then there exists an $R$-submodule $M \subset L$ with the following properties: \begin{enumerate}[(i)] \item $r := \dim_D(D \otimes_R M) = 1$ or $2$; \item the restriction of $\psi$ to $M$ is non-degenerate; \item $\abs{\disc(M)} \leq (\gamma_{d^2em}^2/d^3e)^{d^2er/2} \abs{\disc(R)}^r \abs{\disc(L)}^{r/m}$; \item one of the following occurs: \begin{enumerate}[(a)] \item $D$ has type~I, $r=2$ and $M = Rv_1 + Rv_2$ for some $v_1, v_2$ such that \[ \abs{\psi(v_1, v_2)}_D \leq \gamma_{em} \abs{\disc(L)}^{1/em}; \] \item $D$ has type~II, $r=1$ and $M = Rv_1$ for some $v_1$ such that \[ \abs{\psi(v_1, v_1)}_D \leq \gamma_{4em} \abs{\disc(L)}^{1/4em}; \] \item $D$ has type~II, $r=2$ and there exist $D$-linearly independent vectors $v_1, v_2 \in M$ such that $\psi(v_1, v_2) = 0$, \[ \abs{\psi(v_1, v_1)}_D, \abs{\psi(v_2, v_2)}_D \leq 2^{-5/2} \gamma_e^{1/2} \gamma_{4em}^2 \eta^7 \abs{\disc(R)}^{2/e} \abs{\disc(L)}^{1/2em}, \] and \[ [M:Rv_1 + Rv_2] \leq (\gamma_e/8e)^{2e} (\gamma_{4em}^2/8e)^{2e} \eta^{28e} \abs{\disc(R)}^8 \abs{\disc(L)}^{1/m}. \] \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} By \cref{good-norm}, there is a $D$-norm $\abs{\cdot}$ on $V_\mathbb{R}$ adapted to $\psi$. 
By \cref{D-minkowski}, there exists a $D$-basis $w_1, \dotsc, w_m$ for $V$ satisfying $w_1, \dotsc, w_m \in L$ and \[ \abs{w_1} \dotsm \abs{w_m} \leq \gamma_{d^2em}^{m/2} \covol(L)^{1/d^2e} \leq \gamma_{d^2em}^{m/2} \abs{\disc(L)}^{1/2d^2e} \] where the second inequality comes from \cref{covol-disc-lattice}. Choose $i, j$ as in \cref{short-non-degenerate-vectors}. Since $\abs{\cdot}$ is adapted to~$\psi$, we have \begin{equation} \label{eqn:psi-fi-fj-bound} \abs{\psi(w_i, w_j)}_D \leq \abs{w_i}\abs{w_j} \leq \gamma_{d^2em} \abs{\disc(L)}^{1/d^2em}. \end{equation} \subsubsection*{Proof of (i)--(iii)} Let $M = Rw_i + Rw_j$, so that $r=1$ if $i=j$ and $r=2$ if $i \neq j$. If $i=j$, then by \cref{short-non-degenerate-vectors}, $\psi(w_i, w_i) \neq 0$, so the restriction of $\psi$ to $M$ is non-degenerate. If $i \neq j$, then by \cref{short-non-degenerate-vectors}, $\psi(w_i, w_i) = 0$ and $\psi(w_i, w_j) \neq 0$. Consequently for any vector $x \in M$, if $x \in Dw_i \setminus \{0\}$ then $\psi(x, w_j) \neq 0$ while if $x \not\in Dw_i$ then $\psi(x, w_i) \neq 0$. Thus the restriction of $\psi$ to $M$ is non-degenerate. By \cref{disc-triangular}, \cref{Nrd-length-bound} and \eqref{eqn:psi-fi-fj-bound}, we obtain that in both cases $i=j$ or $i \neq j$, \begin{align*} \abs{\disc(M)} & = d^{-d^2er} \abs{\disc(R)}^r \abs{\Nm_{D/\mathbb{Q}}(\psi(w_i, w_j))}^r \\& = d^{-d^2er} \abs{\disc(R)}^r \abs{\Nrd_{D/\mathbb{Q}}(\psi(w_i, w_j))}^{dr} \\& \leq d^{-d^2er} \abs{\disc(R)}^r (de)^{-d^2er/2} \abs{\psi(w_i, w_j)}_D^{d^2er} \\& \leq (d^3e)^{-d^2er/2} \abs{\disc(R)}^r \cdot \gamma_{d^2em}^{d^2er} \abs{\disc(L)}^{r/m}. \end{align*} \medskip For the proof of (iv), we split into cases depending on the type of~$D$ and on whether $i=j$ or $i \neq j$. \subsubsection*{Case~(a)} If $D$ has type~I, then $D$ is a field and $\psi$ is a symplectic form. Hence $\psi(v,v) = 0$ for all $v \in V$, so we must have $i \neq j$. Let $v_1 = w_i$ and $v_2 = w_j$. 
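The consolidation of constants in the bound for $\abs{\disc(M)}$ above is pure exponent arithmetic: with $x = d^2er$ it amounts to $d^{-x}(de)^{-x/2} = (d^3e)^{-x/2}$. A quick numerical check of this identity (a verification sketch only):

```python
import math

def lhs(d, e, r):
    x = d**2 * e * r
    return d**(-x) * (d * e)**(-x / 2)

def rhs(d, e, r):
    x = d**2 * e * r
    return (d**3 * e)**(-x / 2)

# d^(-x) · (de)^(-x/2) = d^(-3x/2) · e^(-x/2) = (d^3 e)^(-x/2) for all parameters.
for d in (1, 2):
    for e in (1, 2, 3):
        for r in (1, 2):
            assert math.isclose(lhs(d, e, r), rhs(d, e, r))
```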
The bound in (iv)(a) is~\eqref{eqn:psi-fi-fj-bound}. \subsubsection*{Case~(b)} If $D$ has type~II and $i=j$, then let $v_1 = w_i$. Then (iv)(b) holds thanks to \eqref{eqn:psi-fi-fj-bound}. \subsubsection*{Case~(c)} If $D$ has type~II and $i \neq j$, then choose $\omega \in D^-$ as in \cref{small-antisymm-star}. Let \[ w_j' = 2\psi(w_i, w_j)\omega w_j - \omega\psi(w_j, w_j) w_i. \] Since $\Trd_{D/\mathbb{Q}} \psi(L \times L) \subset \mathbb{Z}$, $\psi(L \times L) \subset R^*$. Hence $\psi(w_i, w_j)\omega$ and $\omega\psi(w_j, w_j) \in R$, so $w_j' \in Rw_i + Rw_j = M$. Furthermore $w_j'$ and $w_i$ are $D$-linearly independent because $\psi(w_i, w_j)\omega \neq 0$. By \cref{action-on-antisymm}(i), $\omega\psi(w_j, w_j), \psi(w_j, w_j)\omega \in F$. Using this, along with the facts that $\psi(w_i,w_i) = 0$ and $(\omega \psi(w_i, w_j))^\dag = \psi(w_j, w_i) \omega$, we can calculate \begin{align*} \psi(w_j', w_j') & = 2\psi(w_i, w_j) \omega \, \psi(w_j, w_j) \, (2\psi(w_i, w_j) \omega)^\dag \\& \qquad - 2\psi(w_i, w_j) \omega \, \psi(w_j, w_i) \, (\omega \psi(w_j, w_j))^\dag \\& \qquad - \omega \psi(w_j, w_j) \, \psi(w_i, w_j) \, (2\psi(w_i, w_j) \omega)^\dag + 0 \\& = (4-2-2) \psi(w_i, w_j) \omega \psi(w_j, w_j) \omega \psi(w_j, w_i) \\& = 0. \end{align*} Using \cref{action-on-antisymm}(ii) and the fact that $\psi(w_i, w_i) = 0$, we can calculate \begin{align*} \psi(w_j', w_i) & = 2 \psi(w_i, w_j) \omega \, \psi(w_j, w_i) - 0 = 2 \Nrd_{D/F}(\psi(w_i, w_j))\omega. \end{align*} Thus $\psi(w_j', w_i) \in F\omega = D^-$, so $\psi(w_i, w_j') = -\psi(w_j', w_i)^\dag = \psi(w_j', w_i)$. Now let \[ v_1 = w_i - w_j', \qquad v_2 = w_i + w_j'. \] Clearly $v_1, v_2 \in Rw_i + Rw_j' \subset M$. Since $w_i = \frac{1}{2}(v_1 + v_2)$ and $w_j' = \frac{1}{2}(v_2 - v_1)$, the vectors $v_1$ and $v_2$ are $D$-linearly independent. 
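The values $\psi(v_a, v_b)$ computed next use only the relations $\psi(w_i, w_i) = \psi(w_j', w_j') = 0$ and $\psi(w_i, w_j') = \psi(w_j', w_i)$, so they follow from a two-by-two Gram computation. Writing $c = \psi(w_j', w_i)$, this can be checked symbolically; only $\abs{\psi(v_1, v_1)}_D = \abs{\psi(v_2, v_2)}_D = 2\abs{c}_D$ matters for the bounds.

```python
import sympy as sp

c = sp.Symbol('c')                  # c stands for psi(w_j', w_i) = psi(w_i, w_j')

# Gram matrix of psi on the pair (w_i, w_j'): zero diagonal, both
# off-diagonal entries equal to c.
G = sp.Matrix([[0, c], [c, 0]])

v1 = sp.Matrix([1, -1])             # coordinates of v_1 = w_i - w_j'
v2 = sp.Matrix([1, 1])              # coordinates of v_2 = w_i + w_j'

def psi(a, b):
    return (a.T * G * b)[0]

assert psi(v1, v2) == 0             # v_1 and v_2 are orthogonal
assert psi(v1, v1) == -psi(v2, v2)  # the two values differ only in sign
assert {psi(v1, v1), psi(v2, v2)} == {2 * c, -2 * c}
```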
Since $\psi(w_j', w_i) = \psi(w_i, w_j')$ we can calculate \begin{align*} \psi(v_1, v_2) & = \psi(w_i, w_i) + \psi(w_i, w_j') - \psi(w_j', w_i) - \psi(w_j', w_j') = 0, \\ \psi(v_1, v_1) & = \psi(w_i, w_i) - \psi(w_i, w_j') - \psi(w_j', w_i) + \psi(w_j', w_j') = -2\psi(w_j', w_i), \\ \psi(v_2, v_2) & = \psi(w_i, w_i) + \psi(w_i, w_j') + \psi(w_j', w_i) + \psi(w_j', w_j') = 2\psi(w_j', w_i). \end{align*} Consequently using \cref{length-submult,small-antisymm-star,Nrd-length-bound} and \eqref{eqn:psi-fi-fj-bound}, \begin{align*} \abs{\psi(v_1, v_1)}_D = \abs{\psi(v_2, v_2)}_D & = 2\abs{\psi(w_j', w_i)}_D \leq 4 \abs{\Nrd_{D/F}(\psi(w_i, w_j))}_D \abs{\omega}_D \\& \leq 4 \cdot 2^{-1/2} \abs{\psi(w_i, w_j)}_D^2 \cdot 2^{-4} \eta^7 \sqrt{\gamma_e} \abs{\disc(R)}^{2/e} \\& = 2^{-5/2} \sqrt{\gamma_e} \eta^7 \abs{\disc(R)}^{2/e} \cdot \gamma_{4em}^2 \abs{\disc(L)}^{2/4em}. \end{align*} This proves the first inequality in (iv)(c). Using \cref{disc-triangular}, we have \begin{align*} [M : Rv_1 + Rv_2] & = \frac{\abs{\disc(Rv_1 + Rv_2)}^{1/2}}{\abs{\disc(M)}^{1/2}} \\& = \frac{\abs{\Nm_{D/\mathbb{Q}}(\psi(v_1, v_1))}^{1/2} \abs{\Nm_{D/\mathbb{Q}}(\psi(v_2, v_2))}^{1/2}}{\abs{\Nm_{D/\mathbb{Q}}(\psi(w_i, w_j))}^{1/2} \abs{\Nm_{D/\mathbb{Q}}(\psi(w_j, w_i))}^{1/2}} \\& = \frac{\abs{\Nrd_{D/\mathbb{Q}}(\psi(v_1, v_1))} \abs{\Nrd_{D/\mathbb{Q}}(\psi(v_2, v_2))}}{\abs{\Nrd_{D/\mathbb{Q}}(\psi(w_i, w_j))}^2}. \end{align*} Now by \cref{Nrd-length-bound} and the fact that if $a \in F$, then $\Nrd_{D/\mathbb{Q}}(a) = \Nm_{F/\mathbb{Q}}(a)^2$, \begin{align*} \abs{\Nrd_{D/\mathbb{Q}}(\psi(v_1, v_1))} = \abs{\Nrd_{D/\mathbb{Q}}(\psi(v_2, v_2))} & = \abs{\Nrd_{D/\mathbb{Q}}(4 \Nrd_{D/F}(\psi(w_i, w_j)) \omega)} \\& = 4^{2e} \abs{\Nm_{F/\mathbb{Q}}(\Nrd_{D/F}(\psi(w_i, w_j)))}^2 \abs{\Nrd_{D/\mathbb{Q}}(\omega)} \\& = 4^{2e} \abs{\Nrd_{D/\mathbb{Q}}(\omega)} \abs{\Nrd_{D/\mathbb{Q}}(\psi(w_i, w_j))}^2.
\end{align*} Therefore by \cref{Nrd-length-bound,small-antisymm-star} and \eqref{eqn:psi-fi-fj-bound}, \begin{align*} [M : Rv_1 + Rv_2] & = \frac{4^{4e} \abs{\Nrd_{D/\mathbb{Q}}(\omega)}^2 \abs{\Nrd_{D/\mathbb{Q}}(\psi(w_i, w_j))}^4}{\abs{\Nrd_{D/\mathbb{Q}}(\psi(w_i, w_j))}^2} \\& = 4^{4e} \abs{\Nrd_{D/\mathbb{Q}}(\omega)}^2 \abs{\Nrd_{D/\mathbb{Q}}(\psi(w_i, w_j))}^2 \\& \leq 4^{4e} \cdot (2e)^{-2e} \abs{\omega}_D^{4e} \cdot (2e)^{-2e} \abs{\psi(w_i, w_j)}_D^{4e} \\& \leq 2^{4e} e^{-4e} \cdot 2^{-16e} \eta^{28e} \gamma_e^{2e} \abs{\disc(R)}^8 \cdot \gamma_{4em}^{4e} \abs{\disc(L)}^{4e/4em}. \qedhere \end{align*} \end{proof} \subsection{Inductive construction of weakly symplectic or unitary basis} The following theorem is a slight generalisation of \cref{minkowski-hermitian-perfect}, together with explicit values for the constants. Compared to \cref{minkowski-hermitian-perfect}, we only require $R \subset \Stab_D(L)$ (this is needed for the induction) and we add an additional parameter~$\eta$. When $R = \Stab_D(L)$, the parameter~$\eta$ is controlled by \cref{R-cap-Rdag}. \begin{proposition} \label{weakly-unitary-induction} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution of type I or~II. Let $V$ be a left $D$-vector space with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $L$ be a $\mathbb{Z}$-lattice in $V$ such that $\Trd_{D/\mathbb{Q}} \psi(L \times L) \subset \mathbb{Z}$. Let $R$ be an order which is contained in $\Stab_D(L)$ and let $\eta \in \mathbb{Z}_{>0}$ be a positive integer such that $\eta R^\dag \subset R$. 
Then there exists a $D$-basis $v_1, \dotsc, v_m$ for $V$ such that: \begin{enumerate}[(i)] \item $v_1, \dotsc, v_m \in L$; \item the basis is weakly symplectic (when $D$ has type~I) or weakly unitary (when $D$ has type~II) with respect to $\psi$; \item the index of $Rv_1 + \dotsb + Rv_m$ in $L$ is bounded as follows: \createC{so-index-mult} \createC{so-index-eta} \createC{so-index-R} \createC{so-index-L} \[ [L : Rv_1 + \dotsb + Rv_m] \leq \refC{so-index-mult}(d,e,m) \eta^{\refC{so-index-eta}(d,e,m)} \abs{\disc(R)}^{\refC{so-index-R}(d,e,m)} \abs{\disc(L)}^{\refC{so-index-L}(d,e,m)}; \] \item for all $i, j \in \{ 1, \dotsc, m \}$ such that $\psi(v_i, v_j) \neq 0$, \createC{so-psi-mult} \createC{so-psi-eta} \createC{so-psi-R} \createC{so-psi-L} \[ \abs{\psi(v_i, v_j)}_D \leq \refC{so-psi-mult}(d, e, m) \eta^{\refC{so-psi-eta}(d,e,m)} \abs{\disc(R)}^{\refC{so-psi-R}(d,e,m)} \abs{\disc(L)}^{\refC{so-psi-L}(d,e,m)}. \] \end{enumerate} In order to write down values for the constants in these inequalities, define \[ \newC{so-base}(d,e,m) = \max\bigl\{ 1, \pi^{-1/2}(de)^{1/2}m \bigr\}. \] \pagebreak The inequalities (iii) and (iv) hold with the following values of the constants: \begin{center} \bgroup \renewcommand{\arraystretch}{1.4} \begin{tabular}{c|c|c} & $d=1$ & $d=2$ \\ \hline $\refC{so-index-mult}(d,e,m)$ & $2^{m/2} \refC{so-base}(1,e,m)^{em(m+2)/8}$ & $2^m \refC{so-base}(2,e,m)^{em(m+2)}$ \\ $\refC{so-index-eta}(d,e,m)$ & $0$ & $14em$ \\ $\refC{so-index-R}(d,e,m)$ & $m(m+2)/8$ & $m(m+16)/4$ \\ $\refC{so-index-L}(d,e,m)$ & $(m-2)/4$ & $(m-1)/2$ \\ $\refC{so-psi-eta}(d,e,m)$ & $0$ & $7$ \\ $\refC{so-psi-R}(d,e,m)$ & $\bigl( m(m+2)-8 \bigr)/16e$ & $\bigl( m(m+1)+26 \bigr)/16e$ \\ $\refC{so-psi-L}(d,e,m)$ & $(m+2)/8e$ & $(m+1)/8e$ \end{tabular} \egroup \end{center} \begin{align*} \refC{so-psi-mult}(1,e,m) & = e^{1/2} 2^{(m+2)/4e} \refC{so-base}(1,e,m)^{(m(m+2)+8)/16}, \\ \refC{so-psi-mult}(2,e,m) & = (2e)^{3/2} 2^{(m+1)/4e} \refC{so-base}(2,e,m)^{(m(m+1)+2)/4}. 
\end{align*} \end{proposition} \begin{proof} The proof is by induction on~$m = \dim_D(V)$. Let $M$ be an $R$-submodule of $L$ as in \cref{pre-induction}. Let $r = \dim_D(D \otimes_R M) = 1$ or~$2$. Choose $v_1$ and perhaps $v_2$ as in \cref{pre-induction}(iv). For part~(iii), the base case of the induction will be when $m=r$, and this is dealt with in the three cases below. For part~(iv), the base case is when $m=0$, in which case (iv) is vacuously true. Let $M^\perp$ be the orthogonal complement of $M$ in $L$ with respect to $\psi$. By \cref{orthog-complements}, $M^\perp$ is also the orthogonal complement of $M$ in $L$ with respect to $\Trd_{D/\mathbb{Q}} \psi$. By \cref{disc-lattice-complement} and \cref{pre-induction}(iii), \begin{equation} \label{eqn:disc-Mperp} \abs{\disc(M^\perp)} \leq \abs{\disc(L)} \cdot \abs{\disc(M)} \leq (\gamma_{d^2em}^2/d^3e)^{d^2er/2} \abs{\disc(R)}^r \abs{\disc(L)}^{(m+r)/m}. \end{equation} Now $\psi$ restricted to $M^\perp$ is non-degenerate, $\dim_D(D \otimes_R M^\perp) = m-r < m$ and $R \subset \Stab_D(M^\perp)$ so we can apply the proposition inductively to $M^\perp$. We obtain a $D$-basis $v_{r+1}, \dotsc, v_m$ for $D \otimes_R M^\perp$ whose elements lie in $M^\perp \subset L$. Now $v_1, \dotsc, v_r \in M$ are orthogonal to $v_{r+1}, \dotsc, v_m$ and $v_1, \dotsc, v_r$ form a weakly symplectic or weakly unitary $D$-basis for $D \otimes_R M$. Hence by induction $v_1, \dotsc, v_m$ form a weakly symplectic or weakly unitary $D$-basis for~$V$. Thus (i) and~(ii) are satisfied.
By induction, \begin{align} & \phantom{{}\leq{}} [M^\perp:Rv_{r+1} + \dotsb + Rv_m] \notag \\& \leq \refC{so-index-mult}(d,e,m-r) \eta^{\refC{so-index-eta}(d,e,m-r)} \abs{\disc(R)}^{\refC{so-index-R}(d,e,m-r)} \abs{\disc(M^\perp)}^{\refC{so-index-L}(d,e,m-r)} \notag \\& \leq \refC{so-index-mult}(d,e,m-r) \eta^{\refC{so-index-eta}(d,e,m-r)} \abs{\disc(R)}^{\refC{so-index-R}(d,e,m-r)} \notag \\& \qquad \cdot (\gamma_{d^2em}^2/d^3e)^{d^2er/2 \cdot \refC{so-index-L}(d,e,m-r)} \abs{\disc(R)}^{r\refC{so-index-L}(d,e,m-r)} \abs{\disc(L)}^{(m+r)/m \cdot \refC{so-index-L}(d,e,m-r)}. \label{eqn:Mperp-index} \end{align} We now split into cases depending on the type of~$D$ and on whether $r = 1$ or~$2$, as in \cref{pre-induction}(iv). The proofs in the three cases are very similar, with just the details of the calculations varying. For each case, the proofs of (iii) and~(iv) are independent of each other. In these calculations, we will use the following inequalities for Hermite constants, derived from~\eqref{eqn:hermite-constant-bound}: \begin{equation} \label{eqn:hermite-fraction-bound} \frac{\gamma_{d^2em}^2}{d^3e} \leq 4^{2/d^2em} \frac{(d^2em)^2}{d^3e\,\pi^2} \leq 2^{4/d^2em} \refC{so-base}(d,e,m)^2, \qquad \frac{\gamma_e}{8e} \leq \frac{4^{1/e}}{8\pi} \leq \frac{4}{8\pi} < 1. \end{equation} \medskip \subsubsection*{Case~(a), part~(iii)} This is the case when $D$ has type~I and $r=2$. When $m=r=2$, from \cref{pre-induction}(iii) and (iv)(a), we have \[ [L:Rv_1 + Rv_2] = [L:M] = \frac{\abs{\disc(M)}^{1/2}}{\abs{\disc(L)}^{1/2}} \leq (\gamma_{2e}^2/e)^{e/2} \abs{\disc(R)}. \] This establishes (iii) when $m=2$ because \begin{gather*} (\gamma_{2e}^2/e)^{e/2} \leq \bigl( 2^{2/e} \refC{so-base}(1,e,2)^2 \bigr)^{e/2} = \refC{so-index-mult}(1,e,2), \\ \refC{so-index-eta}(1,e,2) = 0, \quad \refC{so-index-R}(1,e,2) = 1, \quad \refC{so-index-L}(1,e,2) = 0.
\end{gather*} When $m \geq 3$, we have, using the fact that $M = Rv_1 + Rv_2$, \cref{disc-lattice-complement}, \cref{pre-induction}(iii) and \eqref{eqn:Mperp-index}, \begin{align*} & \phantom{{} = {}} [L : Rv_1 + \dotsb + Rv_m] = [L : M + M^\perp] [M^\perp : Rv_3 + \dotsb + Rv_m] \\& \leq \abs{\disc(M)} [M^\perp : Rv_3 + \dotsb + Rv_m] \\& \leq (\gamma_{em}^2/e)^{e} \abs{\disc(R)}^2 \abs{\disc(L)}^{2/m} \cdot \refC{so-index-mult}(1,e,m-2) \, (\gamma_{em}^2/e)^{e\refC{so-index-L}(1,e,m-2)} \\& \qquad \cdot \abs{\disc(R)}^{\refC{so-index-R}(1,e,m-2) + 2\refC{so-index-L}(1,e,m-2)} \abs{\disc(L)}^{(m+2)/m \cdot \refC{so-index-L}(1,e,m-2)}. \end{align*} Now we can calculate: for the multiplicative constant: \begin{align*} & \phantom{{} = {}} \refC{so-index-mult}(1,e,m-2) \, (\gamma_{em}^2/e)^{e(1+\refC{so-index-L}(1,e,m-2))} \\& = 2^{(m-2)/2} \refC{so-base}(1,e,m-2)^{e(m-2)m/8} (\gamma_{em}^2/e)^{em/4} \\& \leq 2^{(m-2)/2} \refC{so-base}(1,e,m)^{e(m-2)m/8} \cdot 2 \refC{so-base}(1,e,m)^{em/2} \\& = 2^{m/2} \refC{so-base}(1,e,m)^{e(m^2-2m + 4m)/8} \\& = \refC{so-index-mult}(1,e,m), \end{align*} for the exponent of $\abs{\disc(R)}$: \begin{align*} 2 + \refC{so-index-R}(1,e,m-2) + 2\refC{so-index-L}(1,e,m-2) & = 2 + \frac{(m-2)m}{8} + 2 \cdot \frac{m-4}{4} \\& = \refC{so-index-R}(1,e,m), \end{align*} for the exponent of $\abs{\disc(L)}$: \begin{align*} \frac{2}{m} + \frac{(m+2)}{m} \cdot \refC{so-index-L}(1,e,m-2) & = \frac{2}{m} + \frac{(m+2)(m-4)}{4m} \\& = \frac{8+(m^2-2m-8)}{4m} = \frac{(m-2)m}{4m} = \refC{so-index-L}(1,e,m). \end{align*} \subsubsection*{Case~(a), part~(iv)} For $i=1$, $j=2$, \cref{pre-induction}(iv)(a) gives \begin{equation} \label{eqn:case-a-iv-base} \abs{\psi(v_1, v_2)}_D \leq \gamma_{em} \abs{\disc(L)}^{1/em}. 
\end{equation} This establishes (iv) when $i=1$, $j=2$ because, using \eqref{eqn:hermite-constant-bound} and the fact that $m \geq 2$ so $1/em \leq (m+2)/8e$ and $1 \leq (m(m+2)+8)/16$, \begin{align*} \gamma_{em} & \leq 4^{1/em} em/\pi \leq 4^{(m+2)/8e} e^{1/2} \refC{so-base}(1,e,2) \leq \refC{so-psi-mult}(1,e,m). \\ 0 & \leq \frac{m(m+2)-8}{16e} = \refC{so-psi-R}(1,e,m), \\ \frac{1}{em} & \leq \frac{m(m+2)}{8em} = \refC{so-psi-L}(1,e,m). \end{align*} For $i, j \geq 3$, induction gives \begin{align*} \abs{\psi(v_i, v_j)}_D & \leq \refC{so-psi-mult}(1,e,m-2) \abs{\disc(R)}^{\refC{so-psi-R}(1,e,m-2)} \disc(M^\perp)^{\refC{so-psi-L}(1,e,m)} \\& \leq \refC{so-psi-mult}(1,e,m-2) \abs{\disc(R)}^{\refC{so-psi-R}(1,e,m-2)} \\& \qquad \cdot \bigl( (\gamma_{em}^2/e)^{e} \abs{\disc(R)}^2 \abs{\disc(L)}^{(m+2)/m} \bigr)^{\refC{so-psi-L}(1,e,m-2)}. \end{align*} Now we can calculate: for the multiplicative constant (using \eqref{eqn:hermite-constant-bound}): \begin{align*} & \phantom{{} = {}} \refC{so-psi-mult}(1,e,m-2) \, (\gamma_{em}^2/e)^{e\refC{so-psi-L}(1,e,m-2)} \\& = e^{1/2} 2^{m/4e} \refC{so-base}(1,e,m-2)^{((m-2)m+8)/16} \, (\gamma_{em}^2/e)^{m/8} \\& \leq e^{1/2} 2^{m/4e} \refC{so-base}(1,e,m)^{((m-2)m+8)/16} \cdot 2^{1/2e} \refC{so-base}(1,e,m)^{m/4} \\& = e^{1/2} 2^{(m+2)/4e} \refC{so-base}(1,e,m)^{(m^2-2m+8 + 4m)/16} \\& = \refC{so-psi-mult}(1,e,m), \end{align*} for the exponent of $\abs{\disc(R)}$: \begin{align*} \refC{so-psi-R}(1,e,m-2) + 2\refC{so-psi-L}(1,e,m-2) & = \frac{(m-2)m-8}{16e} + 2 \cdot \frac{m}{8e} = \refC{so-psi-R}(1,e,m), \end{align*} for the exponent of $\abs{\disc(L)}$: \begin{align*} \frac{m+2}{m} \refC{so-psi-L}(1,e,m-2) & = \frac{(m+2)}{m} \cdot \frac{m}{8e} = \refC{so-psi-L}(1,e,m). \end{align*} \medskip \subsubsection*{Case~(b), part~(iii)} In this case, $D$ has type~II and $r=1$. 
When $m=r=1$, from \cref{pre-induction}(iii) and (iv)(b), we have \[ [L:Rv_1] = [L:M] = \frac{\abs{\disc(M)}^{1/2}}{\abs{\disc(L)}^{1/2}} \leq (\gamma_{4e}^2/8e)^{e} \abs{\disc(R)}^{1/2}. \] This establishes (iii) when $m=1$ because \begin{gather*} (\gamma_{4e}^2/8e)^e \leq \bigl( 2^{1/e} \refC{so-base}(2,e,1)^2 \bigr)^e \leq \refC{so-index-mult}(2,e,1), \\ \refC{so-index-eta}(2,e,1) = 14e > 0, \quad \refC{so-index-R}(2,e,1) = 17/4 > 1/2, \quad \refC{so-index-L}(2,e,1) = 0. \end{gather*} When $m \geq 2$, we have (using $M = Rv_1$, \cref{disc-lattice-complement}, \cref{pre-induction}(iii) and \eqref{eqn:Mperp-index}) \begin{align*} & \phantom{{} = {}} [L : Rv_1 + \dotsb + Rv_m] = [L : M + M^\perp] [M^\perp : Rv_2 + \dotsb + Rv_m] \\& \leq \abs{\disc(M)} [M^\perp : Rv_2 + \dotsb + Rv_m] \\& \leq (\gamma_{4em}^2/8e)^{2e} \abs{\disc(R)} \abs{\disc(L)}^{1/m} \\& \qquad \cdot \refC{so-index-mult}(2,e,m-1) \eta^{\refC{so-index-eta}(2,e,m-1)} (\gamma_{4em}^2/8e)^{2e\refC{so-index-L}(2,e,m-1)} \\& \qquad \cdot \abs{\disc(R)}^{\refC{so-index-R}(2,e,m-1) + \refC{so-index-L}(2,e,m-1)} \abs{\disc(L)}^{(m+1)/m \cdot \refC{so-index-L}(2,e,m-1)}.
\end{align*} Now we can calculate: for the multiplicative constant: \begin{align*} & \phantom{{} = {}} \refC{so-index-mult}(2,e,m-1) (\gamma_{4em}^2/8e)^{2e(1+\refC{so-index-L}(2,e,m-1))} \\& = 2^{m-1} \refC{so-base}(2,e,m-1)^{e(m-1)(m+1)} (\gamma_{4em}^2/8e)^{em} \\& \leq 2^{m-1} \refC{so-base}(2,e,m)^{e(m-1)(m+1)} \cdot 2 \refC{so-base}(2,e,m)^{2em} \\& = 2^{m-1+1} \refC{so-base}(2,e,m)^{e(m^2-1 + 2m)} \\& \leq \refC{so-index-mult}(2,e,m), \end{align*} for the exponent of $\eta$: \[ \refC{so-index-eta}(2,e,m-1) = 14e(m-1) < \refC{so-index-eta}(2,e,m), \] for the exponent of $\abs{\disc(R)}$: \begin{align*} 1 + \refC{so-index-R}(2,e,m-1) + \refC{so-index-L}(2,e,m-1) & = 1 + \frac{(m-1)(m+15)}{4} + \frac{m-2}{2} \\& = \frac{m^2+16m-15}{4} < \refC{so-index-R}(2,e,m), \end{align*} for the exponent of $\abs{\disc(L)}$: \begin{align*} \frac{1}{m} + \frac{(m+1)}{m} \cdot \refC{so-index-L}(2,e,m-1) & = \frac{1}{m} + \frac{(m+1)(m-2)}{2m} \\& = \frac{2+(m^2-m-2)}{2m} = \frac{(m-1)m}{2m} = \refC{so-index-L}(2,e,m). \end{align*} \subsubsection*{Case~(b), part~(iv)} For $i=j=1$, \cref{pre-induction}(iv)(b) gives \begin{equation} \label{eqn:case-b-iv-base} \abs{\psi(v_1, v_1)}_D \leq \gamma_{4em} \abs{\disc(L)}^{1/4em}. \end{equation} This establishes (iv) for $i=j=1$ because, using \eqref{eqn:hermite-constant-bound} and the facts that $e \geq 1$ and $m \geq 1$ so $1/2em \leq (m+1)/4e$ and $(m(m+1)+2)/4 \geq 1$, \begin{align*} \gamma_{4em} & \leq 4^{1/4em} (4em/\pi) \leq 2^{1/2em} 2^{3/2} e^{1/2} \refC{so-base}(2,e,m) \\& \leq (2e)^{3/2} 2^{(m+1)/4e} \refC{so-base}(2,e,m)^{(m(m+1)+2)/4} = \refC{so-psi-mult}(2,e,m), \\[3pt] 0 & < 7 = \refC{so-psi-eta}(2,e,m), \\ 0 & < \frac{1 \cdot 2 + 26}{16e} \leq \frac{m(m+1)+26}{16e} = \refC{so-psi-R}(2,e,m), \\ \frac{1}{4em} & \leq \frac{2}{8e} \leq \frac{m+1}{8e} = \refC{so-psi-L}(2,e,m). 
\end{align*} For $i=j \geq 2$, induction gives \begin{align*} \abs{\psi(v_j, v_j)}_D & \leq \refC{so-psi-mult}(2,e,m-1) \eta^{\refC{so-psi-eta}(2,e,m-1)} \abs{\disc(R)}^{\refC{so-psi-R}(2,e,m-1)} \abs{\disc(M^\perp)}^{\refC{so-psi-L}(2,e,m-1)} \\& \leq \refC{so-psi-mult}(2,e,m-1) \eta^{\refC{so-psi-eta}(2,e,m-1)} \abs{\disc(R)}^{\refC{so-psi-R}(2,e,m-1)} \\& \qquad \cdot \bigl( (\gamma_{4em}^2/8e)^{2e} \abs{\disc(R)} \abs{\disc(L)}^{(m+1)/m} \bigr)^{\refC{so-psi-L}(2,e,m-1)}. \end{align*} Now we can calculate: for the multiplicative constant: \begin{align*} & \phantom{{}={}} \refC{so-psi-mult}(2,e,m-1) (\gamma_{4em}^2/8e)^{2e\refC{so-psi-L}(2,e,m-1)} \\& = (2e)^{3/2} 2^{m/4e} \refC{so-base}(2,e,m-1)^{((m-1)m+2)/4} \cdot (\gamma_{4em}^2/8e)^{m/4} \\& \leq (2e)^{3/2} 2^{m/4e} \refC{so-base}(2,e,m)^{((m-1)m+2)/4} \cdot 2^{1/4e} \refC{so-base}(2,e,m)^{m/2} \\& = (2e)^{3/2} 2^{(m+1)/4e} \refC{so-base}(2,e,m)^{(m^2-m+2 + 2m)/4} \\& = \refC{so-psi-mult}(2,e,m), \end{align*} for the exponent of $\eta$: \[ \refC{so-psi-eta}(2,e,m-1) = 7 = \refC{so-psi-eta}(2,e,m), \] for the exponent of $\abs{\disc(R)}$: \begin{align*} \phantom{{} = {}} \refC{so-psi-R}(2,e,m-1) + \refC{so-psi-L}(2,e,m-1) & = \frac{(m-1)m+26}{16e} + \frac{m}{8e} = \refC{so-psi-R}(2, e, m), \end{align*} for the exponent of $\abs{\disc(L)}$: \begin{align*} \frac{m+1}{m} \refC{so-psi-L}(2,e,m-1) & = \frac{(m+1)}{m} \cdot \frac{m}{8e} = \frac{m+1}{8e} = \refC{so-psi-L}(2,e,m). \end{align*} \medskip \subsubsection*{Case~(c), part~(iii)} This is the case where $D$ has type~II and $r=2$. 
When $m=r=2$, from \cref{pre-induction}(iii) and (iv)(c), we have \begin{align*} & \phantom{{} = {}} [L:Rv_1 + Rv_2] = [L:M][M:Rv_1 + Rv_2] \\& = \frac{\abs{\disc(M)}^{1/2}}{\abs{\disc(L)}^{1/2}} [M:Rv_1 + Rv_2] \\& \leq \frac{(\gamma_{8e}^2/8e)^{2e} \abs{\disc(R)} \abs{\disc(L)}^{1/2}}{\abs{\disc(L)}^{1/2}} (\gamma_e/8e)^{2e} (\gamma_{8e}^2/8e)^{2e} \eta^{28e} \abs{\disc(R)}^8 \abs{\disc(L)}^{1/2} \\& = (\gamma_e/8e)^{2e} (\gamma_{8e}^2/8e)^{4e} \eta^{28e} \abs{\disc(R)}^9 \abs{\disc(L)}^{1/2}. \end{align*} This establishes (iii) when $m=2$ because \begin{gather*} (\gamma_e/8e)^{2e} (\gamma_{8e}^2/8e)^{4e} \leq 1 \cdot \bigl( 2^{1/2} \refC{so-base}(2,e,2)^2 \bigr)^{4e} = \refC{so-index-mult}(2,e,2), \\ \refC{so-index-eta}(2,e,2) = 28e, \quad \refC{so-index-R}(2,e,2) = 9, \quad \refC{so-index-L}(2,e,2) = 1/2. \end{gather*} When $m \geq 3$, we have (using \cref{disc-lattice-complement}, \cref{pre-induction}(iv)(c) and \eqref{eqn:Mperp-index}) \begin{align*} & \phantom{{} = {}} [L : Rv_1 + \dotsb + Rv_m] \\& = [L : M + M^\perp] [M : Rv_1 + Rv_2] [M^\perp : Rv_3 + \dotsb + Rv_m] \\& \leq \abs{\disc(M)} [M : Rv_1 + Rv_2] [M^\perp : Rv_3 + \dotsb + Rv_m] \\& \leq (\gamma_{4em}^2/8e)^{4e} \abs{\disc(R)}^2 \abs{\disc(L)}^{2/m} \\& \qquad \cdot (\gamma_e/8e)^{2e} (\gamma_{4em}^2/8e)^{2e} \eta^{28e} \abs{\disc(R)}^8 \abs{\disc(L)}^{1/m} \\& \qquad \cdot \refC{so-index-mult}(2,e,m-2) \eta^{\refC{so-index-eta}(2,e,m-2)} (\gamma_{4em}^2/8e)^{4e\refC{so-index-L}(2,e,m-2)} \\& \qquad \cdot \abs{\disc(R)}^{\refC{so-index-R}(2,e,m-2) + 2\refC{so-index-L}(2,e,m-2)} \abs{\disc(L)}^{(m+2)/m \cdot \refC{so-index-L}(2,e,m-2)}. 
\end{align*} Now we can calculate: for the multiplicative constant: \begin{align*} & \phantom{{} = {}} \refC{so-index-mult}(2,e,m-2) (\gamma_e/8e)^{2e} (\gamma_{4em}^2/8e)^{e(6+4\refC{so-index-L}(2,e,m-2))} \\& = 2^{m-2} \refC{so-base}(2,e,m-2)^{e(m-2)m} (\gamma_e/8e)^{2e} \, (\gamma_{4em}^2/8e)^{2em} \\& \leq 2^{m-2} \refC{so-base}(2,e,m)^{e(m-2)m} \cdot 2^2 \refC{so-base}(2,e,m)^{4em} \\& = 2^{m-2+2} \refC{so-base}(2,e,m)^{e(m^2-2m + 4m)} \\& = \refC{so-index-mult}(2,e,m), \end{align*} for the exponent of $\eta$: \[ 28e + \refC{so-index-eta}(2,e,m-2) = 28e + 14e(m-2) = \refC{so-index-eta}(2,e,m), \] for the exponent of $\abs{\disc(R)}$: \begin{align*} & \phantom{{} = {}} 2 + 8 + \refC{so-index-R}(2,e,m-2) + 2\refC{so-index-L}(2,e,m-2) \\& = 10 + \frac{(m-2)(m+14)}{4} + (m-3) = \frac{m^2+16m}{4} = \refC{so-index-R}(2,e,m), \end{align*} for the exponent of $\abs{\disc(L)}$: \begin{align*} & \phantom{{}={}} \frac{2}{m} + \frac{1}{m} + \frac{m+2}{m} \cdot \refC{so-index-L}(2,e,m-2) \\& = \frac{3}{m} + \frac{(m+2)(m-3)}{2m} \\& = \frac{6+(m^2-m-6)}{2m} = \frac{(m-1)m}{2m} = \refC{so-index-L}(2,e,m). \end{align*} \subsubsection*{Case~(c), part~(iv)} For $i=j=1$ or $2$, \cref{pre-induction}(iv)(c) gives \begin{equation} \label{eqn:case-c-iv-base} \abs{\psi(v_i, v_i)}_D \leq 2^{-5/2} \gamma_e^{1/2} \gamma_{4em}^2 \eta^7 \abs{\disc(R)}^{2/e} \abs{\disc(L)}^{1/2em}. 
\end{equation} This establishes (iv) for $i=j=1$ or~$2$ because, using \eqref{eqn:hermite-constant-bound} and the facts that $1/e \leq 1$, $\pi^{-1/2} < 1$, and $m \geq 2$ so $1/em < (m+1)/4e$ and $(m(m+1)+2)/4 \geq 2$, \begin{align*} 2^{-5/2} \gamma_e^{1/2} \gamma_{4em}^2 & \leq 2^{-5/2} \cdot 4^{1/2e} (e/\pi)^{1/2} \cdot 4^{2/4em} (4em/\pi)^2 \\& = 2^{1/2} 2^{1/e} 2^{1/em} \pi^{-1/2} e^{3/2} \bigl( m\sqrt{2e}/\pi \bigr)^2 \\& < 2^{1/2} 2^1 2^{(m+1)/4e} e^{3/2} \refC{so-base}(2,e,m)^{(m(m+1)+2)/4} = \refC{so-psi-mult}(2,e,m), \\[3pt] 7 & = \refC{so-psi-eta}(2,e,m), \\ \frac{2}{e} & = \frac{2 \cdot 3 + 26}{16e} \leq \frac{m(m+1) + 26}{16e} = \refC{so-psi-R}(2,e,m), \\ \frac{1}{2em} & \leq \frac{2}{8e} \leq \frac{m+1}{8e} = \refC{so-psi-L}(2,e,m). \end{align*} For $i=j \geq 3$, induction gives \begin{align*} \abs{\psi(v_j, v_j)}_D & \leq \refC{so-psi-mult}(2,e,m-2) \eta^{\refC{so-psi-eta}(2,e,m-2)} \abs{\disc(R)}^{\refC{so-psi-R}(2,e,m-2)} \abs{\disc(M^\perp)}^{\refC{so-psi-L}(2,e,m-2)} \\& \leq \refC{so-psi-mult}(2,e,m-2) \eta^{\refC{so-psi-eta}(2,e,m-2)} \abs{\disc(R)}^{\refC{so-psi-R}(2,e,m-2)} \\& \qquad \cdot \bigl( (\gamma_{4em}^2/8e)^{4e} \abs{\disc(R)}^2 \abs{\disc(L)}^{(m+2)/m} \bigr)^{\refC{so-psi-L}(2,e,m-2)}.
\end{align*} Now we can calculate: for the multiplicative constant: \begin{align*} & \phantom{{}={}} \refC{so-psi-mult}(2,e,m-2) (\gamma_{4em}^2/8e)^{4e\refC{so-psi-L}(2,e,m-2)} \\& = (2e)^{3/2} 2^{(m-1)/4e} \refC{so-base}(2,e,m-2)^{((m-2)(m-1)+2)/4} \cdot (\gamma_{4em}^2/8e)^{(m-1)/2} \\& \leq (2e)^{3/2} 2^{(m-1)/4e} \refC{so-base}(2,e,m)^{(m^2-3m+4)/4} \cdot 2^{1/2e} \refC{so-base}(2,e,m)^{m-1} \\& = (2e)^{3/2} 2^{(m-1+2)/4e} \refC{so-base}(2,e,m)^{(m^2-3m+4 + 4m-4)/4} \\& \leq \refC{so-psi-mult}(2,e,m), \end{align*} for the exponent of $\eta$: \[ \refC{so-psi-eta}(2,e,m-2) = 7 = \refC{so-psi-eta}(2,e,m), \] for the exponent of $\abs{\disc(R)}$: \begin{align*} & \phantom{{}={}} \refC{so-psi-R}(2,e,m-2) + 2\refC{so-psi-L}(2,e,m-2) \\& = \frac{(m-2)(m-1)+26}{16e} + 2 \cdot \frac{m-1}{8e} \\& = \frac{m^2+m+24}{16e} \leq \frac{m(m+1)+26}{16e} = \refC{so-psi-R}(2,e,m), \end{align*} for the exponent of $\abs{\disc(L)}$: \begin{align*} \frac{m+2}{m} \refC{so-psi-L}(2,e,m-2) & = \frac{m+2}{m} \cdot \frac{m-1}{8e} \\& = \frac{m^2+m-2}{8em} \leq \frac{m(m+1)}{8em} = \refC{so-psi-L}(2,e,m). \qedhere \end{align*} \end{proof} \begin{lemma} \label{R-cap-Rdag} Let $(D,\dag)$ be a division $\mathbb{Q}$-algebra with a positive involution. Let $V$ be a left $D$-vector space with a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$. Let $L$ be a $\mathbb{Z}$-lattice in $V$ such that $\Trd_{D/\mathbb{Q}} \psi(L \times L) \subset \mathbb{Z}$. Let $R = \Stab_D(L)$. Then $\disc(L) R^\dag \subset R$. \end{lemma} \begin{proof} Let $a \in R$ and $x, y \in L$. Then \[ \Trd_{D/\mathbb{Q}} \psi(a^\dag x, y) = \Trd_{D/\mathbb{Q}} \bigl( a^\dag \psi(x,y) \bigr) = \Trd_{D/\mathbb{Q}} \bigl( \psi(x,y) a^\dag \bigr) = \Trd_{D/\mathbb{Q}} \psi(x,ay). \] Since $x, ay \in L$, we conclude that $\Trd_{D/\mathbb{Q}} \psi(a^\dag x, y) \in \mathbb{Z}$. Since this holds for all $y \in L$, we have $a^\dag x \in L^*$. Consequently, \[ \disc(L)a^\dag x = [L^*:L]a^\dag x \in L. 
\] This holds for all $x \in L$, so $\disc(L)a^\dag \in \Stab_D(L) = R$. \end{proof} To complete the proof of \cref{minkowski-hermitian-perfect}, we combine \cref{weakly-unitary-induction} and \cref{R-cap-Rdag}. The resulting exponent of $\abs{\disc(L)}$ in (iii) is $\refC{so-index-eta}(d,e,m)+\refC{so-index-L}(d,e,m)$ and the exponent of $\abs{\disc(L)}$ in (iv) is $\refC{so-psi-eta}(d,e,m)+\refC{so-psi-L}(d,e,m)$, while the other constants in \cref{minkowski-hermitian-perfect} are the same as the corresponding constants in \cref{weakly-unitary-induction}. \section{Application to the Zilber--Pink conjecture} \label{sec:ZP-high-level} In this section we study special subvarieties of PEL type from the point of view of Shimura data. The main result of the section is that the Shimura datum components of simple PEL type I and~II with given discrete invariants lie in a single $\mathbf{GSp}_{2g}(\mathbb{R})$-conjugacy class, which we describe explicitly. We also establish a bound on the dimension of all special subvarieties of PEL type in $\mathcal{A}_g$, demonstrating that \cref{main-theorem-zp} is indeed a consequence of the Zilber--Pink conjecture. We end the section by outlining the strategy of the proof of \cref{main-theorem-zp} carried out in the subsequent sections. For our notation and terminology around Shimura datum components, see \cite[sec.\ 2A and~2B]{ExCM}. \subsection{Shimura data} \label{subsec:shimura-data} Let $L=\mathbb{Z}^{2g}$, let $V = L_\mathbb{Q}$ and let $\phi:L\times L\to\mathbb{Z}$ be the symplectic form represented, in the standard basis, by the matrix $J_{2g}$. Let $\mathbf{G}=\mathbf{GSp}(V, \phi)=\mathbf{GSp}_{2g}$ and let $\Gamma=\mathbf{Sp}_{2g}(\mathbb{Z})$. Let $X^+$ denote the $\mathbf{G}(\mathbb{R})^+$-conjugacy class of the morphism $h_0 \colon \mathbb{S}\to\mathbf{G}_\mathbb{R}$ given by \begin{equation} \label{eqn:h0} h_0(a+ib) = \fullsmallmatrix{a}{b}{-b}{a}^{\oplus g}.
\end{equation} Then $(\mathbf{G}, X^+)$ is a Shimura datum component and there is a natural $\mathbf{G}(\mathbb{R})^+$-equivariant bijection $X^+ \cong \mathcal{H}_g$ where $\mathcal{H}_g$ is the Siegel upper half-space. The moduli space of principally polarised abelian varieties of dimension~$g$, denoted $\mathcal{A}_g$, is the Shimura variety component whose complex points are $\Gamma \backslash X^+$. Let $S$ be a special subvariety of PEL type of $\mathcal{A}_g$, as defined in section~\ref{subsec:intro-zp-context}, and let $R$ be its generic endomorphism ring. Choose a point $x \in X^+$ whose image $s \in \mathcal{A}_g$ is an endomorphism generic point in $S(\mathbb{C})$. Then $x$ induces an isomorphism $H_1(A_s, \mathbb{Z}) \cong L$ and hence the action of $R$ on $A_s$ induces an action of $R$ on $L$. Let $\mathbf{H}$ denote the centraliser in $\mathbf{G}$ of the action of $R$ on~$L$, which is a $\mathbb{Q}$-algebraic group. We call $\mathbf{H}$ the \defterm{general Lefschetz group} of $S$. Note that $\mathbf{H}$ is only defined up to conjugation by $\Gamma$, because different choices of $x$ may lead to isomorphisms $H_1(A_s, \mathbb{Z}) \cong L$ which differ by $\Gamma$. (The group $\mathbf{H}$ is isomorphic to the Lefschetz group of an endomorphism generic abelian variety parameterised by $S$, as defined in \cite{Mil99}, thanks to \cite[Theorem~4.4]{Mil99}. However it seems to be more common to call $\mathbf{H} \cap \mathbf{Sp}$ or $(\mathbf{H} \cap \mathbf{Sp})^\circ$ the Lefschetz group, so we have added the adjective ``general'' by analogy with the general symplectic and general orthogonal groups.) The special subvariety of PEL type $S$ is a Shimura subvariety component of $\mathcal{A}_g$ associated with a Shimura subdatum component of the form $(\mathbf{H}^\circ, X_\mathbf{H}^+) \subset (\mathbf{G}, X^+)$, where $\mathbf{H}$ is the general Lefschetz group of~$S$. 
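As a concrete sanity check of the setup above (illustrative only, and not part of the argument), one can verify numerically that the matrix $h_0(a+ib)$ from \eqref{eqn:h0} lies in $\mathbf{GSp}_{2g}(\mathbb{R})$ with symplectic multiplier $a^2+b^2$. The script below assumes the block-diagonal convention $J_{2g} = \operatorname{diag}(J_2, \dotsc, J_2)$ with $J_2 = \fullsmallmatrix{0}{1}{-1}{0}$; if the paper uses a different standard form for $J_{2g}$, the matrix of $h_0$ changes accordingly.

```python
# Illustrative check (assumption: J_{2g} = diag(J_2, ..., J_2) with
# J_2 = [[0,1],[-1,0]]): the map h_0(a+ib) = [[a,b],[-b,a]]^{oplus g}
# satisfies h^T J h = (a^2 + b^2) J, i.e. it lands in GSp_{2g}(R)
# with symplectic multiplier a^2 + b^2.

def block_diag(blocks):
    n = sum(len(b) for b in blocks)
    M = [[0.0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        k = len(b)
        for i in range(k):
            for j in range(k):
                M[off + i][off + j] = b[i][j]
        off += k
    return M

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def J(g):
    return block_diag([[[0, 1], [-1, 0]]] * g)

def h0(a, b, g):
    return block_diag([[[a, b], [-b, a]]] * g)

def is_gsp(h, Jm, mult, tol=1e-9):
    # check h^T J h = mult * J entrywise
    lhs = matmul(transpose(h), matmul(Jm, h))
    return all(abs(lhs[i][j] - mult * Jm[i][j]) < tol
               for i in range(len(Jm)) for j in range(len(Jm)))

g = 3
a, b = 2.0, -1.5
assert is_gsp(h0(a, b, g), J(g), a * a + b * b)
```

In particular $h_0(i) = J_{2g}$ under this convention, and the multiplier character restricted to the image of $h_0$ is the norm $a^2+b^2$.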
We say that $(\mathbf{H}, X_\mathbf{H}^+) \subset (\mathbf{G}, X^+)$ is a \defterm{Shimura subdatum component of simple PEL type I or~II} if it is a Shimura subdatum component associated with a special subvariety of PEL type, where $\mathbf{H}$ is the general Lefschetz group, and its generic endomorphism algebra is a division algebra with positive involution of type I or~II. Note that in the simple type I or~II case, $\mathbf{H} = \mathbf{H}^\circ$. \subsection{Representatives of conjugacy classes of Shimura data of simple PEL type I or~II} \label{subsec:shimura-representatives} The Shimura subdatum components of $(\mathbf{G}, X^+)$ of simple PEL type I or~II lie in only finitely many $\mathbf{G}(\mathbb{R})^+$-conjugacy classes. Indeed, we shall now explicitly describe finitely many Shimura subdatum components which represent these $\mathbf{G}(\mathbb{R})^+$-conjugacy classes. Note that, for convenience, these representative subdatum components are not of simple PEL type, although they are of PEL type. This generalises \cite[Lemma~6.1]{QRTUI}, which is the case $g=2$, $d=2$, $e=m=1$. Let $d$, $e$, $m$ be positive integers such that $d^2em = 2g$, $d=1$ or $2$ and $dm$ is even. For fixed $g$, there are only finitely many integers $d,e,m$ satisfying these conditions. As we shall show, each triple $d,e,m$ corresponds to a single $\mathbf{G}(\mathbb{R})^+$-conjugacy class of Shimura subdatum components of simple PEL type I or~II. Let $D_0 = \mathrm{M}_d(\mathbb{Q})^e$. Define a $\mathbb{Q}$-algebra homomorphism $\iota_0 \colon D_0 \to \mathrm{M}_{2g}(\mathbb{Q})$ as follows: \begin{itemize} \item when $d=1$: $\iota_0(a_1, \dotsc, a_e) = a_1 I_m \oplus \dotsb \oplus a_e I_m$. \item when $d=2$: \[ \iota_0\Bigl( \fullmatrix{a_1}{b_1}{c_1}{d_1}, \dotsc, \fullmatrix{a_e}{b_e}{c_e}{d_e} \Bigr) = \fullmatrix{a_1I_{2m}}{b_1I_{2m}}{c_1I_{2m}}{d_1I_{2m}} \oplus \dotsb \oplus \fullmatrix{a_eI_{2m}}{b_eI_{2m}}{c_eI_{2m}}{d_eI_{2m}}. 
\] \end{itemize} We view $V$ as a left $D_0$-module via $\iota_0$. Let $t$ denote the involution of $D_0$ which is transpose on each factor. Since $dm$ is even, $\iota_0(D_0)$ commutes with $J_{2g}$ and so, for all $a \in D_0$ and $x, y \in V$, we have \[ \phi(ax, y) = x^t \iota_0(a)^t J_{2g} y = x^t J_{2g} \iota_0(a)^t y = \phi(x, a^t y). \] Thus $\phi \colon V \times V \to \mathbb{Q}$ is a $(D_0,t)$-compatible symplectic form. By \cref{tr-skew-hermitian-form,tr-non-deg}, there is a unique non-degenerate $(D_0,t)$-skew-Hermitian form $\psi_0 \colon V \times V \to D_0$ such that $\phi = \Trd_{D_0/\mathbb{Q}} \psi_0$. Let $\mathbf{H}_0$ denote the centraliser of $\iota_0(D_0)$ in $\mathbf{G}$. In other words, \begin{equation} \label{eqn:H0} \mathbf{H}_0 = \{ g_1^{\oplus d} \oplus g_2^{\oplus d} \oplus \dotsb \oplus g_e^{\oplus d} : g_1, \dotsc, g_e \in \mathbf{GSp}_{dm}, \, \nu(g_1) = \dotsb = \nu(g_e) \}, \end{equation} where $\nu \colon \mathbf{GSp}_{dm} \to \mathbb{G}_m$ denotes the symplectic multiplier character. This is a connected $\mathbb{Q}$-algebraic group, and it is equal to the general Lefschetz group of a special subvariety of PEL type in which endomorphism generic points correspond to abelian varieties isogenous to a product of the form $A_1^d \times \dotsb \times A_e^d$ where $A_1, \dotsc, A_e$ are pairwise non-isogenous simple abelian varieties of dimension $dm/2$ with $\End(A_1) = \dotsb = \End(A_e) = \mathbb{Z}$. \begin{lemma} \label{conj-class-mt} Let $(\mathbf{H}, X_\mathbf{H}^+) \subset (\mathbf{GSp}_{2g}, \mathcal{H}_g)$ be a Shimura subdatum component of simple PEL type I or II. Let $D$ be the generic endomorphism algebra of $(\mathbf{H}, X_\mathbf{H}^+)$ and let $F$ be the centre of $D$. Then $\mathbf{H}_\mathbb{R}$ is a $\mathbf{G}(\mathbb{R})^+$-conjugate of the group $\mathbf{H}_0$ constructed above for the parameters \[ d = \sqrt{\dim_F(D)} = 1 \text{ or } 2, \quad e = [F:\mathbb{Q}], \quad m = 2g/d^2e.
\] \end{lemma} \begin{proof} The tautological family of principally polarised abelian varieties on $X^+$ restricts to a family of principally polarised abelian varieties on $X_\mathbf{H}^+$. The polarisation induces a Rosati involution $\dag$ of the endomorphism algebra of this family, namely~$D$. As we saw in the construction of the general Lefschetz group, $D$ acts on $V$. Via this action, the symplectic form $\phi \colon V \times V \to \mathbb{Q}$ is $(D,\dag)$-compatible. Since $(D,\dag)$ is a simple $\mathbb{Q}$-algebra with a positive involution of type I or~II, there is an isomorphism $\alpha \colon (D_{0,\mathbb{R}},t) \to (D_\mathbb{R},\dag)$ of $\mathbb{R}$-algebras with involution (where $D_0 = \mathrm{M}_d(\mathbb{Q})^e$ for the parameters $d$ and $e$ specified in the lemma). We obtain an action of $D_{0,\mathbb{R}}$ on $V_\mathbb{R}$ by composing the action of $D_\mathbb{R}$ with $\alpha$. Since $\phi$ is $(D,\dag)$-compatible, it is also $(D_{0,\mathbb{R}},t)$-compatible under the action via~$\alpha$. Hence there is a unique non-degenerate $(D_{0,\mathbb{R}},t)$-skew-Hermitian form $\psi_\alpha \colon V_\mathbb{R} \times V_\mathbb{R} \to D_{0,\mathbb{R}}$ such that $\phi = \Trd_{D_{0,\mathbb{R}}/\mathbb{R}} \psi_\alpha$, where ``$(D_{0,\mathbb{R}},t)$-skew-Hermitian'' refers to the action via~$\alpha$. (Note that $\psi_\alpha$ is in general different from $\psi_0$ because $\psi_0$ is $(D_{0,\mathbb{R}},t)$-skew-Hermitian with respect to the action via $\iota_0$.) By \cref{D0-basis}, there exists a $D_{0,\mathbb{R}}$-basis $v_1, \dotsc, v_m$ for $V_\mathbb{R}$ with respect to the action via~$\iota_0$ which is symplectic (if $d=1$) or unitary (if $d=2$) with respect to $\psi_0$. There likewise exists a $D_{0,\mathbb{R}}$-basis $w_1, \dotsc, w_m$ for $V_\mathbb{R}$ with respect to the action via $\alpha$ which is symplectic or unitary with respect to $\psi_\alpha$.
Define $\gamma \in \mathbf{GL}(V_\mathbb{R})$ by \[ \gamma(\iota_0(a_1)v_1 + \dotsb + \iota_0(a_m)v_m) = \alpha(a_1)w_1 + \dotsb + \alpha(a_m)w_m \] for all $a_1, \dotsc, a_m \in D_{0,\mathbb{R}}$. Because $v_1, \dotsc, v_m$ and $w_1, \dotsc, w_m$ are symplectic or unitary bases (depending on $d$) with respect to $\psi_0$ and $\psi_\alpha$ respectively, we have \[ \psi_\alpha(\gamma(v_i), \gamma(v_j)) = \psi_\alpha(w_i, w_j) = \psi_0(v_i, v_j) \] for all $i, j$. Because $\psi_0$ and $\psi_\alpha$ are $(D_{0,\mathbb{R}},t)$-skew-Hermitian with respect to the actions via $\iota_0$ and $\alpha$ respectively, it follows that \[ \psi_\alpha(\gamma(v), \gamma(w)) = \psi_0(v, w) \] for all $v, w \in V_\mathbb{R}$. Taking the reduced trace, we obtain $\phi(\gamma(v), \gamma(w)) = \phi(v, w)$ for all $v, w \in V_\mathbb{R}$. In other words, $\gamma \in \mathbf{Sp}(V_\mathbb{R}, \phi) \subset \mathbf{G}(\mathbb{R})^+$. Since $\gamma$ is an isomorphism between the representations of $D_{0,\mathbb{R}}$ given by $\alpha$ and~$\iota_0$, $\gamma \mathbf{H}_{0,\mathbb{R}} \gamma^{-1}$ is the centraliser in $\mathbf{G}_\mathbb{R}$ of the action of $D_{0,\mathbb{R}}$ via $\alpha$. In other words, $\gamma \mathbf{H}_{0,\mathbb{R}} \gamma^{-1}$ is the centraliser in $\mathbf{G}_\mathbb{R}$ of the action of $D_\mathbb{R}$, which is $\mathbf{H}_\mathbb{R}$, the base change of the general Lefschetz group~$\mathbf{H}$. \end{proof} \begin{lemma}\label{unique-datum} For each triple of positive integers $d,e,m$ satisfying $d^2em=2g$, $d=1$ or $2$ and $dm$ even, there exists a unique Shimura subdatum component $(\mathbf{H}_0,X^+_0)$ of $(\mathbf{G},X^+)$ with group~$\mathbf{H}_0$. Furthermore, the Hodge parameter $h_0$ from \eqref{eqn:h0} is in~$X_0^+$. \end{lemma} \begin{proof} First note that $h_0 \in X^+$ and $h_0$ factors through $\mathbf{H}_{0,\mathbb{R}}$. Hence if $X_0^+$ denotes the $\mathbf{H}_0(\mathbb{R})^+$-conjugacy class of $h_0$ in $\Hom(\mathbb{S}, \mathbf{H}_{0,\mathbb{R}})$, then $(\mathbf{H}_0, X_0^+)$ is a Shimura subdatum component of $(\mathbf{G}, X^+)$.
To establish the uniqueness, let $X_0^+$ now denote any subset of $X^+$ such that $(\mathbf{H}_0, X_0^+)$ is a Shimura datum component. Let $\mathbf{H}_0^\mathrm{ad}$ denote the quotient of $\mathbf{H}_0$ by its centre. By \cite[Proposition 5.7 (a)]{Mil05}, $X_0^+$ is in bijection with its image $(X^+_0)^{\mathrm{ad}}\subset\Hom(\mathbb{S},\mathbf{H}^\mathrm{ad}_{0,\mathbb{R}})$ under composition with the natural map $\mathbf{H}_{0,\mathbb{R}}\to\mathbf{H}^{\mathrm{ad}}_{0,\mathbb{R}}$. Observe that $\mathbf{H}^\mathrm{ad}_0\cong\mathbf{PGSp}_{md}^e$. Therefore, $(X^+_0)^{\mathrm{ad}}$ is a product of $\mathbf{PGSp}_{md}(\mathbb{R})^+$-conjugacy classes of morphisms $\mathbb{S}\to\mathbf{PGSp}_{md,\mathbb{R}}$ satisfying conditions SV1--SV3 from \cite[section~4]{Mil05}. By \cite[Proposition~1.24]{Mil05} and the following paragraphs, there exists only one $\mathbf{PGSp}_{md}(\mathbb{R})$-conjugacy class $X_{md}$ of morphisms $\mathbb{S}\to\mathbf{PGSp}_{md,\mathbb{R}}$ satisfying SV1--SV3. It has two connected components $X^+_{md}$ and $X^-_{md}$ corresponding to the connected components of $\mathbf{PGSp}_{md}(\mathbb{R})$. In other words, $(X^+_0)^{\mathrm{ad}}$ is equal to a direct product of copies of the spaces $X^+_{md}$ and $X^-_{md}$. Consider the morphisms $h^+_2,h^-_2:\mathbb{S}\to\mathbf{GL}_{2,\mathbb{R}}$ defined by \[h^+_2:a+ib\mapsto\fullsmallmatrix{a}{b}{-b}{a}\text{ and } h^-_2:a+ib\mapsto\fullsmallmatrix{a}{-b}{b}{a}.\] Then $(h^+_2)^{\oplus md/2}$ and $(h^-_2)^{\oplus md/2}$ are non-$\mathbf{GSp}_{md}(\mathbb{R})^+$-conjugate morphisms $\mathbb{S}\to\mathbf{GSp}_{md,\mathbb{R}}$ satisfying SV1--SV3. Therefore, the images of their $\mathbf{GSp}_{md}(\mathbb{R})^+$-conjugacy classes in $\Hom(\mathbb{S},\mathbf{PGSp}_{md,\mathbb{R}})$ are precisely $X^+_{md}$ and $X^-_{md}$.
It follows that $(X^+_0)^{\mathrm{ad}}$ is the image in $\Hom(\mathbb{S},\mathbf{PGSp}^e_{md,\mathbb{R}})$ of the $\mathbf{GSp}_{md}^e(\mathbb{R})^+$-conjugacy class of an element $h\in\Hom(\mathbb{S},\mathbf{GSp}^e_{md,\mathbb{R}})$ of the form \[\bigl( (h^{\pm}_2)^{\oplus md/2}, \dotsc, (h^{\pm}_2)^{\oplus md/2} \bigr),\] for some sequence of signs in $\{\pm\}^e$. Since the image of $h$ in $\Hom(\mathbb{S},\mathbf{GSp}_{md^2e,\mathbb{R}})$ (obtained by repeating each component $d$ times block-diagonally) lies in a Shimura datum, it satisfies condition~SV2 of \cite{Mil05}, that is, the stabiliser of $h$ in $\mathbf{GSp}_{md^2e}(\mathbb{R})$ is compact modulo the centre. This only holds when \[ h=h^+:=((h^+_2)^{\oplus md/2}, \dotsc, (h^+_2)^{\oplus md/2}) \text{ or } h=h^-:=((h^-_2)^{\oplus md/2}, \dotsc, (h^-_2)^{\oplus md/2}).\] This can be checked by observing that the centraliser of $h^+_2 \oplus h^-_2$ in $\mathbf{GSp}_4(\mathbb{R})$ is non-compact modulo the centre. Note that the image of $h^+$ in $\Hom(\mathbb{S},\mathbf{GSp}_{md^2e,\mathbb{R}})$ is equal to $h_0$. Since $h_0 \in X^+$ while the image of $h^-$ is not in $X^+$, we conclude that $X_0^+$ must be equal to the $\mathbf{H}_0(\mathbb{R})^+$-conjugacy class of $h_0$. \end{proof} \begin{corollary} \label{conj-class-datum} If $(\mathbf{H}, X_\mathbf{H}^+) \subset (\mathbf{G}, X^+)$ is a Shimura subdatum component of simple PEL type I or II and $\mathbf{H} = g\mathbf{H}_0 g^{-1}$ for $g \in \mathbf{G}(\mathbb{R})^+$, then $X_\mathbf{H}^+ = gX_0^+$ where $(\mathbf{H}_0, X_0^+) \subset (\mathbf{G}, X^+)$ is the unique Shimura subdatum component given by \cref{unique-datum}. \end{corollary} \subsection{Dimension of special subvarieties of PEL type} The following two results establish a lower bound on the codimension of special subvarieties of PEL type in $\mathcal{A}_g$.
When $g \geq 3$, this guarantees that intersections between a Hodge generic curve and all proper special subvarieties of PEL type in $\mathcal{A}_g$ are predicted to be ``unlikely'' by the Zilber--Pink conjecture. \begin{lemma} \label{codim-simple-pel} Let $S \subset \mathcal{A}_g$ be a special subvariety of simple PEL type, not equal to~$\mathcal{A}_g$. Then $\dim(S) \leq \dim(\mathcal{A}_g) - g^2/4$. \end{lemma} \begin{proof} Let $D$ be the generic endomorphism algebra of $S$. Following our usual notation, let $F$ be the centre of $D$ and let \[ d = \sqrt{\dim_F(D)}, \quad e = [F:\mathbb{Q}], \quad m = 2g/d^2e. \] When $D$ has Albert type~IV, we need some additional notation. Let $s \in S(\mathbb{C})$. Then $D_\mathbb{R} \cong \mathrm{M}_d(\mathbb{C})^{e/2}$ acts $\mathbb{R}$-linearly on the tangent space $T_0(A_s(\mathbb{C}))$. For each $i = 1, \dotsc, e/2$, let $r_i$ denote the multiplicity in $T_0(A_s(\mathbb{C}))$ of the standard representation of the $i$-th factor $\mathrm{M}_d(\mathbb{C})$ of $D_\mathbb{R}$. Similarly let $s_i$ denote the multiplicity of the complex conjugate of the standard representation of the $i$-th factor of $D_\mathbb{R}$. The values $r_i$ and $s_i$ are independent of the choice of $s \in S(\mathbb{C})$, and satisfy $r_i + s_i = dm$. The dimension of special subvarieties of simple PEL type was determined by Shimura \cite[4.1]{Shi63}. Note that our $m$ is the same as $m$ in \cite{Shi63}, while our $e$ is called $g$ in \cite{Shi63}. For a more modern account of this theory, see \cite[chapter~9]{BL04}. For each type of endomorphism algebra~$D$, we quote the dimension of the special subvariety from \cite[4.1]{Shi63} and use some elementary inequalities. When $D$ has type~I, $d=1$, $em = 2g$ and $e \geq 2$ since $S \neq \mathcal{A}_g$, so $m \leq g$. Hence \[ \dim(S) = \tfrac{1}{2} \tfrac{m}{2} \bigl( \tfrac{m}{2} + 1 \bigr)e \leq \tfrac{1}{2} g \bigl( \tfrac{1}{2}g + 1 \bigr) = \tfrac{1}{4}g^2 + \tfrac{1}{2}g. 
\] When $D$ has type~II, $d=2$, $em = g/2$ and $m \leq g/2$ so \[ \dim(S) = \tfrac{1}{2} m(m+1)e \leq \tfrac{1}{4}g \bigl( \tfrac{1}{2}g + 1 \bigr) = \tfrac{1}{8}g^2 + \tfrac{1}{4}g. \] When $D$ has type~III, $d=2$, $em = g/2$ and $m \leq g/2$ so \[ \dim(S) = \tfrac{1}{2} m(m-1)e \leq \tfrac{1}{4}g \bigl( \tfrac{1}{2}g - 1 \bigr) = \tfrac{1}{8}g^2 - \tfrac{1}{4}g. \] When $D$ has type~IV, $2g = d^2em$ and $e \geq 2$ since $F$ is a CM field, so $m \leq g$. Furthermore $r_i + s_i = dm$ so $r_is_i \leq d^2m^2/4$ for each~$i$. Hence, \[ \dim(S) = \sum_{i=1}^{e/2} r_i s_i \leq \tfrac{1}{2}e \cdot \tfrac{1}{4}d^2m^2 = \tfrac{1}{4}gm \leq \tfrac{1}{4}g^2. \] Hence in all cases, \[ \dim(S) \leq \tfrac{1}{4}g^2 + \tfrac{1}{2}g = \tfrac{1}{2}g(g+1) - \tfrac{1}{4}g^2 = \dim(\mathcal{A}_g) - \tfrac{1}{4}g^2. \qedhere \] \end{proof} \begin{lemma} \label{codim-pel} Let $S \subset \mathcal{A}_g$ be a special subvariety of PEL type, not equal to $\mathcal{A}_g$. Then $\dim(S) \leq \dim(\mathcal{A}_g) - g + 1$. \end{lemma} \begin{proof} Note that $g^2/4 \geq g-1$ for all real numbers~$g$, so \cref{codim-simple-pel} implies the claim for special subvarieties of simple PEL type. Let $S \subset \mathcal{A}_g$ be a special subvariety of non-simple PEL type. By adding level structure, we may obtain a finite cover $S' \to S$ which is a fine moduli space of abelian varieties with PEL structure. Then there is a universal abelian scheme with PEL structure $A \to S'$. Since $S'$ is of non-simple PEL type, $A$ is a non-simple abelian scheme. Thus there exist non-trivial abelian schemes $A_1, A_2 \to S'$ such that $A$ is isogenous to $A_1 \times A_2$. (There may be multiple choices of isogeny decompositions of $A$. Choose any such decomposition.) Let $g_1$, $g_2$ denote the relative dimensions of $A_1$ and $A_2$ respectively. Let \[ T = \{ (s, s_1, s_2) \in S' \times \mathcal{A}_{g_1} \times \mathcal{A}_{g_2} : A_s \text{ is isogenous to } A_{s_1} \times A_{s_2} \}. 
\] Since isogenies $A_s \to A_{s_1} \times A_{s_2}$ give rise to Hodge classes on $A_s \times A_{s_1} \times A_{s_2}$, $T$ is a countable union of special subvarieties of $S' \times \mathcal{A}_{g_1} \times \mathcal{A}_{g_2}$. By construction, the projection $T \to S'$ is surjective on $\mathbb{C}$-points. An irreducible complex algebraic variety cannot be contained in the union of countably many proper closed subvarieties. Hence there exists an irreducible component $T^+ \subset T$ such that the image of $T^+$ is dense in $S'$. Therefore $\dim(T^+) \geq \dim(S') = \dim(S)$. Given any two abelian varieties $A_{s_1}$ and $A_{s_2}$ over $\mathbb{C}$, there are only countably many isomorphism classes of abelian varieties which are isogenous to $A_{s_1} \times A_{s_2}$. Furthermore, each abelian variety of dimension~$g$ carries only finitely many PEL structures parameterised by $S'$ (the natural morphism $S' \to \mathcal{A}_g$ is finite). Hence the projection $T \to \mathcal{A}_{g_1} \times \mathcal{A}_{g_2}$ has countable fibres. Therefore \[ \dim(T^+) \leq \dim(\mathcal{A}_{g_1} \times \mathcal{A}_{g_2}) = \frac{g_1(g_1+1)}{2} + \frac{g_2(g_2+1)}{2}. \] Since $g_1+g_2=g$, we obtain \[ \tfrac{1}{2}g_1(g_1+1) + \tfrac{1}{2}g_2(g_2+1) = \tfrac{1}{2}\bigl( (g_1+g_2)^2 - 2g_1g_2 + (g_1+g_2) \bigr) = \tfrac{1}{2}(g^2+g) - g_1g_2. \] Therefore \[ \dim(S) \leq \dim(T^+) \leq \dim(\mathcal{A}_g) - g_1g_2. \] Now $g_1g_2 = g_1(g-g_1)$ is a quadratic function of $g_1$ with a maximum at $g_1=g/2$. Hence, for $1 \leq g_1 \leq g-1$, $g_1g_2$ is minimised when $g_1 = 1$ or $g-1$. Thus $g_1g_2 \geq g-1$. \end{proof} \subsection{High-level proof strategy} \label{subsec:proof-strategy-high-level} We now outline the strategy for the proof of \cref{main-theorem-zp}, which is completed in the subsequent sections of the paper.
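The two elementary inequalities underpinning the codimension bounds just proved, namely $g^2/4 \geq g-1$ (used to deduce \cref{codim-pel} from \cref{codim-simple-pel}) and $\min_{1 \leq g_1 \leq g-1} g_1(g-g_1) = g-1$, are easy to confirm numerically; the following short script is an illustrative check only, not part of the argument.

```python
# Check the two elementary facts used in the codimension lemmas:
#   (1) g^2/4 >= g - 1 for every g, and
#   (2) min over 1 <= g1 <= g-1 of g1*(g - g1) equals g - 1
#       (the quadratic g1*(g - g1) is maximised at g1 = g/2, so on the
#        integer range it is minimised at the endpoints g1 = 1, g-1).

def min_product(g):
    return min(g1 * (g - g1) for g1 in range(1, g))

for g in range(2, 300):
    assert g * g >= 4 * (g - 1)      # equivalent to g^2/4 >= g - 1
    assert min_product(g) == g - 1   # attained at g1 = 1 or g1 = g - 1
```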
Because the Shimura subdatum components $(\mathbf{H}, X_\mathbf{H}^+)$ associated with special subvarieties of simple PEL type I or~II lie in only finitely many $\mathbf{G}(\mathbb{R})$-conjugacy classes, it suffices to prove \cref{main-theorem-zp} ``one $\mathbf{G}(\mathbb{R})$-conjugacy class at a time.'' Thanks to \cref{conj-class-mt}, this means that we choose positive integers $d,e,m$ and let $\mathbf{H}_0$ be the subgroup of $\mathbf{G} = \mathbf{GSp}_{2g}$ defined in \eqref{eqn:H0} for these $d,e,m$. We prove \cref{main-theorem-zp} with $\Sigma$ replaced by $\Sigma_{d,e,m}$, namely, the union of the endomorphism generic loci of all proper special subvarieties of $\mathcal{A}_g$ of simple PEL type I or~II whose underlying group is $\mathbf{G}(\mathbb{R})$-conjugate to $\mathbf{H}_0$. Let $\pi$ denote the standard quotient map $X^+ \to \mathcal{A}_g(\mathbb{C})$ and let $\mathcal{F}_g$ denote a Siegel fundamental set of $X^+$. In order to prove \cref{main-theorem-zp} for $\Sigma_{d,e,m}$, we follow the same proof strategy as \cite{QRTUI} (which proves the analogous result for $g=2$, $d=2$, $e=m=1$). The idea is to apply the Habegger--Pila--Wilkie counting theorem \cite[Corollary 7.2]{HP16} to a definable set of the form \[ D = \{ (y,z) \in Y \times \mathcal{C} : z \in X_y \} \] where $Y \subset \mathbb{R}^n$ is a parameter space for subsets $X_y \subset X^+$ and $\mathcal{C} = \pi^{-1}(C(\mathbb{C})) \cap \mathcal{F}_g$. The parameter space $Y$ has the following properties: \begin{enumerate}[itemsep=2pt] \item For every rational point $y \in Y \cap \mathbb{Q}^n$, $X_y$ is a pre-special subvariety of $X^+$ whose underlying group is $\mathbf{G}(\mathbb{R})$-conjugate to $\mathbf{H}_0$. 
\item For every point $s \in \Sigma_{d,e,m}$, there exists $z \in \pi^{-1}(s) \cap \mathcal{F}_g$ such that $z$ lies in $X_y$ for some rational point $y \in Y \cap \mathbb{Q}^n$, with the height $H(y)$ polynomially bounded in terms of the discriminant of the endomorphism ring of $A_s$. \end{enumerate} Consequently, if $C \cap \Sigma_{d,e,m}$ is infinite and the large Galois orbits conjecture holds, then the number of points $(y,z) \in D$ with $y \in Y \cap \mathbb{Q}^n$ grows reasonably quickly with respect to $H(y)$. Then the Habegger--Pila--Wilkie theorem tells us that $D$ contains a path whose projection to $Y$ is semialgebraic and whose projection to $\mathcal{C}$ is non-constant. We can conclude by a functional transcendence argument as in \cite[sec.~6.5]{QRTUI}. \subsection{Proof strategy: parameter space} \label{subsec:parameter-space} The strategy described in section~\ref{subsec:proof-strategy-high-level} has been applied many times before. The new ingredient required to apply it to our case is to construct a suitable parameter space $Y$ for special subvarieties of simple PEL type I or~II and prove that it satisfies property~(2) above. To construct $Y$, we will choose a suitable representation $\rho \colon \mathbf{G} \to \mathbf{GL}(W)$ where $W$ is a $\mathbb{Q}$-vector space and a vector $w_0 \in W$ such that $\Stab_{\mathbf{G}}(w_0) = \mathbf{H}_0$. Then define $Y$ to be the ``expanded $\rho$-orbit'' of $w_0$ in $W_\mathbb{R}$: \[ Y = \Aut_{\rho(\mathbf{G})}(W_\mathbb{R}) \, \rho(\mathbf{G}(\mathbb{R})) \, w_0. \] For each $y \in Y$, define $\mathbf{H}_y = \Stab_{\mathbf{G}_\mathbb{R}}(y)$ and \[ X_y = \{ z \in X^+ : z(\mathbb{S}) \subset \mathbf{H}_y \}. \] If $y \in Y \cap \mathbb{Q}^n$, then $\mathbf{H}_y$ is a $\mathbb{Q}$-algebraic subgroup of $\mathbf{G}$, which is $\mathbf{G}(\mathbb{R})^+$-conjugate to $\mathbf{H}_0$.
By \cref{conj-class-mt}, we have $\mathbf{H}_{y,\mathbb{R}} = g\mathbf{H}_{0,\mathbb{R}}g^{-1}$ for some $g \in \mathbf{G}(\mathbb{R})^+$ and, for each component $X_y^+$ of $X_y$, $(\mathbf{H}_0, g^{-1}X_y^+)$ is a Shimura subdatum component. By \cref{unique-datum}, $g^{-1}X_y^+ = X_0^+$. Hence, $X_y$ is connected and $(\mathbf{H}_y, X_y)$ is a Shimura subdatum component of $(\mathbf{G}, X^+)$. This establishes property~(1) of section~\ref{subsec:proof-strategy-high-level}. To establish property~(2) of section~\ref{subsec:proof-strategy-high-level}, we use the method of \cite[Proposition~6.3]{QRTUI}. All we have to do is understand how fundamental sets in $\mathbf{H}_y$ vary through the $\mathbf{G}(\mathbb{R})$-conjugacy class. A quantitative description of these fundamental sets is given by \cite[Theorem~1.2]{QRTUI}, but it requires as input a suitable representation $\rho$ and bounds on certain vectors in $\rho$. This input is given in \cref{reps-closed-orbits,rep-bound-arithmetic}, which together generalise \cite[Proposition~5.1]{QRTUI} (which is the case $d=2$, $e=m=1$). \section{Construction of representation and closed orbit} \label{sec:representation} This section constructs the representation required for the strategy outlined in section~\ref{subsec:parameter-space} and proves that it satisfies conditions (i) and~(ii) of \cite[Theorem~1.2]{QRTUI}. These conditions are algebraic and geometric in nature. We also prove a small piece of arithmetic information about the representation, namely \cref{reps-closed-orbits}(v), which will be used to obtain more substantial arithmetic properties in section~\ref{sec:rep-bound}. This section generalises \cite[sections 5.2 and~5.3]{QRTUI}. We will actually construct two representations $\rho_L, \rho_R \colon \mathbf{G} \to \mathbf{GL}(W)$, which are induced by left and right multiplication respectively in $\End(V)$. 
The representation to which we shall apply \cite[Theorem~1.2]{QRTUI} is $\rho_L$, while $\rho_R$ is an auxiliary object required at the end of section~\ref{sec:rep-bound}. \begin{proposition} \label{reps-closed-orbits} Let $d$, $e$ and $m$ be positive integers such that $dm$ is even. Let $n = d^2em$. Let $L = \mathbb{Z}^n$ and let $\phi \colon L \times L \to \mathbb{Z}$ be the standard symplectic form as in section~\ref{subsec:shimura-data}. Let $V = L_\mathbb{Q}$, let $\mathbf{G} = \mathbf{GSp}(V, \phi) = \mathbf{GSp}_{n,\mathbb{Q}}$ and let $\Gamma = \mathbf{Sp}_n(\mathbb{Z})$. Let $E_0$ be a $\mathbb{Q}$-subalgebra of $\End(V) = \mathrm{M}_n(\mathbb{Q})$ such that $E_{0,\mathbb{C}} \cong \mathrm{M}_{dm}(\mathbb{C})^e$ and the resulting $E_{0,\mathbb{C}}$-module structure on $V_\mathbb{C}$ is isomorphic to the direct sum of $d$ copies of each of the $e$ irreducible representations of $E_{0,\mathbb{C}}$. Let $\mathbf{H}_0$ be the $\mathbb{Q}$-algebraic subgroup of $\mathbf{G}$ whose $k$-points are \[ \mathbf{H}_0(k) = (E_0 \otimes_\mathbb{Q} k) \cap \mathbf{G}(k) \] for each field extension $k$ of~$\mathbb{Q}$. 
Then there exists a $\mathbb{Q}$-vector space $W$, a $\mathbb{Z}$-lattice $\Lambda \subset W$, $\mathbb{Q}$-algebraic representations $\rho_L, \rho_R \colon \mathbf{G} \to \mathbf{GL}(W)$, a vector $w_0 \in \Lambda$ and a constant $\newC{du-multiplier}$ such that: \begin{enumerate}[(i)] \item $\Stab_{\mathbf{G},\rho_L}(w_0) = \Stab_{\mathbf{G},\rho_R}(w_0) = \mathbf{H}_0$; \item the orbit $\rho_L(\mathbf{G}(\mathbb{R}))w_0$ is closed in $W_\mathbb{R}$; \item $\rho_L$ and $\rho_R$ commute with each other; \item $\rho_L(\Gamma)$ and $\rho_R(\Gamma)$ stabilise $\Lambda$; \item for each $u \in \mathbf{G}(\mathbb{R})$, if the group $\mathbf{H}_u = u \mathbf{H}_{0,\mathbb{R}} u^{-1}$ is defined over~$\mathbb{Q}$, then there exists $d_u \in \mathbb{R}_{>0}$ such that \[ d_u \rho_L(u) \rho_R(u) w_0 \in \Lambda \quad \text{and} \quad d_u \leq \refC{du-multiplier} \abs{\disc(S_u)}^{1/2} \] where $S_u$ denotes the ring $u E_{0,\mathbb{R}} u^{-1} \cap \mathrm{M}_n(\mathbb{Z})$. \end{enumerate} \end{proposition} In our application to \cref{main-theorem-zp}, $\mathbf{H}_0$ shall be equal to the group $\mathbf{H}_0$ defined in~\eqref{eqn:H0}. To achieve this, let $d=1$ or~$2$ and define $D_0$ and $\iota_0 \colon D_0 \to \mathrm{M}_n(\mathbb{Q})$ as in section~\ref{subsec:shimura-representatives} (with $n=2g$). Let $E_0$ be the centraliser of $\iota_0(D_0)$ in $\mathrm{M}_n(\mathbb{Q})$, that is, \begin{equation} \label{eqn:E_0} E_0 = \{ f_1^{\oplus d} \oplus f_2^{\oplus d} \oplus \dotsb \oplus f_e^{\oplus d} \in \mathrm{M}_n(\mathbb{Q}) : f_1, \dotsc, f_e \in \mathrm{M}_{dm}(\mathbb{Q}) \}. \end{equation} It is immediate that intersecting this algebra $E_0$ with $\mathbf{G}$ yields the same group $\mathbf{H}_0$ as in \eqref{eqn:H0}. Furthermore, the map $(f_1, \dotsc, f_e) \mapsto f_1^{\oplus d} \oplus f_2^{\oplus d} \oplus \dotsb \oplus f_e^{\oplus d}$ is an isomorphism of $\mathbb{Q}$-algebras $\mathrm{M}_{dm}(\mathbb{Q})^e \to E_0$. 
By decomposing $V$ as a direct sum of $dm$-dimensional subspaces, matching the block diagonal decomposition of elements of $E_0$, we see that $V$ is isomorphic to the sum of $d$ copies of each of the $e$ irreducible representations of $E_0$. After extending scalars to~$\mathbb{C}$, we conclude that $E_0$ as defined by \eqref{eqn:E_0} satisfies the conditions of \cref{reps-closed-orbits}. Allowing more general choices of $E_0$ in \cref{reps-closed-orbits} than simply \eqref{eqn:E_0}, and only imposing conditions on $E_0$ after extending scalars to~$\mathbb{C}$, ensures that the proposition could be used as part of a similar strategy for proving the Zilber--Pink conjecture for special subvarieties of simple PEL type III and~IV, as well as types I and~II. \subsection{Construction of the representation} Let $\sigma_L, \sigma_R \colon \mathbf{G} \to \mathbf{GL}(\End(V))$ denote the left and right multiplication representations of~$\mathbf{G}$: \begin{equation*} \sigma_L(g)f = gf, \quad \sigma_R(g)f = fg^{-1}. \end{equation*} Note that $\sigma_R(g)f = fg^{-1}$ rather than $fg$ so that $\sigma_R$ is a group representation. The representations $\rho_L$ and $\rho_R$ in \cref{reps-closed-orbits} are induced by $\sigma_L$ and $\sigma_R$ via a linear algebra construction which we shall now explain, and hence one may think of $\rho_L(u)\rho_R(u)$ in \cref{reps-closed-orbits}(v) as being induced by conjugation by $u \in \mathbf{G}(\mathbb{R})$. Let $\nu \colon \mathbf{G} = \mathbf{GSp}_n \to \mathbb{G}_m$ denote the symplectic multiplier character. Let $W = \bigwedge\nolimits^{mn} \End(V)$, which is a $\mathbb{Q}$-vector space of dimension $\binom{n^2}{mn}$. The representations required by \cref{reps-closed-orbits} are defined as \[ \rho_L = \bigwedge\nolimits^{mn} \sigma_L \otimes \nu^{-mn/2}, \; \rho_R = \bigwedge\nolimits^{mn} \sigma_R \otimes \nu^{mn/2} \; \colon \mathbf{G} \to \mathbf{GL}(W).
\] The powers of $\nu$ are chosen so that both $\rho_L$ and $\rho_R$ restrict to the trivial representation on the scalars $\mathbb{G}_m \subset \mathbf{GSp}_n$. Next we construct a vector $w_0 \in W$ satisfying \cref{reps-closed-orbits}(i). Observe that $\dim_\mathbb{Q}(E_0) = e(dm)^2 = mn$ so $\bigwedge\nolimits^{mn} E_0$ is a 1-dimensional subspace of $W$. This was the reason we used the $mn$-th exterior power to define $W$. Because $E_0$ is a subring of $\End(V)$, for any field extension $k$ of $\mathbb{Q}$, \[ \Stab_{\mathbf{G}(k),\sigma_L}(E_0) = \mathbf{G}(k) \cap (E_0 \otimes_\mathbb{Q} k) = \mathbf{H}_0(k). \] Similarly $\Stab_{\mathbf{G}(k),\sigma_R}(E_0) = \mathbf{H}_0(k)$. Consequently, \[ \Stab_{\mathbf{G},\rho_L}(\bigwedge\nolimits^{mn} E_0) = \Stab_{\mathbf{G},\rho_R}(\bigwedge\nolimits^{mn} E_0) = \mathbf{H}_0. \] The action of $E_0$ on $\bigwedge\nolimits^{mn} E_0$ via the $mn$-th exterior power of the left regular representation is multiplication by the non-reduced norm $\Nm_{E_0/\mathbb{Q}}$. Choose an isomorphism $\eta \colon E_{0,\mathbb{C}} \to \mathrm{M}_{dm}(\mathbb{C})^e$. Let $f \in E_{0,\mathbb{C}}$ and write $\eta(f) = (f_1, \dotsc, f_e) \in \mathrm{M}_{dm}(\mathbb{C})^e$. Since the irreducible representations of $E_{0,\mathbb{C}}$ are projections onto the simple factors of $\mathrm{M}_{dm}(\mathbb{C})^e$, and each irreducible representation appears $d$ times in $V_\mathbb{C}$, we have \[ \det(f) = \prod_{i=1}^e \det(f_i)^d. \] Hence \[ \Nm_{E_{0,\mathbb{C}}/\mathbb{C}}(f) = \prod_{i=1}^e \Nm_{\mathrm{M}_{dm}(\mathbb{C})/\mathbb{C}}(f_i) = \prod_{i=1}^e \det(f_i)^{dm} = \det(f)^m. \] If $f \in \mathbf{H}_0(\mathbb{Q}) \subset \mathbf{G}(\mathbb{Q})$, then $\det(f) = \nu(f)^{n/2}$ so \[ \Nm_{E_0/\mathbb{Q}}(f) = \nu(f)^{mn/2}. \] Hence the action of $\mathbf{H}_0$ on $\bigwedge\nolimits^{mn} E_0$ via $\rho_L$ is multiplication by $\Nm_{E_0/\mathbb{Q}} \otimes \nu^{-mn/2} = 1$.
Thus for any $w \in \bigwedge\nolimits^{mn} E_0$, we have $\rho_L(\mathbf{H}_0)w = w$ while $\Stab_{\mathbf{G},\rho_L}(w) \subset \Stab_{\mathbf{G},\rho_L}(\bigwedge\nolimits^{mn} E_0) = \mathbf{H}_0$. Hence $\Stab_{\mathbf{G},\rho_L}(w) = \mathbf{H}_0$. For similar reasons, the action of $\mathbf{H}_0$ on $\bigwedge\nolimits^{mn} E_0$ via $\bigwedge\nolimits^{mn} \sigma_R$ is multiplication by $\Nm_{E_0/\mathbb{Q}}^{-1}$, and hence the action of $\mathbf{H}_0$ on $\bigwedge\nolimits^{mn} E_0$ via $\rho_R$ is trivial. It follows that for any $w \in \bigwedge\nolimits^{mn} E_0$, $\Stab_{\mathbf{G},\rho_R}(w) = \mathbf{H}_0$. Let $\Lambda = \bigwedge\nolimits^{mn} \mathrm{M}_n(\mathbb{Z})$, which is a $\mathbb{Z}$-lattice in $W$. Let $S_0 = E_0 \cap \mathrm{M}_n(\mathbb{Z})$, which is an order in $E_0$. Then $\bigwedge\nolimits^{mn} S_0$ is a free $\mathbb{Z}$-module of rank 1 contained in $\Lambda$. Choose $w_0$ to be a generator of $\bigwedge\nolimits^{mn} S_0$ (it does not matter which we choose). Since $w_0 \in \bigwedge\nolimits^{mn} E_0$, the argument above shows that $w_0$ satisfies \cref{reps-closed-orbits}(i). It is clear that $\rho_L$ and $\rho_R$ commute, so \cref{reps-closed-orbits}(iii) holds. It is also immediate that \cref{reps-closed-orbits}(iv) holds. Most of this section will be devoted to proving \cref{reps-closed-orbits}(ii). Since the proof of \cref{reps-closed-orbits}(v) is short, we give it here first. \begin{proof}[Proof of \cref{reps-closed-orbits}(v)] By definition, \[ \rho_L(u) \rho_R(u) = \bigwedge\nolimits^{mn} \sigma_L(u)\sigma_R(u) \in \mathbf{GL}(\bigwedge\nolimits^{mn} \End(V)), \] where $\sigma_L(u)\sigma_R(u) \in \mathbf{GL}(\End(V))$ is conjugation by $u$. Hence $\rho_L(u)\rho_R(u)w_0$ is a generator for the $\mathbb{Z}$-module $\bigwedge^{mn} uS_0u^{-1}$. Let $d_u = \covol(S_u)/\covol(uS_0u^{-1})$ with respect to the volume form induced by the non-reduced trace form on $S_{u,\mathbb{R}}$.
Then $d_u\rho_L(u)\rho_R(u)w_0$ is a generator for $\bigwedge^{mn} S_u$ and therefore is in $\Lambda$. Conjugation by $u$ pulls back $\Tr_{S_{u,\mathbb{R}}/\mathbb{R}}$ to $\Tr_{S_{0,\mathbb{R}}/\mathbb{R}}$. Hence \[ d_u = \covol(S_u, \Tr_{S_{u,\mathbb{R}}/\mathbb{R}})/\covol(S_0, \Tr_{S_{0,\mathbb{R}}/\mathbb{R}}) = \sqrt{\abs{\disc(S_u)}/\abs{\disc(S_0)}}. \qedhere \] \end{proof} \subsection{Proof of closed orbit} According to \cite[Prop.~6.3]{BHC62}, in order to show that $\rho_L(\mathbf{G}(\mathbb{R}))w_0$ is closed in $W_\mathbb{R}$ (in the real topology), it suffices to prove that $\rho_L(\mathbf{G}(\mathbb{C}))w_0$ is Zariski closed in $W_\mathbb{C}$. Therefore, for the rest of this section, we shall deal entirely with linear algebra and algebraic geometry over $\mathbb{C}$. Let \[ Q = \{ g \in \End(V_\mathbb{C}) : \exists s \in \mathbb{C} \text{ s.t.\ for all } v, v' \in V_\mathbb{C}, \phi(gv, gv') = s \phi(v,v') \}. \] Note that $Q$ is equal to the union of $\mathbf{G}(\mathbb{C})$ with the set of elements of $\End(V_\mathbb{C})$ whose image is contained in a $\phi$-isotropic subspace of $V_\mathbb{C}$. In particular \[ \mathbf{G}(\mathbb{C}) = \{ g \in Q : \det(g) \neq 0 \}. \] Let $e_1, \dotsc, e_n$ be a symplectic basis for $(V_\mathbb{C}, \phi)$. Then $Q$ is a Zariski closed subset of $\End(V_\mathbb{C})$ because it is defined by the polynomial equations \begin{gather*} \phi(ge_i, ge_j) = 0 \text{ for each } i, j \text{ except when } \{ i,j \} = \{ 2k-1,2k \} \text{ for some } k, \\ \phi(ge_1, ge_2) = \phi(ge_3, ge_4) = \dotsb = \phi(ge_{n-1}, ge_n). \end{gather*} Furthermore, $Q$ is a homogeneous subset of $\End(V_\mathbb{C})$, that is, it is closed under multiplication by scalars. Consequently, for any map from $\End(V_\mathbb{C})$ to another vector space whose coordinates are given by homogeneous polynomials of the same positive degree, the image of $Q$ is homogeneous and Zariski closed.
(This is because such a map induces a morphism of varieties between the associated projective spaces, and the image of the projective algebraic set $(Q \setminus \{0\})/\mathbb{G}_m$ under such a morphism will again be a projective algebraic set.) Note that $\sigma_L : \mathbf{G}(\mathbb{C}) \to \mathbf{GL}(\End(V_\mathbb{C}))$ extends to a $\mathbb{C}$-algebra homomorphism $\End(V_\mathbb{C}) \to \End(\End(V_\mathbb{C})) \cong \mathrm{M}_{n^2}(\mathbb{C})$ defined by the formula $\sigma_L(g)f = gf$. Considering $\sigma_L$ as a representation of the multiplicative monoid $\End(V_\mathbb{C})$, it induces a monoid representation \[ \bigwedge\nolimits^{mn} \sigma_L \colon\End(V_\mathbb{C}) \to \End(\bigwedge\nolimits^{mn} \End(V_\mathbb{C})). \] Here $\bigwedge\nolimits^{mn} \sigma_L$ is a homogeneous morphism of degree $mn$, so the set $(\bigwedge\nolimits^{mn} \sigma_L)(Q)w_0$ is a homogeneous Zariski closed subset of $W_\mathbb{C}$. \begin{lemma} \label{exist-u} There exist vectors $u_1, \dotsc, u_m \in V$ such that the map $\delta \colon \End(V) \to V^m$ defined by $\delta(f) = (f(u_1), \dotsc, f(u_m))$ restricts to an isomorphism of $\mathbb{Q}$-vector spaces $E_0 \to V^m$. \end{lemma} \begin{proof} By the hypothesis of \cref{reps-closed-orbits}, we can decompose $V_\mathbb{C}$ as a direct sum of irreducible $E_{0,\mathbb{C}}$-modules \begin{equation} \label{eqn:VC-decomposition} V_\mathbb{C} = \bigoplus_{i=1}^e \bigoplus_{j=1}^d V_{ij} \end{equation} such that the action of $E_{0,\mathbb{C}} \cong \mathrm{M}_{dm}(\mathbb{C})^e$ on $V_{ij}$ factors through the $i$-th copy of $\mathrm{M}_{dm}(\mathbb{C})$. Since $\mathrm{M}_{dm}(\mathbb{C})$ is a simple algebra, it has a unique irreducible representation (up to isomorphism), so we may choose an isomorphism of $\mathrm{M}_{dm}(\mathbb{C})$-modules $\theta_{ij} \colon \mathbb{C}^{dm} \to V_{ij}$. Label the standard basis of $\mathbb{C}^{dm}$ as $e_{k\ell}$ for $1 \leq k \leq d$, $1 \leq \ell \leq m$. 
Given $f \in E_{0,\mathbb{C}}$, write $\eta(f) = (f_1, \dotsc, f_e) \in \mathrm{M}_{dm}(\mathbb{C})^e$. For $i = 1, \dotsc, e$, $k = 1, \dotsc, d$ and $\ell = 1, \dotsc, m$, let $f_{i,k\ell} \in \mathbb{C}^{dm}$ denote the column of $f_i$ indexed by $k$ and~$\ell$ (ordered to match the basis vectors $e_{k\ell}$). The action of $E_{0,\mathbb{C}}$ on $V_{ij}$ factors through the $i$-th copy of $\mathrm{M}_{dm}(\mathbb{C})$ and $\theta_{ij}$ is an $\mathrm{M}_{dm}(\mathbb{C})$-module homomorphism, so \[ f(\theta_{ij}(e_{k\ell})) = \theta_{ij}(f_i(e_{k\ell})) = \theta_{ij}(f_{i,k\ell}). \] For $\ell = 1, \dotsc, m$, let \[ u_\ell = \sum_{i=1}^e \sum_{j=1}^d \theta_{ij}(e_{j\ell}) \in V_\mathbb{C}. \] (Note that the index $j$ is used twice in this expression.) Then \begin{equation} \label{eqn:f-ul} f(u_\ell) = \sum_{i=1}^e \sum_{j=1}^d \theta_{ij}(f_{i,j\ell}). \end{equation} If $f \in \ker(\delta) \cap E_{0,\mathbb{C}}$, then $f(u_\ell) = 0$ for $\ell = 1, \dotsc, m$. Since \eqref{eqn:VC-decomposition} is a direct sum and the~$\theta_{ij}$ are injective, it follows from \eqref{eqn:f-ul} that $f_{i,j\ell} = 0$ for all $i$, $j$ and $\ell$. In other words $f=0$. Thus $\delta|_{E_{0,\mathbb{C}}}$ is injective for this choice of $u_1, \dotsc, u_m$. A priori these $u_\ell$ lie in $V_\mathbb{C}$ rather than in $V$; however, injectivity of $\delta|_{E_{0,\mathbb{C}}}$ is a non-empty Zariski-open condition on the tuple $(u_1, \dotsc, u_m) \in (V_\mathbb{C})^m$, and $V^m$ is Zariski dense in $(V_\mathbb{C})^m$, so we may replace $u_1, \dotsc, u_m$ by vectors in $V$ for which $\delta|_{E_{0,\mathbb{C}}}$, and in particular $\delta|_{E_0}$, remains injective. Since $\dim_\mathbb{Q}(E_0) = \dim_\mathbb{C}(E_{0,\mathbb{C}}) = ed^2m^2 = \dim_\mathbb{Q}(V^m)$ and $\delta$ is a linear map, it follows that $\delta|_{E_0}$ is an isomorphism $E_0 \to V^m$. \end{proof} \begin{lemma} There exists a linear function $\zeta \colon W \to \mathbb{Q}$ such that $\zeta(w_0) \neq 0$ and \[ \zeta \bigl( (\bigwedge\nolimits^{mn} \sigma_L)(g)w \bigr) = \det(g)^m \zeta(w) \] for all $g \in \End(V)$ and all $w \in W$. \end{lemma} \begin{proof} Define $\zeta$ to be the linear map on $mn$-th exterior powers induced by $\delta$ from \cref{exist-u}. Then $\zeta$ is a linear map $W = \bigwedge\nolimits^{mn} \End(V) \to \bigwedge\nolimits^{mn} V^m \cong \mathbb{Q}$.
We identify $\bigwedge\nolimits^{mn} V^m$ with $\mathbb{Q}$ (the choice of isomorphism $\bigwedge\nolimits^{mn} V^m \cong \mathbb{Q}$ is not important). Since $\delta|_{E_0}$ is an isomorphism $E_0 \to V^m$ and $w_0$ is a generator of $\bigwedge\nolimits^{mn} E_0$, we deduce that $\zeta(w_0)$ is a generator of $\bigwedge\nolimits^{mn} V^m$. In particular $\zeta(w_0) \neq 0$. Let $\tau_L \colon \End(V) \to \End(V^m)$ denote the direct sum of $m$ copies of the tautological representation of $\End(V)$ on $V$. Then \[ \delta(\sigma_L(g)f) = \tau_L(g)\delta(f) \] for all $f, g \in \End(V)$. Taking the $mn$-th exterior power, we get \[ \zeta \bigl( (\bigwedge\nolimits^{mn} \sigma_L)(g)w \bigr) = \det(\tau_L(g)) \zeta(w) = \det(g)^m \zeta(w) \] for all $g \in \End(V)$ and $w \in W$. \end{proof} \begin{lemma} $\rho_L(\mathbf{G}(\mathbb{C}))w_0 = \{ w \in (\bigwedge\nolimits^{mn} \sigma_L)(Q)w_0 : \zeta(w) = \zeta(w_0) \}$. \end{lemma} \begin{proof} If $g \in \mathbf{G}(\mathbb{C})$, then we can write $g = s g'$ where $s \in \mathbb{C}^\times$ and $g' \in \mathbf{Sp}_n(\mathbb{C})$. (Choose $s$ to be a square root of $\nu(g)$.) Then $g' \in Q$, $\rho_L(g) = (\bigwedge\nolimits^{mn} \sigma_L)(g')$ and \[ \zeta(\rho_L(g)w_0) = \det(g')^m \zeta(w_0) = \zeta(w_0) \] so $\rho_L(g)w_0$ is in $\{ w \in (\bigwedge\nolimits^{mn} \sigma_L)(Q)w_0 : \zeta(w) = \zeta(w_0) \}$. Conversely, if $w = (\bigwedge\nolimits^{mn} \sigma_L)(g)w_0$ for some $g \in Q$ and $\zeta(w) = \zeta(w_0)$, then \[ \det(g)^m \zeta(w_0) = \zeta \bigl( (\bigwedge\nolimits^{mn} \sigma_L)(g)w_0 \bigr) = \zeta(w) = \zeta(w_0). \] Since $\zeta(w_0) \neq 0$, we deduce that $\det(g)^m = 1$. In particular $\det(g) \neq 0$. Together with $g \in Q$, this implies that $g \in \mathbf{GSp}_n(\mathbb{C})$. Furthermore, \[ \rho_L(g) = (\bigwedge\nolimits^{mn} \sigma_L)(g) \, \nu(g)^{-mn/2} = (\bigwedge\nolimits^{mn} \sigma_L)(g) \det(g)^{-m} = (\bigwedge\nolimits^{mn} \sigma_L)(g) \] so $\rho_L(g)w_0 = w$.
Thus $w \in \rho_L(\mathbf{G}(\mathbb{C}))w_0$. \end{proof} Thus $\rho_L(\mathbf{G}(\mathbb{C}))w_0$ is Zariski closed in $W_\mathbb{C}$, so by \cite[Prop.~6.3]{BHC62} $\rho_L(\mathbf{G}(\mathbb{R}))w_0$ is closed in $W_\mathbb{R}$ in the real topology. \section{Arithmetic bound for the representation} \label{sec:rep-bound} In this section, we bound the vectors $v_u$ of \cite[Theorem~1.2]{QRTUI} (here renamed~$w_u$), when applied to the representation~$\rho_L$ defined in section~\ref{sec:representation}. This bound is arithmetic in nature, being in terms of discriminants of orders in $\mathbb{Q}$-division algebras. The argument generalises \cite[section~5.5]{QRTUI} and \cref{minkowski-hermitian-perfect} plays the role of \cite[Lemma~5.7]{QRTUI}. \begin{proposition} \label{rep-bound-arithmetic} Let $d$, $e$ and $m$ be positive integers such that $dm$ is even. Let $n = d^2em$. Let $L = \mathbb{Z}^n$ and let $\phi \colon L \times L \to \mathbb{Z}$ be the standard symplectic form as in section~\ref{subsec:shimura-data}. Let $\mathbf{G} = \mathbf{GSp}(L_\mathbb{Q}, \phi) = \mathbf{GSp}_{n,\mathbb{Q}}$ and let $\Gamma = \mathbf{Sp}_n(\mathbb{Z})$. Let $\mathbf{H}_0$ be the subgroup of $\mathbf{G}$ defined in~\eqref{eqn:H0}. Let $W$, $\Lambda \subset W$, $\rho_L, \rho_R \colon \mathbf{G} \to \mathbf{GL}(W)$ and $w_0 \in \Lambda$ be as in \cref{reps-closed-orbits}. 
Then there exist positive constants $\newC{rep-multiplier}$, $\newC{rep-exponent}$, $\newC{guh-multiplier}$ and $\newC{guh-exponent}$ such that, for each $u \in \mathbf{G}(\mathbb{R})$, if the group $\mathbf{H}_u = u \mathbf{H}_{0,\mathbb{R}} u^{-1}$ is defined over~$\mathbb{Q}$ and $L_\mathbb{Q}$ is irreducible as a representation of $\mathbf{H}_u$ over $\mathbb{Q}$, then \begin{enumerate}[(a)] \item there exists $w_u \in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R}) w_0$ such that $\rho_L(u) w_u \in \Lambda$ and \[ \abs{w_u} \leq \refC{rep-multiplier} \abs{\disc(R_u)}^{\refC{rep-exponent}}; \] \item there exists $\gamma \in \Gamma$ and $h \in \mathbf{H}_0(\mathbb{R})$ such that \[ \length{\gamma uh} \leq \refC{guh-multiplier} \abs{\disc(R_u)}^{\refC{guh-exponent}}, \] \end{enumerate} where $R_u$ denotes the ring $\End_{\mathbf{H}_u}(L) \subset \mathrm{M}_n(\mathbb{Z})$. \end{proposition} Note that $L_\mathbb{Q}$ is irreducible as a representation of $\mathbf{H}_u$ if and only if $R_{u,\mathbb{Q}}$ is a division algebra. Because $R_{u,\mathbb{R}}$ is $\mathbf{G}(\mathbb{R})$-conjugate to $\End_{\mathbf{H}_0}(L_\mathbb{R})$, $R_{u,\mathbb{Q}}$ is an $\mathbb{R}$-split algebra with positive involution. Hence whenever $R_{u,\mathbb{Q}}$ is a division algebra, it must be type I or~II in the Albert classification, and $d$ must equal $1$ or~$2$ for \cref{rep-bound-arithmetic} to be non-vacuous. \medskip Let $V = L_\mathbb{Q} = \mathbb{Q}^{2g}$. Let $\phi \colon L \times L \to \mathbb{Z}$ be the standard symplectic form as in section~\ref{subsec:shimura-data}. Define $D_0 = \mathrm{M}_d(\mathbb{Q})^e$, $\iota_0 \colon D_0 \to \mathrm{M}_n(\mathbb{Q})$, $t \colon D_0 \to D_0$ and $\psi_0 \colon V \times V \to D_0$ as in \cref{subsec:shimura-representatives}. By \cref{D0-basis}, we can choose a $D_0$-basis $w_1, \dotsc, w_m$ for $V$ which is either symplectic or unitary depending on the type of~$D_0$. 
Define a symmetric $\mathbb{Q}$-bilinear form $\sigma_0 \colon V \times V \to \mathbb{Q}$ by \[ \sigma_0(x_1w_1 + \dotsb + x_mw_m, y_1w_1 + \dotsb + y_mw_m) = \Trd_{D_0/\mathbb{Q}} \sum_{i=1}^m x_i y_i^t \] for all $x_1, \dotsc, x_m, y_1, \dotsc, y_m \in D_0$. This bilinear form is positive definite because $t$ is a positive involution. In fact, a lengthy calculation shows that $\sigma_0$ is the standard Euclidean inner product on $V = \mathbb{Q}^n$, but we shall not need this fact. \medskip \pagebreak As in the statement of \cref{rep-bound-arithmetic}, let $u \in \mathbf{G}(\mathbb{R})$ be such that $\mathbf{H}_u = u\mathbf{H}_{0,\mathbb{R}}u^{-1}$ is defined over $\mathbb{Q}$ and $V$ is irreducible as a representation of $\mathbf{H}_u$. Let $D = \End_{\mathbf{H}_u}(V)$, which is a division algebra of type~I or~II depending on whether $d=1$ or~$2$. By construction, $V$ is a left $D$-vector space of dimension~$m$. Because $\iota_0(D_0) = \End_{\mathbf{H}_0}(V)$ and $\mathbf{H}_u = u\mathbf{H}_{0,\mathbb{R}}u^{-1}$, we have \[ D = u\iota_0(D_{0,\mathbb{R}})u^{-1} \cap \mathrm{M}_n(\mathbb{Q}). \] Let $\alpha \colon D_{0,\mathbb{R}} \to D_\mathbb{R}$ be the isomorphism of $\mathbb{R}$-algebras \[ \alpha(d) = u \iota_0(d) u^{-1}. \] Let $\dag = \alpha \circ t \circ \alpha^{-1}$, which is a positive involution of $D_\mathbb{R}$. A calculation using the fact that $u \in \mathbf{G}(\mathbb{R}) = \mathbf{GSp}_n(\mathbb{R})$ shows that $\phi$ is $(D_\mathbb{R},\dag)$-compatible, that is, $\dag$ is the adjoint involution of $D_\mathbb{R}$ with respect to~$\phi$. This has two consequences: \begin{enumerate} \item $\dag$ is defined over $\mathbb{Q}$, that is, $\dag$ is an involution of $D$ and not just of $D_\mathbb{R}$. \item There is a non-degenerate $(D,\dag)$-skew-Hermitian form $\psi \colon V \times V \to D$ such that $\phi = \Trd_{D/\mathbb{Q}} \psi$, thanks to \cref{tr-skew-hermitian-form}.
\end{enumerate} We are thus in a position to apply \cref{minkowski-hermitian-perfect} (with $R=R_u=\Stab_D(L)$). Let $v_1, \dotsc, v_m$ be the resulting weakly symplectic or weakly unitary $D$-basis for $V$. Define a $\mathbb{Q}$-bilinear form $\sigma \colon V \times V \to \mathbb{Q}$ by \[ \sigma \Bigl( \sum_{i=1}^m x_i v_i, \sum_{i=1}^m y_i v_i \Bigr) = \Trd_{D/\mathbb{Q}} \sum_{i=1}^m x_i y_i^\dag \] for all $x_1, \dotsc, x_m, y_1, \dotsc, y_m \in D$. \begin{lemma} \label{sigma-integral} The bilinear form $\sigma$ is symmetric and positive definite. It takes integer values on $R_uv_1 + \dotsb + R_uv_m$ and it satisfies \[ \abs{\disc(R_uv_1 + \dotsb + R_uv_m, \sigma)} = d^{-d^2em} \abs{\disc(R_u)}^m. \] \end{lemma} \begin{proof} The form $\sigma$ is symmetric because $\Trd_{D/\mathbb{Q}}(xy^\dag) = \Trd_{D/\mathbb{Q}}(yx^\dag)$ and it is positive definite because $\dag$ is a positive involution of~$D$. For each $a \in R_u$ and $y \in L$, the map \[ x \mapsto \phi(x, a^\dag y) = \phi(ax, y) \] is $\mathbb{Z}$-linear and maps $L$ into $\mathbb{Z}$. Since $\phi$ is a perfect pairing on $L$, this implies that $a^\dag y \in L$ for all $y \in L$. Hence $a^\dag \in \Stab_D(L) = R_u$. Thus if $x_1, \dotsc, x_m, y_1, \dotsc, y_m \in R_u$, then each $x_iy_i^\dag$ is in $R_u$ and so $\Trd_{D/\mathbb{Q}}(x_iy_i^\dag) \in \mathbb{Z}$. Hence $\sigma(\sum x_i v_i, \sum y_i v_i) \in \mathbb{Z}$. For each $i$, the restriction of $\sigma$ to $R_uv_i$ is isometric to the inner product associated with $\abs{\cdot}_D$ on $R_u$. Hence $\abs{\disc(R_uv_i, \sigma)} = d^{-d^2e} \abs{\disc(R_u)}$ and so \[ \abs{\disc(R_uv_1 + \dotsb + R_uv_m, \sigma)} = d^{-d^2em} \abs{\disc(R_u)}^m. 
\qedhere \] \end{proof} \pagebreak \begin{lemma} \label{theta-def} There exists an $\mathbb{R}$-linear map $\theta \colon V_\mathbb{R} \to V_\mathbb{R}$ with the following properties: \begin{enumerate}[(i)] \item $\theta(\alpha(a)x) = \iota_0(a)\theta(x)$ for all $a \in D_{0,\mathbb{R}}$, $x \in V_\mathbb{R}$; \item $\psi = \alpha \circ \theta^* \psi_0$; \item $\sigma_0(\theta(x), \theta(x)) \leq \newC{theta-sigma-multiplier} \abs{\disc(R_u)}^{\newC{theta-sigma-exponent}} \sigma(x,x)$ for all $x \in V_\mathbb{R}$, where the constants depend only on $d$, $e$ and $m$ (and not on $u$). \end{enumerate} \end{lemma} \begin{proof} Use \cref{semi-orthogonal-normalise} to choose $s_1, \dotsc, s_m \in D_\mathbb{R}^\times$ such that $s_1^{-1} v_1, \dotsc, s_m^{-1} v_m$ is a symplectic or $\alpha$-unitary $D_\mathbb{R}$-basis for $V_\mathbb{R}$. Define $\theta \colon V_\mathbb{R} \to V_\mathbb{R}$ by \[ \theta(x_1v_1 + \dotsb + x_mv_m) = \iota_0(\alpha^{-1}(x_1 s_1))w_1 + \dotsb + \iota_0(\alpha^{-1}(x_m s_m))w_m \] for all $x_1, \dotsc, x_m \in D_\mathbb{R}$. Claim~(i) holds because $\alpha \colon D_{0,\mathbb{R}} \to D_\mathbb{R}$ is a ring homomorphism. Claim~(ii) holds because $s_1^{-1} v_1, \dotsc, s_m^{-1} v_m$ is a symplectic or $\alpha$-unitary $D_\mathbb{R}$-basis for $V_\mathbb{R}$ while $w_1, \dotsc, w_m$ is a symplectic or unitary $D_{0,\mathbb{R}}$-basis for $D_{0,\mathbb{R}}^m$. Thus \[ \alpha(\psi_0(\theta(s_i^{-1} v_i), \theta(s_j^{-1} v_j))) = \alpha(\psi_0(w_i, w_j)) = \psi(s_i^{-1} v_i, s_j^{-1} v_j) \text{ for all } i, j. 
\] For claim~(iii): for every $x = x_1v_1 + \dotsb + x_mv_m \in V_\mathbb{R}$, where $x_1, \dotsc, x_m \in D_\mathbb{R}$, \begin{align} \sigma_0(\theta(x), \theta(x)) & = \Trd_{D_{0,\mathbb{R}}/\mathbb{R}} \bigl( \sum_{i=1}^m \alpha^{-1}(x_is_i) \alpha^{-1}(x_is_i)^t \bigr) \notag \\& = \sum_{i=1}^m \Trd_{D_{0,\mathbb{R}}/\mathbb{R}} \bigl( \alpha^{-1}(x_is_i s_i^\dag x_i^\dag) \bigr) \notag \\& = \sum_{i=1}^m \Trd_{D_\mathbb{R}/\mathbb{R}}(x_is_i s_i^\dag x_i^\dag) = \sum_{i=1}^m \abs{x_is_i}_D^2 \notag \\& \leq \sum_{i=1}^m \abs{x_i}_D^2 \abs{s_i}_D^2 \leq \bigl( \max_{i=1,\dotsc,m} \abs{s_i}_D^2 \bigr) \sigma(x, x). \label{eqn:sigma0-sigma-bound} \end{align} Thanks to \cref{semi-orthogonal-normalise} and \cref{minkowski-hermitian-perfect}(iv), we have \[ \max_{i=1, \dotsc, m} \abs{s_i}_D^2 \leq (de)^{1/2} \max_{i,j = 1, \dotsc, m} \abs{\psi(v_i, v_j)}_D \leq \newC* \abs{\disc(R_u)}^{\newC*} \] where the constants depend only on $d$, $e$ and $m$. Combined with \eqref{eqn:sigma0-sigma-bound}, this proves claim~(iii). \end{proof} \begin{lemma} \label{h-in-h0} Let $h = u^{-1} \theta^{-1} \colon V_\mathbb{R} \to V_\mathbb{R}$. Then $uh = \theta^{-1} \in \mathbf{Sp}_n(\mathbb{R})$ and $h \in \mathbf{H}_0(\mathbb{R})$. \end{lemma} \begin{proof} Firstly, $\theta \in \mathbf{Sp}_n(\mathbb{R})$ by the following calculation, which relies on \cref{theta-def}(ii): \begin{align*} \theta^* \phi = \theta^*({\Trd_{D_{0,\mathbb{R}}/\mathbb{R}}} \circ \psi_0) & = \theta^*({\Trd_{D_\mathbb{R}/\mathbb{R}}} \circ \alpha \circ \psi_0) \\& = {\Trd_{D_\mathbb{R}/\mathbb{R}}} \circ \alpha \circ (\theta^* \psi_0) = {\Trd_{D_\mathbb{R}/\mathbb{R}}} \circ \psi = \phi. \end{align*} Since $\mathbf{Sp}_n(\mathbb{R}) \subset \mathbf{GSp}_n(\mathbb{R}) = \mathbf{G}(\mathbb{R})$ and $u \in \mathbf{G}(\mathbb{R})$, it follows that $h \in \mathbf{G}(\mathbb{R})$. By definition, $\mathbf{H}_0 = Z_\mathbf{G}(\iota_0(D_0))$ and so it remains to prove that $h$ commutes with the action of $D_0$ on $V$. 
For $a \in D_{0,\mathbb{R}}$ and $x \in V_\mathbb{R}$, we have \begin{align*} h(\iota_0(a)x) = u^{-1} \theta^{-1} (\iota_0(a)x) & = u^{-1} \alpha(a) \theta^{-1}(x) \\& = \iota_0(a) u^{-1} \theta^{-1}(x) = \iota_0(a)h(x) \end{align*} where we use \cref{theta-def}(i) and the fact that $\alpha(a) = u\iota_0(a)u^{-1}$ (from the definition of $\alpha$). Thus $h$ commutes with every element of $\iota_0(D_0)$. \end{proof} \begin{lemma} \label{z-basis-bound} There exists a $\mathbb{Z}$-basis $\{ e_1', \dotsc, e_n' \}$ for $L$ such that the coordinates of the vectors $\theta(e_1'), \dotsc, \theta(e_n')$ in $V_\mathbb{R} = \mathbb{R}^n$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. \end{lemma} \begin{proof} Let $\lambda_1, \dotsc, \lambda_n$ denote the successive minima of $R_u v_1 + \dotsb + R_u v_m$ with respect to~$\sigma$. By \cref{minkowski-2nd} and \cref{sigma-integral}, we have \[ \lambda_1 \lambda_2 \dotsm \lambda_n \leq \gamma_{d^2em}^{n/2} \covol(R_u v_1 + \dotsb + R_u v_m) \leq \newC{sigma-lambda-multiplier} \abs{\disc(R_u)}^{m} \] where $\refC{sigma-lambda-multiplier}$ depends only on $d$, $e$ and~$m$. For each $i$, $\lambda_i^2 = \sigma(v,v)$ for some $v \in R_u v_1 + \dotsb + R_u v_m$ and so $\lambda_i \geq 1$ by \cref{sigma-integral}. We deduce that, for each $i$, \[ \lambda_i \leq \refC{sigma-lambda-multiplier} \abs{\disc(R_u)}^{m}. \] Let $\lambda_1', \dotsc, \lambda_n'$ denote the successive minima of $L$ with respect to $\sigma$. Since $R_u v_1 + \dotsb + R_u v_m \subset L$, $\lambda_i' \leq \lambda_i$ for each~$i$. By \cite[Theorem~4]{Wey40}, there exists a $\mathbb{Z}$-basis $e_1', \dotsc, e_n'$ for $L$ such that \[ \sqrt{\sigma(e_i', e_i')} \leq \newC{weyl-multiplier} \lambda_i' \] where $\refC{weyl-multiplier}$ depends only on~$n$. Combining the above inequalities, we obtain \begin{equation*} \sigma(e_i', e_i') \leq \newC* \abs{\disc(R_u)}^{2m}.
\end{equation*} Combining this with \cref{theta-def}(iii), we obtain that \[ \sigma_0(\theta(e_i'), \theta(e_i')) \leq \refC{theta-sigma-multiplier} \abs{\disc(R_u)}^{\refC{theta-sigma-exponent}} \sigma(e_i', e_i') \leq \refC{sigma0-multiplier} \abs{\disc(R_u)}^{\refC{sigma0-exponent}} \] for some constants $\newC{sigma0-multiplier}$, $\newC{sigma0-exponent}$ independent of $u \in \mathbf{G}(\mathbb{R})$. Since $\sigma_0$ is a fixed positive definite quadratic form on $V_\mathbb{R}$, this implies that the coordinates of the vectors $\theta(e_1'), \dotsc, \theta(e_n')$ are likewise bounded by a polynomial in $\abs{\disc(R_u)}$. \end{proof} Let $\gamma'$ be the matrix in $\mathbf{GL}_n(\mathbb{Z})$ which maps the vectors $e_1', \dotsc, e_n'$ to the standard basis of $L = \mathbb{Z}^n$. \begin{lemma} \label{entries'-bound} The entries of the matrices $\gamma' uh, (\gamma' uh)^{-1} \in \mathbf{GL}_n(\mathbb{R})$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. \end{lemma} \begin{proof} Let $A = \gamma' uh = \gamma' \theta^{-1} \in \mathbf{GL}_n(\mathbb{R})$. Observe that $A$ maps the vectors $\theta(e_1'), \dotsc, \theta(e_n')$ to the standard basis. In other words, the entries of $A^{-1}$ are the coordinates of $\theta(e_1'), \dotsc, \theta(e_n')$ and so are bounded by \cref{z-basis-bound}. By \cref{h-in-h0}, $\det(uh)=1$, while $\abs{\det(\gamma')} = 1$ since $\gamma' \in \mathbf{GL}_n(\mathbb{Z})$. Hence $\abs{\det(A)}=1$. By Cramer's rule, each entry of $A$ is a fixed polynomial in the entries of $A^{-1}$, multiplied by $\det(A)$. We conclude that the entries of $A$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. \end{proof} We now show that we can modify $\gamma' \in \mathbf{GL}_n(\mathbb{Z})$ to obtain $\gamma \in \mathbf{Sp}_n(\mathbb{Z})$, with a similar bound on $\gamma uh$. This establishes \cref{rep-bound-arithmetic}(b), and we will subsequently use it to prove \cref{rep-bound-arithmetic}(a).
\begin{lemma} \label{entries-bound} There exists $\gamma \in \Gamma = \mathbf{Sp}_n(\mathbb{Z})$ such that the entries of $\gamma uh$ and $(\gamma uh)^{-1}$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. \end{lemma} \begin{proof} Let $e_1, \dotsc, e_n$ denote the standard basis of $L = \mathbb{Z}^n$. According to \cref{h-in-h0}, $uh \in \mathbf{Sp}_n(\mathbb{R})$. Consequently \[ \phi(\gamma'^{-1} e_i, \gamma'^{-1} e_j) = \phi((uh)^{-1}\gamma'^{-1} e_i, \, (uh)^{-1} \gamma'^{-1} e_j) \text{ for all } i, j \in \{ 1, \dotsc, n \}. \] By \cref{entries'-bound}, the entries of $(uh)^{-1} \gamma'^{-1}$ are polynomially bounded in terms of $\abs{\disc(R_u)}$, and hence the same is true of the values $\phi(\gamma'^{-1} e_i, \gamma'^{-1} e_j)$. Hence, by \cite[Lemma~4.3]{Orr15}, there exists a symplectic $\mathbb{Z}$-basis $\{ f_1, \dotsc, f_n \}$ for $(L, \phi)$ whose coordinates with respect to $\{ \gamma'^{-1}e_1, \dotsc, \gamma'^{-1}e_n \}$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. Applying $\gamma'$, we deduce that the coordinates of $\gamma' f_1, \dotsc, \gamma' f_n$ with respect to the standard basis are polynomially bounded. Let $\gamma \in \mathbf{GL}_n(\mathbb{Z})$ be the matrix such that $e_i = \gamma f_i$ for each~$i = 1, \dotsc, n$. Since $\{ f_1, \dotsc, f_n \}$ is a symplectic basis, we have $\gamma \in \Gamma$. We have just shown that the coordinates of $\gamma' f_i = \gamma' \gamma^{-1} e_i$ are polynomially bounded for each~$i$. In other words, the entries of the matrix $\gamma' \gamma^{-1}$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. Multiplying $(\gamma'uh)^{-1}$ by $\gamma' \gamma^{-1}$ and applying \cref{entries'-bound}, we deduce that the entries of $(\gamma uh)^{-1}$ are polynomially bounded in terms of $\abs{\disc(R_u)}$. Thanks to \cref{h-in-h0}, $\abs{\det(\gamma uh)} = 1$, so it follows that the entries of $\gamma uh$ are also polynomially bounded in terms of $\abs{\disc(R_u)}$. 
\end{proof} Let $S_u = \End_{R_u}(L) = uE_{0,\mathbb{R}}u^{-1} \cap \mathrm{M}_n(\mathbb{Z})$, where $E_0$ is defined in \eqref{eqn:E_0}. By \cref{reps-closed-orbits}(v), there exists $d_u \in \mathbb{R}_{>0}$ such that \[ d_u \rho_R(u) \rho_L(u) w_0 \in \Lambda \quad \text{and} \quad d_u \leq \refC{du-multiplier} \abs{\disc(S_u)}^{1/2}. \] In order to prove \cref{rep-bound-arithmetic}(a), we shall use the vector \[ w_u = d_u\rho_R(\gamma u)w_0 \in W_\mathbb{R}. \] Observe first that $d_u\rho_R(\gamma u) \in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R})$ thanks to \cref{reps-closed-orbits}(iii), and that $\rho_L(u)w_u = \rho_R(\gamma) d_u \rho_R(u) \rho_L(u) w_0$ is in $\Lambda$ thanks to \cref{reps-closed-orbits}(iv). Hence $w_u$ satisfies the qualitative conditions of \cref{rep-bound-arithmetic}(a), and it only remains to prove the bound for $\abs{w_u}$. \pagebreak \begin{lemma} \label{length-wu} $\abs{w_u} \leq \newC* \abs{\disc(R_u)}^{\newC*}$. \end{lemma} \begin{proof} According to \cref{reps-closed-orbits}(i), $\mathbf{H}_{d,e,m} = \Stab_{\rho_R(\mathbf{G})}(w_0)$. Therefore \begin{align*} w_u = d_u\rho_R(\gamma u)w_0 = d_u\rho_R(\gamma uh)w_0. \end{align*} The homomorphism $\rho_R \colon \mathbf{G} \to \mathbf{GL}(W)$ is given by fixed polynomials in the entries and the inverse determinant. Since the entries of $\gamma uh$ and $\det(\gamma uh)^{-1}$ are bounded by \cref{entries-bound}, we deduce that the entries of $\rho_R(\gamma uh)$ are likewise polynomially bounded in terms of $\abs{\disc(R_u)}$. Meanwhile, by definition, $d_u$ is polynomially bounded in terms of $\abs{\disc(S_u)}$. By \cref{disc-R-S}, $\abs{\disc(S_u)}$ is polynomially bounded in terms of $\abs{\disc(R_u)}$. We conclude that $\abs{w_u}$ is polynomially bounded in terms of $\abs{\disc(R_u)}$, as required. \end{proof} \section{Cases of Zilber--Pink}\label{cases-of-ZP} In this section, we prove Theorems \ref{main-theorem-zp} and \ref{unconditional}.
\subsection{Proof of Theorem \ref{main-theorem-zp}} Instead of proving \cref{main-theorem-zp}, we will prove the following, more general theorem. (Recall that, by \cref{codim-pel}, for $g\geq 3$, all proper special subvarieties of PEL type of $\mathcal{A}_g$ have codimension at least $2$.) \begin{theorem}\label{ZP-end} Let $g\geq 3$ and let $C$ be an irreducible algebraic curve in $\mathcal{A}_g$. Let $S$ denote the smallest special subvariety of $\mathcal{A}_g$ containing $C$. Let $\Theta$ denote the set of special subvarieties of $\mathcal{A}_g$ of simple PEL type I or~II of dimension at most $\dim(S)-2$. Let $\Sigma$ denote the set of points in $\mathcal{A}_g(\mathbb{C})$ that are endomorphism generic in some $Z\in\Theta$. If $C$ satisfies Conjecture \ref{LGO-general}, then $C\cap\Sigma$ is finite. \end{theorem} Conjecture \ref{LGO-general} is the natural generalisation of Conjecture \ref{galois-orbits}. \begin{conjecture}\label{LGO-general} Let $C$ and $\Sigma$ be as in Theorem \ref{ZP-end} and let $L$ be a finitely generated subfield of $\mathbb{C}$ over which $C$ is defined. Then there exist positive constants $\newC{ZP-end-mult}$ and $\newC{ZP-end-exp}$ such that \begin{align*} \#\Aut(\mathbb{C}/L)\cdot s\geq\refC{ZP-end-mult}|\disc(\End(A_s))|^{\refC{ZP-end-exp}} \end{align*} for all $s\in C\cap\Sigma$. \end{conjecture} The proof closely follows \cite[sec.~6]{QRTUI}. We refer to notation and terminology from \cite[sec. 2.2 and 2.4]{Orr18}. Let $L=\mathbb{Z}^{2g}$ and let $\phi:L\times L\to\mathbb{Z}$ be the standard symplectic form as in section~\ref{subsec:shimura-data}. Let $\mathbf{G}=\mathbf{GSp}(L_\mathbb{Q}, \phi)=\mathbf{GSp}_{2g}$ and let $\Gamma=\mathbf{Sp}_{2g}(\mathbb{Z})$. Define $h_0 \colon \mathbb{S}\to\mathbf{G}_\mathbb{R}$ as in \eqref{eqn:h0} and let $X^+$ denote the $\mathbf{G}(\mathbb{R})$-conjugacy class of $h_0$ in $\Hom(\mathbb{S}, \mathbf{G}_\mathbb{R})$.
Then $(\mathbf{G}, X^+)$ is a Shimura datum component and so $\Stab_{\mathbf{G}(\mathbb{R})}(h_0) = \mathbb{R}^\times K^+_\infty$ where $K^+_\infty$ is a maximal compact subgroup of $\mathbf{G}(\mathbb{R})^+$ \cite[chapter~6]{Mil05}. \pagebreak Let $(\mathbf{P}, \mathbf{S}, K_\infty)$ be a Siegel triple for $\mathbf{G}$, as defined in \cite[sec.~2B]{Orr18}, where $K_\infty$ is a maximal compact subgroup of $\mathbf{G}(\mathbb{R})$ such that $K^+_\infty=\mathbf{G}(\mathbb{R})^+\cap K_\infty$. By the results of Borel quoted in \cite[sec.~2D]{Orr18}, there exists a Siegel set $\mathfrak{S} \subset \mathbf{G}(\mathbb{R})$ with respect to $(\mathbf{P}, \mathbf{S}, K_\infty)$ and a finite set $C_\mathbf{G}\subset\mathbf{G}(\mathbb{Q})$ such that $\mathcal{F}_{\mathbf{G}}=C_\mathbf{G}\mathfrak{S}$ is a fundamental set for $\Gamma$ in $\mathbf{G}(\mathbb{R})$. Let $\mathcal{F} = (\mathcal{F}_\mathbf{G} \cap \mathbf{G}(\mathbb{R})^+) h_0$. Since $\Gamma \subset \mathbf{G}(\mathbb{R})^+$, $\mathcal{F}$ is a fundamental set in $X^+$ for~$\Gamma$. If we denote by $\pi:X^+\to\mathcal{A}_g$ the uniformising map, then $\pi|_{\mathcal{F}}$ is definable in the o-minimal structure $\mathbb{R}_{\rm an,exp}$ (see \cite{PS10} for the original result and \cite{kuy:ax-lindemann} for a formulation in notations more similar to ours). As explained in section \ref{subsec:proof-strategy-high-level}, $\Sigma$ is the union of sets $\Sigma_{d,e,m}$, where $d$, $e$, $m$ are positive integers satisfying $d^2em=2g$, $d=1$ or $2$ and $dm$ is even. Since there are only finitely many choices for such $d$, $e$, $m$ (given~$g$), in order to prove \cref{ZP-end}, it suffices to prove that $C \cap \Sigma_{d,e,m}$ is finite for each $d$, $e$, $m$. From now on, we fix such integers $d$, $e$ and~$m$. Let $\mathbf{H}_0 \subset \mathbf{G}$ be the group defined in \eqref{eqn:H0} associated with these parameters. 
Let $X_0^+ = \mathbf{H}_0(\mathbb{R})^+h_0$, so that $(\mathbf{H}_0, X_0^+)$ is the unique Shimura subdatum of $(\mathbf{G}, X^+)$ given by \cref{unique-datum}. By \cref{reps-closed-orbits,rep-bound-arithmetic}, there exists a finitely generated, free $\mathbb{Z}$--module $\Lambda$, a representation $\rho_L:\mathbf{G}\to\mathbf{GL}(\Lambda_\mathbb{Q})$ such that $\Lambda$ is stabilised by $\rho_L(\Gamma)$, a vector $w_0\in\Lambda$ and positive constants $\refC{rep-multiplier}$ and $\refC{rep-exponent}$ such that: \begin{enumerate}[(i)] \item $\Stab_{\mathbf{G},\rho_L}(w_0) = \mathbf{H}_0$; \item the orbit $\rho_L(\mathbf{G}(\mathbb{R}))w_0$ is closed in $\Lambda_\mathbb{R}$; \item for each $u \in \mathbf{G}(\mathbb{R})$, if the group $\mathbf{H}_u = u \mathbf{H}_{0,\mathbb{R}} u^{-1}$ is defined over~$\mathbb{Q}$ and $L_\mathbb{Q}$ is irreducible as a representation of $\mathbf{H}_u$ over $\mathbb{Q}$, then there exists $w_u \in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R}) w_0$ such that $\rho_L(u) w_u \in \Lambda$ and \[ \abs{w_u} \leq \refC{rep-multiplier} \abs{\disc(R_u)}^{\refC{rep-exponent}}, \] where $R_u$ denotes the ring $\End_{\mathbf{H}_u}(L) \subset \mathrm{M}_{2g}(\mathbb{Z})$. \end{enumerate} By \cite[Theorem 1.2]{QRTUI}, there exist positive constants $\newC{QRTUI-multiplier}$ and $\newC{QRTUI-exponent}$ with the following property: for every $u\in\mathbf{G}(\mathbb{R})$ and $w_u\in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R}) w_0$ such that $\mathbf{H}_u = u \mathbf{H}_{0,\mathbb{R}} u^{-1}$ is defined over~$\mathbb{Q}$ and $\rho_L(u) w_u \in \Lambda$, there exists a fundamental set for $\Gamma\cap\mathbf{H}_u(\mathbb{R})$ in $\mathbf{H}_u(\mathbb{R})$ of the form \[ B_u\mathcal{F}_\mathbf{G} u^{-1}\cap\mathbf{H}_u(\mathbb{R}),\] where $B_u\subset\Gamma$ is a finite set such that \[\abs{\rho_L(b^{-1}u)w_u} \leq \refC{QRTUI-multiplier} \abs{w_u}^{\refC{QRTUI-exponent}}\] for every $b\in B_u$. 
\medskip For any $w\in\Lambda_\mathbb{R}$, we write $\mathbf{G}(w)$ for the real algebraic group $\Stab_{\mathbf{G}_\mathbb{R},\rho_L}(w)$. Fixing a basis for $\Lambda$, we may refer to the height $\mathrm{H}(w)$ of any $w\in\Lambda$ (namely, the maximum of the absolute values of its coordinates with respect to this basis.) \begin{lemma} \label{z-w} Let $P\in\Sigma_{d,e,m}$. There exists $z\in \pi^{-1}(P)\cap\mathcal{F}$ and \[w\in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R})\rho_L(\mathbf{G}(\mathbb{R})^+)w_0\cap\Lambda\] such that $z(\mathbb{S})\subset\mathbf{G}(w)$ and \[\mathrm{H}(w)\leq \refC{QRTUI-multiplier}\refC{rep-multiplier}^{\refC{QRTUI-exponent}} \abs{\disc(R)}^{\refC{rep-exponent}\refC{QRTUI-exponent}},\] where $R=\End(A_P)\cong\End_{\mathbf{G}(w)}(L)\subset \mathrm{M}_{2g}(\mathbb{Z})$. \end{lemma} \begin{proof} Let $z'\in \pi^{-1}(P)\cap\mathcal{F}$. Since $P\in\Sigma_{d,e,m}$, it is an endomorphism generic point of a special subvariety $S \subset \mathcal{A}_g$ of simple PEL type I or~II with parameters $d,e,m$. Therefore, there is a Shimura subdatum component $(\mathbf{H}, Y^+) \subset (\mathbf{G}, X^+)$ of simple PEL type I or~II such that $\pi(Y^+) = S$ and $z' \in Y^+$. (In particular, $z'(\mathbb{S})\subset\mathbf{H}_{\mathbb{R}}$.) By \cref{conj-class-mt}, $\mathbf{H}_{\mathbb{R}} = u\mathbf{H}_{0,\mathbb{R}}u^{-1}$ for some $u\in\mathbf{G}(\mathbb{R})^+$, and so we write $\mathbf{H}_u=\mathbf{H}$. By \cref{conj-class-datum}, \[Y^+=uX^+_0=u\mathbf{H}_0(\mathbb{R})^+h_0=\mathbf{H}_u(\mathbb{R})^+uh_0. \] Let $R_u = \End_{\mathbf{H}_u}(L)$. Since $\mathbf{H}_u$ is the general Lefschetz group of $S$, $R_u$ is the generic endomorphism ring of~$S$ and, hence, isomorphic to $\End(A_P)$. Since $S$ is a special subvariety of simple PEL type, $R_{u,\mathbb{Q}}$ is a division algebra. Hence $L_\mathbb{Q}$ is irreducible as a representation of~$\mathbf{H}_u$. 
By \cref{rep-bound-arithmetic}(a), there exists $w_u \in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R}) w_0$ such that \[ \rho_L(u) w_u \in \Lambda \quad \text{ and } \quad \abs{w_u} \leq \refC{rep-multiplier} \abs{\disc(R_u)}^{\refC{rep-exponent}}. \] Hence, by \cite[Theorem 1.2]{QRTUI}, there exists a fundamental set for $\Gamma\cap\mathbf{H}_u(\mathbb{R})$ in $\mathbf{H}_u(\mathbb{R})$ of the form \[ \mathcal{F}_u=B_u\mathcal{F}_\mathbf{G} u^{-1}\cap\mathbf{H}_u(\mathbb{R}),\] where $B_u\subset\Gamma$ is a finite set such that \[\abs{\rho_L(b^{-1}u)w_u} \leq \refC{QRTUI-multiplier} \abs{w_u}^{\refC{QRTUI-exponent}}\] for every $b\in B_u$. Therefore, we can write $z' \in \mathbf{H}_u(\mathbb{R})^+ uh_0$ as \[ z' = \gamma b f u^{-1}\cdot uh_0 \] for some $\gamma \in \Gamma \cap \mathbf{H}_u(\mathbb{R})$, $b \in B_u$, and $f \in \mathcal{F}_\mathbf{G}$. Let \[ z = b^{-1}\gamma^{-1}z' = fh_0 \in \mathcal{F}_\mathbf{G} h_0\cap X^+=\mathcal{F}, \] where the last equality uses the fact that $\Stab_{\mathbf{G}(\mathbb{R})}(h_0)\subset\mathbf{G}(\mathbb{R})^+$. Since $b, \gamma \in \Gamma$, we obtain $z \in \pi^{-1}(P) \cap \mathcal{F}$. Let $w=\rho_L(b^{-1}u)w_u$. As in \cite[Proposition 6.3]{QRTUI}, we can show that $z(\mathbb{S}) \subset \mathbf{G}(w)$ and that $\mathbf{G}(w)$ is a $\Gamma$-conjugate of $\mathbf{H}_u$, so $R_u \cong \End_{\mathbf{G}(w)}(L)$. Consequently, $z$ and $w$ satisfy the requirements of the lemma. \end{proof} \begin{corollary} Let $b\in\mathbb{R}$. The points $P\in\Sigma$ such that $\abs{\disc(\End(A_P))} \leq b$ belong to finitely many proper special subvarieties of simple PEL type I or II. \end{corollary} \begin{proof} The proof is essentially the same as \cite[Corollary 6.4]{QRTUI}. \end{proof} The proof of Theorem \ref{ZP-end} now proceeds as in \cite[sec.~6.5]{QRTUI} with some modifications, which we outline below (following the notation from \cite[sec.~6.5]{QRTUI} \textit{mutatis mutandis}). 
\begin{enumerate}[(1)] \item The argument is carried out inside $X^+ \cong \mathcal{H}_g$ instead of $\mathcal{H}_2$. \item If $P \in \Sigma_{d,e,m}$ and $\sigma \in \Aut(\mathbb{C})$, then $\End(A_{\sigma(P)}) \cong \End(A_P)$. Therefore, if $P$ is endomorphism generic in the special subvariety $Z \in \Theta$, then $\sigma(P)$ is endomorphism generic in the special subvariety $\sigma(Z)$, which is also of simple PEL type I or~II with the same parameters $d,e,m$. Furthermore, $\dim(\sigma(Z)) = \dim(Z)$ so $\sigma(Z) \in \Theta$ and $\sigma(P)$ is also in $\Sigma_{d,e,m}$. \item In the definition of the definable set~$D$, we replace $\mathbf{G}(\mathbb{R})$ with $\mathbf{G}(\mathbb{R})^+$. That is, $w\in\Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R})\rho_L(\mathbf{G}(\mathbb{R})^+)w_0$, as in \cref{z-w}. Then \[g_t \in \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{R}) \rho_L(\mathbf{G}(\mathbb{R})^+)\] for all $t$. So $g_t^{-1}z_t$ is in the same connected component of $X$ as $z_t \in \mathcal{C} \subset X^+$. We conclude that $g_t^{-1}z_t$ lies on the unique pre-special subvariety of $X^+\cong\mathcal{H}_g$ associated with $\mathbf{H}_0$, namely, $X^+_0$ (see \cref{unique-datum}). \item By the inverse Ax--Lindemann conjecture, the smallest algebraic subset of $X^+$ containing $\tilde{C}$ is an irreducible component of $\pi^{-1}(S)$, which we call~$\tilde{S}$. \item The morphism $\cdot : \tilde{B}\times (X^+)^\vee\to (X^+)^\vee\cong\mathcal{H}_g^\vee$ (which is used but not defined in the penultimate paragraph of \cite[sec.~6.5]{QRTUI}), is given by \begin{align*} (a\rho_L(g),x)\mapsto g\cdot x, \end{align*} for each $a\in\Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{C})$ and $\rho_L(g)\in\rho_L(\mathbf{G}(\mathbb{C}))$. 
This is well-defined since \begin{align*} \Aut_{\rho_L(\mathbf{G})}(\Lambda_\mathbb{C})\cap\rho_L(\mathbf{G}(\mathbb{C}))\subset\rho_L(Z(\mathbf{G})(\mathbb{C}))\text{ and }\ker(\rho_L)\subset Z(\mathbf{G}), \end{align*} and $Z(\mathbf{G})$, the centre of $\mathbf{G}$, acts trivially on $(X^+)^\vee$. \item In the final step, we conclude that $\tilde{B}\cdot(X^+_0)^\vee$ has uncountable intersection with $\tilde{C}$ and, hence, contains it. Therefore, $\tilde{S}$ is contained in $\tilde{B}\cdot(X^+_0)^\vee$, but \[ \dim(\tilde{B} \cdot (X^+_0)^\vee) \leq 1 + \dim(X_0^+) \leq \dim(S) - 1, \] delivering the contradiction. \end{enumerate} \subsection{Proof of Theorem~\ref{unconditional}} If $C$ is an algebraic curve over a number field, and $\mathfrak{A}\to C$ is an abelian scheme of even relative dimension $g$, we say that $s\in C(\overline \bQ)$ is an \defterm{exceptional quaternionic point} if $\End(\mathfrak{A}_s)\otimes_\mathbb{Z}\mathbb{Q}$ is a non-split totally indefinite quaternion algebra over a totally real field of degree $e$ such that $4e$ does not divide~$g$. Note that these are precisely the points for which: \begin{enumerate}[(i)] \item $\mathfrak{A}_s$ is simple and $D := \End(\mathfrak{A}_s) \otimes \mathbb{Q}$ has type I or~II; and \item $\mathfrak{A}_s$ is exceptional in the sense of \cite[Definition~8.1]{ExCM}, that is, $D$ is not isomorphic to a subring of $\mathrm{M}_g(\mathbb{Q})$. \end{enumerate} Indeed, if $\mathfrak{A}_s$ is simple, then $D$ is a division algebra and hence embeds into $\mathrm{M}_g(\mathbb{Q})$ if and only if $\dim_\mathbb{Q}(D)$ divides~$g$. If $D$ has type~I, then $\dim_\mathbb{Q}(D)$ always divides~$g$, while if $D$ has type~II, then $\dim_\mathbb{Q}(D) = 4e$. In order to prove \cref{unconditional}, it suffices to prove the following theorem, by the same argument as in \cite[sec.~6.7]{QRTUI}. This theorem is a direct generalisation of \cite[Theorem~6.5]{QRTUI}. 
Note that the image of $C \to \mathcal{A}_g$ is Hodge generic if and only if the generic Mumford--Tate group of the abelian scheme $\mathfrak{A} \to C$ is $\mathbf{GSp}_{2g,\mathbb{Q}}$. \begin{theorem}\label{EQ-scheme} Let $C$ be an irreducible algebraic curve and let $\mathfrak{A}\to C$ be a principally polarised non-isotrivial abelian scheme of even relative dimension $g$ such that the image of the morphism $C\to\mathcal{A}_g$ induced by $\mathfrak{A}$ is Hodge generic. Suppose that $C$ and $\mathfrak{A}$ are defined over a number field $L$ and that there exists a smooth curve \( C' \), a semiabelian scheme \( \mathfrak{A}' \to C' \) and an open immersion \( \iota \colon C \to C' \), all defined over \( \overline \bQ \), such that \( \mathfrak{A} \cong \iota^* \mathfrak{A}' \) and, for some point \( s_0 \in C'(\overline\mathbb{Q}) \setminus C(\overline\mathbb{Q}) \), the fibre \( \mathfrak{A}'_{s_0} \) is a torus. Then there exist positive constants $\newC{GOEQ-scheme-mult}$ and $\newC{GOEQ-scheme-exp}$ such that, for any exceptional quaternionic point $s\in C$, \[\#\Aut(\mathbb{C}/L)\cdot s\geq\refC{GOEQ-scheme-mult}\abs{\disc(\End(\mathfrak{A}_s))}^{\refC{GOEQ-scheme-exp}}.\] \end{theorem} \begin{proof} After replacing $L$ by a finite extension, we may assume that \( C' \), \( \mathfrak{A}' \to C' \) and \( \iota \colon C \to C' \) are all defined over $L$. After replacing $C'$ by its normalisation and $\mathfrak{A}'$ by its pullback to this normalisation, we may assume that $C'$ is smooth. (Note that this step, which is required in order to apply \cite[Theorem~8.2]{ExCM}, was erroneously omitted in the proofs of \cite[Prop.~9.2]{ExCM} and \cite[Theorem~6.5]{QRTUI}.) Observe that \( \mathfrak{A} \to C \) satisfies the conditions of \cite[Theorem 8.2]{ExCM}. Let $s\in C$ be an exceptional quaternionic point.
The image of $s$ under the map $C\rightarrow\mathcal{A}_g$ induced by $\mathfrak{A}\to C$ is in the intersection between the image of $C$ and a proper special subvariety of PEL type. Since $C$ is a curve defined over $\overline \bQ$ and special subvarieties of $\mathcal{A}_g$ are defined over $\overline \bQ$, it follows that $s\in C(\overline \bQ)$. The remainder of the proof proceeds as in the proof of \cite[Theorem 6.5]{QRTUI}. \end{proof}
\section{Introduction} \input{text/introduction} \section{Methods Included} \input{text/methods} \section{Architecture} \input{text/architecture} \section{Benchmarks} \input{text/benchmarks} \section*{Acknowledgment} This work is supported by the Research Experiences for Undergraduates (REU) funding of an NSF grant (IIS/CPS-1652038) and the Fulton Undergraduate Research Initiative (FURI) program at Arizona State University. Some of the NVIDIA GPUs used for this work were donated by NVIDIA Corporation. The CPU servers used for this work were donated by Intel Corporation. \bibliographystyle{IEEEtran} \subsection{Toolbox Structure} Two programming languages are used to implement the methods. L1, NLR-CS, TVAL-3 and D-AMP are implemented in Matlab. ReconNet, ISTA-Net and LAPRAN are implemented in Python with PyTorch\cite{paszke2019pytorch}. CSGM, CSGAN and LDAMP are implemented in Python with TensorFlow\cite{abadi2016tensorflow}. We provide a unified interface to run all the methods. Specifically, the common parameters of all methods are listed as follows: \begin{enumerate} \item dataset: the name of the dataset to be used \item input\_channel: number of channels of the training/testing images \item input\_width: width of the training/testing images \item input\_height: height of the training/testing images \item m: number of measurements/outputs of the sensing matrix \item n: number of inputs to the sensing matrix \end{enumerate} In addition, method-specific parameters are included in a container-like object called ``specifics''. In Python, it is a dictionary whose keys are parameter names and whose values are the actual parameters. In Matlab, it is a structure array whose field names are parameter names and whose field values are the actual parameters. We also provide the functionality to directly call certain methods from the main interface.
The parameters of the main interface are listed below: \begin{enumerate} \item sensing: method of sensing \item reconstruction: method of reconstruction \item stage: training or testing (model-based methods do not have this parameter) \item default: if true, the default parameters are used, overriding any parameters set manually \item dataset: same as the method's corresponding parameter \item input\_channel: same as the method's corresponding parameter \item input\_width: same as the method's corresponding parameter \item input\_height: same as the method's corresponding parameter \item m: same as the method's corresponding parameter \item n: same as the method's corresponding parameter \item specifics: specific parameter settings of the chosen reconstruction method, passed through to the actual method \end{enumerate} Given an image to be sensed and reconstructed, model-based methods can reconstruct it directly through a call to the main function of the specific method. For data-driven methods, the networks have to be trained first. We provide pre-trained networks for each method at five compression ratios (2, 4, 8, 16, 32) on six datasets, and we also provide the functionality to train new networks from scratch on new datasets and compression ratios. More details on how to use our code are given on the main page of our GitHub repository (\url{https://github.com/PSCLab-ASU/OpenICS}).
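To make the interface concrete, the sketch below assembles the parameter set for a hypothetical run. The top-level parameter names follow the lists above; the concrete values, and the keys inside ``specifics'' (\texttt{epochs}, \texttt{learning\_rate}), are illustrative assumptions only — consult the repository for the actual entry point and per-method parameters.

```python
# Hypothetical method-specific settings; the keys here are assumptions,
# since each reconstruction method defines its own "specifics".
specifics = {
    "epochs": 100,
    "learning_rate": 1e-4,
}

# Common parameters of the main interface, as listed above.
params = {
    "sensing": "sensing_method",   # method of sensing (placeholder name)
    "reconstruction": "ReconNet",  # method of reconstruction
    "stage": "testing",            # training or testing
    "default": False,              # do not override with default parameters
    "dataset": "mnist",
    "input_channel": 1,            # MNIST images are grayscale
    "input_width": 32,
    "input_height": 32,
    "n": 32 * 32,                  # inputs to the sensing matrix
    "m": (32 * 32) // 4,           # measurements, i.e. compression ratio 4
    "specifics": specifics,        # passed through to the chosen method
}
```

Note that the compression ratio is implied by the pair (m, n): here $n/m = 4$.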
The training set of Bigset was composed of 91 images from \cite{bigsettrain} and 200 images from the BSD\cite{bsd} dataset. The 291 images are augmented (rotation and flip) and cut into 228688 patches as training samples. The testing set of Bigset consists of image patches of the same size taken from Set5\cite{set5} and Set14\cite{set14}. For MNIST, CIFAR10 and CIFAR10 (gray), the sample image size is 32x32. For CELEBA, Bigset (gray) and Bigset, the sample image size is 64x64. \textbf{Compression ratios.} We use five different compression ratios to evaluate each method: 2, 4, 8, 16 and 32. The compression is always performed channel-wise, i.e., for colored images (with RGB color channels), we compress each channel separately. The measurements of all three channels are then grouped together for subsequent reconstruction. The training procedure of each data-driven method closely follows the original training guidelines provided by its authors; any discrepancies are detailed in our GitHub repository. \textbf{Metrics.} We evaluate all the methods from two aspects: reconstruction accuracy and reconstruction speed. Reconstruction accuracy is quantified with two metrics: PSNR (0-48) and SSIM (0-1) between the reconstructed images and the original images in the testing set. Higher values indicate higher accuracy. For each experiment we conduct, the reported results are the values of both metrics averaged over all samples of the corresponding testing set. Reconstruction speed is quantified as the number of images reconstructed per second, likewise averaged over all samples in the testing set.
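The channel-wise compression described above can be sketched as follows. This is a minimal illustration only: the random Gaussian sensing matrix is an assumption (each method defines its own sensing operator), and a random array stands in for a real RGB image.

```python
import random

random.seed(0)
h = w = 32                 # CIFAR10-sized sample
n = h * w                  # inputs to the sensing matrix, per channel
cr = 4                     # compression ratio
m = n // cr                # measurements per channel

# Assumed random sensing matrix, for illustration only.
Phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

# A random stand-in for an RGB image, one flattened vector per channel.
image = [[random.random() for _ in range(n)] for _ in range(3)]

def sense(channel):
    """Apply the sensing matrix to one flattened colour channel."""
    return [sum(p * x for p, x in zip(row, channel)) for row in Phi]

# Each colour channel is compressed separately, and the measurements of
# all three channels are grouped together for subsequent reconstruction.
measurements = [sense(image[c]) for c in range(3)]
assert len(measurements) == 3 and len(measurements[0]) == m
```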
\textbf{Benchmark calculation.} After obtaining all the raw benchmark results of each method (a total of $6 \text{\ datasets} \times 5 \text{\ compression ratios} \times 3 \text{\ metrics} = 90 \text{\ raw results}$), we use the following equation to calculate the final benchmark score: \begin{equation} score=\sum_{i=1}^{90}w_{dataset}\cdot w_{cr}\cdot w_{metric}\cdot\bar{v}_i \end{equation} $\bar{v}_i$ is the $i$th normalized raw experiment result. Due to the different value ranges of the metrics, we normalize the raw values to the 0-100 range to avoid the dominance of one metric over the others. The function used to normalize PSNR values is $\bar{v}=10^{\frac{v}{48}-1}\cdot 100$. The function used to normalize SSIM values is $\bar{v}=10^{v-1}\cdot 100$. The function used to normalize reconstruction speed values is $\bar{v}=\frac{100}{1+1/\log(1+v)}$. $w_{dataset}$ is the weight of the dataset corresponding to $\bar{v}_i$. Different weights are assigned to different datasets according to their relative reconstruction complexity, which is determined based on the results reported in the literature\cite{l1,tval3,nlrcs,damp,reconnet,istanet,ldamp,lapran,csgm,csgan} on image compressive sensing. $w_{cr}$ is the weight of the compression ratio corresponding to $\bar{v}_i$. Since images compressed at higher compression ratios are more difficult to reconstruct, we assign higher weights to higher compression ratios. $w_{metric}$ is the weight of the metric corresponding to $\bar{v}_i$. We weight the metrics as $\text{PSNR}:\text{SSIM}:\text{Speed}=1:1:2$, so that there is no bias between reconstruction accuracy and reconstruction speed. One can specify one's own metric weights so that the score reflects one's own preferences. The actual weights are listed in Tables~\ref{wdataset}, \ref{wcr} and~\ref{wmetric}.
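The score computation above can be transcribed directly into code. This is a sketch: we assume the natural logarithm in the speed normalization (the text does not state the base), and the weights are taken from Tables~\ref{wdataset}, \ref{wcr} and~\ref{wmetric}.

```python
import math

# Normalisation functions mapping each raw metric to the 0-100 range,
# transcribed from the formulas in the text (natural log assumed).
def norm_psnr(v):   # PSNR in [0, 48]
    return 10 ** (v / 48 - 1) * 100

def norm_ssim(v):   # SSIM in [0, 1]
    return 10 ** (v - 1) * 100

def norm_speed(v):  # images reconstructed per second (v > 0)
    return 100 / (1 + 1 / math.log(1 + v))

# Weights from the dataset, compression-ratio and metric weight tables.
w_dataset = {"MNIST": 1/21, "CelebA": 4/21, "CIFAR10": 3/21,
             "CIFAR10 Gray": 2/21, "Bigset": 6/21, "Bigset Gray": 5/21}
w_cr = {2: 1/31, 4: 2/31, 8: 4/31, 16: 8/31, 32: 16/31}
w_metric = {"psnr": 1/4, "ssim": 1/4, "speed": 1/2}
norm = {"psnr": norm_psnr, "ssim": norm_ssim, "speed": norm_speed}

def benchmark_score(results):
    """results maps (dataset, compression ratio, metric) -> raw value."""
    return sum(w_dataset[d] * w_cr[c] * w_metric[k] * norm[k](v)
               for (d, c, k), v in results.items())
```

Since each set of weights sums to 1, a method whose 90 normalized results were all 100 would score exactly 100.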
\begin{table}[] \centering \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{\textbf{Dataset}} & \multicolumn{1}{c|}{\textbf{Weight}} \\ \hline MNIST & 1/21 \\ \hline CelebA & 4/21 \\ \hline CIFAR10 & 3/21 \\ \hline CIFAR10 Gray & 2/21 \\ \hline Bigset & 6/21 \\ \hline Bigset Gray & 5/21 \\ \hline \end{tabular} \vspace{4mm} \caption{Weights of datasets} \label{wdataset} \end{table} \begin{table}[] \centering \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{\textbf{Compression ratio}} & \multicolumn{1}{c|}{\textbf{Weight}} \\ \hline 2 & 1/31 \\ \hline 4 & 2/31 \\ \hline 8 & 4/31 \\ \hline 16 & 8/31 \\ \hline 32 & 16/31 \\ \hline \end{tabular} \vspace{4mm} \caption{Weights of compression ratios} \label{wcr} \end{table} \begin{table}[] \centering \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{\textbf{Metric}} & \multicolumn{1}{c|}{\textbf{Weight}} \\ \hline PSNR & 1/4 \\ \hline SSIM & 1/4 \\ \hline Reconstruction speed & 1/2 \\ \hline \end{tabular} \vspace{4mm} \caption{Weights of metrics} \label{wmetric} \end{table} \subsection{Benchmark Results} The raw benchmark results are listed in Tables~\ref{ldampresults}, \ref{istanetresults}, \ref{csganresults}, \ref{lapranresults}, \ref{csgmresults}, \ref{reconnetresults}, \ref{tval3results}, \ref{l1results}, \ref{dampresults} and~\ref{nlrcsresults} in the appendix. The benchmark score of each method is shown in Table~\ref{benchmark} and Fig.~\ref{histgram}.
\begin{table}[] \centering \begin{tabular}{|l|r|r|r|} \hline \multicolumn{1}{|c|}{Method} & \multicolumn{1}{c|}{Speed} & \multicolumn{1}{c|}{Accuracy} & \multicolumn{1}{c|}{Score} \\ \hline LDAMP & 30.25 & 17.21 & 47.46 \\ \hline ISTA-Net & 30.02 & 20.69 & 50.71 \\ \hline CSGAN & 32.58 & 19.03 & 51.61 \\ \hline LAPRAN & 34.69 & 23.60 & 58.30 \\ \hline CSGM & 4.75 & 13.07 & 17.82 \\ \hline ReconNet & 37.00 & 19.15 & 56.15 \\ \hline TVAL-3 & 18.43 & 18.92 & 37.35 \\ \hline L1 & 3.78 & 19.69 & 23.46 \\ \hline D-AMP & 2.35 & 21.83 & 24.19 \\ \hline NLR-CS & 1.69 & 20.35 & 22.04 \\ \hline \end{tabular} \vspace{4mm} \caption{The benchmark scores} \label{benchmark} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{fig/chart.pdf} \caption{The benchmark scores} \label{histgram} \end{figure} LAPRAN has the highest benchmark score due to its strong performance in both accuracy and speed. LDAMP has the highest accuracy but poor speed due to its heavyweight network design (more than 200 neural layers). ReconNet has the highest reconstruction speed due to its lightweight structure (only seven layers), but its accuracy is limited as well. In general, model-based methods perform worse on both accuracy and speed than data-driven methods due to their static, pre-defined signal priors and iterative running process. CSGM is a special case among the data-driven methods: its unsatisfying accuracy is due to the GAN model it uses, DCGAN\cite{dcgan}, proposed in 2015. Over the past few years, more capable GAN models have been proposed, such as StyleGAN\cite{stylegan}, which models signals from data far better and may improve the performance of CSGM if adopted.
Among the model-based methods, NLR-CS and D-AMP have higher reconstruction accuracy but lower reconstruction speed than the other two methods. To conclude, data-driven methods in general achieve the highest performance in terms of both accuracy and speed. With enough training data and hardware platforms that have sufficient computation capacity, one should always choose end-to-end data-driven methods. If sufficient data is not available, one should choose the model-based method with the highest reconstruction accuracy. If reconstruction speed is a critical factor as well, TVAL-3 has a significantly higher reconstruction speed than the other model-based methods and comparable reconstruction accuracy. \subsection{Model-based Methods} Model-based methods use pre-defined models based on prior knowledge of the signals to perform the reconstruction. The included model-based methods are listed and summarized in Table~\ref{methodlist}. \textbf{L1\cite{l1}}: Among the first reconstruction methods in the domain of compressive sensing (only the total-variation-based methods are currently implemented). \textbf{NLR-CS\cite{nlrcs}}: A reconstruction method based on non-local low-rank regularization. \textbf{TVAL-3\cite{tval3}}: An efficient image reconstruction method based on total variation minimization. \textbf{D-AMP\cite{damp}}: A reconstruction method based on model-based image denoising algorithms. \subsection{Data-driven Methods} Data-driven methods do not rely on pre-defined models of signals. Instead, they use neural networks to model the images and perform the reconstruction tasks. The included data-driven methods are listed below. \textbf{ReconNet\cite{reconnet}}: An end-to-end reconstruction network based on convolutional neural networks. 
\textbf{LDAMP\cite{ldamp}}: An end-to-end reconstruction network built from the unrolled iterative image denoising process by replacing the model-based image denoisers with neural-network-based denoisers. \textbf{ISTA-Net\cite{istanet}}: An end-to-end reconstruction network built by unrolling the conventional iterative shrinkage-thresholding algorithm. \textbf{LAPRAN\cite{lapran}}: An end-to-end reconstruction network based on deep Laplacian pyramid neural networks. \textbf{CSGM\cite{csgm}}: An iterative reconstruction method based on generative adversarial networks. \textbf{CSGAN\cite{csgan}}: A variant of the CSGM method enhanced by meta-learning to improve reconstruction speed. \begin{table} \centering \begin{tabular}{|l|l|l|l|} \hline Methods & Data dependent & Running process & Platform \\ \hline L1 & No & Iterative & CPU \\ \hline TVAL-3 & No & Iterative & CPU \\ \hline NLR-CS & No & Iterative & CPU \\ \hline D-AMP & No & Iterative & CPU \\ \hline ReconNet & Yes & End-to-end & GPU \\ \hline ISTA-Net & Yes & End-to-end & GPU \\ \hline LDAMP & Yes & End-to-end & GPU \\ \hline CSGM & Yes & Iterative & GPU \\ \hline LAPRAN & Yes & End-to-end & GPU \\ \hline CSGAN & Yes & Iterative & GPU \\ \hline \end{tabular} \vspace{4mm} \caption{List of methods included in OpenICS} \label{methodlist} \end{table}
\section{Introduction} In discrete geometry, linear programming bounds are important for bounding the quality of geometric configurations \cite{delsarte77,MR1181534,MR1973059,bachoc09,cohn07}. These bounds have been used to show that constructions are optimal, for example the sphere packings coming from the $\mathsf{E}_8$ and Leech lattices \cite{MR3664816,MR3664817}. Moreover, the best known asymptotic bounds for binary codes \cite{MR439403}, spherical codes, sphere packing densities \cite{KL78,MR3229046}, and certain problems in Euclidean Ramsey theory \cite{MR3341578,MR4439455} are derived from the linear programming bounds. However, for many instances the linear programming bounds are not sharp and research is being done into semidefinite programming bounds. Initiated by Schrijver for binary codes, and Bachoc and Vallentin for the kissing number problem, three-point semidefinite programming bounds have been developed which take into account interactions between triples of points, as opposed to pairs of points for the linear programming bounds \cite{MR2236252,bachoc08,MR2947943,cohnlaatsalmon}. For the equiangular lines problem this has been generalized to a hierarchy of $k$-point bounds \cite{deLaat2021}. For many problems, these higher-order bounds lead to significant improvements, including new optimality proofs via sharp bounds; see e.g. \cite{MR2469257,MR2947943,MR4263438}. However, until now no new asymptotic results in the dimension have been derived from these bounds. The Lasserre hierarchy and the dual sum-of-squares approach by Parrilo are important for obtaining bounds for hard problems in combinatorial optimization \cite{lasserre01, Par00}. De Laat and Vallentin have generalized the Lasserre hierarchy for the independent set problem to a continuous setting so that it can be applied to problems in discrete geometry \cite{laat15}. The first level of the hierarchy reduces to the linear programming bound. 
The second level, however, is a $4$-point bound from which the $k$-point bound with $k=4$ mentioned above can be derived by removing many of the constraints. Hence the second level of the Lasserre hierarchy, although more difficult to compute, is likely to improve bounds. Until now the second level has only been computed for an energy minimization problem in dimension $3$ \cite{laat16}. We would also like to mention that an adaptation of this hierarchy for the sphere packing problem was given recently \cite{cohnsalmon}. In summary, computing the higher levels of this hierarchy is a promising approach for various problems in discrete geometry. In this paper we compute the second and third levels of the Lasserre hierarchy for the equiangular lines problem with a fixed angle $\arccos \alpha$. This problem asks one to determine the maximum number $N_\alpha(n)$ of lines through the origin in $\mathbb{R}^n$ such that the angle between any pair of lines is $\arccos \alpha$. Recently, there have been several breakthroughs for this problem, starting with the result by Bukh in 2016 that $N_\alpha(n)$ is at most linear in $n$ for any fixed $\alpha$ \cite{bukh16}. In 2018 Balla, Dr\"axler, Keevash, and Sudakov showed that $\limsup_{n\to\infty} N_\alpha(n)/n$ is at most $1.93$ unless $\alpha = 1/3$, in which case it is $2$ \cite{balla18}. In 2021 Jiang, Tidor, Yao, Zhang, and Zhao showed \begin{equation}\label{eq:realasymptoticslope} N_{\alpha}(n) = \lfloor (a+1)(n-1)/(a-1) \rfloor \end{equation} for $\alpha = 1/a$ with $a$ an odd integer $a \ge 3$ and all sufficiently large $n$ \cite{MR4334975}. The significance of these particular inner products $\alpha = 1/a$ lies in the fact that the equiangular lines problem without fixed angle can be solved in any dimension by computing $N_\alpha(n)$ for a given finite list of such $\alpha$; see \cite{larman77}. The linear bound by Bukh holds for all dimensions, but the slope is huge. 
The results from \cite{balla18} and \cite{MR4334975} hold only for $n \geq n_\alpha$ for a very large $n_\alpha$. In these results, the parameters are so large because the proofs rely on Ramsey theory. An asymptotically linear bound not relying on Ramsey theory was given by Balla in 2021. He proved (Theorem 1 in \cite{balla2021}) for all dimensions $n$ and for $\alpha \in (0,1)$ the bound \begin{equation}\label{eq:ballasmalldim} N_{\alpha}(n) \leq \frac{\sqrt{n}}{2 \alpha^3} + \frac{(1+\alpha)n}{2 \alpha} . \end{equation} Our main result is the following conjecture, which we prove for $\alpha = 1/a$ with $a = 3,5,7,9,11$. \begin{conjecture}\label{conj:las2} Let $\alpha \in (0,1)$. The optimal objective of the second level $\mathrm{las}_2(n)$ of the Lasserre hierarchy for bounding $N_\alpha(n)$ satisfies \begin{equation}\label{eq:ourbound} \mathrm{las}_2(n) \leq c_\alpha + \frac{(1+\alpha)n}{2 \alpha} \end{equation} for $n \geq n_\alpha$, where $c_\alpha$ and $n_\alpha$ are constants not depending on the dimension. \end{conjecture} For the values of $\alpha$ mentioned, we provide both an explicit small $c_\alpha$ and an explicit small $n_\alpha$, so we give new bounds in many dimensions not covered by (\ref{eq:realasymptoticslope}). Furthermore, because of the $\sqrt{n}$ term, we also improve on (\ref{eq:ballasmalldim}) for these $\alpha$. Our explicit linear bounds are listed in Table~\ref{table:asymptotic}. These are the first asymptotic bounds in the dimension coming from semidefinite programming. We have also computed the third level of the Lasserre hierarchy, with which we obtain new bounds for fixed dimensions. More detailed results are given in Section~\ref{sec:applications}. The third level also provides numerical evidence for the existence of linear bounds for all dimensions, whose asymptotic slopes improve on (\ref{eq:ballasmalldim}) and our bound (\ref{eq:ourbound}). 
\begin{table} \begin{tabular}{@{}lll@{}} \toprule $\alpha$ & $f_\alpha(n)$ & $n_\alpha$\\ \midrule $1/3$ & $2n+4$ & $13$\\ $1/5$ & $3n+30$ & $87$\\ $1/7$ & $4n+116$ & $261$\\ $1/9$ & $5n+316$ & $166018$\\ $1/11$ & $6n+699$ & $751307$\\ \bottomrule \end{tabular} \bigskip \caption{We prove $N_\alpha(n) \leq f_\alpha(n)$ for all $n \geq n_\alpha$. By solving finitely many semidefinite programs the constants $n_{1/9}$ and $n_{1/11}$ could also be made small.} \label{table:asymptotic} \end{table} We would like to reflect briefly on how surprising Conjecture~\ref{conj:las2} is for a semidefinite programming bound. For fixed $n$, finite convergence of the Lasserre hierarchy is guaranteed, so in principle this hierarchy can be used to solve any equiangular lines problem. However, for a fixed level of the hierarchy, it could have been that the bound on $N_\alpha(n)$ becomes very bad for large $n$. Indeed, this is the behaviour we see for the Delsarte and $k$-point bounds, where for any fixed $\alpha$ and $k$, the bound on $N_\alpha(n)$ grows rapidly as a function of $n$. That the second level does not exhibit this behaviour is a testament to the strength of the Lasserre hierarchy. For the Lasserre hierarchy we follow \cite{laat15}. Consider the infinite graph $G$ on the vertex set $S^{n-1}$, where two distinct vertices $x$ and $y$ are adjacent if $x \cdot y$ does not lie in a prescribed set $D$. For finite $D$ we call such a $G$ a spherical finite distance graph, and for the equiangular lines problem with a fixed angle we set $D = \{\pm \alpha\}$. An independent set is a subset of the vertex set in which no two vertices are adjacent. Denote by $\mathcal{I}_t$ the set of all independent sets of size at most $t$. This set is given a topology; see \cite{laat15}. 
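As a concrete illustration, three unit vectors in the plane at mutual angle $120^\circ$ have pairwise inner products $-1/2$, so they form an independent set of the spherical finite distance graph with $D = \{\pm 1/2\}$. The following sketch (illustrative only; the helper `is_independent` is ours, not part of the actual computations) checks membership:

```python
import itertools
import math

def is_independent(vectors, D, tol=1e-9):
    """Check that all pairwise inner products lie in the allowed set D,
    i.e. that the vectors form an independent set of the distance graph."""
    for x, y in itertools.combinations(vectors, 2):
        p = sum(a * b for a, b in zip(x, y))
        if not any(abs(p - d) < tol for d in D):
            return False
    return True

# Three unit vectors at mutual angle 120 degrees: all inner products equal -1/2,
# so with D = {+1/2, -1/2} (equiangular lines, alpha = 1/2) they are independent.
vecs = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
        for k in range(3)]
print(is_independent(vecs, D=[0.5, -0.5]))  # True
print(is_independent(vecs, D=[1/3, -1/3]))  # False
```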
Let $\mathcal{C}(X)$ be the space of real-valued continuous functions on a topological space $X$, and define the operator $ A_t \colon \mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)_{\mathrm{sym}} \to \mathcal{C}(\mathcal{I}_{2t}) $ by \[ A_tK(S) = \sum_{\substack{J,J' \in \mathcal{I}_t\\ J\cup J' = S}} K(J,J'). \] Here $\mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)_\mathrm{sym}$ is the space of continuous functions $K(J,J')$ that are symmetric in $J$ and $J'$. We define $\mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)_{\succeq 0}$ to be the cone of positive kernels, which is the set of the symmetric continuous functions that satisfy \[ \sum_{i,j=1}^N c_i c_j K(J_i, J_j) \geq 0 \] for all $N \geq 0$, $c \in \mathbb{R}^N$, and $J_1,\ldots,J_N \in \mathcal{I}_t$. The Lasserre hierarchy for this problem is \begin{mini} {}{K(\emptyset, \emptyset)}{}{} \label{pr:las} \addConstraint{K \in \mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)_{\succeq 0}}{}{} \addConstraint{A_t K(S) \leq -1_{\mathcal{I}_{=1}}(S),\qquad}{}{S \in \mathcal{I}_{2t}\setminus\{\emptyset\},} \end{mini} where $1_{\mathcal{I}_{=1}}$ is the indicator function of the set $\mathcal{I}_{=1}$ of independent sets of size $1$. Let us verify that this program bounds the independence number. If $C$ is an independent set and $K$ a feasible kernel, then we have \[ 0 \leq \sum_{\substack{J,J' \in \mathcal{I}_t\\J,J' \subseteq C}} K(J,J') = \sum_{\substack{S \in \mathcal{I}_{2t}\\ S \subseteq C}} A_tK(S) \leq K(\emptyset, \emptyset) - |C|, \] which shows any feasible solution to \eqref{pr:las} gives an upper bound on the independence number. To compute the second ($t=2$) and third ($t=3$) levels we make essential use of symmetry. 
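The middle equality in the chain above is a pure double-counting identity: each pair $(J, J')$ with $J, J' \subseteq C$ contributes to exactly one set $S = J \cup J'$. The following toy check (illustrative; a three-point ground set playing the role of $C$, with $t = 1$ and a random symmetric kernel) verifies this numerically:

```python
import itertools
import random

random.seed(0)
C = (0, 1, 2)          # an "independent set" of 3 abstract points
t = 1
# I_t: subsets of C of size at most t (the empty set included)
I_t = [frozenset(s) for k in range(t + 1)
       for s in itertools.combinations(C, k)]

# A random symmetric kernel K on I_t x I_t
K = {}
for J in I_t:
    for Jp in I_t:
        K[(J, Jp)] = K[(Jp, J)] if (Jp, J) in K else random.uniform(-1, 1)

def A_t(K, S):
    """(A_t K)(S) = sum of K(J, J') over pairs with J union J' = S."""
    return sum(K[(J, Jp)] for J in I_t for Jp in I_t if J | Jp == S)

lhs = sum(K[(J, Jp)] for J in I_t for Jp in I_t)   # all pairs J, J' inside C
I_2t = [frozenset(s) for k in range(2 * t + 1)
        for s in itertools.combinations(C, k)]
rhs = sum(A_t(K, S) for S in I_2t)                 # grouped by the union S
assert abs(lhs - rhs) < 1e-12
print("identity verified")
```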
The action of the orthogonal group $\Ort{n}$ on $S^{n-1}$ induces an action on $\mathcal{I}_t$ and $\mathcal{I}_{2t}$, and hence a linear action on $\mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)$ and $\mathcal{C}(\mathcal{I}_{2t})$ by $\gamma K(J,J') = K(\gamma^{-1} J, \gamma^{-1} J')$ and $\gamma f(S) = f(\gamma^{-1} S)$. By compactness of the orthogonal group, one may restrict to $\Ort{n}$-invariant kernels, which are kernels $K$ satisfying $\gamma K = K$ for all $\gamma \in \Ort{n}$. After this restriction, it is sufficient to impose the constraints $A_t K(S) \leq -1_{\mathcal{I}_{=1}}(S)$ for orbit representatives $S$ of $\mathcal{I}_{2t}\setminus\{\emptyset\}$, of which there are finitely many. Indeed, we may enumerate them by listing all matrices of size at most $2t$ that are positive semidefinite and have rank at most $n$ with ones on the diagonal and elements from $D$ elsewhere. These matrices are Gram matrices of the orbit representatives and, up to simultaneous permutations of rows and columns, are in one-to-one correspondence with the orbits. The first level of the hierarchy is equal to the Lov\'asz theta prime number \cite{laat16}, and this can be reduced to the Delsarte--Goethals--Seidel linear programming bound using Schoenberg's characterization \cite{schoenberg42,bachoc09}. In this paper we give a procedure for parametrizing the kernels $K \in \mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)$ for general $t$, which we can use to write a truncation of the $t$-th level of the hierarchy as a semidefinite program. We construct positive, $\Ort{n}$-invariant kernels as follows. Under the action of $\Ort{n}$, the space $\mathcal{I}_t$ decomposes as a disjoint union of finitely many orbits $X_1,\ldots, X_N$. Fix an orbit representative $R_i$ for each orbit $X_i$, and let $H_i$ be the stabilizer subgroup of $\Ort{n}$ with respect to $R_i$. 
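For $D = \{\pm\alpha\}$, the enumeration of orbit representatives via Gram matrices described above can be carried out by brute force over sign patterns: build all symmetric matrices with ones on the diagonal and $\pm\alpha$ off the diagonal, keep the positive semidefinite ones, and deduplicate under simultaneous row-column permutations. A small sketch (illustrative; the rank condition is omitted, which is harmless when $n$ is at least the matrix size):

```python
import itertools
import numpy as np

def gram_representatives(alpha, size):
    """Enumerate PSD sign-pattern Gram matrices of the given size for
    D = {-alpha, +alpha}, up to simultaneous row-column permutations."""
    pairs = list(itertools.combinations(range(size), 2))
    seen, reps = set(), []
    for signs in itertools.product([alpha, -alpha], repeat=len(pairs)):
        G = np.eye(size)
        for (i, j), v in zip(pairs, signs):
            G[i, j] = G[j, i] = v
        if np.linalg.eigvalsh(G)[0] < -1e-9:   # not positive semidefinite
            continue
        # canonical form: lexicographically smallest simultaneous permutation
        canon = min(tuple(G[np.ix_(p, p)].round(9).flatten())
                    for p in itertools.permutations(range(size)))
        if canon not in seen:
            seen.add(canon)
            reps.append(G)
    return reps

# For alpha = 1/3: two 2-point representatives (inner product +1/3 or -1/3),
# and four 3-point representatives (0, 1, 2, or 3 negative inner products).
print(len(gram_representatives(1/3, 2)))  # 2
print(len(gram_representatives(1/3, 3)))  # 4
```

Note that only permutations are quotiented out: flipping the sign of a single vector changes the Gram matrix and, in general, the orbit.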
Given an irreducible, unitary representation $\pi \colon \Ort{n} \to \mathrm{U}(V)$, for each $i$ we let $e_{\pi,i,1},\ldots,e_{\pi,i,d_{\pi,i}}$ be a basis of the space of invariants \[ V^{H_i} = \big\{ v \mid \pi(h) v = v \text{ for all } h \in H_i \big\}. \] For $J \in \mathcal I_t$, define $i(J)$ to be the index such that $J \in X_{i(J)}$ and let $s \colon \mathcal I_t \to \Ort{n}$ be a function such that $s(J) R_{i(J)} = J$ for all $J \in \mathcal I_t$. For each $\pi$ let $F^\pi$ be a positive semidefinite matrix whose rows and columns are indexed by pairs $(i,j)$ with $1 \leq i \leq N$ and $1 \leq j \leq d_{\pi, i}$, where we assume only finitely many of these matrices are nonzero. The kernel $K \in \mathcal{C}(\mathcal{I}_t \times \mathcal{I}_t)$ defined by \begin{equation}\label{eq:kernelfourier} K(J_1,J_2) = \sum_\pi \sum_{j_1, j_2} F_{(i(J_1),j_1), (i(J_2), j_2)}^\pi \big\langle \pi(s(J_1)) e_{\pi,i(J_1),j_1}, \pi(s(J_2)) e_{\pi,i(J_2),j_2} \big\rangle \end{equation} is continuous, positive, $\Ort{n}$-invariant, and does not depend on the choice of the function $s$. Moreover, any positive, $\Ort{n}$-invariant kernel can be written as a uniformly absolutely convergent sum of such kernels (in fact, as a single infinite series of the above form). A similar characterization of positive, invariant kernels goes back to Schoenberg \cite{schoenberg42} for the sphere and to Bochner for general homogeneous spaces \cite{bochner41}. In \cite[Theorem 3.4.4]{laatthesis16} a variant for finitely many orbits is given. In all applications in this paper, the vectors in $R_i$ are linearly independent. In this case, the stabilizer subgroups $H_i$ when considering the $t$-th level of the Lasserre hierarchy are isomorphic to $S(R_i) \times \Ort{n-t_i}$, where $t_i = |R_i|$ and $S(R_i)$ is a finite subgroup of $\Ort{t_i}$. To compute the inner product in \eqref{eq:kernelfourier}, we need to explicitly construct bases of $V^{H_i}$. 
Historically, a description of the invariant subspace $V^{\Ort{n-t}}$ was found when studying the Stiefel manifold $\Ort{n}/\Ort{n-t}$. In 1974, Gelbart discovered a beautiful connection between the harmonic analysis of the Stiefel manifold and the representation theory of general linear groups \cite{gelbart74}. Using weight theory, the irreducible representations of $\Ort{n}$ and of $\GL{t}$ can both be labeled by tuples $\lambda$ of integers, and he showed that the dimension of $V^{\Ort{n-t}}$ is equal to the dimension of the representation of $\GL{t}$ with the same label $\lambda$. Later Gross and Kunze \cite{grosskunze77} gave an explicit isomorphism with additional properties between $V^{\Ort{n-t}}$ and the representation of $\GL{t}$. To construct explicit bases of the invariant subspaces and to compute the inner products, we will use bases of the representations of $\GL{t}$, the isomorphism by Gross and Kunze, and further techniques to deal with the finite groups $S(R_i)$. With this explicit description of the invariant subspaces, the kernel in \eqref{eq:kernelfourier} can be evaluated at points. This allows us to set up and compute the second and third levels of the Lasserre hierarchy for spherical finite distance problems. In Sections~\ref{sec:gltrep} and~\ref{sec:Gross-Kunze} we discuss an explicit formulation of the matrix coefficients of $\GL{t}$ and the construction of Gross and Kunze. In Section~\ref{sec:stabinv} these constructions are used to compute the invariant subspaces. In Section~\ref{sec:sdp} an efficient semidefinite programming formulation is presented. In Section~\ref{sec:applications} we discuss computational results, where we first give improved bounds on $N_\alpha(n)$ in fixed dimensions, as shown in Figures~\ref{fig:a5} and \ref{fig:a7}, and then consider bounds for more general spherical finite distance problems. 
In Section~\ref{sec:asymptotics} an asymptotic analysis of the bounds is given, which leads to the proof of Conjecture~\ref{conj:las2} for the values of $\alpha$ mentioned. Finally, in Section~\ref{sec:moreasymptotics} we give a formulation for the limit semidefinite program. \section{Representations of the general linear group} \label{sec:gltrep} In this section we give an explicit procedure to compute the matrix coefficients of the representations of the general linear group $\GL{t}$ over the complex numbers. We use Weyl's construction of the representations, following Chapter 15 in \cite{fulton91}. It is necessary to go through this material in some detail, since we will use the specifics of this construction in Sections~\ref{sec:stabinv} and~\ref{sec:moreasymptotics}. Let $\lambda = (\lambda_1, \dots, \lambda_t)$ be a partition of $d$ with $\lambda_1 \geq \dots \geq \lambda_t \geq 0$. The Young diagram associated to $\lambda$ consists of left-aligned rows of boxes, with $\lambda_i$ boxes in the $i$th row. For instance, the Young diagram of the partition $(4,3,1)$ is \[ \ydiagram{4,3,1} \] To the partition $\lambda$ we may associate two other tuples of integers: the conjugate partition $\mu$, with $\mu_j$ the length of column $j$ in the diagram of $\lambda$, and the tuple of integers $a = (a_1, \dots, a_t)$, defined by $a_i = \lambda_{i} - \lambda_{i+1}$ (setting $\lambda_{t+1} = 0$), so that $a_i$ is the number of columns with length $i$ in the Young diagram of $\lambda$. In our example, the conjugate partition is $\mu = (3,2,2,1)$ and we have $a = (1,2,1)$. Let $\mathbb{C}^t$ be the tautological representation of $\GL{t}$, which is the representation that sends a matrix to itself, and consider the subrepresentation of $(\mathbb{C}^t)^{\otimes d}$ given by \[ A^{a}\mathbb{C}^t = \mathrm{Sym}^{a_t}(\Lambda^t \mathbb{C}^t) \otimes \mathrm{Sym}^{a_{t-1}}(\Lambda^{t-1} \mathbb{C}^t) \otimes \cdots \otimes \mathrm{Sym}^{a_1}(\mathbb{C}^t). 
\] To improve readability, we will use~$\cdot$ for both the tensor and symmetric tensor products, since the length of the wedge product prevents confusion. To obtain an irreducible representation, we consider a quotient of $A^{a}\mathbb{C}^t$. Let $p$ and $q$ be the lengths of two consecutive columns of the Young diagram and let $v_1, \dots, v_p, w_1, \dots, w_q \in \mathbb{C}^t$. Let $r$ be an integer with $1 \leq r \leq q$. Consider the difference between an element of $A^a\mathbb{C}^t$ of the form \[ x \cdot (v_1 \wedge \cdots \wedge v_p)\cdot (w_1 \wedge \cdots \wedge w_q) \cdot y \] and \begin{align*} &x \cdot \Big(\sum (v_1 \wedge \cdots \wedge w_1 \wedge \cdots \wedge w_r \wedge \cdots \wedge v_p) \\ &\quad \quad \cdot (v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_r} \wedge w_{r+1} \wedge \cdots \wedge w_q) \Big)\cdot y, \end{align*} where the sum is over all $1 \leq i_1 < i_2 < \cdots < i_r \leq p$ and the elements $w_1, \dots, w_r$ are inserted at the positions $i_1, \dots, i_r$ in $v_1 \wedge \cdots \wedge v_p$. Next, consider the subspace generated by such differences. This subspace is invariant under the action of $\GL{t}$ and hence the quotient of $A^a \mathbb{C}^t$ by this subspace is a representation of $\GL{t}$. This representation is denoted by $W$ and the associated group homomorphism by \begin{equation} \label{eq:rhorep} \rho \colon \GL{t} \to \mathrm{GL}(W). \end{equation} This is an irreducible polynomial representation of $\GL{t}$, and all irreducible polynomial representations of $\GL{t}$ are obtained in this way for a unique $\lambda$; see Theorem 6.3 and Proposition 15.47 in \cite{fulton91}. We now construct a basis of $W$ using the \emph{semistandard tableaux} on the Young diagram of~$\lambda$. A tableau is obtained by placing in each box of the Young diagram an integer between $1$ and $t$. We say a tableau is semistandard if the entries in each row are nondecreasing and the entries in each column are strictly increasing. 
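These combinatorial notions are easy to verify computationally. The sketch below (illustrative helper functions, not part of our actual code) computes the conjugate partition and the tuple $a$ for the running example $\lambda = (4,3,1)$, and counts semistandard tableaux by brute force; for $\lambda = (2,1)$ and $t = 2$ the count is $2$, matching the dimension formula $\prod_{i<j}(\lambda_i - \lambda_j + j - i)/(j - i)$ stated below.

```python
import itertools

def conjugate(lam):
    """mu_j = number of rows of lam with at least j boxes."""
    return [sum(1 for li in lam if li >= j) for j in range(1, lam[0] + 1)]

def column_lengths(lam):
    """a_i = lam_i - lam_{i+1} = number of columns of length i."""
    lam = list(lam) + [0]
    return [lam[i] - lam[i + 1] for i in range(len(lam) - 1)]

def semistandard_tableaux(lam, t):
    """Brute force: rows nondecreasing, columns strictly increasing."""
    boxes = [(i, j) for i, li in enumerate(lam) for j in range(li)]
    for entries in itertools.product(range(1, t + 1), repeat=len(boxes)):
        T = dict(zip(boxes, entries))
        if all(T[i, j] <= T[i, j + 1] for (i, j) in boxes if (i, j + 1) in T) \
           and all(T[i, j] < T[i + 1, j] for (i, j) in boxes if (i + 1, j) in T):
            yield T

assert conjugate((4, 3, 1)) == [3, 2, 2, 1]     # mu from the running example
assert column_lengths((4, 3, 1)) == [1, 2, 1]   # a from the running example
print(len(list(semistandard_tableaux((2, 1), 2))))  # 2
```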
Let $T$ be a tableau and let $T(i,j)$ be the entry of $T$ in the $i$th row and the $j$th column. Define $e_T$ to be the image in $W$ of the element \[ \prod_{j = 1}^{\lambda_1} e_{T(1,j)} \wedge e_{T(2,j)} \wedge \cdots \wedge e_{T(\mu_j,j)} \in A^{a}\mathbb{C}^t, \] where $e_1,\dots,e_t$ is the standard basis of $\mathbb{C}^t$. The set of all $e_T$ with $T$ a semistandard tableau on $\lambda$ is a basis of $W$ (Proposition 15.55 in \cite{fulton91}). We would now like to compute the matrix coefficients of the representation $\rho$ with respect to the semistandard tableaux basis. For a tableau $T$ we have \begin{align*} & \rho(A) e_T = \rho(A) \prod_{j = 1}^{\lambda_1} e_{T(1,j)} \wedge e_{T(2,j)} \wedge \cdots \wedge e_{T(\mu_j,j)} \\ & \quad = \prod_{j = 1}^{\lambda_1} A e_{T(1,j)} \wedge A e_{T(2,j)} \wedge \cdots \wedge A e_{T(\mu_j,j)} \\ & \quad = \sum_{S} \prod_{j = 1}^{\lambda_1} A_{S(1,j),T(1,j)} e_{S(1,j)} \wedge A_{S(2,j),T(2,j)} e_{S(2,j)} \wedge \cdots \wedge A_{S(\mu_j,j),T(\mu_j,j) } e_{S(\mu_j,j)}, \end{align*} where the sum runs over all tableaux $S$. Using multilinearity, we obtain \begin{align*} \rho(A) e_T &= \sum_{S} A_{S,T} \left( \prod_{j = 1}^{\lambda_1} e_{S(1,j)} \wedge e_{S(2,j)} \wedge \cdots \wedge e_{S(\mu_j,j)} \right) = \sum_{S} A_{S,T} e_S, \end{align*} where we define \[ A_{S,T} = \prod_{j = 1}^{\lambda_1} \prod_{i=1}^{\mu_j} A_{S(i,j),T(i,j)}. \] Let $T_1, \dots, T_m$ be the list of all semistandard tableaux on the Young diagram associated to $\lambda$. The dimension $m$ is given by (Theorem 6.3 of~\cite{fulton91}) \begin{equation}\label{eq:dimension} m = \dim W = \prod_{1 \leq i < j \leq t} \frac{\lambda_i - \lambda_j + j - i}{j - i}. \end{equation} Equip $W$ with the inner product $\langle \cdot, \cdot \rangle$ for which $e_{T_1},\ldots,e_{T_m}$ are orthonormal. For each tableau $S$ we then have \[ e_S = \sum_{i = 1}^m \langle e_{T_i} , e_{S} \rangle e_{T_i}. 
\] The proof of Proposition 15.55 in~\cite{fulton91} may be used to give an algorithm to compute these numbers, which we describe below. The matrix coefficients of the representation $W$ are then given by \begin{equation}\label{eq:matrixcoeffs} \langle e_{T_i} , \rho(A) e_{T_j} \rangle = \sum_S A_{S,T_j} \langle e_{T_i}, e_S \rangle. \end{equation} Let $T$ be a tableau on the Young diagram associated to $\lambda$ and $e_T$ the associated element of $W$. We encode $T$ as a list of vectors of integers, where the $i$th vector corresponds to the $i$th column of $T$. For instance, the tableau \[ \ytableausetup{centertableaux} \begin{ytableau} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 \end{ytableau} \] will be encoded as $\{ (a_1,a_2,a_3),(b_1,b_2),(c_1,c_2) \}$. We order such tableaux using the reverse lexicographical ordering, i.e., $T' > T$ if the last entry in the last vector where $T'$ differs from $T$ is larger. For an integer $1 \leq s < \lambda_1$, let $(a_1,\dots,a_p)$ be the $s$-th column of $T$ and $(b_1,\dots,b_q)$ be the $(s+1)$-th column of $T$. For $1 \leq r \leq q$ and integers $1 \leq i_1 < \cdots < i_r \leq p$, let $\Phi(T,s,r,i_1,\ldots,i_r)$ be the tableau obtained from $T$ where we replace the first $r$ entries in column $s+1$ by $a_{i_1},\ldots,a_{i_r}$ and replace the entries at positions $i_1,\ldots,i_r$ in column $s$ by $b_1,\ldots,b_r$. The algorithm below returns a vector with the coordinates of $e_T$ in the basis $e_{T_1}, \ldots, e_{T_m}$. The algorithm is recursive and terminates since at every call we increase the position of the tableau in the reverse lexicographic order. The last step uses the quotient in the definition of $W$. \begin{algorithm} \caption{Algorithm to decompose a tableau into semistandard tableaux}\label{euclid} \begin{algorithmic} \Procedure{ssdecomp}{$T$} \State If there is a column in $T$ with two identical entries, return $0$. 
\State Else, let $\sigma$ be the permutation of $T$ which orders each column to become strictly increasing and does not exchange elements between different columns. \State Replace $T$ by $\sigma(T)$. \State If $T$ is semistandard, return a vector with $\mathrm{sign}(\sigma)$ at the appropriate entry and zeros otherwise. \State Else, find $r$ and $s$ such that $T(r,s) > T(r,s + 1)$. \State Return $\mathrm{sign}(\sigma) \sum_{1 \leq i_1 < \dots < i_r \leq p} \text{\sc ssdecomp}(\Phi(T, s, r, i_1,\ldots,i_r))$. \EndProcedure \end{algorithmic} \end{algorithm} \section{The Gross-Kunze construction} \label{sec:Gross-Kunze} We see the group $\Ort{n-t}$ as a subgroup of $\Ort{n}$ by fixing the first $t$ coordinates. Given a representation $\pi \colon \Ort{n} \to \mathrm{GL}(V)$, the space of $\Ort{n-t}$-invariants is \[ V^{\Ort{n-t}} = \big\{v \in V \mid \pi(A) v = v \text{ for all } A \in \Ort{n-t}\big\}. \] In this section we give an explicit description of the $\Ort{n-t}$-invariant subspaces of the irreducible representations of $\Ort{n}$ for $2t < n$. For this we give an exposition of a construction by Gross and Kunze \cite{gross1984finite} and prove additional properties required in our setting. First, Gross and Kunze construct invariants in the case of the complex orthogonal group \[ \Ort{n, \mathbb{C}} = \left\{ Q \in \mathbb{C}^{n \times n} \mid Q^{\sf T} Q = 1 \right\}, \] whereas we are interested in the real orthogonal group $\Ort{n}$. This problem is resolved by complexifying. There is a natural inclusion $\Ort{n} \subseteq \Ort{n,\mathbb{C}}$ and $\Ort{n,\mathbb{C}}$ is the complexification of $\Ort{n}$, which means that for any smooth homomorphism $\alpha$ from $\Ort{n}$ to a complex Lie group $G$, there is a unique holomorphic homomorphism $\alpha_\mathbb{C}$ from $\Ort{n,\mathbb{C}}$ to $G$ with $\alpha(A) = \alpha_\mathbb{C}(A)$ for all $A \in \Ort{n}$; see e.g. \cite[Chapter 15]{MR3025417}. Let $V$ be a finite dimensional, complex vector space. 
We consider a representation of $\Ort{n}$ on $V$ to be a smooth group homomorphism from $\Ort{n}$ to $\mathrm{GL}(V)$ and a representation of $\Ort{n,\mathbb{C}}$ on $V$ to be a holomorphic group homomorphism from $\Ort{n,\mathbb{C}}$ to $\mathrm{GL}(V)$. Holomorphic maps are in particular smooth and $\Ort{n}$ is a smooth submanifold of $\Ort{n,\mathbb{C}}$. Hence the restriction of a representation of $\Ort{n,\mathbb{C}}$ to $\Ort{n}$ is a representation of $\Ort{n}$. By the definition of the complexification and setting $G = \mathrm{GL}(V)$, we see that any representation $\pi$ of $\Ort{n}$ is the restriction of a unique representation $\pi_\mathbb{C}$ of $\Ort{n,\mathbb{C}}$. We now show that complexification interacts well with invariant subspaces. Since $\Ort{n-t} \subseteq \Ort{n-t, \mathbb{C}}$ we have \[ V^{\Ort{n-t,\mathbb{C}}} \subseteq V^{\Ort{n-t}}. \] For the other direction, we use polar decomposition. Any matrix $A \in \Ort{n,\mathbb{C}}$ can be written uniquely as \[ A = U e^{i X} \] with $U$ in $\Ort{n}$ and $X$ in the Lie algebra $\mathfrak{o}(n)$ of $\Ort{n}$ consisting of the skew-symmetric matrices of size $n$; see, e.g., \cite[Proposition~15.2.1]{MR3025417}. Since $\pi_\mathbb{C}$ is a homomorphism we have \[ \pi_\mathbb{C}(U e^{iX}) = \pi_\mathbb{C}(U) \pi_\mathbb{C}(e^{iX}) = \pi_\mathbb{C}(U) e^{d\pi_\mathbb{C}(iX)} = \pi_\mathbb{C}(U) e^{id\pi_\mathbb{C}(X)} = \pi(U) e^{i d\pi(X)}, \] where $d\pi_\mathbb{C} \colon \mathfrak o(n, \mathbb{C}) \to \mathfrak{gl}(n)$ and $d\pi \colon \mathfrak o(n) \to \mathfrak{gl}(n)$ are the differentials of $\pi_\mathbb{C}$ and~$\pi$. Here we used that $d\pi_\mathbb{C}$ is complex linear since $\pi_\mathbb{C}$ is holomorphic. Now let $v$ be a vector in $V$ invariant under $\Ort{n-t}$. Then $d\pi(X)v = 0$ for any $X$ in the Lie algebra $\mathfrak o(n-t)$. By the above, for $A \in \Ort{n-t, \mathbb{C}}$ there are $U \in \Ort{n-t}$ and $X \in \mathfrak{o}(n-t)$ such that $A = Ue^{iX}$. 
Hence we have \[ \pi_\mathbb{C}(A) v = \pi(U) e^{i d\pi(X)} v = \pi(U) (I + id\pi(X) + \frac{1}{2} (id\pi(X))^2 + \ldots) v = \pi(U) v = v, \] which shows \[ V^{\Ort{n-t}} \subseteq V^{\Ort{n-t, \mathbb{C}}}. \] Hence the invariant subspaces agree in the real and complex cases. The $\Ort{n-t, \mathbb{C}}$-invariant subspaces of the irreducible representations of $\Ort{n,\mathbb{C}}$ are described by Gross and Kunze in \cite{gross1984finite}. The irreducible representations are induced by representations of the general linear group. Let $(\rho,W)$ be an irreducible, polynomial representation of $\GL{t}$. Let $\omega$ be the complex $t \times n$ matrix \[ \omega = \begin{pmatrix} I_t & i I_t & 0 \end{pmatrix}, \] and let $\epsilon$ be the $n \times t$ matrix \[ \epsilon = \begin{pmatrix} I_t \\ 0 \end{pmatrix}. \] For each $w \in W$, define a function $\phi(w) \colon \Ort{n,\mathbb{C}} \to W$ by \begin{equation} \label{eq:defoforthrep1} \phi(w) (x) = \rho(\omega x \epsilon)w. \end{equation} We then define the vector space of right translates of such functions \[ V = \mathrm{span} \left\{ \pi_{\mathbb{C}}({\xi}) \phi(w) \mid \xi \in \Ort{n,\mathbb{C}},\, w \in W \right\}, \] where $\pi_{\mathbb{C}}({\xi}) \phi(w) (x) = \phi(w) (x \xi)$. This space carries an action of $\Ort{n,\mathbb{C}}$ by right translation. Because $h \epsilon = \epsilon$ for $h \in \Ort{n-t,\mathbb{C}}$, it follows that the subspace $\phi(W)$ is invariant under right translation by an element of $\Ort{n - t,\mathbb{C}}$. The following theorem is a special case of Theorem~7.6 in \cite{gross1984finite}. \begin{theorem}\label{thm:GK} The vector space $V$ with right translation is a finite dimensional, holomorphic representation of $\Ort{n,\mathbb{C}}$. Moreover, the space $V^{\Ort{n - t,\mathbb{C}}}$ is precisely $\phi(W)$. 
Every holomorphic irreducible representation of $\Ort{n,\mathbb{C}}$ with non-trivial $\Ort{n - t,\mathbb{C}}$-invariants is of this form for a unique polynomial representation $(\rho,W)$ of $\GL{t}$. \end{theorem} Restricting to $\Ort{n}$ gives a full description of the invariant subspaces in the real case. Henceforth when we speak of $V$ we shall mean the representation of $\Ort{n}$ obtained by restricting the representation of $\Ort{n,\mathbb{C}}$ constructed in Theorem \ref{thm:GK}. We define on $V$ the inner product \[ \langle f, g \rangle = \int_{\Ort{n}} \langle f(x), g(x) \rangle \, dx, \] where in the integrand any inner product on $W$ is used. By standard properties of the Haar measure, this makes the representation of $\Ort{n}$ unitary. By the uniqueness, up to positive scaling, of invariant inner products on an irreducible representation, the above integral is independent of the choice of inner product on $W$ up to multiplication by a positive real number. We now show that in the semistandard tableaux basis, the matrix coefficients of $\pi$ are real. This allows us to restrict to real matrices in the semidefinite program. \begin{proposition} \label{prop:reality} The numbers $\langle \phi(e_{T_i}), \pi(y) \phi(e_{T_j}) \rangle$ are real for all $y \in \Ort{n}$. \end{proposition} \begin{proof} We may define a conjugation $\overline{(\cdot)}$ on $W$ by conjugating the components of a vector in the semistandard tableaux basis. Let $\eta$ be the orthogonal matrix $\mathrm{diag}(I_t,-I_{n - t})$. We then have $\omega \eta = \begin{pmatrix} I_t & -i I_t & 0 \end{pmatrix}$. Since the matrix coefficients of the representation $\rho$ in the semistandard tableaux basis are polynomials with real coefficients, we have \[ \rho(\omega \eta x y \epsilon) \overline{v} = \overline{\rho(\omega x y \epsilon) v} \] for all $x, y \in \Ort{n}$ and all $v \in W$. 
Hence we have \begin{align*} \langle \phi(e_{T_i}), \pi(y) \phi(e_{T_j}) \rangle &= \int_{\Ort{n}} \langle \rho(\omega x \epsilon) e_{T_i}, \rho(\omega x y \epsilon) e_{T_j} \rangle\, dx \\ & = \int_{\Ort{n}} \langle \rho(\omega \eta x \epsilon) e_{T_i}, \rho(\omega \eta x y \epsilon) e_{T_j} \rangle\, dx \\ & = \int_{\Ort{n}} \langle \overline{ \rho(\omega x \epsilon) e_{T_i}}, \overline{\rho(\omega x y \epsilon) e_{T_j}} \rangle\, dx \\ & = \overline{\langle \phi(e_{T_i}), \pi(y) \phi(e_{T_j})\rangle}.\qedhere \end{align*} \end{proof} \section{Invariants under the stabiliser subgroups}\label{sec:stabinv} Recall that $\mathcal I_t$ is the set of subsets of $S^{n-1}$ of size at most $t$ with inner products in the finite subset $D$ of $[-1, 1)$. We fix a representative $R_i$ for each orbit $X_i$ of the action of $\Ort{n}$ on $\mathcal I_t$ and let $H_i$ be the stabilizer subgroup of $\Ort{n}$ with respect to $R_i$. In Section~\ref{sec:Gross-Kunze} we give a construction of the irreducible representations $V$ of $\Ort{n}$ indexed by tuples $\lambda$, and we give a basis $\phi(e_{T_1}),\ldots,\phi(e_{T_m})$ of $V^{\Ort{n-t}}$, where $T$ ranges over the semistandard tableaux on the Young diagram associated to $\lambda$ containing entries from $1$ to $t$. In this section we show how we can use this to construct bases $e_{\pi,i,1},\ldots,e_{\pi,i,d_{\pi, i}}$ of the spaces $V^{H_i}$ as needed in \eqref{eq:kernelfourier}. We can choose the representatives $R_i$ to lie in the span of the first $t_i = |R_i|$ coordinates. We assume for notational simplicity that the vectors in $R_i$ are linearly independent, which is true for all applications considered in this paper. We think of $\Ort{n-t_i}$ as acting on the last $n-t_i$ coordinates, so that \[ H_i = S(R_i) \oplus \Ort{n-t_i}, \] where $S(R_i)$ is a finite group of orthogonal transformations that act in the first $t_i$ coordinates and act on $R_i$ by permuting elements. 
We will abuse notation and think of these transformations as $t_i \times t_i$ matrices, knowing that they act as the identity on the last $n-t_i$ coordinates. We have \[ V^{H_i} = \big( V^{\Ort{n - t_i}} \big)^{S(R_i)}, \] and we may first construct a basis of $V^{\Ort{n-t_i}}$ and then a basis of the $S(R_i)$-invariants inside $V^{\Ort{n-t_i}}$. Given the definitions of the representations $\rho$ and~$\pi$ of $\GL{t}$ and $\Ort{n}$ and recalling $t_i \leq t$, we have \begin{equation}\label{eq:pirho-unfold} (\pi(\xi) \phi(e_T)) (x) = \rho(\omega x \xi \epsilon)e_T = \sum_{S} \prod_{j = 1}^{\lambda_1} \prod_{k=1}^{\mu_j} (\omega x \xi \epsilon)_{S(k,j),T(k,j)} e_S, \end{equation} where the sum runs over all tableaux $S$ with entries from $1$ to $t$ on the Young diagram associated to the partition $\lambda$. This shows we have $\pi(\xi) \phi(e_T) = \phi(e_T)$ for all $\xi \in \Ort{n-t_i}$ if the tableau $T$ only has entries ranging from $1$ to $t_i$, and thus the vectors $\phi(e_T)$ lie inside $V^{\Ort{n-t_i}}$ as we range over such $T$. As before, the subspace $V^{\Ort{n-t_i}}$ has dimension equal to that of the corresponding representation of $\GL{t_i}$. Hence a basis of $V^{\Ort{n-t_i}}$ is given by the set of $\phi(e_T)$, where $T$ ranges over semistandard tableaux with entries from $1$ to $t_i$. Henceforth we enumerate the tableaux in such a way that the first $m_i$ semistandard tableaux give a basis for $V^{\Ort{n-t_i}}$ and the bases for the spaces $V^{\Ort{n}} \subseteq V^{\Ort{n-1}} \subseteq \dots \subseteq V^{\Ort{n-t}}$ are nested. To use the basis of $V^{\Ort{n-t_i}}$ to construct a basis for $V^{H_i}$ we use the following procedure. First, we construct the matrices generating the groups $S(R_i)$. For this we fix an ordering of the vectors in $R_i$ and denote also by $R_i$ the $n \times t_i$ matrix with these vectors as columns.
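The tableaux indexing these nested bases are straightforward to enumerate. The following Python sketch (our illustration, not part of the paper's Julia code; the function name `ssyt` is ours) generates all semistandard tableaux of a given shape with entries in $\{1,\ldots,t_i\}$; for the shape $(2,1)$ it produces $8$ tableaux with entries up to $3$ and $2$ tableaux with entries up to $2$, matching the dimensions of the corresponding representations of $\GL{3}$ and $\GL{2}$.

```python
def ssyt(shape, maxval):
    """All semistandard tableaux of the given shape with entries in 1..maxval:
    rows weakly increase left to right, columns strictly increase downwards."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    results = []

    def fill(i, tab):
        if i == len(cells):
            results.append([row[:] for row in tab])
            return
        r, c = cells[i]
        lo = 1
        if c > 0:
            lo = max(lo, tab[r][c - 1])       # weakly increasing along rows
        if r > 0:
            lo = max(lo, tab[r - 1][c] + 1)   # strictly increasing down columns
        for v in range(lo, maxval + 1):
            tab[r].append(v)
            fill(i + 1, tab)
            tab[r].pop()

    fill(0, [[] for _ in shape])
    return results

dim_gl3 = len(ssyt([2, 1], 3))  # dimension of the GL(3) irrep for lambda = (2,1)
dim_gl2 = len(ssyt([2, 1], 2))  # dimension of the GL(2) irrep for lambda = (2,1)
```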
The Gram matrix $R_i^{\sf T}R_i$ of $R_i$ is a matrix with ones on the diagonal and elements from $D$ elsewhere. Given a permutation $\sigma$ of the columns, denote by $P_\sigma$ the corresponding $t_i \times t_i$ permutation matrix. If $P_\sigma^{\sf T} R_i^{\sf T}R_iP_\sigma = R_i^{\sf T}R_i$, then there exists a corresponding $r_\sigma \in S(R_i)$ given by \[ r_\sigma = R_i P_\sigma (R_i^{\sf T}R_i)^{-1} R_i^{\sf T}, \] and we observe that $r_\sigma$ satisfies $r_\sigma R_i = R_i P_\sigma$. To obtain the generators for the group $S(R_i)$, we take the generators $\sigma$ of the symmetric group on $t_i$ elements which satisfy $P_\sigma^{\sf T} R_i^{\sf T}R_iP_\sigma = R_i^{\sf T}R_i$ and take the corresponding matrices $r_\sigma$. To obtain a basis for the space $V^{H_i}$ we look for linear combinations of the vectors $\phi(e_{T_j})$ with $1 \leq j \leq m_i$ such that $\pi(r_\sigma) \sum_{j} c_j \phi(e_{T_j}) = \sum_{j} c_j \phi(e_{T_j})$, and in order to do so we compute a basis for the solution space of the system \begin{equation}\label{eq:kernel-invariant} \sum_{j=1}^{m_i}\big\langle \phi(e_{T_k}),\ (\pi(r_\sigma) - I) \phi(e_{T_j}) \big\rangle c_j = 0 \end{equation} for all $1 \leq k \leq m_i$ and all generators $r_\sigma$ of $S(R_i)$. For each basis element $c$ we get the basis element $\sum_j c_j \phi(e_{T_j})$ of $V^{H_i}$. Let us look at an example. Let $t = 2$ and consider an orbit $X$ such that the elements of $X$ have cardinality two. Let $r$ be the $n \times n$ orthogonal matrix which maps $e_2$ to $-e_2$ and fixes the orthogonal complement of $e_2$. We may choose a representative $R = \{v_1, v_2\}$ of $X$ such that $rv_1 = v_2$ and $rv_2 = v_1$. Then the group $S(R)$ is the group of two elements $\{I, r\}$.
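The formula for $r_\sigma$ can be sanity-checked in exact rational arithmetic. The sketch below (an illustration with our own choice of data, not the paper's code) takes $t_i = 2$, the columns $(3/5, 4/5)^{\sf T}$ and $(4/5, 3/5)^{\sf T}$ for $R$ restricted to the first two coordinates, and $\sigma$ the transposition; it verifies $r_\sigma R = R P_\sigma$ and that $r_\sigma$ is orthogonal.

```python
from fractions import Fraction as F

def mul(A, B):
    # exact matrix product over the rationals
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(A):
    # exact inverse of a 2x2 matrix
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

# R restricted to the first two coordinates, where the vectors live.
R = [[F(3, 5), F(4, 5)],
     [F(4, 5), F(3, 5)]]
P = [[F(0), F(1)], [F(1), F(0)]]   # P_sigma for the transposition
A = mul(transpose(R), R)           # Gram matrix, invariant under P
r = mul(mul(mul(R, P), inv2(A)), transpose(R))   # r_sigma
```

The two assertions $P_\sigma^{\sf T} A P_\sigma = A$ and $r_\sigma R = R P_\sigma$ hold exactly, and $r_\sigma^{\sf T} r_\sigma = I$ confirms orthogonality, as guaranteed by the Gram condition.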
Unwinding definitions with \eqref{eq:pirho-unfold} and using \[ \rho(\omega x r \epsilon) = \rho(\omega x \epsilon )\rho \left( \begin{bmatrix} 1 & 0 \\ 0 & -1 \\ \end{bmatrix} \right), \] we have $\pi(r)\phi(e_T) = \phi(e_T)$ if there is an even number of twos in the tableau $T$ and $\pi(r)\phi(e_T) = - \phi(e_T)$ if there is an odd number of twos in $T$. Then $H = S(R) \oplus \Ort{n-2}$ is the stabiliser of $R$. A basis of $V^{H}$ is given by the set of those $\phi(e_T)$ for which $T$ is a semistandard tableau with an even number of twos. \section{Semidefinite programming formulation} \label{sec:sdp} In this section we give the semidefinite programming formulation. We also discuss the projective formulation of equiangular lines for the case of bounding $N_\alpha(n)$, and we give implementation details. To solve \eqref{pr:las} using semidefinite programming we parametrize the positive kernels using \eqref{eq:kernelfourier}. To do this explicitly we need to compute inner products of the form \[ \big\langle \pi(s(J_1)) e_{\pi,i(J_1),j_1}, \pi(s(J_2)) e_{\pi,i(J_2),j_2} \big\rangle. \] Here $V$ is the irreducible representation of $\Ort{n}$ as constructed in Section~\ref{sec:Gross-Kunze}, and $\{e_{\pi,i,j}\}_j$ is the basis of $V^{H_i}$ as constructed in Section~\ref{sec:stabinv}, where $H_i$ is the stabilizer subgroup of $\Ort{n}$ with respect to the orbit representative $R_i$ of the orbit $X_i$. Recall that $s$ is a function $\mathcal I_t \to \Ort{n}$ such that $s(J)R_{i(J)} = J$ for all $J \in \mathcal I_t$, and define $S = s(J_1)^{-1} s(J_2)$. Then the above inner product is equal to \[ \big\langle e_{\pi,i_1,j_1}, \pi(S) e_{\pi,i_2,j_2} \big\rangle, \] where $i_1 = i(J_1)$ and $i_2 = i(J_2)$. Due to the choice of $\epsilon$ in the definition of $\pi$ we only need to compute the top-left $2t \times t$ part of $S$. For this we first show how $s(J)$ can be computed. 
Let $J \in X_i$ and fix an ordering of the vectors in $R_i$ and $J$, and denote also by $R_i$ and $J$ the matrices with these vectors as their columns. Let $A_i = R_i^{\sf T}R_i$ and let $P$ be a permutation matrix for which $A_i = P^{\sf T} J^{\sf T} J P$. Let $Q$ be an orthogonal matrix with the first $t_i$ columns given by the corresponding columns of $J P A_i^{-1} R_i^{\sf T}$. Then $Q R_i = JP$, and we can define $s(J) = Q$. The top-left $t \times t$ block of $S = s(J_1)^{-1} s(J_2)$ is then given by the corresponding block of \[ (J_1 P_{i_1} A_{i_1}^{-1} R_{i_1}^{\sf T})^{-1} J_2 P_{i_2} A_{i_2}^{-1} R_{i_2}^{\sf T}. \] We can then extend this to the $2t \times t$ block by setting the $t \times t$ block below the top-left $t \times t$ block to any upper triangular matrix which ensures the columns are orthonormal. We then define $w_{\pi,i,j}$ by $\phi(w_{\pi,i,j}) = e_{\pi,i,j}$ and evaluate the inner product as follows: \begin{align*} &\big\langle e_{\pi,i_1,j_1}, \pi(S) e_{\pi,i_2,j_2} \big\rangle\\ &\quad = \int_{\Ort{n}} \big\langle \phi(w_{\pi,i_1,j_1})(x), \phi(w_{\pi,i_2,j_2})(xS)\big\rangle \, dx\\ &\quad = \int_{\Ort{n}} \big\langle \rho(\omega x \epsilon) w_{\pi,i_1,j_1}, \rho (\omega x S \epsilon) w_{\pi,i_2,j_2} \big\rangle\, dx \\ &\quad = \sum_k \int_{\Ort{n}} \big\langle \rho(\omega x \epsilon) w_{\pi,i_1,j_1}, e_{T_k} \big\rangle \big\langle e_{T_k}, \rho (\omega x S \epsilon) w_{\pi,i_2,j_2} \big\rangle\, dx, \end{align*} where the matrix coefficients in the integrand of the last integral can be computed using \eqref{eq:matrixcoeffs}. Notice that the integrand is a polynomial in the top-left $2t \times 2t$ part of $x$. By using the recursive approach from \cite{gorinlopez08} for evaluating the integral of a monomial over $\Ort{n}$ we can compute the inner product $\big\langle e_{\pi,i_1,j_1}, \pi(S) e_{\pi,i_2,j_2} \big\rangle$ explicitly as a polynomial in the entries of $S$ whose coefficients are rational functions in the dimension $n$.
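Choosing $s(J)$ amounts to extending prescribed first columns to a full orthogonal matrix. A minimal floating-point sketch of this extension step (ours; the paper works in exact arithmetic, and the helper name is hypothetical) for the case $t_i = 1$, $R_i = \{e_1\}$, where the prescribed first column is simply the unit vector $J$:

```python
def extend_to_orthogonal(u):
    """Extend the unit vector u, taken as first column, to a full orthogonal
    matrix by Gram-Schmidt against the standard basis (a sketch of s(J))."""
    n = len(u)
    cols = [u[:]]
    for i in range(n):
        c = [1.0 if j == i else 0.0 for j in range(n)]
        for b in cols:
            d = sum(x * y for x, y in zip(b, c))
            c = [x - d * y for x, y in zip(c, b)]
        nrm = sum(x * x for x in c) ** 0.5
        if nrm > 1e-9:                      # skip vectors already in the span
            cols.append([x / nrm for x in c])
        if len(cols) == n:
            break
    # assemble the matrix whose j-th column is cols[j]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

u = [0.6, 0.8, 0.0]          # a unit vector J, with representative R = {e_1}
Q = extend_to_orthogonal(u)  # Q e_1 = J and Q is orthogonal
```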
This shows how we can express the kernel $K$ from \eqref{eq:kernelfourier} explicitly in terms of the matrices $F^\pi$ defining the kernel. By fixing $d$ and considering representations $\pi$ with $|\lambda| \leq d$ we can now generate the following semidefinite program, where the entries of the constraint matrices are explicitly computed rational functions in the dimension $n$ over the ground field of the algebraic numbers. \begin{mini} {}{K(\emptyset, \emptyset)}{}{} \label{pr:lassdp} \addConstraint{F^{\pi} \succeq 0,}{}{\pi = \pi_\lambda, \lambda = (\lambda_1,\ldots,\lambda_t), |\lambda| \leq d} \addConstraint{A_t K(R) \leq -1_{\mathcal{I}_{=1}}(R),\qquad}{}{R \in \mathcal R_{2t}.} \end{mini} In the program, $\mathcal R_{2t}$ is a set containing a representative of each orbit in $\mathcal I_{2t} \setminus \{\emptyset\}$. To bound $N_\alpha(n)$ we consider the spherical finite distance problem with $D = \{\pm\alpha\}$. In this case we can alternatively obtain upper bounds by considering the graph on the projective space $\mathbb{RP}^{n-1}$ with $[x] \sim [y]$ if $x \cdot y = \alpha$ or $x \cdot y = -\alpha$. This leads to a reduction in the number of variables and constraints. The kernels are defined in the same way as in \eqref{eq:kernelfourier}, but there are some minor differences. Firstly, the invariant space $V^{H_i}$ is unchanged if $|\lambda|$ is even, but vanishes if $|\lambda|$ is odd. This follows from the fact that $-I$ stabilises every element in the projective space and the formula \[ \pi(-I) \phi(v)(x)=\rho(- \omega x \epsilon)v = (-1)^{|\lambda|} \rho(\omega x \epsilon)v = (-1)^{|\lambda|} \phi(v)(x) \, . \] Secondly, there are fewer orbits, since now we consider two Gram matrices as representatives of the same orbit not only if they are equal up to simultaneous permutations of rows and columns, but also if they are equal up to simultaneous multiplication by $-1$ of a subset of rows and columns.
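The coarser orbit equivalence in the projective setting can be tested by brute force for small Gram matrices: two representatives lie in the same orbit precisely when one arises from the other by a simultaneous permutation combined with sign flips of rows and columns. A small Python illustration (ours; only feasible for very small matrices):

```python
from itertools import permutations, product
from fractions import Fraction as F

def switching_class(G):
    """All matrices obtained from G by simultaneous row/column permutations
    and simultaneous sign flips of a subset of rows and columns."""
    n = len(G)
    out = set()
    for p in permutations(range(n)):
        for signs in product([1, -1], repeat=n):
            H = tuple(tuple(signs[i] * signs[j] * G[p[i]][p[j]]
                            for j in range(n)) for i in range(n))
            out.add(H)
    return out

a = F(1, 5)
G1 = [[1, a, a], [a, 1, -a], [a, -a, 1]]
G2 = [[1, -a, a], [-a, 1, a], [a, a, 1]]
# G2 arises from G1 by flipping the sign of the second row and column
same_orbit = tuple(tuple(row) for row in G2) in switching_class(G1)
```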
Both of these changes lead to a smaller semidefinite program. In contrast to the nonprojective version, the resulting semidefinite programs exist over the rational numbers for $t=2$ and the odd integers $a$ we tried. We have observed empirically that for $t = 2$ the projective and nonprojective bounds are the same, but that for $t=3$ and $d > 4$ there exist cases where they are different. We wrote the software to set up the semidefinite programs in Julia~\cite{bezanson17} using the Nemo~\cite{fieker17} computer algebra system, where we use Calcium \cite{MR4398788} for exact arithmetic with algebraic numbers. For solving the semidefinite programs we use the solver SDPA-GMP~\cite{nakata10}. Setting up the semidefinite programs, especially for the third level of the hierarchy, is computationally demanding, and we took great care to make the code efficient. We pregenerate the part of the zonal matrices not depending on the dimension $n$, which is fast for $t=2, 3$ and $d\leq 4$, but takes about a week for the $t=3$ and $d=5$ case on a single core of a modern computer. After that the whole process of setting up and solving the semidefinite programs takes a few seconds for $\mathrm{las}_2$ using degree $4$ polynomials and a few hours for $\mathrm{las}_3$ using degree $5$ polynomials. For the case $t=3$ and $d \ge 4$ the algebraic numbers become complicated which means setting up the semidefinite programs in such a way that the entries of the constraint matrices are rational functions in $n$ is no longer feasible. Hence for these cases we only generate the semidefinite programs for fixed dimensions. The source code and data files for the proofs can be found in the ancillary files of the arXiv version of this paper, including documentation on how to use the code. 
\section{New bounds in fixed dimensions} \label{sec:applications} \subsection{Improved bounds on $N_\alpha(n)$ } \label{sec:equiangular-lines} There has been extensive research into the determination of $N_\alpha(n)$. As mentioned in the introduction, usually only inner products $\alpha = 1/a$ with $a$ an odd natural number are considered, since these are the only cases for which there may exist equiangular line configurations of size larger than $2n$ \cite{lemmens73}, and the determination of $N_\alpha(n)$ for these values of $\alpha$ solves the equiangular lines problem without fixed angle in dimension $n$ \cite{larman77}. In \cite{lemmens73, neuimaier89, cao21} the equations \[ N_{1/3}(n) = \begin{cases} 28 & \text{for } 7 \leq n \leq 14\text{, and}\\ 2(n-1) & \text{for } n \geq 15\end{cases} \] and \[ N_{1/5}(n) = \begin{cases} 276 & \text{for } 23 \leq n \leq 185\text{, and}\\ \lfloor \frac{3}{2}(n-1)\rfloor & \text{for } n \geq 185,\end{cases} \] are shown, but for $a \geq 7$ less is known (see, e.g., \cite[Appendix A]{kao22+}). As mentioned in the introduction, the correct values are known for very large dimensions and there exist general upper and lower bounds. \begin{figure} \input{pgfplot-a5.tikz} \caption{Bounds for $N_{1/5}(n)$.} \label{fig:a5} \end{figure} \begin{figure} \begin{center} \input{pgfplot-a7.tikz} \end{center} \caption{Bounds for $N_{1/7}(n)$. The dashed line extends \eqref{eq:stable} until dimension \eqref{eq:dimlinearcon}, from which it continues with the construction of ${\lfloor (n-1)(a+1)/(a-1)\rfloor}$ lines.} \label{fig:a7} \end{figure} A fundamental result in the linear/semidefinite programming approach is the Delsarte, Goethals, and Seidel~\cite{delsarte77} linear programming bound. Since this bound takes into account constraints between pairs of points, it is called a $2$-point bound which we denote by $\Delta_2$. 
In \cite{bachoc08, barg14} this is generalized to a $3$-point bound $\Delta_3$ and computed for the equiangular lines problem, and in \cite{musin14,deLaat2021} this is generalized to a $k$-point bound $\Delta_k$ and computed for $k=4,5,6$. As mentioned in the introduction we have $\Delta_2 = \mathrm{las}_1$. The bounds $\mathrm{las}_2$ and $\mathrm{las}_3$ considered in this paper thus provide an alternative generalization of the linear programming bound. As shown in Figure~\ref{fig:a7}, for $a = 7$ and for many dimensions the bounds $\mathrm{las}_2$ and $\mathrm{las}_3$ are much stronger than any of the previous bounds coming from semidefinite programming or elsewhere. Similar results hold for larger values of $a$, but we do not include the plots because they look qualitatively similar. One common feature of $\Delta_k$ for $k = 3,4,5,6$ is that starting at $n = a^2 - 2$ they stabilize and produce the constant bound \begin{equation}\label{eq:stable} \frac{(a^2-2)(a^2-1)}{2} \end{equation} on $N_{1/a}(n)$ until a certain dimension $D_k(a)$. For $a = 5$ there is an exceptional configuration related to the Leech lattice of $(a^2-2)(a^2-1)/2 = 276$ points in dimension $a^2-2$ \cite{conway69,lemmens73}. This configuration stays optimal until a construction of ${\lfloor (n-1)(a+1)/(a-1)\rfloor}$ equiangular lines in dimension $n$ (see \cite{bukh16}) matches \eqref{eq:stable}, which happens in dimension \begin{equation} \label{eq:dimlinearcon} \frac{(a^2-2)(a-1)^2}{2} + 1. \end{equation} The general situation is different, however, since it is known that a configuration of $(a^2-2)(a^2-1)/2$ lines in dimension $a^2-2$ cannot exist for a number of values of $a$ starting at $a=7$ \cite{bannai04, nebe12}. In fact, in \cite{kao22+} it is shown that for those values of $a$ there is no configuration of $(a^2-2)(a^2-1)/2$ equiangular lines in dimensions $a^2-2 \leq n \leq D_4(a)$. 
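The claim about dimension \eqref{eq:dimlinearcon} is easy to check mechanically: at that dimension the construction of $\lfloor (n-1)(a+1)/(a-1) \rfloor$ lines meets \eqref{eq:stable} exactly, with no rounding. A short Python verification (ours) over a range of odd $a$:

```python
def stable_bound(a):
    # the constant value (a^2 - 2)(a^2 - 1)/2 from eq. (stable)
    return (a * a - 2) * (a * a - 1) // 2

def construction(a, n):
    # floor((n - 1)(a + 1)/(a - 1)) equiangular lines in dimension n
    return ((n - 1) * (a + 1)) // (a - 1)

def crossing_dim(a):
    # the dimension (a^2 - 2)(a - 1)^2/2 + 1 from eq. (dimlinearcon)
    return (a * a - 2) * (a - 1) ** 2 // 2 + 1

# at the crossing dimension the construction matches the stable bound exactly
checks = all(construction(a, crossing_dim(a)) == stable_bound(a)
             for a in range(3, 40, 2))
```

For $a = 5$ this recovers the dimension $185$ at which the $276$-line Leech configuration is overtaken.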
It is therefore of interest to find better semidefinite programming bounds on $N_{1/a}(a^2 - 2)$ and to increase the range of dimensions for which we know that \eqref{eq:stable} gives an upper bound on $N_{1/a}(n)$. As can be seen in Table~\ref{table:new-ranges}, the bounds $\mathrm{las}_2$ and $\mathrm{las}_3$ are equal to \eqref{eq:stable} for a significantly larger range of dimensions. However, they do not give any improvements over the linear programming bound for dimension $a^2 - 2$ or any lower dimension. This is interesting in itself, since we know the linear programming bound is not always correct in this range, while the Lasserre hierarchy has to give the correct value for sufficiently large $t$. In \cite{yu17, kao22+} explicit quadratic expressions are given for $D_3(a)$ and $D_4(a)$. Based on the data in Table~\ref{table:new-ranges} we conjecture that for $\mathrm{las}_2$ and $\mathrm{las}_3$ the corresponding expressions are cubic instead of quadratic. Recall that \eqref{eq:dimlinearcon} is quartic in $a$.
\begin{table} \begin{tabular}{@{}ccccccccc@{}} \toprule $a$ & $\Delta_2=\mathrm{las}_1$ & $\Delta_3$ & $\Delta_4$ & $\Delta_5$ & $\Delta_6$ & $\mathrm{las}_2$ & $\mathrm{las}_3$ & \text{intersection}\\ \midrule $5$ & $23$ & $60$ & $65$ & $69$ & $70$ & $82$ $(80)$ & $90$ $(89)$ & $185$\\ $7$ & $47$ & $131$ & $145$ & $158$ & $169$ & $243$ $(239)$ & $272$ $(271)$ & $847$\\ $9$ & $79$ & $227$ & $251$ & $273$ & $300$ & $535$ $(530)$ & $610$ $(610)$ & $2529$ \\ $11$ & $119$ & $347$ & $381$ & $413$ & $448$ & $1000$ $(993)$ & $1152$ $(1152)$ & $5951$ \\ $13$ & $167$ && & & & $1676$ $(1668)$ & $1946$ $(1946)$ & $12025$ \\ $15$ & $223$ && & & & $2604$ $(2595)$ & $3040$ $(3040)$ & $21855$ \\ $17$ & $287$ && & & & $3823$ $(3813)$ & $4483$ $(4483)$ & $36737$ \\ $19$ & $359$ && & & & $5374$ $(5362)$ & $6321$ $(6321)$ & $58159$ \\ $21$ & $439$ && & & & $7294$ $(7281)$ & $8603$ $(8603)$ & $87801$ \\ $23$ & $527$ && & & & $9626$ $(9611)$ & $11377$ $(11377)$ & $127535$ \\ $25$ & $623$ && & & & $12407$ $(12391)$ & $14692$ $(14692)$ & $179425$ \\ \bottomrule \end{tabular} \bigskip \caption{The largest dimension~$n$ for which the respective bounds can be used to show \eqref{eq:stable} holds. In parentheses we list the largest dimension for which the bound is exactly equal to $(a^2-2)(a^2-1)/2$. The bound $\Delta_3$ is computed in~\cite{barg14, king16} and the bounds $\Delta_4$, $\Delta_5$, $\Delta_6$ are computed in~\cite{deLaat2021}. The column labeled `intersection' shows the dimension from \eqref{eq:dimlinearcon} for reference.} \label{table:new-ranges} \end{table} \subsection{Bounds for more general distance sets} \label{sec:quasi-unbiased} In this section we discuss some applications of $\mathrm{las}_2$ to problems where the allowed distance set $D$ is not of the form $\{\pm \alpha\}$. For some of these computations the cardinality of $D$ is $3$ as opposed to $2$. 
Here nothing changes in the formulation of the bounds, but the number of orbits to consider increases greatly and $\mathrm{las}_3$ becomes too hard to compute. For instance, with inner products $\{1/7, -1/7\}$ there are $156$ orbits with sets of size $6$, while with inner products $\{1/7, -1/7, 0\}$ there are $25506$ such orbits. For completeness we also mention applications we tried where we did not get new results. The first application we consider is to a problem with sets of matrices having orthogonal rows. Following~\cite{araya17, kao22+}, a Hadamard matrix of order $n$ is a $(+1,-1)$-valued $n \times n$ matrix $H$ such that $HH^{\sf T} = n I$, and a weighing matrix of order $n$ and weight $k$ is a $(+1,-1,0)$-valued $n \times n$ matrix such that $WW^{\sf T}=k I$. Two Hadamard matrices $H_1$ and $H_2$ of order $n$ are said to be quasi-unbiased for parameters $(l,a)$ if $a^{-1/2}H_1 H_2^{\sf T}$ is a weighing matrix of weight $l$. Note that necessarily $l = n^2/a$. Hadamard matrices $H_1,\ldots,H_f$ of order $n$ are said to be quasi-unbiased for parameters $(l,a)$ if they are pairwise quasi-unbiased for those parameters. Table 1 of Araya, Harada, and Suda~\cite{araya17} has a list of the possible parameters for quasi-unbiased Hadamard matrices of order up to $48$ together with bounds for the maximum size of these sets. Kao, Suda, and Yu~\cite{kao22+} applied the semidefinite programming bound $\Delta_3$ to derive bounds based on an observation that normalizing the rows of a set of $f$ quasi-unbiased Hadamard matrices for parameters $(l, a)$ gives a spherical $3$-distance set in $S^{n-1}$ with $|X| = nf$ and inner products $\{\pm l^{-1/2}, 0\}$. We apply $\mathrm{las}_2$ using polynomial representations of degree $|\lambda| \leq 8$ to give the bounds as listed in Table~\ref{table:QUHM}. 
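The definitions can be illustrated on a toy pair of order-$2$ Hadamard matrices (our example, below the orders appearing in Table~\ref{table:QUHM}): here $H_1 H_2^{\sf T} = 2W$ with $W$ a weighing matrix of weight $l = 1$, so $H_1$ and $H_2$ are quasi-unbiased for parameters $(l, a) = (1, 4)$, consistent with the necessary relation $l = n^2/a$.

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

n, a = 2, 4
H1 = [[1, 1], [1, -1]]   # Hadamard: H1 H1^T = n I
H2 = [[1, 1], [-1, 1]]   # Hadamard: H2 H2^T = n I
M = mul(H1, transpose(H2))                 # equals sqrt(a) * W, sqrt(a) = 2
W = [[x // 2 for x in row] for row in M]   # the (+1,-1,0)-valued weighing matrix
l = sum(x * x for row in W for x in row) // n   # its weight, so W W^T = l I
```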
\begin{table} \begin{tabular}{@{}ccccccc@{}} \toprule $n$ & $l$ & $a$ & Lower bound from~\cite{araya17} & Upper bound from~\cite{araya17} & $\Delta_3$~\cite{kao22+} & $\mathrm{las}_2$\\ \midrule $16$ & $4$ & $64$ & $8$ & $35$ & $15$ & $15$ \\ $24$ & $4$ & $144$ & $2$ & $85$ & $25$ & $23$ \\ $24$ & $9$ & $64$ & $16$ & $85$ & $95$ & $95$ \\ $32$ & $4$ & $256$ & $8$ & $155$ & $47$ & $31$ \\ $36$ & $9$ & $144$ & -& $199$ & $79$ & $ 67$ \\ $40$ & $4$ & $400$ & -& $247$ & $101$ & $39$ \\ $40$ & $25$& $64$ & -& $28$ & $30$ & $30$ \\ $48$ & $4$ & $576$ & $2$ & $361$ & $276$ & $47$ \\ $48$ & $9$ & $256$ & $16$ & $361$ & $104$ & $96$ \\ $48$ & $16$ & $144$ & -& $361$ & $316$ & $316$ \\ $48$ & $36$ & $64$ & $2$ & $28$ & $30$ & $30$ \\ \bottomrule \end{tabular} \bigskip \caption{Improved bounds on the maximum number of quasi-unbiased Hadamard matrices of order $n$ with parameters $(l,a)$.} \label{table:QUHM} \end{table} We applied our bound to a problem involving ``Q-antipodal Q-polynomial schemes with 3 classes'' as described by Martin, Muzychuk, Williford~\cite{martin07}, but we observed that $\mathrm{las}_2$ produces the same results as listed in the table in~\cite{kao22+}. Another application of bounds for $3$-distance sets is to the maximum size of a $3$-distance set in $S^{n-1}$ for any possible choice of angles \cite{musin11,szollosi20}. Here a theorem by Nozaki~\cite{nozaki11} is used so that for each inner product $d_3$ only finitely many other inner products $d_1$ and $d_2$ need to be considered. By using sums-of-squares techniques the bounds $\Delta_2 = \mathrm{las}_1$ and $\Delta_3$ can be applied \cite{musin11,szollosi20,lin20+}. We applied $\mathrm{las}_2$ to this problem, and did get improvements for some parameters, but overall could not get new results because like $\Delta_2$ and $\Delta_3$, $\mathrm{las}_2$ becomes unbounded as $d_3 \to 1$. 
Finally, we tried to disprove the existence of certain strongly regular graphs; see Cameron~\cite[Chapter 8]{beineke04} for an introduction to these graphs and also the more complete monograph by Brouwer and Van Maldeghem~\cite{brouwer22}. The existence of a strongly regular graph with a given set of parameters implies the existence of a spherical $2$-distance set in a certain dimension and with certain inner products; see Theorem~5.1 in~\cite[Chapter 8]{beineke04}. We applied $\mathrm{las}_3$ with $|\lambda| \leq 5$ to all sets of parameters listed as open in~\cite[Chapter 12]{brouwer22} with at most $200$ vertices, without getting a new result. \section{Asymptotic analysis of the bounds} \label{sec:asymptotics} In this section we explain how we obtain a computer-generated and verified proof of Conjecture~\ref{conj:las2} for $a = 3,5,7,9,11$. First we observe empirically that the degree of the required representations does not grow. In fact, for large dimensions, we only use the representations with $\lambda = (0,0), (3,1), (4,0)$. Though it is not necessary for our proof, we have the following conjecture: \begin{conjecture}\label{con:lambdas} The optimal value of $\mathrm{las}_2$ can be obtained by using only the representations with $\lambda = (0,0), (2,0), (3,1), (4,0)$. Moreover, for dimensions beyond the stable range the representation with $\lambda = (2,0)$ is not needed. \end{conjecture} As numerical evidence for Conjecture~\ref{conj:las2} we observed that the function $\mathrm{las}_2(n)$ can be expanded in terms of $n, 1, n^{-1}, n^{-2}, \dots$, and through interpolation we find the first few coefficients in this expansion for many values of $\alpha$. We found that the first expansion coefficient satisfies the formula $(1 + \alpha)/(2\alpha)$.
For $\alpha = 1/5$, the expansion of $\mathrm{las}_{2}(n)$ seems to be particularly well-behaved having rational coefficients: \begin{equation}\label{eqn:las2expansion} 3n + 6 + \frac{120}{n} + \frac{5530}{n^2} + \frac{1449485}{3n^3} + \frac{2961283225}{72n^4} + O\left(\frac{1}{n^5}\right). \end{equation} To prove Conjecture~\ref{conj:las2} for a given value of $\alpha$ we construct, for each sufficiently large $n$, a feasible solution to the semidefinite program $\mathrm{las}_2(n)$ such that the corresponding sequence of objective values is linear in $n$ with slope $(1 + \alpha)/(2\alpha)$. For this we consider the perturbed hierarchy $\mathrm{las}_{2,4}(n)$, where we subtract $1/n^4$ from the right hand side of each inequality constraint in \eqref{pr:lassdp} and force each eigenvalue of each block matrix to be at least $1/n^4$. Let $\{F^{\lambda}(n)\}_\lambda$ be the optimal solution of $\mathrm{las}_{2,4}(n)$ lying on the central path of the interior-point method. We now make the ansatz that there exist matrices $A^{\lambda,k}$, whose entries are algebraic numbers of low degree and reasonable bitsize, such that \begin{equation}\label{eq:expansion} F^{\lambda}(n)_{(i_1,j_1),(i_2, j_2)} = \sum_{k=0}^\infty A_{(i_1,j_1), (i_2,j_2)}^{\lambda,k} n^{1+\lambda_1+2\lambda_2 - t_{i_1} - t_{i_2}-k}, \end{equation} where as before $t_i$ is the cardinality of the orbit representative $R_i$. Then we use the interior-point solver to numerically compute a near optimal solution approximately on the central path of $\mathrm{las}_{2,4}(n)$ for dimensions $N, N+1, \dots, N+L$, and we use this to compute approximations of the coefficient matrices $A^{\lambda,0},\ldots,A^{\lambda,l-1}$ via interpolation. We then use the LLL algorithm to find the entries of the coefficient matrices exactly as algebraic numbers, and denote by $\mathrm{sol}(n)$ the solution whose matrices are given by the truncation of \eqref{eq:expansion} using these $l$ rounded coefficient matrices. 
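The interpolation step can be replicated in exact rational arithmetic. The toy Python sketch below (ours; the paper interpolates matrix-valued expansions from high-precision solver output) recovers the coefficients of a truncated model expansion $c_0 n + c_1 + c_2/n$ from three exact samples by Gaussian elimination:

```python
from fractions import Fraction as F

def fit_expansion(samples, num_terms):
    """Solve for c in f(n) = c0*n + c1 + c2/n + ... from exact samples
    (n_k, f(n_k)) by Gaussian elimination over the rationals."""
    rows = [[F(n) ** (1 - j) for j in range(num_terms)] + [F(v)]
            for n, v in samples]
    m = len(rows)
    for i in range(num_terms):
        piv = next(r for r in range(i, m) if rows[r][i] != 0)
        rows[i], rows[piv] = rows[piv], rows[i]
        rows[i] = [x / rows[i][i] for x in rows[i]]
        for r in range(m):
            if r != i and rows[r][i] != 0:
                rows[r] = [x - rows[r][i] * y for x, y in zip(rows[r], rows[i])]
    return [rows[i][-1] for i in range(num_terms)]

# truncated model expansion for alpha = 1/5: 3n + 6 + 120/n
f = lambda n: 3 * F(n) + 6 + F(120, n)
coeffs = fit_expansion([(n, f(n)) for n in range(100, 103)], 3)
```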
We found good values for the parameters $l$, $L$, and $N$ through experimentation. If the dimension $N_\alpha$, beyond which the solutions are feasible, is to be made small, the number of terms $l$ in the truncation should be neither too small nor too large. Perhaps this is due to Runge's phenomenon. After that, $N$ and $L$ should be chosen such that we find $A^{\lambda,0},\ldots,A^{\lambda,l-1}$ in sufficiently high precision so that we can round them correctly. For the results presented in this paper we use $N= 10^{100}$ and the parameters $l$ and $L$ listed in Table~\ref{table:asymptoticparameters}. \begin{table} \begin{tabular}{@{}ccccccc@{}} \toprule $\alpha$ & $f_\alpha(n)$ & $N_\alpha$ & $l$ & $L$ & $N_{\alpha,4}$ & $N_{\alpha,5}$\\ \midrule $1/3$ & $2n+4$ & $500$ & $9$ & $12$ & $17$ & $12$ \\ $1/5$ & $3n+30$ & $2235$ & $10$ & $15$ & $253$ & $87$\\ $1/7$ & $4n+116$ & $13739$ & $9$ & $11$ & $4638$ & $261$\\ $1/9$ & $5n+316$ & $166018$ & $9$ & $12$ \\ $1/11$ & $6n+699$ & $751307$ & $9$ & $12$ \\ \bottomrule \end{tabular} \bigskip \caption{Parameters used to obtain the interpolations.} \label{table:asymptoticparameters} \end{table} Next, we verify that $\mathrm{sol}(n)$ is a solution for $\mathrm{las}_2(n)$. Since the entries of both $\mathrm{las}_2(n)$ and $\mathrm{sol}(n)$ are exact rational functions in $n$, we can compute the slack in the inequality constraints of the solution $\mathrm{sol}(n)$ to $\mathrm{las}_2(n)$ as exact rational functions in $n$ (where a positive slack means the inequality constraint is satisfied strictly). We then fix an integer $n_\alpha$ and evaluate these rational functions at $n_\alpha+n$. We verify that the coefficients of the numerator and denominator polynomials are all positive, which proves the slacks are positive for $n \geq n_\alpha$, and hence the inequality constraints are satisfied for all $n \geq n_\alpha$.
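The coefficient-positivity certificate used in this verification is simple to replicate. The following Python sketch (ours, in exact rationals; function names are our own) shifts a polynomial to $p(n_\alpha + x)$ and checks that all coefficients are nonnegative with a positive constant term, which certifies $p(n) > 0$ for all $n \geq n_\alpha$:

```python
from fractions import Fraction as F

def shift_poly(coeffs, n0):
    """Coefficients of p(n0 + x) (lowest degree first), given the
    coefficients of p, via a Horner-style expansion."""
    out = [F(0)]
    for c in reversed(coeffs):
        # multiply the current polynomial by (n0 + x), then add c
        new = [F(0)] * (len(out) + 1)
        for i, v in enumerate(out):
            new[i] += n0 * v
            new[i + 1] += v
        new[0] += c
        out = new
    return out[:len(coeffs)]

def positive_for_n_at_least(coeffs, n0):
    """True if all coefficients of p(n0 + x) are nonnegative with positive
    constant term, which certifies p(n) > 0 for all real n >= n0."""
    s = shift_poly([F(c) for c in coeffs], F(n0))
    return s[0] > 0 and all(c >= 0 for c in s)

# p(n) = n^2 - 10n + 5 (coefficients lowest-first) is positive for all n >= 10
cert = positive_for_n_at_least([5, -10, 1], 10)
```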
Then we compute the determinants of the leading principal submatrices, evaluate these rational functions in $n_\alpha+n$, and check that all coefficients of the numerator and denominator polynomials are positive, which proves the solution matrices are positive semidefinite for all $n \geq n_\alpha$. Finally we check that the objective function is linear in $n$ with slope $(\alpha +1)/(2\alpha)$, which gives a computer verified proof of Conjecture~\ref{conj:las2} for this value of $\alpha$. Note that although floating point computations are used to obtain the proofs, the verification procedure is implemented entirely in exact arithmetic. To make the value $n_\alpha$, beyond which we can prove our bound holds, as small as possible we solve finitely many semidefinite programs in fixed dimensions. We only do this for the values $a = 3,5,7$, but in principle it could be done for $a = 9,11$ too. First, we solve $\mathrm{las}_{2,4}(n)$ or $\mathrm{las}_{2,5}(n)$. Then, we approximate the floating point solution by a rational solution and we check in exact arithmetic whether the rounded solution is feasible for $\mathrm{las}_2(n)$ and has objective below $f_\alpha(n)$. The reason we use two different perturbations is that $\mathrm{las}_{2,4}(n)$ does not give good enough bounds in low dimensions and it is too difficult to find a feasible solution for $\mathrm{las}_{2,5}(n)$ in high dimensions. We use $\mathrm{las}_{2,4}(n)$ for $N_{\alpha,4} \leq n < N_\alpha$ and $\mathrm{las}_{2,5}(n)$ for $N_{\alpha,5} \leq n \leq N_{\alpha, 4}$. For $\alpha = 1/7$, for example, the interpolation procedure shows that \[ \mathrm{las}_2(n) \leq 4n + a_1 + a_2 n^{-1} + \dots + a_8 n^{-7} \quad \text{for all} \quad n \geq 13739, \] for certain explicitly given $a_1,\ldots,a_8 \in \mathbb Q[\sqrt{2}]$. From this we can then derive that $N_{1/7}(n) \leq 4n+116$ for all $n \geq 13739$. 
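The determinant check can likewise be replicated: by Sylvester's criterion, positivity of all leading principal minors certifies positive definiteness of a symmetric matrix. A small exact-arithmetic Python sketch (ours):

```python
from fractions import Fraction as F

def leading_principal_minors(M):
    """Exact determinants of the leading principal submatrices,
    via Gaussian elimination over the rationals."""
    n = len(M)
    minors = []
    for k in range(1, n + 1):
        A = [[F(M[i][j]) for j in range(k)] for i in range(k)]
        det = F(1)
        for i in range(k):
            piv = next((r for r in range(i, k) if A[r][i] != 0), None)
            if piv is None:
                det = F(0)
                break
            if piv != i:
                A[i], A[piv] = A[piv], A[i]
                det = -det        # a row swap flips the sign of the determinant
            det *= A[i][i]
            for r in range(i + 1, k):
                f = A[r][i] / A[i][i]
                A[r] = [x - f * y for x, y in zip(A[r], A[i])]
        minors.append(det)
    return minors

# Sylvester's criterion: positive leading principal minors certify
# positive definiteness of a symmetric matrix.
M = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
minors = leading_principal_minors(M)
pos_def = all(m > 0 for m in minors)
```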
Next we solve finitely many semidefinite programs to decrease the dimension $N_\alpha = 13739$ to $n_\alpha = 261$. In Table~\ref{table:asymptotic} we list the bounds obtained with this approach. Note that the same approach works, in principle, for other values of $\alpha$ not listed in the table, but we did not perform these computations. For $t=3$ we seem to get asymptotically linear bounds with a better slope than with $t=2$. However, since computing the third level of the hierarchy for $d > 5$ is currently too computationally demanding we do not have the equivalent of Conjecture~\ref{con:lambdas}. The bounds might very well improve as we increase $d$ beyond $5$, and we do not know the slope of the asymptotically linear behaviour as the degree $d$ goes to infinity. In Table~\ref{table:las3} we give the numerically computed slopes for $d=4$ and $d=5$. \begin{table} \begin{center}{ \begin{tabular}{@{}ccccccccccc@{}} \toprule $a$ & $\tfrac{a+1}{2}$ & $d=4$ & $d=5$ & $\tfrac{a+1}{a-1}$ & & $a$ & $\tfrac{a+1}{2}$ & $d=4$ & $d=5$ & $\tfrac{a+1}{a-1}$ \\ \cmidrule{1-5} \cmidrule{7-11} $5$ & $3$ & $2.000$ & $2.000$ & $1.500$ & & $19$ & $10$ & $6.948$ & $4.156$ & $1.111$\\ $7$ & $4$ & $2.003$ & $2.003$ & $1.333$ & & $21$ & $11$ & $7.975$ & $4.773$ & $1.100$\\ $9$ & $5$ & $2.428$ & $2.065$ & $1.250$ & & $23$ & $12$ & $9.018$ & $5.428$ & $1.091$\\ $11$ & $6$ & $3.171$ & $2.268$ & $1.200$ & & $25$ & $13$ & $10.071$ & $6.117$ & $1.083$\\ $13$ & $7$ & $4.038$ & $2.617$ & $1.167$ & & $27$ & $14$ & $11.130$ & $6.836$ & $1.077$\\ $15$ & $8$ & $4.968$ & $3.066$ & $1.143$ & & $29$ & $15$ & $12.193$ & $7.583$ & $1.071$\\ $17$ & $9$ & $5.943$ & $3.585$ & $1.125$ & & $31$ & $16$ & $13.258$ & $8.354$ & $1.067$\\ \bottomrule \end{tabular}} \end{center} \bigskip \caption{Approximate slopes of $\mathrm{las}_3$ for degrees $d=4$ and $d=5$ and inner products $\alpha=1/a$ together with the slope $(a+1)/2$ given by $\mathrm{las}_2$ and the correct asymptotic slope $(a+1)/(a-1)$ proven 
by~\cite{MR4334975}.} \label{table:las3} \end{table} \section{The limit semidefinite program} \label{sec:moreasymptotics} In this section we give a formulation for the limit semidefinite program. To do so we prove a fact about the asymptotic behaviour of the Gross-Kunze construction. Let $\langle u, v\rangle_{\U{t}}$ be the unique (up to positive scalars) inner product on $W$ such that the restriction of $\rho$ to the compact group $\U{t}$ is a unitary representation. The following conjecture describes the asymptotic behaviour of the inner product defined in Section~\ref{sec:Gross-Kunze}. \begin{conjecture} \label{conj:zonalofid_for_large_n} For each $\lambda$ there exists a strictly positive scalar $c$ such that \[ n^{|\lambda|} \langle \phi(u), \phi(v) \rangle = c \langle u, v \rangle_{\U{t}} + O(n^{-1}) \] for all $u, v \in W$. \end{conjecture} We prove this conjecture for $t = 2,3$ and $|\lambda| \leq 4$. First, we set up a semidefinite program for which the matrix $M$ defined by $M_{i,j} = \langle e_{T_i}, e_{T_j} \rangle_{\U{t}}$ is the unique (up to positive scalars) solution. Consider the differential \[ d\rho \colon \mathfrak{gl}(t) \to \mathfrak{gl}(W) \] of \eqref{eq:rhorep}, which is a representation of Lie algebras. Here $\mathfrak{gl}(t)$ and $\mathfrak{gl}(W)$ are the Lie algebras of all complex linear endomorphisms of $\mathbb{C}^t$ and $W$, respectively. We now give the matrix coefficients of $d\rho$. Because $d\rho$ is linear, it suffices to compute the matrix coefficients in the basis $E_{r,s} = e_r e_s^{\sf T}$. We have $E_{r,s} e_k = \delta_{s k} e_r$ and hence \[ d\rho(E_{r,s}) e_T = \sum_{S \in D_{r,s}(T)} e_S, \] where the sum ranges over the set $D_{r,s}(T)$ of all tableaux $S$ which may be obtained from $T$ by changing exactly one $s$ to an $r$. The matrix coefficients of the representation are then given by \[ \langle e_{T_i}, d \rho (E_{r, s}) e_{T_j} \rangle = \sum_{S\in D_{r,s}(T_j)} \langle e_{T_i}, e_S \rangle. 
\] By uniqueness of the inner product, $M$ is the unique (up to positive scalars) positive definite matrix satisfying the equation \[ \rho(x)^* M \rho(x) = M \] for all $x \in U(t)$. By differentiating, this condition implies \begin{equation} \label{eq:Mliealgcond} d\rho(X)^* M = - M d\rho(X) \end{equation} for all $X \in \dU{t}$. Using the exponential map and connectedness of the unitary group, this condition is sufficient too. By linearity it suffices to enforce this condition on a basis of the skew-hermitian complex matrices $\dU{t}$. The explicit formula for the matrix coefficients of $d \rho$ and an explicit choice of basis of $\dU{t}$ give a practical way of checking whether a given matrix defines the inner product $\langle \cdot, \cdot \rangle_{U(t)}$. We now consider the asymptotic expansion of $n^{|\lambda|} \langle \phi(u), \phi(v) \rangle$. Let $a$ be a multi-index with $|a| = 2|\lambda|$ and let $x^a = \prod_{i,j=1}^n x_{ij}^{a_{ij}}$ be the corresponding monomial on $\Ort{n}$. For fixed $a$, Theorem 4.3 in \cite{banica2010orthogonal} gives the asymptotic expansion: \[ \int_{\Ort{n}} x^a\, dx = n^{-|\lambda|}\sum_{k= 0}^\infty H_k(a)n^{-k}, \] where $H_k(a)$ is a certain combinatorial quantity related to the Brauer algebra. This gives an expansion of the form \[ \langle \phi(e_{T_i}), \phi(e_{T_j}) \rangle = n^{-|\lambda|} \sum_{k=0}^\infty c^k_{ij} n^{-k}, \] for certain integers $c^k_{ij}$. We wrote code to calculate the quantity $H_0 (a)$ and hence the leading coefficient matrix $(c^0_{ij})$. For $t = 2,3$ and $|\lambda| \leq 4$, we verified in exact arithmetic that this matrix $(c^0_{ij})$ is a positive definite solution to the system (\ref{eq:Mliealgcond}). As a sidenote, we suspect that this asymptotic expansion is related to the expansion (\ref{eqn:las2expansion}) and ultimately to our asymptotic solutions in Section~\ref{sec:asymptotics}. We now give a formulation for the limit semidefinite program. 
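Before turning to that formulation, we note that the combinatorial rule for $d\rho(E_{r,s})$ above is straightforward to compute. The sketch below uses our own simplified conventions: a tableau is represented as a flat tuple of entries (shape bookkeeping omitted), and image tableaux falling outside the chosen basis are simply discarded.

```python
def D(T, r, s):
    # All tableaux obtained from T by changing exactly one entry s to r.
    return [T[:i] + (r,) + T[i + 1:] for i, e in enumerate(T) if e == s]

def drho_matrix(basis, r, s):
    # Matrix of d rho(E_{r,s}) in the ordered tableau basis: entry (i, j)
    # counts how many elements of D_{r,s}(T_j) equal T_i.
    index = {T: i for i, T in enumerate(basis)}
    A = [[0] * len(basis) for _ in basis]
    for j, T in enumerate(basis):
        for S in D(T, r, s):
            if S in index:
                A[index[S]][j] += 1
    return A
```

Together with an explicit basis of the skew-hermitian matrices, such matrices can be used to check condition (\ref{eq:Mliealgcond}) for a candidate inner product matrix $M$.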
In our current set-up for $\textup{las}_t$, we use the Gross-Kunze construction with a representation of $\GL{t}{}$. However, we could have used the construction with $\GL{2t}{}$. In this case, we could still locate the $\Ort{n-t}$-invariants using the argument from Section \ref{sec:stabinv}. Using the Gross-Kunze construction with $2t$ is computationally more expensive, since there are more variables, but it has the following property. Let $S = \textup{diag}(s,I_{n - 2t})$ be an orthogonal matrix which is block-diagonal with the first block $s$ of size $2t \times 2t$. All matrices $S$ occurring in $\mathrm{las}_t$ can be chosen to be of this form. We then have \begin{align*} \langle \phi(e_{T_i}), \pi(S) \phi(e_{T_j}) \rangle & = \int_{\Ort{n}} \langle \rho(\omega x \epsilon ) e_{T_i}, \rho(\omega x S \epsilon )e_{T_j} \rangle \,dx \\ & = \int_{\Ort{n}} \langle \rho(\omega x \epsilon ) e_{T_i}, \rho(\omega x \epsilon s)e_{T_j} \rangle \,dx \\ & = \sum_k \left( \int_{\Ort{n}} \langle \rho(\omega x \epsilon ) e_{T_i}, \rho(\omega x \epsilon ) e_{T_k} \rangle\, dx \right) \langle e_{T_k}, \rho(s) e_{T_j} \rangle \\ & = \sum_k \langle \phi(e_{T_i}), \phi(e_{T_k}) \rangle \langle e_{T_k}, \rho(s) e_{T_j} \rangle. \end{align*} In short, if $2t$ is used in the Gross-Kunze construction, then a separation of variables occurs where all dependence on $n$ is in the $\langle \phi(e_{T_i}), \phi(e_{T_k}) \rangle$ term. Hence the limit semidefinite program as $n \to \infty$ can be written in terms of the inner product~$\langle \cdot, \cdot \rangle_{U(t)}$. \section*{Acknowledgements} We thank Christine Bachoc, Anurag Bishnoi, Henry Cohn, Giulia Montagna, Fernando Oliveira, and Frank Vallentin for helpful discussions. \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction}\label{sec1} With the advent of data collection technologies, more and more data, such as remote sensing data or environmental monitoring data, are collected in space and managed by geographical information systems. In many applications, a response of interest is observed on a set of sites in space, and it is of interest to apply a geostatistical regression model to predict the response at unsampled sites with the aid of auxiliary/explanatory variables. For example, in precision agriculture, one may wish to predict crop yield based on explanatory variables such as climatic conditions, soil types, fertilizers, cropping practices, weeds and topographic features. Not only do we aim to identify the important explanatory variables, but the precision of yield prediction also depends on how well the explanatory variables are chosen; a poor choice may result in poor performance, particularly when the number of explanatory variables is large. Clearly, model selection is essential in geostatistics. There are two different asymptotic frameworks in geostatistics. One is called the increasing domain asymptotic framework, where the observation region grows with the sample size. The other is called the fixed domain asymptotic (or infill asymptotic) framework, where the observation region is bounded and fixed, with more and more data taken more densely in the region. It is known that the two frameworks lead to possibly different asymptotic behaviors in covariance parameter estimation. However, little is known about their effects on model selection. In general, asymptotic behaviors of the estimated parameters under the increasing domain framework are more standard. For example, the maximum likelihood estimates of covariance parameters are typically consistent and asymptotically normal when fitted by a correct model [\citet{Mardia1984}]. 
In contrast, not all covariance parameters can be estimated consistently under the fixed domain asymptotic framework, even for the simple exponential covariance model in one dimension with no consideration of explanatory variables [\citet{Ying1991}; Chen, Simpson and Ying (\citeyear{Chen2000})]. The reader is referred to \citet{Stein1999} for more details regarding fixed domain asymptotics. Some discussion concerning which asymptotic framework is more appropriate can also be found in \citet{Zhang2005}. Many model selection methods have been applied in geostatistical regression, such as Akaike's information criterion [AIC, \citet{Akaike1973}], Bayesian information criterion [BIC, Schwarz (\citeyear{Schwart1978})], the generalized information criterion [GIC, \citet{Nishii1984}] and the cross validation method [\citet{Stone1974}]. Note that GIC contains a range of criteria, including both AIC and BIC, governed by a tuning parameter. Although theoretical properties of these selection methods have been thoroughly established in linear regression and time series model selection [e.g., \citet{Shao1997}, McQuarrie and Tsai (\citeyear{McQuarrie1989}), \citet{Ing2005}, \citet{Ing2007}], only limited results are available for selecting geostatistical regression models. For example, \citet{Hoeting2006} provided some heuristic arguments for AIC in geostatistical model selection when the spatial process of interest is observed with no measurement error. They showed via a simulation study that spatial dependence has to be taken into account; ignoring it may lead to unsatisfactory results. \citet{Huang2007} developed a technique for estimating the mean squared prediction error for a general spatial prediction procedure using the generalized degrees of freedom and derived some asymptotic efficiency results. For linear mixed models, \citet{Jiang2003} developed some consistent procedures similar to GIC. \citet{Pu2006} derived conditions under which GIC is selection consistent. 
\citet{Jiang2008} introduced a fence method for mixed model selection and showed its consistency under some regularity conditions. \citet{Jones2011} proposed a modified BIC, which replaces the sample size in the penalty of the original BIC by an effective sample size to account for correlations in linear mixed models. \citet{Vaida2005} proposed the conditional Akaike information criterion (CAIC) and argued that it is more appropriate than AIC when the focus is on subjects/clusters requiring prediction of random effects. In addition, selection among semiparametric regression models and penalized smoothing spline models [e.g., Chapter~4, Ruppert, Wand and Carroll (\citeyear{Puppert2003})] can also be formulated in terms of random-effect selection in linear mixed models. The asymptotic theory of AIC for this type of model was given by \citet{Shi1999}, and that for BIC was given by \citet{Bunea2004}. A recent review of linear and generalized linear mixed model selection can also be found in M{\"u}ller, Scealy and Welsh (\citeyear{Muller2013}). Although the geostatistical regression model can be regarded as a linear mixed model with one random effect, its asymptotic behavior is surprisingly subtler than that of a usual linear mixed model, for the following three reasons. First, variables in a geostatistical regression model are sampled from a spatial process, resulting in a small ``effective sample size'' unless the spatial domain is allowed to grow quickly. Second, unlike some random-effect models with independent random components, spatial dependence forces all observations to depend on one another in a complex way, making them very difficult to handle asymptotically. Third, under the fixed domain asymptotic framework, classical regularity conditions are generally not satisfied, and traditional approaches for establishing asymptotic results are typically not applicable. 
To the best of our knowledge, asymptotic properties of GIC for geostatistical regression model selection have yet to be developed, particularly under the fixed domain asymptotic framework, where nonstandard behaviors are often expected. In this article, we focus on GIC for geostatistical regression model selection regardless of whether the covariance model is correctly or wrongly specified. Although a conditional-type criterion, such as CAIC, may be more appropriate when spatial prediction is of main interest, it is beyond the scope of this paper. Our major accomplishments are the following: \begin{longlist}[(1)] \item[(1)] We establish a general theory of GIC for the selection consistency and the asymptotic loss efficiency under mild regularity conditions in a general mixed domain asymptotic framework, which includes both the fixed and increasing domain asymptotics. In particular, we allow for the possibilities that some covariance parameter estimates may converge to a nondegenerate distribution and that the covariance model may be mis-specified. \item[(2)] We provide some examples that satisfy the aforementioned regularity conditions under exponential covariance models in one and two dimensions, and demonstrate how selection consistency is affected by candidate regressors. \end{longlist} We shall show that the asymptotic behaviors of GIC are related to how fast the domain grows with the sample size. In addition, some nonstandard properties of GIC under the fixed domain asymptotic framework will be highlighted. For example, under fixed domain asymptotics, GIC fails to consistently identify the correct polynomial order, regardless of the tuning parameter value, even when the underlying covariance model is correctly specified. On the other hand, for a properly chosen tuning parameter value, GIC is selection consistent when candidate explanatory variables are generated from some spatially dependent processes. This article is organized as follows. 
Section~\ref{models and criterion} gives a brief introduction to geostatistical regression models and GIC. Our main results for the consistency and the asymptotic loss efficiency of GIC are presented in Sections~\ref{section:selection for correct cov model} and \ref{section:covariance model selection}. Specifically, in Section~\ref{section:selection for correct cov model}, we assume that the covariance model is correctly specified, while in Section~\ref{section:covariance model selection}, we allow the covariance model to be mis-specified. In Section~\ref{section:examples}, we provide some examples that satisfy the regularity conditions. Finally, a brief discussion is provided in Section~\ref{discussion}. \section{Models and criteria} \label{models and criterion} \subsection{Geostatistical regression models} \label{subsec:geo model} Consider a spatial process, $\{S(\mathbf{s})\dvtx \mathbf{s}\in D\subset\mathbb{R}^d\}$. Suppose that we observe data $\{Z(\mathbf{s}_{n1}),\ldots,Z(\mathbf{s}_{nn})\}$ according to the following measurement equation: \begin{eqnarray}\label{geo data} Z(\mathbf{s}_{ni}) &=& S(\mathbf{s}_{ni}) + \epsilon( \mathbf{s}_{ni}) \nonumber \\[-8pt] \\[-8pt] \nonumber &=& \mu_0(\mathbf{s}_{ni})+\eta( \mathbf{s}_{ni})+\epsilon(\mathbf{s}_{ni});\qquad i=1,\ldots,n, \end{eqnarray} where $\mu_0(\cdot)$ is the mean function, $\eta(\cdot)$ is a zero-mean Gaussian spatially dependent process with $ \sup_{\mathbf{s}\in D}\mathrm{E}(\eta^2(\mathbf{s}))<\infty$ and $\{\epsilon(\mathbf{s}_{ni})\dvtx i=1,\ldots,n\}$ are Gaussian white-noise variables with variance $v^2$, independent of $S(\cdot)=\mu_0(\cdot)+\eta(\cdot)$, corresponding to measurement errors. In addition to the $Z(\mathbf{s}_{ni})$'s, we observe $\mathbf{x}(\mathbf{s}_{ni})= (1, x_1(\mathbf{s}_{ni}),\ldots, x_{p_n}(\mathbf{s}_{ni}))'$, a $(p_n+1)$-vector of explanatory variables, for $i=1,\ldots,n$. 
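As a concrete illustration, data can be simulated from the measurement equation (\ref{geo data}) once a covariance function for $\eta(\cdot)$ is chosen. The sketch below assumes an isotropic exponential covariance $\sigma^2\exp(-\|\mathbf{s}-\mathbf{s}'\|/\rho)$, the family also featured in our examples; all function and parameter names are ours.

```python
import numpy as np

def simulate_geodata(coords, mu, sigma2=1.0, rho=0.2, v2=0.1, rng=None):
    # Simulate Z(s_i) = mu(s_i) + eta(s_i) + eps(s_i), where eta is a
    # zero-mean Gaussian process with exponential covariance
    # sigma2 * exp(-||s - s'|| / rho) and eps is Gaussian white noise
    # with variance v2, independent of eta.
    rng = np.random.default_rng(rng)
    coords = np.asarray(coords, dtype=float)
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Sigma_eta = sigma2 * np.exp(-dists / rho)
    eta = rng.multivariate_normal(np.zeros(len(coords)), Sigma_eta)
    eps = rng.normal(0.0, np.sqrt(v2), size=len(coords))
    return np.asarray(mu) + eta + eps
```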
We consider the geostatistical regression model \[ Z(\mathbf{s}_{ni}) = {\mathbf x}(\mathbf{s}_{ni})' \bolds\beta_{n}+\eta(\mathbf{s}_{ni})+ \epsilon( \mathbf{s}_{ni});\qquad \mathbf{s}_{ni}\in D, i=1,\ldots,n, \] where $\bolds\beta_{n}=(\beta_0,\beta_1,\ldots,\beta_{p_n})'$. This model reduces to the usual linear regression model when $\eta(\cdot)$ is absent. Similarly to linear regression, a large model that contains many insignificant variables may produce a large variance, resulting in low predictive power. On the other hand, a model that ignores some important variables may suffer from a large bias. To strike a good balance between (squared) bias and variance, it is essential to include only significant variables in the model. Clearly, variable selection is essential not only in regression but also in geostatistical regression. We use $\alpha\subseteq\{1,\ldots,p_n\}$ to denote a model, which consists of the indices of the corresponding explanatory variables. Let $\mathcal{A}_n\subseteq2^{\{1,\ldots,p_n\}}$ be the set of all candidate models with $\varnothing$ being the intercept-only model. Let ${\mathbf X}_{n}$ be the $n\times (p_n+1)$ matrix with the $i$th row, $\mathbf{x}(\mathbf{s}_{ni})'$; $1\leq i\leq n$. Also let ${\mathbf X}_{n}(\alpha)$ be an $n\times (p(\alpha)+1)$ sub-matrix of ${\mathbf X}_{n}$ containing a column $\mathbf{1}$ (corresponding to the intercept) and the columns corresponding to $\alpha\in\mathcal{A}_n$, and $\bolds\beta_{n}(\alpha)$ be the sub-vector of $\bolds\beta_{n}$ corresponding to ${\mathbf X}_{n}(\alpha)$. A model $\alpha$ is said to be correct if $\mu_0({\mathbf s})$ can be written as $ \beta_0+\sum_{j\in\alpha} \beta_j x_j({\mathbf s})$ for all ${\mathbf s}\in D$. If there exists a correct model, we denote the correct model having the smallest number of variables by $\alpha_n^0= \operatorname{arg\, min}_{\alpha\in\mathcal{A}_n^0}p(\alpha)$, where $\mathcal{A}_n^0$ is the set of all correct models. 
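In code, a candidate model $\alpha$ is naturally represented as a subset of variable indices. A small sketch of this bookkeeping, where the predicate \texttt{is\_correct} is a placeholder for whichever notion of correctness is in force:

```python
from itertools import combinations

def all_models(p):
    # All candidate models: subsets of {1, ..., p}; () is the
    # intercept-only model.
    return [c for k in range(p + 1) for c in combinations(range(1, p + 1), k)]

def smallest_correct(models, is_correct):
    # alpha_n^0: among the correct models, one with the fewest
    # variables; returns None if no candidate model is correct.
    correct = [a for a in models if is_correct(a)]
    return min(correct, key=len) if correct else None
```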
The geostatistical regression model $\alpha$ can be written in a matrix form as \begin{equation} \qquad\mathbf{Z}_{n}=\bigl(Z(\mathbf{s}_{n1}),\ldots,Z( \mathbf{s}_{nn})\bigr)' = {\mathbf X}_{n}( \alpha)\bolds\beta_{n}(\alpha)+\bolds\eta_{n}+\bolds \epsilon_{n};\qquad \alpha\in\mathcal{A}_n, \label{setup} \end{equation} where $\bolds\eta_{n}=(\eta(\mathbf{s}_{n1}),\ldots,\eta(\mathbf{s}_{nn}))'\sim N(0,{\bolds\Sigma}_{n\eta})$ and $\bolds\epsilon_{n}=(\epsilon(\mathbf{s}_{n1}),\ldots,\epsilon(\mathbf{s}_{nn}))'$ $\sim N(0,v^2{\mathbf I}_{n})$ with ${\bolds\Sigma}_{n\eta}=\mathrm{E}(\bolds\eta_{n}\bolds\eta^{\prime}_{n})$ and ${\mathbf I}_{n}$ denoting the $n \times n$ identity matrix. Hence the mean and the variance of $\mathbf{Z}_{n}$ conditional on $\mathbf{X}_{n}$ based on model $\alpha\in\mathcal{A}_n$ are $\mathbf{X}_{n}(\alpha)\bolds\beta_{n}(\alpha)$ and \begin{equation} \bolds{\Sigma}_{n}(\bolds{\theta})= \bolds{\Sigma}_{n\eta}+v^2{ \mathbf I}_{n}, \label{Sigma} \end{equation} where $\bolds{\theta}$ is a covariance parameter vector belonging to some parameter space $\Theta$. Throughout the paper, we assume that $\bolds{\Sigma}_{n}(\bolds{\theta})$ is continuous on $\bolds{\theta}\in\Theta$. We denote the true covariance matrix by $\bolds\Sigma_{n0}$ and the true mean of $\mathbf{Z}_{n}$ conditional on $\mathbf{X}_{n}$ by $\bolds\mu_{n0}$. In other words, given $\mathbf{X}_{n}$, the data $\mathbf{Z}_{n}$ are generated from $N(\bolds\mu_{n0},\bolds\Sigma_{n0})$. In order to facilitate mathematical exposition, the asymptotic results established in Sections~\ref{section:selection for correct cov model} and \ref{section:covariance model selection} focus only on the case where $\mathbf{X}_{n}$ is nonrandom. These results are also valid in the almost sure sense when $\mathbf{X}_{n}$ is random, provided that the required conditions involving $\mathbf{X}_{n}$ hold for almost all sequences $\mathbf{X}_{n}$; $n\in\{1, 2, \ldots\}$. 
We further illustrate these results in Section~\ref{section:examples} using either random or nonrandom~$\mathbf{X}_{n}$. \subsection{Generalized information criterion} For notational simplicity, we suppress the dependence of $\mathbf{X}_{n}, \mathbf{X}_{n}(\alpha), \bolds{\beta}_{n}, \bolds{\beta}_{n}(\alpha), \mathbf{Z}_{n}$, $\bolds{\eta}_{n}$, $\bolds{\epsilon}_{n}$, ${\bolds\Sigma}_{n\eta}$, ${\mathbf I}_{n}$, ${\bolds\Sigma}_{n}(\bolds{\theta})$, ${\bolds\Sigma}_{n0}$, $\bolds\mu_{n0}$ and $\mathbf{s}_{ni}$ on $n$ in the rest of this paper. To estimate $\bolds{\beta}$ and $\bolds{\theta}$, we consider maximum likelihood (ML). We assume that $\bolds\Sigma^{-1}(\bolds\theta)$ and $(\mathbf{X}'\bolds\Sigma^{-1}(\bolds\theta)\mathbf{X})^{-1}$ exist for $\bolds\theta\in\Theta$. The ML estimate of $\bolds{\theta}$ based on $\alpha\in\mathcal{A}_n$, denoted by $\hat{\bolds\theta}(\alpha)$, is obtained by maximizing the following profile log-likelihood function: \begin{eqnarray*} \ell(\alpha;\bolds\theta)& = & -\tfrac{1}{2}n\log(2\pi)-\tfrac{1}{2} \log\det\bigl({\bolds\Sigma}({\bolds\theta})\bigr) \\ & &{}-\tfrac{1}{2}\bigl({\mathbf Z}-\hat{\bolds\mu}(\alpha;\bolds\theta) \bigr)'{\bolds\Sigma}^{-1} ({\bolds\theta}) \bigl({\mathbf Z}-\hat{\bolds\mu}(\alpha;\bolds\theta)\bigr), \end{eqnarray*} where $\hat{\bolds\mu}(\alpha;\bolds\theta)=\mathbf{X}(\alpha)\hat{\bolds\beta}(\alpha;\bolds\theta)$, and \[ \hat{\bolds\beta}(\alpha;\bolds\theta) =\bigl({\mathbf X}( \alpha)'{\bolds\Sigma}^{-1}(\bolds\theta){\mathbf X}( \alpha)\bigr)^{-1} {\mathbf X}(\alpha)'{\bolds \Sigma}^{-1}(\bolds\theta){\mathbf Z}. \] Specifically, $\ell (\alpha;\hat{\bolds\theta}(\alpha) )= \sup_{\bolds\theta\in\Theta} \ell(\alpha;\bolds\theta)$, and $\hat{\bolds{\beta}} (\alpha;\hat{\bolds{\theta}}(\alpha) )$ is the ML estimate of $\bolds\beta(\alpha)$. 
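For a fixed $\bolds\theta$, the estimates above reduce to generalized least squares and a plug-in likelihood evaluation; maximizing over $\bolds\theta\in\Theta$ is then a generic numerical optimization, which we omit. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def gls_beta(X, Sigma, Z):
    # beta-hat(alpha; theta) = (X' Sigma^-1 X)^-1 X' Sigma^-1 Z.
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ Z)

def profile_loglik(X, Sigma, Z):
    # ell(alpha; theta) with beta profiled out by GLS.
    n = len(Z)
    r = Z - X @ gls_beta(X, Sigma, Z)
    _, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(Sigma, r))
```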
For $\alpha\in\mathcal{A}_n$ and $\bolds{\theta}\in\Theta$, let \begin{eqnarray} {\mathbf M}(\alpha;\bolds\theta)& =& {\mathbf X}(\alpha) \bigl({\mathbf X}( \alpha)'{\bolds\Sigma}^{-1}(\bolds\theta){\mathbf X}( \alpha)\bigr)^{-1}{\mathbf X}(\alpha)'{\bolds \Sigma}^{-1}(\bolds\theta), \label{fn:M} \\ {\mathbf A}(\alpha;\bolds\theta) &=& {\mathbf I}-{\mathbf M}(\alpha;\bolds \theta). \label{fn:A} \end{eqnarray} Then $\hat{\bolds\mu}(\alpha;\bolds\theta)={\mathbf M}(\alpha;\bolds\theta)\mathbf{Z}$ and $\mathbf{Z}-\hat{\bolds\mu}(\alpha;\bolds\theta)={\mathbf A}(\alpha;\bolds\theta)\mathbf{Z}$. Note that ${\mathbf M}^2(\alpha;\bolds\theta) ={\mathbf M}(\alpha;\bolds\theta)$, ${\mathbf M}(\alpha;\bolds\theta)\mathbf{X}(\alpha)=\mathbf{X}(\alpha)$, and \begin{eqnarray*} {\mathbf M}(\alpha;\bolds\theta)'\bolds{\Sigma}^{-1}( \bolds{\theta}){\mathbf M}(\alpha;\bolds\theta) &=& \bolds{\Sigma}^{-1}( \bolds{\theta}){\mathbf M}(\alpha;\bolds\theta), \\ {\mathbf A}(\alpha;\bolds\theta)'\bolds{\Sigma}^{-1}( \bolds{\theta}){\mathbf A}(\alpha;\bolds\theta)& =& \bolds{\Sigma}^{-1}( \bolds{\theta}){\mathbf A}(\alpha;\bolds\theta). 
\end{eqnarray*} Therefore, by (\ref{fn:M}) and (\ref{fn:A}), the profile log-likelihood function can also be written as \begin{eqnarray} \label{log like fun alpha} \ell(\alpha;\bolds\theta) &=& -\tfrac{1}{2}n\log(2\pi) - \tfrac{1}{2}\log\det\bigl(\bolds\Sigma(\bolds\theta)\bigr) -\tfrac{1}{2} \bolds\mu_0'\bolds\Sigma^{-1}(\bolds\theta) \mathbf{A}(\alpha;\bolds\theta)\bolds\mu_0 \nonumber \\ &&{} -\bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta)\mathbf{A}(\alpha;\bolds\theta) (\bolds\eta+\bolds\epsilon) - \tfrac{1}{2}(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta) (\bolds\eta+\bolds\epsilon) \\ &&{} +\tfrac{1}{2}(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta)\mathbf{M}(\alpha;\bolds\theta) (\bolds\eta+ \bolds\epsilon); \qquad \alpha\in\mathcal{A}_n, \bolds{\theta}\in\Theta.\nonumber \end{eqnarray} To identify the smallest correct model $\alpha_n^0$, one may adopt the GIC of \citet{Nishii1984}, \begin{equation} \Gamma_{\tau_n}(\alpha) = -2\ell\bigl(\alpha;\hat{\bolds\theta}(\alpha) \bigr) + {\tau_n} p(\alpha);\qquad \alpha\in\mathcal{A}_n, \label{unknown GIC} \end{equation} where ${\tau_n}$ is a tuning parameter controlling the trade-off between goodness-of-fit and model parsimony. The criterion includes AIC (when ${\tau_n}=2$) and BIC [when ${\tau_n}=\log(n)$] as special cases, and has been widely used in many statistical areas. The model selected by GIC based on ${\tau_n}$ is denoted by $ \hat{\alpha}_{\tau_n} =\operatorname{arg\, min}_{\alpha\in\mathcal{A}_n} \Gamma_{\tau_n}(\alpha)$. In the next section, we shall first investigate GIC for variable selection when the covariance model is correctly specified. 
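Schematically, the selection rule in (\ref{unknown GIC}) can be written as follows. The names are ours, and fitting $\hat{\bolds\theta}(\alpha)$ for each candidate model is assumed to have been done beforehand.

```python
import numpy as np

def neg2_loglik(X, Sigma, Z):
    # -2 times the Gaussian log-likelihood with beta profiled out by GLS.
    Si = np.linalg.inv(Sigma)
    beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ Z)
    r = Z - X @ beta
    _, logdet = np.linalg.slogdet(Sigma)
    return len(Z) * np.log(2 * np.pi) + logdet + r @ Si @ r

def gic_select(candidates, Z, tau):
    # Return the argmin over alpha of Gamma_tau(alpha) = -2 ell + tau * p(alpha).
    # `candidates` maps a model alpha (tuple of variable indices) to its
    # fitted pair (X_alpha, Sigma(theta-hat(alpha))); tau = 2 gives AIC
    # and tau = log(n) gives BIC.
    def score(alpha):
        X, Sigma = candidates[alpha]
        return neg2_loglik(X, Sigma, Z) + tau * len(alpha)
    return min(candidates, key=score)
```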
\section{Variable selection under a correct covariance model} \label{section:selection for correct cov model} The asymptotic properties of GIC will be derived in terms of the Kullback--Leibler (KL) loss, which for $\alpha\in\mathcal{A}_n$ and $\bolds\theta\in\Theta$ is given by \begin{eqnarray*} L(\alpha;\bolds\theta) &=& \int_{\mathbf{Y}\in\mathbb{R}^n}f(\mathbf{Y};\bolds \mu_0,\bolds\Sigma_0) \log\frac{f(\mathbf{Y};\bolds\mu_0,\bolds\Sigma_0)} { f(\mathbf{Y};\hat{\bolds\mu}(\alpha;\bolds\theta),\bolds{\Sigma}(\bolds\theta))}\,d\mathbf{Y} \\ &=& \frac{1}{2}\log\det\bigl(\bolds\Sigma(\bolds\theta)\bigr)- \frac{1}{2}\log\det(\bolds\Sigma_0) +\frac{1}{2} \operatorname{tr}\bigl(\bolds\Sigma_0\bolds\Sigma^{-1}( \bolds\theta)\bigr) \\ &&{} -\frac{n}{2}+\frac{1}{2}\bigl(\hat{\bolds\mu}(\alpha;\bolds \theta)-\bolds\mu_0\bigr)'{\bolds\Sigma}^{-1}( \bolds\theta) \bigl(\hat{\bolds\mu}(\alpha;\bolds\theta)-\bolds\mu_0 \bigr), \end{eqnarray*} where $\hat{\bolds\mu}(\alpha;\bolds\theta)=\mathbf{X}(\alpha)\hat{\bolds\beta}(\alpha;\bolds\theta)$ and $f(\cdot;\bolds\mu,\bolds\Sigma)$ is the Gaussian density function with mean $\bolds\mu$ and covariance matrix $\bolds\Sigma$. Note that $L(\alpha;\bolds\theta)\geq 0$, for any $\alpha\in\mathcal{A}_n$ and $\bolds\theta\in\Theta$. When $\bolds{\mu}_0$ is known, the KL loss for $\bolds{\theta}\in\Theta$ is given by \[ L_0(\bolds\theta)= \tfrac{1}{2} \bigl\{\log\det\bigl(\bolds \Sigma(\bolds\theta)\bigr)- \log\det(\bolds\Sigma_0)+\operatorname{tr} \bigl(\bolds\Sigma_0\bolds\Sigma^{-1}(\bolds\theta)\bigr)-n \bigr\}. \] Then the optimal vector of $\bolds{\theta}\in\Theta$, which minimizes the KL loss, is given by \[ \bolds\theta_0= \mathop{\operatorname{arg\, inf}}_{\bolds\theta\in\Theta} L_0(\bolds\theta). \] Clearly, $\bolds{\Sigma}_0=\bolds{\Sigma}(\bolds{\theta}_0)$ and $L_0(\bolds{\theta}_0)=0$, if the covariance model class contains the correct model. 
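In either case, the closed form of the KL loss above translates directly into code; a minimal sketch with our own names, where the fitted mean $\hat{\bolds\mu}(\alpha;\bolds\theta)$ is passed in precomputed:

```python
import numpy as np

def kl_loss(mu_hat, Sigma_theta, mu0, Sigma0):
    # L(alpha; theta): KL divergence between the true law N(mu0, Sigma0)
    # and the fitted law N(mu_hat, Sigma(theta)), in the closed form
    # displayed above.
    n = len(mu0)
    _, ld_t = np.linalg.slogdet(Sigma_theta)
    _, ld_0 = np.linalg.slogdet(Sigma0)
    Si = np.linalg.inv(Sigma_theta)
    diff = mu_hat - mu0
    return 0.5 * (ld_t - ld_0 + np.trace(Sigma0 @ Si) - n + diff @ Si @ diff)
```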
In this case, $\bolds{\theta}_0$ is the true covariance parameter vector of $\bolds{\theta}$. Let $R(\alpha;\bolds\theta)=\mathrm{E}(L(\alpha;\bolds\theta))$. By (\ref{fn:M}) and (\ref{fn:A}), we have \begin{eqnarray} \label{fn:KL loss} L(\alpha;\bolds\theta) &=& L_0(\bolds\theta) +\tfrac{1}{2}\bolds \mu_0'\bolds\Sigma^{-1}(\bolds\theta) \mathbf{A}(\alpha;\bolds\theta)\bolds\mu_0 \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} +\tfrac{1}{2}(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta)\mathbf{M}(\alpha;\bolds\theta) (\bolds\eta+ \bolds\epsilon), \\ \label{fn:KL risk} R(\alpha;\bolds\theta) &=& L_0(\bolds\theta)+\tfrac{1}{2}\bolds \mu_0'\bolds\Sigma^{-1}(\bolds\theta) \mathbf{A}(\alpha;\bolds\theta)\bolds\mu_0 \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} +\tfrac{1}{2}\operatorname{tr}\bigl( \bolds\Sigma^{-1}(\bolds \theta)\mathbf{M}(\alpha;\bolds\theta)\bolds\Sigma_0\bigr), \end{eqnarray} for $\alpha\in\mathcal{A}_n$ and $\bolds\theta\in\Theta$, where $\bolds\mu_0'\bolds\Sigma^{-1}(\bolds\theta) \mathbf{A}(\alpha;\bolds\theta)\bolds\mu_0= \|\bolds{\Sigma}^{-1/2}(\bolds{\theta}) \mathbf{A}(\alpha;\bolds{\theta})\bolds{\mu}_0\|^2$, which results\vspace*{1pt} from using a wrong regression model, and is equal to $0$ when $\alpha\in\mathcal{A}_n^0$. In particular, for $\alpha\in\mathcal{A}_n^0$ and $\bolds\Sigma_0=\bolds\Sigma(\bolds\theta_0)$, \begin{eqnarray} L(\alpha;\bolds\theta_0) &=& \tfrac{1}{2}(\bolds\eta+\bolds \epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+\bolds\epsilon), \label{fn:KL loss 0} \\ R(\alpha;\bolds\theta_0) &=& \tfrac{1}{2}p(\alpha). \label{fn:KL risk 0} \end{eqnarray} Consider a model selection procedure $\hat{\alpha}$ that maps data to $\alpha\in\mathcal{A}_n$. 
We say that $\hat{\alpha}$ is consistent if $ \lim_{n\rightarrow\infty}P \{\hat{\alpha}=\alpha_n^0 \}=1$, and $\hat\alpha$ is asymptotically loss efficient if \begin{equation} \frac{L(\hat\alpha;\hat{\bolds\theta}(\hat\alpha))}{ \min_{\alpha\in\mathcal{A}_n} L(\alpha;\hat{\bolds\theta}(\alpha))} \mathop{\rightarrow}\limits^{P}1, \label{loss efficiency} \end{equation} as $n\rightarrow\infty$. When $\eta(\cdot)$ is absent, geostatistical regression reduces to the usual linear regression with the property that $ \lim_{n\rightarrow\infty}P \{L(\alpha_n^0)= \inf_{\alpha\in\mathcal{A}_n}L(\alpha) \}=1$; see \citet{Shao1997} for more details. In this case, pursuing consistency is equivalent to finding the model with the smallest KL loss. However, $\alpha_n^0$ may not always lead to the smallest KL loss when $\bolds{\Sigma}_\eta$ has to be estimated, making asymptotic loss efficiency more difficult to derive. In addition, the possible inconsistency of $\hat{\bolds{\theta}}(\alpha)$ for $\alpha\in\mathcal{A}_n$ under the fixed domain asymptotic framework further complicates the development of asymptotic theory for GIC. Let $\lambda_{\min}(\mathbf{Q})$ and $\lambda_{\max}(\mathbf{Q})$ be the smallest and largest eigenvalues of a square matrix $\mathbf{Q}$. We impose the following regularity conditions for model selection: \begin{longlist}[(C3)] \item[(C1)] $\lambda_{\min}(\bolds\Sigma(\bolds\theta))>0$ for all $n$ and $\bolds\theta\in\Theta$, and \[ \limsup_{n\rightarrow\infty}\sup_{\bolds\theta\in\Theta}\lambda_{\max} \bigl(\bolds\Sigma^{-1/2}(\bolds\theta)\bolds\Sigma_0\bolds \Sigma^{-1/2}(\bolds\theta)\bigr)<\infty. 
\] \item[(C2)] For $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, there exists $\bolds\theta_\alpha\in\Theta$, not depending on $n$, such that \begin{eqnarray*} \sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}\biggl |\frac{\ell(\alpha;\hat{\bolds\theta}(\alpha)) -\ell(\alpha;\bolds\theta_\alpha)}{R(\alpha;\bolds\theta_\alpha) -L_0(\bolds{\theta}_0)} \biggr|&= & o_p(1), \\ \sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}\biggl |\frac{L(\alpha;\hat{\bolds\theta}(\alpha)) -L(\alpha;\bolds\theta_\alpha)}{R(\alpha;\bolds\theta_\alpha) -L_0(\bolds{\theta}_0)} \biggr|& =& o_p(1). \end{eqnarray*} Moreover, \begin{eqnarray*} \sup_{\alpha\in\mathcal{A}_n^0}\bigl |\ell\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr) -\ell(\alpha;\bolds\theta_0) \bigr| &=& O_p(1), \\ \sup_{\alpha\in\mathcal{A}_n^0} \bigl|L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr) -L( \alpha;\bolds\theta_0) \bigr| &=& O_p(1). \end{eqnarray*} \item[(C3)] For $\bolds\theta_\alpha$ defined in (C2), \[ \lim_{n\rightarrow\infty}\sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{1}{(R(\alpha;\bolds\theta_\alpha)-L_0(\bolds{\theta}_0))^q}=0, \] for some $q>0$. \item[(C4)] For $\bolds\theta_\alpha$ defined in (C2), \[ \lim_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n \setminus \mathcal{A}^{0}_n} \biggl|\frac{\operatorname{tr} (\bolds\Sigma_0(\bolds\Sigma^{-1}_{0}- \bolds\Sigma^{-1}(\bolds\theta_\alpha)) \mathbf{M}(\alpha;\bolds\theta_\alpha))}{R(\alpha;\bolds\theta_\alpha)- L_0(\bolds{\theta}_0)} \biggr| =0. \] \item[(C5)] For $\bolds\theta_\alpha$ defined in (C2), \[ \sup_{\alpha\in\mathcal{A}_n \setminus \mathcal{A}^{0}_n} \biggl|\frac{\operatorname{tr} (((\bolds\eta+\bolds\epsilon)(\bolds\eta+\bolds\epsilon)'-\bolds\Sigma_0) (\bolds\Sigma^{-1}(\bolds\theta_\alpha)-\bolds\Sigma^{-1}(\bolds\theta_0)) )} { R(\alpha;\bolds\theta_\alpha)-L_0(\bolds{\theta}_0)} \biggr|=o_{p}(1). 
\] \end{longlist} While $L_0(\bolds{\theta}_0)=0$ for a correct spatial covariance model, we still keep $L_0(\bolds{\theta}_0)$ in (C2)--(C5) because $L_0(\bolds{\theta}_0) \neq 0$ under covariance mis-specification, which will be discussed in Section~\ref{section:covariance model selection}. In the rest of this section, we shall assume $\bolds\Sigma_0=\bolds\Sigma(\bolds\theta_0)$, yielding $L_0(\bolds{\theta}_0)=0$. Condition (C1), imposing some constraints on the family of covariance matrices parameterized by $\bolds\theta \in \Theta$, is usually satisfied when $\Theta$ is compact. Condition (C2) generally holds when $\hat{\bolds\theta}(\alpha)$ converges in probability to some $\bolds\theta_\alpha\in\Theta$, not necessarily equal to $\bolds\theta_{0}$. Surprisingly, it can hold even if $\hat{\bolds\theta}(\alpha)$ does not converge in probability; see Section~\ref{section:examples} for some examples in which the domain $D$ is fixed with $n$. Condition (C3) is easily met when $ |\mathcal{A}_n\setminus\mathcal{A}_n^0 |$ (i.e., the number of models in $\mathcal{A}_n\setminus\mathcal{A}_n^0$) is bounded and \[ \min_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}\bigl \|\bolds\Sigma^{-1/2}(\bolds \theta_\alpha)\mathbf{A}(\alpha;\bolds\theta_\alpha) \bolds \mu_0 \bigr\|^2\rightarrow\infty, \] as $n\rightarrow\infty$. Moreover, (C5) can be verified using some moment bounds for quadratic forms in $\bolds\eta+\bolds\epsilon$, and (C4) is ensured by (C3) when $p_n$ is bounded. Conditions (C1)--(C5) appear to be natural generalizations of the conditions used to establish the asymptotic loss efficiency in usual linear regression models. To see this, note that if $\bolds\Sigma_{0}=\bolds\Sigma(\bolds\theta_{0})$ is known (or, equivalently, $\Theta=\{\bolds\theta_{0}\}$), then (C1), (C2), (C4) and (C5) become redundant, and only (C3) is needed, which corresponds to (A.3) of \citet{Li1987} or (2.6) of \citet{Shao1997}. 
This is the only assumption needed to derive the asymptotic loss efficiency of AIC under model (\ref{setup}) with $\eta(\cdot)=0$, $v^{2}$ known, $\mathbf{s}_{i}=i$; $i=1,\ldots, n$, and $ |\mathcal{A}_n^0 |\leq 1$. For more details, see Theorem~1 of \citet{Shao1997}. On the other hand, when $\bolds\theta_{0}$ is unknown, (C1), (C2), (C4) and (C5) seem indispensable for dealing with the inherent difficulties in model selection under (\ref{setup}). That is, the ML estimate of $\bolds\theta$ may not only vary across candidate models, but may also converge to wrong parameter vectors or have no probability limits. In the following theorem, these four conditions will be used in conjunction with (C3) to establish the consistency and the asymptotic loss efficiency of AIC, extending Theorem~1 of \citet{Shao1997} to the geostatistical model described in (\ref{setup}) and (\ref{Sigma}). \begin{thmm} Consider the data generated from (\ref{geo data}) and the model given by (\ref{setup}) and (\ref{Sigma}) with $\bolds{\theta}_0$ being the true covariance parameter vector [i.e., $\operatorname{var}(\mathbf{Z})= \bolds\Sigma(\bolds\theta_0)$]. Suppose that conditions \textup{(C1)--(C5)} are satisfied: \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}_n^0 |\leq 1$, then $\hat\alpha_2$ is asymptotically loss efficient. If, in addition, $ |\mathcal{A}_n^0 |=1$ and $ \limsup_{n\rightarrow\infty}p(\alpha_n^0)<\infty$, then $\hat\alpha_2$ is consistent. \item[(ii)] If $ |\mathcal{A}_n^0 |\geq 2$ for sufficiently large $n$ and either of the following is satisfied for some $m>0$, \begin{eqnarray} \lim_{n\rightarrow\infty}\sum_{\alpha\in\mathcal{A}_n^0} \frac{1}{p^m(\alpha)} &=& 0,\label{infty correct models} \\ \lim_{n\rightarrow\infty}\sum_{\alpha\in\mathcal{A}_n^0\setminus\{\alpha_n^0\}} \frac{1}{(p(\alpha)-p(\alpha_n^0))^m} &=& 0, \label{infty correct models 2} \end{eqnarray} then $\hat\alpha_2$ is asymptotically loss efficient.
If, in addition, (\ref{infty correct models 2}) holds and $ \limsup_{n\rightarrow\infty}p(\alpha_n^0)<\infty$, then $\hat{\alpha}_2$ is consistent. \end{longlist} \label{theorem:unknown AIC} \end{thmm} \begin{pf} We begin by showing that \begin{equation} \Gamma_2(\alpha) = \nu + 2L(\alpha;\bolds\theta_\alpha) + o_p\bigl(L(\alpha;\bolds\theta_\alpha)\bigr), \label{loss eff among incorrect models} \end{equation} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, where $\nu=n\log(2\pi)+\log\det(\bolds\Sigma(\bolds\theta_0))+(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0)(\bolds\eta+\bolds\epsilon)$ is independent of $\alpha$. By (\ref{unknown GIC}) and (C2), we have \begin{eqnarray*} \Gamma_2(\alpha) &=& -2\ell(\alpha;\bolds\theta_\alpha) + 2p( \alpha) +o_p\bigl(R(\alpha;\bolds\theta_\alpha)\bigr) \\ &=& n\log(2\pi) + \log\det\bigl(\bolds\Sigma(\bolds\theta_\alpha)\bigr) + \mathbf{Z}'\mathbf{A}(\alpha;\bolds\theta_\alpha)' \bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{A}(\alpha;\bolds \theta_\alpha)\mathbf{Z} \\ &&{} +2p(\alpha)+o_p\bigl(R(\alpha;\bolds\theta_\alpha)\bigr) \\ &=& n\log(2\pi) + \log\det\bigl(\bolds\Sigma(\bolds\theta_\alpha)\bigr) + \bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha)\bolds \mu_0 \\ &&{} +2\bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) \\ &&{} +(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) +2p(\alpha)+o_p\bigl(R(\alpha;\bolds \theta_\alpha)\bigr), \end{eqnarray*} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$. 
It follows from (\ref{fn:KL loss}) that \begin{eqnarray} \label{loss eff asmp 0} \Gamma_2(\alpha) &=& n\log(2\pi) + \log\det\bigl(\bolds \Sigma(\bolds\theta_0)\bigr) + (\bolds\eta+\bolds\epsilon)' \bolds\Sigma^{-1}(\bolds\theta_\alpha) (\bolds\eta+\bolds \epsilon) \nonumber\\ &&{} - \operatorname{tr}\bigl(\bolds\Sigma(\bolds\theta_0)\bolds \Sigma^{-1}(\bolds\theta_\alpha)\bigr) +n + 2L(\alpha;\bolds \theta_\alpha) \nonumber\\ &&{} -2(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) \nonumber\\ &&{} +2\bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) +2p(\alpha)+o_p\bigl(R(\alpha;\bolds \theta_\alpha)\bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &=& n\log(2\pi) + \log\det\bigl(\bolds\Sigma(\bolds\theta_0)\bigr) + (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_0) (\bolds\eta+\bolds\epsilon) \\ &&{} +\operatorname{tr} \bigl(\bigl((\bolds\eta+\bolds\epsilon) (\bolds\eta+\bolds \epsilon)'-\bolds\Sigma(\bolds\theta_0)\bigr) \bigl( \bolds\Sigma^{-1}(\bolds\theta_\alpha)-\bolds \Sigma^{-1}(\bolds\theta_0)\bigr) \bigr)\nonumber \\ &&{} + 2L(\alpha;\bolds\theta_\alpha) -2(\bolds\eta+\bolds \epsilon)'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+\bolds\epsilon)+2p( \alpha)\nonumber \\ &&{} +2\bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon)+ o_p\bigl(R(\alpha;\bolds\theta_\alpha)\bigr),\nonumber \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$. 
Therefore, by (C5), for (\ref{loss eff among incorrect models}) to hold, it suffices to show that \begin{eqnarray} (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon)-p(\alpha) &=& o_p\bigl(R(\alpha;\bolds \theta_\alpha)\bigr), \label{loss eff asmp 2} \\ \bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) &=& o_p\bigl(R(\alpha;\bolds\theta_\alpha) \bigr), \label{loss eff asmp 3} \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, and \begin{equation} \sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \biggl|\frac{L(\alpha;\bolds\theta_\alpha)}{R(\alpha;\bolds\theta_\alpha)}-1\biggr |=o_p(1). \label{loss eff asmp 1} \end{equation} First, we prove (\ref{loss eff asmp 2}). By (C4), we have \[ \mathrm{E}\bigl\{(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds \theta_\alpha) (\bolds\eta+\bolds\epsilon)\bigr\} - p(\alpha) = o\bigl(R( \alpha;\bolds\theta_\alpha)\bigr), \] uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$. Let $c(\alpha)=\operatorname{tr}(\bolds\Sigma(\bolds\theta_0)\bolds\Sigma^{-1}(\bolds\theta_\alpha)\mathbf{M}(\alpha;\bolds\theta_\alpha))/ p(\alpha)$. Then by (\ref{fn:M}) and (C1), $ \limsup_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}c(\alpha)<\infty$. Thus for (\ref{loss eff asmp 2}) to hold, it suffices to show that \begin{eqnarray*} &&(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon)-c(\alpha)p(\alpha)\\ &&\qquad = o_p\bigl(R(\alpha;\bolds \theta_\alpha)\bigr), \end{eqnarray*} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$. 
Applying Chebyshev's inequality, we have for any $\varepsilon>0$, \begin{eqnarray*} &&P \biggl\{\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \biggl|\frac{(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha)(\bolds\eta+\bolds\epsilon)-c(\alpha)p(\alpha)}{ R(\alpha;\bolds\theta_\alpha)} \biggr| >\varepsilon \biggr\} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{\mathrm{E} |(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha)(\bolds\eta+\bolds\epsilon)-c(\alpha)p(\alpha) |^{2q}}{ \varepsilon^{2q}R^{2q}(\alpha;\bolds\theta_\alpha)} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{c_1\{\operatorname{tr}(\bolds\Sigma(\bolds\theta_0)\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha)\bolds\Sigma(\bolds\theta_0)\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha))\}^q}{ \varepsilon^{2q}R^{2q}(\alpha;\bolds\theta_\alpha)} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{c_2 p^q(\alpha)} { \varepsilon^{2q}R^{2q}(\alpha;\bolds\theta_\alpha)} \\ &&\qquad\leq \sum _{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{c_3}{\varepsilon^{2q}R^q(\alpha;\bolds\theta_\alpha)}, \end{eqnarray*} for some constants $c_1,c_2,c_3>0$, where the second inequality follows from Theorem~2 of \citet{Whittle1960} that $\mathrm{E}(|\mathbf{y}'\mathbf{A}\mathbf{y}-\mathrm{E}(\mathbf{y}'\mathbf{A}\mathbf{y})|)^{2q}\leq c_1(\operatorname{tr}(\mathbf{A}^2))^q$ for $\mathbf{y}=\bolds\Sigma^{-1/2}(\bolds\theta_0)(\bolds\eta+\bolds\epsilon)\sim N(\mathbf{0},\mathbf{I})$ and $\mathbf{A}=\bolds\Sigma^{1/2}(\bolds\theta_0)\bolds\Sigma^{-1}(\bolds\theta_\alpha)\mathbf{M}(\alpha;\bolds\theta_\alpha)\bolds\Sigma^{1/2}(\bolds\theta_0)$, the third inequality follows from (C1), and the last inequality follows from (C4). 
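The moment bound of \citet{Whittle1960} invoked in the second inequality can be sanity-checked by simulation in the simplest case $q=1$: for $\mathbf{y}\sim N(\mathbf{0},\mathbf{I})$ and a symmetric matrix $\mathbf{A}$, $\operatorname{Var}(\mathbf{y}'\mathbf{A}\mathbf{y})=2\operatorname{tr}(\mathbf{A}^2)$ exactly, so the bound holds with $c_1=2$. The matrix below is an arbitrary stand-in, not the one appearing in the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2.0                      # arbitrary fixed symmetric matrix

# Quadratic forms y'Ay for many independent draws y ~ N(0, I_n).
Y = rng.standard_normal((200_000, n))
Q = np.einsum("ij,jk,ik->i", Y, A, Y)

emp_var = Q.var()                        # Monte Carlo E|y'Ay - E(y'Ay)|^2
theory = 2.0 * np.trace(A @ A)           # exact Gaussian value: 2 tr(A^2)
rel_err = abs(emp_var / theory - 1.0)
```

With $2\times 10^5$ replicates the empirical variance matches $2\operatorname{tr}(\mathbf{A}^2)$ to within Monte Carlo error, illustrating why the bound is sharp at $q=1$.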
Therefore by (C3), we obtain (\ref{loss eff asmp 2}). Next, we prove (\ref{loss eff asmp 3}). Similar to the proof of (\ref{loss eff asmp 2}), we have \begin{eqnarray*} && P \biggl\{\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \biggl| \frac{\bolds\mu_0'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha)(\bolds\eta+\bolds\epsilon)} { R(\alpha;\bolds\theta_\alpha)} \biggr|>\varepsilon \biggr\} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{\mathrm{E} |\bolds\mu_0'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha)(\bolds\eta+\bolds\epsilon) |^{2q}} { \varepsilon^{2q}R^{2q}(\alpha;\bolds\theta_\alpha)} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{c_4 (\bolds\mu_0'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha)\bolds\Sigma(\bolds\theta_0)\mathbf{A}(\alpha;\bolds\theta_\alpha)'\bolds\Sigma^{-1}(\bolds\theta_\alpha)\bolds\mu_0)^q} { \varepsilon^{2q}R^{2q}(\alpha;\bolds\theta_\alpha)} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{c_5(\bolds\mu_0'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha)\bolds\mu_0)^q} {\varepsilon^{2q}R^{2q}(\alpha;\bolds\theta_\alpha)} \\ &&\qquad\leq \sum _{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{c_6}{\varepsilon^{2q}R^q(\alpha;\bolds\theta_\alpha)}, \end{eqnarray*} for some constants $c_4,c_5,c_6>0$, where the second inequality follows from Theorem~2 of \citet{Whittle1960} that $\mathrm{E}|\mathbf{a}'\mathbf{y}|^{2q}\leq c_4(\mathbf{a}'\mathbf{a})^q$ for $\mathbf{y}=\bolds\Sigma^{-1/2}(\bolds\theta_0)(\bolds\eta+\bolds\epsilon)\sim N(\mathbf{0},\mathbf{I})$ and $\mathbf{a}=\bolds\Sigma^{1/2}(\bolds\theta_0)\mathbf{A}(\alpha;\bolds\theta_\alpha)'\bolds\Sigma^{-1}(\bolds\theta_\alpha)\bolds{\mu}_0$, the third inequality follows from~(C1), and the last inequality follows from (\ref{fn:KL risk}).
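The linear-form case of the \citet{Whittle1960} bound used here admits the same sanity check at $q=1$: for $\mathbf{y}\sim N(\mathbf{0},\mathbf{I})$ and a fixed vector $\mathbf{a}$, $\mathbf{a}'\mathbf{y}\sim N(0,\mathbf{a}'\mathbf{a})$, so $\mathrm{E}|\mathbf{a}'\mathbf{y}|^{2}=\mathbf{a}'\mathbf{a}$ and the bound holds with $c_4=1$ (the vector below is an arbitrary stand-in, not the one in the proof).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
a = rng.standard_normal(n)               # arbitrary fixed vector

# Linear forms a'y for many independent draws y ~ N(0, I_n).
Y = rng.standard_normal((200_000, n))
emp = np.mean((Y @ a) ** 2)              # Monte Carlo E|a'y|^2
theory = float(a @ a)                    # exact Gaussian value: a'a
rel_err = abs(emp / theory - 1.0)
```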
Therefore by (C3), we obtain~(\ref{loss eff asmp 3}). It remains to prove (\ref{loss eff asmp 1}). By (\ref{fn:KL loss}) and (\ref{fn:KL risk}), for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, \begin{eqnarray*} L(\alpha;\bolds\theta_\alpha) - R(\alpha;\bolds\theta_\alpha) &=& (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) \\ &&{} -\operatorname{tr}\bigl(\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha)\bolds\Sigma( \bolds\theta_0)\bigr). \end{eqnarray*} It follows from (C1), (C3) and an argument similar to one used to prove (\ref{loss eff asmp 2}) that \begin{eqnarray*} &&\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \biggl| \frac{L(\alpha;\bolds\theta_\alpha)}{R(\alpha;\bolds\theta_\alpha)}-1 \biggr|\\ &&\qquad =\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \biggl| \frac{(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha)(\bolds\eta+\bolds\epsilon)} { R(\alpha;\bolds\theta_\alpha)} \\ &&\hspace*{83pt}{}-\frac{\operatorname{tr}(\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha)\bolds\Sigma(\bolds\theta_0)) }{R(\alpha;\bolds\theta_\alpha)} \biggr|=o_p(1). \end{eqnarray*} This gives (\ref{loss eff asmp 1}). Thus (\ref{loss eff among incorrect models}) is established. (i) If $ |\mathcal{A}_n^0 |=0$, it follows from (\ref{loss eff among incorrect models}), (\ref{loss eff asmp 1}) and (C2) that $\hat{\alpha}_2$ is asymptotically loss efficient. If $ |\mathcal{A}_n^0 |= 1$ and $ \lim_{n\rightarrow\infty}p(\alpha_n^0)=\infty$, by (\ref{loss eff among incorrect models}), to show the asymptotic loss efficiency of $\hat\alpha_2$, it suffices to show that \begin{equation} \Gamma_2(\alpha) = \nu + 2L(\alpha;\bolds\theta_0) + o_p\bigl(L(\alpha;\bolds\theta_0)\bigr); \qquad \alpha\in \mathcal{A}_n^0. 
\label{uniform result 1} \end{equation} By (\ref{log like fun alpha}), (\ref{fn:KL loss 0}) and (C2), \begin{eqnarray} \label{eq:AIC} \Gamma_2(\alpha) &=& -2\ell(\alpha;\bolds \theta_0) + 2p(\alpha)+ O_p(1)\nonumber \\ &=& \nu -2\bigl\{(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds \theta_0) (\bolds\eta+\bolds\epsilon)-p(\alpha)\bigr\} \\ &&{} + 2L(\alpha;\bolds\theta_0)+ O_p(1);\qquad \alpha\in \mathcal{A}_n^0.\nonumber \end{eqnarray} Therefore, by (\ref{fn:KL loss 0}), (\ref{fn:KL risk 0}) and an argument similar to that used to prove (\ref{loss eff among incorrect models}), we have \begin{eqnarray} \biggl|\frac{ (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha_n^0;\bolds\theta_0)(\bolds\eta+\bolds\epsilon) -p(\alpha_n^0)}{p(\alpha_n^0)} \biggr| &=& o_p(1), \label{uniform result 1-1} \\ \biggl|\frac{L(\alpha_n^0;\bolds\theta_0)}{R(\alpha_n^0;\bolds\theta_0)}-1 \biggr| &=& o_p(1). \label{uniform result 1-3} \end{eqnarray} These together with (\ref{eq:AIC}) give (\ref{uniform result 1}). If $ |\mathcal{A}_n^0 |=1$ and $ \limsup_{n\rightarrow\infty}p(\alpha_n^0)<\infty$, then the consistency and the asymptotic loss efficiency are ensured by \begin{eqnarray} L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr) - L\bigl(\alpha_n^0; \hat{\bolds\theta}\bigl(\alpha_n^0\bigr)\bigr) &\mathop{\rightarrow}\limits^{P}& \infty, \label{AIC:eq2} \\ \Gamma_2(\alpha)-\Gamma_2\bigl(\alpha_n^0 \bigr) &\mathop{\rightarrow}\limits^{P}& \infty, \label{AIC:eq3} \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n\setminus\{\alpha_n^0\}$, as $n\rightarrow\infty$.
First, (\ref{AIC:eq2}) follows from \begin{eqnarray} \label{smallest loss} L\bigl(\alpha_n^0;\hat{\bolds\theta}\bigl( \alpha_n^0\bigr)\bigr)& =& L\bigl(\alpha_n^0; \bolds\theta_0\bigr) + O_p(1)\nonumber \\ &=& \tfrac{1}{2}(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta_0) \mathbf{M}\bigl( \alpha_n^0;\bolds\theta_0\bigr) (\bolds\eta+ \bolds\epsilon) +O_p(1) \\ &=& o_p\bigl(L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr)\bigr),\nonumber \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n\setminus\{\alpha_n^0\}$, where the first equality follows from (C2), the second equality follows from (\ref{fn:KL loss}) and the last equality follows from (\ref{loss eff asmp 1}), (C2), (C3) and $ \limsup_{n\rightarrow\infty}p(\alpha_n^0)<\infty$. It remains to prove (\ref{AIC:eq3}). By (\ref{eq:AIC}), we have \begin{eqnarray} \label{AIC for correct model0} \Gamma_2\bigl(\alpha_n^0\bigr) &=& \nu -(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}( \bolds\theta_0) \mathbf{M}\bigl(\alpha_n^0; \bolds\theta_0\bigr) (\bolds\eta+\bolds\epsilon)+2p\bigl( \alpha_n^0\bigr)+ O_p(1) \nonumber \\[-8pt] \\[-8pt] \nonumber &=& \nu + o_p\bigl(L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr) \bigr), \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n\setminus\{\alpha_n^0\}$, where the last equality follows from an argument similar to that used to prove (\ref{smallest loss}). This together with (\ref{loss eff among incorrect models}) implies (\ref{AIC:eq3}). This completes the proof of (i). (ii) First, suppose that (\ref{infty correct models}) is satisfied. In view of (\ref{loss eff among incorrect models}), it suffices to show that~(\ref{uniform result 1}) holds uniformly for $\alpha\in\mathcal{A}_n^0$. 
Similarly to the proofs of (\ref{uniform result 1-1}) and (\ref{uniform result 1-3}), we only need to show that \begin{eqnarray} \sup_{\alpha\in\mathcal{A}_n^0} \biggl|\frac{ (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)(\bolds\eta+\bolds\epsilon) -p(\alpha)}{p(\alpha)} \biggr| &=& o_p(1), \label{uniform result 1-2} \\ \sup_{\alpha\in\mathcal{A}_n^0} \biggl|\frac{L(\alpha;\bolds\theta_0)}{R(\alpha;\bolds\theta_0)}-1 \biggr| &=& o_p(1). \label{uniform result 1-4} \end{eqnarray} By an argument similar to that used to prove (\ref{loss eff asmp 2}), we have \begin{eqnarray*} && P \biggl\{ \sup_{\alpha\in\mathcal{A}_n^0} \biggl|\frac{ (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)(\bolds\eta+\bolds\epsilon)-p(\alpha)} {p(\alpha)}\biggr |>\varepsilon \biggr\} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n^0}\frac{\mathrm{E} |(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)(\bolds\eta+\bolds\epsilon)-p(\alpha) |^{2m}}{\varepsilon^{2m}p^{2m}(\alpha)} \\ &&\qquad \leq \sum _{\alpha\in\mathcal{A}_n^0}\frac{c_7}{\varepsilon^{2m}p^m(\alpha)}, \end{eqnarray*} for some constant $c_7>0$, as $n\rightarrow\infty$. This together with (\ref{fn:KL loss 0}), (\ref{fn:KL risk 0}) and (\ref{infty correct models}) gives~(\ref{uniform result 1-2}) and (\ref{uniform result 1-4}). Therefore, (\ref{uniform result 1}) holds uniformly for $\alpha\in\mathcal{A}_n^0$. Finally, suppose that (\ref{infty correct models 2}) is satisfied. If $ \lim_{n\rightarrow\infty}p(\alpha_n^0)=\infty$, it implies (\ref{infty correct models}) and hence $\hat\alpha_2$ is asymptotically loss efficient. 
If $ \limsup_{n\rightarrow\infty}p(\alpha_n^0)<\infty$, by (\ref{AIC:eq2}) and (\ref{AIC:eq3}), it remains to show that \begin{eqnarray} L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr)- L\bigl(\alpha_n^0; \hat{\bolds\theta}\bigl(\alpha_n^0\bigr)\bigr) &\mathop{\rightarrow}\limits^{P}& \infty, \label{AIC:eq4} \\ \Gamma_2(\alpha)-\Gamma_2\bigl(\alpha_n^0 \bigr) &\mathop{\rightarrow}\limits^{P}& \infty, \label{AIC:eq5} \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n^0\setminus\{\alpha_n^0\}$, as $n\rightarrow\infty$. First, we prove (\ref{AIC:eq4}). By (\ref{fn:KL risk 0}) and (C2), \begin{eqnarray} \label{AIC:eq6} &&L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr)-L\bigl( \alpha_n^0;\hat{\bolds\theta}\bigl(\alpha_n^0 \bigr)\bigr)\nonumber \\ &&\qquad= L(\alpha;\bolds\theta_0)-L\bigl(\alpha_n^0; \bolds\theta_0\bigr) + O_p(1) \nonumber \\[-8pt] \\[-8pt] \nonumber &&\qquad= \tfrac{1}{2}(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta_0) \bigl\{\mathbf{M}(\alpha;\bolds \theta_0)-\mathbf{M}\bigl(\alpha_n^0;\bolds \theta_0\bigr)\bigr\}(\bolds\eta+\bolds\epsilon)+O_p(1) \\ &&\qquad= \tfrac{1}{2} \bigl(p(\alpha)-p\bigl(\alpha_n^0 \bigr) \bigr)+ o_p\bigl(p(\alpha)-p\bigl(\alpha_n^0 \bigr)\bigr),\nonumber \end{eqnarray} uniformly for $\alpha\in\mathcal{A}_n^0\setminus\{\alpha_n^0\}$, where the last equality follows from \begin{eqnarray*} &&\sup_{\alpha\in\mathcal{A}_n^0\setminus\{\alpha_n^0\}}\biggl|\frac{ (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)(\bolds\eta+\bolds\epsilon)-p(\alpha)}{p(\alpha)-p(\alpha_n^0)} \\ &&\hspace*{21pt}\qquad{} -\frac{ (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha_n^0;\bolds\theta_0)(\bolds\eta+\bolds\epsilon) -p(\alpha_n^0)}{p(\alpha)-p(\alpha_n^0)}\biggr | =o_p(1), \end{eqnarray*} which can be obtained in a way similar to the proof of (\ref{uniform result 1-1}). This together with~(\ref{infty correct models 2}) gives (\ref{AIC:eq4}). 
Next, we prove (\ref{AIC:eq5}). By (\ref{eq:AIC}) and (\ref{AIC:eq6}), we have \begin{eqnarray*} \Gamma_2(\alpha)-\Gamma_2\bigl(\alpha_n^0 \bigr) &=& 2L\bigl(\alpha;\hat{\bolds\theta}(\alpha)\bigr)-2L\bigl( \alpha_n^0;\hat{\bolds\theta}\bigl(\alpha_n^0 \bigr)\bigr) +o_p\bigl(p(\alpha)-p\bigl(\alpha_n^0 \bigr)\bigr) \\ &=& p(\alpha)-p\bigl(\alpha_n^0\bigr)+o_p \bigl(p(\alpha)-p\bigl(\alpha_n^0\bigr)\bigr), \end{eqnarray*} uniformly for $\alpha\in\mathcal{A}_n^0\setminus\{\alpha_n^0\}$. This together with (\ref{infty correct models 2}) gives (\ref{AIC:eq5}). This completes the proof of (ii). \end{pf} \begin{rem} When $\bolds\theta=\bolds\theta_0$ is known, Theorem~\ref{theorem:unknown AIC} reduces to the standard asymptotic theory of AIC in linear regression; see Theorem~1 of \citet{Shao1997}. In this case, (C1), (C2), (C4) and (C5) are not needed. \end{rem} \begin{rem} Although Theorem~\ref{theorem:unknown AIC} only obtains the consistency of $\hat\alpha_2$ under $ \limsup_{n\rightarrow\infty}p(\alpha_n^0)<\infty$, the consistency result can be extended to $ \lim_{n\rightarrow\infty}p(\alpha_n^0)=\infty$ if $p(\alpha_n^0)=o ( \inf_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}R(\alpha;\bolds\theta_0) )$. \end{rem} \begin{rem} When $ |\mathcal{A}_n^0 |\geq 2$, AIC is generally not able to identify $\alpha_n^0$ almost surely. A heavier penalty ${\tau_n}$ of GIC (e.g., BIC) is needed for consistency. \end{rem} \begin{thmm} Consider the data generated from (\ref{geo data}) and the model given by (\ref{setup}) and (\ref{Sigma}) with $\bolds{\theta}_0$ being the true covariance parameter vector [i.e., $\operatorname{var}(\mathbf{Z})= \bolds\Sigma(\bolds\theta_0)$]. Suppose that \textup{(C1)--(C5)} are satisfied. 
In addition, suppose that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$, and for $\bolds\theta_\alpha$ defined in \textup{(C2)}, \begin{equation} \lim_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}\frac{\tau_n p_n}{R(\alpha;\bolds\theta_\alpha)} =0. \label{eq:C6} \end{equation} \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}_n^0 |=0$, then $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. \item[(ii)] If $ |\mathcal{A}_n^0 |\geq 1$ and \begin{equation} \lim_{n\rightarrow\infty}\sum_{\alpha\in\mathcal{A}_n^0} \frac{1}{p^m(\alpha)}<\infty, \label{cond for GIC} \end{equation} for some $m>0$, then $\hat\alpha_{\tau_n}$ is consistent. \end{longlist} \label{theorem:unknown GIC} \end{thmm} \begin{pf} (i) By (\ref{loss eff among incorrect models}) and (\ref{eq:C6}), we have \begin{equation} \Gamma_{\tau_n}(\alpha) =\nu + 2L(\alpha;\bolds\theta_\alpha) + o_p\bigl(L(\alpha;\bolds\theta_\alpha)\bigr), \label{loss eff among incorrect models 2} \end{equation} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$. Thus by (\ref{loss eff asmp 1}) and (C2), $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. (ii) By (\ref{log like fun alpha}) and (C2), we have for $\alpha\in\mathcal{A}_n^0$, \begin{eqnarray} \label{GIC:eq2} \Gamma_{\tau_n}(\alpha)& =& -2\ell(\alpha;\bolds \theta_0) + {\tau_n} p(\alpha)+ O_p(1) \nonumber \\[-8pt] \\[-8pt] \nonumber &=& \nu - (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}( \bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0) (\bolds \eta+\bolds\epsilon) +{\tau_n} p(\alpha) + O_p(1), \end{eqnarray} where $\nu$ is defined in (\ref{loss eff among incorrect models}). By (\ref{cond for GIC}) and an argument similar to that used to prove~(\ref{uniform result 1-2}), we have \begin{equation} \sup_{\alpha\in\mathcal{A}_n^0} \biggl| \frac{(\bolds\eta+\bolds\epsilon)' \bolds\Sigma^{-1}(\bolds\theta_0)\mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+\bolds\epsilon)-p(\alpha)}{{\tau_n} p(\alpha)} \biggr|=o_p(1). 
\label{GIC:eq1} \end{equation} This and (\ref{GIC:eq2}) give \begin{equation} \Gamma_{\tau_n}(\alpha) = \nu + (\tau_n-1) p(\alpha) + o_p\bigl({\tau_n} p(\alpha)\bigr), \label{GIC:eq3} \end{equation} uniformly for $\alpha\in\mathcal{A}_n^0$. Thus \begin{equation} \lim_{n\rightarrow\infty}P\bigl\{\hat{\alpha}_{\tau_n}\in \mathcal{A}_n^0\setminus\bigl\{\alpha_n^0 \bigr\}\bigr\} =0. \label{GIC:eq4} \end{equation} By (\ref{eq:C6}), (\ref{loss eff among incorrect models 2}) and (\ref{GIC:eq3}), we have \[ \min_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \Gamma_{\tau_n}(\alpha) - \Gamma_{\tau_n}\bigl(\alpha_n^0\bigr) \mathop{\rightarrow}\limits^{P}\infty, \] as $n\rightarrow\infty$. This together with (\ref{GIC:eq4}) implies that $\hat\alpha_{\tau_n}$ is consistent. This completes the proof. \end{pf} Unlike the KL loss function in usual linear regression models, $L(\alpha;\hat{\bolds\theta}(\alpha))$ does not necessarily attain its minimum at $\alpha=\alpha^{0}_{n}$, and hence selection consistency may not lead to asymptotic loss efficiency in geostatistical regression models. Nevertheless, when $\bolds\theta=\bolds\theta_0$ is known, Theorem~\ref{theorem:unknown GIC} reduces to the standard asymptotic theory of GIC in linear regression [see Theorem~2 of \citet{Shao1997}], in which selection consistency is known to imply asymptotic loss efficiency. This property continues to hold if $\hat{\bolds{\theta}}(\alpha)$ in (\ref{unknown GIC}) and (\ref{loss efficiency}) is replaced by a common estimate $\hat{\bolds{\theta}}$, independent of $\alpha$. Then for $\alpha\in\mathcal{A}_n^0\setminus\{\alpha_n^0\}$, \[ L(\alpha;\hat{\bolds\theta})-L\bigl(\alpha_n^0;\hat{\bolds\theta}\bigr) =(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\hat{\bolds\theta}) \bigl(\mathbf{M}(\alpha;\hat{\bolds \theta})-\mathbf{M}\bigl(\alpha_n^0;\hat{\bolds\theta} \bigr)\bigr) (\bolds\eta+\bolds\epsilon)\geq 0, \] almost surely.
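The nonnegativity in the last display can be checked numerically. The sketch below assumes that $\mathbf{M}(\alpha;\bolds\theta)$ is the GLS hat matrix $\mathbf{X}(\alpha)(\mathbf{X}(\alpha)'\bolds\Sigma^{-1}(\bolds\theta)\mathbf{X}(\alpha))^{-1}\mathbf{X}(\alpha)'\bolds\Sigma^{-1}(\bolds\theta)$, a form consistent with its use in this section though not defined in this excerpt; for nested models, $\mathbf{M}(\alpha;\hat{\bolds\theta})-\mathbf{M}(\alpha_n^0;\hat{\bolds\theta})$ is then a projection in the $\bolds\Sigma^{-1}(\hat{\bolds\theta})$ inner product, which forces the quadratic form to be nonnegative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
B = rng.standard_normal((n, n))
# SPD stand-in for Sigma^{-1}(theta_hat), common to both models.
Sigma_inv = np.linalg.inv(B @ B.T + n * np.eye(n))

def gls_hat(X):
    # Assumed form of M(alpha; theta): X (X' Sigma^{-1} X)^{-1} X' Sigma^{-1}.
    return X @ np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv)

X_small = rng.standard_normal((n, 2))                      # alpha_n^0
X_big = np.hstack([X_small, rng.standard_normal((n, 3))])  # larger nested model
z = rng.standard_normal(n)                                 # stand-in for eta + epsilon
diff = float(z @ Sigma_inv @ (gls_hat(X_big) - gls_hat(X_small)) @ z)
```

Here `diff` is the loss difference in the display and is nonnegative up to floating-point error for any draw of `z`.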
\begin{coro}\label{coro:unknown GIC} Consider the data generated from (\ref{geo data}) and the model defined in (\ref{setup}) and (\ref{Sigma}) with $\bolds{\theta}_0$ being the true covariance parameter vector [i.e., $\operatorname{var}(\mathbf{Z})= \bolds\Sigma(\bolds\theta_0)$]. Suppose that \textup{(C1)--(C5)} are satisfied with $\hat{\bolds{\theta}}(\alpha)$ and $\bolds{\theta}_\alpha$ in \textup{(C2)--(C5)} being replaced by $\hat{\bolds\theta}$ and a constant vector $\bolds{\theta}_c\in\Theta$, independent of $\alpha$. Let $\hat{\alpha}_{\tau_n}$ be the model selected by a modified GIC criterion with $\hat{\bolds{\theta}}(\alpha)$ in (\ref{unknown GIC}) being replaced by $\hat{\bolds\theta}$. In addition, suppose that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$, and $ \lim_{n\rightarrow\infty} \sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0}\frac{\tau_n p_n} {R(\alpha;\bolds\theta_c)}=0$. \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}_n^0 |=0$, then $\hat\alpha_{\tau_n}$ is asymptotically loss efficient in the sense that $L(\hat{\alpha}_{\tau_n};\hat{\bolds\theta}) / \inf_{\alpha\in\mathcal{A}_n}L(\alpha;\hat{\bolds\theta}) \mathop{\rightarrow}\limits^{P}1$, as $n\rightarrow\infty$. \item[(ii)] If $ |\mathcal{A}_n^0 |\geq 1$ and (\ref{cond for GIC}) holds, then $\hat\alpha_{\tau_n}$ is consistent and asymptotically loss efficient in the sense that $L(\hat{\alpha}_{\tau_n};\hat{\bolds\theta}) / \inf_{\alpha\in\mathcal{A}_n}L(\alpha;\hat{\bolds\theta}) \mathop{\rightarrow}\limits^{P}1$, as $n\rightarrow\infty$. \end{longlist} \end{coro} \section{Variable selection under an incorrect covariance model} \label{section:covariance model selection} In this section, we establish the asymptotic theory of GIC for variable selection, when the covariance model is mis-specified with $\bolds{\Sigma}_0\neq\bolds{\Sigma}(\bolds{\theta}_0)$, yielding $L_0(\bolds{\theta}_0)\neq 0$. 
To ensure that the asymptotic optimality of GIC for $\bolds{\Sigma}_0=\bolds{\Sigma}(\bolds{\theta}_0)$ carries over to this case, we need a stronger condition in place of (C4): \begin{enumerate} \item[(C4$'$)] For $\bolds\theta_\alpha$ defined in (C2), \[ \lim_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{p_n}{R(\alpha;\bolds\theta_\alpha)-L_0(\bolds{\theta}_0)} =0. \] \end{enumerate} \begin{thmm}\label{theorem:unknown AIC 2} Consider the data generated from (\ref{geo data}) and the model given by (\ref{setup}) and (\ref{Sigma}). Suppose that the conditions \textup{(C1)--(C3), (C4$'$)} and \textup{(C5)} are satisfied: \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}_n^0 |\leq 1$, then $\hat\alpha_2$ is asymptotically loss efficient. If $ |\mathcal{A}_n^0 |= 1$, then $\hat\alpha_2$ is consistent. \item[(ii)] If $ |\mathcal{A}_n^0 |\geq 2$ for sufficiently large $n$, $ |\mathcal{A}_n^0 |^q=o(L_0(\bolds\theta_0))$ for some $q>0$, and \begin{equation} \lim_{n\rightarrow\infty}\frac{p_n}{L_0(\bolds\theta_0)} =0, \label{eq:C7} \end{equation} then $\hat\alpha_2$ is asymptotically loss efficient. \end{longlist} \end{thmm} \begin{pf} Let $L^*(\alpha;\bolds\theta_\alpha)=L(\alpha;\bolds\theta_\alpha)-L_0(\bolds{\theta}_0)$; $\alpha\in\mathcal{A}_n\setminus\mathcal{A}^0_n$. We begin by showing that \begin{equation} \Gamma_2(\alpha) = \nu+ 2L^*(\alpha;\bolds\theta_\alpha) + o_p\bigl(L^*(\alpha;\bolds\theta_\alpha)\bigr), \label{loss eff among incorrect models 3} \end{equation} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, where $\nu$ is defined in (\ref{loss eff among incorrect models}) and is independent of $\alpha$.
By an argument similar to that used to prove (\ref{loss eff asmp 0}), we have \begin{eqnarray*} \Gamma_2(\alpha) &=& n\log(2\pi) + \log\det(\bolds \Sigma_0) + n -\operatorname{tr}\bigl(\bolds\Sigma_0\bolds \Sigma^{-1}(\bolds\theta_\alpha)\bigr) \\ &&{} + (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_0) (\bolds\eta+\bolds\epsilon) \\ &&{} +\operatorname{tr} \bigl(\bigl((\bolds\eta+\bolds\epsilon) (\bolds\eta+\bolds \epsilon)'-\bolds\Sigma_0\bigr) \bigl(\bolds \Sigma^{-1}(\bolds\theta_\alpha)-\bolds\Sigma^{-1}( \bolds\theta_0)\bigr) \bigr) \\ &&{} +2L(\alpha;\bolds\theta_\alpha) -2(\bolds\eta+\bolds \epsilon)'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+\bolds\epsilon)+2p( \alpha) \\ &&{} +2\bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon)+ o_p\bigl(R(\alpha;\bolds\theta_\alpha)\bigr) \\ &=& \nu +\operatorname{tr} \bigl(\bigl((\bolds\eta+\bolds\epsilon) (\bolds\eta+ \bolds\epsilon)'-\bolds\Sigma_0\bigr) \bigl(\bolds \Sigma^{-1}(\bolds\theta_\alpha)-\bolds\Sigma^{-1}( \bolds\theta_0)\bigr) \bigr) \\ &&{} + 2L^*(\alpha;\bolds\theta_\alpha)-2(\bolds\eta+\bolds \epsilon)'\bolds\Sigma^{-1}(\bolds\theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+\bolds\epsilon)+2p( \alpha) \\ &&{} +2\bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon)+ o_p\bigl(R(\alpha;\bolds\theta_\alpha)\bigr), \end{eqnarray*} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$.
Hence by (C5) and an argument similar to that used to prove (\ref{loss eff among incorrect models}), for (\ref{loss eff among incorrect models 3}) to hold, it suffices to show that \begin{eqnarray*} (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{M}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon)-p(\alpha) &=& o_p\bigl(R(\alpha;\bolds \theta_\alpha)-L_0(\bolds{\theta}_0)\bigr), \\ \bolds\mu_0'\bolds\Sigma^{-1}(\bolds \theta_\alpha) \mathbf{A}(\alpha;\bolds\theta_\alpha) (\bolds\eta+ \bolds\epsilon) &=& o_p\bigl(R(\alpha;\bolds\theta_\alpha)-L_0( \bolds{\theta}_0)\bigr), \end{eqnarray*} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, and \begin{equation} \sup_{\alpha\in\mathcal{A}_n \setminus\mathcal{A}_n^0} \biggl|\frac{L^*(\alpha;\bolds\theta_\alpha)} {R(\alpha;\bolds\theta_\alpha)-L_0(\bolds{\theta}_0)}-1 \biggr|=o_p(1). \label{loss eff asmp 6} \end{equation} The above three equations follow from arguments similar to those used to prove~(\ref{loss eff asmp 2})--(\ref{loss eff asmp 1}). (i) Clearly, (\ref{loss eff among incorrect models 3}) implies (\ref{loss eff among incorrect models}). Therefore, if $ |\mathcal{A}_n^0 |= 0$, it follows from (\ref{loss eff asmp 6}) and (C2) that $\hat{\alpha}_2$ is asymptotically loss efficient. On the other hand, if \mbox{$ |\mathcal{A}_n^0 |=1$}, it suffices to show (\ref{AIC:eq2}) and (\ref{AIC:eq3}). First, we prove (\ref{AIC:eq2}). By (C3), (C4$'$) and an argument similar to that used to prove (\ref{smallest loss}), we have $L^*(\alpha_n^0;\hat{\bolds\theta}(\alpha_n^0))=o_p(L^*(\alpha;\hat{\bolds\theta}(\alpha)))$, uniformly for $\alpha\in\mathcal{A}_n\setminus\{\alpha_n^0\}$. Next, we prove (\ref{AIC:eq3}). By (C3), (C4$'$) and an argument similar to that used to prove (\ref{AIC for correct model0}), we have $\Gamma_2(\alpha_n^0)=\nu^* + o_p(L^*(\alpha;\hat{\bolds\theta}(\alpha)))$, where $\nu^*=\nu-2L_0(\bolds\theta_0)$, uniformly for $\alpha\in\mathcal{A}_n\setminus\{\alpha_n^0\}$.
This together with (\ref{loss eff among incorrect models 3}) implies (\ref{AIC:eq3}), and hence the proof of (i) is complete. (ii) In view of (\ref{loss eff among incorrect models}), it suffices to show that \begin{equation} \Gamma_2(\alpha) = \nu^* + 2L(\alpha;\bolds\theta_0) + o_p\bigl(L(\alpha;\bolds\theta_0)\bigr), \label{AIC for correct model-3} \end{equation} uniformly for $\alpha\in\mathcal{A}_n^0$, where $\nu^*=\nu-2L_0(\bolds\theta_0)$ with $\nu$ being defined in (\ref{loss eff among incorrect models}). By an argument similar to that used to prove (\ref{eq:AIC}), we have \begin{eqnarray} \label{loss eff among incorrect models 7} \Gamma_2(\alpha) &=& \nu^* -2\bigl\{(\bolds\eta+\bolds \epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+\bolds\epsilon)-p( \alpha)\bigr\} \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} + 2L(\alpha;\bolds\theta_0)+ O_p(1); \qquad\alpha\in \mathcal{A}_n^0. \end{eqnarray} Therefore, by an argument similar to that used to prove (\ref{uniform result 1}), we only need to show that \begin{equation} (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_0) \mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+ \bolds\epsilon)-p(\alpha) = o_p\bigl(L(\alpha;\bolds \theta_0)\bigr), \label{loss eff asmp 7} \end{equation} uniformly for $\alpha\in\mathcal{A}_n^0$ and \begin{equation} \sup_{\alpha\in\mathcal{A}_n^0} \biggl|\frac{L(\alpha;\bolds{\theta}_0)}{R(\alpha;\bolds{\theta}_0)}-1 \biggr| =o_p(1). \label{loss eff asmp 8} \end{equation} First, we prove (\ref{loss eff asmp 7}). Clearly, by (\ref{fn:M}) and (C1), we have \begin{equation} \mathrm{E}\bigl((\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds \theta_0) (\bolds\eta+\bolds\epsilon)\bigr)=c(\alpha)p(\alpha), \label{AIC:eq1} \end{equation} where $ \limsup_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n}c(\alpha)<\infty$. 
Hence by (\ref{fn:KL risk}) and (\ref{eq:C7}), $c(\alpha)p(\alpha)-p(\alpha)=o(R(\alpha;\bolds\theta_0))$ uniformly for $\alpha\in\mathcal{A}_n^0$. It remains to show that \[ (\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds \theta_0) \mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+ \bolds\epsilon) - c(\alpha)p(\alpha) = o_p\bigl(R(\alpha;\bolds \theta_0)\bigr), \] uniformly for $\alpha\in\mathcal{A}_n^0$. Applying Chebyshev's inequality, we have for any $\varepsilon>0$, \begin{eqnarray*} && P \biggl\{\sup_{\alpha\in\mathcal{A}_n^0}\biggl |\frac{(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)(\bolds\eta+\bolds\epsilon)-c(\alpha)p(\alpha)}{ R(\alpha;\bolds\theta_0)} \biggr| >\varepsilon \biggr\} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n^0} \frac{\mathrm{E} |(\bolds\eta+\bolds\epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)(\bolds\eta+\bolds\epsilon)-c(\alpha)p(\alpha) |^{2m}}{ \varepsilon^{2m}R^{2m}(\alpha;\bolds\theta_0)} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n^0} \frac{c_1\{\operatorname{tr}(\bolds\Sigma_0\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0)\bolds\Sigma_0\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0))\}^m}{ \varepsilon^{2m}R^{2m}(\alpha;\bolds\theta_0)} \\ &&\qquad\leq \sum_{\alpha\in\mathcal{A}_n^0} \frac{c_2 p^m(\alpha)} { \varepsilon^{2m}L_0^{2m}(\bolds\theta_0)} \leq \sum _{\alpha\in\mathcal{A}_n^0} \frac{c_3}{\varepsilon^{2m}L_0^m(\bolds\theta_0)}, \end{eqnarray*} where the second-to-last inequality follows from (C1) and $R(\alpha;\bolds\theta_0)\geq L_0(\bolds\theta_0)$, for $\alpha\in\mathcal{A}_n$, and the last inequality follows from (\ref{eq:C7}). Taking $m=1/q$, we obtain (\ref{loss eff asmp 7}). Next, we prove (\ref{loss eff asmp 8}). 
By (\ref{fn:KL loss}), (\ref{fn:KL risk}) and (\ref{AIC:eq1}), we have for $\alpha\in\mathcal{A}_n^0$, \[ L(\alpha;\bolds\theta_0)-R(\alpha;\bolds\theta_0) = \tfrac{1}{2} \bigl\{(\bolds\eta+\bolds\epsilon)'\bolds \Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds \theta_0) (\bolds\eta+\bolds\epsilon) - c(\alpha)p(\alpha) \bigr\}, \] where $ \limsup_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n^0}c(\alpha)<\infty$. Thus (\ref{loss eff asmp 8}) follows from an argument similar to that used to prove (\ref{loss eff asmp 7}).\vspace*{1pt} Thus we obtain (\ref{AIC for correct model-3}). This completes the proof. \end{pf} \begin{thmm}\label{theorem:unknown GIC 2} Under the setup of Theorem~\ref{theorem:unknown AIC 2}, suppose that\break $ \lim_{n\rightarrow\infty}\tau_n =\infty$, and \begin{equation} \lim_{n\rightarrow\infty}\sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \frac{\tau_n p_n}{R(\alpha;\bolds\theta_\alpha)-L_0(\bolds{\theta}_0)} = 0. \label{eq:C8} \end{equation} \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}_n^0 |=0$, then $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. \item[(ii)] If $ |\mathcal{A}_n^0 |\geq 1$, $ |\mathcal{A}_n^0 |^q=o(L_0(\bolds\theta_0))$ for some $q>0$, and (\ref{cond for GIC}) is satisfied, then $\hat\alpha_{\tau_n}$ is consistent and asymptotically loss efficient. \end{longlist} \end{thmm} \begin{pf} (i) By (\ref{loss eff among incorrect models 3}) and (\ref{eq:C8}), we have $\Gamma_{\tau_n}(\alpha) = \nu + 2L^*(\alpha;\bolds\theta_\alpha) +\break o_p(L^*(\alpha; \bolds\theta_\alpha))$, uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$, and hence \begin{equation} \Gamma_{\tau_n}(\alpha) = \nu^* + 2L(\alpha;\bolds\theta_\alpha) + o_p\bigl(L(\alpha;\bolds\theta_\alpha)\bigr), \label{loss eff among incorrect models 5} \end{equation} uniformly for $\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0$. 
In addition, (\ref{loss eff asmp 6}) gives \begin{equation} \sup_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \biggl |\frac{L(\alpha;\bolds\theta_\alpha)}{R(\alpha;\bolds\theta_\alpha)}-1 \biggr|=o_p(1). \label{loss eff among incorrect models 6} \end{equation} These together with (C2) imply that $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. (ii) First, we prove the asymptotic loss efficiency of $\hat\alpha_{\tau_n}$. By (\ref{loss eff asmp 8}) and (\ref{loss eff among incorrect models 6}), we have \begin{equation} \sup_{\alpha\in\mathcal{A}_n} \biggl|\frac{L(\alpha;\bolds{\theta}_0)}{R(\alpha;\bolds{\theta}_0)}-1 \biggr| =o_p(1). \label{loss eff asmp 9} \end{equation} By (\ref{eq:C8}) and an argument similar to that used to prove (\ref{AIC for correct model-3}), we have \[ \Gamma_{\tau_n}(\alpha) = \nu^* + 2 L(\alpha;\bolds\theta_0) + o_p\bigl(L(\alpha;\bolds\theta_0)\bigr), \] uniformly for $\alpha\in\mathcal{A}_n^0$. This together with (\ref{loss eff among incorrect models 5}), (\ref{loss eff asmp 9}) and (C2) implies that $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. Next, we prove the consistency of $\hat\alpha_{\tau_n}$. By (\ref{loss eff among incorrect models 7}) and (\ref{AIC:eq1}), we have for $\alpha\in\mathcal{A}_n^0$, \begin{eqnarray} \label{GIC2:eq2} \Gamma_{\tau_n}(\alpha) &=& \nu - \bigl\{(\bolds\eta+\bolds \epsilon)'\bolds\Sigma^{-1}(\bolds\theta_0) \mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+\bolds\epsilon)-c( \alpha)p(\alpha)\bigr\} \nonumber \\[-8pt] \\[-8pt] \nonumber &&{} +\bigl(\tau_n-c(\alpha)\bigr) p(\alpha) + o_p\bigl( \tau_n p(\alpha)\bigr). \end{eqnarray} By (\ref{cond for GIC}) and an argument similar to that used to prove (\ref{GIC:eq1}), we have \[ \sup_{\alpha\in\mathcal{A}_n^0} \biggl| \frac{(\bolds\eta+\bolds\epsilon)' \bolds\Sigma^{-1}(\bolds\theta_0)\mathbf{M}(\alpha;\bolds\theta_0) (\bolds\eta+\bolds\epsilon)-c(\alpha)p(\alpha)}{{\tau_n} p(\alpha)} \biggr|=o_p(1). 
\] Hence by (\ref{GIC2:eq2}), \begin{equation} \Gamma_{\tau_n}(\alpha) = \nu + \bigl(\tau_n-c(\alpha)\bigr) p(\alpha) + o_p\bigl({\tau_n} p(\alpha)\bigr), \label{GIC2:eq3} \end{equation} uniformly for $\alpha\in\mathcal{A}_n^0$. Thus we obtain (\ref{GIC:eq4}). In addition, by (\ref{eq:C8}), (\ref{loss eff among incorrect models 5}) and~(\ref{GIC2:eq3}), \[ \min_{\alpha\in\mathcal{A}_n\setminus\mathcal{A}_n^0} \Gamma_{\tau_n}(\alpha) - \Gamma_{\tau_n}\bigl(\alpha_n^0\bigr) \mathop{\rightarrow}\limits^{P}\infty, \] as $n\rightarrow\infty$. This together with (\ref{GIC:eq4}) implies that $\hat\alpha_{\tau_n}$ is consistent. This completes the proof. \end{pf} \begin{rem} Recall that in (ii) of Theorem~\ref{theorem:unknown GIC}, asymptotic loss efficiency of GIC is generally not satisfied, unless $\hat{\bolds{\theta}}(\alpha)$'s are replaced by a common estimate. In contrast, in (ii) of Theorem~\ref{theorem:unknown GIC 2}, we have, from (\ref{fn:KL loss}) and an argument similar to that used to prove (\ref{loss eff asmp 7}) that $L(\alpha;\bolds\theta_0)=L_0(\bolds\theta_0)+ o_p(L_0(\bolds\theta_0))$, uniformly for $\alpha\in\mathcal{A}_n^0$, which leads to \[ \frac{L(\alpha;\hat{\bolds\theta}(\alpha))}{ \min_{\alpha'\in\mathcal{A}_n} L(\alpha';\hat{\bolds\theta}(\alpha'))} \mathop{\rightarrow}\limits^{P}1, \] for any $\alpha\in\mathcal{A}_n^0$, indicating that the asymptotic loss efficiency can be achieved for any correct model. \end{rem} \section{Examples} \label{section:examples} In this section, we provide some specific examples for GIC that satisfy regularity conditions (C1)--(C5). Throughout this section, we assume that $p_n=p$, $\mathcal{A}_n=\mathcal{A}$, $\mathcal{A}_n^0=\mathcal{A}^0$ and $\alpha_n^0=\alpha^0$ are fixed, and give proofs of the theoretical results in the supplemental material [Chang, Huang and Ing (\citeyear{supp})]. 
\subsection{One-dimensional examples} \label{one dim exp model} First, we consider spatial models in the one-dimensional space with $D=[0,n^\delta]\subseteq\mathbb{R}$; $\delta\in[0,1)$. We assume the exponential covariance model for $\eta(\cdot)$, \begin{equation} \operatorname{cov}\bigl(\eta(s),\eta\bigl(s^*\bigr)\bigr) = \sigma^2 \exp\bigl(-\kappa\bigl|s-s^*\bigr|\bigr);\qquad s, s^*\in D, \label{exp cov fun 1 dim} \end{equation} where $\sigma^2>0$ is the variance parameter, and $\kappa>0$ is a spatial dependence parameter. We also assume that the data are uniformly sampled at $s_i=i n^{-(1-\delta)}$; $i=1,\ldots,n, s_i\in D$. Clearly, $\delta=0$ corresponds to the fixed domain asymptotic framework with $D=[0,1]$, and a larger $\delta$ corresponds to a faster growth rate of the domain. Note that $\sigma^2\kappa$ is often referred to as a microergodic parameter under fixed domain asymptotics [\citet{Stein1999}]. The following proposition allows us to replace (C1)--(C5) in Theorems \ref{theorem:unknown AIC} and~\ref{theorem:unknown GIC} by simpler conditions. \begin{pro}\label{pro:1 dim} Consider $\bolds\Sigma(\bolds\theta)$ in (\ref{Sigma}), where $\bolds\Sigma_\eta$ is given by (\ref{exp cov fun 1 dim}) and $s_i=in^{-(1-\delta)}$; $i=1,\ldots,n$, for some $\delta\in[0,1)$. Let $\bolds\theta=(v^2,\sigma^2,\kappa)'$. Then for any compact set $\Theta\subseteq(0,\infty)^3$ and any $\bolds\theta_0=(v_0^2,\sigma_0^2,\kappa_0)'\in\Theta$, \begin{eqnarray} \label{proposition:bound eigen} 0 &< & \liminf_{n\rightarrow\infty} \inf_{\bolds\theta\in\Theta} \lambda_{\min} \bigl( \bolds\Sigma^{-1/2}(\bolds\theta)\bolds \Sigma(\bolds\theta_0) \bolds\Sigma^{-1/2}(\bolds\theta) \bigr) \nonumber \\[-8pt] \\[-8pt] \nonumber &\leq & \limsup_{n\rightarrow\infty}\sup_{\bolds\theta\in\Theta} \lambda_{\max} \bigl(\bolds\Sigma^{-1/2}(\bolds\theta) \bolds \Sigma(\bolds\theta_0)\bolds\Sigma^{-1/2}(\bolds\theta) \bigr)< \infty. 
\end{eqnarray} \end{pro} \begin{pf} The proof follows directly from Proposition~2.1 of Chang, Huang and Ing (\citeyear{Chang2013}). \end{pf} \begin{thmm}\label{theorem of GIC in exp model} Consider the data generated from (\ref{geo data}) and the model given by (\ref{setup}) and (\ref{Sigma}) with $\bolds{\theta}_0$ being the true covariance parameter vector [i.e., $\operatorname{var}(\mathbf{Z})= \bolds\Sigma(\bolds\theta_0)$]. Assume the setup of Proposition~\ref{pro:1 dim} with $\delta\in(0,1)$. Suppose that $\hat{\bolds\theta}(\alpha)\mathop{\rightarrow}\limits^{P}\bolds\theta_\alpha$ for some $\bolds\theta_\alpha\in\Theta$; $\alpha\in\mathcal{A}$, and \begin{equation} \min_{\alpha\in\mathcal{A}\setminus\mathcal{A}^0} R(\alpha;\bolds\theta_\alpha)\rightarrow \infty, \label{assump of risk} \end{equation} as $n\rightarrow\infty$. Then $\hat\alpha_2$ is asymptotically loss efficient if $ |\mathcal{A}^0 |\leq 1$. In addition, suppose that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and $ {\tau_n}=o (\min_{\alpha\in\mathcal{A}\setminus\mathcal{A}^0}R(\alpha;\bolds\theta_\alpha) )$. \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}^0 |=0$, then $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. \item[(ii)] If $ |\mathcal{A}^0 |\geq 1$, then $\hat\alpha_{\tau_n}$ is consistent. \end{longlist} \end{thmm} \begin{rem} The assumption, $\hat{\bolds\theta}(\alpha)\mathop{\rightarrow}\limits^{P}\bolds\theta_\alpha$; $\alpha\in\mathcal{A}$, is generally satisfied under the increasing domain asymptotic framework, and is guaranteed to hold when $R(\alpha;\bolds\theta_0)=o(n^{\delta})$, for all $\alpha\in\mathcal{A}\setminus\mathcal{A}^0$; see Theorem~2.3 of Chang, Huang and Ing (\citeyear{Chang2013}). In fact, as given by Theorems \ref{theorem white-noise gic unknown}--\ref{theorem poly gic unknown}, the assumption continues to hold even if $R(\alpha;\bolds\theta_0)>cn^{\delta}$ for $\alpha\in\mathcal{A}\setminus\mathcal{A}^0$ and some constant $c>0$. 
\end{rem} Although the theorem is established under the increasing domain asymptotic framework, it remains valid in some situations even when $\hat{\bolds\theta}(\alpha)$ fails to converge for some $\alpha\in\mathcal{A}$ under the fixed domain asymptotic framework with \mbox{$\delta=0$}. As mentioned at the end of Section~\ref{subsec:geo model}, our asymptotic results for GIC remain valid for random $\mathbf{X}$. In what follows, we provide three examples based on different classes of regressors that are either random or fixed. We derive the consistency of GIC not only for $\delta\in(0,1)$ but also for $\delta=0$ without requiring the regularity conditions. The three examples below can be seen to have increasing degrees of smoothness in space, leading to different conditions to ensure the consistency of GIC. \begin{exm}[(White-noise processes)]\label{exm:white-noise} Consider $p$ regressors, $x_j(\cdot)$; $j=1,\ldots,p$, generated from independent white-noise processes with \[ x_j(s)\sim N\bigl(0,v_j^2\bigr);\qquad s\in \bigl[0,n^\delta\bigr], j=1,\ldots,p, \] for some $\delta\in[0,1)$, where $v_j^2>0$; $j=1,\ldots,p$. \end{exm} \begin{exm}[(Spatially dependent processes)]\label{exm:exp var} Consider $p$ regressors, $x_j(\cdot)$; $j=1,\ldots,p$, generated from independent zero-mean Gaussian spatial processes with covariance functions \[ \operatorname{cov}\bigl(x_j(s),x_j\bigl(s' \bigr)\bigr) = \sigma_j^2 \exp \bigl(- \kappa_j\bigl|s-s'\bigr| \bigr);\qquad s,s'\in \bigl[0,n^\delta\bigr], \] for some $\delta\in[0,1)$, where $\sigma_j^2,\kappa_j>0$; $j=1,\ldots,p$. \end{exm} \begin{exm}[(Monomials)]\label{exm:poly} Consider $p$ regressors, $x_j(\cdot)$; $j=1,\ldots,p$, \[ x_j(s)=n^{-\delta j}s^j; \qquad s\in\bigl[0,n^\delta \bigr], \] for some $\delta\in[0,1)$. 
Note that a scaling factor $n^{-\delta j}$ is introduced to standardize $x_j(\cdot)$ so that $ \frac{1}{n^\delta}\int_0^{n^\delta}(x_j(s)-\bar{x}_j)^2 \,ds$ does not depend on $n$, where $\bar{x}_j= \frac{1}{n^\delta}\int_0^{n^\delta}x_j(s)\,ds$. \end{exm} \begin{thmm}\label{theorem white-noise gic unknown} Consider the model defined in (\ref{setup}) with the white-noise regressors given by Example~\ref{exm:white-noise}. Suppose that $\mathbf{Z}\sim N(\mathbf{X}\bolds\beta_0,\bolds\Sigma(\bolds{\theta}_0))$ conditional on~$\mathbf{X}$, where $\bolds\beta_0=(\beta_{0,0},\ldots,\beta_{0,p})'\in\mathbb{R}^{p+1}$ and $\bolds\theta_0=(v_0^2,\sigma_0^2,\kappa_0)'\in\Theta\subseteq(0,\infty)^3$ are constant vectors, and $\bolds{\Sigma}(\bolds{\theta}_0)$ is given by Proposition~\ref{pro:1 dim} for some $\delta\in[0,1)$. Assume that $\Theta$ is compact and \[ \bolds{\theta}_0+ \biggl( \sum_{j\in\alpha^0\setminus\alpha} \beta_{0,j}^2v_j^2, 0, 0 \biggr)'\in\Theta; \qquad \alpha\in\mathcal{A}. \] If $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and ${\tau_n}=o(n)$, then $ \lim_{n\rightarrow\infty} P \{\hat{\alpha}_{\tau_n} = \alpha^0 \}=1$. \end{thmm} \begin{rem} Theorem~\ref{theorem white-noise gic unknown} assumes $\mathcal{A}^0\neq\varnothing$. Suppose that $\mu_0(\cdot)$ has an additional unobserved term $\zeta(\cdot)$, which is also a white-noise process, \begin{equation} \mu_0(s) = \beta_{0,0} + \sum _{j=1}^p\beta_{0,j}x_j(s) + \zeta(s);\qquad s\in D, \label{assump of mean} \end{equation} and hence $ |\mathcal{A}^0 |=0$. Then by Theorem~\ref{theorem of GIC in exp model} and an argument similar to that in the proof of Theorem~\ref{theorem white-noise gic unknown} for $\delta=0$, GIC is also asymptotically loss efficient for $\delta\in[0,1)$, provided that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and ${\tau_n}=o(n)$. 
\end{rem} \begin{thmm}\label{theorem exp var gic unknown} Consider the model defined in (\ref{setup}) with the spatially dependent regressors given by Example~\ref{exm:exp var}. Suppose that $\mathbf{Z}\sim N(\mathbf{X}\bolds\beta_0,\bolds\Sigma(\bolds{\theta}_0))$ conditional on $\mathbf{X}$, where $\bolds\beta_0=(\beta_{0,0},\ldots,\beta_{0,p})'\in\mathbb{R}^{p+1}$ and $\bolds\theta_0=(v_0^2,\sigma_0^2,\kappa_0)'\in\Theta\subseteq(0,\infty)^3$ are constant vectors, and $\bolds{\Sigma}(\bolds{\theta}_0)$ is given by Proposition~\ref{pro:1 dim} with $\delta\in[0,1)$. Assume that $\Theta$ is compact and $\bolds{\theta}_0+ (0, \sum_{j\in\alpha^0\setminus\alpha}\beta_{0,j}^2\sigma_j^2, \kappa_\alpha^* )'\in\Theta$ for any $\alpha\in\mathcal{A}$, where $ \kappa_\alpha^* = (\sigma_0^2+\sum_{j\in\alpha^0\setminus\alpha}\beta_{0,j}^2\sigma_j^2 )^{-1} (\sum_{j\in\alpha^0\setminus\alpha}\beta_{0,j}^2\sigma_j^2(\kappa_j-\kappa_0) )$. If $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and ${\tau_n}=o(n^{(1+\delta)/2})$, then $ \lim_{n\rightarrow\infty} P \{\hat{\alpha}_{\tau_n} = \alpha^0 \}=1$. \end{thmm} \begin{rem} Theorem~\ref{theorem exp var gic unknown} assumes $\mathcal{A}^0\neq\varnothing$. Suppose that $\mu_0(\cdot)$ is given by~(\ref{assump of mean}), where $\zeta(\cdot)$ is an unobserved spatially dependent process given in Example~\ref{exm:exp var}. Then by Theorem~\ref{theorem of GIC in exp model} and an argument similar to that in the proof of Theorem~\ref{theorem exp var gic unknown} for $\delta=0$, GIC is also asymptotically loss efficient for $\delta\in[0,1)$, provided that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and ${\tau_n}=o(n^{(1+\delta)/2})$. \end{rem} \begin{thmm}\label{theorem poly gic unknown} Consider the model defined in (\ref{setup}) with the monomial regressors given by Example~\ref{exm:poly}. 
Suppose that $\mathbf{Z}\sim N(\mathbf{X}\bolds\beta_0,\bolds\Sigma(\bolds{\theta}_0))$, where $\bolds\beta_0=(\beta_{0,0},\ldots,\beta_{0,p})'\in\mathbb{R}^{p+1}$ and $\bolds\theta_0=(v_0^2,\sigma_0^2,\kappa_0)'\in\Theta\subseteq(0,\infty)^3$ are constant vectors, and $\bolds{\Sigma}(\bolds{\theta}_0)$ is given by Proposition~\ref{pro:1 dim} with $\delta\in(0,1)$. Assume that $\mathcal{A}=\{\varnothing,\{1\}, \{1,2\},\ldots,\{1,\ldots,p\}\}$, $\Theta$ is compact,\vspace*{2pt} and $\bolds{\theta}_0+ (0, \gamma(k), -(\sigma_0^2+\gamma(k))^{-1}\gamma(k)\kappa_0 )'$ $\in\Theta$; $k=0,1,\ldots,p$, where $\gamma(k)=\bolds\beta_0'\mathbf{V}_{p,p}\bolds\beta_0 -\bolds\beta_0'\mathbf{V}_{p,k}\mathbf{V}_{k,k}^{-1}\*\mathbf{V}_{k,p} \bolds\beta_0$ and $\mathbf{V}_{k,p}= ( \frac{1}{i+j-1} )_{(k+1)\times (p+1)}$. If $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and ${\tau_n}=o(n^\delta)$, then $ \lim_{n\rightarrow\infty} P \{\hat{\alpha}_{\tau_n} = \alpha^0 \}=1$. \end{thmm} \begin{rem} Theorem~\ref{theorem poly gic unknown} assumes $\mathcal{A}^0\neq\varnothing$. Suppose that $\mu_0(\cdot)$ is given by~(\ref{assump of mean}), where $\zeta(s)=n^{-\delta k}s^k$; $s\in D$, is an unobserved function with $k>p$. Then by Theorem~\ref{theorem of GIC in exp model}, GIC can be shown to be asymptotically loss efficient for $\delta\in(0,1)$, provided that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and ${\tau_n}=o(n^\delta)$. \end{rem} The results of Theorems \ref{theorem white-noise gic unknown}--\ref{theorem poly gic unknown} show that the consistency of GIC depends not only on the smoothness of the regressors in space but also on the growth rate of the domain. Evidently, it is more difficult for GIC to identify the true model when the candidate regressors are smoother in space. 
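For concreteness, the three regressor classes of Examples \ref{exm:white-noise}--\ref{exm:poly} are easy to simulate on the sampling grid $s_i=in^{-(1-\delta)}$; $i=1,\ldots,n$. The sketch below is purely illustrative: the values of $n$, $\delta$, $p$ and the covariance parameters are arbitrary choices, not those used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delta, p = 200, 0.5, 3
s = np.arange(1, n + 1) * n ** (-(1.0 - delta))   # s_i = i * n^{-(1-delta)}

# Example 1 (white noise): x_j(s_i) ~ N(0, v_j^2), independent over sites
v2 = np.ones(p)
X_wn = rng.normal(scale=np.sqrt(v2), size=(n, p))

# Example 2 (spatially dependent): zero-mean Gaussian process with
# exponential covariance sigma_j^2 * exp(-kappa_j |s - s'|)
sig2, kap = 1.0, 1.0
C = sig2 * np.exp(-kap * np.abs(s[:, None] - s[None, :]))
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))     # jitter for stability
X_gp = np.column_stack([L @ rng.standard_normal(n) for _ in range(p)])

# Example 3 (monomials): x_j(s) = n^{-delta*j} * s^j, which equals 1 at the
# right endpoint s = n^delta thanks to the standardizing factor
X_poly = np.column_stack([(n ** (-delta * j)) * s ** j
                          for j in range(1, p + 1)])
```

The increasing smoothness across the three classes is visible already in small samples: the white-noise columns are rough, the Gaussian-process columns are continuous-looking, and the monomial columns are smooth.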
Although there exists ${\tau_n}$ such that GIC is consistent for either white-noise regressors or spatially dependent regressors under the fixed domain asymptotic framework, interestingly, as shown in the next theorem, consistent polynomial order selection turns out to be impossible under this framework when the true model has at least one nonzero regression coefficient and $ |\mathcal{A}^0 |\geq 2$. \begin{thmm}[(Inconsistency)]\label{thmm:inconsistent:poly} Consider the same setup as in Theorem~\ref{theorem poly gic unknown}, except that $\delta=0$: \begin{longlist}[(ii)] \item[(i)] If $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$, then $ \lim_{n\rightarrow\infty} P\{\hat{\alpha}_{\tau_n} = \{\varnothing\}\}=1$. \item[(ii)] If $\alpha^0\neq\{\varnothing\}$ and $ \liminf_{n\rightarrow\infty}\tau_n>0$, then $ \lim_{n\rightarrow\infty} P\{\hat{\alpha}_{\tau_n} = \alpha^0\}<1$. \end{longlist} \end{thmm} \subsection{A two-dimensional exponential model} \label{two dim exp model} Consider the multiplicative exponential covariance model \begin{equation} \operatorname{cov}\bigl(\eta(\mathbf{s}),\eta\bigl(\mathbf{s}^*\bigr)\bigr) = \sigma^2\exp\bigl(-\kappa \bigl\{\bigl|s_1-s_1^*\bigr|+\bigl|s_2-s_2^*\bigr| \bigr\}\bigr), \label{exp cov fun 2 dim} \end{equation} parameterized by $\sigma^2>0$ and $\kappa>0$, where $\mathbf{s}=(s_1,s_2)$ and $\mathbf{s}^*=(s_1^*,s_2^*)\in D=[0,n^{\delta/2}]^2\subseteq \mathbb{R}^2; \delta\in[0,1)$. Clearly, $\delta=0$ corresponds to the fixed domain asymptotic framework with $D=[0,1]^2$, and a larger $\delta$ corresponds to a faster growth rate of the domain. Similarly to the one-dimensional case, we first prove (\ref{proposition:bound eigen}), which is the key to verifying (C1)--(C5). 
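On the regular $m\times m$ grid used in the proposition below, the multiplicative form of (\ref{exp cov fun 2 dim}) makes the covariance matrix a Kronecker product of two one-dimensional exponential covariance matrices. The following numerical check is only a sketch (the values of $m$, $\delta$, $\sigma^2$ and $\kappa$ are arbitrary), but it illustrates the factorization that drives the proof:

```python
import numpy as np

# Check Sigma(theta) = sigma^2 * B(theta) kron B(theta) for the multiplicative
# exponential covariance on an m x m grid with spacing h = m^{-(1-delta)}.
m, delta = 6, 0.5
sigma2, kappa = 1.3, 0.7
h = m ** (-(1.0 - delta))

# grid sites s_k = (i*h, j*h) with k = i + (j-1)*m, so i varies fastest
coords = np.array([(i * h, j * h) for j in range(1, m + 1)
                                  for i in range(1, m + 1)])

# direct construction: cov = sigma^2 * exp(-kappa * (|s1-s1'| + |s2-s2'|))
d1 = np.abs(coords[:, 0][:, None] - coords[:, 0][None, :])
d2 = np.abs(coords[:, 1][:, None] - coords[:, 1][None, :])
Sigma_direct = sigma2 * np.exp(-kappa * (d1 + d2))

# Kronecker form: B = (rho^{|i-j|}) with rho = exp(-kappa * h)
rho = np.exp(-kappa * h)
B = rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
Sigma_kron = sigma2 * np.kron(B, B)

print(np.allclose(Sigma_direct, Sigma_kron))  # prints True
```

The factorization is what reduces eigenvalue bounds for the $m^2\times m^2$ matrix to bounds for the $m\times m$ factor $\mathbf{B}$.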
\begin{pro}\label{pro:2 dim} Consider $\bolds\Sigma(\bolds\theta)$ in (\ref{Sigma}) with $\bolds\Sigma_\eta$ given by (\ref{exp cov fun 2 dim}), $v^2=0$, and $\mathbf{s}_k= (i m^{-(1-\delta)},j m^{-(1-\delta)} )$; $k={i+(j-1)}m$; $i,j=1,\ldots,m$, for some integer $m=n^{1/2}$, where $\delta\in[0,1)$. Let $\bolds\theta=(\sigma^2,\kappa)'$. Then (\ref{proposition:bound eigen}) holds for any compact set $\Theta\subseteq(0,\infty)^2$ and any $\bolds\theta_0=(\sigma_0^2,\kappa_0)'\in\Theta$. \end{pro} \begin{pf} Write \begin{equation} \bolds\Sigma(\bolds\theta) = \sigma^2\mathbf{B}(\bolds\theta)\otimes \mathbf{B}(\bolds\theta), \label{Sigma 2 dim} \end{equation} where $\mathbf{B}(\bolds\theta)= (\rho^{|i-j|} )_{m\times m}$ and $\rho=\exp(-\kappa m^{-(1-\delta)})$. By (\ref{Sigma 2 dim}), \begin{eqnarray*} &&\lambda_{\max} \bigl(\bolds\Sigma^{-1/2}(\bolds\theta)\bolds \Sigma(\bolds\theta_0)\bolds\Sigma^{-1/2}(\bolds\theta)\bigr) \\ &&\qquad\leq \frac{\sigma_0^2}{\sigma^2}\lambda_{\max} \bigl( \bigl(\mathbf{B}(\bolds \theta_0)\otimes\mathbf{B}(\bolds\theta_0)\bigr) \bigl( \mathbf{B}^{-1}(\bolds\theta)\otimes\mathbf{B}^{-1}(\bolds \theta)\bigr) \bigr) \\ & &\qquad=\frac{\sigma_0^2}{\sigma^2}\lambda_{\max} \bigl(\bigl(\mathbf{B}(\bolds \theta_0)\mathbf{B}^{-1}(\bolds\theta)\bigr) \otimes\bigl( \mathbf{B}(\bolds\theta_0)\mathbf{B}^{-1}(\bolds\theta) \bigr) \bigr) \\ &&\qquad= \frac{\sigma_0^2}{\sigma^2}\lambda_{\max}^2 \bigl(\bigl( \mathbf{B}(\bolds\theta_0)\mathbf{B}^{-1}(\bolds\theta) \bigr) \bigr) <\infty, \end{eqnarray*} where the last inequality follows from Proposition~2.1 of Chang, Huang and Ing (\citeyear{Chang2013}). This gives the last inequality of (\ref{proposition:bound eigen}). The proof for the first inequality of~(\ref{proposition:bound eigen}) is analogous and omitted. This completes the proof. 
\end{pf} \begin{thmm}\label{theorem of GIC in exp model 2} Consider the data generated from (\ref{geo data}), the model given by~(\ref{setup}) and (\ref{Sigma}) and the setup of Proposition~\ref{pro:2 dim} with $\delta\in[0,1)$. Suppose that $\hat{\bolds\theta}(\alpha)\mathop{\rightarrow}\limits^{P}\bolds\theta_\alpha$ for some $\bolds\theta_\alpha\in\Theta$; $\alpha\in\mathcal{A}$, and (\ref{assump of risk}) holds. Then $\hat\alpha_2$ is asymptotically loss efficient if $ |\mathcal{A}^0 |\leq 1$. In addition, suppose that $ \lim_{n\rightarrow\infty}{\tau_n}=\infty$ and $ {\tau_n}=o (\min_{\alpha\in\mathcal{A}\setminus\mathcal{A}^0}R(\alpha;\bolds\theta_\alpha) )$. \begin{longlist}[(ii)] \item[(i)] If $ |\mathcal{A}^0 |=0$, then $\hat\alpha_{\tau_n}$ is asymptotically loss efficient. \item[(ii)] If $ |\mathcal{A}^0 |\geq 1$, then $\hat\alpha_{\tau_n}$ is consistent. \end{longlist} \end{thmm} \begin{rem} As in the one-dimensional case, the assumption, $\hat{\bolds\theta}(\alpha)\mathop{\rightarrow}\limits^{P}\bolds\theta_\alpha$; $\alpha\in\mathcal{A}\setminus\mathcal{A}^0$, is generally satisfied. In fact, the assumption is guaranteed to hold when $R(\alpha;\bolds\theta_0)=o(n^{(1+\delta)/2})$, for any $\alpha\in\mathcal{A}$; see Lemma~A.5 of Chang, Huang and Ing (\citeyear{supp}). \end{rem} Here we consider only a multiplicative exponential model because of two difficulties. First, for the two-dimensional exponential covariance model, the asymptotic distribution of the ML estimate of $(\sigma^2\kappa,\kappa)'$ is needed but has yet to be derived unless $\kappa$ is assumed known [Du, Zhang and Mandrekar (\citeyear{Du2009}), \citet{Wang2011}]. Second, our proof relies on a decomposition of the log-likelihood into different layers having different orders of magnitude. Such a decomposition requires an innovative treatment of the log-likelihood for the two-dimensional exponential model. 
Further research is needed to characterize the asymptotic behavior of GIC under the two-dimensional exponential covariance model or the more general Mat\'{e}rn covariance model [Mat\'{e}rn (\citeyear{Matern1986})], but this is beyond the scope of this paper. \section{Summary and discussion} \label{discussion} In this article, we study the asymptotic properties of GIC for geostatistical model selection regardless of whether the covariance model is correct or misspecified, and establish conditions under which GIC is consistent and asymptotically loss efficient. Some specific examples that satisfy the regularity conditions are also provided. To the best of our knowledge, this research is the first to provide such results for GIC in geostatistical regression model selection. The method we developed also sheds some light on linear mixed model selection problems involving parameters that cannot be estimated consistently. For example, consider a simple Laird--Ware model [\citet{Laird1982}], \begin{equation} Z_{ij} = \mathbf{x}_{ij}'\bolds\beta + \eta_i + \epsilon_{ij};\qquad i=1,\ldots,m, j=1, \ldots,n_i , \label{Laird-Ware model} \end{equation} where the $\mathbf{x}_{ij}$'s are $p$-vectors of fixed effects, and $\eta_i\sim N(0,\sigma^2)$ is the random effect for subject $i$, independent of $\epsilon_{ij}\sim N(0,v^2)$. Here $\bolds{\beta}\in\mathbb{R}^p$ is the regression-coefficient vector, and $\bolds{\theta}=(v^2,\sigma^2)'$ consists of the variance parameters. Clearly, $\sigma^2$ in (\ref{Laird-Ware model}) cannot be estimated consistently when $m$ is fixed [\citet{Longford2000}]. Nevertheless, as shown below, it is still possible to derive a condition analogous to (C2). For simplicity, we consider the special case of (\ref{Laird-Ware model}) with mean zero and no fixed effects. Let $\hat{\bolds{\theta}}$ be the ML estimate of $\bolds{\theta}$ and $\bolds\theta_0=(v_0^2,\sigma_0^2)'$ be the true parameter value. 
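The failure of consistency for $\sigma^2$ with $m$ fixed is easy to see numerically. The sketch below simulates the zero-mean special case with a balanced design and uses simple moment-type estimators as stand-ins for the ML estimates; the design, the estimators and all numeric values are illustrative assumptions, not the procedure analyzed here.

```python
import numpy as np

rng = np.random.default_rng(2)
m, ni = 5, 200                  # m subjects fixed, n_i = 200 each (illustrative)
sigma2, v2 = 1.0, 0.5           # true variance components (illustrative)

# zero-mean special case of the Laird-Ware model: Z_ij = eta_i + eps_ij
eta = rng.normal(scale=np.sqrt(sigma2), size=m)
Z = eta[:, None] + rng.normal(scale=np.sqrt(v2), size=(m, ni))

# moment-type estimates in the balanced case (stand-ins for ML):
# v^2 is estimated from pooled within-subject variation, so its error
# shrinks as n = m * n_i grows even with m fixed ...
v2_hat = np.sum((Z - Z.mean(axis=1, keepdims=True)) ** 2) / (m * (ni - 1))
# ... whereas sigma^2 is estimated from only m subject means, so its
# error does not vanish no matter how large each n_i is
sigma2_hat = np.var(Z.mean(axis=1), ddof=1) - v2_hat / ni
```

With $m$ fixed, enlarging $n_i$ improves `v2_hat`, mirroring the $O_p(n^{-1/2})$ term in the expansion below, while `sigma2_hat` rests on only $m$ subject means and retains an $O_p(1)$ error.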
Applying an argument similar to that used to prove (2.10) of Chang, Huang and Ing (\citeyear{Chang2013}), twice the negative log-likelihood of (\ref{Laird-Ware model}) can be written as \begin{equation} \qquad -2\ell(\bolds\theta) = n\log(2\pi) + \sum_{j=1}^m \log n_j + n\log v^2 + n\frac{v_0^2}{v^2} + h(\bolds \theta) + O_p(1), \label{log like fun LW model} \end{equation} where $ n = \sum_{i=1}^m n_i$, $ h(\bolds\theta) =\sum_{i=1}^m \{\bolds\epsilon'_i\bolds\Sigma_i^{-1}\bolds\epsilon_i -\mathrm{E} (\bolds\epsilon'_i\bolds\Sigma_i^{-1}\bolds\epsilon_i ) \}$, $\bolds\epsilon_i=(\epsilon_{i1},\ldots,\epsilon_{i,n_i})'$ and $\bolds\Sigma_i=\sigma^2\mathbf{1}_{n_i}\mathbf{1}_{n_i}' +v^2\mathbf{I}_{n_i}$. We shall show that $\ell(\hat{\bolds\theta}) = \ell(\bolds\theta_0) + O_p(1)$. Applying an argument similar to that used to prove Theorem~2.2 in Chang, Huang and Ing (\citeyear{Chang2013}), \begin{equation} \hat{\bolds\theta}=\bigl(v_0^2,\sigma_0^2 \bigr)'+ \bigl(O_p\bigl(n^{-1/2} \bigr),O_p(1) \bigr)'. \label{Laird-Ware model:ML estimate} \end{equation} Let $\Theta_n=\{\bolds\theta\in\Theta\dvtx|\sigma^2-\sigma_0^2|<M, |v^2-v_0^2|\leq M n^{-1/2} \}$ for any constant \mbox{$M>0$}. By Lemma B.1 of \citet{Chan2011} and an argument similar to that used to prove~(2.12) in Chang, Huang and Ing (\citeyear{Chang2013}), we have \begin{eqnarray*} &&\mathrm{E}\Bigl(\sup_{\bolds\theta\in\Theta_n}\bigl|h(\bolds\theta) - h(\bolds \theta_0)\bigr|^2\Bigr) \\ &&\qquad\leq \sup_{\bolds\theta\in\Theta_n} \biggl\{\bigl(v^2-v_0^2 \bigr)^2\operatorname{var} \biggl(\frac{\partial}{\partial v^2}h(\bolds\theta) \biggr) +\bigl(\sigma^2-\sigma_0^2 \bigr)^2\operatorname{var} \biggl(\frac{\partial}{\partial \sigma^2}h(\bolds\theta) \biggr) \biggr\} \\ &&\qquad= O(1), \end{eqnarray*} which implies $h(\hat{\bolds\theta})-h(\bolds\theta_0)=O_p(1)$. 
This together with (\ref{log like fun LW model}) and (\ref{Laird-Ware model:ML estimate}) gives $\ell(\hat{\bolds\theta}) = \ell(\bolds\theta_0) + O_p(1)$, indicating that it may be possible to establish the asymptotic theory of GIC for the Laird--Ware model, even when some random-effect parameter cannot be consistently estimated. In this article, we focus only on variable selection under a given covariance model. Clearly, simultaneous selection of both variables and covariance models is an interesting problem that deserves further investigation. Although we believe that the framework we developed in this article can be generalized to this problem, it will require introducing more complex notation. In addition, more effort is needed to completely characterize GIC, even for the exponential covariance model in one dimension. We note that the candidate regressors in Examples \ref{exm:white-noise} and \ref{exm:exp var} are not of bounded variation (BV), whereas the polynomial regressors given by Example~\ref{exm:poly} are BV functions. It is of interest to know if BV plays an important role. We conducted a small simulation experiment under the setup of (\ref{setup}) with only one regressor $x(\cdot)$ and $v^2=0.5$, where $\mu(s)=1+x(s)$, $\eta(\cdot)$ is given by (\ref{exp cov fun 1 dim}) with $\sigma^2=0.5$ and $\kappa=1$, and data are sampled at $\{1/n,2/n,\ldots,1\}$. We consider two functions for $x(\cdot)$, namely $f_1(s) =s^2\sin(\pi/s)$ and $f_2(s) = s\sin(\pi/s)$, in combination with three different sample sizes ($n=100, 500, 1000$). Note that $f_1(\cdot)$ is of bounded variation on $[0,1]$, and $f_2(\cdot)$ is not. The results based on 100 simulation replicates with known $\sigma^2$, $\kappa$ and $v^2$ are shown in Table~\ref{spatial dependent}. Clearly, GIC identifies the correct model more often when $f_2(\cdot)$, rather than $f_1(\cdot)$, is used as the regressor, which partially supports the view that BV may be an important factor. 
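A single replicate of this experiment can be reproduced along the following lines. This is a sketch only: the selection criterion is written in the generic form $-2\times$log-likelihood plus $\tau_n$ times the number of mean parameters, with $\tau_n=\log n$ as in BIC, and the covariance parameters $\sigma^2$, $\kappa$ and $v^2$ are treated as known, as in the reported results; the implementation details are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
s = np.arange(1, n + 1) / n                       # sites {1/n, ..., 1}
sigma2, kappa, v2 = 0.5, 1.0, 0.5
Sigma = sigma2 * np.exp(-kappa * np.abs(s[:, None] - s[None, :])) + v2 * np.eye(n)
Sigma_inv = np.linalg.inv(Sigma)
_, logdet = np.linalg.slogdet(Sigma)

def neg2loglik(Z, X):
    """-2 log-likelihood at the GLS estimate of beta, covariance known."""
    beta = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ Z)
    r = Z - X @ beta
    return n * np.log(2 * np.pi) + logdet + r @ Sigma_inv @ r

x = s * np.sin(np.pi / s)                         # regressor f2; use s**2 * ... for f1
Z = 1.0 + x + np.linalg.cholesky(Sigma) @ rng.standard_normal(n)

tau = np.log(n)                                   # BIC-type penalty
crit = {"empty": neg2loglik(Z, np.ones((n, 1))) + tau * 1,
        "alpha0": neg2loglik(Z, np.column_stack([np.ones(n), x])) + tau * 2}
selected = min(crit, key=crit.get)                # model with smallest criterion
```

Repeating this over replicates and sample sizes, and swapping in $f_1$, yields selection frequencies of the kind reported in Table~\ref{spatial dependent}.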
\begin{table} \caption{Frequencies of models selected by BIC based on 100 simulation replicates, where $\varnothing$ denotes the intercept only model and $\alpha^0$ denotes the correct model}\label{spatial dependent} \begin{tabular*}{\textwidth}{@{\extracolsep{4in minus 4in}}lcccc@{}} \hline & \multicolumn{2}{c}{$\bolds{\mu(s)=1 + s^2\sin(\pi/s)}$} & \multicolumn{2}{c@{}}{$\bolds{\mu(s)=1+ s\sin(\pi/s)}$}\\[-6pt] & \multicolumn{2}{c}{\hrulefill} & \multicolumn{2}{c@{}}{\hrulefill}\\ $\bolds{n}$ & $\bm{\varnothing}$ & $\bolds{\alpha^0}$ & $\bm{\varnothing}$ & \multicolumn{1}{c@{}}{$\bolds{\alpha^0}$} \\ \hline \phantom{0}100 & 67 & 33 & 38 & 62\\ \phantom{0}500& 66 & 34 & 23 & 77\\ 1000& 76 & 24 & \phantom{0}8 & 92\\ \hline \end{tabular*} \end{table} \begin{supplement}[id=suppA] \stitle{Supplement to ``Asymptotic theory of generalized information criterion for geostatistical regression model selection''\\} \slink[doi]{10.1214/14-AOS1258SUPP} \sdatatype{.pdf} \sfilename{aos1258\_supp.pdf} \sdescription{The supplement materials contain the proofs of all theorems in Section~\ref{section:examples}.} \end{supplement}
\section{Introduction}\label{sec:introduction} \subsection{Background and overview} Electron transport through mesoscopic and nanoscale junctions is a complex phenomenon where nonequilibrium statistical mechanics is entwined with quantum many-body effects.\cite{datta_electronic_1997, bruus_many_2004, stefanucci_nonequilibrium_2013, cohen_greens_2020} Systems are driven out of equilibrium by, e.g., an external bias voltage or a temperature gradient, and their response is measured. Perhaps the simplest response observable in many experimental setups is the electronic current. Increasingly, however, it has become both possible and desirable to access so-called higher order transport characteristics. These include the current's fluctuations and its higher moments\cite{reulet_environmental_2003, bomze_measurement_2005, Gustavsson2006} as well as the statistics of individual electron transfer events.\cite{kung_irreversibility_2012} Interestingly, ultracold atom experiments can simulate electronic transport,\cite{brantut_conduction_2012} and allow for directly extracting statistical distributions of populations in different parts of the system,\cite{mazurenko_cold-atom_2017} marking another path towards detailed characterization of transport. Theoretically, all such information can be obtained from the full counting statistics (FCS) approach pioneered by Levitov and Lesovik,\cite{Levitov1993, Levitov1996} where all moments and cumulants of transport events are efficiently represented by a single generating functional. Since its inception, this idea has attracted a great deal of attention.\cite{Bagrets2003, Belzig2005, Flindt2005_2, koch_full_2005, Esposito_Fluctuation_2007, Esposito_Entropy_2007, Esposito2009, Xue2011, Nicolin_Non_2011, Kambly2013, Kaasbjerg2015, Ridley2018, Ridley2019, Schinabeck2020, Kurzmann_Optical_2019} Many experimental studies concentrate on the current noise and the noise-to-current ratio, also known as the Fano factor.
In both classical and quantum systems, these quantities already contain information not present in the mean current:\cite{Landauer1998, Blanter2000} for example, they enable probing of effective quasiparticle charges.\cite{deJong1994, DePicciotto1997, Lefloch2003} Moreover, noise measurements have allowed researchers to, e.g., identify electron bunching and anti-bunching during transport;\cite{Blanter2000, Safonov2003, Djuric2005, Gustavsson2006, Tworzydlo2006, Kiesslich2007, Emary2007} reconstruct waiting- and dwell-time distributions;\cite{Beenakker2003, Tang2014, Rudge2016, Ptaszynski2017, Kosov2018, Ridley2018, Stegmann_Real_2020} and determine the number and transmission probabilities of active levels contributing to transport.\cite{vandenBrom1999, Cron2001, Djukic2006, Kiguchi2008, Tal2008, Wheeler2010, Schneider2010, Vardimon2013} Other studies reported the measurement of higher order cumulants that further elucidate the mechanisms underlying electronic transport.\cite{Flindt2009, Fricke2010, Ubbelohde2012} Given sufficient cumulants, it is in principle possible to reconstruct the full FCS. Much of the motivation for this comes from insights regarding noninteracting systems, where the exact FCS is given by the Levitov--Lesovik formula.\cite{Levitov1996, Nazarov2009} There, the ability to measure the FCS could provide indirect access to theoretically intuitive but experimentally unattainable properties like channel coherence\cite{Brandes2008} and entanglement entropy.\cite{Klich_Levitov_2009} The same holds true for interacting systems, where the FCS provides insight into many-body quantum effects.
For example, even though the role of electronic correlations is not yet well understood, it is known that correlation-driven physics like the Kondo effect modify the current noise\cite{Hewson1997kondo, Meir2002, Delattre2009} and its higher order cumulants.\cite{Stegmann_Detection_2015, Ridley2019} Still, the theoretical prediction of the FCS for interacting systems is generally non-straightforward and a variety of theoretical approaches has been applied. Among the approximate approaches used are quantum master equations (QME),\cite{Bagrets2003, Flindt2005_2, Flindt2008, Brandes2008, Flindt2010, Albert2011, simine_vibrational_2012, Schinabeck2014, Kaasbjerg2015, Benito2016, Stegmann2017, Kosov2017, Lead_Geometry} and Green's function based approaches.\cite{Tang2014, Galperin2006, Avriller2009, Schmidt2009, Haupt2010, Novotny2011, Utsumi2013, agarwalla_full_2015, Miwa2017, Stadler2018, cohen_greens_2020} Numerically exact approaches to FCS include the Inchworm quantum Monte Carlo (iQMC) method,\cite{cohen_taming_2015,Ridley2018, Ridley2019} the hierarchical equations of motions technique (HEOM),\cite{Cerrillo2016, Schinabeck2020} the density matrix renormalization group approach\cite{Carr2011, Schmitteckert2014, Carr2015} and the iterative path integral method.\cite{weiss_iterative_2008,segal_numerically_2010,simine_vibrational_2012,agarwalla_full_2015,kilgour_path-integral_2019} A variety of ongoing research programs are aimed at extending exact approaches to new experimentally relevant regimes, and at developing new exact and approximate methodologies. \subsection{Noncrossing approximations} At the present time, methods able to address Kondo physics remain computationally expensive. Here, we propose a simple and inexpensive approximate scheme for evaluating FCS that is based on one variation of the noncrossing approximation (NCA). 
The NCA and its extensions\cite{Bickers1987,haule_anderson_2001,Eckstein2010} have long been a successful qualitative approach to several aspects of nonequilibrium Kondo physics in quantum transport.\cite{meir_low-temperature_1993, Wingreen_Anderson_1994,hettler_nonequilibrium_1998,nordlander_how_1999,plihal_transient_2005,hartle_decoherence_2013,chen_anderson-holstein_2016,roura-bas_nonequilibrium_2013,Peronaci_Resonant_2018,krivenko_dynamics_2019,atanasova_correlated_2020,Erpenbeck_Resolving_2020} The approximation has multiple, inequivalent formulations, most of which are unsuitable to the evaluation of FCS due to the introduction of an auxiliary pseudoparticle space.\cite{cohen_greens_2020} The formulation used here is a lowest order precursor of the hybridization-expansion-based iQMC method,\cite{cohen_taming_2015,antipov_currents_2017} and the starting point of bold-line schemes that preceded it.\cite{gull_bold-line_2010,gull_numerically_2011,cohen_numerically_2013,cohen_greens_2014,cohen_greens_2014-1} It can easily be used to obtain high order cumulants or the complete FCS generating functional. To highlight the advantages of the NCA, we contrast it with the widely applied QME scheme, which completely neglects Kondo physics.\cite{Miwa2017} We then establish that our NCA provides better results than the QME scheme by comparing with numerically exact data obtained from iQMC. Finally, based on the NCA, we provide a preliminary overview of the signature of nonequilibrium correlation effects in higher order cumulants. \subsection{Quantum master equations}\label{sec:QME} One of the two methods to which we will provide direct comparisons is the QME approach. Similarly to the NCA to be presented below in Sec.~\ref{sec:methodology}, the QME approximation is based on a second order expansion in the dot--lead coupling. In contrast to the NCA, the QME does not employ a Dyson-like diagrammatic resummation scheme. 
Rather, it uses a Liouville-space resummation based on the Nakajima--Zwanzig equation.\cite{Nakajima1958, Zwanzig1960, Fick1990quantum} This results in an analytically solvable and intuitive equation of motion for the reduced density matrix, which is the method of choice in many contexts.\cite{Erpenbeck_W_Term, Harbola_Quantum_2006, Peskin_2010, Haertle2011, simine_vibrational_2012, Schinabeck2014, Kaasbjerg2015, Kosov2017, Purkayastha_Quantum_2017, Lead_Geometry} QME methods have been widely employed in the evaluation of FCS.\cite{Bagrets2003,Lead_Geometry, Flindt2005_2, Kaasbjerg2015, rudge_fluctuating_2019} Their numerically exact generalization, the HEOM technique,\cite{Tanimura1989, Tanimura2006, Jin2008, Zheng2012, hartle_decoherence_2013, Schinabeck2016, Erpenbeck_RSHQME, Erpenbeck_Hierarchical_2019, Tanimura2020, Erpenbeck_Current_2020} has recently been generalized to FCS in the context of vibrationally coupled electronic transport.\cite{Schinabeck2020} \subsection{Inchworm quantum Monte Carlo method}\label{sec:iQMC} The inchworm quantum Monte Carlo (iQMC) method is a numerically exact framework able to evaluate transport properties in correlated nonequilibrium impurity models.\cite{cohen_taming_2015, Chen2017, Chen2017_2, antipov_currents_2017, Dong2017, Boag2018, cai_inchworm_nodate, cai_numerical_2020, eidelstein_multiorbital_2020} It has recently been used to evaluate the FCS of both particle and energy transport in the presence of electron--electron interactions.\cite{Ridley2018, Lead_Geometry, Ridley2019} In the present context, the iQMC framework can be considered a numerically exact generalization of the NCA method. This does not mean that the NCA is immediately obsolete, just as the availability of HEOM methods has not obviated QME approximations. This is natural because iQMC results are substantially more expensive to obtain than NCA results, especially at steady state. 
Here, we employ the iQMC method to validate our NCA results and illustrate their usefulness. \subsection{Outline of this work} We will proceed as follows: In Sec.~\ref{sec:model}, we introduce the model system investigated in this work. The FCS formalism is outlined in Sec.~\ref{sec:FCS}. The theoretical NCA framework employed in this work is described in Sec.~\ref{sec:methodology}. In Sec.~\ref{sec:results}, we present our results: comparisons between NCA and QME results are given in \ref{sec:NCA_VS_QME}, physical implications are discussed in \ref{sec:Fano}, and validation with respect to iQMC is presented in \ref{sec:NCA_VS_iQMC}. Finally, in Sec.~\ref{sec:summary}, we conclude and summarize our findings. \section{Model}\label{sec:model} We consider the nonequilibrium Anderson impurity model, a minimal description for a finite, interacting quantum dot coupled to two infinite noninteracting leads. The Hamiltonian is \begin{eqnarray} H &=& H_D + H_B + H_{DB} , \label{eq:H_full} \end{eqnarray} where $H_D$ is in the dot subspace, $H_B$ is in the bath subspace comprising the left and right lead, and $H_{DB}$ encodes the coupling between the dot and the leads. The dot Hamiltonian is given by \begin{eqnarray} H_D &=& \sum_{\sigma=\uparrow,\downarrow} \epsilon_0 d_\sigma^\dagger d_\sigma + U d_\uparrow^\dagger d_\uparrow d_\downarrow^\dagger d_\downarrow. \label{eq:H_D} \end{eqnarray} Here, the $d_\sigma^{\left( \dagger\right)}$ denote creation/annihilation operators for an electron of spin $\sigma$ on the dot, $\epsilon_0$ is the single particle occupation energy, and $U$ determines the strength of the Coulomb interaction. Experimentally, the single particle occupation energy can be tuned by an external gate voltage $\Phi_{\text{gate}}$. We model the influence of such a gate voltage by setting $\epsilon_0 = \Phi_{\text{gate}} - \frac{U}{2}$. 
The leads are assumed to be a noninteracting continuum, \begin{eqnarray} H_B &=& \sum_{\sigma\in\lbrace\uparrow,\downarrow\rbrace}\sum_{\ell\in\lbrace L,R \rbrace} \sum_{k\in\ell} \epsilon_{k} a_{k\sigma}^\dagger a_{k\sigma} , \end{eqnarray} where the $a_{k\sigma}^{(\dagger)}$ are creation/annihilation operators on a lead level with index $k$, spin $\sigma$ and energy $\epsilon_{k}$. The indices $L$ and $R$ denote the ``left'' and ``right'' lead, respectively. Finally, the coupling between the dot and leads is assumed to take the linear form \begin{eqnarray} H_{DB} &=& \sum_{\sigma\in\lbrace\uparrow,\downarrow\rbrace} \sum_{\ell\in\lbrace L,R \rbrace}\sum_{k\in \ell} \left( V_{k} a_{k\sigma}^\dagger d_\sigma + \text{h.c.} \right) , \label{eq:H_coupl} \end{eqnarray} with coupling parameters $V_{k}$ that can be parameterized in terms of a coupling strength function \begin{eqnarray} \Gamma_{\ell}(\epsilon) &=& \pi \sum_{k\in \ell} |V_{k}|^2 \delta(\epsilon-\epsilon_{k}). \end{eqnarray} We explicitly consider symmetric coupling to the two leads, each of which is taken to be a flat band with a soft cutoff: \begin{equation} \Gamma_{L}(\epsilon)=\Gamma_{R}(\epsilon)=\frac{\Gamma/2}{(1+e^{\nu(\epsilon-\epsilon_c)})(1+e^{-\nu(\epsilon+\epsilon_c)})}. \end{equation} The overall strength of the dot--lead coupling is set by the constant $\Gamma$, which is used as our unit of energy. The coupling strength defines the hybridization functions, \begin{eqnarray} \Delta_{\ell}^<(t) &=& \frac{1}{\pi}\int d\epsilon\ e^{+i\epsilon t}\ \Gamma_{\ell}(\epsilon) f_\ell(\epsilon) , \\ \Delta_{\ell}^>(t) &=& \frac{1}{\pi}\int d\epsilon\ e^{-i\epsilon t}\ \Gamma_{\ell}(\epsilon) (1-f_\ell(\epsilon)). \label{eq:def_Delta_>} \end{eqnarray} Here $f_\ell(\epsilon) \equiv \frac{1}{1+e^{\beta (\epsilon-\mu_\ell)}}$, where $\mu_{{L/R}}= \pm V/2$ are chemical potentials set by a symmetrically applied bias voltage $V$, and $\beta$ is the inverse temperature in the leads. 
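The shape of the soft-cutoff band can be checked numerically. The short Python sketch below (illustrative; it assumes $\Gamma$ as the unit of energy and the values $\nu=1/\Gamma$, $\epsilon_c=50\Gamma$ quoted below) evaluates $\Gamma_\ell(\epsilon)$ deep inside and far outside the band:

```python
import numpy as np

def gamma_lead(eps, Gamma=1.0, nu=1.0, eps_c=50.0):
    # Flat band of half-width eps_c with soft edges of width ~1/nu:
    # Gamma_l(eps) = (Gamma/2) / ((1+e^{nu(eps-eps_c)})(1+e^{-nu(eps+eps_c)}))
    return (Gamma / 2.0) / ((1.0 + np.exp(nu * (eps - eps_c)))
                            * (1.0 + np.exp(-nu * (eps + eps_c))))

inside = gamma_lead(0.0)     # deep inside the band: ~ Gamma/2
outside = gamma_lead(100.0)  # far outside the band: exponentially suppressed
```

Note that $\Gamma_\ell(\epsilon)$ is symmetric in $\epsilon$, consistent with the particle--hole symmetric setup.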
Moreover, we set $\nu=1/\Gamma$ and $\epsilon_c=50\Gamma$ -- much larger than all other energy scales in the problem -- such that we are effectively working in the wide band limit. With our choice of parameters, particle--hole symmetry is obeyed for $\Phi_{\text{gate}}=0$. Throughout this work, the on-site Coulomb repulsion is set to $U=8\Gamma$. This determines a Kondo temperature of $T_K \approx 0.8\Gamma$ for the particle--hole symmetric case,\cite{Hewson1997kondo} which we use as a reference for the emergence of the Kondo phenomenon. Generally, the Kondo temperature depends on the gate voltage\cite{Hewson1997kondo} and we will comment on this at appropriate points below. We will consider three representative lead temperatures: $T=0.25\Gamma<T_K$, $T=0.5\Gamma \lesssim T_K$ and $T=1.0\Gamma \gtrsim T_K$, where $T_K$ is the estimate for the Kondo temperature in the particle--hole symmetric scenario. This means that we are exploring the edge of the Kondo regime rather than the deep Kondo regime where scaling behavior can be extracted. This choice is to some extent motivated by the limitations of the methods used in this work (cf.\ Sec.~\ref{sec:methodology}). \section{FCS and counting fields}\label{sec:FCS} Determining the FCS of an observable means evaluating the generating function of its underlying probability distribution, from which cumulants and moments can be extracted. We provide a brief overview of this approach and the main concepts here, and recommend Refs.~\onlinecite{Esposito2009, Utsumi2019} for more details. Consider an experiment where at time zero the system is prepared in a known initial density matrix where, e.g., the number of electrons in the left lead $L$ is known. The system is allowed to evolve freely until time $t$, when the total number of electrons in lead $L$ is measured. Let $P_L(t,n)$ be the probability that $n$ electrons are found in this measurement.
The generating function is then defined as \begin{equation} Z_L(t, \lambda) \equiv \sum_n P_L(t,n) e^{i\lambda n} \equiv \text{Tr}_{D+B} \left\lbrace \rho_\lambda(t) \right\rbrace , \label{eq:general_def_Z} \end{equation} where $\lambda$ is known as the counting field. This defines $\rho_\lambda(t) \equiv e^{-iH_\lambda t} \rho(0) e^{iH_{-\lambda}t}$, a counting-field-modified (or, for brevity, simply ``modified'') density matrix, which in turn defines $H_\lambda \equiv e^{i\lambda N_L/2} H e^{-i\lambda N_L/2}$, a modified Hamiltonian. $N_L = \sum_{\sigma\in\lbrace\uparrow,\downarrow\rbrace}\sum_{k\in L} a_{k\sigma}^\dagger a_{k\sigma}$ is the particle number operator in the left lead $L$. Modifying the Hamiltonian by the counting field corresponds to transforming the dot--bath coupling strength of the lead under consideration according to\cite{tang_full-counting_2014} \begin{equation} V_{k}(t_\pm) \rightarrow V_{k} e^{\pm i\lambda/2}, \label{eq:dressing_HI} \end{equation} where $t_\pm$ is a time variable on either the backward ($+$) or forward ($-$) branch of the Keldysh contour. This idea can be generalized to other observables and counting fields.\cite{Esposito2009} Normally, the generating function $Z_L(t, \lambda)$ itself cannot be directly accessed in experiments. However, experiments can measure its moments and cumulants, or sometimes the probabilities $P_L(t, n)$. In particular, the cumulants $C_L^\alpha(t)$ of the generating function are given by its logarithmic derivatives: \begin{eqnarray} C_L^\alpha(t) &=& (-i)^{\alpha}\frac{\partial^\alpha}{\partial\lambda^\alpha} \ln\left(Z_L(t, \lambda)\right) \Big|_{\lambda=0}. \label{eq:def_cumulant} \end{eqnarray} The first few cumulants have simple physical interpretations. The time derivative of the first cumulant corresponds to the electronic current $I_L(t)$ exiting lead $L$: \begin{eqnarray} C_L^1(t) &=& \braket{N_L(t)}, \\ \frac{\partial}{\partial t} C_L^1(t) &=& I_L(t).
\end{eqnarray} The second cumulant is related to the variance of the population in the lead, \begin{eqnarray} C_L^2(t) &=& \braket{N_L^2}(t) - \braket{N_L}^2(t). \end{eqnarray} At steady state, its time derivative is the noise $S_L$: \begin{equation} \lim_{t\rightarrow\infty}\frac{\partial}{\partial t} C_L^2(t) = S_L. \end{equation} Higher order population cumulants and the full probability distributions $P_L(t,n)$ can also be obtained from the generating functional. These have a more complicated relationship with the statistics of the current, but are arguably more straightforward than the latter to describe theoretically. Within the scope of this work, we will also consider the steady state time derivatives of the third and fourth cumulants, $\lim_{t\rightarrow\infty}\frac{\partial}{\partial t} C_L^3(t) \equiv S_{L2}$ and $\lim_{t\rightarrow\infty}\frac{\partial}{\partial t} C_L^4(t) \equiv S_{L3}$. These quantities express the skewness and the kurtosis of the underlying probability distribution, respectively, and are of interest in a variety of contexts.\cite{Belzig2005, Xue2011} Composite observables like the Fano factor $F_L=S_L / I_L$ are often easier to obtain experimentally than the cumulants themselves, because they do not vary with the overall conductivity of the junction. The standard Fano factor can be a problematic quantity for studying Kondo physics, because the low energy features are obscured by the zero bias Nyquist--Johnson singularity.\cite{Blanter2000, CuevasScheer} This stems from the different symmetry of $I$ and $S$ with respect to the bias voltage.
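As a reference point for the quantities just defined, Poissonian transfer statistics (statistically independent tunneling events) have all cumulants equal, so the Fano factor defined above equals one; sub- and super-Poissonian values then signal anti-bunching and bunching, respectively. A short Python sketch (our illustration, not from the text) extracts $C^1$ and $C^2$ directly from a Poisson distribution:

```python
import numpy as np

def cumulants_from_distribution(p, n):
    # First two cumulants of an integer-valued distribution:
    # C1 = <n>, C2 = <n^2> - <n>^2.
    c1 = np.sum(p * n)
    c2 = np.sum(p * n**2) - c1**2
    return c1, c2

mu = 3.0
n = np.arange(0, 60)
log_fact = np.cumsum(np.log(np.maximum(n, 1)))   # log(n!)
p = np.exp(n * np.log(mu) - mu - log_fact)       # Poisson weights, mean mu

c1, c2 = cumulants_from_distribution(p, n)
fano = c2 / c1   # equals 1 for Poissonian statistics
```

The same moment sums applied to a measured $P_L(t,n)$ would yield the population cumulants discussed in the text.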
Deep in the universal Kondo regime and at very low voltages, this can be rectified by defining a ``backscattering'' current that must be separated from the unitary linear-response current.\cite{sela_fractional_2006,ferrier_universality_2016} Below, we discuss an alternative and more widely applicable approach: to define a set of generalized Fano factors in terms of higher order cumulants, while taking symmetry into account. \section{Methodology}\label{sec:methodology} \subsection{Noncrossing approximation}\label{sec:NCA_Method} NCA refers to a wide class of inequivalent methods that are perturbative in the dot--bath coupling. The name is motivated by the fact that these methods consider only those contributions to the perturbative series whose diagrammatic representation contains no crossing hybridization lines. The first NCA method dates back to Grewe and Kuramoto\cite{Grewe_Diagrammatic_1981, Kuramoto_Self_1983} and was employed and extended by various authors to account for finite electron--electron interaction strengths\cite{Pruschke_Anderson_1989, Keiter_The_1990} and nonequilibrium conditions.\cite{Wingreen_Anderson_1994} The corresponding formulation of the NCA uses a pseudoparticle representation in order to make quantum field theoretical methods such as Wick's theorem applicable. This, however, enlarges the underlying Hilbert space into unphysical regions and relies on a representation where the number of electrons on the dot is not well defined at any given time. Consequently, in this formulation of the NCA, the evaluation of FCS is not straightforward. Still, this NCA scheme is well suited to capture physics at temperatures that are not far below the Kondo temperature and works well in the large $U$ limit and for small bias voltages.
The minimal NCA does not correctly capture Kondo physics in the scaling regime with quantitative accuracy, but this can be amended to a large degree with the aid of vertex corrections.\cite{Anders_Beyond_1995, Anders_Perturbational_1994, Grewe_Conserving_2008, Eckstein2010} These more advanced, but expensive, extensions of the NCA method have been successfully employed in recovering the temperature scaling behavior characterizing Kondo phenomena in agreement with numerical renormalization group calculations.\cite{Gerace_Low_2002, Kroha_Conserving_2005} In the present work, we employ a different NCA scheme, which is based on the perturbative expansion of the restricted propagator (cf.\ Eq.~(\ref{eq:def_res_propagator})) in terms of the dot--bath coupling\cite{cohen_numerically_2013, cohen_greens_2014, chen_anderson-holstein_2016} and which represents a precursor of the QMC methods based on the hybridization expansion.\cite{cohen_taming_2015,antipov_currents_2017, gull_bold-line_2010, gull_numerically_2011, cohen_numerically_2013, cohen_greens_2014-1} This NCA formulation employs the occupation number basis of the interacting dot, such that the number of electrons on the dot is a well-defined quantity at any given time. This allows for a straightforward calculation of the FCS using the transformation in Eq.\ (\ref{eq:dressing_HI}), while on the downside, tools like Wick's theorem are not applicable. We will now describe the details of the propagator hybridization expansion for the FCS generating function $Z(t,\lambda) \equiv Z_L(t,\lambda)$ within the NCA. The approximation is based on a second order expansion of the time evolution operator in the dot--lead coupling, which is treated self consistently within a Dyson resummation scheme.
Using Eq.~\eqref{eq:general_def_Z} in the context of Sec.~\ref{sec:model} and assuming an initial condition factorized between the dot and bath spaces, $\rho(0)=\rho_B\otimes\rho_D$, with $\rho_D$ diagonal in the dot's occupation number basis, we obtain \begin{equation} Z(t, \lambda) = \text{Tr}\left(\rho_\lambda(t)\right) = \sum_{\alpha\beta} \braket{\alpha|\rho_D|\alpha} K_{\alpha}^{\beta}(t,t,\lambda). \label{eq:Z_NCA} \end{equation} Here, $\alpha$ and $\beta$ are electron number states in the interacting dot, and the vertex function $K_{\alpha}^{\beta}(t,t', \lambda)$ takes the form \begin{eqnarray} K_{\alpha}^{\beta}(t,t', \lambda) &=& \text{Tr}_B \left\lbrace \rho_B \bra{\alpha} U_{-\lambda}^\dagger(t) \ket{\beta}\bra{\beta} U_\lambda(t') \ket{\alpha} \right\rbrace . \nonumber \\ \label{eq:def:K_chi} \end{eqnarray} $\text{Tr}_B$ denotes tracing over the bath degrees of freedom. We have also made use of a modified time evolution operator, $U_{\pm\lambda}(t) \equiv \mathrm{T}\exp(-i\int_0^t H_{\pm\lambda}(\tau) d\tau)$, where $\mathrm{T}$ is the time ordering operator. The vertex function $K_{\alpha}^{\beta}(t,t', \lambda)$ is the central object within the specific NCA method used in this work. In other contexts, without FCS, only the $\lambda=0$ form appears.
This can be used to construct approximate expressions for the expectation values of a variety of observables.\cite{Eckstein2010,antipov_currents_2017} To derive the NCA, one starts with the perturbative expansion of Eq.~\eqref{eq:def:K_chi} in the dot--lead coupling $H_{DB}$, \begin{equation} \begin{aligned} K_{\alpha}^{\beta}(t,t',\lambda) &= \sum_{n,m=0}^\infty (i)^n (-i)^m \int_0^t d\tau_1 \dots \int_0^{\tau_{n-1}} \hspace*{-0.4cm} d\tau_n \int_0^{t'} d\tau_1' \dots \int_0^{\tau_{m-1}'} \hspace*{-0.4cm} d\tau_m' \\ & \times \text{Tr}_B \Big\lbrace \rho_B \bra{\alpha} h_{-\lambda}(\tau_1) \dots h_{-\lambda}(\tau_n) e^{iH_0t} \ket{\beta} \\& \hspace*{1cm}\times \bra{\beta} e^{-iH_0t'} h_{\lambda}(\tau_1') \dots h_{\lambda}(\tau_m') \ket{\alpha} \Big\rbrace . \end{aligned} \end{equation} A diagrammatic representation of this expansion can be found, for example, in Refs.~\onlinecite{cohen_greens_2014, chen_anderson-holstein_2016}. Here, $h_{\pm\lambda}(\tau) = e^{iH_0\tau} H_{DB} e^{-iH_0\tau}$ with the dot--lead coupling dressed by the counting field according to Eq.~(\ref{eq:dressing_HI}), and $H_0 = H_D+H_B$. The NCA is based on the lowest nonvanishing correction, which is then iterated until self consistency. The approximation is obtained by expressing the vertex function in terms of this correction, resulting in the Dyson equation \begin{equation} \begin{aligned} K_{\alpha}^{\beta}(t,t',\lambda) &=k_{\alpha}^{\beta}\left(t,t^{\prime}\right)+\sum_{\alpha^{\prime}\beta^{\prime}}\int\limits _{0}^{t}\int\limits _{0}^{t^{\prime}}d\tau_{1}d\tau_{1}^{\prime} \\ &k_{\beta'}^{\beta}\left(t-\tau_{1},t'-\tau_{1}'\right)\xi_{\alpha'}^{\beta'}\left(\tau_{1}-\tau_{1}', \lambda\right)K_{\alpha}^{\alpha'}\left(\tau_{1},\tau_{1}',\lambda\right).
\end{aligned} \label{eq:Dyson_K} \end{equation} This is defined in terms of the cross-branch hybridization self-energy \begin{eqnarray} \xi_{\alpha}^{\beta}(t, \lambda) &=& \sum_{\sigma\in \lbrace\uparrow, \downarrow\rbrace} \sum_{\ell\in\lbrace L,R \rbrace} \Big( \Delta_{\ell}^<(t) e^{-i\lambda_\ell} \braket{\alpha|d_\sigma|\beta} \braket{\beta|d_\sigma^\dagger|\alpha} \nonumber \\ && +\Delta_{\ell}^>(t) e^{i\lambda_\ell} \braket{\alpha|d_\sigma^\dagger|\beta} \braket{\beta|d_\sigma|\alpha} \Big), \end{eqnarray} with $\lambda_L = \lambda$ and $\lambda_R = 0$, since only electrons in lead $L$ are counted, and $k_{\alpha}^{\beta}\left(t,t^{\prime}\right)$, a term that is independent of the counting field and will be introduced momentarily. The term NCA refers to the fact that there are no crossing hybridization lines in the diagrammatic representation of the terms included in this approach (see, e.g., Ref.~\onlinecite{cohen_greens_2014}). Higher order expansions such as the one-crossing approximation employ different forms for the cross-branch self-energy.\cite{Pruschke_Anderson_1989,haule_anderson_2001,cohen_greens_2014} We now return to the final quantity defined in Eq.~\eqref{eq:Dyson_K}, $k$. This is a zeroth-order approximation for the vertex function that can be written in the form \begin{equation} k_{\alpha}^{\beta}(t,t') = \delta_{\alpha\beta} G_{\alpha}^*(t)G_{\beta}(t'). \end{equation} Here, \begin{equation} G_{\alpha}(t) = \braket{\alpha |\text{Tr}_B \left( \rho_B U_\lambda(t) \right) | \alpha} \label{eq:def_res_propagator} \end{equation} is a single-branch propagator that is diagonal in the many-particle basis of the dot due to the structure of the Hamiltonian, Eq.~\eqref{eq:H_full}.
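The self-consistent structure of such Dyson equations can be illustrated numerically. The toy Python sketch below is entirely our own construction: a single spinless level with an artificial exponentially decaying hybridization function, rather than the spinful model and self-energies of the text. It iterates a discretized single-branch Dyson equation of the same form to self consistency, showing how the hybridization damps the isolated-dot propagator:

```python
import numpy as np

# Discretized toy version of a single-branch Dyson equation,
#   G(t) = g(t) - int_0^t dT1 int_0^T1 dT2 g(t-T1) Sigma(T1-T2) G(T2),
# with the self-consistent (NCA-like) toy choice Sigma(t) = Delta(t) * G(t).
nt, dt = 100, 0.02
t = np.arange(nt) * dt
eps0, Gam = 1.0, 0.4              # toy parameters, not those of the text
g = np.exp(-1j * eps0 * t)        # isolated-dot propagator
Delta = Gam * np.exp(-t)          # toy hybridization function

G = g.copy()
for _ in range(50):               # fixed-point iteration = self consistency
    Sigma = Delta * G
    G_new = g.copy()
    for i in range(1, nt):
        acc = 0.0 + 0.0j
        for j in range(i + 1):    # tau_1 = t[j]
            inner = np.sum(Sigma[j::-1] * G[:j + 1]) * dt  # tau_2 integral
            acc += g[i - j] * inner * dt
        G_new[i] = g[i] - acc
    if np.max(np.abs(G_new - G)) < 1e-12:
        G = G_new
        break
    G = G_new
```

In the full scheme, the same kind of fixed-point iteration is carried out for all dot states simultaneously, with a self-energy that couples the different states (introduced below).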
$G_{\alpha}(t)$ is also treated perturbatively in the dot--lead coupling, \begin{equation} \begin{aligned} G_{\alpha}(t) &= g_{\alpha}(t) - \int_0^t \int_0^{\tau_{1}} d\tau_1 d\tau_2 \\ & \hspace{1cm} \times \text{Tr}_B \Big\lbrace \rho_B \bra{\alpha} e^{-iH_0t} h_\lambda(\tau_1) h_\lambda(\tau_2) \ket{\alpha} \Big\rbrace +\dots, \label{eq:expansion_G} \end{aligned} \end{equation} with $g_{\alpha}(t)=\braket{\alpha|e^{-i H_D t}|\alpha}$ being the propagator on the isolated dot. We note that $G_\alpha(t)$ remains unmodified by the counting field: being restricted to one branch of the Keldysh contour, all counting-field dependence on the right-hand side of Eq.~(\ref{eq:expansion_G}) cancels out. Again, the lowest order of the expansion is iterated until self consistency while neglecting diagrams with crossing hybridization lines, such that $G$ obeys a set of equations similar to those obeyed by $K$, but on a single branch of the Keldysh contour: \begin{equation} \begin{aligned}G_{\alpha}(t) & = g_{\alpha}(t)\\ & -\int_{0}^{t}\int_{0}^{\tau_{1}}d\tau_{1}d\tau_{2}g_{\alpha}\left(t-\tau_{1}\right)\Sigma_{\alpha}\left(\tau_{1}-\tau_{2}\right)G_{\alpha}\left(\tau_{2}\right). \end{aligned} \label{eq:Dyson_G} \end{equation} The single-contour self-energy $\Sigma_{\alpha}(t)$ depends on the propagator $G_{\alpha}(t)$ and is given within the NCA by \begin{eqnarray} \Sigma_{\alpha}(t) &=& \sum_{\sigma\in \lbrace\uparrow, \downarrow\rbrace} \sum_{\ell \in \lbrace L,R \rbrace} \sum_{\beta} \Big( \Delta_{\ell}^<(t) \cdot \braket{\alpha | d_\sigma | \beta} \braket{\beta | d_\sigma^\dagger | \alpha} \nonumber \\&& + \Delta_{\ell}^>(t) \cdot \braket{\alpha | d_\sigma^\dagger | \beta} \braket{\beta | d_\sigma | \alpha} \Big) \cdot G_{\beta}(t) .
\label{eq:def_Sigma_G} \end{eqnarray} Again, for a diagrammatic representation of this part of the expansion as well as the single-contour self-energies we refer to Refs.~\onlinecite{cohen_greens_2014, chen_anderson-holstein_2016}. We conclude this section by commenting on the applicability of the NCA method, in particular to Kondo physics. Generally, the NCA is a method which is perturbative in the dot--lead coupling, suggesting that its applicability is restricted to the strong interaction regime. Still, its nonlinear nature makes its regime of validity hard to judge from simple analytical considerations. This problem is exacerbated for nonequilibrium systems, systems with lower symmetry, and complex observables. It has been argued that in equilibrium, the NCA provides accurate results for systems exhibiting strong interaction strengths $U$ as long as the temperature $T$ is not too low.\cite{Eckstein2010} However, under nonequilibrium conditions, deviations from this rule have also been reported.\cite{cohen_numerically_2013} Moreover, the method presented here is exact in the atomic limit, independent of the electron--electron interaction strength $U$. It is formulated directly on the Keldysh contour, such that it is applicable to nonequilibrium conditions and not restricted to the linear response regime. Nevertheless, it is known that NCA methods fail to provide accurate results in the low-temperature limit. As such, and as we noted in Sec.~\ref{sec:introduction}, it fails to provide accurate results for the scaling behavior or the Kondo temperature unless corrections are employed. We therefore focus on higher-energy remnants of Kondo physics and nonequilibrium effects. Even there, the treatment should be considered qualitative rather than quantitative. \subsection{Quantum master equations}\label{sec:QME2} Similar to the NCA approaches outlined in Sec.~\ref{sec:NCA_Method}, the QME method is based on a second order expansion in the dot--bath coupling.
In contrast to NCA-based theories, the QME approach does not employ a Dyson scheme to incorporate a subset of diagrammatic contributions to the hybridization expansion. Rather, it uses a Liouville-space resummation. The QME is an equation of motion for the reduced density matrix of the dot, $\varrho(t) = \text{Tr}_B(\rho(t))$, where $\rho$ is the full density matrix of the dot and the bath and $\text{Tr}_B$ signifies a partial trace over the bath degrees of freedom. A formally exact equation of motion is provided by the Nakajima--Zwanzig equation.\cite{Nakajima1958, Zwanzig1960, Fick1990quantum} Expanding this to second order in the dot--bath coupling, in combination with the Markov approximation, results in the equation of motion:\cite{nitzan2013chemical} \begin{eqnarray} \frac{\partial}{\partial t}\varrho(t) &=& -i [H_D, \varrho(t)] \\ && \nonumber - \int_0^\infty d\tau\, \text{Tr}_B \left( \left[ H_{DB}, \left[e^{-i(H_D+H_B)\tau} H_{DB} e^{i(H_D+H_B)\tau}, \varrho(t)\, \rho_B \right] \right] \right) . \end{eqnarray} For the system under consideration, the populations and the coherences of the reduced density matrix decouple due to the form of Hamiltonians $H_D$ and $H_{DB}$ as given in Eqs.~\eqref{eq:H_D} and \eqref{eq:H_coupl}.
As such, it is sufficient to consider the populations $p_\alpha(t) = \braket{\alpha|\varrho(t)|\alpha}$ of the reduced density matrix, whose dynamics obey the rate equations \begin{eqnarray} \frac{\partial}{\partial t} p_\alpha(t) &=& \sum_{\ell\in\lbrace L,R\rbrace \atop \beta\neq \alpha} |\vartheta_{\alpha\beta}| \times \label{eq:QME} \\ && \Big( \Gamma_{\ell}(\vartheta_{\alpha\beta}(\epsilon_\alpha - \epsilon_\beta)) f_\ell(\epsilon_\alpha - \epsilon_\beta) \times p_\beta(t) \nonumber \\ && - \Gamma_{\ell}(\vartheta_{\beta\alpha}(\epsilon_\beta - \epsilon_\alpha)) f_\ell(\epsilon_\beta - \epsilon_\alpha) \times p_\alpha(t)\Big). \nonumber \end{eqnarray} Here, $\alpha$ and $\beta$ are, as before, states in the dot subspace; $n_\alpha$ and $n_\beta$ are the number of electrons residing on the dot in the states $\alpha$ and $\beta$, respectively; and $\vartheta_{\alpha\beta} = \pm 1$ if $n_\alpha - n_\beta = \pm 1$, and zero otherwise. To obtain the FCS, the populations are dressed by a counting field, $p_\alpha(t) \rightarrow p_\alpha(t, \lambda)$.\cite{Bagrets2003} This corresponds to dressing the transition rates in Eq.~\eqref{eq:QME} according to \begin{eqnarray} \Gamma_{\ell}(\vartheta_{\alpha\beta}(\epsilon_\alpha - \epsilon_\beta)) &\rightarrow& \Gamma_{\ell}(\vartheta_{\alpha\beta}(\epsilon_\alpha - \epsilon_\beta)) e^{i\lambda\vartheta_{\alpha\beta}} . \end{eqnarray} The generating function is then calculated as the trace over the modified reduced density matrix, which is the sum over the modified populations: \begin{eqnarray} Z(t, \lambda) &=& \text{Tr}(\varrho(t, \lambda)) = \sum_\alpha p_\alpha(t, \lambda). \end{eqnarray} \section{Results}\label{sec:results} Subsequently, we use the NCA methodology described above to study the four lowest order cumulants, $I_L$, $S_L$, $S_{L2}$, and $S_{L3}$, at steady state. As we are considering the steady state, we henceforth drop the lead index $L$.
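The counting-field rate-equation scheme above can be illustrated numerically. In the long-time limit, $\ln Z(t,\lambda) \simeq \theta(\lambda)\, t$, where $\theta(\lambda)$ is the eigenvalue of the dressed rate matrix with the largest real part, so steady-state cumulants follow from $\lambda$-derivatives of $\theta$. The sketch below is a minimal illustration (it is not the NCA scheme of this work): a four-state Anderson dot in the wide-band limit, with all parameters invented for the demonstration.

```python
import numpy as np

# Illustrative rate-equation FCS for a four-state Anderson dot
# (|0>, |up>, |dn>, |2>), wide-band limit; all parameters are
# invented and given in units of Gamma.
GAMMA, EPS, U, T, V = 1.0, -2.0, 4.0, 0.5, 1.0
MU = {"L": +V / 2, "R": -V / 2}
ENERGY = np.array([0.0, EPS, EPS, 2 * EPS + U])
NUMBER = np.array([0, 1, 1, 2])

def fermi(e, mu):
    return 1.0 / (1.0 + np.exp((e - mu) / T))

def liouvillian(lam):
    """Rate matrix; gain terms for lead L are dressed by exp(+/- i lam)."""
    W = np.zeros((4, 4), dtype=complex)
    for a in range(4):
        for b in range(4):
            if abs(NUMBER[a] - NUMBER[b]) != 1:
                continue
            dE = ENERGY[a] - ENERGY[b]
            for lead in ("L", "R"):
                if NUMBER[a] > NUMBER[b]:    # electron enters the dot
                    r = GAMMA * fermi(dE, MU[lead])
                    phase = np.exp(+1j * lam) if lead == "L" else 1.0
                else:                        # electron leaves the dot
                    r = GAMMA * (1.0 - fermi(-dE, MU[lead]))
                    phase = np.exp(-1j * lam) if lead == "L" else 1.0
                W[a, b] += phase * r         # dressed gain term
                W[b, b] -= r                 # undressed loss term
    return W

def theta(lam):
    """Long-time cumulant generating rate: ln Z(t, lam) ~ theta(lam) * t."""
    ev = np.linalg.eigvals(liouvillian(lam))
    return ev[np.argmax(ev.real)]

# Steady-state current and zero-frequency noise from lambda-derivatives.
h = 1e-4
current = ((theta(h) - theta(-h)) / (2j * h)).real
noise = (-(theta(h) - 2 * theta(0.0) + theta(-h)) / h**2).real
```

At $\lambda=0$ the dressed rate matrix reduces to the ordinary Liouvillian, whose dominant eigenvalue vanishes at steady state; higher cumulants are obtained from higher finite-difference stencils in $\lambda$ in the same way.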
Further, since these quantities diverge linearly in time, we plot their first time derivative. We will investigate their dependence on bias voltage, gate voltage, and temperature. \begin{figure*}[htb!] \centering \includegraphics{NCA_NEW_FIGSIZE_1st_derivative.pdf} \caption{ NCA results. The first derivative with respect to bias voltage is shown for the current (a), the noise (b), $S_2$ (c) and $S_3$ (d). From top to bottom, the temperature increases from $T=0.25\Gamma$ to $T=0.5\Gamma$ and finally $T=\Gamma$. The black dashed lines, which serve as a guide for the eye, indicate the conditions $\epsilon_0 = \mu_{\text{L/R}}$ and $2\epsilon_0+U = \mu_{\text{L/R}}$ that separate resonant from nonresonant transport. \label{fig:NCA_mpas_dO} } \vspace*{0.305cm} \centering \includegraphics{NCA_NEW_FIGSIZE.pdf} \caption{ NCA results. The second derivative with respect to bias voltage is shown for the current (a), the noise (b), $S_2$ (c) and $S_3$ (d). From top to bottom, the temperature increases from $T=0.25\Gamma$ to $T=0.5\Gamma$ and finally $T=\Gamma$. The black dashed lines, which serve as a guide for the eye, indicate the conditions $\epsilon_0 = \mu_{\text{L/R}}$ and $2\epsilon_0+U = \mu_{\text{L/R}}$ that separate resonant from nonresonant transport. Red solid lines indicate the parameters shown in Fig.~\ref{fig:cuts_ddO_comparison}. } \label{fig:NCA_mpas} \end{figure*} \subsection{Signature of correlations in observables associated to higher order cumulants}\label{sec:NCA_VS_QME} \begin{figure*}[htb!] \centering \includegraphics{QME_NEW_FIGSIZE.pdf} \caption{ QME results. The second derivative with respect to bias voltage is shown for the current (a), the noise (b), $S_2$ (c) and $S_3$ (d). From top to bottom, the temperature increases from $T=0.25\Gamma$ to $T=0.5\Gamma$ and finally $T=\Gamma$. 
The black dashed lines, which serve as a guide for the eye, indicate the conditions $\epsilon_0 = \mu_{\text{L/R}}$ and $2\epsilon_0+U = \mu_{\text{L/R}}$ that separate resonant from nonresonant transport. } \label{fig:QME_mpas} \end{figure*} \begin{figure}[htb!] \centering \includegraphics{ddO_cuts_NEW_FIGSIZE.pdf} \caption{ NCA results. The second derivative with respect to bias voltage is shown for the current $I$ (upper left), the noise $S$ (upper right), and the higher order cumulants $S_2$ (lower left) and $S_3$ (lower right). The gate voltage is set to $\Phi_{\text{gate}}=2\Gamma$; the corresponding Kondo temperature is estimated to be $T_K \approx 0.87\Gamma$. These are horizontal cuts across the data in Fig.~\ref{fig:NCA_mpas}, as marked by the red solid lines. } \label{fig:cuts_ddO_comparison} \end{figure} We begin by exploring the influence of Kondo physics on higher order cumulants. The aim of this section is to establish the existence of effects in higher order cumulants that are related to the Kondo phenomenon at the edge of the Kondo regime. We do not investigate the scaling regime of the Kondo model, where methods like NRG would be most appropriate. Rather, we establish that the propagator NCA represents a qualitatively better alternative to QME approximations for the study of counting statistics. To this end, we compare NCA results, where a qualitative signature of such phenomena is expected, with QME results, where none is expected. The ability to capture such signatures at all is a significant advantage of the NCA over the QME. We are working at the edge of the Kondo regime, where the dot--lead coupling is not the smallest energy scale of the system. Therefore, agreement between the NCA and the QME data cannot be expected, as this would require far higher temperatures, where the temperature is the largest parameter of the system. We refrain from considering such high temperatures in favor of focusing on the onset of the Kondo phenomenon in higher order cumulants.
Later, in Sec.~\ref{sec:NCA_VS_iQMC}, we show that the NCA predictions are also more accurate (though not quantitatively exact) by comparing with numerically exact iQMC results. Fig.~\ref{fig:NCA_mpas_dO} provides an overview of the first derivatives with respect to bias voltage of the observables $I$, $S$, $S_2$, and $S_3$ as a function of bias and gate voltage, calculated by the NCA method. These first derivatives, such as the conductance $\partial I/\partial V$, are standard observables in various contexts. Columns of panels correspond to the different observables, while rows correspond to different temperatures. All derivatives with respect to bias voltage presented in this manuscript are calculated using the symmetric finite difference method on a sufficiently dense grid. Fig.\ \ref{fig:NCA_mpas_dO} contains a great deal of information in a rather compact form. To make it easier to understand, it is useful to focus on two particular sets of physical features. First, the transition between resonant and nonresonant transport, which is marked by dashed black lines, and for which agreement between the NCA and the QME results can be found in the large temperature limit. Second, features associated with the emergence of Kondo and mixed-valence physics are visible in some of the observables. The signatures of these correlation-driven effects are features centered around zero bias voltage, which are more pronounced for some observables than for others, and which disappear with increasing temperature. Since this central feature and its bias voltage dependence are of primary interest, but can be weak in some regimes, we henceforth consider the second derivative of the cumulants with respect to the bias voltage.
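The symmetric (central) finite-difference stencils used for all bias derivatives can be sketched as follows; the test function is an arbitrary smooth curve chosen only to verify the stencils.

```python
import numpy as np

def central_d1(y, dv):
    """First derivative on a uniform grid, O(dv^2) symmetric stencil."""
    return (y[2:] - y[:-2]) / (2 * dv)

def central_d2(y, dv):
    """Second derivative on a uniform grid, O(dv^2) symmetric stencil."""
    return (y[2:] - 2 * y[1:-1] + y[:-2]) / dv**2

# Quick check on a smooth test curve y(V) = V^3 (arbitrary choice):
v = np.linspace(-2.0, 2.0, 401)
dv = v[1] - v[0]
y = v**3
d1 = central_d1(y, dv)   # approximates 3 V^2 on the interior points
d2 = central_d2(y, dv)   # approximates 6 V  on the interior points
```

Both stencils are second-order accurate in the grid spacing, which is why a sufficiently dense bias grid is required for the second derivatives shown below.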
Figs.~\ref{fig:NCA_mpas} and \ref{fig:QME_mpas} provide an overview of the behavior of the second derivatives $\partial^2 I/\partial V^2$, $\partial^2 S/\partial V^2$, $\partial^2 S_2/\partial V^2$, and $\partial^2 S_3/\partial V^2$ as a function of bias and gate voltage at different temperatures in the NCA and QME approximations, respectively. To facilitate comparison, the figures employ equivalent false color representations of the data. Yet, we emphasize that in the parameter regime under investigation no agreement between the two approaches can be expected, and neither of the two methods is expected to provide quantitative results. As before, we predominantly focus on two features, the first of which is the transition between the resonant and nonresonant transport regime, which is again highlighted by black dashed lines. The associated behavior is clearly apparent in all QME plots and accentuated by the fact that the QME method neglects broadening effects provided by the coupling to the leads. In contrast, the NCA method accounts for some broadening provided by the leads, the precise impact of which depends on the parameters of the system as well as the bias and the gate voltage. This broadening leads to an onset of resonant transport which is smeared over a wider bias regime as compared to the QME results. This fact is emphasized when considering the second derivative with respect to bias voltage; even to the extent that some features seen in the QME are completely eliminated by broadening in the NCA. However, in particular when comparing the NCA and QME data for $T=1.0\Gamma$, where other effects are less prominent, some qualitative agreement between the approaches is observed. The second feature is the emergence of Kondo physics centered around zero bias voltage, this time clearly visible in the NCA plots, but completely missing from the QME data.
As can be seen by comparing the top and middle panels of Fig.~\ref{fig:NCA_mpas}, higher cumulants reveal progressively richer and more complex dependencies on the bias and gate voltages. Thus, they provide increasingly detailed modes of characterization. An interesting point to note is that the temperature at which cumulants exhibit correlated phenomena does not appear to vary significantly with the cumulant order. This is true both in and out of equilibrium, and to some degree supports the idea that the low energy physics is controlled by a few universal energy scales even when a bias voltage is applied. Moreover, we notice that the signatures of the Kondo effect appear more pronounced and extend over a larger bias range close to $\Phi_{\text{gate}}=\pm4\Gamma$, in particular when compared to the particle-hole symmetric case at $\Phi_{\text{gate}}=0$. This can be rationalized by the dependence of the Kondo temperature on the gate voltage, which is estimated to increase from $T_K \approx 0.8\Gamma$ for $\Phi_{\text{gate}}=0$ to $T_K \approx 2.8\Gamma$ for $\Phi_{\text{gate}}=\pm4\Gamma$.\cite{Hewson1997kondo} Still, we emphasize again that the NCA may describe certain trends correctly, but is not expected to give quantitatively reliable results for the Kondo temperature. Also, in the parameter regime under investigation, it cannot be expected that the shape of the Kondo features is solely determined by the Kondo temperature. A more detailed analysis of these features requires a systematic study with methods beyond the NCA, which can also access the deep Kondo regime and the scaling behavior. Further details are revealed by considering parameters below the resonance condition, at a constant nonzero gate voltage and a range of bias voltages. A cut of this kind across the data of Fig.~\ref{fig:NCA_mpas} is shown in Fig.~\ref{fig:cuts_ddO_comparison}, and the parameters chosen for the cut are marked in Fig.~\ref{fig:NCA_mpas} by solid red lines.
We refrain from reproducing the corresponding QME data here, as the QME results do not contain information on the Kondo feature (see Fig.\ \ref{fig:QME_mpas}) and would only complicate the plots. As even (odd) cumulants are symmetric (antisymmetric) with respect to bias voltage, it is instructive to directly compare $I$ with $S_2$, and $S$ with $S_3$, respectively. The data reveals that $\partial^2I/\partial V^2$ exhibits a single peak--dip structure which corresponds to the well-known peak in conductance at low bias voltage. Deep in the Kondo regime, the width of the conductance peak would be given by the Kondo temperature, but we do not expect the NCA to reproduce such physics quantitatively. Here, at the edge of the Kondo regime, we find that the resonance exhibits additional broadening. Turning to the results for $\partial^2S_2/\partial V^2$, another shoulder appears at low temperature at a bias voltage of about $V\sim1.5\Gamma$. This indicates that the noise analog of conductance, $\partial S_2/\partial V$, exhibits a structure where two peaks centered around zero bias voltage overlay each other. The underlying physical mechanism for this feature cannot be determined with certainty given the present methodology, but it is known that higher order cumulants are sensitive to a wider energy range, and the effect may be associated with the availability of different transport channels at higher bias voltages. Similarly, $\partial^2S/\partial V^2$ shows a single pronounced peak centered around zero bias voltage, whereas at low temperature, $\partial^2S_3/\partial V^2$ develops distinctive side peaks at a bias voltage $V\sim\Gamma$. We expect that the width and the magnitude of these features are associated with the Kondo temperature in the deep Kondo regime. The realization that higher order cumulants show richer Kondo features suggests that they can aid the identification of correlation effects.
This is particularly true in situations where higher order cumulants are measured but the availability of data is otherwise limited, such that standard procedures, like measuring the scaling of the conductance with temperature, are not possible. In many experiments and numerical methods, it is difficult to measure small signals like the current at very low bias voltages. In such cases, considering higher order cumulants may provide a diagnostic tool for identifying Kondo correlations that is applicable at higher biases and is characterized by larger signals. \subsection{Generalized Fano factors and their implications}\label{sec:Fano} \begin{figure}[htb!] \centering \includegraphics{Fano_NEW_FIGSIZE.pdf} \caption{ NCA results. The second derivative with respect to bias voltage is shown for the generalized Fano factors $F' = S_2/I$ (a) and $F'' = S_3/S$ (b). From top to bottom, the temperature increases from $T=0.25\Gamma$ to $T=\Gamma$. The black dashed lines, which serve as a guide for the eye, indicate the conditions $\epsilon_0 = \mu_{\text{L/R}}$ and $2\epsilon_0+U = \mu_{\text{L/R}}$ that separate resonant from nonresonant transport. } \label{fig:ddF_NCA_mpas} \end{figure} As noted in Sec.~\ref{sec:FCS}, the Fano factor $F=S/I$ manifests a singularity at zero voltage, where the current (odd with respect to the bias voltage) vanishes while the noise (even with respect to the bias voltage) does not. $F$ will be revisited in Sec.~\ref{sec:NCA_VS_iQMC}, where we benchmark the NCA method against numerically exact results. In the following, we consider the generalized Fano factors $F' \equiv S_2/I$ and $F'' \equiv S_3/S$. These are the lowest order ratios comprising only odd and only even cumulants, respectively. They are therefore free of singular behavior at zero voltage, making them potentially useful for exploring Kondo physics. For both observables, it is once again more convenient to plot the second derivative with respect to bias voltage.
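The regularity of $F'$ and $F''$ at zero bias follows purely from parity: numerator and denominator are both odd (for $F'$) or both even (for $F''$) in $V$, so their ratio has a finite $V\to0$ limit, unlike $F=S/I$. A toy numerical check, with invented low-bias expansion coefficients:

```python
import numpy as np

# Toy low-bias expansions with invented coefficients: odd cumulants
# (I, S2) vanish linearly at V = 0, even ones (S, S3) tend to a
# constant set by thermal (Nyquist-Johnson) noise.
def I_(V):  return 0.7 * V + 0.05 * V**3   # odd in V
def S_(V):  return 0.9 + 0.3 * V**2        # even in V
def S2_(V): return 0.5 * V + 0.02 * V**3   # odd in V
def S3_(V): return 0.6 + 0.1 * V**2        # even in V

V = np.array([1e-1, 1e-2, 1e-3, 1e-4])
F = S_(V) / I_(V)      # even/odd: diverges like 1/V as V -> 0
Fp = S2_(V) / I_(V)    # odd/odd:  finite limit 0.5/0.7
Fpp = S3_(V) / S_(V)   # even/even: finite limit 0.6/0.9
```

As $V$ decreases, $F$ grows without bound while $F'$ and $F''$ converge to the ratios of the leading expansion coefficients.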
In Fig.~\ref{fig:ddF_NCA_mpas} these are shown for the same parameter ranges used in Figs.~\ref{fig:NCA_mpas} and \ref{fig:QME_mpas}. $F'$ and $F''$ are shown in the left and right panels, respectively; temperature increases from top to bottom. Both generalized Fano factors exhibit sharp, well defined Kondo features at low temperatures. As before, these correlation-driven features disappear at higher temperatures. The separate cumulants in Fig.~\ref{fig:NCA_mpas} are dominated by the signature of the transition between off-resonant and resonant transport. Remarkably, however, in Fig.~\ref{fig:ddF_NCA_mpas} $F'$ exhibits Kondo features of comparable scale to those delineating the resonant transport edge, and $F''$ is dominated by the Kondo features. This suggests that symmetry-corrected higher order Fano factors contain detailed information regarding correlation effects, and may be a more sensitive probe of such physics than lower order quantities. As the temperature is lowered and the Kondo effect develops, the values of $F'$ and $F''$ at low bias voltages increase, except near the resonance condition. Since the Kondo effect enhances the current $I$, an increase in $F'$ implies that $S_2$ is more strongly enhanced than $I$. Correspondingly, the underlying probability distribution describing electron transfer becomes increasingly skewed. Similarly, while the behavior of the noise is more complicated, $S$ is mostly suppressed by Kondo physics, and the same is true for $S_3$. An increase in $F''$ therefore implies a weaker suppression of $S_3$ than that of $S$, and an increasingly bifurcated probability distribution. A more detailed analysis of the probabilities $P_L(t,n)$ would be interesting in this regard, but is beyond the scope of the present work. \subsection{Comparison with numerically exact results}\label{sec:NCA_VS_iQMC} \begin{figure*}[htb!]
\centering \includegraphics{Fano_cuts_NEW_FIGSIZE.pdf} \caption{ Fano factor $F$ (a); and generalized Fano factors $F'=S_2/I$ (b), and $F''=S_3/S$ (c), at a gate voltage of $\Phi_{\text{gate}}=2\Gamma$. Colors correspond to different temperatures. Solid lines are NCA results, dashed lines are QME results, and circles are numerically exact results obtained with iQMC. The dashed black line in the plots highlights the value $1$, which is associated with a (classical) Poissonian distribution. } \label{fig:cuts_F} \end{figure*} It is clear from the data that we have presented so far that, when considering higher order transport cumulants, the NCA method captures physics not accounted for by the QME method. This is not entirely surprising, since it is known to do so for single-particle correlation functions and for the current. However, since both these techniques are approximate, it is not at all obvious that the NCA actually provides higher accuracy as well. We will therefore compare the NCA and QME results to numerically exact iQMC data, in order to assess which approximate method provides more accurate results. Fig.~\ref{fig:cuts_F} depicts the Fano factor $F$ and its generalizations $F'$ and $F''$ as functions of the bias voltage, once again for three different temperatures. Solid lines represent NCA data and dashed lines represent QME data. Dots indicate iQMC results converged with respect to all numerical parameters. Error bars and shading on these dots correspond to confidence intervals (see App.~\ref{app:iQMC_error} for details regarding how these are obtained). We do not consider second derivatives with respect to the bias voltage here, since obtaining these accurately in iQMC involves further technical challenges. Similarly, we refrain from discussing data below a bias voltage of 0.5. We note that in general, lower voltages and higher order cumulants are more difficult to access in iQMC (see Apps.~\ref{app:iQMC_error} and \ref{app:iQMC_lambda}). 
Consequently, if one is interested in accessing the details and the scaling of the features discussed above, it might be advantageous to resort to another numerically exact method. The left panel of Fig.~\ref{fig:cuts_F} shows the Fano factor $F$. As noted in Sec.~\ref{sec:Fano}, at low bias voltages $F$ is dominated by the Nyquist--Johnson singularity and the isolation of Kondo-related features is difficult, but here we focus on the accuracy of the different methods. Generally speaking, reasonable agreement can be observed between the NCA, QME and the iQMC results for all temperatures, both qualitatively and quantitatively. At high temperatures and low voltages, NCA and QME results are almost indistinguishable from each other and accurately capture the trends in the exact result. Importantly, however, the QME always predicts Poisson statistics with a Fano factor of 1 at large bias voltages. The NCA correctly captures deviations from this, a result validated by the iQMC data. Results for the generalized Fano factor $F'$ are presented in the middle panel of Fig.~\ref{fig:cuts_F}. Overall, the three methods predict a qualitatively similar dependence of $F'$ on bias voltage and temperature, though there are quantitative differences. The QME method predicts larger values than the NCA approach, while the outcome of the NCA calculations is in better agreement with the iQMC data. Despite the increased errors associated with the iQMC results for $F'$, it is possible to establish that the NCA method provides more accurate results than the QME approach. However, a quantitatively accurate assessment, especially at low voltages and temperatures where the Kondo effect can be most cleanly defined and observed, is beyond the reach of the iQMC data at hand.
For the second generalized Fano factor $F''$, depicted in the right panel of Fig.~\ref{fig:cuts_F}, the error associated with the iQMC scheme dominates the exact data to the extent that trends in the bias and temperature dependence are not obvious. For this Fano factor, the iQMC method in its current implementation breaks down, indicating a regime where the use of approximate schemes is more favorable. When comparing the QME and the NCA results, the QME approach again predicts larger values for $F''$ than the NCA method. As before, the NCA data is in better agreement with the iQMC results, hinting at a higher accuracy of the NCA method. For a more detailed analysis, better iQMC data is required. \section{Summary}\label{sec:summary} We developed a simple theoretical approach based on the noncrossing approximation (NCA) to the study of full counting statistics (FCS) in nonequilibrium transport, and implemented it for the Anderson impurity model. The approach can be easily generalized to more generic models. Its accuracy can be improved by diagrammatic means, for example by considering one-crossing and vertex corrections. The NCA method requires substantially more modest computational resources than its numerically exact counterpart, the inchworm Monte Carlo (iQMC) method; and is for most practical purposes almost as easy to use as the commonly employed quantum master equations (QMEs). In the present case, the QME and NCA data were generated on a desktop workstation within a few hours and approximately a day, respectively; while the iQMC results were generated over several days on a small cluster. Despite this simplicity, the NCA captures some physics not present in the QME approximation. To showcase the advantages of the NCA approach to FCS, we compared it against the QME method for the first few transport cumulants. Unsurprisingly, this illustrated that the former shows signatures of the Kondo effect while the latter does not.
More interestingly, it showed that the NCA predicts a rich and detailed set of features in the higher order cumulants. Experimentally, it is often advantageous to consider ratios between transport cumulants, like the Fano factor. However, at low bias voltages the Fano factor is dominated by a Nyquist--Johnson singularity that obstructs one's view of Kondo-related features. We explored a set of generalized, symmetry-motivated Fano factors constructed from higher order cumulants that are designed to remove this singularity. Within the NCA method, we showed that these quantities are excellent probes of Kondo physics. Finally, we established the accuracy of the method upon comparison with numerically exact benchmarks obtained from the iQMC scheme. We showed that the predictions of the approximate NCA method are superior to those provided by the QME approach. For the Fano factor, we demonstrated that the NCA can even provide quantitatively accurate results. \section*{Acknowledgements} A.E. was supported by the Raymond and Beverly Sackler Center for Computational Molecular and Materials Science, Tel Aviv University. E.G. was supported by the Simons Collaboration on the Many Electron Problem. G.C. acknowledges support by the Israel Science Foundation (Grants No.~1604/16 and 218/19). This research was supported by Grant No.~2016087 from the United States-Israel Binational Science Foundation (BSF).
\section{Introduction} Gas metallicity is regulated by a complex interplay between star formation, infall of metal-poor gas and outflow of enriched material. A fundamental discovery is the relation between stellar mass $M_\star$\ and metallicity \citep{McClure68,Lequeux79,Garnett02,Tremonti04,Lee06}, with more massive galaxies showing higher metallicities. The origin of this relation is debated, and many different explanations have been proposed, including ejection of metal-enriched gas (e.g., \citealt{Edmunds90,Lehnert96a,Tremonti04}), ``downsizing'', i.e., a systematic dependence of the efficiency of star formation with galaxy mass (e.g., \citealt{Brooks07,Mouchine08,Calura09a}), variation of the IMF with galaxy mass \citep{Koppen07}, and infall of metal-poor gas \citep{Finlator08,Dave10}. The mass-metallicity relation has been studied by \cite{Erb06a} at z$\sim$2.2 and by \cite{Maiolino08} and \cite{Mannucci09b} at z=3--4, finding a strong and monotonic evolution, with metallicity decreasing with redshift at a given mass (see Fig.~\ref{fig:massmetevol}). The same authors \citep{Erb06a,Erb08,Mannucci09b} have also studied the relation between metallicity and gas fraction, i.e., the effective yields, obtaining clear evidence of the importance of infall in high redshift galaxies. If infall is at the origin of the star formation activity, and outflows are produced by exploding supernovae (SNe), a relation between metallicity and SFR is likely to exist. In other words, SFR is a parameter that should be considered in the scaling relations that include metallicity, such as the mass-metallicity relation. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth]{massmetevol.ps} } \caption{\footnotesize Evolution of the mass-metallicity relation from local to high redshift galaxies from \cite{Mannucci09b}. Data are from \cite{Kewley08} (z=0.07), \cite{Savaglio05} (z=0.7), \cite{Erb06a} (z=2.2) and \cite{Mannucci09b} (z=3--4).
} \label{fig:massmetevol} \end{figure} \begin{figure*} \centerline{ \includegraphics[width=0.48\textwidth]{massmet.ps} \includegraphics[width=0.48\textwidth]{sfrmet.ps} } \caption{\footnotesize {\em Left panel:} The mass-metallicity relation of local SDSS galaxies. The grey-shaded areas contain 64\% and 90\% of all SDSS galaxies, with the thick central line showing the median relation. The colored lines show the median metallicities, as a function of $M_\star$, of SDSS galaxies with different values of SFR. {\em Right panel:} median metallicity as a function of SFR for galaxies of different $M_\star$. At all masses with log($M_\star$)$<$10.7, metallicity decreases with increasing SFR at constant mass. } \label{fig:massmet} \end{figure*} \section{The local Fundamental Metallicity Relation} To test the hypothesis of a correlation between SFR and metallicity in the present universe and at high redshift, we have studied several samples of galaxies at different redshifts whose metallicity, $M_\star$, and SFR have been measured. A full description of the data set is given in \cite{Mannucci10}. Local galaxies are well measured by the SDSS project \citep{Abazajian09}. Among the $\sim10^6$ galaxies with observed spectra, we selected star-forming objects with redshift between 0.07 and 0.30, having a signal-to-noise ratio (SNR) of H$\alpha$\ of SNR$>$25 and dust extinction $A_V<2.5$. Total stellar masses $M_\star$\ from \cite{Kauffmann03a} were used, scaled to the \cite{Chabrier03} initial mass function (IMF). SFRs inside the spectroscopic aperture were measured from the H$\alpha$\ emission line flux corrected for dust extinction as estimated from the Balmer decrement. The conversion factor between H$\alpha$\ luminosity and SFR in \cite{Kennicutt98} was used, corrected to a \cite{Chabrier03} IMF. Oxygen gas-phase abundances were measured from the emission line ratios as described in \cite{Maiolino08}.
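The dust-correction and SFR steps just described can be sketched as follows. The extinction-curve coefficients (Cardelli-type values for $k(\mathrm{H}\alpha)$ and $k(\mathrm{H}\beta)$) and the Salpeter-to-Chabrier rescaling factor of $\sim$1.7 are illustrative assumptions, not necessarily the exact values adopted in \cite{Mannucci10}.

```python
import numpy as np

# Balmer-decrement dust correction and Halpha SFR. The extinction-curve
# values below are Cardelli-type numbers and the Salpeter->Chabrier
# factor of 1.7 is approximate; both are illustrative assumptions.
K_HA, K_HB = 2.53, 3.61       # extinction curve at Halpha, Hbeta
BALMER_INTRINSIC = 2.86       # case-B intrinsic Halpha/Hbeta ratio

def ebv_from_balmer(f_ha, f_hb):
    """Colour excess E(B-V) from the observed Balmer decrement."""
    return 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / BALMER_INTRINSIC)

def sfr_halpha(L_ha_obs, ebv):
    """SFR [Msun/yr] from observed L(Halpha) [erg/s]: Kennicutt (1998)
    Salpeter calibration, rescaled by ~1.7 for a Chabrier IMF."""
    L_corr = L_ha_obs * 10 ** (0.4 * K_HA * ebv)   # de-redden the line flux
    return 7.9e-42 / 1.7 * L_corr

ebv = ebv_from_balmer(4.0, 1.0)   # an observed decrement of 4
sfr = sfr_halpha(1.26e41, ebv)
```

An observed decrement equal to the intrinsic value 2.86 yields zero extinction, while larger decrements yield progressively larger corrections to the H$\alpha$\ luminosity.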
An average between the values obtained from [NII]$\lambda$6584/H$\alpha$\ and R23=([OII]$\lambda$3727+[OIII]$\lambda$4958,5007)/H$\beta$\ was used. The final galaxy sample contains 141825 galaxies. The grey-shaded area in the left panel of Fig.~\ref{fig:massmet} shows the mass-metallicity relation for our sample of SDSS galaxies. Despite the differences in the selection of the sample and in the measure of metallicity, our results are very similar to what has been found by \cite{Tremonti04}. The metallicity dispersion of our sample, $\sim$0.08~dex, is somewhat smaller than the $\sim$0.10~dex found by these authors, possibly due to differences in sample selection and metallicity calibration. The left panel of Fig.~\ref{fig:massmet} also shows, as a function of $M_\star$, the median metallicities of SDSS galaxies having different levels of SFR. It is evident that a systematic segregation in SFR is present in the data. While galaxies with high $M_\star$\ (log($M_\star$)$>$10.9) show no correlation between metallicity and SFR, at low $M_\star$\ more active galaxies also show lower metallicity. The same systematic dependence of metallicity on SFR can be seen in the right panel of Fig.~\ref{fig:massmet}, where metallicity is plotted as a function of SFR for different values of mass. Galaxies with high SFRs show a sharp dependence of metallicity on SFR, while less active galaxies show a less pronounced dependence. The dependence of metallicity on $M_\star$\ and SFR can be better visualized in a 3D space with these three coordinates, as shown in Figure~\ref{fig:cfr1}. SDSS galaxies appear to define a tight surface in this space, the Fundamental Metallicity Relation (FMR). The introduction of the FMR results in a significant reduction of residual metallicity scatter with respect to the simple mass-metallicity relation.
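Such a tight surface can be parametrized by a second-order polynomial in the two coordinates $m = \log(M_\star) - 10$ and $s = \log(\mathrm{SFR})$ and fit by least squares. A minimal sketch on synthetic data follows; the "true" coefficients are invented for the demonstration and are not the published FMR fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic median metallicities on a (mass, SFR) grid. The "true"
# coefficients are invented for this demonstration; they are NOT the
# published FMR fit.
m = rng.uniform(-1.0, 1.5, 500)    # log(M*) - 10
s = rng.uniform(-1.5, 1.0, 500)    # log(SFR)
true = np.array([8.90, 0.4, -0.15, -0.2, 0.1, -0.05])
design = np.column_stack([np.ones_like(m), m, s, m**2, m * s, s**2])
z = design @ true + rng.normal(0.0, 0.05, m.size)   # 0.05 dex scatter

coef, *_ = np.linalg.lstsq(design, z, rcond=None)
scatter = (z - design @ coef).std()   # residual dispersion (~0.05 dex)
```

With enough galaxies per bin, the fit recovers the input coefficients and the residual dispersion reflects only the intrinsic scatter around the surface, mirroring the $\sim$0.05~dex dispersion quoted above.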
The dispersion of individual SDSS galaxies around the FMR is $\sim$0.06~dex when computed across the full FMR, and reduces to $\sim$0.05~dex, i.e., about 12\%, in the central part of the relation where most of the galaxies are found. The final scatter is consistent with the intrinsic uncertainties in the measure of metallicity ($\sim$0.03~dex for the calibration, to be added to the uncertainties in the line ratios), of mass (estimated to be 0.09~dex by \citealt{Tremonti04}), and of the SFR, which are dominated by the uncertainties on dust extinction. The reduction in scatter with respect to the mass-metallicity relation becomes even more significant when considering that most of the galaxies in the sample cover a small range in SFR, with 64\% of the galaxies ($\pm$1$\sigma$) contained within 0.8~dex. The mass-metallicity relation is not an adequate representation of galaxy samples with a larger spread of SFRs, as usually found at intermediate redshifts. \begin{figure*}[t] \centerline{ \includegraphics[width=0.32\textwidth]{3Dfmr3.ps} \includegraphics[width=0.32\textwidth]{3Dfmr4.ps} \includegraphics[width=0.32\textwidth]{3Dfmr5.ps} } \caption{\footnotesize Three projections of the Fundamental Metallicity Relation among $M_\star$, SFR and gas-phase metallicity. Circles without error bars are the median values of metallicity of local SDSS galaxies in bins of $M_\star$\ and SFR, color-coded with SFR as shown in the colorbar on the right. These galaxies define a tight surface in the 3D space, with a dispersion of single galaxies around this surface of $\sim$0.05~dex. The black dots show a second-order fit to these SDSS data, extrapolated toward higher SFR. Square dots with error bars are the median values of high redshift galaxies, as explained in the text. Labels show the corresponding redshifts. The projection in the lower-left panel emphasizes that most of the high-redshift data, except the point at z=3.3, are found on the same surface defined by low-redshift data.
The projection in the lower-right panel corresponds to the mass-metallicity relation, as in Fig.~\ref{fig:massmet}, showing that the observed evolution in metallicity up to z=2.5 is due to the progressively increasing SFR. } \label{fig:cfr1} \end{figure*} \begin{figure*} \centerline{ \includegraphics[width=0.48\textwidth]{plotevol2.ps} \includegraphics[width=0.48\textwidth]{plotevol1.ps} } \caption{\footnotesize {\em Left:} Metallicity as a function of SFR for galaxies in the three bins of $M_\star$\ containing high-redshift galaxies. The values of log($M_\star$) are shown by the labels on the left. Empty square dots are the median values of metallicity of local SDSS galaxies, with error bars showing 1$\sigma$ dispersions. Lines are the fits to these data. Solid dots are median values for high-redshift galaxies with z$<$2.5 in the same mass bins, with labels showing redshifts. {\em Right:} metallicity difference from the FMR for galaxies at different redshifts, color-coded in mass as in the left panel. The SDSS galaxies defining the relation are shown at z$\sim$0.1 with their dispersion around the FMR. All the galaxy samples up to z=2.5 are consistent with no evolution of the FMR defined locally. Metallicities lower by $\sim$0.6~dex are observed at z$\sim$3.3. } \label{fig:plotevol} \end{figure*} \section{The FMR at high-redshift} \label{sec:highz} The local galaxies can be compared with several samples of high-redshift objects. We extracted from the literature samples of galaxies in four redshift bins, for a total of $\sim$300 objects, having published values of emission line fluxes, $M_\star$, and dust extinction: 0.5$<$z$<$0.9 (\citealt{Savaglio05}, GDDS galaxies), 1.0$<$z$<$1.6 \citep{Shapley05a,Liu08,Wright09,Epinat09a}, 2.0$<$z$<$2.5 \citep{Erb06a,Law09b,Lehnert09,Forster-Schreiber09}, and 3.0$<$z$<$3.7 \citep{Maiolino08,Mannucci09b}. The same procedure used for the SDSS galaxies was applied to these galaxies.
Galaxies at all redshifts follow well defined mass-metallicity relations (see, for example, \citealt{Mannucci09b}, and references therein). For this reason each of these samples, except the one at z$\sim$3.3, which contains only 16 objects, is divided into two equally-numerous samples of low- and high-$M_\star$\ objects. Median values of $M_\star$, SFR and metallicities are computed for each of these samples. Galaxies up to z$\sim$2.5 follow the FMR defined locally, with no sign of evolution. This is an unexpected result, since at the same time the mass-metallicity relation is observed to evolve rapidly with redshift (see Fig.~\ref{fig:massmetevol}). The solution of this apparent paradox is that distant galaxies have, on average, larger SFRs, and, therefore, fall in a different part of the same FMR. In the SDSS sample, metallicity changes more with $M_\star$\ ($\sim$0.5~dex from one extreme to the other at constant SFR, see Fig.~\ref{fig:massmet}) than with SFR ($\sim$0.30~dex at constant mass). Therefore, mass is the main driver of the level of chemical enrichment of SDSS galaxies. This is related to the fact that galaxies with high SFRs, the objects showing the strongest dependence of metallicity on SFR (see the right panel of Fig.~\ref{fig:massmet}), are quite rare in the local universe. At high redshifts, mainly active galaxies are selected, and the dependence of metallicity on SFR becomes dominant. Galaxies at z$\sim$3.3 show metallicities lower by about 0.6~dex with respect to both the FMR defined by the SDSS sample and galaxies at 0.5$<$z$<$2.5. This is an indication that some evolution of the FMR appears at z$>$2.5, although its size can be affected by several potential biases (see \citealt{Mannucci10} for a full discussion). A larger data set at z$>$3 is needed to settle this question. 
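The resolution of the paradox, a fixed FMR sampled at progressively higher SFR, can be illustrated with a toy surface (the coefficients and distributions below are made up for illustration only and are not the published fit):

```python
import numpy as np

def toy_fmr(log_mass, log_sfr):
    # illustrative surface: metallicity rises with mass, falls with SFR
    return 8.7 + 0.3 * (log_mass - 10.0) - 0.2 * log_sfr

rng = np.random.default_rng(1)
log_mass = rng.normal(10.0, 0.4, 5000)

# local-like selection (low SFR) vs. high-z-like selection (high SFR)
z_local = toy_fmr(log_mass, rng.normal(0.0, 0.3, log_mass.size))
z_highz = toy_fmr(log_mass, rng.normal(1.5, 0.3, log_mass.size))

# The surface itself never evolves, yet the median metallicity drops
# for the high-SFR sample: an apparent evolution of the MZR.
print(np.median(z_local) - np.median(z_highz))  # about 0.2 * 1.5 = 0.3 dex
```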
In principle, metallicity is a simple quantity, as it is dominated by three processes: star formation, infall, and outflow. If the scaling laws of each of these three processes are known, the dependence of metallicity on SFR and $M_\star$\ can be predicted. In practice, these three processes have a very complex dependence on the properties of the galaxies, and can introduce scaling relations in many different ways. First, it is not known how {\em outflows}, due to either SNe or AGNs, depend on the properties of the galaxies. Second, {\em infalls} of pristine gas are expected to influence metallicity in two ways: metallicity can be reduced by the direct accretion of metal-poor gas, and can be increased by the star formation activity which is likely to follow accretion. Third, the star formation activity is known to depend on galaxy mass, with more massive galaxies forming a larger fraction of stars at higher redshifts, and this effect produces higher metallicities in more massive galaxies. The dependence of metallicity on SFR can be explained by the dilution effect of the infalling gas. A simple model can be constructed (see \citealt{Mannucci10}) where a variable amount of metal-poor, infalling gas, forming stars according to the Schmidt-Kennicutt law, can explain the dependence of metallicity on SFR. For this scenario to work, the timescales of chemical enrichment must be longer than the dynamical scales of the galaxies, over which the SFR is expected to evolve. In other words, galaxies on the FMR are in a {\em transient phase}: after an infall, galaxies first evolve towards higher SFR and lower metallicities. Later, while gas is converted into stars and new metals are produced, galaxies either drop out of the sample because of their faint H$\alpha$ emission, or evolve toward higher values of mass and metallicity along the FMR. 
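This transient phase can be illustrated with a minimal numerical sketch (illustrative yield, rates and time units; a simplified caricature, not the actual model of \citealt{Mannucci10}): a burst of pristine accretion raises the SFR while diluting the metallicity, which then recovers as the gas is consumed.

```python
import numpy as np

y, dt = 0.02, 1e-3        # stellar yield and time step (arbitrary units)
gas, Z = 1.0, 0.02        # initial gas mass and metallicity
history = []
for step in range(4000):
    infall = 2.0 if step < 500 else 0.0     # burst of pristine accretion
    sfr = 0.5 * gas**1.4                    # Schmidt-Kennicutt-like scaling
    dZ = (y * sfr - Z * infall) / gas * dt  # enrichment minus dilution
    gas += (infall - sfr) * dt
    Z += dZ
    history.append((sfr, Z))

sfr_t, Z_t = np.array(history).T
# during accretion the SFR rises while Z drops; afterwards Z recovers
print(f"burst end: SFR x{sfr_t[499]/sfr_t[0]:.2f}, Z x{Z_t[499]/Z_t[0]:.2f}")
print(f"late time: Z x{Z_t[-1]/Z_t[499]:.2f} relative to burst end")
```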
In this scenario, the dependence of metallicity on SFR is due to infall and dominates at high redshifts, where galaxies with massive infalls and large SFRs are found. In contrast, in the local universe such galaxies are rare, most of the galaxies have low levels of accretion, and abundances are dominated by the dependence on mass, possibly due to outflows. In many local galaxies, timescales of chemical enrichment can be shorter than the other relevant timescales (e.g., \citealt{Silk93}), and galaxies can be in a {\em quasi steady-state situation}, in which gas infall, star formation and metal ejection occur simultaneously \citep{Bouche09}. Assuming this quasi steady-state situation, in which infall and SFR evolve slowly with respect to the timescale of chemical enrichment, it can be shown \citep{Mannucci10} that our results support a scenario where outflows are inversely proportional to mass and increase with SFR$^{0.65}$.\\ The small scatter of SDSS galaxies around the FMR can be used to constrain the characteristics of gas accretion. For this infall/outflow scenario to work and produce a very small scatter around the FMR, two conditions are simultaneously required: (1) star formation is always associated with the same level of metallicity dilution due to infall of metal-poor gas; (2) there is a relation between the amount of infalling and outflowing gas and the level of star formation. These conditions for the existence of the FMR fit into the smooth accretion models proposed by several groups \citep{Bournaud09,Dekel09}, where continuous infall of pristine gas is the main driver of the growth of galaxies. In this case, metal-poor gas is continuously accreted by galaxies and converted into stars, and a long-lasting equilibrium between gas accretion, star formation, and metal ejection is expected to be established. 
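In this quasi steady-state picture, the mass dependence can be made explicit with a schematic metal budget (a simplified balance in which $y$ is the stellar yield, $\psi$ the SFR, $\eta$ the outflow mass-loading factor, and the accreted gas is assumed pristine):
\[
\dot M_Z = y\,\psi - Z\,\psi - Z\,\dot M_{\rm out} \simeq 0 , \qquad \dot M_{\rm out} = \eta\,\psi ,
\]
which gives the equilibrium abundance
\[
Z_{\rm eq} \simeq \frac{y}{1+\eta}\, .
\]
A mass-loading factor $\eta$ that decreases with $M_\star$, as implied by outflows inversely proportional to mass, then yields higher equilibrium metallicities in more massive galaxies; reproducing the full SFR dependence additionally requires the dilution term discussed above.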
\begin{figure*}[t] \label{figmet} \centerline{\includegraphics[width=0.8\textwidth]{cresci_fig1.eps}} \caption{\footnotesize Surface brightness of the [OIII]$\lambda$5007 line, velocity map, and gas-phase metallicity, plotted as relative abundances of oxygen and hydrogen parameterized in units of $12+log(O/H)$, of the three galaxies in \cite{Cresci10}. Lower-metallicity regions are surrounded by a more enriched disk. The crosses in each panel mark the position of the continuum peak.} \end{figure*} \section{Abundance gradients in high-redshift galaxies} Recently \citep{Cresci10}, we have obtained direct evidence of the presence of smooth accretion of gas in high-redshift galaxies. We selected three Lyman-break galaxies among the AMAZE \citep{Maiolino08} and LSD \citep{Mannucci09b} samples which show a remarkably symmetric velocity field in the [OIII] emission line, which traces the ionized gas kinematics (see Fig.~\ref{figmet}). Such kinematics indicates that these are rotationally supported disks (Gnerucci et al., in preparation), with no evidence for more complex merger-induced dynamics. Near-infrared spectroscopic observations of the galaxies were obtained with the integral field spectrometer SINFONI on VLT, and we used the flux ratios between the main rest-frame optical lines to obtain the metallicity map shown in Fig.~\ref{figmet}. An unresolved region with lower metallicity is evident in each map, surrounded by a more uniform disk of higher metal content. In one case, CDFa-C9, the lower-metallicity region is coincident with the galaxy center, as traced by the continuum peak, while it is offset by $\sim 0.60''$ (4.6 kpc) in SS22a-C16 and $\sim0.45''$ (3.4 kpc) in SS22a-M38. On the other hand, in all the galaxies the area of lower metallicity is coincident with, or closer than $0.25''$ (1.9 kpc, half of the PSF FWHM) to, the regions of enhanced line emission, tracing the more active star-forming regions. 
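The quoted conversions from angular offsets to physical scales can be roughly checked assuming a flat $\Lambda$CDM cosmology with $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.3$ (an assumption; the original work may adopt slightly different parameters):

```python
import math

H0, Om, c = 70.0, 0.3, 299792.458   # km/s/Mpc, km/s

def kpc_per_arcsec(z, n=10000):
    """Angular scale at redshift z in a flat LambdaCDM cosmology."""
    E = lambda zp: math.sqrt(Om * (1 + zp)**3 + 1.0 - Om)
    dz = z / n
    # comoving distance (c/H0) * integral dz'/E(z'), trapezoidal rule
    Dc = (0.5 * (1 / E(0) + 1 / E(z)) +
          sum(1 / E(i * dz) for i in range(1, n))) * dz * c / H0
    Da = Dc / (1 + z)                        # angular diameter distance, Mpc
    return Da * 1000.0 * math.pi / (180 * 3600)

scale = kpc_per_arcsec(3.0)
print(f"{scale:.2f} kpc/arcsec at z=3")      # ~7.7 kpc/arcsec
print(f"0.60'' -> {0.60 * scale:.1f} kpc, 0.45'' -> {0.45 * scale:.1f} kpc")
```

With these parameters the scale is $\sim$7.7 kpc/arcsec at $z=3$, consistent with the quoted 4.6 and 3.4 kpc offsets.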
The average difference between high- and low-metallicity regions in the three galaxies is $0.55$ in units of 12+log(O/H), larger than the $\sim0.2-0.4$ dex gradients measured in the Milky Way and other local spirals \citep{van-Zee98} on the same spatial scales. The measured gas-phase abundance variations have a significance between 98\% and 99.8\%. It can be shown \citep{Cresci10} that variations of the ionization parameter across the galaxies cannot explain the observed gradients of line ratios, and that different metallicities are really required. Current models of chemical enrichment in galaxies \citep{Molla97} cannot reproduce our observations at the moment, as they assume radially isotropic gas accretion onto the disk and the instantaneous recycling approximation. Nevertheless, the detected gradients can be explained in the framework of the cold gas accretion scenario \citep{Keres05} recently proposed to explain the properties of gas-rich, rotationally supported galaxies observed at high redshift \citep{Cresci09,Forster-Schreiber09}. In this scenario, the observed low-metallicity regions are created by the local accretion of metal-poor gas in clumpy streams \citep{Dekel09}, penetrating deep into the galaxy following the potential well, and sustaining the observed high star formation rate in the pre-enriched disk. Stream-driven turbulence is then responsible for the fragmentation of the disks into giant clumps, as observed at $z \geq 2$ \citep{Genzel08,Mannucci09b}, which are the sites of efficient star formation and possibly the progenitors of the central spheroid. This scenario is also in agreement with the dynamical properties of our sample, which appears to be dominated by gas rotation in a disk with no evidence of the dynamical asymmetries typically induced by mergers. 
The study of the relations between metallicity, gas fractions, effective yields, and SFR \citep{Cresci10} shows that the low-metallicity regions can be well explained by amounts of infalling gas much larger than in the remaining high-metallicity regions. Our observations of low-metallicity regions in these three galaxies at $z\sim3$ therefore provide evidence for the actual presence of accretion of metal-poor gas in massive high-z galaxies, capable of sustaining high star formation rates without frequent mergers of already evolved and enriched sub-units. This picture was already indirectly suggested by recent observational studies of gas-rich disks at $z\sim1-2$ \citep{Forster-Schreiber09,Tacconi10}, and is in agreement with the FMR described above.
\section{Introduction}\label{sec:1} In this paper, we consider the 2D incompressible Navier--Stokes equations with fractional viscosity \begin{equation} \label{1.1_NS} \left\{ \begin{aligned} & \pt v + \divg (v \otimes v ) + \nabla p + \nu (-\Delta)^{\theta} v = 0, \\ & \divg v = 0, \end{aligned} \right. \end{equation} where $\theta \in [0,1)$ is a given constant, the velocity field $v=v(t,x)$ is defined on $(t,x) \in [0,+\infty) \times \mbt^2$ with zero spatial mean \begin{equation} \int_{\mbt^2}^{} v(t,x) \rmd x = 0, \end{equation} and we denote $\mbt^2 = \mbr^2 /( 2\pi \mbz^2)$. Here, for $u \in C^{\infty}(\mathbb{T}^2)$ the fractional Laplacian is defined via the Fourier transform as \begin{align*} \mathcal{F}((- \Delta)^{\theta} u)(\xi) = |\xi|^{2\theta}\mathcal{F}(u)(\xi), \quad \xi \in \mathbb{Z}^2. \end{align*} When $\theta = 1$, System \eqref{1.1_NS} reduces to the 2D Navier--Stokes equations, for which the existence and uniqueness of weak solutions to the Cauchy problem are well-established (see, for example, \cite{Temam-NSbook}). These weak solutions also satisfy the energy equality. In contrast, Buckmaster and Vicol recently showed the nonuniqueness of weak solutions to the 3D Navier--Stokes equations in \cite{Buckmuster_Vicol}. The 3D Navier--Stokes equations with fractional viscosity were first considered by J.-L. Lions in \cite{Lions59}, and the existence and uniqueness of weak solutions to the Cauchy problem for $\theta \in [5/4,\infty)$ were shown in \cite{Lions69}. Moreover, an analogue of the Caffarelli--Kohn--Nirenberg \cite{CKN} result was established in \cite{KatzPavlovic}, showing that the Hausdorff dimension of the singular set, in space and time, is bounded by $5 - 4\theta$ for $\theta \in (1,5/4)$. The existence, uniqueness, regularity and stability of solutions to the 3D Navier--Stokes equations with fractional viscosity have been studied in \cite{OlsonTiti05,JiuWang14,Wu03,Tao09,Colombo_DeLellis_Massaccesi,Tang_Yu} and references therein. 
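The spectral definition above translates directly into a discrete computation (a sketch using NumPy's FFT on a uniform grid of the $2\pi$-periodic torus; the resolution is illustrative):

```python
import numpy as np

def fractional_laplacian(u, theta):
    """Apply (-Delta)^theta on the 2*pi-periodic torus via the Fourier
    multiplier |xi|^(2 theta), as in the definition above."""
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)             # integer frequencies
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mult = (kx**2 + ky**2) ** theta              # |xi|^(2 theta)
    return np.real(np.fft.ifft2(mult * np.fft.fft2(u)))

# sanity check: u = cos(x1) is an eigenfunction with |xi| = 1, so
# (-Delta)^theta u = u for every theta
n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.cos(x)[:, None] * np.ones(n)[None, :]
for theta in (0.25, 0.5, 0.75):
    assert np.allclose(fractional_laplacian(u, theta), u, atol=1e-10)
print("eigenfunction check passed")
```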
On the other hand, for $\theta \in [1,5/4)$, the non-uniqueness of weak solutions to the 3D Navier--Stokes equations with fractional viscosity was shown in \cite{Luo_Titi}, extending the results in \cite{Buckmuster_Vicol}; while for $\theta \in (0,1/5)$, the non-uniqueness of Leray weak solutions was shown in \cite{CdLdR18}. The framework of convex integration, applicable to fluid dynamics, was introduced by De Lellis and Sz{\'e}kelyhidi in \cite{dLSz1,DeLellis_Szekelyhidi_InvMath} for the Euler equations. Since then, it was developed in a series of works \cite{Isett12,Buckmaster2013transporting,Buckmaster2014,Isett16,BDSV17}, culminating in the resolution of the flexible part of Onsager's conjecture for the 3D Euler equations; see also \cite{CET94} for the rigidity part. Recently, the method was extended to the Navier--Stokes equations in \cite{Buckmuster_Vicol}, by developing a framework of convex integration with intermittency. The ideas in \cite{Buckmuster_Vicol} were further developed to treat transport equations, Boussinesq, and stationary Navier--Stokes equations in \cite{Modena_Szekelyhidi,Buckmaster_Colombo_Vicol,LuoX,Cheskidov_Luo,LTZ}. The purpose of this note is to show that, for the 2D hypoviscous Navier--Stokes equations with $\theta \in [0,1)$, the $C^0_t L^2_x$ weak solutions are not unique. As in \cite{Luo_Titi}, we would like to show a result of $h$-principle type for this system. \begin{thm} \label{thm:1} For any given $\theta \in [0,1)$ and $T \in \mbr_+$, if one has a smooth divergence-free vector field $u = u(t,x)$ with zero spatial mean on $[0,T]\times\mbt^2$, then for any given $\varepsilon_*>0$, there exists a weak solution $v=v(t,x) \in C^0_t L_x^2$ to equations \eqref{1.1_NS}, with zero spatial mean, satisfying \begin{gather} \linfone{v-u} \leq \varepsilon_*, \label{1.3} \\ \supp_t v \subseteq N_{\varepsilon_*} (\supp_t u). 
\label{1.4} \end{gather} \end{thm} \noindent{}Here by weak solutions, we mean solutions in the sense of distributions; see \eqref{2.9} for $N_\varepsilon(\cdot)$. Moreover, by choosing $u$ with a compact temporal support, and $\varepsilon_* > 0$ small enough, we have \begin{cor}\label{cor:1} System \eqref{1.1_NS} admits nontrivial $C^0_t L_x^2$ weak solutions with compact temporal supports. Thus, in general, $C^0_t L_x^2$ weak solutions to the Cauchy problem of \eqref{1.1_NS} are not unique. \end{cor} We now make some comments on the analysis in this paper. We shall adapt the 2D stationary flow introduced in \cite{Choffrut_DeLellis_Szekelyhidi} to an intermittent form, inspired by the intermittent Beltrami flow introduced in \cite{Buckmuster_Vicol} as the basic building block in the intermittent convex integration scheme for the 3D Navier--Stokes equations. Meanwhile, in the two-dimensional case, it seems that the method of intermittent jets introduced in \cite{Buckmaster_Colombo_Vicol} or viscous eddies introduced in \cite{Cheskidov_Luo} cannot be applied, due to the 3D nature of their Mikado flow structure. Furthermore, we shall use a different scaling for the parameters due to the $L^p$ estimates for the 2D Dirichlet kernels. Finally, we compare the result of this note with that of \cite{LTZ}. In \cite{LTZ}, the authors present a 2D intermittent convex integration scheme to construct finite-energy weak solutions for the 2D Boussinesq equations with diffusive temperature. By taking constant temperature in the solution, \cite{LTZ} can also provide the non-uniqueness result for \eqref{1.1_NS}. The new points of this note are as follows. First, Theorem \ref{thm:1} provides a result of $h$-principle type. Secondly, with Theorem \ref{thm:1}, one can construct solutions with compact temporal supports. 
\section{Iteration Lemma} \label{sec:2} In order to prove the above result in the framework of convex integration, one needs an iteration process on the corresponding Navier--Stokes--Reynolds system \begin{equation} \label{2.1_NSR} \left\{\begin{aligned} & \pt v + \divg (v \otimes v) + \nabla p + \nu (-\Delta)^\theta v = \divg \oR, \\ & \divg v = 0, \end{aligned}\right. \end{equation} where the Reynolds tensor $\oR$ is a symmetric trace-free $2 \times 2$ matrix. Also, we apply the scheme of intermittent convex integration to add waves with high frequency and strong concentration to cancel the Reynolds tensor $\oR$ gradually. In order to illustrate our analysis in a clearer manner, we use several parameters to denote the different scales in the convex integration process. First, for $\theta \in [0,1)$ given in the system \eqref{1.1_NS}, we denote \begin{equation} \label{3.58+3} \theta_* = \left\{\begin{alignedat}{2} & 2\theta-1, \quad & & \frac{1}{2} < \theta < 1, \\ & 0, \quad & & 0 \leq \theta \leq \frac{1}{2}, \end{alignedat}\right. \end{equation} for which we can easily check that $\theta_* \in [0,1)$. Then we choose the index parameter $\alpha \in \mbq_+$ accordingly, satisfying \begin{equation} \label{7.16+_alpha} \alpha \leq \frac{1-\theta_*}{8} \in \big( 0, \min\{ \frac{1-\theta}{4}, \frac{1}{8} \} \big]. \end{equation} Now for each $q \in \mbn$, we set \begin{equation} \label{2.2_lambda_q} \lambda_q = A^{(B^q)} \end{equation} to denote the principal frequency of the perturbation waves in the convex integration scheme, and set \begin{equation}\label{2.4_varepsilon} \varepsilon_q = \lambda_q^{-2\beta} \end{equation} to denote the amplitude. Here $B \in \mbn$ is chosen large enough, based on $\alpha$, to satisfy \begin{equation}\label{++.1} B > \frac{320}{\alpha}, \end{equation} and $\beta \in \mbr_+$ is chosen small enough accordingly to satisfy \begin{equation}\label{++.2} 0 < \beta < \frac{1}{100 B^2}. 
\end{equation} The parameter $A \in 5 \mbn$ is chosen last, large enough to absorb the absolute constants in the inequalities and to satisfy \begin{equation}\label{++.3} A^\alpha \in 5 \mbn. \end{equation} We note that under these choices, we have \begin{equation}\label{++.4} \lambda_q \in 5 \mbn, \quad \lambda^\alpha_q \in 5 \mbn, \quad \forall\, q \in \mbn, \end{equation} and \begin{equation}\label{++.5} \varepsilon_{q+1}^{-1} \ll \varepsilon_{q+2}^{-1} = \lambda_q^{2 \beta B^2} \leq \lambda_q^{\frac{1}{50}}. \end{equation} The main part of this note is devoted to proving the following iteration lemma. \begin{lem} \label{Lem:2.1} For any given $\theta \in [0,1)$ and $T \in \mbr_+$, if $(v_q, p_q, \oR_q)$ is a smooth solution to \eqref{2.1_NSR} on $[0,T] \times \mbt^2$ with \begin{gather} \Cone{v_q} \leq \lamq^4, \label{2.2_a_vqC1_asmp} \\ \Linfone{\oR_q} \leq A \varepsilon_{q+1}, \label{2.2_b_Rq_l1_asmp}\\ \Cone{\oR_q} \leq \lamq^{10} \label{2.2_c_RC1_asmp} \end{gather} and $\aint_{\mbt^2} v_q \rmd x = 0$, then there exists a smooth solution $(v_{q+1}, p_{q+1}, \oR_{q+1})$ to \eqref{2.1_NSR} with \begin{gather} \Cone{v_{q+1}} \leq \lamqp^4, \label{2.3_a_vqC1_est} \\ \Linfone{\oR_{q+1}} \leq A \varepsilon_{q+2}, \label{2.3_b_Rq_l1_est}\\ \Cone{\oR_{q+1}} \leq \lamqp^{10} \label{2.3_c_RC1_est} \end{gather} and \begin{gather} \supp_t v_{q+1} \cup \supp_t \oR_{q+1} \subset N_{\varepsilon_{q+1}} (\supp_t v_q \cup \supp_t \oR_q), \label{2.4_suppv} \\ \linftwo{v_{q+1} - v_q} \leq A \varepsilon_{q+1}^{\frac{1}{2}}, \label{2.5_L2Increase} \\ \linfone{v_{q+1} - v_q} \leq \varepsilon_{q+1}^{\frac{1}{2}}, \label{2.6_WIncrease} \\ \aint_{\mbt^2} v_{q+1} \rmd x =0, \label{2.6+} \end{gather} where for $S \subseteq [0,T]$ we denote \begin{equation}\label{2.9} N_\varepsilon (S) := \big\{ t \in [0,T] \mid \exists s \in S, \ \text{s.t.} \, |s-t| \leq \varepsilon \big\}. 
\end{equation} \end{lem} With this iteration lemma we can prove Theorem \ref{thm:1} as follows. \begin{proof}[Proof of the main theorem.] Take $v_0 = u$ and define $p_0, \oR_0$ for the Navier--Stokes--Reynolds system \eqref{2.1_NSR} as \[ \oR_0 = \opR \big( \pt v_0 + \nu (-\Delta)^\theta v_0 \big) + v_0 \ootimes v_0 \] and \[ p_0 = - \frac{1}{2} |v_0|^2, \] where $\opR$ will be defined in detail in \eqref{3.47_opR} below, and $\ootimes$ denotes the trace-free part of the tensor product as \[ f \ootimes g = \begin{pmatrix} \frac{1}{2} f_1 g_1 - \frac{1}{2} f_2 g_2 & f_1 g_2 \\ f_2 g_1 & \frac{1}{2} f_2 g_2 - \frac{1}{2} f_1 g_1 \end{pmatrix}, \quad \forall\, f,g \in \mbr^2. \] Then for $A$ large enough one can use Lemma \ref{Lem:2.1} to get the sequence $\{v_q\}$ with estimates \eqref{2.3_a_vqC1_est}--\eqref{2.6+}. Therefore, by \eqref{2.5_L2Increase}, one has \[ \sum_{q=0}^{\infty} \linftwo{v_{q+1}- v_q} < +\infty, \] which shows the strong convergence of $\{v_q\}$ in $L_t^\infty L_x^2$ to some $v(t,x)$. By \eqref{2.3_b_Rq_l1_est}, this $v(t,x)$ is a weak solution to \eqref{1.1_NS}. Meanwhile, by \eqref{2.6_WIncrease} and \eqref{2.4_suppv}, one can get \eqref{1.3}--\eqref{1.4}. Moreover, using \eqref{2.2_a_vqC1_asmp} and \eqref{2.5_L2Increase}, we have that for each $q_* \in \mbn$, \begin{align*} \sum_{q=q_*}^{\infty} \| v_{q+1} - v_q \|_{C^0_t H^{\beta'}_x} \ls & \sum_{q=q_*}^{\infty} \linftwo{v_{q+1} - v_q}^{1-\beta'} \big(\cNN{v_{q+1}}{1}^{\beta'} + \cNN{v_{q}}{1}^{\beta'} \big) \\ \ls & \sum_{q=q_*}^\infty A^{1-\beta'} \varepsilon_{q+1}^{\frac{1-\beta'}{2}} \lamqp^{4\beta'} \\ \ls & \sum_{q=q_*}^\infty A^{1-\beta'} \lamqp^{4 \beta' - \beta(1-\beta')}. \end{align*} For $\beta' < \beta / (4+\beta)$, this shows that $\{ v_q \}$ is a Cauchy sequence in $C_t^0 H_x^{\beta'}$ and thus converges strongly; in particular, $v(t,x)$ is a $C_t^0 L_x^2$ function. 
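The exponent conditions used in this proof and in \eqref{++.1}--\eqref{++.5} reduce to elementary comparisons; they can be sanity-checked numerically for one admissible choice of the parameters (the concrete values of $B$, $\beta$, $\beta'$ below are illustrative only):

```python
def check_parameters(B, beta, beta_prime, q_max=6):
    """Check (++.1), (++.2), (++.5) and the convergence exponent for
    lambda_q = A**(B**q), eps_q = lambda_q**(-2*beta); every inequality
    reduces to a comparison of exponents of lambda_q."""
    assert B > 320 / 0.125                    # (++.1) with alpha = 1/8
    assert 0 < beta < 1.0 / (100 * B**2)      # (++.2)
    assert beta_prime < beta / (4 + beta)     # choice of beta' in the proof
    for q in range(q_max):
        e = B**q                              # log_A lambda_q
        # (++.5): eps_{q+2}^{-1} = lambda_q^{2 beta B^2} <= lambda_q^{1/50}
        assert 2 * beta * B**2 * e <= e / 50
        # convergence of the telescoping sum: 4 beta' - beta (1 - beta') < 0
        assert 4 * beta_prime - beta * (1 - beta_prime) < 0
    return True

B = 2600
beta = 1.0 / (101 * B**2)
print(check_parameters(B, beta, beta_prime=beta / 5))
```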
Here we use $a \ls b$ to denote $a \leq Cb$ for some absolute constant $C$ independent of the choice of our parameters $B$, $\beta$ and $A$; such constants can be absorbed by $A$ if needed. \end{proof} The rest of the paper is devoted to the proof of Lemma \ref{Lem:2.1}. \section{Mollification} In order to deal with the possible loss of derivatives in the analysis, we first mollify the approximate solutions. Denote \[ \varphi_\ell(x) = \frac{1}{\ell^2} \varphi_1(\frac{x}{\ell}), \quad \tilde \varphi_\ell(t) = \frac{1}{\ell} \tilde\varphi_1(\frac{t}{\ell}) \] as the standard 2D and 1D Friedrichs mollifier sequences respectively, with \[ \supp \varphi_1 \subseteq B_1(0), \quad \supp \tilde\varphi_1 \subseteq (-1,1). \] Then for \begin{equation} \label{+.2_ell} \ell = \lambda_q^{-20}, \end{equation} we can mollify $v_q$ and $\oR_q$ given in Lemma \ref{Lem:2.1} as \begin{align} v_\ell = & (v_q *_x \varphi_\ell) *_t \tilde\varphi_\ell, \label{2.3_vl}\\ \oR_\ell = & (\oR_q *_x \varphi_\ell) *_t \tilde\varphi_\ell. \label{2.4_Rl} \end{align} Since $(v_q,p_q,\oR_q)$ solves \eqref{2.1_NSR}, we know that $(v_\ell,p_\ell,\oR_\ell)$ solves \begin{equation} \label{+.5_NSR_vl} \left\{\begin{aligned} & \pt v_\ell + \divg (v_\ell \otimes v_\ell) + \nabla p_\ell + \nu (-\Delta)^\theta v_\ell = \divg (\oR_\ell+\Rm), \\ & \divg v_\ell = 0, \end{aligned}\right. \end{equation} where we can choose \begin{align} p_\ell = & (p_q *_x \varphi_\ell) *_t \tilde\varphi_\ell +|v_\ell|^2 - \big(|v_q|^2*_x\varphi_\ell \big) *_t \tilde\varphi_\ell, \\ \Rm = & (v_\ell \ootimes v_\ell) - \big( (v_q \ootimes v_q) *_x \varphi_\ell \big) *_t \tilde\varphi_\ell. 
\end{align} Using the inductive assumptions \eqref{2.2_a_vqC1_asmp}--\eqref{2.2_c_RC1_asmp}, we have \begin{gather} \CN{v_\ell} \ls \lamq^4 \ell^{-N+1} \ls \ell^{-N}, \quad \forall\, \Nrang, \label{+.8_vl_CN} \\ \CN{\oR_\ell} \ls \lamq^{10} \ell^{-N+1} \ls \ell^{-N}, \quad \forall\, \Nrang, \label{+.9_Rl_CN} \\ \Linfone{\oR_\ell} \leq \Linfone{\oR_q} \leq A \varepsilon_{q+1}, \label{+.12_Rl_l1} \\ \linftwo{v_\ell - v_q} + \linfone{v_\ell - v_q} \ls \| v_\ell - v_q \|_{L^\infty_t L^\infty_x } \ls \ell \Cone{v_q} \ls \lamq^{-16}. \label{+.11} \end{gather} Moreover, \begin{align*} \Linfinf{\Rm} \ls & \ell \Cone{v_\ell\ootimes v_\ell} \ls \ell \lamq^8,\\ \CN{\Rm} \ls & \ell^{-N+1} \Cone{v_\ell\ootimes v_\ell} \ls \ell^{-N+1} \lamq^8. \end{align*} Thus, for \[ \Rls \overset{\mathrm{def.}}{=} \oR_\ell + \Rm, \] we have \begin{gather} \Linfone{\Rls} \leq A \varepsilon_{q+1} + \ell \lamq^8 \leq 2 A \varepsilon_{q+1}, \label{+.16_Rl_l1} \\ \CN{\Rls} \ls \ell^{-N} + \ell^{-N+1} \lamq^8 \ls \ell^{-N}, \quad \forall\, \Nrang. \label{+.17_Rl_CN} \end{gather} Here we use the fact that by our choice of the parameters \eqref{++.5} and \eqref{+.2_ell}, it holds \[ \ell \lambda_q^8 \leq \varepsilon_{q+1}. \] \section{2D Intermittent Stationary Flow} In this section, we shall choose the sequence of waves with high frequency and strong concentration to perturb the system and construct $v_{q+1}$. As presented in \cite{Buckmuster_Vicol}, the intermittent Beltrami flow is the basic building block in the intermittent convex integration scheme to prove the nonuniqueness of weak solutions to the 3D Navier--Stokes equations. Meanwhile, in the two-dimensional case, it seems that the method of intermittent jets introduced in \cite{Buckmaster_Colombo_Vicol} or viscous eddies introduced in \cite{Cheskidov_Luo} cannot be applied, due to the 3D nature of their Mikado flow structure. 
Now we shall adapt the 2D stationary flow introduced in \cite{Choffrut_DeLellis_Szekelyhidi} to an intermittent form. First, we specifically choose \begin{align*} \Lambda^+ & = \{ \frac{1}{5} (3e_1 \pm 4 e_2), \frac15 (4e_1\pm 3e_2) \}, \\ \Lambda^- & = \{ \frac{1}{5} (-3e_1 \mp 4 e_2), \frac15 (-4e_1\mp 3e_2) \}, \end{align*} and denote \begin{equation} \Lambda = \Lambda^+ \cup \Lambda^-. \label{3.10_Lam} \end{equation} Then \[ \Lambda \subset \mbs^1 \cap \mbq^2, \quad 5 \Lambda \subset \mbz^2 \] and \begin{equation*} \min_{\substack{\dir,\dir'\in \Lambda \\ \dir \neq -\dir'}} |\dir+\dir'| \geq \frac{\sqrt{2}}{5}. \end{equation*} Now for each $\dir \in \Lambda$ and any frequency parameter $\lambda \in \mbz^+ \cap 5\mbz$, we may denote the 2D stationary flow $b_\dir$ and its potential $\psi_\dir$ as \begin{equation} \label{3.1_def_bpsi} b_\dir (x) = b_{\dir,\lambda}(x) := i \dir^\perp \rme^{i\lambda \dir \cdot x} \quad \text{and} \quad \psi_\dir (x) = \psi_{\dir,\lambda}(x) := \frac{1}{\lambda} \rme^{i\lambda \dir \cdot x}. \end{equation} It is easy to check that \begin{equation} \label{3.2_prop_bpsi} b_{\dir,\lambda}(x) = \nabla^\perp \psi_{\dir,\lambda}(x), \quad \divg b_\dir(x) = 0, \quad \perpdot b_{\dir,\lambda}(x) = \Delta \psi_{\dir,\lambda}(x) = - \lambda^2 \psi_{\dir,\lambda}(x), \end{equation} \begin{equation} \label{3.2+_conj_bpsi} \overline{b_{\dir,\lambda} (x)} = b_{-\dir,\lambda}(x), \quad \overline{\psi_{\dir,\lambda} (x)} = \psi_{-\dir,\lambda}(x), \end{equation} and \begin{equation} \label{3.3+_est_bpsi} \|{b_{\dir,\lambda}}\|_{C^N} \leq \lambda^N, \quad \|{\psi_{\dir,\lambda}}\|_{C^N} \leq \lambda^{N-1}, \quad \forall\, N \in \mbn, \end{equation} where \begin{equation*} \dir^\perp = \begin{pmatrix} -k_2 \\ k_1 \end{pmatrix}, \quad \nabla^\perp = \begin{pmatrix} -\partial_{x_2} \\ \partial_{x_1} \end{pmatrix}. 
\end{equation*} Moreover, we have \begin{lem}[Geometric lemma] \label{Lem:3.2} Denote $\dM$ as the linear space of $2 \times 2$ symmetric trace-free matrices. There exists a set of positive smooth functions $\{ \gamma_\dir \in C^\infty( \dM) \mid \dir \in \Lambda\} $, such that for each $\oR \in \dM$, \begin{gather} \gamma_{-\dir}(\oR) = \gamma_\dir(\oR), \label{3.13} \\ \oR = \sum_{\dir\in \Lambda} (\gamma_\dir(\oR))^2 (\dir \ootimes \dir), \label{3.14} \\ \intertext{and} \gamma_\dir (\oR) \ls (1 + |\oR|)^{\frac{1}{2}}. \label{3.14+} \end{gather} \end{lem} The proof of this lemma is direct; one may check Appendix A for the details. Now as in \cite{Buckmuster_Vicol}, in order to define the intermittent flow we first present the 2D Dirichlet kernel \begin{equation} \label{3.15_Dr} D_r(x) = \frac{1}{2r+1} \sum_{k \in \Omega_r} \rme^{i k \cdot x} \ \in C^\infty(\mbt^2) \end{equation} with $r \in \mbz^+$ and \[ \Omega_r = \{ k=(k_1, k_2)^T \mid k_i \in \mbz, -r \leq k_i \leq r \}. \] By a direct calculation, it holds that for $1 < p \leq \infty$, \begin{equation} \label{3.16_DrEst} \| D_r \|_{L^p} \lesssim r^{1-\frac{2}{p}}, \quad \| D_r \|_{L^2} = 2\pi. \end{equation} We shall note that these $L^p$ estimates are different from the ones in the 3D case in \cite{Buckmuster_Vicol}, and this dimensional dependence is partly the reason why we use a different scaling for the parameters chosen later. Now we can define the directed-rescaled Dirichlet kernel with a temporal shift as \begin{equation} \label{3.17_eata_def} \eta_\dir(t,x) = \eta_{\dir,\lambda,\sigma,r,\mu}(t,x) := \left\{ \begin{alignedat}{2} & D_r (\lambda \sigma (\dir\cdot x + \mu t), \lambda \sigma \dir^\perp \cdot x), & \quad \dir & \in \Lambda^+, \\ & \eta_{-\dir,\lambda,\sigma,r,\mu} (t,x), & \quad \dir & \in \Lambda^- \end{alignedat} \right. 
\end{equation} with \begin{equation} \label{3.18_eta_est} \frac{1}{\mu} \pt \eta_\dir(t,x) = \pm (\dir\cdot \nabla) \eta_\dir(t,x), \quad \forall\, \dir \in \Lambda^\pm \end{equation} and \begin{equation} \label{3.19_eta_norm} \aint_{\mbt^2} \eta_\dir^2 (t,x) \rmd x = 1, \quad \linfp{\eta_\dir} \lesssim r^{1-\frac2p}, \quad \text{for } 1 < p \leq \infty. \end{equation} Here we use parameters $r, \mu, \sigma^{-1}, \lambda \in \mbn$ with \begin{equation} \label{3.19+_parameters} 1 \ll r \ll \mu \ll \sigma^{-1} \ll \lambda \end{equation} and \[ \lambda \sigma \in 5 \mbn; \] one may check \eqref{3.64_parameter_choice} for the specific choice of these parameters. We shall note that the choice of these parameters, especially that of $\mu$, is dimensionally dependent and thus differs from that of \cite{Buckmuster_Vicol}. Finally, we define the intermittent 2D stationary flow as \begin{equation} \label{3.20_W_def} \mbw_\dir(t,x) = \mbw_{\dir,\lambda,\sigma,r,\mu} (t,x) := \eta_{\dir,\lambda,\sigma,r,\mu} (t,x) b_{\dir,\lambda}(x). \end{equation} Similarly to the 3D intermittent Beltrami flow presented in \cite{Buckmuster_Vicol}, this intermittent flow possesses several important properties. 
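Before listing these properties, the $L^p$ estimates \eqref{3.16_DrEst} for the 2D Dirichlet kernel can be checked numerically (grid size and the values of $r$, $p$ are illustrative; the grid is fine enough that the Riemann sums below are exact for these trigonometric polynomials):

```python
import numpy as np

def dirichlet_2d(r, n=512):
    """Sample D_r on a uniform n x n grid of the torus [0, 2*pi)^2."""
    x = 2 * np.pi * np.arange(n) / n
    # 1D Dirichlet sum: sum_{k=-r..r} e^{ikx}, which is real
    d1 = np.sum(np.cos(np.outer(np.arange(-r, r + 1), x)), axis=0)
    return np.outer(d1, d1) / (2 * r + 1)    # tensor product, normalized

def lp_norm(f, p):
    # L^p norm on T^2 with the (non-normalized) measure dx
    return (np.mean(np.abs(f) ** p) * (2 * np.pi) ** 2) ** (1.0 / p)

for r in (5, 10, 20, 40):
    D = dirichlet_2d(r)
    # ||D_r||_{L^2} = 2*pi exactly; ||D_r||_{L^4} / r^(1/2) roughly constant
    print(r, lp_norm(D, 2), lp_norm(D, 4) / r**0.5)
```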
First, for the frequency projector $\mbp_{[\lambda_1,\lambda_2]}$: \[ \mbp_{[\lambda_1,\lambda_2]} f(x) = \mathcal{F}^{-1} (1_{ \{ \lambda_1 \leq |\xi| \leq \lambda_2\} } \mathcal{F}(f) )(x), \] where $\mathcal{F}$ is the Fourier transform on $\mbt^2$, and for \begin{align*} \mbp_{\geq \lambda} f & = \mbp_{[\lambda,\infty)} f, \\ \mbp_{\neq 0} f & = f - \aint f \rmd x, \end{align*} one has \begin{align} \mbp_{[{\lambda}/{2}, 2 \lambda]} \mbw_{\dir,\lambda}(t,x) & = \mbw_{\dir,\lambda}, \label{3.21}\\ \mbp_{[ \lambda/5, 4 \lambda ]} \big( \mbw_{\dir,\lambda} \ootimes \mbw_{\dir',\lambda} \big) & = \mbw_{\dir,\lambda} \ootimes \mbw_{\dir',\lambda}, \quad \forall\, \dir + \dir' \neq 0, \label{3.21+} \\ \mbp_{\geq (\lambda\sigma)/2} \big(\mbw_{\dir,\lambda} \ootimes \mbw_{\dir',\lambda} \big) & = \mbp_{\neq 0} \big(\mbw_{\dir,\lambda} \ootimes \mbw_{\dir',\lambda} \big), \quad \forall\, \dir,\dir' \in \Lambda. \label{3.26+} \end{align} Similarly, \begin{equation} \label{3.20+_eta_FrePro} \mbp_{\neq 0} \eta_\dir = \mbp_{\geq {(\lambda\sigma)}/{2}} \eta_\dir. \end{equation} Next, one can get \begin{lem} \label{Lem:3.3} For any $\{ a_\dir \mid \dir \in \Lambda \} \subset \mbc$ with $a_{-\dir} = \overline{a_\dir}$, the function \begin{equation} \label{3.22_Wdef} W(t,x) = \sum_{\dir \in \Lambda} a_\dir \mbw_\dir(t,x) \end{equation} is real valued, and for each $\oR \in \dM$, one has \begin{equation} \label{3.23} \sum_{\dir \in \Lambda} (\gamma_\dir(\oR))^2 \aint_{\mbt^2} \mbw_\dir \ootimes \mbw_{-\dir} \rmd x= - \oR. \end{equation} \end{lem} \begin{proof} This result can be checked directly as follows. 
By \eqref{3.2+_conj_bpsi} and \eqref{3.17_eata_def}, \[ \overline{W(t,x)} = \sum_{\dir \in \Lambda} \overline{a_\dir} \overline{\mbw_\dir(t,x)} = \sum_{\dir \in \Lambda} a_{-\dir} \overline{\eta_{\dir}(t,x)} \overline{b_\dir(x)} = \sum_{\dir \in \Lambda} a_{-\dir} {\eta_{-\dir}(t,x)} {b_{-\dir}(x)} = W(t,x), \] and \[ \mbw_\dir \ootimes \mbw_{-\dir} = \eta_\dir^2(t,x) \big( b_\dir(x) \ootimes b_{-\dir}(x) \big) = \eta_\dir^2 (t,x) \big( \dir^\perp \ootimes \dir^\perp \big) = \eta_\dir^2 (t,x) ( - \dir \ootimes \dir). \] Then by \eqref{3.14} and \eqref{3.19_eta_norm}, one can get \eqref{3.23}. \end{proof} Moreover, after a direct calculation, one can get \begin{lem} \label{Lem:3.4} If one chooses the parameters as in \eqref{3.19+_parameters}, then for any $1 < p \leq \infty$ and $K, \Nrang$, one has \begin{align} \linfp{ \mbw_\dir} + \linfp{ \nabla^N \pt^K \mbw_\dir} \lesssim &\lambda^N \big( \lambda \sigma r \mu \big)^K \ r^{1 - \frac2p}, \label{3.24_mbw_est} \\ \linfp{ \eta_\dir} + \linfp{ \nabla^N \pt^K \eta_\dir} \lesssim &\big(\lambda \sigma r \big)^N \big( \lambda \sigma r \mu \big)^K \ r^{1 - \frac2p}. \label{3.25_eta_est} \end{align} \end{lem} \section{Perturbation} To present our perturbation terms, we first define the temporal cutoff as in \cite{Luo_Titi}. Let $\Phi_q(t)$ be a smooth cut-off function with \begin{gather*} 0 \leq \Phi_q \leq 1, \\ \Phi_q(t) = 1 \quad \text{on} \ \supp_t \Rls, \\ \supp \Phi_q(t) \subseteq N_{\ell} (\supp_t \Rls), \\ \| \Phi_q \|_{C^N_t} \ls \ell^{-N}, \quad \forall\, \Nrang. \end{gather*} Then we can set the smooth coefficients \begin{equation} \label{3.28+_ak_def} a_\dir(t,x) = A^{\frac{1}{2}} \varepsilon_{q+1}^{\frac{1}{2}} \gamma_\dir (A^{-1} \varepsilon_{q+1}^{-1} \Rls(t,x)) \Phi_q(t), \end{equation} for $\dir \in \Lambda$. 
Obviously, \begin{equation} \label{3.27} \supp_t a_\dir \subseteq N_{\ell} (\supp_t \Rls), \end{equation} and by \eqref{3.23}, it is easy to see that \begin{equation} \label{3.34} \sumk a_\dir^2 \aint \mbw_\dir \ootimes \mbw_{-\dir} \rmd x = - \Rls, \end{equation} namely, noting \eqref{3.21+}, \begin{equation} \label{3.42+} - \Rls = \sumkk a_\dir a_{\dir'} \mbp_{=0} \big( \mbw_\dir \ootimes \mbw_{\dir'} \big). \end{equation} Now we can define the perturbation \begin{equation} \label{3.29_wq_def} w_{q+1} = v_{q+1} - v_\ell := \wqp + \wqc + \wqt, \end{equation} where \begin{align} \wqp (t,x) = & \sumk a_\dir(t,x) \mbw_{\dir,\lamqp}(t,x) = \sumk a_\dir(t,x) \eta_{\dir,\lamqp,\sigma,r,\mu}(t,x) b_{\dir,\lamqp}(x), \label{3.30_wqp_def}\\ \wqc (t,x) = & \sumk \nabla^\perp \big( a_\dir(t,x) \eta_{\dir,\lamqp,\sigma,r,\mu}(t,x) \big) \psi_{\dir,\lamqp}(x), \label{3.31_wqc_def} \\ \wqt (t,x) = & \frac{1}{\mu} \Big( \sum_{\dir \in \Lambda^+} - \sum_{\dir \in \Lambda^-} \Big) \mbp_H \mbp_{\neq 0} \big( a_\dir^2(t,x) \mbp_{\neq 0}\eta_{\dir,\lamqp,\sigma,r,\mu}^2(t,x) \dir \big) . \label{3.32_wqt_def} \end{align} {Here} $\mbp_H$ is the Helmholtz--Leray projector \[ \mbp_H f = f - \nabla \big( \Delta^{-1} \divg f \big). \] Moreover, it is direct to check that \begin{gather} \wqp + \wqc = \nabla^\perp \Big( \sumk a_\dir \eta_\dir \psi_\dir \Big), \label{3.33}\\ \divg (\wqp + \wqc) = 0, \quad \divg \wqt = 0, \label{3.33+} \\ \supp_t w_{q+1} \subseteq \bigcup_{\dir \in \Lambda} \supp_t a_\dir \subseteq N_{\ell} (\supp_t \Rls). \label{3.39+} \end{gather} \section{A Priori Estimates for the Perturbations} In this section, we derive a priori estimates for the perturbations given above. \begin{lem}[Estimates for the coefficients] \label{Lem:3.5} For $a_\dir$ defined in \eqref{3.28+_ak_def}, one has \begin{align} \linftwo{a_\dir} \ls & A^{\frac12} \varepsilon_{q+1}^{\frac12}, \label{3.38_ak_est} \\ \cN{a_\dir} \ls & \ell^{-2N}, \quad \forall\, \Nrang. 
\label{3.39} \end{align} \end{lem} \begin{proof} By Lemma \ref{Lem:3.2} and \eqref{+.16_Rl_l1}--\eqref{+.17_Rl_CN}, we have \[ \linftwo{a_\dir}^2 \ls \int_{\mbt^2} A \varepsilon_{q+1} \cdot \Big( 1 + \frac{|\Rls(t,x)|}{A \varepsilon_{q+1}} \Big) \rmd x \ls A \varepsilon_{q+1} + \Linfone{\Rls} \ls A \varepsilon_{q+1} \] and \begin{align*} \cN{a_\dir} \ls & A^{\frac{1}{2}} \varepsilon_{q+1}^{\frac{1}{2}} \| \Phi_q \|_{C^N_t} + A^{\frac{1}{2}} \varepsilon_{q+1}^{\frac{1}{2}} \cN{\gamma_\dir (A^{-1} \varepsilon_{q+1}^{-1} \Rls)} \\ \ls & A^{\frac{1}{2}} \varepsilon_{q+1}^{\frac{1}{2}} \ell^{-N} + A^{\frac{1}{2}} \varepsilon_{q+1}^{\frac{1}{2}} \cdot (A^{-1} \varepsilon_{q+1}^{-1} )^{N} \ell^{-N}\\ \ls & \ell^{-2N}, \end{align*} which leads to \eqref{3.38_ak_est}--\eqref{3.39}. \end{proof} Now we present an important tool introduced in \cite{Buckmuster_Vicol}, see also \cite{Modena_Szekelyhidi}. \begin{lem}[$L^p$ product estimate] \label{Lem:3.6} If $f, g \in C^\infty(\mbt^2)$, and $g$ is $(\mbt / \kappa)^2$ periodic for some $\kappa \in \mbz^+$, then \begin{equation} \label{3.40} \| fg \|_{L^2(\mbt^2)} \leq \| f \|_{L^2(\mbt^2)} \| g \|_{L^2(\mbt^2)} + C \kappa^{-\frac{1}{2}} \| f \|_{C^1(\mbt^2)} \| g \|_{L^2(\mbt^2)}. \end{equation} \end{lem} \begin{proof} See Lemma 2.1 of \cite{Modena_Szekelyhidi}, and also Lemma 3.6 of \cite{Buckmuster_Vicol}. \end{proof} Then we can derive the estimates on the perturbations as follows. 
\begin{prop} If one chooses the parameters as in \eqref{3.19+_parameters}, then for $1 < p \leq \infty$ and $\Nrang$, one has \begin{gather} \linftwo{\wqp} \lesssim A^{\frac{1}{2}} \varepsilon_{q+1}^{\frac12} + \ell^{-2} (\lamqp \sigma)^{-\frac12}, \label{3.42_wqp_inf2} \\ \linfp{\wqc} + \linfp{\wqt} \lesssim \ell^{-4} \big( \sigma + \mu^{-1} \big) r^{2-\frac{2}{p}}, \label{3.44_wqcwqt_infp} \\ \linfp{\wqp} + \linfp{w_{q+1}} \lesssim \ell^{-4} r^{1-\frac{2}{p}}, \label{3.43_w_infp} \\ \linfp{\pt \wqp} + \linfp{\pt \wqc} \lesssim \ell^{-4} \lamqp \sigma \mu r^{2-\frac{2}{p}}, \label{3.45_pt_w} \\ \linfp{\nabla^N \wqp} + \linfp{\nabla^N \wqc} + \linfp{\nabla^N \wqt} \lesssim \ell^{-4N} r^{1-\frac{2}{p}} \lamqp^N. \label{3.46_w_cN} \end{gather} \end{prop} \begin{proof} Due to \eqref{3.17_eata_def} and \eqref{3.20_W_def}, $\mbw_\dir(t,\cdot)$ is $\big(\mbt / (\lambda \sigma)\big)^2$-periodic. Thus noting the definition \eqref{3.30_wqp_def} of $\wqp$, and applying Lemma \ref{Lem:3.6}, one can get \[ \linftwo{\wqp} \lesssim \linftwo{a_\dir} \linftwo{\mbw_\dir} + (\lamqp \sigma)^{-\frac12} \| a_\dir \|_{C^1_{t,x}} \linftwo{\mbw_\dir}, \] which, by \eqref{3.24_mbw_est}, \eqref{3.38_ak_est}--\eqref{3.39}, leads to \eqref{3.42_wqp_inf2}. Meanwhile, by \eqref{3.24_mbw_est}, \[ \linfp{\wqp} \lesssim \| a_\dir \|_{C^0_{t,x}} \linfp{\mbw_\dir} \lesssim \ell^{-2} r^{1-\frac{2}{p}}. 
\] Noting furthermore the definitions \eqref{3.31_wqc_def}--\eqref{3.32_wqt_def} of $\wqc$ and $\wqt$, and using \eqref{3.3+_est_bpsi}, \eqref{3.25_eta_est}, \begin{align*} \linfp{\wqc} \lesssim & \| \psi_\dir \|_{L^\infty_x} \| {a_\dir} \|_{C^1_{t,x}} \big( \linfp{\eta_\dir} + \linfp{\nabla \eta_\dir} \big) \\ \lesssim & \ell^{-2} \big( \sigma r + \lamqp^{-1} \big) r^{1-\frac{2}{p}} \lesssim \ell^{-2} \sigma r^{2-\frac{2}{p}},\\ \linfp{\wqt} \lesssim & \frac{1}{\mu} \| a_\dir \|_{C^0_{t,x}}^2 \linftwopsq{\eta_\dir} \\ \lesssim & \ell^{-4} \frac{1}{\mu} r^{2-\frac{2}{p}}, \end{align*} which yields \eqref{3.44_wqcwqt_infp}--\eqref{3.43_w_infp}. Similarly, by \eqref{3.24_mbw_est}--\eqref{3.25_eta_est} of Lemma \ref{Lem:3.4}, \begin{align*} \linfp{\pt \wqp} \lesssim & \cNN{a_\dir}{1} \big( \linfp{\mbw_\dir} + \linfp{\pt \mbw_\dir} \big) \\ \lesssim & \ell^{-2} \lamqp \sigma r \mu \, r^{1-\frac{2}{p}}, \\ \linfp{\pt \wqc} \lesssim & \| \psi_\dir \|_{L^\infty_{t,x}} \cNN{a_\dir}{2} \big( \linfp{\eta_\dir} + \linfp{\pt \nabla \eta_\dir} \big) \\ \lesssim & \frac{1}{\lambda} \ell^{-4} \lamqp^2 \sigma^2 r^2 \mu \, r^{1-\frac{2}{p}}, \end{align*} which leads to \eqref{3.45_pt_w}. Finally, \begin{align*} \linfp{\nabla^N \wqp} \lesssim & \cN{a_\dir} \big( \linfp{\mbw_\dir} + \linfp{\nabla^N \mbw_\dir} \big) \\ \lesssim & \ell^{-2N} \lamqp^N r^{1-\frac{2}{p}}, \\ \linfp{\nabla^N \wqc} \lesssim & \cNN{a_\dir}{N+1} \big( \cN{\psi_\dir} (\linfp{\eta_\dir} + \linfp{\nabla \eta_\dir}) + \cNN{\psi_\dir}{0} \linfp{\nabla^{N+1} \eta_\dir} \big) \\ \lesssim & \ell^{-2N-2} \lamqp^N r^{1-\frac{2}{p}}, \\ \linfp{\nabla^N \wqt} \lesssim & \frac{1}{\mu} \cN{a_\dir^2} \big( \linftwop{\eta_\dir} \linftwop{\nabla^N \eta_\dir} \big) \\ \lesssim & \ell^{-4N} \frac{1}{\mu} \big(\lamqp \sigma r\big)^N r^{2-\frac{2}{p}} \\ \lesssim & \ell^{-4N} \lamqp^N r^{1-\frac{2}{p}}, \end{align*} which yields \eqref{3.46_w_cN}.
\end{proof} \section{Anti-divergence Operator and Estimates on the Reynolds Stress Tensor} As in \cite{DeLellis_Szekelyhidi_InvMath} and \cite{Choffrut_DeLellis_Szekelyhidi}, we define the anti-divergence operator $\opR$ as follows. \begin{defn} For $f \in C^0(\mbt^2,\mbr^2)$, set \begin{equation} \label{3.47_opR} \opR f = \nabla g + (\nabla g)^T - (\divg g) \id, \end{equation} where $g$ satisfies \[ \Delta g = f - \aint_{\mbt^2} f \rmd x \quad \text{and} \quad \aint_{\mbt^2} g = 0. \] \end{defn} \begin{lem}[Lemma 10 of \cite{Choffrut_DeLellis_Szekelyhidi}, Properties of the anti-divergence operator] \label{Lem:3.9} For any $f \in \allowbreak C^0(\mbt^2, \allowbreak \mbr^2)$ with $\aint_{\mbt^2} f \rmd x =0$, one has \[ (\opR f(x))^T = \opR f(x), \quad \tr(\opR f(x)) = 0, \quad \forall\, x \in \mbt^2 \] and \[ \divg \opR f = f, \quad \aint_{\mbt^2} \opR f(x) \rmd x = 0. \] \end{lem} Moreover, by standard Calder\'on--Zygmund and Schauder estimates, one has \begin{lem} \label{Lem:3.10} For $1 < p < \infty$, \begin{gather} \| \opR \|_{L^p \to W^{1,p}} \lesssim 1, \quad \| \opR \|_{C^0 \to C^0} \lesssim 1, \label{3.49} \\ \| \opR \mbp_{\neq 0} v \|_{L^p} \lesssim \big\| |\nabla|^{-1} \mbp_{\neq 0} v \big\|_{L^p}. \label{3.50} \end{gather} \end{lem} We use the following lemma to gain a factor of $\lambda^{-1}$ when applying $\opR$ to certain terms. \begin{lem} \label{Lem:3.11} For any given $1 < p < \infty$, $\lambda \in \mbz^+$, $a \in C^2(\mbt^2,\mbr)$ and $f \in L^p(\mbt^2,\mbr^2)$, one has \[ \big\| |\nabla|^{-1} \mbp_{\neq 0} (a \mbp_{\geq \lambda} f ) \big\|_{L^p} \lesssim \lambda^{-1} \|a\|_{C^2} \|f\|_{L^p}. \] \end{lem} \begin{proof} See Lemma B.1 of \cite{Buckmuster_Vicol}.
In fact, \begin{align*} \big\| |\nabla|^{-1} \mbp_{\neq 0} (a \mbp_{\geq \lambda} f) \big\|_{L^p} \leq & \big\| |\nabla|^{-1} \mbp_{\geq {\lambda}/{3}} \big( (\mbp_{\leq {\lambda}/{2}} a) (\mbp_{\geq \lambda} f) \big) \big\|_{L^p} + \big\| |\nabla|^{-1} \mbp_{\neq 0} \big( (\mbp_{\geq {\lambda}/{2}} a) (\mbp_{\geq \lambda} f) \big) \big\|_{L^p} \\ \lesssim & \lambda^{-1} \| (\mbp_{\leq {\lambda}/{2}} a) (\mbp_{\geq \lambda} f)\|_{L^p} + \| (\mbp_{\geq {\lambda}/{2}} a) (\mbp_{\geq \lambda} f) \|_{L^p} \\ \lesssim & \lambda^{-1} \| a \|_{L^\infty} \| \mbp_{\geq {\lambda}} f \|_{L^p} + \| \mbp_{\geq {\lambda}/{2}} a \|_{L^\infty} \| \mbp_{\geq {\lambda}} f \|_{L^p} \\ \lesssim & \lambda^{-1} \big( \|a\|_{L^\infty} + \lambda \| \mbp_{\geq {\lambda}/{2}} a \|_{W^{1,2+}} \big) \| \mbp_{\geq \lambda} f \|_{L^p} \\ \lesssim & \lambda^{-1} \big( \|a\|_{L^\infty} + \| \nabla \mbp_{\geq {\lambda}/{2}} a \|_{W^{1,2+}} \big) \| \mbp_{\geq \lambda} f \|_{L^p} \\ \lesssim & \lambda^{-1} \big( \| a \|_{L^\infty} + \| \nabla^2 a \|_{L^\infty} \big) \| f\|_{L^p}. \qedhere \end{align*} \end{proof} Now we derive an expression for $\oR_{q+1}$. Noting that both $(v_\ell, p_\ell, \Rls)$ and $(v_{q+1}, p_{q+1}, \oR_{q+1})$ solve \eqref{2.1_NSR}, and using the definitions \eqref{3.29_wq_def}--\eqref{3.32_wqt_def}, one can get \begin{align*} \divg \oR_{q+1} = & \pt v_{q+1} + \divg (v_{q+1} \ootimes v_{q+1}) + \nabla p_{q+1} + \nu (-\Delta)^\theta v_{q+1} \\ = & \big( \pt v_\ell + \divg (v_\ell \ootimes v_\ell) + \nabla p_\ell + \nu (-\Delta)^\theta v_\ell - \divg \Rls \big) \\ & + \pt \big( \wqp + \wqc + \wqt \big) + \divg \big( v_\ell \ootimes w_{q+1} + w_{q+1} \ootimes v_\ell \big)\\ & + \divg \big( \wqp \ootimes \wqp + (\wqc + \wqt) \ootimes w_{q+1} + \wqp \ootimes (\wqc + \wqt) \big) \\ & + \nabla (p_{q+1} - p_\ell) + \nu (-\Delta)^\theta w_{q+1} + \divg \Rls.
\end{align*} Thus, as in \cite{Buckmuster_Vicol}, if we denote \begin{align} \Rl = & \opR \Big( \pt \wqp + \pt \wqc + \nu (-\Delta)^\theta w_{q+1} \Big) + v_\ell \ootimes w_{q+1} + w_{q+1} \ootimes v_\ell, \label{3.51_Rl} \\ \Rc = & \Big( (\wqc+\wqt) \ootimes w_{q+1} + \wqp \ootimes (\wqc + \wqt) \Big), \label{3.52_Rc} \\ \Ro = & \wqp \ootimes \wqp + \Rls + \pt \opR \wqt, \end{align} we can choose \begin{equation} \label{3.54_R} \oR_{q+1} = \opR \divg \big( \Rl + \Rc + (\Ro - p^* \id) + (p_{q+1} - p_\ell + p^*) \id \big) \end{equation} for $p^*$ to be chosen later. Then obviously, if we properly choose $p_{q+1}$, we have \begin{equation} \label{3.58+1} \supp_t \oR_{q+1} \subseteq \supp_t w_{q+1} \cup \supp_t \Rls \subseteq N_{2\ell} (\supp_t R_q). \end{equation} For $\Rc$, by \eqref{3.44_wqcwqt_infp}--\eqref{3.43_w_infp} and \eqref{3.46_w_cN}, we have \begin{align} \linfp{\opR \divg \Rc} \lesssim & \linfp{\Rc} \notag \\ \lesssim & \big( \linftwop{\wqc} + \linftwop{\wqt} \big) \cdot (\linftwop{w_{q+1}} + \linftwop{\wqp} )\notag \\ \lesssim & \ell^{-8} \big( \sigma r + \mu^{-1} r \big) r^{2-\frac{2}{p}} \label{3.58+2} \end{align} and \begin{align} & \CNN{\opR \divg \Rc}{1} \notag \\ \ls & (\cNN{\wqp}{2} + \cNN{\wqc}{2} + \cNN{\wqt}{2}) \cdot (\linfinf{\wqp} + \linfinf{\wqc} + \linfinf{\wqt}) \notag \\ \ls & \ell^{-8} r \lamqp^2 \cdot \ell^{-4} r \ls \ell^{-12} r^2 \lamqp^2. \label{7.9+} \end{align} Meanwhile, for $\Rl$ by \eqref{3.33} and \eqref{3.3+_est_bpsi}, \eqref{3.25_eta_est}, \eqref{3.39}, it holds that \begin{align*} \linfp{\opR (\pt \wqp + \pt \wqc) } = & \Linfp{\opR \pt \nabla^\perp \big( \sumk a_\dir \eta_\dir \psi_\dir \big)} \\ \lesssim & \Linfp{ \sumk \pt (a_\dir \eta_\dir) \psi_\dir} \\ \lesssim & \ell^{-2} \sigma \mu r^{2-\frac{2}{p}}. 
\end{align*} By \eqref{3.43_w_infp}, \eqref{3.46_w_cN} and \eqref{+.8_vl_CN}, \begin{gather*} \linfp{ \opR (-\Delta)^\theta w_{q+1} } \ls \linfp{w_{q+1}}^{1-\theta_*} \linfp{\nabla w_{q+1}}^{\theta_*} \lesssim \ell^{-4} \lamqp^{\theta_*} r^{1-\frac{2}{p}}, \\ \linfp{ v_\ell \ootimes w_{q+1} + w_{q+1} \ootimes v_\ell} \lesssim \cNN{v_\ell}{1} \linfp{w_{q+1}} \lesssim \ell^{-1} r^{1-\frac{2}{p}}, \end{gather*} where $\theta_*$ is defined by \eqref{3.58+3}. Thus, \begin{equation} \label{3.56_Rl_Est} \linfp{\Rl} \lesssim \ell^{-2} \sigma \mu r^{2-\frac{2}{p}} + \ell^{-4} \lamqp^{\theta_*} r^{1-\frac{2}{p}}. \end{equation} Also, by \eqref{3.46_w_cN} and \eqref{+.8_vl_CN}, one has \begin{align} \cNN{\Rl}{1} \ls & \cNN{\pt \wqp + \pt \wqc}{1} + \cNN{w_{q+1}}{2} + \cNN{v_\ell}{1} \cNN{w_{q+1}}{1} \notag\\ \ls & \ell^{-8} r \lamqp^2 + \ell^{-5} r \lamqp \notag \\ \ls & \ell^{-8} r \lamqp^2. \label{7.11+} \end{align} Finally, we derive the estimates for $\Ro$, which is the main part of the convex integration scheme.
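Before entering the computation, we record the mechanism schematically (this is only a summary of the estimates that follow, in the notation already introduced):

```latex
\[
  \wqp \ootimes \wqp + \Rls
  = \underbrace{\sumkk a_\dir a_{\dir'}\, \mbp_{=0} \big( \mbw_\dir \ootimes \mbw_{\dir'} \big)
    + \Rls}_{=\,0 \ \text{by } \eqref{3.42+}}
  + \sumkk a_\dir a_{\dir'}\, \mbp_{\geq (\lamqp \sigma)/2} \big( \mbw_\dir \ootimes \mbw_{\dir'} \big).
\]
```

The frequency localization of the second sum comes from \eqref{3.26+}, so applying $\opR$ to (part of) its divergence gains a factor $(\lamqp \sigma)^{-1}$ through Lemma \ref{Lem:3.11}.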
By the definition \eqref{3.30_wqp_def} of $\wqp$, and noting \eqref{3.34}, \eqref{3.21+}, one has \begin{align*} \wqp \ootimes \wqp + \Rls = & \sumkk a_\dir(t,x) a_{\dir'}(t,x) \mbw_\dir(t,x) \ootimes \mbw_{\dir'}(t,x) + \Rls \\ = & \sumkk a_\dir a_{\dir'} \mbp_{\neq 0} \big(\mbw_\dir \ootimes \mbw_{\dir'} \big) \end{align*} and \begin{align*} & \divg \big( \sumkk a_\dir a_{\dir'} \mbp_{\neq 0} (\mbw_\dir \ootimes \mbw_{\dir'}) \big) \\ = & \divg \big( \sumkk a_\dir a_{\dir'} \mbp_{\geq (\lamqp \sigma)/{2}} (\mbw_\dir \ootimes \mbw_{\dir'}) \big) \\ = & \frac{1}{2} \sumkk \mbp_{\neq 0} \Big( \nabla(a_\dir a_{\dir'}) \cdot \mbp_{\geq (\lamqp \sigma)/{2}} (\mbw_\dir \ootimes \mbw_{\dir'} + \mbw_{\dir'} \ootimes \mbw_{\dir} )\Big) \\ & + \frac{1}{2} \sumkk \mbp_{\neq 0} \Big( a_\dir a_{\dir'} \divg \mbp_{\geq (\lamqp \sigma)/{2}} (\mbw_\dir \ootimes \mbw_{\dir'} + \mbw_{\dir'} \ootimes \mbw_{\dir} )\Big) \\ := & \frac{1}{2} \sumkk \big( \mce{\dir,\dir',1} + \mce{\dir,\dir',2} \big). \end{align*} Among these terms, by Lemma \ref{Lem:3.11}, and noting \eqref{3.24_mbw_est}, \begin{align} \linfp{\opR \mce{\dir,\dir',1}} \lesssim & \Linfp{ |\nabla|^{-1} \mce{\dir,\dir',1}} \notag \\ \lesssim & (\lamqp \sigma)^{-1} \cNN{a_\dir a_{\dir'}}{3} \Linfp{ \mbw_\dir \ootimes \mbw_{\dir'}} \notag \\ \lesssim & \ell^{-8} (\lamqp \sigma)^{-1} \linftwop{\mbw_\dir} \linftwop{\mbw_{\dir'}} \lesssim \frac{\ell^{-8}}{\lamqp \sigma} r^{2-\frac{2}{p}}. \label{3.57_E1_Est} \end{align} Since we use a stationary 2D flow instead of the 3D Beltrami flow, we use a procedure slightly different from the one in \cite{Buckmuster_Vicol} and \cite{Luo_Titi} to estimate $\mce{\dir,\dir',2}$; see also Lemma 4 of \cite{Choffrut_DeLellis_Szekelyhidi}.
Noting the definition of $b_\dir$ and $\psi_\dir$, \eqref{3.1_def_bpsi}, and that $\dir,\dir' \in \Lambda \subset \mbs^1$, it is direct to check that \begin{align*} & \big( \dir^\perp \ootimes \dir'^\perp +\dir'^\perp \ootimes \dir^\perp \big) (\dir+\dir') \\ = & (\dir \cdot \dir' -1 ) (\dir+\dir') \\ = & (\dir^\perp \cdot \dir'^\perp -1) (\dir+\dir'). \end{align*} Thus, \begin{align*} & \divg \big( b_\dir \ootimes b_{\dir'} + b_{\dir'} \ootimes b_\dir \big) \\ = & \divg \big( b_\dir \otimes b_{\dir'} + b_{\dir'} \otimes b_\dir - b_\dir \cdot b_{\dir'} \id \big) \\ = & - i \lamqp \big( \dir^\perp \otimes \dir'^\perp +\dir'^\perp \otimes \dir^\perp - \dir^\perp \cdot \dir'^\perp \id \big) (\dir+\dir') \rme^{i \lamqp (\dir+\dir')\cdot x} \\ = & i \lamqp (\dir+\dir') \rme^{i \lamqp (\dir+\dir')\cdot x} \\ = & \nabla \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big), \end{align*} and \begin{align*} & \divg \big( \mbw_\dir \ootimes \mbw_{\dir'} + \mbw_{\dir'} \ootimes \mbw_{\dir} \big) \\ = & \big( b_\dir \ootimes b_{\dir'} + b_{\dir'} \ootimes b_\dir \big) \nabla (\eta_\dir \eta_{\dir'}) - \eta_\dir \eta_{\dir'} \nabla \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big). 
\end{align*} Then for $\mce{\dir,\dir',2}$, if $\dir+\dir' \neq 0$, due to \eqref{3.21}, \begin{align*} & a_\dir a_{\dir'} \mbp_{\geq (\lamqp \sigma)/{2}} \divg \Big( \mbw_\dir \ootimes \mbw_{\dir'} + \mbw_{\dir'} \ootimes \mbw_{\dir} \Big) \\ = & a_\dir a_{\dir'} \mbp_{\geq (\lamqp \sigma)/{2}} \big( (b_\dir \ootimes b_{\dir'} + b_{\dir'} \ootimes b_\dir ) \nabla(\eta_\dir \eta_{\dir'}) \big) \\ & - \nabla \Big((a_\dir a_{\dir'}) \mbp_{\geq (\lamqp \sigma)/{2}} \big(\eta_\dir \eta_{\dir'} \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big) \big)\Big) \\ & + \nabla(a_\dir a_{\dir'}) \cdot \mbp_{\geq (\lamqp \sigma)/{2}} \big(\eta_\dir \eta_{\dir'} \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big) \big)\\ & + a_\dir a_{\dir'} \mbp_{\geq (\lamqp \sigma)/{2}} \big( \nabla(\eta_\dir \eta_{\dir'}) \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big) \big)\\ := & \mce{\dir,\dir',2,1} + \mce{\dir,\dir',2,2} + \mce{\dir,\dir',2,3} + \mce{\dir,\dir',2,4}. \end{align*} Among these terms, $\mce{\dir,\dir',2,2}$ can be absorbed into the $p^* \id$ term, and $\mce{\dir,\dir',2,3}$ can be estimated in the same way as $\mce{\dir,\dir',1}$. Moreover, as in \eqref{3.20+_eta_FrePro}, by the definitions \eqref{3.1_def_bpsi} and \eqref{3.17_eata_def}, in the case $\dir+\dir'\neq 0$ we can replace the projector $\mbp_{\geq (\lambda \sigma)/{2}}$ in $\mce{\dir,\dir',2,1}$ and $\mce{\dir,\dir',2,4}$ by $\mbp_{\geq {\lamqp}/{10}}$. Then using Lemma \ref{Lem:3.11} and noting \eqref{3.25_eta_est}, \begin{align*} & \linfp{\opR \mbp_{\neq 0} \mce{\dir,\dir',2,1}} \\ \lesssim & \Linfp{ |\nabla|^{-1} \mbp_{\neq 0} \Big( a_\dir a_{\dir'} \mbp_{\geq \lamqp/10} \big( b_\dir \ootimes b_{\dir'} + b_{\dir'} \ootimes b_\dir \big) \nabla(\eta_\dir \eta_{\dir'}) \Big)} \\ \lesssim & \lamqp^{-1} \cNN{a_\dir a_{\dir'}}{2} \| b_\dir \|_{L^\infty_{t,x}}\| b_{\dir'} \|_{L^\infty_{t,x}} \big( \linftwop{\eta_\dir} \linftwop{\nabla \eta_{\dir'}} + \linftwop{\nabla \eta_\dir} \linftwop{\eta_{\dir'}} \big) \\ \lesssim & \ell^{-6} \sigma r^{3-\frac{2}{p}}.
\end{align*} Similarly, \[ \Linfp{\opR \mbp_{\neq 0} \mce{\dir,\dir',2,4}} \lesssim \ell^{-6} \sigma r^{3-\frac{2}{p}}. \] Thus, for $\dir+\dir'\neq 0$, \begin{equation} \label{3.58_E2_Est} \Linfp{\opR \mce{\dir,\dir',2}} \lesssim \Big( \frac{\ell^{-8}}{\lamqp \sigma} + \ell^{-6} \sigma r \Big) r^{2-\frac{2}{p}}. \end{equation} Next, for the case $\dir+\dir'=0$ namely, for $\mce{\dir,-\dir,2}$ with $\dir \in \Lambda$, we have $$ \nabla \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big) = 0, $$ and by \eqref{3.18_eta_est} \begin{align*} & \divg \big( \mbw_\dir \ootimes \mbw_{-\dir} + \mbw_{-\dir} \ootimes \mbw_\dir \big) \\ = & \big( b_\dir \ootimes b_{-\dir} + b_{-\dir} \ootimes b_\dir \big) \nabla (\eta_\dir \eta_{-\dir}) \\ = & 2 (\dir^\perp \ootimes \dir^\perp) \nabla \eta_\dir^2 = \big( \id - 2 \dir\otimes \dir \big) \nabla \eta_\dir^2 \\ = & \Big( \nabla \eta_\dir^2 - 2 \big( (\dir\cdot \nabla) \eta_\dir^2 \big) \dir \Big) \\ = & \Big( \nabla\eta_\dir^2 \mp 2 \frac{1}{\mu} \dir \pt \eta_\dir^2 \Big) \quad \text{for}\ \dir \in \Lambda^\pm. 
\end{align*} Thus, for $\dir \in \Lambda^\pm$, \begin{align*} \mce{\dir,-\dir,2} = & \mbp_{\neq 0} \Big( a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \big( \nabla \eta_\dir^2 \mp 2 \frac{1}{\mu} \dir \pt \eta_\dir^2 \big) \Big) \\ = & \nabla \big( a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big) - \mbp_{\neq 0} \Big( (\nabla a_\dir^2) \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \Big) \\ & \mp 2 \mu^{-1} \dir \pt \mbp_{\neq 0} \big( a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big) \pm 2 \mu^{-1} \dir \mbp_{\neq 0} \big( (\pt a_\dir^2) \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big){.} \end{align*} Noting the definition of $\wqt$, \eqref{3.32_wqt_def}, \eqref{3.20+_eta_FrePro}, and that \[ \id - \mbp_H = \nabla \Delta^{-1} \nabla\cdot, \] one has \begin{align*} & \frac{1}{2} \sumk \mce{\dir,-\dir,2} + \pt \wqt \\ = & \Big(- \sumk \mu^{-1} \dir \nabla \Delta^{-1} \divg \pt \mbp_{\neq 0} \big( a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big) + \frac{1}{2}\nabla \big(a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2\big) \Big) \\ & + \Big( - \frac{1}{2} \sumk \mbp_{\neq 0} \big( \nabla a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big) \pm \sumk \mu^{-1} \dir \mbp_{\neq 0} \big( \pt a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big) \Big) \\ := & \mce{\dir,-\dir,2,1} + \mce{\dir,-\dir,2,2}. \end{align*} Here, $\mce{\dir,-\dir,2,1}$ can be added to the pressure term, and $\mce{\dir,-\dir,2,2}$ can be estimated with Lemma \ref{Lem:3.11} as \begin{align} \Linfp{ \opR \mce{\dir,-\dir,2,2}} \lesssim & \frac{\ell^{-8}}{\lamqp \sigma} \linftwopsq{\eta_\dir} \notag \\ \lesssim & \frac{\ell^{-8}}{\lamqp \sigma} r^{2-\frac{2}{p}}. 
\label{3.66+} \end{align} Thus, combining \eqref{3.57_E1_Est}, \eqref{3.58_E2_Est} and \eqref{3.66+} yields \begin{equation} \label{3.63} \Linfp{\opR \divg ( \Ro -p^* \id ) } \lesssim \big( \frac{\ell^{-8}}{\lamqp \sigma} + \ell^{-6} \sigma r \big) r^{2-\frac{2}{p}}, \end{equation} for \begin{align*} p^* = & - \sum_{ \substack{\dir,\dir' \in \Lambda\\ \dir + \dir' \neq 0}} (a_\dir a_{\dir'}) \mbp_{\geq (\lamqp \sigma)/{2}} \big(\eta_\dir \eta_{\dir'} \big( \lamqp^2 \psi_\dir \psi_{\dir'} \big) \big) \\ & - \sumk \mu^{-1} \dir \Delta^{-1} \divg \pt \mbp_{\neq 0} \big( a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2 \big) + \frac{1}{2} \big(a_\dir^2 \mbp_{\geq (\lamqp \sigma)/2} \eta_\dir^2\big). \end{align*} Also, by \eqref{3.39} and \eqref{3.25_eta_est}, \begin{align} & \CNN{\opR \divg ( \Ro -p^* \id ) }{1} \notag \\ \ls & \sumkk \CNN{\mce{\dir,\dir',1}}{2} + \sum_{ \substack{\dir,\dir' \in \Lambda\\ \dir + \dir' \neq 0}} \big( \CNN{\mce{\dir,\dir',2,1}}{2} + \CNN{\mce{\dir,\dir',2,3}}{2} + \CNN{\mce{\dir,\dir',2,4}}{2} \big) \notag \\ & \quad + \sumk \CNN{\mce{\dir,-\dir,2,2}}{2} \notag \\ \ls & \cNN{a_\dir}{3} \cNN{a_\dir}{0} (\cNN{\nabla^3 \eta_\dir}{0} + \cNN{\nabla^2 \pt \eta_\dir}{0}) \cNN{\eta_\dir}{0} \notag \\ & \qquad \cdot \big( \cNN{b_\dir}{2} \cNN{b_\dir}{0} + \lamqp^3 \cNN{\psi_\dir}{2} \cNN{\psi_\dir}{0} \big) \notag \\ \ls & \ell^{-8} \lamqp^5 \sigma^3 r^4 \mu. \label{7.15+} \end{align} Summing up \eqref{3.58+2}--\eqref{7.9+}, \eqref{3.56_Rl_Est}--\eqref{7.11+} and \eqref{3.63}--\eqref{7.15+}, we can get \begin{align} \linfp{\oR_{q+1}} \lesssim & \ell^{-8} \Big( \sigma \mu + \sigma r + \mu^{-1} r + (\lamqp \sigma)^{-1} \Big) r^{2-\frac{2}{p}} + \ell^{-4} \lamqp^{\theta_*} r^{1-\frac{2}{p}}, \label{7.16} \\ \CNN{\oR_{q+1} }{1} \ls & \ell^{-12} r^2 \lamqp^2 + \ell^{-8} \lamqp^5 \sigma^3 r^4 \mu.
\label{7.17} \end{align} Finally, we fix the parameters as \begin{equation} \label{3.64_parameter_choice} r = \lamqp^{1 - 6 \alpha}, \quad \mu = \lamqp^{1 - 4\alpha}, \quad \sigma = \lamqp^{-(1-2\alpha)}, \end{equation} with $\alpha \in \mbq^+$ defined in \eqref{7.16+_alpha}, and choose $1 < p < 2$ such that \[ (1-6\alpha) (2-\frac{2}{p}) = \alpha, \] namely, \[ p = \frac{2-12\alpha}{2-13\alpha} \in (1,2) \quad \text{and} \quad r^{2-\frac{2}{p}} = \lamqp^\alpha. \] Then we can check that $r, \sigma, \mu$ satisfy the requirements in \eqref{3.19+_parameters}. By choosing $A \in 5 \mbn$ large enough, one can get \eqref{2.3_b_Rq_l1_est} and \eqref{2.3_c_RC1_est}. Moreover, \eqref{3.46_w_cN} yields \eqref{2.3_a_vqC1_est}. Meanwhile, by \eqref{3.39+} and \eqref{3.58+1}, we can get \eqref{2.4_suppv}; by \eqref{3.42_wqp_inf2}--\eqref{3.44_wqcwqt_infp}, we can get \eqref{2.5_L2Increase}; and by \eqref{+.11} and \eqref{3.43_w_infp}, we can get \eqref{2.6_WIncrease}, which completes the proof of Lemma \ref{Lem:2.1}. \section*{Acknowledgments} The authors would like to thank Professor Zhouping Xin for his encouragement and support.
\section{Introduction} Ultra-relativistic heavy-ion collisions produce strongly interacting quark-gluon matter (sQGM) under extreme conditions of temperature and energy density at the Relativistic Heavy Ion Collider (RHIC) \cite{rhic1,rhic2,rhic3,rhic4} and the Large Hadron Collider (LHC) \cite{ALICE1,CMS1,ATLAS1}. Centrality is a key physical characteristic in the study of high-energy heavy-ion collisions, because it is directly related to the interaction volume (overlap zone) of the collision system. This overlap zone depends on the impact parameter $b$, defined as the distance between the centers of the two colliding nuclei in the plane transverse to the beam axis. The centrality of a nucleus-nucleus (AA) collision with impact parameter $b$ is usually defined as a percentile $c$ of the nucleus-nucleus total cross section $\sigma_{AA}$ \cite{abel1}: \begin{equation} c=\frac{\int_0^b d\sigma/db^{'}db^{'}}{\int_0^{\infty} d\sigma/db^{'} db^{'}} =\frac{1}{\sigma_{AA}}\int_0^b d\sigma/db^{'} db^{'}. \label{bas1} \end{equation} In experiments, this centrality percentile $c$ of the nucleus-nucleus total cross section is usually assumed to be approximately equivalent to the fraction of the charged-particle multiplicity above a multiplicity cut $N_{ch}^{cut}$, or to the fraction of the energy deposited in the zero-degree calorimeter (ZDC) below a cut $E_{ZDC}^{cut}$ \cite{abel1}: \begin{equation} \begin{aligned} c\approx &\frac{1}{N_{ch}^{tot}}\int_{N_{ch}^{cut}}^{N_{ch}^{tot}} d\sigma/dN_{ch}^{'} dN_{ch}^{'} \\ \approx &\frac{1}{E_{ZDC}^{tot}}\int_0^{E_{ZDC}^{cut}} d\sigma/dE_{ZDC}^{'} dE_{ZDC}^{'}. \label{bas3} \end{aligned} \end{equation} The nucleus-nucleus total cross section in Eq. (\ref{bas1}) is calculated by \begin{equation} \sigma_{AA}=\pi b_{max}^2 \times \frac{N_{evt}(N_{nn-c}\geq 1)} {N_{evt}(N_{nn-c}\geq 0)}, \label{bas2} \end{equation} i.e.
by the nucleus-nucleus geometrical total cross section ($\pi b_{max}^2$) corrected with the fraction of events with at least one nucleon-nucleon collision. Meanwhile, the centrality percentile $c$ in a nucleus-nucleus collision is also assumed to be equivalent to the fraction of the impact parameter distribution ($f(b)\propto bdb$). Therefore, a mapping relation of \begin{equation} b=\sqrt c \times b_{max} \label{eq1} \end{equation} is obtained. In the above equation, $c$ refers to the centrality percentile and $b_{max}$ is assumed to be \begin{equation} b_{max}=R_{A}+ R_{B}+f\times d, \label{eq2} \end{equation} where $R_{A}$ is the radius of nucleus $A$ ($A$ denotes the mass number of the nucleus as well), $d=0.546$ $fm$ refers to the tail of the nuclear density profile, and the coefficient $f$ is a free parameter: \begin{equation} R_A=r_0A^{1/3}, \hspace{0.2cm} r_0=1.12 \ fm. \end{equation} \section{Methodology} The PACIAE model \cite{sa,sa2,zhou} is a parton and hadron cascade model based on PYTHIA \cite{PYTHIA}. For nucleon-nucleon (NN) collisions, with respect to PYTHIA, the partonic and hadronic rescatterings are introduced before and after the hadronization, respectively. The final hadronic state is developed from the initial partonic hard scattering and parton showers, followed by the parton rescattering, string fragmentation, and hadron rescattering stages. Thus, the PACIAE model provides a multi-stage transport description of the evolution of the collision system. For nucleus-nucleus (AA) collisions, the initial positions of the nucleons in the colliding nuclei are sampled according to the Woods-Saxon distribution. Together with the initial momentum setup of $p_{x}=p_{y}= 0$ and $p_{z}=p_{\rm beam}$ for each nucleon, a list containing the initial state of all nucleons in the AA collision is constructed.
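The Woods-Saxon sampling of the initial nucleon positions described above can be sketched as follows (a minimal stand-alone illustration with $r_0=1.12$ fm and $d=0.546$ fm; the helper names and the rejection-sampling scheme are ours, not PACIAE routines):

```python
import math
import random

def sample_woods_saxon_radius(A, r0=1.12, d=0.546, rng=random):
    """Sample a radius r from the Woods-Saxon density weighted by r^2,
    i.e. p(r) ~ r^2 / (1 + exp((r - R_A)/d)), via rejection sampling."""
    R_A = r0 * A ** (1.0 / 3.0)
    r_max = R_A + 10.0 * d              # truncate the exponential tail
    while True:
        r = rng.uniform(0.0, r_max)
        weight = r * r / (1.0 + math.exp((r - R_A) / d))
        # weight <= r_max^2, so this is a valid rejection bound
        if rng.uniform(0.0, r_max * r_max) < weight:
            return r

def sample_nucleon_positions(A, rng=random):
    """Return A nucleon positions (x, y, z) in fm, isotropic in angle."""
    positions = []
    for _ in range(A):
        r = sample_woods_saxon_radius(A, rng=rng)
        cos_t = rng.uniform(-1.0, 1.0)  # uniform in cos(theta)
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        positions.append((r * sin_t * math.cos(phi),
                          r * sin_t * math.sin(phi),
                          r * cos_t))
    return positions
```

Each sampled nucleon would then be assigned $p_x=p_y=0$ and $p_z=p_{\rm beam}$ to build the initial nucleon list.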
A collision happens between two nucleons from different nuclei if their relative transverse distance $D$ is less than or equal to the minimum approaching distance: $D\leq\sqrt{\sigma_{\rm NN}^{\rm tot}/\pi}$. The collision time is calculated with the assumption of straight-line trajectories. All such nucleon pairs compose a nucleon-nucleon (NN) collision time list. The NN collision with the least collision time is selected from the list and executed by PYTHIA (PYEVNW subroutine) with the hadronization temporarily turned off, and with the strings and diquarks broken up. The nucleon list and the NN collision time list are then updated. A new NN collision with the least collision time is selected from the updated NN collision time list and executed by PYTHIA. Repeating the aforementioned steps until the NN collision time list is empty constructs the initial partonic state of an AA collision. Then, the partonic rescatterings are performed, where the LO-pQCD parton-parton cross section~\cite{Combridge,Field} is employed. After the partonic rescattering, the strings are recovered and then hadronized with the Lund string fragmentation scheme, resulting in an intermediate hadronic state. Finally, the system proceeds into the hadronic rescattering stage and produces the final hadronic state observed in the experiments. Thus the PACIAE Monte Carlo simulation provides a complete description of NN and/or AA collisions, which includes the partonic initialization stage, the partonic rescattering stage, the hadronization stage, and the hadronic rescattering stage. Meanwhile, the PACIAE model simulation can conveniently be stopped at any desired stage. In this work, the simulations are stopped after the hadronic rescattering stage, i.e. at the final hadronic state. In PACIAE 2.0 and its successors, we introduce the geometrical model of Eq.(\ref{eq1}) and set the coefficient $f$ equal to 2 and 1 for nucleus-nucleus and proton-nucleus collisions~\cite{sa}, respectively.
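The centrality-to-impact-parameter mapping of Eqs. (\ref{eq1})--(\ref{eq2}) amounts to a few lines of code (a stand-alone sketch; the function names are ours, and the coefficient $f$ is passed explicitly because its value differs between the settings discussed here):

```python
import math

def b_max(A, B, f, r0=1.12, d=0.546):
    """Maximum impact parameter b_max = R_A + R_B + f*d (fm),
    with R = r0 * A^(1/3) and the Woods-Saxon tail parameter d."""
    return r0 * A ** (1.0 / 3.0) + r0 * B ** (1.0 / 3.0) + f * d

def centrality_to_b(c, A, B, f):
    """Map a centrality percentile c in [0, 1] to an impact parameter
    via b = sqrt(c) * b_max, reflecting the f(b) ~ b db distribution."""
    if not 0.0 <= c <= 1.0:
        raise ValueError("centrality percentile must lie in [0, 1]")
    return math.sqrt(c) * b_max(A, B, f)
```

For Pb-Pb with $f=4$ this gives $b_{max}\approx 15.5$ fm, of the same size as the $b_{cut}$ values quoted for Pb-Pb below.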
The results of this geometrical model were very consistent with the STAR/RHIC centrality definitions~\cite{sa,star}, because they share the same $b_{max}$ definition. Later on, the ALICE, ATLAS and CMS collaborations at the LHC observed that $b_{max}$ should be extended to 20 $fm$, where the interaction just approaches zero~\cite{abel1}. This 20 $fm$ is larger than the $b_{max}$ calculated by Eq.(\ref{eq1}) with the above $f$ coefficient setting. A new setting of the $f$ coefficient is then required. In the improved Monte Carlo Glauber (MC-Glauber, MCG) model simulation \cite{loiz}, the impact parameter $b$ bin, corresponding to a given centrality percentile bin, is sliced according to the impact parameter distribution up to a $b_{cut}$, which is assumed to lie somewhere between $R_{A}+R_{B}$ and 20 $fm$. The last $b$ bin alone is assumed to extend from $b_{cut}$ to 20 $fm$. For the Pb-Pb, Xe-Xe, Au-Au and Cu-Cu collisions at relativistic energies, the $b_{cut}$ is assumed to be approximately equal to 15.6, 13.8, 14.9, and 11.0 $fm$ \cite{loiz}, respectively. Substituting these $b_{cut}$ values into the left side of Eq.(\ref{eq2}) individually, the corresponding $f$ values of 4.21, 4.45, 3.42, and 3.74 are obtained sequentially. Therefore we assume the parameter $f$ in Eq. (\ref{eq2}) equals 4 for nucleus-nucleus collisions but 2 for proton-nucleus collisions, and we slice the last $b$ bin only up to $b_{cut}$ in this work. The number of participant nucleons $\ensuremath{N_{\rm part}}$ and the number of binary nucleon-nucleon collisions $\ensuremath{N_{\rm coll}}$ are commonly used to represent the collision centrality. In the MC-Glauber model, they are counted, respectively, as the number of nucleons which suffer at least one collision (wounded nucleons) and the number of binary nucleon-nucleon collisions that happened in a nucleus-nucleus collision simulation within the boundary of $b_{cut}$.
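The counting of $\ensuremath{N_{\rm part}}$ and $\ensuremath{N_{\rm coll}}$ in one MC-Glauber event can be sketched as follows (a schematic stand-alone routine, not the improved MC-Glauber code of Ref. \cite{loiz}; the transverse nucleon positions are taken as given):

```python
import math

def glauber_counts(nucleons_A, nucleons_B, b, sigma_nn_mb):
    """Count (N_part, N_coll) for one event.

    nucleons_A, nucleons_B: lists of transverse positions (x, y) in fm,
    each in its own nucleus frame; b: impact parameter in fm;
    sigma_nn_mb: inelastic NN cross section in mb (1 mb = 0.1 fm^2).
    Two nucleons collide when D <= sqrt(sigma_NN / pi)."""
    d_max2 = (sigma_nn_mb * 0.1) / math.pi      # threshold D^2 in fm^2
    wounded_A = [False] * len(nucleons_A)
    wounded_B = [False] * len(nucleons_B)
    n_coll = 0
    for i, (xa, ya) in enumerate(nucleons_A):
        for j, (xb, yb) in enumerate(nucleons_B):
            dx = xa - (xb + b)                  # shift B by b along x
            dy = ya - yb
            if dx * dx + dy * dy <= d_max2:
                n_coll += 1
                wounded_A[i] = wounded_B[j] = True
    n_part = sum(wounded_A) + sum(wounded_B)
    return n_part, n_coll
```

Averaging over many sampled nucleon configurations and impact parameters gives $\langle N_{\rm part}\rangle$ and $\langle N_{\rm coll}\rangle$.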
The $\langle N_{\rm{part}}\rangle$ and $\langle N_{\rm{coll}}\rangle$ denote their average values over events, and the nuclear overlap function $\langle T_{AA}\rangle$ is calculated by \begin{equation} \langle T_{AA}\rangle=\frac{\langle N_{\rm{coll}}\rangle}{\sigma_{NN}^{inel}}, \label{tanc} \end{equation} where $\sigma_{NN}^{inel}$ is the inelastic nucleon-nucleon (NN) cross section. In the optical Glauber model\cite{esk} used in PACIAE, the $\ensuremath{N_{\rm part}}$ and $T_{AB}$ are analytically calculated by \begin{equation} \begin{aligned} N_{part}(b)=&\int T_A(\vec b-\vec s)[1-\exp{(-\sigma_{in}T_B(\vec s))}]d^2s \\ &+ \int T_B(\vec s)[1-\exp{(-\sigma_{in}T_A(\vec b-\vec s))}]d^2s,\\ T_{AB}=&\int T_A(\vec b-\vec s)T_B(\vec s)d^2s, \hspace{0.8cm}\\ T_A(\vec s)=&\int \rho(\vec s,z)dz, \label{bas7} \end{aligned} \end{equation} for asymmetric AB collisions. In Eq.(\ref{bas7}), $\vec s$ refers to a vector in the plane perpendicular to the beam axis $z$, $s=|\vec s|$, and $\rho(\vec s,z)$ stands for the nuclear density in the volume element $d^2s\,dz$ at the point $(\vec s, z)$. The nuclear density distribution in a nucleus is assumed to be the spherically symmetric Woods-Saxon density distribution~\cite{star} \begin{equation} \rho(r)=\rho_0[1+\rm{exp}(\frac{r-R_A}{d})]^{-1}. \end{equation} One further assumes \cite{Shalid} \begin{equation} \rho_0=\frac{A}{\frac{4\pi}{3}R_A^3}. \label{bas11} \end{equation} The normalization relation \begin{equation} 4\pi\int \rho(r)r^2dr=A \nonumber \end{equation} is then required.
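The compatibility of Eq. (\ref{bas11}) with the normalization relation can be checked numerically (our own illustration using plain trapezoidal integration): the diffuse Woods-Saxon tail makes the integral computed with $\rho_0$ from Eq. (\ref{bas11}) slightly exceed $A$, and rescaling $\rho_0$ enforces the normalization exactly.

```python
import math

def ws_integral(rho0, R_A, d, r_max=30.0, n=60000):
    """4*pi * integral_0^{r_max} rho0 / (1 + exp((r - R_A)/d)) * r^2 dr,
    evaluated with the trapezoidal rule."""
    h = r_max / n
    total = 0.0
    for i in range(n + 1):
        r = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * rho0 * r * r / (1.0 + math.exp((r - R_A) / d))
    return 4.0 * math.pi * h * total

A, r0, d = 208, 1.12, 0.546                    # Pb, parameters as above
R_A = r0 * A ** (1.0 / 3.0)
rho0_sphere = A / (4.0 * math.pi / 3.0 * R_A ** 3)   # Eq. (bas11)
mass = ws_integral(rho0_sphere, R_A, d)        # comes out slightly above A
rho0_normalized = rho0_sphere * A / mass       # enforces 4*pi*int = A
```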
\section{Results} Eq.~(\ref{eq2}) with the new set of $f$ coefficients, together with Eqs.~(\ref{bas7})-(\ref{bas11}), is used in PACIAE 2.2.2 to calculate $b_{max}$ and the $b$ bins as well as $\ensuremath{N_{\rm coll}}$ and $\ensuremath{N_{\rm part}}$ in Pb-Pb and p-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV, Xe-Xe collisions at $\sqrt{s_{NN}}$=5.44 TeV, as well as Au-Au and Cu-Cu collisions at $\sqrt{s_{NN}}$=0.2 TeV. The nucleon-nucleon inelastic cross section $\sigma_{NN}^{inel}$ is set to 41.6, 67.6, and 68.4 mb for $\sqrt{s_{NN}}$=0.2, 5.02, and 5.44 TeV~\cite{loiz}, respectively. The results are given in Tabs.~1-5, where the corresponding improved MC-Glauber model results \cite{loiz} are also given for comparison. We see in these tables that the optical Glauber model results are well consistent with the corresponding MC-Glauber ones within the error bars, except for the most peripheral collisions. \begin{center} \begin{figure}[htbp] \centering \hspace{-0.50cm} \includegraphics[width=0.7\textwidth]{eta.eps} \caption{(Color online) PACIAE model simulated charged-particle pseudorapidity distributions (open symbols) in Pb-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV compared with the corresponding ALICE data (solid symbols) \cite{ALICE2}.} \label{eta} \end{figure} \end{center} The PACIAE model simulated charged-particle pseudorapidity distributions (open symbols) in Pb-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV are compared with the corresponding ALICE data \cite{ALICE2} (solid symbols) in Fig. \ref{eta} for ten centrality bins of 0-5\%, 5-10\%, 10-20\%, ..., 80-90\%. This figure shows that the PACIAE 2.2.2 model results reproduce the ALICE data well from central to peripheral Pb-Pb collisions. In the $|\eta|>4$ region, however, the theoretical results are smaller than the experimental data.
This has to be studied further in future work by adjusting the parameters of the Lund fragmentation function and/or trying other fragmentation functions. We compare the PACIAE model simulated charged-particle transverse momentum distributions in Pb-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV to the corresponding ALICE data \cite{ALICE3} in Fig.~\ref{pt} for the centrality bins of 0-5\%, 5-10\%, ..., 70-80\%. For better visibility, both the PACIAE results and the ALICE data, except the 70-80\% ones, are rescaled by factors of 10, $10^2$, ..., $10^8$, respectively. In this figure we see that the PACIAE model reproduces the corresponding ALICE data well in the $\ensuremath{p\rm {_T}}<$6 GeV/c region. However, in the $\ensuremath{p\rm {_T}}>$6 GeV/c region, the theoretical results are smaller than the experimental data. This might be because the $\ensuremath{p\rm {_T}}$ distribution in heavy-ion collisions changes from an exponential-like shape in the low $\ensuremath{p\rm {_T}}$ region to a power-law shape in the high $\ensuremath{p\rm {_T}}$ region \cite{ALICE4}, while the $\ensuremath{p\rm {_T}}$ distribution is sampled from an exponential-like distribution in this work. In a next study we shall try to sample the particle $\ensuremath{p\rm {_T}}$ from a power-law distribution instead of an exponential-like one in the $\ensuremath{p\rm {_T}}>$ 6 GeV/c region. \begin{center} \begin{figure}[htbp] \centering \hspace{-0.50cm} \includegraphics[width=0.7\textwidth]{pt.eps} \caption{(Color online) PACIAE model simulated charged-particle transverse momentum distributions (open symbols) in Pb-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV compared with the corresponding ALICE data (solid symbols)\cite{ALICE3}.
For better visibility, the data except the 70-80\% ones are rescaled by different factors as indicated in the legend.} \label{pt} \end{figure} \end{center} \section{Summary} The $f$ coefficient in the impact parameter $b_{max}$ formula of the PACIAE model is reset in response to the ALICE, ATLAS, and CMS observation that the maximum impact parameter in heavy-ion collisions at relativistic energies should be extended to 20 fm. Consequently, the PACIAE model is updated to a new version, PACIAE 2.2.2, which makes it convenient to study elementary nuclear collisions, proton-nucleus collisions, and nucleus-nucleus collisions within a unified program version. The PACIAE model calculated impact parameter bins and the optical Glauber results for $\ensuremath{N_{\rm part}}$ and $\ensuremath{N_{\rm coll}}$ are well consistent with the corresponding improved MC-Glauber ones in Pb-Pb, $p$-Pb, Xe-Xe, Au-Au, and Cu-Cu collisions at relativistic energies. The PACIAE model simulated charged-particle pseudorapidity and transverse momentum distributions in Pb-Pb collisions at $\sqrt{s_{NN}}$=5.02 TeV reproduce the corresponding ALICE data well. These results confirm the correctness of the impact parameter centrality definition in the updated PACIAE model and the effectiveness of the PACIAE model itself. \textbf{Acknowledgments} This work was supported by the National Natural Science Foundation of China under grant Nos. 11775094 and 11905188, and by the 111 project of the foreign expert bureau of China. YLY acknowledges the financial support from the Continuous Basic Scientific Research Project (No. WDJC-2019-13) at CIAE.
\label{} \begin{table} \centering \begin{varwidth}{\textwidth} \caption{Impact parameter $b$, $\langle N_{\rm{coll}}\rangle$ and $\langle N_{\rm{part}}\rangle$ of optical Glauber (PACIAE) model for Pb-Pb at $\sqrt{s_{NN}}$=5.02 TeV, and compared with the ones of Monte Carlo Glauber model\cite{loiz}.} \end{varwidth} \begin{tabularx}{\textwidth}{CCCCC|CCCC} \hline \hline & \multicolumn{4}{c|}{optical Glauber (PACIAE) model} & \multicolumn{4}{c} {Monte Carlo Glauber (MCG) model$^{*}$} \\ \hline Centrality & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle$ & $\langle N_{\rm{part}}\rangle $ & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle \pm$rms & $\langle N_{\rm{part}}\rangle \pm$rms \\ \hline 0-5\% & 0 & 3.46 & 1871.8 & 375.4 & 0 & 3.49 & 1762$\pm$147 & 384.3$\pm$16.6 \\ 5-10\% & 3.46 & 4.89 & 1447.8 & 317.5 & 3.49 & 4.93 & 1380$\pm$113 & 331.2$\pm$17.7 \\ 10-15\% & 4.89 & 5.99 & 1128.1 & 267.4 & 4.93 & 6.04 & 1088$\pm$93.4 & 283$\pm$16.8 \\ 15-20\% & 5.99 & 6.91 & 876.8 & 224.5 & 6.04 & 6.98 & 855.3$\pm$80.8 & 240.9$\pm$16 \\ 20-25\% & 6.91 & 7.73 & 675.4 & 187.4 & 6.98& 7.8& 667.6$\pm$71.6 &204$\pm$15.3 \\ 25-30\% & 7.73 & 8.47 & 512.9 & 155.1 & 7.8 & 8.55 & 515.7$\pm$63.9 & 171.6$\pm$14.7 \\ 30-35\% & 8.47 & 9.14 & 383.9 & 127.3 & 8.55 & 9.23 & 392.9$\pm$57 & 143.2$\pm$14.1 \\ 35-40\% & 9.14 & 9.78 & 281.3 & 103.0 & 9.23 & 9.87 & 294.5$\pm$50 & 118.3$\pm$13.6 \\ 40-45\% & 9.78 & 10.37 & 201.0 & 82.1 & 9.87 & 10.5 & 216.4$\pm$43.3 & 96.49$\pm$13 \\ 45-50\% & 10.37 & 10.93 & 140.2 & 64.4 & 10.5 & 11 & 155.5$\pm$36.6 & 77.48$\pm$12.4 \\ 50-55\% & 10.93 & 11.46 & 95.0 & 49.4 & 11 & 11.6 & 109.2$\pm$30.2 & 61.19$\pm$11.7 \\ 55-60\% & 11.46 & 11.97 & 62.3 & 36.9 & 11.6 & 12.1 & 74.73$\pm$24.3 & 47.31$\pm$10.9 \\ 60-65\% & 11.97 & 12.46 & 39.5 & 26.8 & 12.1 & 12.6 & 49.88$\pm$19.1 & 35.74$\pm$9.96 \\ 65-70\% & 12.46 & 12.93 & 24.3 & 18.8 & 12.6 & 13.1 & 32.38$\pm$14.7 & 26.26$\pm$8.95 \\ 70-75\% & 12.93 
& 13.39 & 14.5 & 12.7 & 13.1 & 13.5 & 20.54$\pm$11.1 & 18.75$\pm$7.79 \\ 75-80\% & 13.39 & 13.82 & 8.5 & 8.3 & 13.5 & 14 & 12.85$\pm$8.16 & 13.09$\pm$6.55 \\ 80-85\% & 13.82 & 14.25 & 4.9 & 5.3 & 14 & 14.4 & 8.006$\pm$5.82 & 9.038$\pm$5.22 \\ 85-90\% & 14.25 & 14.66 & 2.8 & 3.2 & 14.4 & 14.9 & 5.084$\pm$4.08 & 6.304$\pm$3.98 \\ 90-95\% & 14.66 & 15.06 & 1.5 & 1.9 & 14.9 & 15.6 & 3.27$\pm$2.77 & 4.452$\pm$2.86 \\ 95-100\% & 15.06 & 15.46 & 0.8 & 1.1 & 15.6 & 20 & 2.035$\pm$1.72 & 3.103$\pm$1.8 \\ \hline \hline \\ \multicolumn{4}{l}{* Data are taken from \cite{loiz}} \\ \end{tabularx} \label{tab:PbPb502} \end{table} \begin{table} \centering \begin{varwidth}{\textwidth} \caption{Impact parameter $b$, $\langle N_{\rm{coll}}\rangle$ and $\langle N_{\rm{part}}\rangle$ of optical Glauber (PACIAE) model for p-Pb at $\sqrt{s_{NN}}$=5.02 TeV, and compared with the ones of Monte Carlo Glauber model\cite{loiz}.} \end{varwidth} \begin{tabularx}{\textwidth}{CCCCC|CCCC} \hline \hline & \multicolumn{4}{c|}{optical Glauber (PACIAE) model} & \multicolumn{4}{c} {Monte Carlo Glauber (MCG) model} \\ \hline Centrality & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle$ & $\langle N_{\rm{part}}\rangle $ & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle \pm$rms & $\langle N_{\rm{part}}\rangle \pm$rms \\ \hline 0-5\% & 0 & 1.79 & 13.1 & 11.2 & 0 & 1.82 & 13.68$\pm$3.51 & 14.68$\pm$3.51 \\ 5-10\% & 1.79 & 2.53 & 12.5 & 10.7 & 1.82 & 2.58 & 13.11$\pm$3.4 & 14.11$\pm$3.4 \\ 10-15\% & 2.53 & 3.10 & 11.8 & 10.1 & 2.58 & 3.16 & 12.5$\pm$3.3 & 13.5$\pm$3.3 \\ 15-20\% & 3.10 & 3.58 & 11.1 & 9.5 & 3.16 & 3.65 & 11.83$\pm$3.18 & 12.83$\pm$3.18 \\ 20-25\% & 3.58 & 4.00 & 10.3 & 8.9 & 3.65 & 4.08 & 11.13$\pm$3.07 & 12.13$\pm$3.07 \\ 25-30\% & 4.00 & 4.38 & 9.5 & 8.3 & 4.08 & 4.47 & 10.36$\pm$2.96 & 11.36$\pm$2.96 \\ 30-35\% & 4.38 & 4.74 & 8.7 & 7.7 & 4.47 & 4.83 & 9.529$\pm$2.83 & 10.53$\pm$2.83 \\ 35-40\% & 4.74 & 5.06 & 7.9 & 7.0 & 
4.83 & 5.16 & 8.646$\pm$2.7 & 9.646$\pm$2.7 \\ 40-45\% & 5.06 & 5.37 & 7.0 & 6.3 & 5.16 & 5.47 & 7.721$\pm$2.57 & 8.721$\pm$2.57 \\ 45-50\% & 5.37 & 5.66 & 6.2 & 5.7 & 5.47 & 5.77 & 6.766$\pm$2.41 & 7.766$\pm$2.41 \\ 50-55\% & 5.66 & 5.94 & 5.4 & 5.0 & 5.77 & 6.05 & 5.836$\pm$2.25 & 6.836$\pm$2.25 \\ 55-60\% & 5.94 & 6.20 & 4.6 & 4.4 & 6.05 & 6.32 & 4.949$\pm$2.07 & 5.949$\pm$2.07 \\ 60-65\% & 6.20 & 6.45 & 3.9 & 3.8 & 6.32 & 6.58 & 4.132$\pm$1.87 & 5.132$\pm$1.87 \\ 65-70\% & 6.45 & 6.70 & 3.3 & 3.3 & 6.58 & 6.84 & 3.415$\pm$1.66 & 4.415$\pm$1.66 \\ 70-75\% & 6.70 & 6.93 & 2.7 & 2.8 & 6.84 & 7.1 & 2.802$\pm$1.45 & 3.802$\pm$1.45 \\ 75-80\% & 6.93 & 7.16 & 2.2 & 2.4 & 7.1 & 7.36 & 2.294$\pm$1.23 & 3.294$\pm$1.23 \\ 80-85\% & 7.16 & 7.38 & 1.8 & 2.0 & 7.36 & 7.65 & 1.877$\pm$1.00 & 2.877$\pm$1.00 \\ 85-90\% & 7.38 & 7.59 & 1.5 & 1.7 & 7.65 & 7.99 & 1.55$\pm$0.78 & 2.55$\pm$0.78 \\ 90-95\% & 7.59 & 7.80 & 1.2 & 1.4 & 7.99 & 8.49 & 1.287$\pm$0.56 & 2.287$\pm$0.56 \\ 95-100\% & 7.80 & 8.00 & 0.9 & 1.1 & 8.49 & 14.7 & 1.082$\pm$0.30 & 2.082$\pm$0.30 \\ \hline \hline \\ \end{tabularx} \label{tab:pPb502} \end{table} \begin{table} \centering \begin{varwidth}{\textwidth} \caption{Impact parameter $b$, $\langle N_{\rm{coll}}\rangle$ and $\langle N_{\rm{part}}\rangle$ of optical Glauber (PACIAE) model for Xe-Xe at $\sqrt{s_{NN}}$=5.44 TeV, and compared with the ones of Monte Carlo Glauber model\cite{loiz}.} \end{varwidth} \begin{tabularx}{\textwidth}{CCCCC|CCCC} \hline \hline & \multicolumn{4}{c|}{optical Glauber (PACIAE) model} & \multicolumn{4}{c} {Monte Carlo Glauber (MCG) model} \\ \hline Centrality & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle$ & $\langle N_{\rm{part}}\rangle $ & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle \pm$rms & $\langle N_{\rm{part}}\rangle \pm$rms \\ \hline 0-5\% & 0 & 3.03 & 992.5 & 234.4 & 0 & 3.01 & 942.5$\pm$92.1 & 236.5$\pm$10 \\ 5-10\% & 3.03 & 4.29 & 764.3 & 198.4 & 
3.01 & 4.26 & 734.1$\pm$72.8 & 206.1$\pm$11.7 \\ 10-15\% & 4.29 & 5.25 & 591.6 & 166.9 & 4.26 & 5.22 & 571.9$\pm$62 & 177.1$\pm$12.2 \\ 15-20\% & 5.25 & 6.06 & 457.2 & 140.0 & 5.22 & 6.02 & 443.9$\pm$55.5 & 151.1$\pm$12.4 \\ 20-25\% & 6.06 & 6.78 & 348.5 & 116.3 & 6.02 & 6.73 & 341.7$\pm$50.8 & 127.9$\pm$12.6 \\ 25-30\% & 6.78 & 7.43 & 262.3 & 95.8 & 6.73 & 7.38 & 260.5$\pm$46.2 & 107.4$\pm$12.6 \\ 30-35\% & 7.43 & 8.02 & 194.5 & 78.2 & 7.38 & 7.97 & 196.1$\pm$41.7 & 89.36$\pm$12.6 \\ 35-40\% & 8.02 & 8.58 & 141.4 & 63.0 & 7.97 & 8.52 & 145.5$\pm$36.8 & 73.53$\pm$12.4 \\ 40-45\% & 8.58 & 9.10 & 100.4 & 49.9 & 8.52 & 9.04 & 106.5$\pm$31.7 & 59.75$\pm$12.1 \\ 45-50\% & 9.10 & 9.59 & 69.8 & 38.9 & 9.04 & 9.53 & 76.83$\pm$26.8 & 47.94$\pm$11.6 \\ 50-55\% & 9.59 & 10.06 & 47.3 & 29.7 & 9.53 & 9.99 & 54.64$\pm$22.1 & 37.9$\pm$10.9 \\ 55-60\% & 10.06 & 10.50 & 31.4 & 22.2 & 9.99 & 10.4 & 38.28$\pm$18 & 29.43$\pm$10.1 \\ 60-65\% & 10.50 & 10.93 & 20.4 & 16.1 & 10.4 & 10.9 & 26.61$\pm$14.4 & 22.56$\pm$9.17 \\ 65-70\% & 10.93 & 11.35 & 12.8 & 11.4 & 10.9 & 11.3 & 18.25$\pm$11.3 & 16.98$\pm$8.06 \\ 70-75\% & 11.35 & 11.74 & 8.0 & 7.9 & 11.3 & 11.7 & 12.49$\pm$8.7 & 12.68$\pm$6.89 \\ 75-80\% & 11.74 & 12.13 & 4.9 & 5.3 & 11.7 & 12.1 & 8.627$\pm$6.62 & 9.503$\pm$5.74 \\ 80-85\% & 12.13 & 12.50 & 3.0 & 3.4 & 12.1 & 12.5 & 6.011$\pm$4.93 & 7.152$\pm$4.61 \\ 85-90\% & 12.50 & 12.86 & 1.8 & 2.2 & 12.5 & 13.1 & 4.232$\pm$3.64 & 5.422$\pm$3.6 \\ 90-95\% & 12.86 & 13.22 & 1.1 & 1.4 & 13.1 & 13.8 & 2.967$\pm$2.58 & 4.116$\pm$2.67 \\ 95-100\% & 13.22 & 13.56 & 0.6 & 0.9 & 13.8 & 20 & 1.95$\pm$1.64 & 3.007$\pm$1.72 \\ \hline \hline \\ \end{tabularx} \label{tab:XeXe} \end{table} \begin{table} \centering \begin{varwidth}{\textwidth} \caption{Impact parameter $b$, $\langle N_{\rm{coll}}\rangle$ and $\langle N_{\rm{part}}\rangle$ of optical Glauber (PACIAE) model for Au-Au at $\sqrt{s_{NN}}$=0.2 TeV, and compared with the ones of Monte Carlo Glauber model\cite{loiz}.} \end{varwidth} 
\begin{tabularx}{\textwidth}{CCCCC|CCCC} \hline \hline & \multicolumn{4}{c|}{optical Glauber (PACIAE) model} & \multicolumn{4}{c} {Monte Carlo Glauber (MCG) model} \\ \hline Centrality & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle$ & $\langle N_{\rm{part}}\rangle $ & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle \pm$rms & $\langle N_{\rm{part}}\rangle \pm$rms \\ \hline 0-5\% & 0 & 3.31 & 1075.5 & 345.5 & 0 & 3.31 & 1053$\pm$92.2 & 351$\pm$17.8 \\ 5-10\% & 3.31 & 4.68 & 842.8 & 289.6 & 3.31 & 4.68 & 831.4$\pm$72.1 & 298.1$\pm$17 \\ 10-15\% & 4.68 & 5.74 & 664.6 & 243.0 & 4.68 & 5.73 & 660.1$\pm$61 & 252.7$\pm$16 \\ 15-20\% & 5.74 & 6.62 & 523.3 & 203.8 & 5.73 & 6.61 & 523$\pm$54.4 & 213.8$\pm$15.4 \\ 20-25\% & 6.62 & 7.40 & 409.7 & 170.2 & 6.61 & 7.39 & 412$\pm$49.5 & 180.1$\pm$14.9 \\ 25-30\% & 7.40 & 8.11 & 316.8 & 141.0 & 7.39 & 8.1 & 321.1$\pm$45.3 & 150.8$\pm$14.6 \\ 30-35\% & 8.11 & 8.76 & 241.5 & 115.7 & 8.1 & 8.75 & 247.2$\pm$41.3 & 125.1$\pm$14.3 \\ 35-40\% & 8.76 & 9.37 & 180.9 & 93.8 & 8.75 & 9.35 & 187.8$\pm$37 & 102.8$\pm$13.9 \\ 40-45\% & 9.37 & 9.93 & 133.1 & 75.1 & 9.35 & 9.92 & 139.9$\pm$32.5 & 83.36$\pm$13.4 \\ 45-50\% & 9.93 & 10.47 & 95.8 & 59.2 & 9.92 & 10.5 & 102.4$\pm$27.8 & 66.65$\pm$12.7 \\ 50-55\% & 10.47 & 10.98 & 67.1 & 45.6 & 10.5 & 11 & 73.35$\pm$23.4 & 52.37$\pm$11.9 \\ 55-60\% & 10.98 & 11.47 & 45.8 & 34.4 & 11 & 11.5 & 51.45$\pm$19.2 & 40.39$\pm$11 \\ 60-65\% & 11.47 & 11.94 & 30.3 & 25.2 & 11.5 & 11.9 & 35.33$\pm$15.4 & 30.5$\pm$9.95 \\ 65-70\% & 11.94 & 12.39 & 19.6 & 18.0 & 11.9 & 12.4 & 23.74$\pm$12 & 22.5$\pm$8.79 \\ 70-75\% & 12.39 & 12.82 & 12.3 & 12.4 & 12.4 & 12.8 & 15.64$\pm$9.17 & 16.23$\pm$7.5 \\ 75-80\% & 12.82 & 13.24 & 7.6 & 8.4 & 12.8 & 13.2 & 10.22$\pm$6.83 & 11.55$\pm$6.17 \\ 80-85\% & 13.24 & 13.65 & 4.6 & 5.4 & 13.2 & 13.7 & 6.699$\pm$4.96 & 8.193$\pm$4.86 \\ 85-90\% & 13.65 & 14.05 & 2.7 & 3.5 & 13.7 & 14.2 & 4.426$\pm$3.49 & 5.852$\pm$3.67 
\\ 90-95\% & 14.05 & 14.43 & 1.6 & 2.1 & 14.2 & 14.9 & 2.949$\pm$2.38 & 4.216$\pm$2.6 \\ 95-100\% & 14.43 & 14.81 & 0.9 & 1.3 & 14.9 & 20 & 1.867$\pm$1.43 & 2.957$\pm$1.57 \\ \hline \hline \\ \end{tabularx} \label{tab:AuAu} \end{table} \begin{table} \centering \begin{varwidth}{\textwidth} \caption{Impact parameter $b$, $\langle N_{\rm{coll}}\rangle$ and $\langle N_{\rm{part}}\rangle$ of optical Glauber (PACIAE) model for Cu-Cu at $\sqrt{s_{NN}}$=0.2 TeV, and compared with the ones of Monte Carlo Glauber model\cite{loiz}.} \end{varwidth} \begin{tabularx}{\textwidth}{CCCCC|CCCC} \hline \hline & \multicolumn{4}{c|}{optical Glauber (PACIAE) model$^{*}$} & \multicolumn{4}{c} {Monte Carlo Glauber (MCG) model} \\ \hline Centrality & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle$ & $\langle N_{\rm{part}}\rangle $ & $b_{\rm{min}}(\rm{fm})$ & $b_{\rm{max}}(\rm{fm})$ & $\langle N_{\rm{coll}}\rangle \pm$rms & $\langle N_{\rm{part}}\rangle \pm$rms \\ \hline 0-5\% & 0 & 2.44 & 217.9 & 105.3 & 0 & 2.34 & 203.6$\pm$24.9 & 106.5$\pm$6.21 \\ 5-10\% & 2.44 & 3.45 & 166.7 & 86.8 & 2.34 & 3.31 & 162.9$\pm$20.6 & 91.68$\pm$6.41 \\ 10-15\% & 3.45 & 4.23 & 127.5 & 71.3 & 3.31 & 4.06 & 130.1$\pm$18 & 78.42$\pm$6.52 \\ 15-20\% & 4.23 & 4.88 & 97.0 & 58.3 & 4.06 & 4.68 & 103.7$\pm$16.3 & 66.83$\pm$6.65 \\ 20-25\% & 4.88 & 5.46 & 73.2 & 47.3 & 4.68 & 5.24 & 82.13$\pm$15 & 56.58$\pm$6.78 \\ 25-30\% & 5.46 & 5.98 & 54.5 & 38.0 & 5.24 & 5.73 & 64.7$\pm$13.8 & 47.63$\pm$6.86 \\ 30-35\% & 5.98 & 6.46 & 40.0 & 30.2 & 5.73 & 6.19 & 50.63$\pm$12.5 & 39.83$\pm$6.86 \\ 35-40\% & 6.46 & 6.90 & 28.9 & 23.6 & 6.19 & 6.62 & 39.28$\pm$11.3 & 33.03$\pm$6.8 \\ 40-45\% & 6.90 & 7.32 & 20.5 & 18.1 & 6.62 & 7.02 & 30.23$\pm$10.2 & 27.14$\pm$6.66 \\ 45-50\% & 7.32 & 7.72 & 14.3 & 13.7 & 7.02 & 7.4 & 23.11$\pm$8.95 & 22.11$\pm$6.43 \\ 50-55\% & 7.72 & 8.09 & 9.9 & 10.2 & 7.4 & 7.77 & 17.54$\pm$7.79 & 17.84$\pm$6.08 \\ 55-60\% & 8.09 & 8.45 & 6.7 & 7.5 & 7.77 & 8.11 & 
13.25$\pm$6.69 & 14.3$\pm$5.65 \\ 60-65\% & 8.45 & 8.80 & 4.5 & 5.3 & 8.11 & 8.45 & 9.988$\pm$5.67 & 11.4$\pm$5.13 \\ 65-70\% & 8.80 & 9.13 & 3.0 & 3.8 & 8.45 & 8.78 & 7.576$\pm$4.75 & 9.111$\pm$4.56 \\ 70-75\% & 9.13 & 9.45 & 2.0 & 2.6 & 8.78 & 9.11 & 5.774$\pm$3.9 & 7.305$\pm$3.94 \\ 75-80\% & 9.45 & 9.76 & 1.3 & 1.8 & 9.11 & 9.47 & 4.453$\pm$3.18 & 5.906$\pm$3.34 \\ 80-85\% & 9.76 & 10.06 & 0.8 & 1.2 & 9.47 & 9.86 & 3.465$\pm$2.55 & 4.822$\pm$2.78 \\ 85-90\% & 10.06 & 10.35 & 0.5 & 0.8 & 9.86 & 10.3 & 2.703$\pm$2 & 3.953$\pm$2.23 \\ 90-95\% & 10.35 & 10.64 & 0.3 & 0.5 & 10.3 & 11 & 2.116$\pm$1.52 & 3.261$\pm$1.7 \\ 95-100\% & 10.64 & 10.91 & 0.2 & 0.4 & 11 & 19.1 & 1.582$\pm$1.06 & 2.629$\pm$1.15 \\ \hline \hline \\ \multicolumn{8}{l}{* The tail of the nuclear density profile $d=0.488 fm$ for the Cu.} \\ \end{tabularx} \label{tab:CuCu} \end{table}
\section{Introduction} The simulation of random processes on computers is an important tool in scientific research and a subroutine of many statistical algorithms. One way to formalize this task is to return samples from some distribution given access to a density or mass function and to a pseudorandom number generator that returns independent uniform random numbers. ``Monte Carlo methods'', a phrase originally referring to the casinos of Monte Carlo, is a catchall for algorithms that solve this problem. Many Monte Carlo methods exist for specific distributions or classes of distributions \citep{walker1977alias, devroye}, but there are a few generic principles. One principle is to simulate a Markov chain whose stationary distribution is the distribution of interest. Work on these Markov chain Monte Carlo methods has exploded over the past few decades, because of their efficiency at sampling from complex distributions in high dimensions. Their downside is that convergence can be slow and detecting convergence is hard. A second principle is to propose samples from a tractable distribution and accept them according to a correction factor. These accept-reject Monte Carlo methods are the workhorses of modern statistical packages, but their use is restricted to simple distributions on low dimensional spaces. Recently, a research program has developed around another principle for sampling from discrete distributions, the so-called ``Gumbel-Max trick''. The trick proceeds by simulating a random function $G : \{1, \ldots, m\} \rightarrow \mathbb{R}$ whose maximum is located at a sample. Sampling therefore reduces to finding the state that maximizes $G$. This trick has the same complexity as better known methods, but it has inspired research into approximate methods and extensions.
Methods that abandon exactness for efficiency introduce correlated $G$ with a variety of applications (\citeauthor{papandreou2011perturb}, \citeyear{papandreou2011perturb}; \citeauthor{tarlow2012randomized}, \citeyear{tarlow2012randomized}; \citeauthor{hazan2013perturb}, \citeyear{hazan2013perturb}). \cite{2015arXiv150609039C} consider bandit algorithms for optimizing $G$ over low dimensional spaces when function evaluation is expensive. \cite{maddison2014astarsamp} generalized $G$ with Gumbel processes, random functions over infinite spaces whose maxima occur at samples of arbitrary distributions, and introduced A* sampling, a branch and bound algorithm that executes a generalized Gumbel-Max trick. \cite{kim2016lprelaxsamp} introduced a related branch and bound algorithm tailored to discrete distributions and successfully sampled from a large fully connected attractive Ising model. Taken together, this view of simulation as a maximization problem is a promising direction, because it connects Monte Carlo research with the literature on optimization. Yet, its relationship to more established methods has not been clearly expressed. This chapter addresses that need by identifying a model that jointly explains both the accept-reject principle and the Gumbel-Max trick. As a brief introduction, we cover a simple example of an accept-reject algorithm and the Gumbel-Max trick shown in \Fig{fig:intro}. Suppose we are given a positive function $f : \{1, \ldots, m\} \to \mathbb{R}^+$, which describes the unnormalized mass of a discrete random variable $I$, \begin{align} \label{eq:example} \mathbb{P}(I \in B) = \sum_{i \in B} \frac{f(i)}{\sum_{j=1}^m f(j)}, \quad B \subseteq \{1, \ldots, m\}. \end{align} The following algorithms return an integer with the same distribution as $I$.
The accept-reject algorithm is, \begin{enumerate} \item Sample $J$ uniformly from $\{1, \ldots, m\}$, $U$ uniformly from $[0,\max_{i=1}^m f(i)]$, \item If $U < f(J)$, return $J$, else go to 1. \end{enumerate} We can intuitively justify it by noticing that the accepted pair $(J, U)$ falls uniformly under the graph of $f(i)$, \Fig{fig:intro}. The sample $J$, which is accepted or rejected, is often called a \emph{proposal}. The Gumbel-Max trick proceeds by optimizing a random function, \begin{enumerate} \item For $i \in \{1, \ldots, m\}$ sample an independent Gumbel random variable $G(i)$. \item Find and return $I^* = \argmax_{i=1}^m \log f(i) + G(i)$. \end{enumerate} Because the random values $\log f(i) + G(i)$ can be seen as a perturbed negative energy function, the function $G$ is often called a \emph{perturbation}. Uniform and Gumbel random variables are included among the standard distributions of statistical computing packages. So these algorithms, while inefficient, are simple to program. \begin{figure}[t] \begin{center} \includegraphics{figures/intro.pdf} \caption{Two simple Monte Carlo methods for a discrete distribution described by a positive function $f$ via (\ref{eq:example}). The left-hand plot shows the first accepted sample $J$ in an accept-reject scheme; note that $U < f(J)$. The right-hand plot shows a sample $I^*$ in the Gumbel-Max trick; $I^*$ is the state that achieves the maximum $G^* = \max_i \log f(i) + G(i)$.} \label{fig:intro} \end{center} \end{figure} Considering their apparent differences and the fact that they have been studied in distinct literatures, it is surprising that both algorithms can be unified under the same theoretical framework. The framework rests on the study of Poisson processes, a random object whose value is a countable set of points in space \citep{kingman1992poisson, daley2007introduction}.
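Both toy algorithms are easy to check numerically. The following sketch (illustrative only, with an arbitrary choice of $f$) implements accept-reject and the Gumbel-Max trick for the same unnormalized mass function and verifies that their empirical frequencies approach $f(i)/\sum_j f(j)$:

```python
import math, random

random.seed(0)
f = [1.0, 2.0, 3.0, 4.0]                 # arbitrary unnormalized mass f(i)
m, fmax = len(f), max(f)
target = [fi / sum(f) for fi in f]       # [0.1, 0.2, 0.3, 0.4]

def accept_reject():
    while True:
        J = random.randrange(m)          # uniform proposal
        U = random.uniform(0.0, fmax)
        if U < f[J]:                     # accept
            return J

def gumbel_max():
    # G(i) = -log(-log(U)) is a standard Gumbel; return argmax of log f + G
    return max(range(m),
               key=lambda i: math.log(f[i]) - math.log(-math.log(random.random())))

n = 50_000
for sampler in (accept_reject, gumbel_max):
    counts = [0] * m
    for _ in range(n):
        counts[sampler()] += 1
    print([round(c / n, 3) for c in counts])   # both approach target
```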
The central idea is to define a specific Poisson process, called an exponential race, which models a sequence of independent samples arriving from some distribution. Then we identify two operations, corresponding to accept-reject and the Gumbel-Max trick, which modify the arrival distribution of exponential races. In this view a Monte Carlo method is an algorithm that simulates the first arrival of an exponential race, and many existing algorithms fall into this framework. Section \ref{sec:pp} reviews Poisson processes and studies the effect of operations on their points. Section \ref{sec:er} introduces exponential races and studies the accept-reject and perturb operations. In Section \ref{sec:gup} we construct Gumbel processes from exponential races and study the generalized Gumbel-Max trick. In Section \ref{sec:alg} we analyze A* sampling and OS* \citep{dymetman2012osstar} and show how they use perturb and accept-reject operations, respectively, to simulate the first arrival of an exponential race. All of our Poisson process results are either known or elementary extensions; the correctness and behaviour of the Monte Carlo methods that we study have all been established elsewhere. Our contribution is in identifying a theory that unifies two distinct literatures and in providing a toolset for analyzing and developing Monte Carlo methods. \section{Poisson processes} \label{sec:pp} \subsection{Definition and properties} A Poisson process is a random countable subset $\Pi \subseteq \mathbb{R}^n$. Many natural processes result in a random placement of points: the stars in the night sky, cities on a map, or raisins in oatmeal cookies. A good generic mental model to have is the plane $\mathbb{R}^2$ and pinpricks of light for all points in $\Pi$. Unlike most natural processes, a Poisson process is distinguished by its complete randomness; the number of points in disjoint subsets are independent random variables, see \Fig{fig:pp}. 
In this section we review general Poisson process theory, culminating in two theorems which describe how Poisson processes behave under the generic operations of removing or relocating their points. In the next section we restrict our view to a specific Poisson process and two specific operations, which correspond to accept-reject and Gumbel-Max. Our study is situated in $\mathbb{R}^n$ for intuition, but these results generalize naturally; for more information, the ideas of this section are adapted from the general treatment in \cite{kingman1992poisson}. Readers familiar with that treatment can safely skip this section. \begin{figure}[t] \begin{center} \includegraphics{figures/pp.pdf} \caption{The set of $\ast$ is a realization of a Poisson process in the plane. Counts in sets $A, B, C$ are marginally Poisson and are independent for disjoint sets.} \label{fig:pp} \end{center} \end{figure} To identify a realization of a random countable set $\Pi \subseteq \mathbb{R}^n$, we use counts of points in subsets $B \subseteq \mathbb{R}^n$, \begin{align*} N(B) = \# (\Pi \cap B), \end{align*} where $N(B) = \infty$ if $\Pi \cap B$ is infinite, see \Fig{fig:pp} again. Counts are nonnegative and additive, so for any realization of $\Pi$, $N(B)$ satisfies \begin{enumerate} \item\label{itm:nonneg} (\emph{Nonnegative}) $N(B) \geq 0$, \item\label{itm:countadd} (\emph{Countably additive}) For disjoint $B_i \subseteq \mathbb{R}^n$, $N(\cup_{i=1}^{\infty} B_i) = \sum_{i=1}^{\infty} N(B_i).$ \end{enumerate} Set functions from subsets of $\mathbb{R}^n$ to the extended reals $\mathbb{R} \cup \{\infty, -\infty\}$ that are nonnegative and countably additive are called measures. Measure theory is a natural backdrop for the study of Poisson processes, so we briefly mention some basic concepts. In general, measures $\mu$ assign real numbers to subsets with the same consistency that we intuitively expect from measuring lengths or volumes in space. If $\mu(\mathbb{R}^n) = 1$, then $\mu$ is a probability distribution.
Because it is not possible to define a measure consistently for all possible subsets, the subsets $B \subseteq \mathbb{R}^n$ are restricted here and throughout the chapter to be from the Borel sets, a nice measurable family of subsets. The Borel sets contain almost any set of interest, so for our purposes this is practically no restriction. Integration of some function $f : \mathbb{R}^n \to \mathbb{R}$ with respect to some measure $\mu$ naturally extends Riemann integration, which we can think about intuitively as the area under the graph of $f(x)$ weighted by the instantaneous measure $\mu(dx)$. When a measure is equal to the integral of a nonnegative function $f: \mathbb{R}^n \to \mathbb{R}^{\geq 0}$ with respect to $\mu$ over every subset, we say $f$ is its \emph{density} with respect to $\mu$. The Poisson process receives its name from the marginal distribution of counts $N(B)$. $N(B)$ is Poisson distributed on the nonnegative integers, parameterized by a rate that is also its expected value. \begin{definition}[Poisson random variable] $N$ is a Poisson distributed random variable on $k \in \{0, 1, \ldots\}$ with nonnegative rate $\lambda \in \mathbb{R}^{\geq 0}$ if \begin{align*} \mathbb{P}(N = k) = \exp(-\lambda) \frac{\lambda^k}{k!}. \end{align*} This is denoted $N \sim \mathrm{Poisson}(\lambda)$. $N \sim \mathrm{Poisson}(0)$ and $N \sim \mathrm{Poisson}(\infty)$ are the random variables whose values are $0$ and $\infty$ with probability one. If $N \sim \mathrm{Poisson}(\lambda)$, then $\mathbb{E}(N) = \lambda$. \end{definition} \noindent The Poisson distribution is particularly suited to modelling random counts because it is countably additive in the rate. \begin{lemma} \label{lem:po} If $N_i \sim \mathrm{Poisson}(\lambda_i)$ are independent with $\lambda_i \in \mathbb{R}^{\geq 0}$, then \begin{align*} \sum\nolimits_{i=1}^{\infty} N_i \sim \mathrm{Poisson}\left(\sum\nolimits_{i=1}^{\infty} \lambda_i\right). \end{align*} \end{lemma} \begin{proof} \citep{kingman1992poisson}.
Let $S_m = \sum_{i=1}^m N_i$ and assume $\lambda_i > 0$ without loss of generality. Then for $S_2$, \begin{align*} \mathbb{P}(S_2 = k) &= \sum_{r=0}^k \mathbb{P}(N_1 = r, N_2 = k-r)\\ &= \sum_{r=0}^k \exp(-\lambda_1)\frac{\lambda_1^r}{r!} \exp(-\lambda_2)\frac{\lambda_2^{k-r}}{(k-r)!}\\ &= \frac{\exp(-\lambda_1 - \lambda_2)}{k!} \sum_{r=0}^k {k \choose r}\lambda_1^r\lambda_2^{k-r}\\ &= \frac{\exp(-\lambda_1 - \lambda_2)}{k!} (\lambda_1 + \lambda_2)^k. \end{align*} By induction Lemma \ref{lem:po} also holds for $S_m$. For infinite sums the events $\{S_m \leq k\}$ are nonincreasing. Thus, \begin{align*} \mathbb{P}(S_{\infty} \leq k) &= \lim_{m \to \infty} \mathbb{P}(S_m \leq k) = \sum_{j=0}^k \lim_{m \to \infty} \exp\left(- \sum\nolimits_{i=1}^m \lambda_i\right) \frac{(\sum\nolimits_{i=1}^m \lambda_i)^j}{j!}. \end{align*} \end{proof} \noindent Because expectations distribute over infinite sums of positive random variables, the Poisson rate $\mu(B) = \mathbb{E}(N(B))$ must also be a measure. Instead of starting with a definition of Poisson processes, we work backwards from an algorithmic construction. \Algo{alg:pp} is a procedure that realizes a Poisson process $\Pi$ for a specified mean measure $\mu$. \Algo{alg:pp} iterates through a partition $\{B_i\}_{i=1}^{\infty}$ of $\mathbb{R}^n$. For each $B_i$ it first decides the number of points to place in $\Pi$ by sampling a Poisson with rate given by the measure, $N_i \sim \mathrm{Poisson}(\mu(B_i))$. Then, it places $N_i$ points by sampling independently from the probability distribution proportional to $\mu$ restricted to $B_i$. Normally, $X \sim \mathcal{D}$ is just a statement about the marginal distribution of $X$. In the context of an algorithm box we additionally assume that it implies independence from all other random variables. We should note that \Algo{alg:pp} operates on volumes and samples from $\mu$. This is not an issue if we think of it as a mathematical construction.
It would be an issue if we set out to simulate $\Pi$ on a computer. \Algo{alg:pp} will occasionally have pathological behaviour unless we restrict $\mu$ further. First, we require that each subset $B_i$ of the partition has finite measure; if $\mu(B_i) = \infty$, then \Algo{alg:pp} will stall when it reaches $B_i$ and fail to visit all of $\mathbb{R}^n$. If a partition $\{B_i\}_{i=1}^{\infty}$ with $\mu(B_i) < \infty$ exists for measure $\mu$, then $\mu$ is called $\sigma$-finite. Second, we want the resulting counts $N(B_i)$ to match the numbers of points placed $N_i$. This can be ensured if all of the points $X_{ij}$ are distinct with probability one. It is enough to require that $\mu(\{x\}) = 0$ for all singleton sets $\{x\}$ with $x \in \mathbb{R}^n$. This kind of measure is known as nonatomic. \begin{algorithm}[t] \caption{A Poisson process $\Pi$ with $\sigma$-finite nonatomic mean measure $\mu$} \label{alg:pp} \begin{algorithmic} \State Let $\{B_i\}_{i=1}^{\infty}$ be a partition of $\mathbb{R}^n$ with $\mu(B_i) < \infty$ \State $\Pi = \emptyset$ \For{$i=1$ to $\infty$} \State $N_i \sim \mathrm{Poisson}(\mu(B_i))$ \For{$j=1$ to $N_i$} \State $X_{ij} \sim \mu(\cdot \cap B_i)/\mu(B_i)$ \State $\Pi = \Pi \cup \{X_{ij}\}$ \EndFor \EndFor \end{algorithmic} \end{algorithm} The crucial property of the sets $\Pi$ produced by \Algo{alg:pp} is that the numbers of points $N(A_j)$ that fall in \emph{any} finite collection $\{A_j\}_{j=1}^m$ of disjoint sets are independent Poisson random variables. Clearly, the counts $N(B_i)$ for the partitioning sets of \Algo{alg:pp} are independent Poissons; it is not obvious that this is also true for other collections of disjoint sets. To show this we study the limiting behaviour of $N(B)$ by counting the points placed in $B_i \cap B$ and summing as \Algo{alg:pp} iterates over $\mathbb{R}^n$.
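A minimal simulation of \Algo{alg:pp} helps build intuition. The sketch below is an illustrative specialization, not part of the chapter's formal development: the mean measure $\mu$ is taken to be $10$ times Lebesgue measure on $[0,1]$, partitioned into four cells, and the expected total count is checked against $\mu([0,1]) = 10$.

```python
import math, random

random.seed(1)

def poisson(lam):
    """Sample Poisson(lam) by Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def poisson_process(rate, partition):
    """Algorithm 1 for mu = rate * Lebesgue on [0, 1]: a Poisson count
    per partition cell, then i.i.d. uniform locations within the cell."""
    points = []
    for a, b in partition:
        for _ in range(poisson(rate * (b - a))):
            points.append(random.uniform(a, b))
    return points

partition = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
trials = 4000
mean_total = sum(len(poisson_process(10.0, partition)) for _ in range(trials)) / trials
print(mean_total)   # close to mu([0, 1]) = 10
```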
\begin{theorem} \label{thm:pp} Let $\Pi \subseteq \mathbb{R}^n$ be the subset realized by \Algo{alg:pp} with $\sigma$-finite nonatomic mean measure $\mu$ and $A_1, \ldots, A_m \subseteq \mathbb{R}^n$ disjoint. $N(B) = \#(\Pi \cap B)$ for $B \subseteq \mathbb{R}^n$ satisfies \begin{enumerate} \item $N(A_j) \sim \mathrm{Poisson}(\mu(A_j))$, \item $N(A_j)$ are independent. \end{enumerate} \end{theorem} \begin{proof} Adapted from \cite{kingman1992poisson}. Let $\{B_i\}_{i=1}^{\infty}$ be the partition of \Algo{alg:pp} and assume $\mu(B_i) > 0$ without loss of generality. With probability one, \begin{align*} N(A_j) = N(\cup_{i=1}^{\infty} B_i \cap A_j) = \sum_{i=1}^{\infty} N(B_i \cap A_j). \end{align*} Consider the array of $N(B_i \cap A_j)$ for $i \in \{1, 2, \ldots\}$ and $j \in \{1, \ldots, m\}$. The rows are clearly independent. Thus, by Lemma \ref{lem:po} it is enough to show \begin{enumerate} \item $N(B_i \cap A_j) \sim \mathrm{Poisson}(\mu(B_i \cap A_j))$, \item $N(B_i \cap A_j)$ for $j \in \{1, \ldots, m\}$ are independent. \end{enumerate} Let $A_0$ be the complement of $\cup_{j=1}^m A_j$. Because $\mu$ is nonatomic, each point is distinct with probability one. Thus, \begin{align*} \mathbb{P}(N(B_i \cap A_0) = k_0, \ldots, N(B_i \cap A_m) = k_m \, &| \, N_i = k) =\\ &\frac{k!}{k_0 ! \ldots k_m!} \prod_{j=0}^m \frac{\mu(B_i \cap A_j)^{k_j}}{\mu(B_i)^{k_j}} \end{align*} with $k_0 = k - \sum_{j=1}^m k_j$. Now, \begin{align*} \mathbb{P}(N(B_i \cap A_1) &= k_1, \ldots, N(B_i \cap A_m) = k_m) \\ &= \sum_{k = \sum_j k_j}^{\infty} \exp(-\mu(B_i)) \frac{\mu(B_i)^k}{k!} \frac{k!}{k_0 ! \ldots k_m!} \prod_{j=0}^m \frac{\mu(B_i \cap A_j)^{k_j}}{\mu(B_i)^{k_j}}\\ &= \sum_{k_0 = 0}^{\infty} \prod_{j=0}^m \exp(-\mu(B_i \cap A_j)) \frac{\mu(B_i \cap A_j)^{k_j}}{k_j!}\\ &= \prod_{j=1}^m \exp(-\mu(B_i \cap A_j)) \frac{\mu(B_i \cap A_j)^{k_j}}{k_j!}, \end{align*} where the second equality uses $\mu(B_i) = \sum_{j=0}^m \mu(B_i \cap A_j)$ and the third sums the Poisson mass in $k_0$ to one. This finishes the proof. \end{proof} Notice that the choice of partition in \Algo{alg:pp} has no distinguishable effect on the eventual counts $N(B)$.
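This remark can be checked numerically: running the construction with two different partitions of $[0,1)$ and counting points in a set $A$ that straddles cell boundaries gives the same Poisson law. The intensity $4$, the set $A = [0.25, 0.75)$, and both partitions are our choices for illustration:

```python
import math
import random

def sample_poisson(rng, rate):
    # Knuth's multiplicative method for small rates
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def count_in_set(rng, edges, intensity, a, b):
    """Run the construction for intensity * Lebesgue on [0, 1) with the
    partition given by `edges` and return N([a, b))."""
    n_ab = 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        for _ in range(sample_poisson(rng, intensity * (hi - lo))):
            if a <= lo + rng.random() * (hi - lo) < b:
                n_ab += 1
    return n_ab

rng = random.Random(0)
coarse, fine = [0.0, 0.5, 1.0], [0.0, 0.25, 0.5, 0.75, 1.0]
# N([0.25, 0.75)) should be Poisson(4 * 0.5 = 2) under either partition
c_coarse = [count_in_set(rng, coarse, 4.0, 0.25, 0.75) for _ in range(4000)]
c_fine = [count_in_set(rng, fine, 4.0, 0.25, 0.75) for _ in range(4000)]
```

Both empirical means and variances should be close to $2$, the mean-variance signature of a Poisson random variable.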
In fact there may be entirely different algorithms that realize random subsets indistinguishable from $\Pi$. This motivates the standard definition for deciding whether a random process is Poisson. \begin{definition}[Poisson process] \label{def:pp} Let $\mu$ be a $\sigma$-finite nonatomic measure on $\mathbb{R}^n$. A random countable subset $\Pi \subseteq \mathbb{R}^n$ is a Poisson process with mean measure $\mu$ if \begin{enumerate} \item For $B \subseteq \mathbb{R}^n$, $N(B) \sim \mathrm{Poisson}(\mu(B))$. \item For $A_1, \ldots A_m \subseteq \mathbb{R}^n$ disjoint, $N(A_j)$ are independent. \end{enumerate} \end{definition} \noindent \Algo{alg:pp} together with Theorem \ref{thm:pp} is an existence proof for Poisson processes. Poisson processes are generic models for procedures that place points completely randomly in space. In later sections we specialize them to model the sequence of points considered by Monte Carlo methods. \subsection{Mapping and thinning a Poisson process} We are ultimately interested in understanding how the operations of accept-reject and the Gumbel-Max trick modify distributions. They are special cases of more generic operations on the points $X \in \Pi$ of a Poisson process, which modify its measure. Accept-reject corresponds to the stochastic removal of points based on their location. The Gumbel-Max trick corresponds to the deterministic relocation of points. Here we study those operations in some generality. The stochastic removal of points $X \in \Pi$ is called thinning. To count the number of points that remain after thinning, we need their joint distribution before thinning. If we restrict our attention to one of the subsets $B_i$ of the partition in \Algo{alg:pp}, then the distribution is clear: conditioned on $N(B_i) = k$, each point is distributed identically and independently (i.i.d.) as $\mu$ restricted to $B_i$. This property turns out to be true for any subset $B \subseteq \mathbb{R}^n$ of finite measure. 
\begin{lemma} \label{lem:bernoulli} Let $\Pi \subseteq \mathbb{R}^n$ be a Poisson process with $\sigma$-finite nonatomic mean measure $\mu$ and $B \subseteq \mathbb{R}^n$ with $0 < \mu(B) < \infty$. Given $N(B) = k$, the points $X_i \in \Pi \cap B$ for $i \in \{1, \ldots, k\}$ are i.i.d. as \begin{displaymath} X_i \, | \, \{N(B) = k\} \sim \mu(\cdot \cap B)/\mu(B). \end{displaymath} \end{lemma} \begin{proof} The proof is uninformative, so we leave it to the Appendix. \end{proof} \noindent Intuitively, this result ought to be true, because we could have realized $\Pi$ via \Algo{alg:pp} with $B$ as one of the partitioning sets. Now suppose we remove points $X \in \Pi$ independently with probability $1-\rho(X)$, where $\rho : \mathbb{R}^n \to [0, 1]$ is some integrable function. For $B$ with finite measure, given $N(B)$ the probability of keeping $X \in \Pi \cap B$ is \begin{align} \label{eq:keepprob} \mathbb{P}(\text{keep } X\, | \, N(B) = k) = \mathbb{E} (\rho(X) \, | \, N(B) = k) = \int_{B} \frac{\rho(x)}{\mu(B)} \mu(dx). \end{align} By summing over the value of $N(B)$, we can derive the marginal distribution over the number of remaining points. This is the basic strategy of the Thinning Theorem. \begin{theorem}[Thinning] \label{thm:thin} Let $\Pi \subseteq \mathbb{R}^n$ be a Poisson process with $\sigma$-finite nonatomic mean measure $\mu$ and $S(x) \sim \mathrm{Bernoulli}(\rho(x))$ an independent Bernoulli random variable for each $x \in \mathbb{R}^n$ with integrable $\rho: \mathbb{R}^n \to [0,1]$, then \begin{align} \label{eq:thindef} \mathrm{thin}(\Pi, S) = \{X : X \in \Pi \text{ and } S(X) = 1\} \end{align} is a Poisson process with mean measure \begin{displaymath} \mu^*(B) = \int_{B} \rho(x) \mu(dx). \end{displaymath} \end{theorem} \begin{proof} Originally from \cite{thinning}. Let $B \subseteq \mathbb{R}^n$.
Define, \begin{align*} N^*(B) = \#(\mathrm{thin}(\Pi, S) \cap B) \end{align*} $N^*(B)$ clearly satisfies the independence property and the result is trivial for $\mu(B) = 0$. For $0 < \mu(B) < \infty$, \begin{align*} \mathbb{P}(N^*(B) = k) &= \sum_{j=k}^{\infty} \mathbb{P}(N(B) = j)\mathbb{P}(k \text{ of } S(X_i) = 1 | N(B) = j). \intertext{Let $\bar{\mu}^*(B) = \mu(B) - \mu^*(B)$. By (\ref{eq:keepprob}),} &= \sum_{j=k}^{\infty} \exp(-\mu(B)) \frac{\mu(B)^j}{j!} {j \choose k} \frac{\mu^*(B)^k}{\mu(B)^k} \frac{\bar{\mu}^*(B)^{j-k}}{\mu(B)^{j-k}}\\ &= \exp(-\mu^*(B))\frac{\mu^*(B)^k}{k!} \sum_{j=k}^{\infty} \exp(-\bar{\mu}^*(B)) \frac{\bar{\mu}^*(B)^{j-k}}{(j-k)!} \\ &= \exp(-\mu^*(B))\frac{\mu^*(B)^k}{k!}. \end{align*} For $\mu(B) = \infty$, partition $B$ into subsets with finite measure. The countable additivity of integrals of nonnegative functions and of Poisson random variables (Lemma \ref{lem:po}) finishes the proof. \end{proof} A measurable function $h : \mathbb{R}^n \to \mathbb{R}^n$ that relocates points $X \in \Pi$ is easy to analyze if it is 1-1, because it will not relocate two distinct points to the same place. The key insight is that we can count the points relocated to $B \subseteq \mathbb{R}^n$ by counting in the preimage $h^{-1}(B)$; the so-called Mapping Theorem. \begin{theorem}[Mapping] \label{thm:map} Let $\Pi \subseteq \mathbb{R}^n$ be a Poisson process with $\sigma$-finite nonatomic mean measure $\mu$ and $h : \mathbb{R}^n \rightarrow \mathbb{R}^n$ a measurable 1-1 function, then \begin{align*} h(\Pi) = \{h(X) : X \in \Pi\} \end{align*} is a Poisson process with mean measure \begin{displaymath} \mu^*(B) = \mu(h^{-1}(B)) \end{displaymath} \end{theorem} \begin{proof} Adapted from \cite{kingman1992poisson}. $h$ is 1-1, therefore \begin{displaymath} \# (\{h(X) : X \in \Pi\} \cap B) = \# \{X \in \Pi: X \in h^{-1}(B)\} \sim \mathrm{Poisson}(\mu(h^{-1}(B))). \end{displaymath} Pre-images preserve disjointness, so the independence property is guaranteed. 
1-1 functions map partitions of the domain to partitions of the range, so $\mu^*$ is still $\sigma$-finite. \end{proof} \section{Exponential races} \label{sec:er} \subsection{Definition and first arrivals distribution} In this section we specialize the Poisson process to model the sequence of points considered by accept-reject and the Gumbel-Max trick. We call the model an exponential race as a reference to a classical example. An exponential race (occasionally race for short) is a Poisson process in $\mathbb{R}^+ \times \mathbb{R}^n$, which we interpret as points in $\mathbb{R}^n$ ordered by an arrival time in the positive reals $\mathbb{R}^+$. The ordered points of an exponential race have a particularly simple distribution; the location in $\mathbb{R}^n$ of each point is i.i.d. according to some arrival distribution and the rate at which points arrive in time depends stochastically on the normalization constant of that arrival distribution. The Thinning and Mapping Theorems of Poisson processes have corresponding lemmas for exponential races, which describe operations that modify the arrival distribution of an exponential race. The ultimate value of this model is that a variety of apparently disparate Monte Carlo methods can be interpreted as procedures that simulate an exponential race. In Section \ref{sec:alg} we present Monte Carlo methods which produce samples from intractable distributions by operating on the simulation of an exponential race with a tractable distribution. In this section we define an exponential race for an arbitrary finite nonzero measure $P$, discuss strategies for simulating exponential races when $P$ is tractable, and derive two operations that modify the arrival distribution of exponential races. \begin{figure}[t] \begin{center} \includegraphics{figures/er.pdf} \caption{The realization of an exponential race with points arriving at $p_j \in \mathbb{R}^2$. 
The left hand plot shows the location of arrivals in the plane $\mathbb{R}^2$ and the first arrival at time $t$ at $p_3$. The right hand plot shows future arrival times at the four points.} \label{fig:er} \end{center} \end{figure} For motivation we review the traditional exponential race example (see \citeauthor{durrett2012essentials}, \citeyear{durrett2012essentials}). Imagine instantaneous flashes of light arriving in time at $m$ distinct points $p_j$ scattered in $\mathbb{R}^2$. Suppose the arrival times of the flashes at each $p_j$ are determined by independent Poisson processes $\Pi_j \subseteq \mathbb{R}^+$ with mean measure $\lambda_j((0, t]) = \lambda_j t$ and $\lambda_j > 0$, see \Fig{fig:er}. Which point will see the first flash of light, and how long do we need to wait? The first arrival at $p_j$ is after time $t$ iff $\Pi_j \cap (0, t]$ is empty, \begin{align} \label{eq:arrival} \mathbb{P}(T_j > t) = \mathbb{P}(\# (\Pi_j \cap (0, t]) = 0) = \exp(-\lambda_j t). \end{align} Equation (\ref{eq:arrival}) is the complementary cumulative distribution function of an exponential random variable, which we briefly review. \begin{definition}[Exponential random variable] $E$ is an exponential random variable distributed on positive $t \in \mathbb{R}^+$ with nonnegative rate $\lambda \in \mathbb{R}^{\geq 0}$ if \begin{align} \mathbb{P}(E > t) = \exp(-\lambda t). \end{align} This is denoted $E \sim \mathrm{Exp}(\lambda)$ and $E \sim \mathrm{Exp}(0)$ is the random variable whose value is $\infty$ with probability one. If $E \sim \mathrm{Exp}(1)$, then $E/\lambda \sim \mathrm{Exp}(\lambda)$. \end{definition} \noindent Thus, the location and time of the first arrival are determined by the minimum of $m$ exponential random variables.
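The race among the $m$ points is easy to simulate. With rates $\lambda = (1, 2, 3)$ (our choice for illustration), the minimum should be $\mathrm{Exp}(6)$ and the winning index should be chosen with probability proportional to its rate:

```python
import math
import random

rng = random.Random(0)
rates = [1.0, 2.0, 3.0]            # lambda_j
trials = 20000

mins, argmins = [], []
for _ in range(trials):
    # E_j ~ Exp(lambda_j) via inverse CDF
    es = [-math.log(rng.random()) / lam for lam in rates]
    e_star = min(es)
    mins.append(e_star)
    argmins.append(es.index(e_star))

mean_min = sum(mins) / trials                      # should be ~ 1 / sum(rates)
win_freq = [argmins.count(k) / trials for k in range(len(rates))]
```

Empirically `mean_min` is near $1/6$ and `win_freq` is near $(1/6, 1/3, 1/2)$; checking that the winning index is independent of the winning time takes only slightly more work.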
For exponential random variables this is particularly easy to analyze; the minimum is an exponential random variable with rate $\sum_{j=1}^m \lambda_j$ and it is achieved at the $j$th variable with probability proportional to the rate $\lambda_j$. Surprisingly, these values are independent. \begin{lemma} \label{lem:exp} Let $E_j \sim \mathrm{Exp}(\lambda_j)$ independent with nonnegative $\lambda_j \in \mathbb{R}^{\geq 0}$. If \begin{displaymath} E^* = \min_{1 \leq j \leq m} E_j \text{ and } J^* = \argmin_{1 \leq j \leq m} E_j, \end{displaymath} and at least one $\lambda_j > 0$, then \begin{enumerate} \item The density of $E_j$ with $\lambda_j > 0$ is $\lambda_j \exp(-\lambda_j t )$ for $t \in \mathbb{R}^+$, \item $E^* \sim \mathrm{Exp}(\sum\nolimits_{j=1}^m \lambda_j)$, \item $\mathbb{P}(J^* = k) \propto \lambda_k$, \item $E^*$ is independent of $J^*$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item The derivative of $1 - \exp(-\lambda_j t)$ is $\lambda_j \exp(-\lambda_j t )$. \end{enumerate} \noindent For 2., 3., and 4., note that with probability one the $E_j$ are distinct, so \begin{align*} \mathbb{P}(J^* =k, E^* > t) &= \mathbb{P}(\cap_{j \neq k}\{E_j > E_k > t\})\\ &= \int_{t}^{\infty} \lambda_k \exp(-\lambda_k x) \prod\nolimits_{j \neq k} \exp(-\lambda_j x) \, dx\\ &= \frac{\lambda_k}{\sum\nolimits_{j=1}^m \lambda_j} \int_{t}^{\infty} (\sum\nolimits_{j=1}^m \lambda_j) \exp(-\sum\nolimits_{j=1}^m \lambda_j x) \, dx\\ &= \frac{\lambda_k}{\sum\nolimits_{j=1}^m \lambda_j} \exp(-\sum\nolimits_{j=1}^m \lambda_j t). \end{align*} This finishes the lemma. \end{proof} The extension of exponential races to arbitrary distributions on $\mathbb{R}^n$ is straightforward. The $m$ Poisson processes of the example are together a single Poisson process on $\mathbb{R}^+ \times \mathbb{R}^n$ with mean measure $(\lambda \times P)((0, t] \times B) = \sum_{j=1}^m t \lambda_j 1_{B}(p_j)$.
$\lambda \times P$ is the product measure on $\mathbb{R}^+ \times \mathbb{R}^n$, where each is respectively equipped with $\lambda((0, t]) = t$ and $P(B) = \sum_j \lambda_j 1_{B}(p_j)$. Extending this idea to an arbitrary finite measure $P$ (not just the discrete measures) is the key idea behind exponential races. Notice that $P$ in our example is atomic, which is fine, because the product measure $\lambda \times P$ is not atomic. On the other hand, we want the points arriving in $\mathbb{R}^n$ to correspond to the probability distribution $P(\cdot)/P(\mathbb{R}^n)$, so we will require that $P$ is finite, $P(\mathbb{R}^n) < \infty$, and nonzero, $0 < P(\mathbb{R}^n)$. Also, in contrast to Poisson processes, exponential races have a natural ordering in time. \begin{definition}[Exponential race] Let $P$ be a finite nonzero measure on $\mathbb{R}^n$. A random countable subset $R \subseteq \mathbb{R}^+ \times \mathbb{R}^n$ is an exponential race with measure $P$ if the following hold \begin{enumerate} \item $R$ is a Poisson process with mean measure $\lambda \times P$. \item $R$ is totally ordered by time, the first coordinate. \end{enumerate} If $R = \{(T_i, X_i)\}_{i=1}^{\infty}$, then we assume the enumeration corresponds to the ordering so that $i < j$ implies $T_i < T_j$. \end{definition} We can realize an exponential race with a slight modification of \Algo{alg:pp}; use the partition of rectangles $B_i = (i-1, i] \times \mathbb{R}^n$, and sort points by their time variable. This is not the most direct characterization, so instead we derive the joint distribution of the first $m$ ordered points in Theorem \ref{thm:er}. The distribution of the countably infinite set $R$ is completely described by the joint distribution of the first $m$ points for all finite $m$. 
The proof of Theorem \ref{thm:er} shows that the locations $X_i$ are independently distributed as $P(\cdot)/P(\mathbb{R}^n)$ and the interarrival times $T_i - T_{i-1}$ are independent and exponentially distributed with rate $P(\mathbb{R}^n)$. This theorem is the cornerstone of this chapter, because it suggests a strategy for proving the correctness of Monte Carlo methods; if we can prove that the output of an algorithm $(T, X)$ is the first arrival of an exponential race with measure $P$, then Theorem \ref{thm:er} guarantees that the location $X$ is a sample from $P(\cdot)/P(\mathbb{R}^n)$. \begin{theorem} \label{thm:er} Let $P$ be a finite nonzero measure on $\mathbb{R}^n$, $X_i \sim P(\cdot)/P(\mathbb{R}^n)$ independent, and $E_i \sim \mathrm{Exp}(P(\mathbb{R}^n))$ independent, then the first $m$ points $\{(T_i, X_i)\}_{i=1}^m$ of any exponential race $R \subseteq \mathbb{R}^+ \times \mathbb{R}^n$ with measure $P$ have the same joint distribution as \begin{align*} \{(\sum\nolimits_{j=1}^i E_j, X_i)\}_{i=1}^m. \end{align*} \end{theorem} \begin{proof} Let $T(t, B)$ be the time of the first arrival in $B$ after time $t \geq 0$, \begin{align} \label{eq:arrivaltime} T(t, B) = \min \{T_i : (T_i, X_i) \in R \cap (t, \infty) \times B\}. \end{align} $R \cap ((t, s + t] \times B)$ is finite with probability one for all $s > 0$, so (\ref{eq:arrivaltime}) is well defined. $T(t, B) - t$ is an exponential random variable, because \begin{align*} \mathbb{P}(T(t, B) - t > s) = \mathbb{P}(N((t, s+t] \times B) = 0) = \exp(- P(B)s). \end{align*} $T(t, B)$ and $T(t, B^c)$ are independent, by Poisson process independence. We proceed by induction. The event $\{T_1 > s, X_1 \in B\}$ is equivalent to $\{T(0, B^c) > T(0, B) > s\}$. $P(B) > 0$ or $P(B^c) > 0$, so by Lemma \ref{lem:exp}, \begin{align*} \mathbb{P}(T_1 > s, X_1 \! \in \! B) = \mathbb{P}(T(0, B^c) > T(0, B) > s) = \exp(-s P(\mathbb{R}^n))\frac{ P(B)}{P(\mathbb{R}^n)}. \end{align*} Now, assume Theorem \ref{thm:er} holds for $k$.
The event \begin{align*} \{T_i = t_i, X_i = x_i\}_{i=1}^k \end{align*} is completely described by counts in $(0, t_k] \times \mathbb{R}^n$ and thus independent of \begin{align*} \{T(t_k, B^c) > T(t_k, B) > s + t_k\}. \end{align*} Thus, \begin{align*} \mathbb{P}(T_{k+1} - &T_k > s, X_{k+1} \! \in \! B | \{T_i = t_i, X_i = x_i\}_{i=1}^k) \\ &=\mathbb{P}(T(t_k, B^c) > T(t_k, B) > s + t_k | \{T_i = t_i, X_i = x_i\}_{i=1}^k) \\ &=\mathbb{P}(T(t_k, B^c) > T(t_k, B) > s + t_k) \\ &= \exp(-s P(\mathbb{R}^n))\frac{ P(B)}{P(\mathbb{R}^n)}, \end{align*} which concludes the proof.\end{proof} \subsection{Simulating an exponential race with a tractable measure} \label{subsec:ersim} \begin{algorithm}[b] \caption{An exponential race $R$ with finite nonzero measure $Q$} \label{alg:er} \begin{algorithmic} \State $R = \emptyset$ \State $T_0 = 0$ \For{$i=1$ to $\infty$} \State $E_i \sim \mathrm{Exp}(Q(\mathbb{R}^n))$ \State $X_{i} \sim Q(\cdot)/Q(\mathbb{R}^n)$ \State $T_i = T_{i-1} + E_i$ \State $R = R \cup \{(T_i, X_i)\}$ \EndFor \end{algorithmic} \end{algorithm} If $Q$ is a tractable finite nonzero measure on $\mathbb{R}^n$, that is, we have a procedure for computing $Q(\mathbb{R}^n)$ and sampling from $Q(\cdot)/Q(\mathbb{R}^n)$, then Theorem \ref{thm:er} suggests \Algo{alg:er} for simulating an exponential race $R$ with measure $Q$. \Algo{alg:er} simulates the points of an exponential race in order of arrival time. It does not terminate, but we can think of it as a coroutine or generator, which maintains state and returns the next arrival in $R$ each time it is invoked. As a simple example consider the uniform measure $Q((a, b]) = b-a$ on $[0,1]$. \Algo{alg:er} for this $Q$ simulates a sequence of arrivals $\{(T_i, X_i)\}_{i=1}^{\infty}$ with arrival location $X_i \sim \mathrm{Uniform}[0,1]$ and interarrival time $T_{i+1} - T_{i} \sim \mathrm{Exp}(1)$, see the left hand plot of Figure \ref{fig:eralt}.
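This uniform example can be written as a Python generator, matching the coroutine view of \Algo{alg:er}; the generator interface is our choice, and the uniform measure is the one above:

```python
import math
import random

def exponential_race(rng, total_mass, sample_location):
    """Algorithm alg:er as a generator: yields the arrivals (T_i, X_i) of
    an exponential race, given Q(R^n) and a sampler for Q(.)/Q(R^n)."""
    t = 0.0
    while True:
        t += -math.log(rng.random()) / total_mass  # E_i ~ Exp(Q(R^n))
        yield t, sample_location(rng)              # X_i ~ Q(.)/Q(R^n)

# Uniform measure Q((a, b]) = b - a on [0, 1]: Q(R^n) = 1,
# locations Uniform[0, 1], interarrival times Exp(1).
rng = random.Random(0)
race = exponential_race(rng, 1.0, lambda r: r.random())
arrivals = [next(race) for _ in range(4000)]
```

The arrival times are strictly increasing and grow at unit rate on average, while the locations are uniform on $[0,1]$.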
As with the initial discrete example, in which we constructed an exponential race from $m$ independent Poisson processes, this is not the only approach. More generally, if $\{B_i\}_{i=1}^m$ is any finite partition of $\mathbb{R}^n$ such that $Q(\cdot \cap B_i)$ is tractable, then we can simulate $R$ by simulating $m$ independent exponential races $R_i$ with measure $Q(\cdot \cap B_i)$ via \Algo{alg:er} and sorting the result $\cup_{i=1}^m R_i$. This can be accomplished lazily and efficiently with a priority queue data type, which prioritizes the races $R_i$ according to which arrives next in time. It is also possible to split the races $R_i$ online by partitioning $B_i$ and respecting the constraint imposed by the arrivals already generated in $B_i$. We highlight a particularly important variant, which features in A* sampling in Section \ref{sec:alg}. Consider an infinitely deep tree in which each node is associated with a subset $B \subseteq \mathbb{R}^n$. If the root is $\mathbb{R}^n$ and the children of each node form a partition of the parent, then we call this a space partitioning tree. We can realize an exponential race over a space partitioning tree by recursively generating arrivals $(T, X)$ at each node $B$. Each location $X$ is sampled independently from $Q(\cdot \cap B)/Q(B)$, and each time $T$ is sampled by adding an independent $\mathrm{Exp}(Q(B))$ to the parent's arrival time. The arrivals sorted by time over the realization of the tree form an exponential race. See \Fig{fig:eralt}. \begin{figure}[t] \begin{center} \includegraphics{figures/eralt.pdf} \caption{Two methods for simulating an exponential race. The left hand plot shows the first arrivals of a uniform exponential race on $[0, 1]$ simulated by \Algo{alg:er}. The right hand plot shows the first arrivals of an exponential race simulated over a space partitioning tree.
Dashed lines dominate the set in which an arrival is first.} \label{fig:eralt} \end{center} \end{figure} \subsection{Transforming an exponential race with accept-reject and perturb} Most finite nonzero measures $P$ on $\mathbb{R}^n$ are not tractable. Monte Carlo methods accomplish their goal of sampling from intractable distributions by transforming samples of tractable distributions. In this subsection we present accept-reject and perturb operations, which transform a realization of an exponential race with measure $Q$ into a realization of an exponential race with a distinct measure $P$. In practice $Q$ will be tractable and $P$ intractable, so that simulating an exponential race with an intractable measure can be accomplished by simulating the points of an exponential race with a tractable measure, for example via \Algo{alg:er}, and transforming it with accept-reject or perturb operations. The accept-reject and perturb operations are named after their respective literatures, accept-reject corresponds to rejection sampling and perturb corresponds to the Gumbel-Max trick. The correspondence between the perturb operation and the Gumbel-Max trick may not be obvious, so we discuss this in Section \ref{sec:gup}. Let $Q$ and $P$ be finite nonzero measures in $\mathbb{R}^n$. We assume that they have densities $g$ and $f$ with respect to some base measure $\mu$, \begin{align} \label{eq:density} Q(B) = \int_B g(x) \mu(dx) \qquad P(B) = \int_B f(x) \mu(dx). \end{align} We assume that $g$ and $f$ have the same support and their ratio is bounded, \begin{align} \label{eq:bounded} \mathrm{supp}(f) = \mathrm{supp}(g) \qquad \frac{f(x)}{g(x)} \leq M \text{ for all } x \in \mathrm{supp}(g) \end{align} where $\mathrm{supp}(g) = \{x \in \mathbb{R}^n : g(x) \neq 0\}$. The assumption $\mathrm{supp}(f) = \mathrm{supp}(g)$ can be softened here and throughout the chapter to $\mathrm{supp}(f) \subseteq \mathrm{supp}(g)$, but it complicates the analysis. 
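Under assumption (\ref{eq:bounded}), recall the classical rejection sampling loop, which the accept-reject operation reformulates in terms of races. The concrete target $f(x) = 2x$ on $[0,1]$, uniform proposal $g$, and bound $M = 2$ are our choices for illustration:

```python
import random

def rejection_sample(rng, f, sample_g, g, M):
    """Classical rejection sampling: propose X ~ g, accept with
    probability f(X) / (g(X) * M), where f/g <= M on supp(g)."""
    while True:
        x = sample_g(rng)
        if rng.random() < f(x) / (g(x) * M):
            return x

rng = random.Random(0)
# target density f(x) = 2x on [0, 1], proposal g = Uniform[0, 1], M = 2
draws = [rejection_sample(rng, lambda x: 2 * x, lambda r: r.random(),
                          lambda x: 1.0, 2.0) for _ in range(20000)]
```

The accepted draws follow $f$: their mean is near $\int_0^1 2x^2\,dx = 2/3$ and the mass below $1/2$ is near $1/4$.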
The accept-reject strategy is to realize more points than needed from an exponential race with measure $MQ(\cdot)$ and stochastically \emph{accept} each point arriving at $x$ with probability equal to the ratio of instantaneous rates of arrival, $f(x)/(g(x)M)$, rejecting the rest. The perturbation strategy is to realize just the points needed from an exponential race with measure $Q$, but to \emph{perturb} the arrival times according to the transformation $t \to t g(x)/f(x)$ for all points arriving at $x$. Before we present the proofs, consider the following intuition. Imagine taking a long exposure photograph of the plane as instantaneous flashes arrive according to an exponential race with measure $Q$. The rate at which points arrive will determine the intensity of a heat map with regions receiving more points brighter than those receiving fewer. Over time the relative intensities will correspond to the probability distribution proportional to $Q$. If someone were just ahead of us in time and stochastically discarded points that arrived in $B$ or delayed points in $B$ relative to points in $B^c$, then our perception of the likelihood of $B$ would change. Mired in time, we would not be able to distinguish whether points were discarded, reordered, or the true measure $Q$ was in fact different. The correctness of these operations on an exponential race can be justified as special cases of the Thinning and Mapping Theorems. \begin{lemma}[Accept-Reject] \label{lem:accept} Let $Q$ and $P$ be finite nonzero measures on $\mathbb{R}^n$ under assumptions (\ref{eq:density}) and (\ref{eq:bounded}). If $R\subseteq \mathbb{R}^+ \times \mathbb{R}^n$ is an exponential race with measure $MQ(\cdot)$ and $\mathrm{accept}(t,x) \sim \mathrm{Bernoulli}(\rho(t,x))$ is i.i.d. for all $(t, x)$ with probability \begin{align*} \rho(t,x) = \frac{f(x)}{g(x)M}, \end{align*} then $\mathrm{thin}(R, \mathrm{accept})$, from (\ref{eq:thindef}), is an exponential race with measure $P$.
\end{lemma} \begin{proof} By the Thinning Theorem, the mean measure of $\mathrm{thin}(R, \mathrm{accept})$ is \begin{align*} \iint\limits_{B} \frac{f(x) }{g(x)M} g(x)M\mu(dx) \lambda(dt) = \iint\limits_{B} f(x) \mu(dx) \lambda(dt) = (\lambda \times P)(B). \end{align*} for $B \subseteq \mathbb{R}^+ \times \mathrm{supp}(g)$. The subsampled $(T_i, X_i)$ are in order and thus an exponential race with measure $P$. \end{proof} \begin{lemma}[Perturbation] \label{lem:perturb} Let $Q$ and $P$ be finite nonzero measures on $\mathbb{R}^n$ under assumptions (\ref{eq:density}) and (\ref{eq:bounded}). If $R \subseteq \mathbb{R}^+ \times \mathbb{R}^n$ is an exponential race with measure $Q$ and \begin{align*} \mathrm{perturb}(t, x) = \left(t\frac{g(x)}{f(x)}, x\right), \end{align*} then $\mathrm{sort}(\mathrm{perturb}(R))$ is an exponential race with measure $P$ where $\mathrm{sort}$ totally orders points by the first coordinate, time. \end{lemma} \begin{proof} $\mathrm{perturb}$ is 1-1 on $\mathrm{supp}(f)$, so the Mapping Theorem applies. It is enough to check the mean measure of $\mathrm{perturb}(R)$ on subsets of the form $B = (0, s] \times A$ for $s \in \mathbb{R}^+$ and $A \subseteq \mathrm{supp}(g)$, \begin{align*} \iint\limits_{h^{-1}(B)} g(x) \lambda(dt) \mu(dx) = \int\limits_A g(x) s\frac{f(x)}{g(x)} \mu(dx) = (\lambda \times P)(B). \end{align*} Thus, sorting $\mathrm{perturb}(T_{i}, X_{i})$ forms an exponential race with measure $P$. \end{proof} \section{Gumbel processes} \label{sec:gup} \subsection{Definition and construction} The central object of the Gumbel-Max trick is a random function over a finite set whose values are Gumbel distributed. Gumbel valued functions over a finite choice set are extensively studied in random choice theory, where there is a need for a statistical model of utility (\citeauthor{yellott1977relationship}, \citeyear{yellott1977relationship} for example). 
The extension to Gumbel valued functions over continuous spaces has been explored in random choice theory \citep{malmberg2013random} and in the context of Monte Carlo simulation \citep{maddison2014astarsamp}. Following \cite{maddison2014astarsamp} we will refer to this class of Gumbel valued functions on $\mathbb{R}^n$ as Gumbel processes. Gumbel processes underpin the recent interest in perturbation based Monte Carlo methods, because their maxima are located at samples from probability distributions, see also \citep{papandreou2011perturb, tarlow2012randomized, hazan2013perturb, 2015arXiv150609039C, kim2016lprelaxsamp}. In this section we clarify the connection between Gumbel processes and our development of exponential races. We will show that the value of a Gumbel process at $x \in \mathbb{R}^n$ can be seen as the negative log transformed time of the first arrival at $x$ of some exponential race. This has the advantage of simplifying their construction and connecting the literature on the Gumbel-Max trick to our discussion. Related constructions have also been considered in the study of extremal processes \citep{resnick2013extreme}. In this subsection we define and construct Gumbel processes. In the next subsection we discuss their simulation and present a generalized Gumbel-Max trick derived from the Perturbation Lemma. The Gumbel distribution dates back to the statistical study of extrema and rare events \citep{gumbel}. The Gumbel is a member of a more general class of extreme value distributions, for which an analogue of the central limit theorem holds --- after proper renormalization the maximum of an i.i.d. sample of random variables converges to one of three possible extreme value distributions \citep{gedenko}. The Gumbel is parameterized by a location $\mu \in \mathbb{R}$.
\begin{definition}[Gumbel random variable] $G$ is a Gumbel distributed random variable on $\mathbb{R}$ with location $\mu \in \mathbb{R}$ if \begin{align*} \mathbb{P}(G \leq g) = \exp(-\exp(-g + \mu)) \end{align*} This is denoted $G \sim \mathrm{Gumbel}(\mu)$ and $G \sim \mathrm{Gumbel}(-\infty)$ is the random variable whose value is $-\infty$ with probability one. If $G \sim \mathrm{Gumbel}(0)$, then $G + \mu \sim \mathrm{Gumbel}(\mu)$. \end{definition} \noindent The Gumbel distribution has two important properties for our purposes. The distribution of the maximum of independent Gumbels is itself a Gumbel --- a property known as max-stability --- and the index of the maximum follows the Gibbs distribution: if $G(i) \sim \mathrm{Gumbel}( \mu_i)$, then \begin{align*} \max_{1 \leq i \leq m} G(i) \sim \mathrm{Gumbel}(\log \sum\limits_{i=1}^m \exp(\mu_i)) \quad \argmax_{1 \leq i \leq m} G(i) \sim \frac{\exp(\mu_i)}{\sum_{i=1}^m \exp(\mu_i)}. \end{align*} The Gumbel-Max trick of the introduction for sampling from a discrete distribution with mass function $f : \{1, \ldots, m\} \to \mathbb{R}^+$ is explained by taking $\mu_i = \log f(i)$. It is informative to understand these properties through the Gumbel's connection to the exponential distribution. \begin{lemma} \label{lem:gumbel} If $E \sim \mathrm{Exp}(\lambda)$ with nonnegative rate $\lambda \in \mathbb{R}^{\geq 0}$, then \begin{align*} -\log E \sim \mathrm{Gumbel}(\log \lambda). \end{align*} \end{lemma} \begin{proof} $\mathbb{P}(-\log E \leq g) = \mathbb{P}(E \geq \exp(-g)) = \exp(- \exp(-g + \log \lambda))$ \end{proof} \noindent Therefore the distribution of the maximum and argmaximum of Gumbels is explained by Lemma \ref{lem:exp}, because passing a maximization through $-\log$ becomes a minimization. A Gumbel process $G : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty\}$ is a Gumbel valued random function. 
Their characterizing property is that the maximal values of a Gumbel process over the subsets $B \subseteq \mathbb{R}^n$ are marginally Gumbel distributed with a location that scales logarithmically with the volume of $B$ according to some finite nonzero measure $P$, \begin{align*} \max_{x \in B} G(x) \sim \mathrm{Gumbel}(\log P(B)). \end{align*} Implicit in this claim is the assertion that the maximizations $\max_{x \in B} G(x)$ are well-defined --- the maximum exists --- for all $B \subseteq \mathbb{R}^n$. \begin{definition}[Gumbel process] \label{def:gup} Let $P$ be a finite nonzero measure on $\mathbb{R}^n$, $G: \mathbb{R}^n \to \mathbb{R}\cup\{-\infty\}$ a random function, and \begin{align} \label{eq:gupmax} G^*(B) = \max_{x \in B} G(x). \end{align} $G$ is a Gumbel process with measure $P$ if \begin{enumerate} \item For $B \subseteq \mathbb{R}^n$, $G^*(B) \sim \mathrm{Gumbel}(\log P(B))$. \item For $A_1, \ldots, A_m \subseteq \mathbb{R}^n$ disjoint, $G^*(A_i)$ are independent. \end{enumerate} \end{definition} \noindent Note that the event that $\argmax_{x \in \mathbb{R}^n} G(x)$ lands in $B \subseteq \mathbb{R}^n$ depends on which of $G^*(B)$ or $G^*(B^c)$ is larger. Following this reasoning one can show that the argmax over $\mathbb{R}^n$ is distributed as $P(\cdot)/P(\mathbb{R}^n)$. The study of Gumbel processes can proceed without reference to exponential races, as in \cite{maddison2014astarsamp}, but our construction from exponential races is a convenient shortcut that allows us to import results from Section \ref{sec:er}. Consider the function that reports the arrival time of the first arrival at $x \in \mathbb{R}^n$ for an exponential race $R$ with measure $P$, \begin{align*} T(x) = \min \{T_i : (T_i, x) \in R\}. \end{align*} This function is almost surely infinite at any fixed $x$, but for any realization of $R$ it will take on finite value at countably many points in $\mathbb{R}^n$.
Moreover, the minimum of $T(x)$ over subsets $B \subseteq \mathbb{R}^n$ is well-defined and finite for sets with positive measure $P(B) > 0$; it is exponentially distributed with rate $P(B)$. In this way we can see that $- \log T(x)$ is a Gumbel process; see Figure \ref{fig:gup}. \begin{figure}[t] \begin{center} \includegraphics{figures/gup.pdf} \caption{Constructing a uniform Gumbel process $G : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty\}$ on $[0,1]$ with an exponential race. The left hand plot shows the first arrivals $\ast$ of a uniform exponential race $R$. The right hand plot shows $G(x)$ set to $-\log$ the time $T(x)$ of the first arrival at $x$. The graph of $G(x)$ extends downwards to $-\infty$ taking on finite value at all points in $[0, 1]$ that have arrivals and $-\infty$ for all points with no arrivals.} \label{fig:gup} \end{center} \end{figure} \begin{theorem} Let $R \subseteq \mathbb{R}^+ \times \mathbb{R}^n$ be an exponential race with measure $P$. \begin{align} \label{eq:gupcon} G(x) = -\log \min \{T_i : (T_i, x) \in R\} \end{align} is a Gumbel process with measure $P$. \end{theorem} \begin{proof} First, for $x\in \mathbb{R}^n$ \begin{align*} \min \{T_i : (T_i, x) \in R\} = T(0, \{x\}), \end{align*} where $T(0, B)$ is the first arrival time in subset $B \subseteq \mathbb{R}^n$ defined in (\ref{eq:arrivaltime}) from Theorem \ref{thm:er}. Thus $G^*(B)$ of (\ref{eq:gupmax}) is well-defined, because \begin{align*} G^*(B) = \max_{x \in B} - \log \min \{T_i : (T_i, x) \in R\} = -\log T(0, B). \end{align*} $G^*(B)$ inherits the independence properties from Poisson process independence. Finally, Lemma \ref{lem:gumbel} gives us the marginal distribution of $G^*(B)$.
\end{proof} \subsection{Simulating a Gumbel process and the Gumbel-Max trick} Gumbel processes are relevant to Monte Carlo simulation in the same sense that we motivated exponential races --- if we can simulate the maximum value of a Gumbel process with measure $P$, then its location is a sample from the distribution $P(\cdot)/P(\mathbb{R}^n)$. \cite{maddison2014astarsamp} gave an algorithm for simulating Gumbel processes with tractable measures and a generalized Gumbel-Max trick for transforming their measure. We present those results here, deriving them from our results for exponential races. \begin{algorithm}[b] \caption{A Gumbel process with finite measure $Q$}\label{alg:gup} \begin{algorithmic} \State Initialize $G(x) = -\infty$ for all $x \in \mathbb{R}^n$. \State $(\Omega_1, G_0, i) = (\mathbb{R}^n, \infty, 1)$ \While{$Q(\Omega_i) > 0$} \State $G_i \sim \mathrm{TruncGumbel}(\log Q(\Omega_i), G_{i-1})$ \State $X_i \sim Q(\cdot \cap \Omega_i)/Q(\Omega_i)$ \State $G(X_i) = G_i$ \% assign $G(x)$ at $X_i$ to $G_i$ \State $\Omega_{i+1} = \Omega_{i} - \{X_i\}$ \State $i = i + 1$ \EndWhile \end{algorithmic} \end{algorithm} The Gumbel process $G$ from construction (\ref{eq:gupcon}) has value $-\infty$ everywhere except at the countably many arrival locations of an exponential race. Therefore, for tractable measures $Q$ we could adapt \Algo{alg:er} for exponential races to simulate $G(x)$. The idea is to initialize $G(x) = -\infty$ everywhere and iterate through the points $(T_i, X_i)$ of an exponential race $R$ setting $G(X_i) = - \log T_i$. To avoid reassigning values of $G(x)$ we refine space as in Section \ref{subsec:ersim} by removing the locations generated so far. \Algo{alg:gup} implements this procedure, although it is superficially different from our description.
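As a concrete sketch (ours; assuming a finite discrete measure $Q$ given as a dictionary of atom masses), the description above can be implemented directly from the exponential race construction (\ref{eq:gupcon}), using partial sums of exponentials for the arrival times:

```python
import math, random

random.seed(2)

def gumbel_process_discrete(q):
    """Simulate a Gumbel process for a finite discrete measure q (atom -> mass)
    by iterating through an exponential race: G(X_i) = -log T_i, where T_i are
    the arrival times and locations are drawn without replacement."""
    g = {}
    omega = dict(q)                    # remaining atoms, Omega_i
    t = 0.0
    while omega:
        total = sum(omega.values())
        t += random.expovariate(total)  # next arrival time of the race
        # the arrival lands at atom x with probability q(x)/total
        r, acc = random.random() * total, 0.0
        for x, m in omega.items():
            acc += m
            if r <= acc:
                break
        g[x] = -math.log(t)             # first arrival at x gives its maximum
        del omega[x]                    # refine: remove x from the space
    return g

q = {"a": 1.0, "b": 2.0, "c": 0.5}
g = gumbel_process_discrete(q)
assert set(g) == set(q)
# values are assigned in arrival order, so they decrease over iterations
vals = list(g.values())
assert all(vals[i] > vals[i + 1] for i in range(len(vals) - 1))
```

The decreasing sequence of assigned values is exactly the chain that \Algo{alg:gup} produces with truncated Gumbels.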
In particular the value $G(X_i)$ is instead set to a truncated Gumbel $G_i \sim \mathrm{TruncGumbel}(\log Q(\Omega_i), G_{i-1})$, a Gumbel random variable with location $\log Q(\Omega_i)$ whose domain is truncated to $(-\infty, G_{i-1}]$. The connection to \Algo{alg:er} can be derived by decomposing the arrival times $T_i = \sum_{j=1}^i E_j$ for $E_j \sim \mathrm{Exp}(Q(\Omega_j))$ and then considering the joint distribution of $G_i = - \log(\sum_{j=1}^i E_j)$. A bit of algebraic manipulation will reveal that \begin{align*} G_i \, | \, G_{i-1} \sim \mathrm{TruncGumbel}(\log Q(\Omega_i), G_{i-1}) \end{align*} Thus, translating between procedures for simulating Gumbel processes and procedures for simulating exponential races is as simple as replacing chains of truncated Gumbels with partial sums of exponentials. For continuous measures removing countably many points from the sample space has no effect, and in practice the removal line of \Algo{alg:gup} can be omitted. For those and many other measures \Algo{alg:gup} will not terminate; instead it iterates through the infinitely many finite values of $G(x)$ in order of their rank. For discrete measures with finite support \Algo{alg:gup} will terminate once every atom has been assigned a value. Finally, for simulating Gumbel processes with intractable measures $P$ the Perturbation Lemma of exponential races justifies a generalized Gumbel-Max trick. The basic insight is that multiplication by the ratio of densities $g(x)/f(x)$ becomes addition in log space. \begin{lemma}[Gumbel-Max trick] \label{lem:gumbelmax} Let $Q$ and $P$ be finite nonzero measures on $\mathbb{R}^n$ with densities $g$ and $f$ under assumptions (\ref{eq:density}) and (\ref{eq:bounded}). 
If $G : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty\}$ is a Gumbel process with measure $Q$, then \begin{align*} G^{\prime}(x) = \begin{cases} \log f(x) - \log g(x) + G(x) & x \in \mathrm{supp}(g)\\ - \infty & \text{ otherwise} \end{cases} \end{align*} is a Gumbel process with measure $P$. In particular for $G^* = \max_{x \in \mathbb{R}^n} G^{\prime}(x)$ and $X^* = \argmax_{x \in \mathbb{R}^n} G^{\prime}(x)$, \begin{align*} G^* \sim \mathrm{Gumbel}(\log P(\mathbb{R}^n)) \qquad X^* \sim P(\cdot)/P(\mathbb{R}^n) \end{align*} \end{lemma} \begin{proof} Arguing informally, this follows from the Perturbation Lemma applied to our construction (\ref{eq:gupcon}) of Gumbel processes. For $x \in \mathrm{supp}(g)$ \begin{align*} \log f(x) - \log g(x) + G(x) = - \log \min \{T_i g(x)/ f(x) : (T_i, x) \in R\}. \end{align*} See \cite{maddison2014astarsamp} for a formal proof. \end{proof} \begin{figure}[t] \begin{center} \includegraphics{figures/gumbelmax.pdf} \caption{A continuous Gumbel-Max trick. The left hand plot shows the maximal values of a uniform Gumbel process $G(x)$ on $[0,1]$. The right hand plot shows the result of perturbing $\log f(x)$ with $G(x)$. Notice that the ordering of values changes, and $X^*$ is now the location of the maximum $G^* = \max_x \log f(x) + G(x)$. Therefore, $X^*$ is a sample from the distribution with density proportional to $f(x)$.} \label{fig:contgumbelmax} \end{center} \end{figure} \noindent When $Q$ is the counting measure on $\{1, \ldots, m\}$, Lemma \ref{lem:gumbelmax} exactly describes the Gumbel-Max trick of the introduction. This brings the connection between accept-reject and the Gumbel-Max trick full circle. A Gumbel process is not profoundly different from an exponential race, but the difference of perspective --- a function as opposed to a random set --- can be valuable. In particular, consider the following generalization of a result from Hazan and Jaakkola of this book.
Let $G : \mathbb{R}^n \to \mathbb{R} \cup \{-\infty\}$ be a Gumbel process with measure $P$ whose density with respect to $\mu$ is $f$. If $G^* = \max_{x \in \mathbb{R}^n} G(x)$ and $X^*= \argmax_{x \in \mathbb{R}^n} G(x)$, then \begin{align*} \mathbb{E}(G^*) = \log P(\mathbb{R}^n) + \gamma \qquad \mathbb{E}(- \log f(X^*) + G^*) = H(f) + \gamma, \end{align*} where $H(f)$ is the entropy of a probability distribution with probability density function proportional to $f$ and $\gamma$ is the Euler-Mascheroni constant. Therefore the representation of probability distributions through Gumbel processes gives rise to a satisfying and compact representation of some of their important constants. \section{Monte Carlo methods that use bounds} \label{sec:alg} \subsection{Rejection sampling} In this section we present practical Monte Carlo methods that use bounds on the ratio of densities to produce samples from intractable distributions. We show how these methods can be interpreted as algorithms that simulate the first arrival of an exponential race. The basic strategy for proving their correctness is to argue that they perform accept-reject or perturb operations on the realization of an exponential race until they have provably produced the first arrival of the transformed race. We start by discussing traditional rejection sampling and a related perturbation-based method. Then we study OS* \citep{dymetman2012osstar}, an accept-reject method, and A* sampling \citep{maddison2014astarsamp}, a perturbation method. These algorithms have all been introduced elsewhere in the literature, so for more information we refer readers to the original papers. Throughout this section our goal is to draw a sample from the probability distribution proportional to some measure $P$ with density $f$ with respect to some base measure $\mu$.
We assume, as in the Accept-Reject and Perturbation Lemmas, access to a tractable proposal distribution proportional to a measure $Q$ with density $g$ with respect to $\mu$ such that $f$ and $g$ have the same support and the ratio $f(x)/g(x)$ is bounded by some constant $M$. For example consider the sample space $\{0,1\}^n$ whose elements are bit vectors of length $n$. A proposal distribution might be proportional to the counting measure $Q$, which counts the number of configurations in a subset $B \subseteq \{0,1\}^n$. Sampling from $Q(\cdot)/Q(\{0,1\}^n)$ is as simple as sampling $n$ independent $\mathrm{Bernoulli}(1/2)$. Rejection sampling is the classic Monte Carlo method that uses bound information. It proposes $(X, U)$ from $Q$ and $\mathrm{Uniform}[0,1]$, respectively, and accepts $X$ if $U \leq f(X)/(g(X)M)$. The algorithm terminates at the first acceptance and is normally justified by noticing that it samples uniformly from the region under the graph of $f(x)$ by rejecting points that fall between $g(x)M$ and $f(x)$, see the left hand graph in \Fig{fig:alg} for an intuition. The acceptance decision also corresponds exactly to the accept-reject operation on exponential races, so we can interpret it as a procedure on the points of an exponential race. We call this procedure $\mathbf{REJ}$ for short, \begin{algorithmic} \For{$(T_i, X_i) \in R$ simulated by \Algo{alg:er} with measure $M Q(\cdot)$} \State $U_i \sim \mathrm{Uniform}[0,1]$. \If{$U_i < f(X_i)/(g(X_i)M)$} \Return $(T_{i}, X_i)$ \EndIf \EndFor \end{algorithmic} The Accept-Reject Lemma guarantees that the returned values $(T, X)$ will be the first arrival of an exponential race with measure $P$, and Theorem \ref{thm:er} guarantees that $X$ is a sample from $P(\cdot)/P(\mathbb{R}^n)$. This is the basic flavour of the arguments of this section.
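As a sketch (our own illustration; the helper names and the example target, a density proportional to $2x$ on $[0,1]$ with a uniform proposal, are ours), $\mathbf{REJ}$ can be implemented directly from the pseudocode above:

```python
import math, random

random.seed(3)

def rej(f, g, sample_q, q_total, M):
    """REJ: run through an exponential race with measure M*Q, accepting each
    arrival with probability f(x)/(g(x)*M); return the first acceptance."""
    t = 0.0
    while True:
        t += random.expovariate(M * q_total)   # next arrival of the race with measure M*Q
        x = sample_q()                         # location ~ Q(.)/Q(R^n)
        if random.random() < f(x) / (g(x) * M):
            return t, x                        # first arrival of the race with measure P

# Example target: density f(x) = 2x on [0,1], proposal Q = Lebesgue measure
# on [0,1] (so g = 1, Q([0,1]) = 1), with bound M = 2
draws = [rej(lambda x: 2 * x, lambda x: 1.0,
             lambda: 1.0 - random.random(), 1.0, 2.0) for _ in range(50000)]
emp = sum(1 for _, x in draws if x <= 0.5) / len(draws)
assert abs(emp - 0.25) < 0.01        # P(X <= 1/2) = 1/4 under the target
mean_t = sum(t for t, _ in draws) / len(draws)
assert abs(mean_t - 1.0) < 0.05      # T ~ Exp(P([0,1])) with P([0,1]) = 1
```

The two assertions check both claims of the argument: the location is a sample from $P(\cdot)/P(\mathbb{R}^n)$, and the accepted time is the first arrival time of a race with measure $P$.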
\begin{figure}[t] \begin{center} \includegraphics{figures/alg.pdf} \caption{Algorithms $\mathbf{REJ}$ and $\mathbf{PER}$ for measure $P$ on $[0,1]$ with proposal measure $Q$. The densities of $Q$ and $P$ are shown on the left hand side as densities over $x \in [0, 1]$. $\circ$ are arrivals of the race with measure $Q$, $\ast$ of the race with measure $P$. Both plots show the proposals considered until the first acceptance. For $\mathbf{PER}$ opaque solid lines represent the perturb operation. $T_4$ is the fourth arrival from the race with measure $Q$. $T_4/M$ is the lower bound on all future arrivals, and thus all $\ast$ points to the left of $T_4/M$ are in order.} \label{fig:alg} \end{center} \end{figure} The Perturbation Lemma has a corresponding procedure, which uses the bound $M$ to provably return the first arrival of a perturbed exponential race. It is shown on the right hand side of \Fig{fig:alg}, and we call it $\mathbf{PER}$. \begin{algorithmic} \State $(T^*, X^*) = (\infty, \mathrm{null})$ \For{$(T_i, X_i) \in R$ simulated by \Algo{alg:er} with measure $Q$} \If{$T^* > T_i g(X_i)/f(X_i)$} \State $T^* = T_ig(X_i)/f(X_i)$ \State $X^* = X_i$ \EndIf \If{$T_{i+1}/M \geq T^*$} \Return $(T^* , X^*)$ \EndIf \EndFor \end{algorithmic} In this procedure $(T_i, X_i)$ iterates in order through the arrivals of an exponential race with measure $Q$. The perturbed times $T_ig(X_i)/f(X_i)$ will form a race with measure $P$, but not necessarily in order. $(T^*, X^*)$ are variables that track the earliest perturbed arrival so far, so $T^*$ is an upper bound on the eventual first arrival time for the race with measure $P$. $T_{i+1}$ is the arrival time of the next point in the race with measure $Q$ and $M$ bounds the contribution of the perturbation, so $T_{i+1}/M$ is a lower bound on the remaining perturbed arrivals. When $T^*$ and $T_{i+1}/M$ cross, $(T^*, X^*)$ is guaranteed to be the first arrival of the perturbed race. 
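A sketch of $\mathbf{PER}$ (our own illustration; helper names and the example target, a density proportional to $2x$ on $[0,1]$ with a uniform proposal, are ours) follows the pseudocode line by line:

```python
import math, random

random.seed(4)

def per(f, g, sample_q, q_total, M):
    """PER: perturb the arrival times of a race with measure Q by g(x)/f(x);
    stop once the lower bound T_{i+1}/M passes the earliest perturbed arrival."""
    t_star, x_star = math.inf, None
    t = random.expovariate(q_total)            # T_1
    while True:
        x = sample_q()
        if t_star > t * g(x) / f(x):           # track the earliest perturbed arrival
            t_star, x_star = t * g(x) / f(x), x
        t += random.expovariate(q_total)       # T_{i+1}
        if t / M >= t_star:                    # no future arrival can come earlier
            return t_star, x_star

# Example target: f(x) = 2x on [0,1], uniform proposal, bound f/g <= M = 2
draws = [per(lambda x: 2 * x, lambda x: 1.0,
             lambda: 1.0 - random.random(), 1.0, 2.0)[1] for _ in range(50000)]
emp = sum(1 for x in draws if x <= 0.5) / len(draws)
assert abs(emp - 0.25) < 0.01   # matches the target: P(X <= 1/2) = 1/4
```

As the text explains, the returned location has the same distribution as a rejection sample, even though no proposal is ever discarded.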
$\mathbf{REJ}$ and $\mathbf{PER}$ can be turned into generators for iterating through all of the arrivals of an exponential race with measure $P$ as opposed to just returning the first. For $\mathbf{REJ}$ it is as simple as replacing {\bf return} with {\bf yield}, so that each time the generator is invoked it searches until the next acceptance and returns. For $\mathbf{PER}$ we must store every perturbed arrival until its eventual order in the race with measure $P$ is determined. This can be accomplished with a priority queue $\Uc$, which prioritizes by earliest arrival time, \begin{algorithmic} \State $\Uc = \mathrm{minPriorityQueue}()$ \For{$(T_i, X_i) \in R$ simulated by \Algo{alg:er} with measure $Q$} \State $\Uc.\mathrm{pushWithPriority}(T_i g(X_i)/f(X_i), X_i)$ \If{$T_{i+1}/M \geq \min \Uc$} {\bf yield} $\Uc.\mathrm{pop}()$ \EndIf \EndFor \end{algorithmic} $\Uc$ takes the place of $T^*$ and $X^*$ in $\mathbf{PER}$. The highest priority arrival on $\Uc$ will be the earliest of the unordered perturbed arrivals and $T_{i+1}/M$ is a lower bound on all future perturbed arrivals. When $T_{i+1}/M \geq \min \Uc$, the earliest arrival on $\Uc$ is guaranteed to be the next arrival. It is informative to think of the generator version of $\mathbf{PER}$ via Figure \ref{fig:alg}. The lower bound $T_{i+1}/M$ is a bound across space that advances rightward in time; every arrival to the left of $T_{i+1}/M$ is in order and every arrival to the right is unordered. Consider the number of iterations until the first acceptance in $\mathbf{REJ}$ and $\mathbf{PER}$. At first glance it seems that the two algorithms should have different runtimes. $\mathbf{REJ}$ is obviously memoryless, and it seems wasteful --- no information accumulates. On the other hand $\mathbf{PER}$ accumulates the earliest arrival and its termination condition depends on a history of arrivals. Unfortunately, both algorithms have the same geometric distribution over the number of arrivals considered.
Arguing informally, the lower bound $T_{i+1}/M$ of $\mathbf{PER}$ plotted over the iterations will form a line with slope $(MQ(\mathbb{R}^n))^{-1}$. $\mathbf{PER}$ terminates when this line crosses the first arrival time of the perturbed race. The first arrival of a race with measure $P$ occurs at $P(\mathbb{R}^n)^{-1}$ in expectation, so we expect the crossing point to occur on average at $MQ(\mathbb{R}^n)/P(\mathbb{R}^n)$ iterations. This is the same as the expected runtime of $\mathbf{REJ}$. \begin{lemma} \label{lem:constantbound} Let $K(\mathbf{REJ})$ and $K(\mathbf{PER})$ be the number of proposals considered by the rejection and perturbation sampling algorithms. Then \begin{align*} \mathbb{P}(K(\mathbf{REJ}) > k) = \mathbb{P}(K(\mathbf{PER})> k) = (1 - \rho)^{k} \text{ with } \rho = \frac{P(\mathbb{R}^n)}{Q(\mathbb{R}^n)M}. \end{align*} Thus $K(\mathbf{REJ})$ and $K(\mathbf{PER})$ are geometric random variables with \begin{align*} \mathbb{E}(K(\mathbf{REJ})) = \mathbb{E}(K(\mathbf{PER})) = \frac{1}{\rho} \end{align*} \end{lemma} \begin{proof} The probability of accepting a proposal at any iteration of $\mathbf{REJ}$ is \begin{align*} \mathbb{E}(f(X_i)/(g(X_i)M)) = \int \frac{f(x)}{g(x)M} \frac{g(x)}{Q(\mathbb{R}^n)} \mu(dx) = \rho. \end{align*} Each decision is independent, so the probability of $k$ rejections is $(1 - \rho)^k$. $\mathbf{PER}$ exceeds $k$ iterations if $T_i g(X_i)/f(X_i) > T_{k+1}/ M$ for all $i \leq k$. Because the $X_i$ are i.i.d., \begin{align*} \mathbb{P}(K(\mathbf{PER})> k \ &| \ \{T_i = t_i\}_{i=1}^{k+1}) = \prod_{i=1}^k \mathbb{P}(t_i/t_{k+1} > f(X)/(g(X)M)), \end{align*} where $X\sim Q(\cdot)/Q(\mathbb{R}^n)$. Given $T_{k+1} = t_{k+1}$ the $T_i$ for $i \leq k$ are i.i.d. $T_i \sim \mathrm{Uniform}(0, t_{k+1})$ by Lemma \ref{lem:bernoulli}. Thus $T_i/T_{k+1} \sim \mathrm{Uniform}(0, 1)$ i.i.d. \begin{align*} \mathbb{P}(K(\mathbf{PER})> k) = \prod_{i=1}^k \mathbb{P}(U > f(X)/(g(X)M)) = (1- \rho)^k \end{align*} finishes the proof.
\end{proof} \subsection{Adaptive bounds} \label{subsec:adaptivebounds} Lemma \ref{lem:constantbound} is disappointing, because it suggests that reasoning about perturbations is as inefficient as discarding proposals. The problem is fundamentally that the information carried in the bound $M$ about the discrepancy between $g(x)$ and $f(x)$ is static throughout the execution of both algorithms. Considering a contrived scenario will illustrate this point. Suppose that for every failed proposal we are given a tighter bound $M_{i+1} < M_{i}$ from some oracle. Both $\mathbf{REJ}$ and $\mathbf{PER}$ can be adapted to take advantage of these adaptive bounds simply by dropping in $M_i$ wherever $M$ appears. In this case $\mathbf{PER}$ is distinguished from $\mathbf{REJ}$. $\mathbf{REJ}$ makes an irrevocable decision at each iteration. In contrast $\mathbf{PER}$ simply pushes up the lower bound $T_{i+1}/M_i$ without erasing its memory, bringing it closer to accepting the earliest arrival so far. Indeed, the probability of this oracle rejection sampling exceeding $k$ proposals is \begin{align*} \mathbb{P}(K(\mathbf{OREJ}) > k) = \prod_{i=1}^k (1 - \rho_i) \text{ where } \rho_i = P(\mathbb{R}^n)/(Q(\mathbb{R}^n)M_i). \end{align*} On the other hand, the probability of this oracle perturbation sampling exceeding $k$ proposals is \begin{align*} \mathbb{P}(K(\mathbf{OPER}) > k) = \prod_{i=1}^k \mathbb{P}(U > f(X)/(g(X)M_k)) = (1 - \rho_k)^k, \end{align*} or the probability of rejecting $k$ proposals \emph{as if} the $k$th bound $M_k$ had been known all along. By tracking the earliest arrival so far $\mathbf{OPER}$ makes efficient use of adaptive bound information, reevaluating all points in constant time. \subsection{OS* adaptive rejection sampling and A* sampling} The difference between $\mathbf{REJ}$ and $\mathbf{PER}$ exposed by considering adaptive bounds motivates studying OS* and A* sampling, Monte Carlo methods that use realistic adaptive bounds.
Both methods iteratively refine a partition $\{B_i\}_{i=1}^m$ of $\mathbb{R}^n$, which allows them to use regional bounds $M(B_i)$, where $f(x)/g(x) \leq M(B_i)$ for $x \in B_i$. As with $\mathbf{REJ}$ and $\mathbf{PER}$, OS* and A* sampling are only distinguished by how they use this information. OS* reasons about accept-reject operations, A* sampling about perturb operations. In contrast to the relationship between $\mathbf{REJ}$ and $\mathbf{PER}$, A* sampling makes more efficient use of proposal samples than OS*. OS* and A* sampling must compute volumes and samples of subsets under the proposal measure $Q$. It may be intractable to consider arbitrary $B_i \subseteq \mathbb{R}^n$, so a user must implicitly specify a nice family $\Fc$ of subsets that is closed under a user-specified refinement function $\mathrm{split}(B, x)$. Hyperrectangles are a simple example. All together, the user must provide, \begin{enumerate} \item finite nonzero measure $P$ with a method for computing the density $f(x)$. \item finite nonzero proposal measure $Q$ with methods for sampling restricted to $B \in \Fc$, computing measures of $B \in \Fc$, and computing the density $g(x)$. \item partitioning set function $\mathrm{split}(B, x) \subseteq \Fc$ for $B \in \Fc$ that partitions $B$. \item bounding set function $M(B)$ for $B \in \Fc$, $f(x)/g(x) \leq M(B)$ for $x \in B$. \end{enumerate} Specific examples, corresponding to the experiments, are given in the Appendix.
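As an illustration (ours, not the Appendix's), the four ingredients might look as follows for one-dimensional intervals, the 1-D case of hyperrectangles; all names here are hypothetical:

```python
import random

def measure(b):
    """Q(B) for a Lebesgue proposal on the interval b = (lo, hi)."""
    return b[1] - b[0]

def sample(b):
    """Draw from Q restricted to b, normalized."""
    return b[0] + measure(b) * random.random()

def split(b, x):
    """Partition the interval b = (lo, hi) at the proposal point x."""
    lo, hi = b
    return [(lo, x), (x, hi)]

def bound(b, ratio):
    """Regional bound M(B) >= f(x)/g(x) for x in B; when f/g is monotone
    increasing on B the maximum is attained at the right endpoint."""
    return ratio(b[1])

# Example: target density f(x) = 2x against a uniform proposal g on [0,1],
# so the ratio f/g = 2x is increasing
ratio = lambda x: 2.0 * x
b = (0.0, 1.0)
children = split(b, 0.5)
assert children == [(0.0, 0.5), (0.5, 1.0)]
# the regional bounds tighten under refinement: M(child) <= M(B)
assert all(bound(c, ratio) <= bound(b, ratio) for c in children)
assert bound(children[0], ratio) == 1.0
```

The key property, visible in the assertions, is that refining a subset can only tighten its regional bound; this is what lets both methods improve on a single global constant $M$.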
\begin{algorithm}[t] \caption{$\mathbf{OS^{\ast}}$ adaptive rejection sampling for $P$ with proposal $Q$} \label{alg:osstar} \begin{algorithmic} \State $\Pc_0 = \{\mathbb{R}^n\}$ \State $T_0 = 0$ \For{$i=1$ to $\infty$} \State $B_i \sim \mathbb{P}(B) \propto Q(B)M(B)$ for $B \in \Pc_{i-1}$ \State $X_{i} \sim Q(\cdot \cap B_i)/Q(B_i)$ \State $E \sim \mathrm{Exp}(\sum_{B \in \Pc_{i-1}} M(B)Q(B))$ \State $T_i = T_{i-1} + E$ \State $U_i \sim \mathrm{Uniform}[0,1]$ \If{$U_i < f(X_i)/(g(X_i)M(B_i))$} \State\Return $(T_{i}, X_i)$ \Else \State $\Cc = \mathrm{split}(B_i, X_i)$ \State $\Pc_{i} = \Pc_{i-1} - \{B_i\} + \Cc$ \EndIf \EndFor \end{algorithmic} \end{algorithm} OS* ($\mathbf{OS^{\ast}}$ for short) is in a family of adaptive rejection sampling algorithms, which use the history of rejected proposals to tighten the gap between the proposal density and the density of interest. The name adaptive rejection sampling (ARS) is normally reserved for a variant that assumes $\log f(x)$ is concave \citep{gilks1992adaptive}. Accept-reject decisions are independent, so any adaptive scheme is valid as long as the rejection rate is not growing too quickly \citep{casella}. Our proof of correctness appeals to exponential races, and it works for a wider range of adaptive schemes than just $\mathbf{OS^{\ast}}$. In more detail, $\mathbf{OS^{\ast}}$ begins with the proposal density $g(x)$ and a partition $\Pc_{0} = \{\mathbb{R}^n\}$. At every iteration it samples from the distribution with density proportional to $\sum_{B \in \Pc_{i-1}} g(x)M(B)1_{B}(x)$ in a two step procedure, sampling a subset $B \in \Pc_{i-1}$ with probability proportional to $Q(B)M(B)$, and then sampling a proposal point $X$ from the distribution with density $g(x)$ restricted to $B$. If $X$ is rejected under the current proposal, then $\Pc_{i-1}$ is refined by splitting $B$ with the user specified $\mathrm{split}(B, X)$.
There is a choice of when to refine and which subset $B \in \Pc_{i-1}$ to refine, but for simplicity we consider just the form that splits the subset containing the current proposal. $\mathbf{OS^{\ast}}$ continues until the first acceptance, see \Algo{alg:osstar}. \begin{theorem}[Correctness of OS*] \label{thm:osstar} Let $K(\mathbf{OS^{\ast}})$ be the number of proposal samples considered before termination. Then \begin{align*} \mathbb{P}(K(\mathbf{OS^{\ast}}) > k) \leq (1-\rho)^k \text{ where } \rho = \frac{P(\mathbb{R}^n)}{Q(\mathbb{R}^n)M(\mathbb{R}^n)} \end{align*} and upon termination the return values $(T, X)$ of OS* are independent and \begin{align*} T \sim \mathrm{Exp}(P(\mathbb{R}^n)) \quad X \sim \frac{P(\cdot)}{P(\mathbb{R}^n)}. \end{align*} \end{theorem} \begin{proof} The situation is complicated, because the proposals $\{(T_i, X_i)\}_{i=1}^{\infty}$ of $\mathbf{OS^{\ast}}$ are not an exponential race. Instead, we present an informal argument derived from a more general thinning theorem, Proposition 14.7.I in \cite{daley2007introduction}. Let $g_i(x)$ be the proposal density at iteration $i$, \begin{align*} g_i(x) = \sum\nolimits_{B \in \Pc_{i-1}} g(x)M(B)1_B(x). \end{align*} Clearly, $g_i(x)$ depends on the history of proposals so far and $f(x) \leq g_i(x) \leq g(x)M(\mathbb{R}^n)$ for all $i$. Let $R$ be an exponential race with measure $M(\mathbb{R}^n)Q(\cdot)$ and $U_j \sim \mathrm{Uniform}[0,1]$ i.i.d. for each $(T_j, X_j) \in R$. Consider the following adaptive thinning procedure: subsample all points of $R$ that satisfy $U_j \leq g_i(X_j)/(g(X_j) M(\mathbb{R}^n))$ where $g_i(X_j)$ is defined according to the refinement scheme in $\mathbf{OS^{\ast}}$, but relative to the history of \emph{points subsampled from $R$ in the order of their acceptance}. It is possible to show that the sequence of accepted points $\{(T_i, X_i, U_i)\}_{i=1}^{\infty}$ have the same marginal distribution as the sequence of proposals in $\mathbf{OS^{\ast}}$.
Thus, we can see $\mathbf{OS^{\ast}}$ and $\mathbf{REJ}$ as two separate procedures on the same realization of $R$. For the termination result, notice that $\mathbf{REJ}$ considers at least as many points as $\mathbf{OS^{\ast}}$. For partial correctness, the points $(T_i, X_i, U_i)$ such that $U_i < f(X_i)/g_i(X_i)$ are exactly the subsampled points that would have resulted from thinning $R$ directly with probability $f(x)/(g(x)M(\mathbb{R}^n))$. Thus, by the Accept-Reject Lemma, the returned values $(T, X)$ will be the first arrival of an exponential race with measure $P$. \end{proof} A* sampling ($\mathbf{A^{\ast}}$ for short) is a branch and bound routine that finds the first arrival of a perturbed exponential race. It follows $\mathbf{PER}$ in principle by maintaining a lower bound on all future perturbed arrivals. The difference is that $\mathbf{A^{\ast}}$ maintains a piecewise constant lower bound over a partition of space that it progressively refines. On every iteration it selects the subset with smallest lower bound, samples the next arrival in that subset, and refines the subset unless it can terminate. It continues refining until the earliest perturbed arrival is less than the minimum of the piecewise constant lower bound. The name A* sampling is a reference to A* search \citep{astar}, which is a path finding algorithm on graphs that uses a best-first criterion for selecting from heuristically valued nodes on the fringe of a set of visited nodes. A* sampling was originally introduced by \cite{maddison2014astarsamp} as an algorithm that maximizes a perturbed Gumbel process. We define it over an exponential race for the sake of consistency. Usually, it is better to work with a Gumbel process to avoid numerical issues. In more detail, $\mathbf{A^{\ast}}$ searches over a simulation of an exponential race organized into a space partitioning tree, as in the right hand plot of Figure \ref{fig:eralt}, for the first arrival of the perturbed race.
The tree is determined by the splitting function $\mathrm{split}(B, x)$. Each node $v$ of the tree is associated with a subset $B_v \subseteq \mathbb{R}^n$ and an arrival $(T_v, X_v)$ from an exponential race with measure $Q$. $\mathbf{A^{\ast}}$ iteratively expands a subtree of internal visited nodes, taking and visiting one node from the current fringe at each iteration. The fringe $\Lc$ of the visited subtree is always a partition of $\mathbb{R}^n$. Each subset $B \in \Lc$ is associated with the arrival time $T$ of the next arrival of the race with measure $Q$ in $B$. Therefore $T/M(B)$ is a lower bound on all future perturbed arrivals in $B$. $\Lc$ is implemented with a priority queue that prioritizes the subset $B$ with the lowest regional bound $T/M(B)$. As $\mathbf{A^{\ast}}$ expands the set of visited nodes the lower bound $\min \Lc$ increases. \begin{algorithm}[t] \caption{A* sampling for $P$ with proposal $Q$} \label{alg:astar} \begin{algorithmic} \State $\Lc, \Uc = \mathrm{minPriorityQueue}(), \mathrm{minPriorityQueue}()$ \State $T_1 \sim \mathrm{Exp}(Q(\mathbb{R}^n))$ \State $\Lc.\mathrm{pushWithPriority}(T_1/M(\mathbb{R}^n), \mathbb{R}^n)$ \For{$i=1$ to $\infty$} \State $(T_i/M(B_i), B_i) = \Lc.\mathrm{pop}()$ \State $X_i \sim Q(\cdot \cap B_i)/Q(B_i)$ \State $\Uc.\mathrm{pushWithPriority}(T_ig(X_i)/f(X_i), X_i)$ \State $E \sim \mathrm{Exp}(Q(B_i))$ \State $T = T_i + E$ \If{$\min(\min \Lc, T/M(B_i)) < \min \Uc$} \State $\Cc = \mathrm{split}(B_i, X_i)$ \While{$\Cc \neq \emptyset$} \State $C \sim \mathbb{P}(C) \propto Q(C)$ for $C \in \Cc$ \State $\Lc.\mathrm{pushWithPriority}(T/M(C), C)$ \State $\Cc = \Cc - \{C\}$ \State $E \sim \mathrm{Exp}(\sum_{C \in \Cc} Q(C))$ \State $T = T + E$ \EndWhile \Else \State $\Lc.\mathrm{pushWithPriority}(T/M(B_i), B_i)$ \EndIf \If{$\min \Lc \geq \min \Uc$} \State\Return $\Uc.\mathrm{pop}()$ \EndIf \EndFor \end{algorithmic} \end{algorithm} $\Lc$ is initialized with the root of the tree
$\{(T_1/M(\mathbb{R}^n), \mathbb{R}^n)\}$. At the start of an iteration $\mathbf{A^{\ast}}$ removes and visits the subset $(T_i/M(B_i), B_i)$ with the lowest lower bound on $\Lc$. Visiting a subset begins by realizing a location $X_i$ from $Q(\cdot \cap B_i)/Q(B_i)$ and pushing the perturbed arrival $(T_i g(X_i)/f(X_i), X_i)$ onto another priority queue $\Uc$. $\Uc$ prioritizes earlier arrivals by the perturbed arrival times $T_ig(X_i)/f(X_i)$. In this way $\mathbf{A^{\ast}}$ decreases the upper bound $\min \Uc$ at each iteration. $\mathbf{A^{\ast}}$ attempts to terminate by simulating the next arrival time $T > T_i$ in $B_i$ of the race with measure $Q$. If $\min \Uc \leq \min(\min \Lc, T/M(B_i))$, then the top of $\Uc$ will not be superseded by future perturbed arrivals and it will be the first arrival of the perturbed race. If termination fails, $\mathbf{A^{\ast}}$ refines the partition by splitting $B_i$ into a partition $\mathrm{split}(B_i, X_i)$ of children. Arrival times for each of the children are assigned respecting the constraints of the exponential race in $B_i$. Each child $C$ is pushed onto $\Lc$ prioritized by its lower bound $T/M(C)$. Because the lower bounds have increased there is a second opportunity to terminate before continuing. $\mathbf{A^{\ast}}$ checks if $\min \Uc \leq \min \Lc$, and otherwise continues, see \Algo{alg:astar}. As with $\mathbf{PER}$, $\mathbf{A^{\ast}}$ can be turned into a generator for iterating in order through the points of the perturbed race by replacing the {\bf return} statement with a {\bf yield} statement in \Algo{alg:astar}. \begin{theorem}[Correctness of A* sampling] \label{thm:astar} Let $K(\mathbf{A^{\ast}})$ be the number of proposal samples considered before termination.
Then \begin{align*} \mathbb{P}(K(\mathbf{A^{\ast}}) > k) \leq (1-\rho)^k \text{ where } \rho = \frac{P(\mathbb{R}^n)}{Q(\mathbb{R}^n)M(\mathbb{R}^n)} \end{align*} and upon termination the return values $(T, X)$ of A* sampling are independent and \begin{align*} T \sim \mathrm{Exp}(P(\mathbb{R}^n)) \quad X \sim \frac{P(\cdot)}{P(\mathbb{R}^n)}. \end{align*} \end{theorem} \begin{proof} Adapted from \cite{maddison2014astarsamp}. The proposals are generated lazily in a space partitioning tree. If $\{(T_i, X_i)\}_{i=1}^{\infty}$ are the arrivals at every node of the infinite tree sorted by increasing $T_i$, then $(T_i, X_i)$ forms an exponential race with measure $Q$. For the termination result, each node $v$ of the tree can be associated with a subset $B_v$ and a lower bound $T_v/M(B_v)$. One of the nodes will contain the first arrival of the perturbed process with arrival time $T^*$. $\mathbf{A^{\ast}}$ visits at least every node $v$ with $T_v/M(B_v) > T^*$. If $M(B)$ is replaced with a constant $M(\mathbb{R}^n)$, then this can only increase the number of visited nodes. The last step is to realize that $\mathbf{A^{\ast}}$ searching over a tree with constant bounds $M(\mathbb{R}^n)$ searches in order of increasing $T_v$, and so corresponds to a realization of $\mathbf{PER}$. The distribution of runtimes of $\mathbf{PER}$ is given in Lemma \ref{lem:constantbound}. For partial correctness, let $(T, X)$ be the return values with highest priority on the upper bound priority queue $\Uc$. The arrival time of unrealized perturbed arrivals is bounded by the lower bound priority queue $\Lc$. At termination $T$ is less than the top of the lower bound priority queue. So no unrealized points will arrive before $(T, X)$. By Lemma \ref{lem:perturb} $(T, X)$ is the first arrival of an exponential race with measure $P$. 
\end{proof} \begin{table}[t] \begin{center} \begin{tabular}{llllrr} \toprule $P$ & $Q$ & $\Omega$ & $N$ & $\bar{K}(\mathbf{OS^{\ast}})$ & $\bar{K}(\mathbf{A^{\ast}})$ \\ \midrule clutter posterior & prior & $\mathbb{R}$ & 6 & 9.34 & 7.56 \\ clutter posterior & prior & $\mathbb{R}^2$ & 6 & 38.3 & 33.0 \\ clutter posterior & prior & $\mathbb{R}^3$ & 6 & 130 & 115 \\ robust Bayesian regression & prior & $\mathbb{R}$ & 10 & 9.36 & 6.77 \\ robust Bayesian regression & prior & $\mathbb{R}$ & 100 & 40.6 & 32.2 \\ robust Bayesian regression & prior & $\mathbb{R}$ & 1000 & 180 & 152 \\ fully connected Ising model & uniform & $\{-1,1\}^{5}$ & - & 4.37 & 3.50 \\ fully connected Ising model & uniform & $\{-1,1\}^{10}$ & - & 19.8 & 15.8 \\ \bottomrule \end{tabular} \end{center} \caption{Comparing $\mathbf{A^{\ast}}$ and $\mathbf{OS^{\ast}}$. Clutter and robust Bayesian regression are adapted from \cite{maddison2014astarsamp} and the Ising model from \cite{kim2016lprelaxsamp}. $\Omega$ is the support of the distribution; $N$ is the number of data points; and $\bar{K}(\mathbf{OS^{\ast}})$ and $\bar{K}(\mathbf{A^{\ast}})$ are averaged over 1000 runs. More information in the Appendix.\label{tab:expts}} \end{table} \subsection{Runtime of A* sampling and OS*} $\mathbf{A^{\ast}}$ and $\mathbf{OS^{\ast}}$ are structurally similar; both search over a partition of space and refine it to increase the probability of terminating. They will give practical benefits over rejection sampling if the bounds $M(B)$ shrink as the volume of $B$ shrinks. In this case the bound on the probability of rejecting $k$ proposals given in Theorems \ref{thm:osstar} and \ref{thm:astar} can be very loose, and $\mathbf{OS^{\ast}}$ and $\mathbf{A^{\ast}}$ can be orders of magnitude more efficient than rejection sampling. Still, these methods scale poorly with dimension.
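To make the structure of \Algo{alg:astar} concrete, the following is a compact sketch (our own illustration; the helper names, the 1-D interval partitioning, and the example target $f(x) = 2x$ on $[0,1]$ with a Lebesgue proposal are ours):

```python
import heapq, math, random

random.seed(5)

def measure(b):                  # Q(B) = Lebesgue measure of the interval b
    return b[1] - b[0]

def sample_uniform(b):           # proposal Q restricted to b, normalized
    return b[0] + measure(b) * random.random()

def bound(b):                    # M(B) >= f/g on b; f(x) = 2x, g = 1, f/g increasing
    return 2.0 * b[1]

def split(b, x):                 # partition b at the proposal point x
    return [(b[0], x), (x, b[1])]

def astar_sample(f):
    """Sketch of A* sampling for f(x) = 2x on [0,1] with a uniform proposal."""
    lower, upper = [], []                             # priority queues L and U
    root = (0.0, 1.0)
    t1 = random.expovariate(measure(root))
    heapq.heappush(lower, (t1 / bound(root), t1, root))
    while True:
        _, t_i, b_i = heapq.heappop(lower)
        x_i = sample_uniform(b_i)
        heapq.heappush(upper, (t_i / f(x_i), x_i))    # perturbed arrival T_i g/f
        t = t_i + random.expovariate(measure(b_i))    # next arrival time in b_i
        rest = lower[0][0] if lower else math.inf
        if min(rest, t / bound(b_i)) < upper[0][0]:
            children = split(b_i, x_i)                # refine b_i
            while children:
                # the next arrival lands in child c w.p. Q(c)/Q(remaining)
                w = [measure(c) for c in children]
                r, acc, j = random.random() * sum(w), 0.0, 0
                for j, wj in enumerate(w):
                    acc += wj
                    if r <= acc:
                        break
                c = children.pop(j)
                heapq.heappush(lower, (t / bound(c), t, c))
                if children:
                    t += random.expovariate(sum(measure(c) for c in children))
        else:
            heapq.heappush(lower, (t / bound(b_i), t, b_i))
        if lower[0][0] >= upper[0][0]:                # no earlier arrival possible
            return heapq.heappop(upper)

draws = [astar_sample(lambda x: 2.0 * x)[1] for _ in range(20000)]
emp = sum(1 for x in draws if x <= 0.5) / len(draws)
assert abs(emp - 0.25) < 0.015   # P(X <= 1/2) = 1/4 under the target
```

The regional bounds here shrink with the intervals, which is the regime in which $\mathbf{A^{\ast}}$ and $\mathbf{OS^{\ast}}$ improve on a single global constant.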
The cost of running $\mathbf{A^{\ast}}$ and $\mathbf{OS^{\ast}}$ will be dominated by computing the ratio of densities $f(x)/g(x)$ and computing bounds $M(B)$. Because the number of bound computations is within a factor of 2 of the number of density computations, the number of evaluations of $f(x)/g(x)$ (equivalently, the number of proposals) is a good estimate of complexity. \tab{tab:expts} presents a summary of experimental evidence that $\mathbf{A^{\ast}}$ makes more efficient use of density computations across three different problems. For each problem the full descriptions of $P$, $Q$, $M(B)$, and $\mathrm{split}(B, x)$ are found in the Appendix. The dominance of $\mathbf{A^{\ast}}$ in experiments is significant, because it has access to \emph{the same information} as $\mathbf{OS^{\ast}}$. There are at least two factors that may give $\mathbf{A^{\ast}}$ this advantage. First, if all lower bounds increase sharply after some exploration, $\mathbf{A^{\ast}}$ can retroactively take advantage of that information, as in Section \ref{subsec:adaptivebounds}. Second, $\mathbf{A^{\ast}}$ can take advantage of refined bound information on the priority queue $\Lc$ before proposing the next sample. Still, the difference in search strategy and termination condition may counteract these advantages, so a rigorous theory is needed to confirm exactly the sense in which $\mathbf{A^{\ast}}$ and $\mathbf{OS^{\ast}}$ differ. We refer readers to \cite{maddison2014astarsamp} for more detailed experiments. \section{Conclusion} \label{sec:con} The study of Poisson processes is traditionally motivated by their application to natural phenomena, and Monte Carlo methods are developed specifically for them \citep{ripley1977modelling, geyer1994simulation}. We considered the inverse relationship, using Poisson processes to better understand Monte Carlo methods. We suspect that this general perspective holds value for future directions in research.
Monte Carlo methods that rely on bounds are not suitable for most high-dimensional distributions. Rejection sampling scales poorly with dimensionality. Even for A* sampling there are simple examples where adaptive bounds become uninformative in high dimensions, such as sampling from the uniform hypersphere when using hyperrectangular search subsets. Still, specialized algorithms for limited classes of distributions may be able to take advantage of conditional independence structure to improve their scalability. Another direction is to abandon the idea of representing arbitrary distributions, and study the class of distributions represented by the maxima of combinations of lower order Gumbel processes. This is the approach of the perturbation models studied in Papandreou and Yuille; Gane et al.; Hazan and Jaakkola; Tarlow et al.; and Keshet et al. of this book. In these models a Gumbel process over a discrete space is replaced by sums of independent Gumbel processes over discrete subspaces. The maxima of these models form a natural class of distributions complete with their own measures of uncertainty. An open direction of inquiry is developing efficient algorithms for optimizing their continuous counterparts. Our study of Poisson processes and Monte Carlo methods was dominated by the theme of independence; the points of an exponential race arrive as independent random variables, and accept-reject or perturb operations do not introduce correlations between the points of the transformed race. Continuing in this direction, it is natural to investigate whether other Poisson process models or other operations on an exponential race could be used to define a new class of Monte Carlo methods. In a separate direction, Markov chain Monte Carlo (MCMC) methods produce a sequence of correlated samples whose limiting distribution is the distribution of interest.
The theory of point processes includes a variety of limit theorems, which describe the limiting distribution of random countable sets \citep{daley2007introduction}. It would be interesting to see whether a point process treatment of MCMC bears fruit, either in unifying our proof techniques or inspiring new algorithms. \section*{Acknowledgements} We would like to thank Daniel Tarlow and Tom Minka for the ideas, discussions, and support throughout this project. Thanks to the other editors Tamir Hazan and George Papandreou. Thanks to Jacob Steinhardt, Yee Whye Teh, Arnaud Doucet, Christian Robert for comments on the draft. Thanks to Sir J.F.C. Kingman for encouragement. This work was supported by the Natural Sciences and Engineering Research Council of Canada. \section*{Appendix} \subsection*{Proof of Lemma \ref{lem:bernoulli}} \begin{proof} The lemma is trivially satisfied for $k=0$. For $k > 0$ and $B_i \subseteq B$ we will express \begin{align} \label{eq:condoncount} \mathbb{P}(\{X_i \in B_i\}_{i=1}^k | N(B) = k) \end{align} in terms of counts. The difficulty lies in the possible overlap of the $B_i$s, so we consider the $2^k$ sets of the form \begin{align*} A_j = B_1^* \cap B_2^* \cap \ldots \cap B_k^* \end{align*} where each $*$ is either blank or a complement, and $A_1$ is interpreted as $B \cap B_1^c \cap \ldots \cap B_k^c$. The $A_j$ form a disjoint partition of $B$, \begin{align*} B_i = \cup_{j \in I(i)} A_j, \quad B = \cup_{j=1}^{2^k} A_j, \end{align*} where $I(i) \subseteq \{1, \ldots, 2^k\}$ is some subset of indices. Let $\Ic = I(1) \times I(2) \times \ldots \times I(k)$, so that each $s \in \Ic$ is a vector of indices $(s_1, s_2, \ldots, s_k)$ associated with the disjoint events $\{X_i \in A_{s_i}\}_{i=1}^k$. Thus, \begin{align*} \mathbb{P}(\{X_i \in B_i\}_{i=1}^k | N(B) = k) = \sum_{s \in \Ic} \mathbb{P}(\{X_i \in A_{s_i}\}_{i=1}^k | N(B) = k).
\end{align*} For $s \in \Ic$, let $n_j(s) = \# \{i : s_i = j\}$ be the number of indices in $s$ equal to $j$ and notice that $\sum_{j=1}^{2^k} n_j(s) = k$. To relate the probability of a specific assignment $\{X_i \in A_{s_i}\}_{i=1}^k$ to the counts $\{N(A_j) = n_j(s)\}_{j=1}^{2^k}$, we discount by all ways of arranging the $k$ points that result in the same counts. \begin{align*} \mathbb{P}(\{X_i \in A_{s_i}\}_{i=1}^k | N(B) = k) &= \frac{\prod_{j=1}^{2^k}n_j(s)!}{k!}\frac{\mathbb{P}(\{N(A_j) = n_j(s)\}_{j=1}^{2^k})}{\mathbb{P}(N(B) = k)}\\ &= \frac{\prod_{j=1}^{2^k} \mu(A_j)^{n_j(s)} }{\mu(B)^k}. \end{align*} Thus (\ref{eq:condoncount}) is equal to \begin{align*} \sum_{s \in \Ic} \frac{\prod_{j=1}^{2^k} \mu(A_j)^{n_j(s)}}{\mu(B)^k} &= \prod_{i=1}^k \frac{\sum_{j \in I(i)} \mu(A_j)}{\mu(B)} = \prod_{i=1}^k \frac{\mu(B_i)}{\mu(B)}. \end{align*} \end{proof} \subsection*{Clutter posterior} This example is taken exactly from \cite{maddison2014astarsamp}. The clutter problem \citep{minka2001expectation} is to estimate the mean $\theta \in \mathbb{R}^n$ of a Normal distribution under the assumption that some points are outliers. The task is to sample from the posterior $P$ over $\theta$ given some empirical sample $\{x_i\}_{i=1}^N$. \begin{align*} f_i(\theta) &= \frac{0.5 \exp(- 0.5 \lVert \theta - x_i\rVert^2 ) }{(2\pi)^{n/2}} + \frac{0.5\exp(- 0.5 \lVert x_i\rVert^2 /100^2) }{100^n (2\pi)^{n/2}}\\ \log g(\theta) &= -\frac{\lVert\theta\rVert^2}{8} \quad \log f(\theta) = \log g(\theta) + \sum_{i=1}^N \log f_i(\theta)\\ (a, b] &= \{y : a_d < y_d \leq b_d\} \text{ for } a, b \in \mathbb{R}^n\\ M((a, b]) &= \prod_{i=1}^N f_i(x^*(a, b, x_i)) \quad x^*(a, b, x)_d = \begin{cases} a_d & \text{if } x_d < a_d\\ b_d & \text{if } x_d > b_d\\ x_d & \text{o.w.
}\\ \end{cases}\\ \mathrm{split}((a,b], x) &= \{(a,b] \cap \{y : y_s \leq x_s\}, (a,b] \cap \{y : y_s > x_s\}\}\\ &\text{where } s = \argmax_d b_d - a_d \end{align*} Our dataset was 6 points $x_i \in \mathbb{R}^n$ of the form $x_i = (a_i, a_i, \ldots, a_i)$ for $a_i \in \{-5, -4, -3 , 3, 4, 5\}$. \subsection*{Robust Bayesian regression} This example is an adaptation of \cite{maddison2014astarsamp} with looser bounds. The model is a robust linear regression $y_i = w x_i + \epsilon_i$ where the noise $\epsilon_i$ is distributed as a standard Cauchy and $w$ is a standard Normal. The task is to sample from the posterior $P$ over $w$ given some empirical sample $\{(x_i, y_i)\}_{i=1}^N$. \begin{align*} \log g(w) &= -\frac{w^2}{8}\\ \log f(w) &= \log g(w) - \sum_{i=1}^N \log(1 + (wx_i - y_i)^2)\\ M((a, b]) &= \prod_{i=1}^N M_i((a,b]) \quad M_i((a, b]) = \begin{cases} \exp(a) & \text{if } y_i/x_i < a\\ \exp(b) & \text{if } y_i/x_i > b\\ \exp(y_i/x_i) & \text{o.w. }\\ \end{cases}\\ \mathrm{split}((a, b], x) &= \{(a, x], (x, b]\} \end{align*} The dataset was generated by setting $w^* = 2$; $x_i \sim \mathrm{Normal}(0, 1)$ and $y_i = w^* x_i + \epsilon$ with $\epsilon \sim \mathrm{Normal}(0, 0.1^2)$ for $i \leq N/2$; and $x_i = x_{i - N/2}$ and $y_i = - y_{i-N/2}$ for $i > N/2$. \subsection*{Attractive fully connected Ising model} This is an adaptation of \cite{kim2016lprelaxsamp}. The attractive fully connected Ising model is a distribution over $x \in \{-1, 1\}^n$ described by parameters $w_{ij} \sim \mathrm{Uniform}[0, 0.2]$ and $f_i \sim \mathrm{Uniform}[-1, 1]$. \begin{align*} \log g(x) &= 0\\ \log f(x) &= \sum_i f_i x_i + \sum_{i < j \leq n} w_{ij} x_ix_j \end{align*} We considered subsets of the form $B = \{x : x_i = b_i, i \in I\}$ where $I \subseteq \{1, \ldots, n\}$ and $b_i \in \{0, 1\}$. We split on one of the unspecified variables $x_i$ by taking the variable whose linear program relaxation was closest to 0.5.
\begin{align*} \mathrm{split}(B, x) = \{B \cap \{x : x_i = 0\}, B \cap \{x : x_i = 1\}\} \end{align*} $\log M(B)$ is computed by solving a linear program relaxation of the following type of integer program. Let $b_i \in \{0, 1\}$ for $1 \leq i \leq n$ and $b_{ijkl} \in \{0, 1\}$ for $1 \leq i < j \leq n$ and $k, l \in \{0, 1\}$. \begin{align*} \min_x \sum_i -f_i b_i + f_i (1-b_i) + \sum_{1\leq i < j \leq n} \sum_{k,l \in \{0,1\}} (-1)^{kl + (1-l)(1-k)}w_{ij}b_{ijkl} \end{align*} subject to the constraints for $1 \leq i < j \leq n$, \begin{align*} \sum_{l \in \{0,1\}} b_{ij0l} = 1 - b_i \quad \sum_{k \in \{0,1\}} b_{ijk0} = 1 - b_j\\ \sum_{l \in \{0,1\}} b_{ij1l} = b_i \quad \sum_{k \in \{0,1\}} b_{ijk1} = b_j \end{align*} As the subsets $B$ narrowed, we simply solved new linear programs with constants substituted for the fixed variables.
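As a concrete illustration, the clutter bound $M((a,b])$ defined earlier can be computed by clipping each $x_i$ into the box: the outlier component of $f_i$ is constant in $\theta$, so each factor is maximized at the box point nearest $x_i$. A minimal sketch with illustrative one-dimensional data of our own:

```python
import numpy as np

def f_i(theta, x):
    # single-datum clutter likelihood f_i(theta), following the formula above
    n = len(x)
    inlier = 0.5 * np.exp(-0.5 * np.sum((theta - x) ** 2)) / (2 * np.pi) ** (n / 2)
    outlier = 0.5 * np.exp(-0.5 * np.sum(x ** 2) / 100.0 ** 2) / (100.0 ** n * (2 * np.pi) ** (n / 2))
    return inlier + outlier

def bound(a, b, data):
    # M((a, b]) = prod_i f_i(x*(a, b, x_i)), where x* clips x_i into the box
    return np.prod([f_i(np.clip(x, a, b), x) for x in data])

a, b = np.array([-1.0]), np.array([1.0])
data = [np.array([2.0]), np.array([-0.5])]   # illustrative observations
print(bound(a, b, data))
```

Each factor upper-bounds $\sup_{\theta \in (a,b]} f_i(\theta)$, so the product dominates $f(\theta)/g(\theta)$ everywhere in the box.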
\section*{Abstract} \begin{abstract} We develop separability criteria to identify non-$k$-separability $(k = 2,3,\ldots ,n)$ and genuine multipartite entanglement in different classes of mixed $n$-partite quantum states using elements of density matrices. With the help of these criteria, we detect non-$k$-separability in $n$-qudit GHZ and W states respectively added with white noise. We also discuss the experimental implementation of our criteria by means of local observables. \end{abstract} \section{Introduction} \label{intro} Though a system of $n$ quantum subsystems can exhibit various kinds of entanglement, the focus is on genuine multipartite entanglement because it can be used for various quantum information and computational tasks \cite{horo2009,guhne2009}. An exponential speed-up of quantum computation requires multipartite entanglement \cite{jozsa2003}. Two widely studied multipartite entangled states are (i) Greenberger-Horne-Zeilinger (GHZ) and (ii) W states. These two states are inequivalent maximally entangled states which find applications in diverse topics including quantum teleportation \cite{karl1998}, quantum secret sharing \cite{hill1999}, superdense coding \cite{agra2006}, splitting quantum information \cite{zheng2006} and enhancing the computational power \cite{hond2006}. The stronger nonlocality displayed by these two multipartite entangled states has also led to much theoretical and experimental interest in quantum physics, see for example Refs.\cite{green1990,banc2009,cabel2002}. Identifying entanglement in arbitrary multipartite states is not an easy task because in these systems one encounters many types of multiparticle entanglement. For example, the multipartite states may possess partially separable or $k$-separable and partially entangled or $k$-party entangled states \cite{horo2009,guhne2009}. An $n$-partite system is $k$-separable if it can be separated into $k$ parts.
For example, a $4$-partite state, $ABCD$, is $3$-separable if it can be separated into any one of the following forms, namely $A|B|CD$, $A|C|BD$, $A|D|BC$, $B|C|AD$, $B|D|AC$ and $C|D|AB$. More precisely, an $n$-partite pure quantum state $|\psi_{k-\textrm{sep}}\rangle$ is called $k$-separable $(k=2,3,\ldots,n)$ if and only if it can be written as a product of $k$ substates, that is \begin{eqnarray} \label{k0} |\psi_{k-\textrm{sep}}\rangle = |\psi_1\rangle \otimes |\psi_2\rangle \otimes \ldots \otimes|\psi_k\rangle, \end{eqnarray} where $|\psi_i\rangle$, $i=1,2,\ldots,k$, represents the state of a single subsystem or a group of subsystems \cite{gabr2010}. A mixed state $\rho_{k-\textrm{sep}}$ is called $k$-separable, if it can be decomposed into pure $k$-separable states, that is \begin{eqnarray} \label{k2} \rho_{k-\textrm{sep}} = \sum_i p_i~ \rho_{k-\textrm{sep}}^i, \end{eqnarray} where $\rho_{k-\textrm{sep}}^i$ might be $k$-separable under different partitions, $p_i > 0$ and $\sum_i p_i = 1$. An $n$-partite state is fully separable if $k=n$ and biseparable if $k=2$. States that are not fully separable and not biseparable are called nonseparable and genuinely multipartite entangled states, respectively. The aim of the present work is to identify non-$k$-separability in the GHZ and W classes of multipartite states. Several conditions have been proposed to detect genuine multipartite entanglement and nonseparability of multipartite states \cite{gabr2010,dur1999,dur2000,dur2001,seev2002,uff2002,lask2005,toth2005,seev2008,hube2010,gao2010,guhne2010,gao2011,gao2013}. To name a few, we cite the following: Seevinck and Uffink have proposed a set of inequalities which can characterize various levels of partial separability and entanglement in multiqubit states \cite{seev2008}.
Huber {\it et al.} have proposed a general framework to obtain bilinear inequalities which can characterize genuinely multipartite entangled mixed quantum states in arbitrary-dimensional systems \cite{hube2010}. Building on the latter, Gabriel {\it et al.} have developed an easily computable criterion to detect $k$-nonseparability in mixed multipartite states \cite{gabr2010}. Recently, G\"uhne and Seevinck have proposed biseparability and full separability criteria for different classes of $3$-qubit and $4$-qubit states \cite{guhne2010}. These conditions were associated with density matrix elements. Later, Gao and Hong generalized the separability criteria proposed by G\"uhne and Seevinck to $n$-qubit and $n$-qudit states and proved that their criteria are applicable for any partition \cite{gao2011}. The $k$-nonseparability criteria for arbitrary dimensional mixed multipartite states were subsequently developed in Ref.\cite{gao2013}. In the present work, we extend the criteria given by Gao and Hong \cite{gao2011} to $k$-separable $n$-partite states. For a given $k$, violation of our criteria reveals non-$k$-separability. With the help of our criteria one can detect non-$k$-separability in different classes of arbitrary dimensional $n$-partite states. We also illustrate the non-$k$-separability of mixed $n$-partite states with two examples. We formulate the separability conditions in terms of density matrix elements since these elements can be measured efficiently with local observables \cite{seev2008,gao2010,guhn2007,lu2013}. The conditions presented in this paper are experimentally implementable without full quantum state tomography. We also discuss how many local observables are required to implement the present criteria in experiments. The paper is organized as follows. In the following section, we derive separability criteria to identify non-$k$-separable mixed $n$-partite quantum states.
In Sec.\ref{sec3} we illustrate our criteria by considering $n$-qudit GHZ and W states respectively mixed with white noise. In Sec.\ref{sec4} we calculate the number of local observables required to evaluate the criteria given in this work. Finally, we summarize our conclusions in Sec.\ref{con}. \section{Criteria for non-$k$-separability } \label{sec2} In this section, we present the separability criteria to identify different classes of non-$k$-separable $n$-qudit states. We derive these conditions based on the ideas given in Refs.\cite{guhne2010,gao2011}. To begin, we present the separability condition which is applicable for a class of GHZ multipartite states. \\ \\ {\bf Criterion 1. } Let $\rho$ be a $k$-separable $n$-partite density matrix acting on Hilbert space $\mathcal{H}_1\otimes \mathcal{H}_2 \otimes \ldots \otimes \mathcal{H}_n$, where dim $\mathcal{H}_l=d_l$, $l=1,2,\ldots,n$. Then \begin{align} \label{a5} (2^{k-1}-1)~|\rho_{1,d_{1}d_{2}...d_{n}}| \leq\frac{1}{2}\sum_{j\in A} \sqrt{\rho_{j,j}\rho_{d_{1}d_{2}...d_{n}-j+1,d_{1}d_{2}...d_{n}-j+1}}. \end{align} Here $A=\{\sum_{l=1}^{n-1}j_ld_{l+1}\cdots d_n+j_n+1 ~ | ~ j_l=0, d_l-1, (j_1,j_2,\cdots, j_n)\neq (0,0,\cdots,0),(d_1-1,d_2-1,\cdots,d_n-1)\}$. An $n$-partite state $\rho$ which violates the inequality (\ref{a5}) is a non-$k$-separable $n$-partite state. Suppose a state violates the inequality (\ref{a5}) for $k=2$; then $\rho$ is a non-$2$-separable $n$-partite state, that is, a genuinely $n$-partite entangled state \cite{gao2013}. We obtain the above inequality (\ref{a5}) from the biseparability criterion of the $n$-qudit case \cite{gao2011}. The inequality given above can be verified in the same manner as Theorem 2 in Ref.\cite{gao2011} was proved. Since the underlying ideas are exactly the same we do not present the details here. The additional term $(2^{k-1}-1)$ appearing in (\ref{a5}) determines the non-$k$-separability of $n$-partite states.
In the following, we formulate another criterion applicable for a class of $n$-qudit W states \cite{kim2008}, which is not discussed in the earlier works \cite{guhne2010,gao2011}. We mention here that the $n$-qudit W class state has several generalizations, and in this work we consider only the generalization that was considered in Ref.\cite{kim2008}. To derive the condition for the $n$-qudit W states, we generalize the biseparability criterion of the $n$-qubit case \cite{gao2011} to the $n$-qudit case and obtain the following form of inequality which is suitable for non-$k$-separable $n$-qudit states. \\ \\ {\bf Criterion 2.} Let $\rho = (\rho_{i,j})_{d^n\times d^n}$ be an $n$-qudit density matrix. If $\rho$ is $k$-separable, then its density matrix elements fulfill \begin{align} \label{t3w} \sum\limits_{{1\leq j<i\leq n},\atop p,q=1,2,\ldots,d-2,d-1}|\rho_{p\times d^{n-i}+1,q\times d^{n-j}+1}| \leq&\sum\limits_{{1\leq j<i\leq n},\atop p,q=1,2,\ldots,d-2,d-1}\sqrt{\rho_{1,1}\rho_{p\times d^{n-i}+q\times d^{n-j}+1,p\times d^{n-i}+q\times d^{n-j}+1}} \notag\\ &\quad +\left(\frac{n-k}{2}\right) \sum\limits_{1\leq i\leq n,\atop p=1,2,\ldots,d-2,d-1}\rho_{p\times d^{n-i}+1,p\times d^{n-i}+1}. \end{align} An $n$-qudit state $\rho$ which violates the inequality (\ref{t3w}) is a non-$k$-separable $n$-partite state. If the inequality (\ref{t3w}) is violated for $k=2$, then the state is a genuinely $n$-partite entangled one. The criterion given in (\ref{t3w}) can also be verified in the same manner as Theorem 3 in Ref.\cite{gao2011} was proved. One can deduce the non-$k$-separability criteria for $n$-qubit states by setting $d = 2$ in Eqs.(\ref{a5}) and (\ref{t3w}). We conclude this section by noting that in criteria $1$ and $2$, $\rho_{i,j}$ represents the element in the $i^{\text{th}}$ row and $j^{\text{th}}$ column of the density matrix.
\section{Examples} \label{sec3} In this section, we analyze the non-$k$-separability of the $n$-partite GHZ state mixed with white noise through criterion 1. We then investigate the non-$k$-separability of the $n$-partite W state mixed with white noise through criterion 2. In both examples we consider the $3$-qutrit and $4$-qutrit cases and explain their nonseparability and genuine multipartite entanglement in detail. \subsection{$n$-qudit GHZ state mixed with white noise} To illustrate criterion 1, we consider the $n$-qudit GHZ state mixed with white noise, \begin{align} \label{rdn}\rho_{dn} = p|GHZ_{dn}\rangle\langle GHZ_{dn}| + \frac{(1-p)} {d^n} I, \end{align} where $|GHZ_{dn}\rangle $ $= \frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i\rangle^{\otimes n}$ and $I$ is the identity operator \cite{hube2010}. Imposing the condition (\ref{a5}) on the state (\ref{rdn}), we obtain the following general function, namely \begin{align} \label{alph}\alpha_k^{n,d} = \frac{2^{n-1}-1}{2^{k-1}-1}\times \frac{1-p}{p\times d^{n-1}}. \end{align} The outcome $\alpha_k^{n,d} < 1$, for the given value of $k$ $(k=2,3,\ldots,n)$, confirms that the state is non-$k$-separable. To illustrate the non-$k$-separability, let us consider the $3$-qutrit ($n=3$ and $d=3$) and $4$-qutrit ($n=4$ and $d=3$) cases in (\ref{rdn}). For these two cases, Eq.(\ref{alph}) turns out to be $\alpha_k^{3,3}=\frac{(1-p)}{3p(2^{(k-1)}-1)}$ and $\alpha_k^{4,3}=\frac{7(1-p)}{27p(2^{(k-1)}-1)}$. We plot these two functions for various $k$ $(2\leq k\leq n)$ values and depict the outcome in Figs.\ref{f1} and \ref{f2} \cite{hube2013}. In these two figures, the region covered by $\alpha_k^{n,d} < 1$ brings out the non-$k$-separability. For the state (\ref{rdn}), the criterion (\ref{a5}) acts as strongly as the PPT criterion and the criteria developed in Refs.\cite{gabr2010,gao2010} for detecting nonseparable quantum states.
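Criterion 1 and Eq.(\ref{alph}) can be cross-checked numerically. The sketch below uses the smallest nontrivial instance of the state (\ref{rdn}), three qubits ($n=3$, $d=2$), where violation at $k=2$ occurs for $p > 3/7$; the value $p = 0.6$ is an illustrative choice:

```python
import numpy as np

n, d = 3, 2                  # three qubits; test criterion 1 with k = 2
D = d ** n
p = 0.6                      # above the 3/7 threshold for this instance

ghz = np.zeros(D)
ghz[0] = ghz[-1] = 1.0 / np.sqrt(2)                   # (|000> + |111>)/sqrt(2)
rho = p * np.outer(ghz, ghz) + (1 - p) / D * np.eye(D)

k = 2
lhs = (2 ** (k - 1) - 1) * abs(rho[0, D - 1])
# For qubits the index set A is every index except the first and last (1-based 2..D-1).
rhs = 0.5 * sum(np.sqrt(rho[j, j] * rho[D - 1 - j, D - 1 - j]) for j in range(1, D - 1))

alpha = (2 ** (n - 1) - 1) / (2 ** (k - 1) - 1) * (1 - p) / (p * d ** (n - 1))
print(lhs > rhs, alpha < 1)   # both hold: genuine 3-partite entanglement detected
```

The violation of the inequality and the condition $\alpha_k^{n,d} < 1$ agree, as they must, since $\alpha$ is the ratio of the two sides.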
\begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{fig1} \caption{non-$k$-separability of $3$-qutrit GHZ state mixed with white noise} \label{f1} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{fig2} \caption{non-$k$-separability of $4$-qutrit GHZ state mixed with white noise} \label{f2} \end{center} \end{figure} \subsection{$n$-qudit W state mixed with white noise} To illustrate the criterion 2, we consider the $n$-qudit W state with additional isotropic (white) noise, \begin{align} \label{rwn}\rho_{W_n} = (1-p) |W_n^d\rangle \langle W_n^d| + p \frac{I}{d^n}, \end{align} where $|W_n^d\rangle = \frac{1}{\sqrt{n\times (d-1)}} \big({\sum_{i=1}^{d-1}}$ $(|00\ldots i\rangle + |0\ldots i0\rangle$ $+ \cdots +|i0\ldots 0\rangle)\big)$ and $I$ is the identity operator. Applying the inequality (\ref{t3w}) on the state $\rho_{W_n}$, given above, we find \begin{align} \label{bet} \beta_k^{n,d} = \left(\frac{p~n~(d-1)}{d^n~(1-p)}\right)+ \left(n(d-1)+\frac{n^2(d-1)^2~p}{d^n~(1-p)}\right) \times \left(\frac{n-k}{2}\right)\times\frac{1}{\left(\sum\limits_{i=1}^{n(d-1)-1} i - n \sum\limits_{j=1}^{d-2} j\right)}. \end{align} An $n$-partite state (\ref{rwn}) is non-$k$-separable if it obeys the inequality $\beta_k^{n,d} < 1$ for a given $k$. In other words the genuine multipartite entanglement of $\rho_{W_n}$ can be confirmed with $\beta_2^{n,d} < 1$. 
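The function $\beta_k^{n,d}$ in Eq.(\ref{bet}) can be evaluated directly; the sketch below transcribes the formula using exact rational arithmetic. For the $3$-qutrit case with $k=2$, the formula gives $\beta = 1/4$ for the pure W state ($p=0$) and $\beta = 1$ exactly at $p = 27/37$; these spot checks are our own, for illustration:

```python
from fractions import Fraction as F

def beta(n, d, k, p):
    # direct transcription of the expression for beta_k^{n,d} above
    q = F(p) / (1 - F(p))                                   # p / (1 - p)
    denom = sum(range(1, n * (d - 1))) - n * sum(range(1, d - 1))
    first = q * n * (d - 1) * F(1, d ** n)
    second = (n * (d - 1) + q * n ** 2 * (d - 1) ** 2 * F(1, d ** n)) \
             * F(n - k, 2) * F(1, denom)
    return first + second

print(beta(3, 3, 2, F(0)))        # 1/4 < 1: the pure 3-qutrit W is genuinely entangled
print(beta(3, 3, 2, F(27, 37)))   # 1: boundary of detection for k = 2
```

Exact fractions avoid any floating-point ambiguity near the threshold $\beta_k^{n,d} = 1$.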
\begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{fig3} \caption{non-$k$-separability of $3$-qutrit W state mixed with white noise} \label{f3} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=0.5\linewidth]{fig4} \caption{non-$k$-separability of $4$-qutrit W state mixed with white noise} \label{f4} \end{center} \end{figure} To identify the non-$k$-separabilities of the $3$-qutrit and $4$-qutrit mixed states $\rho_{W_n}$ in (\ref{rwn}), we consider the functions $\beta_k^{3,3}$ ($n=3,d=3$ in Eq.(\ref{bet})) and $\beta_k^{4,3}$ ($n=4,d=3$ in Eq.(\ref{bet})). We analyze these two functions for various $k$ values and plot the results in Figs.\ref{f3} and \ref{f4}. \section{Experimental feasibility} \label{sec4} We formulated the separability criteria in terms of density matrix elements. The conditions given above are also experimentally accessible by means of local observables of the form $\mathcal{L}=\mathcal{A}_1\otimes \mathcal{A}_2 \otimes\ldots\otimes \mathcal{A}_n$, where $\mathcal{A}_l$ acts on the $l^{\text{th}}$ subsystem. In the following, we calculate the number of local observables required to measure, in experiments, the matrix elements that appear in the inequalities (\ref{a5}) and (\ref{t3w}). For this purpose, we redefine the observables given in Ref.\cite{gao2010} to determine the elements in higher dimensional multipartite states. Following the method given in Refs.\cite{seev2008,gao2010,guhn2007}, we determine the modulus of the far-off antidiagonal element, $|\rho_{1,d_1d_2\ldots d_n}|$, present in Eq.(\ref{a5}), by measuring the observables $Q$ and $\tilde{Q}$.
The observables $Q$ and $\tilde{Q}$, whose expectation values satisfy $\langle Q\rangle = 2 \text{Re} (\rho_{1,d_1d_2\ldots d_n})$ and $\langle\tilde{Q}\rangle = -2 \text{Im} (\rho_{1,d_1d_2\ldots d_n})$, can be represented as \begin{subequations} \begin{align} Q=& {|0\rangle\langle (d_1-1)(d_2-1)\ldots (d_n-1)|}^{\otimes n} + {|(d_1-1)(d_2-1)\ldots (d_n-1)\rangle\langle 0|}^{\otimes n}, \\ \tilde{Q}=& -i{|0\rangle\langle (d_1-1)(d_2-1)\ldots (d_n-1)|}^{\otimes n} + i {|(d_1-1)(d_2-1)\ldots (d_n-1)\rangle\langle 0|}^{\otimes n}. \end{align} \end{subequations} Then the far-off antidiagonal element can be obtained from the two families of measurement settings $\mathcal{M}_l$ and $\tilde{\mathcal{M}_l}$, given by \begin{subequations} \begin{align} \label{ml} \mathcal{M}_l =& \bigotimes_{j=1}^n \left[\cos\left(\frac{l\pi}{n}\right) R_l^j + \sin\left(\frac{l\pi}{n}\right) \tilde{R}_l^j\right], \\ \label{mlt} \tilde{\mathcal{M}_l} =& \bigotimes_{j=1}^n \left[\cos\left(\frac{l\pi+\frac{\pi}{2}}{n}\right) R_l^j + \sin\left(\frac{l\pi+\frac{\pi}{2}}{n}\right) \tilde{R}_l^j\right], \end{align} \end{subequations} where $R_l^j = |y_l^j\rangle\langle x_l| + |x_l\rangle\langle y_l^j|$, $\tilde{R}_l^j = i |y_l^j\rangle\langle x_l| - i |x_l\rangle\langle y_l^j|$, $|x_l\rangle = |0 \rangle$, $|y_l^j\rangle = |d_j-1\rangle$, $d_j$ is the dimension of the $j^{\text{th}}$ subsystem, $j=1,2,\ldots,n$ and $l=1,2,\ldots,n$. The operators (\ref{ml}) and (\ref{mlt}) also obey \begin{align} \sum_{l=1}^n (-1)^l \mathcal{M}_l = n Q, \qquad \sum_{l=1}^n (-1)^l \tilde{\mathcal{M}_l} = n \tilde{Q}, \end{align} which can be verified in the same way as done in Ref.\cite{guhn2007}. Therefore, the real and imaginary parts of an antidiagonal element of an $n$-partite state can be determined by $2n$ local observables.
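For the qubit case ($d_j = 2$, so $|x_l\rangle = |0\rangle$ and $|y_l^j\rangle = |1\rangle$), the building blocks reduce to the Pauli operators $R_l^j = X$ and $\tilde{R}_l^j = Y$, and the identity $\sum_{l=1}^n (-1)^l \mathcal{M}_l = nQ$ can be verified numerically; a sketch for $n = 3$ (this qubit reduction is our reading of the definitions above):

```python
import numpy as np

n = 3   # number of qubits
X = np.array([[0, 1], [1, 0]], dtype=complex)       # R_l^j for qubits
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)    # R~_l^j for qubits

def kron_all(ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# S = sum_l (-1)^l M_l with M_l = [cos(l pi/n) X + sin(l pi/n) Y]^{tensor n}
S = np.zeros((2 ** n, 2 ** n), dtype=complex)
for l in range(1, n + 1):
    theta = l * np.pi / n
    S += (-1) ** l * kron_all([np.cos(theta) * X + np.sin(theta) * Y] * n)

Q = np.zeros((2 ** n, 2 ** n), dtype=complex)       # |0..0><1..1| + |1..1><0..0|
Q[0, -1] = Q[-1, 0] = 1
print(np.allclose(S, n * Q))
```

The mixed tensor terms cancel under the alternating sum, leaving only the two extreme antidiagonal projectors.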
Now we determine the modulus of the off-diagonal elements, $|\rho_{p d^{n-i}+1,q d^{n-j}+1}|$, which appear on the left-hand side of inequality (\ref{t3w}), by measuring the observables $O_{ab}^{rs}$ and $\tilde{O}_{ab}^{rs}$, where $\langle O_{ab}^{rs}\rangle = 2\text{Re}(\rho_{p d^{n-i}+1,q d^{n-j}+1})$ and $\langle \tilde{O}_{ab}^{rs}\rangle = -2\text{Im}$ $(\rho_{p d^{n-i}+1,q d^{n-j}+1})$. Without loss of generality, let $r<s$; then they can be written as \begin{subequations} \begin{align} O_{ab}^{rs} =& \frac{1}{2} T^{\otimes (r-1)} \otimes M_a \otimes T^{\otimes (s-r-1)} \otimes N_b \otimes T^{\otimes (n-s)}\qquad\qquad \notag\\ &+\frac{1}{2} T^{\otimes (r-1)} \otimes \tilde{M}_a \otimes T^{\otimes (s-r-1)} \otimes \tilde{N}_b \otimes T^{\otimes (n-s)},\\ \tilde{O}_{ab}^{rs} =& \frac{1}{2} T^{\otimes (r-1)} \otimes M_a \otimes T^{\otimes (s-r-1)} \otimes \tilde{N}_b \otimes T^{\otimes (n-s)} \notag\\ &-\frac{1}{2} T^{\otimes (r-1)} \otimes \tilde{M}_a \otimes T^{\otimes (s-r-1)} \otimes N_b \otimes T^{\otimes (n-s)}. \end{align} \end{subequations} Here $T=|x\rangle\langle x|$, $M_a = |a\rangle\langle x| + |x\rangle\langle a|$, $\tilde{M}_a = i|a\rangle\langle x| - i|x\rangle\langle a|$, $N_b = |b\rangle\langle x| + |x\rangle\langle b|$, $\tilde{N}_b = i |b\rangle\langle x| - i |x\rangle\langle b|$, $x=0$ and $a,b = \{ 1,2,\ldots,d-1\}$. Therefore, an off-diagonal element can be determined by measuring its real and imaginary parts, each of which is associated with two local observables. Consequently, the terms appearing on the left-hand side of the inequality (\ref{t3w}) can be determined by $4(d-1)\sum_{i=1}^{n-1} i(d-1)$ local observables.
Finally, the diagonal elements that appear on the right-hand side of expressions (\ref{a5}) and (\ref{t3w}) can be implemented by the following local observables, namely \begin{align} \label{diag} |x_1 x_2 \ldots x_n\rangle\langle x_1 x_2 \ldots x_n | = \bigotimes_{i=1}^n T_{m_i}, \end{align} with $T_{m_i}= |m_i\rangle\langle m_i |$, $m_i = 0,1,2,\ldots,d_i-1$. It is clear from (\ref{diag}) that only one local observable is required to determine a diagonal matrix element. We note here that for criterion 1, even though the total number of density matrix elements of an $n$-partite state is $d_1^2\times d_2^2\times d_3^2\times\ldots\times d_n^2$, we need to measure only $2^n-1$ of them. These require $2^n+2n-2$ local observables in order to identify the non-$k$-separability by criterion $1$. Similarly, for criterion 2, even though the total number of density matrix elements of an $n$-partite state is $d^{2n}$, we need to measure only $2\times(d-1)\sum_{i=1}^{n-1} (i\times (d-1)) + (n\times(d-1)+1)$ of them. In other words, one requires in total $5(d-1)\sum_{i=1}^{n-1} i(d-1)+(n(d-1)+1)$ local observables to test criterion $2$. Since the number of elements to be measured is much smaller than the total number of elements, far fewer measurements are required compared to the $(d_1^2-1)(d_2^2-1)\ldots(d_n^2-1)$ measurements needed for quantum state tomography. \section{Conclusion} \label{con} In this work, we have extended the criteria given by Gao and Hong to $k$-separable $n$-partite states. With the help of our criteria $1$ and $2$ one can identify non-$k$-separability $(k = 2,3,\ldots ,n)$ and genuine $n$-partite entanglement in mixed quantum states. We have verified non-$k$-separability of different classes of mixed multipartite states.
We have also given two general functions, namely $\alpha_k^{n,d}$ and $\beta_k^{n,d}$, to detect non-$k$-separability in the $n$-qudit GHZ state and $n$-qudit W state, respectively, each added with white noise. Our criteria can also identify the nonseparability of a mixture of GHZ and W states added with white noise. We have also shown that the criteria developed in this paper are computable and implementable in experiments. They require far fewer measurements than full quantum state tomography. \\ \noindent Acknowledgement: We would like to thank the referee for the valuable suggestions to improve the quality of this paper.
\section{Introduction} \IEEEPARstart{M}{illimetre}-Wave (mmW) has two main advantages which make it desirable in 5G networks. The first is its abundant frequency spectrum, which makes it eligible to accommodate higher capacity, and the second is its high propagation loss, which makes it promising in small-cell scenarios in 5G. Hence, mmW communication is a key enabler of the high data rate demands in future network generations, due to its ability to accommodate high volumes of data. Besides, exploiting the full mmW capacity is highly dependent on the multiple access technique which assigns different users to the large amount of spectrum resources. Among the well-known approaches, non-orthogonal multiple access (NOMA) is attracting researchers' interest to support high data rate demands and massive connectivity in 5G and beyond \cite{BenjebbourSaito, AlaviYamchi}. The NOMA technique was introduced to outperform other well-known approaches such as time division multiple access (TDMA), orthogonal frequency division multiple access (OFDMA), and zero-forcing (ZF) in terms of spectral and energy efficiency \cite{ChenZhang, XuDing, DingSchober, HanifDing}. NOMA allocates frequency and time resources to all the available users simultaneously, thanks to power-domain superposition \cite{DingYang,SaitoKishiyama, Choi, DingPeng, DaiWang, WangDai}. NOMA exploits power-domain superposition by means of successive interference cancellation (SIC) at the receiver. In essence, lower power is assigned to the user with a better channel and more transmit power is allocated to the user with a worse channel condition. Then, SIC is employed at the receiver to remove the weaker users' interference. Specifically, a reasonable complexity is added at the receiver to handle the interference caused by non-orthogonal resource allocation \cite{CumananKrishna, DaiWang}.
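The power-domain superposition and SIC decoding described above can be illustrated for a two-user downlink toy case (all channel gains, noise power, and the power split are illustrative values of our own, not taken from the cited works):

```python
import numpy as np

h = np.array([1.0, 0.3])        # channel gains: user 1 strong, user 2 weak
sigma2 = 0.1                    # noise power
p = np.array([0.2, 0.8])        # power split: more power to the weak user

# Weak user decodes its own signal, treating the strong user's signal as noise.
R2 = np.log2(1 + p[1] * h[1] ** 2 / (p[0] * h[1] ** 2 + sigma2))
# Strong user first cancels the weak user's signal via SIC, then decodes its own.
R1 = np.log2(1 + p[0] * h[0] ** 2 / sigma2)
print(R1 + R2)                  # NOMA sum-rate in bits/s/Hz
```

Both users occupy the same time-frequency resource; the decoding order and the asymmetric power split are what make the superposition separable at the receivers.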
The most crucial difference between mmW-NOMA and conventional NOMA is the coupling of the power allocation and beamforming problems. In \cite{DingFan}, one of the pioneering works on mmW-NOMA, a random steering beamformer was introduced, restricted to users that are close to one another. Energy efficiency was optimized with respect to power allocation and sub-channel assignment in \cite{FangZhang}, where no beamformer was designed. In \cite{HanifDing}, a fully digital beamformer was introduced to optimize the sum-rate. Moreover, in \cite{XiaoZhu} the joint problem was considered, but the optimization was performed only on the most powerful path of the channel. Further, in \cite{PangWu} joint power allocation and beamforming were optimized using the Signal-to-Leakage-plus-Noise Ratio (SLNR) in order to decouple power allocation from beamforming. In contrast to all of these prior works, we use recently introduced machine learning (ML) tools to optimize the joint power allocation and beamformer design in mmW-NOMA. ML techniques have recently been deployed in wireless communication networks as optimization tools. Among them, Reinforcement Learning (RL) is a closed-loop optimization procedure based on the actions of an agent and the reactions of its environment. Continuous problems with large state spaces can be handled with deep RL (DRL) approaches \cite{MismarEvans, LuongGagnon, WangLi, WangFeng, GeLiang}. In \cite{MismarEvans}, DRL is exploited to optimize joint beamforming, power control, and interference coordination in a 5G network; the authors used a deep Q-learning approach to estimate future rewards. Additionally, resource allocation in a cooperative UAV-assisted network is optimized by DRL in \cite{LuongGagnon}. Hybrid beamforming in a mmW Multi-User Multiple-Input Single-Output (MU-MISO) system was addressed with a multi-agent DRL-based approach in \cite{WangLi, WangFeng}. 
Further, DRL-based optimization of a distributed dynamic MISO system was considered in \cite{GeLiang}, where the optimal strategy was designed using deep Q-learning and cooperation among BSs. In this paper, we exploit DRL to jointly optimize the power allocation and the hybrid beamformer at the Base Station (BS) so as to increase the sum-rate of the users. The contributions of this work are summarized as follows. \begin{itemize} \item We model the joint power allocation and hybrid beamforming problem as the maximization of the sum-rate subject to a minimum guaranteed rate for each user and a total transmission power budget. This problem is non-convex and cannot be handled jointly by traditional approaches; hence, we introduce DRL to optimize the joint problem. \item We define state and action spaces in the RL framework so that the problem can be handled through agent-environment interactions. We take the real and imaginary parts of the channel responses between the BS and the users as the state space. In addition, a controlling variable $\alpha$ is defined to represent constraint feasibility; we treat $\alpha$ as a soft controlling variable computed from the ratio of each constrained quantity to its limit value. The hybrid beamforming weights and the power devoted to each user constitute the action space. \item Since the problem is continuous, we utilize Soft Actor-Critic (SAC) \cite{SAC} to handle it. The SAC algorithm obtains the optimal policy by jointly maximizing the expected reward and the entropy of the policy; in essence, SAC controls the policy's uncertainty given the state by maximizing its entropy. This criterion makes the algorithm more robust than other well-known approaches such as Deep Deterministic Policy Gradient (DDPG). 
It benefits from two different DNNs acting as actor and critic networks to approximate the action and the value function, respectively; the DNNs alleviate the complexity of the RL framework. \item The simulation results show the superiority of the proposed approach over two state-of-the-art baselines: TDMA, as an orthogonal multiple access (OMA) scheme, and NLOS-NOMA \cite{XiaoZhu}, as a NOMA scheme. Moreover, the impact of different parameters, including the number of antennas, the signal-to-noise ratio (SNR), and the minimum guaranteed rate, is studied separately. \end{itemize} The rest of the paper is organized as follows. After the system model and problem formulation in Section II, the proposed joint power allocation and hybrid beamformer design is presented in Section III. Numerical results are demonstrated in Section IV, and concluding remarks are given in Section V. \textit{Notations:} We use $a, \mathbf{a}, \mathbf{A}$ to denote a scalar, a vector, and a matrix, respectively. $\mathbb{E}$ denotes the expected value of its argument. Furthermore, $\mathcal{A}$ and $\mathcal{S}$ represent the action space and the state space, respectively, and $\mathbb{C}$ denotes the set of complex numbers. \section{System Model and Problem Formulation} \begin{figure} \includegraphics[width = \linewidth]{2.pdf} \centering \caption{mmW-NOMA system diagram for joint power allocation and hybrid beamforming} \label{Fig.SystemModel} \end{figure} We consider a millimetre-wave non-orthogonal multiple access (mmW-NOMA) downlink system in which an $N$-antenna Base Station (BS) transmits towards $K$ single-antenna users. There are $K < N$ RF chains at the BS, and $N_s$ independent data streams are transmitted from the BS to the user terminals. In essence, the BS exploits its $N$ antennas and the NOMA strategy to serve the $K$ users simultaneously. 
Without loss of generality, it is assumed that $K = N_s$ in order to achieve the maximum multiplexing gain. Specifically, the $N_s$ transmitted symbols are precoded by the digital beamformer (DBF) $\mathbf{D}\in\mathbb{C}^{K \times N_s}$ and then passed through $N$ phase shifters forming the analog beamformer (ABF) $\mathbf{A} \in \mathbb{C}^{N\times K}$; the ABF phase shifters have constant modulus. Hence, the overall beamformer matrix can be written as $\mathbf{W} = \mathbf{A} \mathbf{D}\in \mathbb{C}^{N \times N_s}$ with unit-norm columns $\mathbf{w}_k$ for $k = 1, 2, \dots, K$. With NOMA as the multiple access technique, the whole power budget of the BS is divided among the users, and user $k$ obtains power $p_k$, where \begin{equation} \sum\limits_{k=1}^{K}p_k = P \end{equation} is the total available power at the BS. Gathering the per-user powers on the main diagonal of a matrix, we can write the power matrix as $\mathbf{P} = \text{diag}\{p_1, p_2, \dots, p_K\}$. Consequently, the received symbol at user $k$ is \begin{equation} y_k = \mathbf{h}_k^H \sqrt{p_k}\mathbf{w}_k s_k + \sum\limits_{\bar{k} = 1, \bar{k}\neq k}^{K}\mathbf{h}^H_{k} \sqrt{p_{\bar{k}}} \mathbf{w}_{\bar{k}} s_{\bar{k}} + n_k, \end{equation} where $\mathbf{h}_k \in \mathbb{C}^{N\times 1}$ is the channel vector between the BS and the $k$-th user and $n_k$ denotes the zero-mean additive white Gaussian noise with variance $\sigma^2$ at the $k$-th user terminal. Besides, $s_k$ is the transmitted symbol for the $k^{\text{th}}$ user. The channel between the BS and the $k$-th user is a mmW channel. Since there are few scatterers in the mmW band, the multipath caused by reflections is limited and leads to a spatially sparse, directional channel in the angle domain \cite{PengWang,GaoHu}. 
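As an illustration, the received-symbol model above can be sketched numerically. This is a minimal Python/numpy sketch with random placeholder channels and beamformers; the sizes, the uniform power split, and the unit-modulus symbols are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 2           # antennas and users (illustrative sizes)
P, sigma2 = 1.0, 0.1  # total power budget and noise variance (assumed)

# Random complex channels h_k (rows of H) and unit-norm beamforming columns w_k.
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
W /= np.linalg.norm(W, axis=0)              # enforce ||w_k||_2 = 1

p = np.full(K, P / K)                       # uniform power split, sums to P
s = np.exp(1j * 2 * np.pi * rng.random(K))  # unit-modulus transmit symbols

def received_symbol(k):
    """y_k = h_k^H * sum_j sqrt(p_j) w_j s_j + n_k (noise drawn here)."""
    signal = H[k].conj() @ (W * np.sqrt(p) * s).sum(axis=1)
    noise = np.sqrt(sigma2 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
    return signal + noise
```

Note that the superposed signal of all $K$ users shares the same time-frequency resource; only the power weights $\sqrt{p_k}$ and the beamformers distinguish them.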
Assuming a uniform linear array (ULA), the mmW channel can be modelled as \cite{PengWang,GaoHu} \begin{equation} \mathbf{h}_k = \sum\limits_{l = 1}^{L_k}\lambda_{k,l}\mathbf{a}(N, \Omega_{k,l}) \end{equation} where $\lambda_{k,l}$ and $\Omega_{k,l}$ are the complex coefficient and the angle-of-departure (AoD) of the $l$-th multipath component of the channel vector of the $k$-th user, respectively. Further, $L_k$ is the number of multipath components for the $k$-th user, and $\mathbf{a}(\cdot)$ is the steering vector function, defined as \begin{equation} \mathbf{a}(N, \Omega) = \left[ e^{j\pi 0 \cos(\Omega)}, e^{j\pi 1 \cos(\Omega)}, \dots, e^{j\pi (N-1) \cos(\Omega)} \right] \end{equation} which depends on the geometry of the array. For the rest of the paper, we assume that the user terminals are ordered according to their channel quality, i.e., $\| \mathbf{h}_1 \|_2 \leq \| \mathbf{h}_2 \|_2 \leq \dots \leq \| \mathbf{h}_K \|_2$. Exploiting Successive Interference Cancellation (SIC), a NOMA receiver can decode the power-domain superposed signal within the same frequency and time resource \cite{WunderJung, ZhangHanzo}. Based on this ordering, user $k$ removes the signals of the first $k-1$ users as interference and treats the remaining users, i.e., users $k+1$ to $K$, as noise \cite{BenjebbourSaito, HanifDing}. This kind of detection is not optimal, but optimal detection is beyond the scope of this paper; here we aim to demonstrate a model-free optimization approach. Therefore, the received Signal-to-Interference-plus-Noise Ratio (SINR) at each user is \begin{equation} \text{SINR}_k = \frac{|\mathbf{h}^H_k \mathbf{w}_k|^2 p_k}{\sum\limits_{k^\star = k + 1}^{K}|\mathbf{h}^H_k \mathbf{w}_{k^\star}|^2 p_{k^\star} + \sigma^2} \end{equation} where $k^\star$ runs over the user indices larger than the current index. Hence, the achievable rate of each user is \begin{eqnarray} R_k = \log_2(1 + \text{SINR}_k). 
\end{eqnarray} In this paper, we consider the problem of joint power allocation and hybrid beamformer design for sum-rate optimization. Consequently, the objective function is the sum-rate of the $K$ users, and the optimization problem reads: \begin{align} \label{eq.opt} \nonumber \max_{\mathbf{d}_k,\mathbf{a}_k,p_k} & \sum\limits_{k=1}^{K}R_k\\ \nonumber \text{s.t. } & R_k \geq r_k \\ \nonumber & \sum\limits_{k=1}^{K}p_k = P \\ \nonumber & \| \mathbf{w}_k \|_2^2 = 1 \\ & |A_{i,j}| = 1/\sqrt{N} \end{align} where $k = 1,2, \dots, K$, $i = 1,2,\dots, N$, and $j = 1,2,\dots, K$, and $P$ is the total available power at the BS. Besides, $\mathbf{d}_k$ and $\mathbf{a}_k$ denote the digital and analog beamformer weights for the $k^{th}$ user, respectively, and $r_k$ is the minimum guaranteed rate that must be provided to the $k$-th user. As discussed earlier, the cost function is the sum-rate of the users. The first constraint ensures the minimum rate of each user, while the second and third constraints determine the feasible space of the independent variables of the problem, i.e., they bound the total transmission power and normalize the beamforming weights. The last constraint restricts the analog beamformer entries to constant modulus. In SIC-based receivers, a signal is decoded after the weaker users' signals have been decoded and their interference successfully removed. To enable this kind of decoding, the received power of the desired signal should be higher than the power levels of the other users; this leads to conditions on the signal powers that assign more power to the users located far from the BS \cite{AlaviCumanan}. \section{Joint Power Allocation and Hybrid Beamformer Design} In this section we discuss the proposed approach for solving the joint power allocation and hybrid beamformer design problem in Eq. \eqref{eq.opt}. 
We utilize a Deep Reinforcement Learning (DRL) approach to produce the optimized powers and beamformer weights. \subsection{Deep Reinforcement Learning} Deep Reinforcement Learning (DRL) combines Deep Learning (DL) and Reinforcement Learning (RL). Thanks to its DL component, it is well suited to handling MDP problems with large state and action spaces \cite{Sutton}. An MDP is defined by the four-tuple $\{\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}\}$, where $\mathcal{S}$ is the state space, i.e., the observations from the environment; $\mathcal{A}$ is the action space, i.e., the actions available to the agent for optimizing the reward; $\mathcal{P}$ is the state transition probability; and $\mathcal{R}$ is the immediate reward of action $a_t\in \mathcal{A}$ in state $s_t \in \mathcal{S}$. Based on the immediate reward, the long-term reward \begin{equation} V(s) = \mathbb{E}\lbrace \sum\limits_{t=0}^{\infty}\gamma^t r_t(s_t,a_t) | s\rbrace \end{equation} with discount factor $\gamma \in [0,1]$ is maximized to obtain the best action in each state. \begin{figure*} \includegraphics[width = \textwidth]{3.pdf} \centering \caption{Schematic diagram of soft actor-critic network} \label{Fig.ActorCriticNet} \end{figure*} Accordingly, we cast the problem at hand into the DRL framework by defining the required four-tuple. To this end, we first specify the immediate reward function, reconsidering the optimization problem in Eq. \eqref{eq.opt}. An auxiliary variable $\alpha$ is defined to account for constraint feasibility, and the immediate reward is defined as \begin{equation} r_t = \alpha R \end{equation} where $R = \sum_{k=1}^{K} R_k$ and $\alpha = \prod\limits_{i=1}^{K+1} \alpha_i$ is the controlling parameter enforcing the constraints of Eq. \eqref{eq.opt}. 
The factors $\alpha_i$ are defined as \begin{equation} \label{eq.a1} \alpha_i = \begin{cases} \dfrac{R_i}{r_i}, & \text{if } R_i < r_i \\ 1, & \text{otherwise} \end{cases} \end{equation} for $1 \leq i \leq K$, and \begin{equation} \label{eq.a2} \alpha_i = \begin{cases} \dfrac{P}{\sum\limits_{k=1}^{K}p_k}, & \text{if } P < \sum\limits_{k=1}^{K}p_k \\ 1, & \text{otherwise} \end{cases} \end{equation} for $i = K + 1$. Eq. \eqref{eq.a1} enforces the minimum guaranteed rates of the $K$ users, while Eq. \eqref{eq.a2} enforces the power limitation constraint of Eq. \eqref{eq.opt}. With the aid of $\alpha$, we weight the achieved sum-rate according to the feasibility of the constraints. In essence, whenever all $\alpha_i$ for $i = 1,2, \dots, K + 1$ equal $1$, the designed power allocation and hybrid beamforming strategy satisfies the constraints, and the calculated rate is not penalized. Whenever some $\alpha_i$ takes the ratio value of Eq. \eqref{eq.a1} or \eqref{eq.a2}, the reward is scaled down in proportion to the constraint violation. Additionally, the state space is defined as $\mathcal{S} = \lbrace \mathbf{h}_1^{\mathcal{R}}, \mathbf{h}_1^{\mathcal{I}}, \mathbf{h}_2^{\mathcal{R}}, \mathbf{h}_2^{\mathcal{I}}, \dots, \mathbf{h}_K^{\mathcal{R}}, \mathbf{h}_K^{\mathcal{I}}, \alpha \rbrace$, where the superscripts $\mathcal{R}$ and $\mathcal{I}$ denote real and imaginary parts, respectively. 
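The soft reward $r_t = \alpha R$ with the ratio factors above can be sketched as follows (a minimal Python/numpy sketch; the function name and its vectorized form are our own illustration, not the paper's implementation):

```python
import numpy as np

def soft_alpha_reward(rates, min_rates, powers, P_total):
    """Soft reward r_t = alpha * sum(R_k), where alpha is the product of one
    ratio factor per rate constraint plus one factor for the power budget."""
    rates = np.asarray(rates, dtype=float)
    min_rates = np.asarray(min_rates, dtype=float)
    # alpha_i = R_i / r_i when the minimum-rate constraint is violated, else 1
    alphas = np.where(rates < min_rates, rates / min_rates, 1.0)
    # alpha_{K+1} = P / sum(p_k) when the power budget is exceeded, else 1
    p_sum = float(np.sum(powers))
    alpha_power = P_total / p_sum if p_sum > P_total else 1.0
    alpha = float(np.prod(alphas)) * alpha_power
    return alpha * float(rates.sum()), alpha
```

In the feasible case all factors equal one and the reward is the raw sum-rate; any violation scales the reward down continuously rather than zeroing it, which is exactly what distinguishes this soft $\alpha$ from the hard binary variant compared later in the simulations.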
Furthermore, the action space, which contains the actions available to the agent, is defined as $\mathcal{A} = \lbrace \mathbf{d}_1^{\mathcal{R}}, \mathbf{d}_1^{\mathcal{I}}, \mathbf{a}_1^{\mathcal{R}}, \mathbf{a}_1^{\mathcal{I}}, \dots, \mathbf{d}_K^{\mathcal{R}}, \mathbf{d}_K^{\mathcal{I}}, \mathbf{a}_K^{\mathcal{R}}, \mathbf{a}_K^{\mathcal{I}}, p_1, p_2, \dots, p_K \rbrace$, where $\mathbf{d}_k$ and $\mathbf{a}_k$ are the digital and analog beamformer weights for the $k^{th}$ user, respectively. \subsection{Soft Actor-Critic} DNNs enable self-decision making within the RL framework, but real-world problems pose two crucial challenges: sample complexity and hyperparameter sensitivity. Sample complexity refers to the millions of samples these approaches need to model even a simple task, while the quality of the results depends strongly on learning rates, exploration constants, and other hyperparameter settings. The Soft Actor-Critic (SAC) formulation brings significant improvements in robustness and exploration \cite{SAC}; these two essential characteristics are discussed in \cite{Ziebart, HaarnojaTang}. The SAC algorithm combines three pivotal components: an actor-critic framework with separate networks for the policy and the value function; an off-policy criterion that enables the reuse of previously collected data; and entropy maximization, which brings stability and exploration to the algorithm. SAC performs off-policy actor-critic training with a stochastic actor and aims to maximize the entropy of that actor \cite{SAC}. As discussed earlier, continuous problems need DNNs as separate model-free policy and value estimators. The parametrized Q-function approximator is $Q_\theta(s_t, a_t)$ and the tractable policy is denoted by $\pi_\phi(a_t | s_t)$. 
Here, $\phi$ and $\theta$ are the actor and critic DNN parameters, respectively. The critic network is trained to minimize the soft Bellman residual \begin{eqnarray} J_Q(\theta)& = &\mathbb{E}_{(s_t, a_t)}\left[ (Q_\theta(s_t,a_t) - P_{t+1}(s_t,a_t))^2 \right] \\ J_\pi(\phi)& = &\mathbb{E}_{s_t}\left[\mathbb{E}_{a_t}\left[\alpha\log(\pi_\phi(a_t|s_t))-Q_\theta(s_t,a_t)\right]\right] \end{eqnarray} where \begin{equation} P_{t+1}(s_t,a_t) = r(s_t,a_t) + \gamma\mathbb{E}_{s_{t+1}}\left[ V_{\bar{\theta}}(s_{t+1}) \right] \end{equation} with $V_{\bar{\theta}}(s_{t+1})$ the value function of the next state and $r(s_t,a_t)$ the reward obtained in state $s_t$ when taking action $a_t$. $J_Q(\theta)$ can be optimized by stochastic gradients; in the update process a target soft Q-function, defined as an exponentially moving average of previous weights, is exploited to stabilize training \cite{MnihKavukcuoglu}. Besides, $J_\pi(\phi)$ is an expected weighted Kullback-Leibler (KL) divergence, where $a_t = f_\phi(\epsilon_t,s_t)$ is the neural-network reparametrization of the policy; it can be optimized by gradient-based approaches for any tractable stochastic policy \cite{SAC}. The whole network of the system is depicted in Fig. \ref{Fig.ActorCriticNet}. \subsection{Deep Neural Networks} As mentioned earlier, the optimization network consists of two independent DNNs. The first, the critic, accepts three inputs: the current observation, the current action, and the current reward. The observation and action are merged via a concatenation layer and then pass through two dense layers, each followed by a ReLU activation. Finally, a dense layer estimates the Q-value of the network. 
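The soft Bellman target $P_{t+1}$, the empirical critic loss $J_Q$, and the moving-average target update can be sketched in a few lines (a minimal numpy sketch with scalar rewards and a given next-state value estimate; the DNNs themselves are abstracted away, and the function names are our own):

```python
import numpy as np

def bellman_target(rewards, next_values, gamma=0.99):
    """P_{t+1}(s_t, a_t) = r(s_t, a_t) + gamma * E[V_theta_bar(s_{t+1})]."""
    return np.asarray(rewards, float) + gamma * np.asarray(next_values, float)

def critic_loss(q_values, rewards, next_values, gamma=0.99):
    """Empirical J_Q(theta): mean squared soft Bellman residual over a batch."""
    target = bellman_target(rewards, next_values, gamma)
    return float(np.mean((np.asarray(q_values, float) - target) ** 2))

def target_update(theta_bar, theta, tau=1e-3):
    """Exponential moving average of critic weights for the target network."""
    return (1.0 - tau) * np.asarray(theta_bar, float) + tau * np.asarray(theta, float)
```

The smoothing factor `tau` here plays the role of the target smooth factor $\tau$ listed in the simulation parameters.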
Correspondingly, there is a second critic network, called the target critic, which is updated periodically from the latest critic parameters in order to improve stability. The actor network takes the current observation and, after a dense layer with ReLU activation, branches into two separate paths, a mean path and a standard deviation (std) path, which together parametrize a Gaussian probability density function (pdf). In the mean path, a dense layer with ReLU activation is followed by a final dense layer that estimates the mean value; in the std path, a dense layer with ReLU activation is followed by a final dense layer with a softplus activation that estimates the std of the pdf. The agent then generates the action randomly from the estimated Gaussian pdf. \section{Numerical Results} In this section, the numerical results of the proposed approach are compared with those of other approaches in order to demonstrate its optimization capability. The section consists of two parts. In the first part, the learning curve of the proposed SAC-based approach is shown and compared with the DDPG approach, the standard benchmark for continuous problems. In the second part, the achieved sum-rate is compared with 'TDMA' and with 'NLOS-NOMA' \cite{XiaoZhu} as state-of-the-art baselines for the sum-rate problem. The common parameters of the learning system are listed in Table \ref{table.1}. 
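The actor's sampling step described above can be sketched as follows (a minimal numpy illustration of the Gaussian head with a softplus std; the dense layers are abstracted as given mean and std pre-activations, and the function names are our own):

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus: log(1 + exp(x)), always positive."""
    x = np.asarray(x, float)
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def sample_action(mean_pre, std_pre, rng):
    """Draw an action elementwise from N(mean, softplus(std_pre)^2)."""
    std = softplus(std_pre)
    return np.asarray(mean_pre, float) + std * rng.standard_normal(len(std))
```

The softplus keeps the estimated std strictly positive, which is why it, rather than ReLU, terminates the std path.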
\begin{table} \centering \caption{Simulation parameters for DRL} \begin{tabular}{| r | l |} \hline Parameter & Value \\ \hline \hline Discount factor & $\gamma = 0.99$ \\ Episode trial & $100$ \\ Moving average length & $25$ \\ Actor learning rate & $\alpha_a = 10^{-3}$ \\ Critic learning rate & $\alpha_c = 10^{-3}$ \\ Experience buffer length & $10^{6}$ \\ Target smooth factor & $\tau = 10^{-3}$ \\ \hline \end{tabular} \label{table.1} \end{table} \subsection{Learning curve} As mentioned earlier, the learning curve of the SAC-based algorithm is presented in this section. The first simulation, depicted in Fig. \ref{fig.score}, compares the average and immediate rewards for a $K=2$ user system with a BS equipped with $N = 32$ antennas and minimum achievable rates $r_1 = r_2 = 3$ (bps/Hz). The average is computed over the last $25$ scores. As can be seen, the average reward exceeds $13$ (bps/Hz) after nearly $40$ episodes. In Fig. \ref{fig.alpha}, the impacts of soft and hard $\alpha$ are compared under the same system configuration. Soft $\alpha$ is as defined earlier in Eq. \eqref{eq.a1} and Eq. \eqref{eq.a2}; hard $\alpha$ is the binary variant that equals $1$ when all the constraints are feasible and $0$ when at least one constraint is infeasible. Since with hard $\alpha$ the immediate reward is cut to zero by this hard limiter, the overall immediate and average rewards are lower than in the soft-$\alpha$ case by almost $3$ (bps/Hz). Finally, the comparison of the proposed SAC-based algorithm with the DDPG-based approach is presented in Fig. \ref{fig.algorithm}, again with the same simulation configuration. Clearly, the SAC-based algorithm converges faster than DDPG: since the entropy of the policy and the reward are optimized jointly, the convergence speed and stability increase drastically. 
\begin{figure} \includegraphics[width=0.99\linewidth]{Fig1-eps-converted-to.pdf} \caption{The immediate and average score of the SAC algorithm}\label{fig.score} \end{figure} \begin{figure} \includegraphics[width=0.99\linewidth]{Fig2-eps-converted-to.pdf} \caption{The impact of hard or soft $\alpha$ on the immediate and average score}\label{fig.alpha} \end{figure} \begin{figure}[htb] \includegraphics[width=0.99\linewidth]{Fig3-eps-converted-to.pdf} \caption{The comparison of DDPG and SAC based DRL approaches}\label{fig.algorithm} \end{figure} \subsection{Sum-rate} Here, we assess the optimization capability of the proposed DRL-based approach against 'TDMA', an instance of an Orthogonal Multiple Access (OMA) approach, and 'NLOS-NOMA', a NOMA-based approach designed in \cite{XiaoZhu}. We regard these two approaches as the state of the art. Additionally, we consider four different parameter settings to demonstrate the capability of the approach under a range of conditions; the results are therefore presented in four figures in the following paragraphs. To obtain comparable results, we trained the network with $10^6$ independent samples over $1000$ consecutive episodes; the reported results are then averaged over $250000$ new, independent runs. First, Fig. \ref{fig.snr} shows the sum-rate performance of the approaches versus the received signal-to-noise ratio (SNR) of the users. In this simulation, $K = 2$ users are served by a $16$-antenna BS with minimum guaranteed rates $r_1 = r_2 = 3$ (bps/Hz). The SNR is defined as the ratio of the received power $P$ to the noise variance $\sigma^2$, with $\sigma^2 = 1$ (mW). As depicted in Fig. \ref{fig.snr}, the proposed approach outperforms the other two methods. 
The superiority of the proposed approach over the 'TDMA'-based approach is expected, owing to the resource sharing in NOMA systems. In addition, the proposed approach is superior to the 'NLOS-NOMA' approach thanks to its joint optimization, as opposed to the successive manner of 'NLOS-NOMA'. Furthermore, 'NLOS-NOMA' is highly dependent on the channel condition, whereas our proposed approach is not. In the next simulation, the proposed approach is compared with the others for different numbers of users served by the BS. For this simulation, we consider an SNR of $30$ dB for the users, $N_{ant} = 16$, and $r_i = 3$ for $i = 1,2,\dots,8$. As expected, the sum-rate decreases as the number of users increases, due to the growing inter-user interference. Again, the proposed approach outperforms the 'NLOS-NOMA' based approach because of its insensitivity to the channel. \begin{figure}[t!] \includegraphics[width=0.99\linewidth]{Fig4-eps-converted-to.pdf} \caption{Sum-rate comparison for various SNRs}\label{fig.snr} \end{figure} \begin{figure} \includegraphics[width=0.99\linewidth]{Fig5-eps-converted-to.pdf} \caption{Sum-rate comparison for different number of users}\label{fig.user} \end{figure} The impact of the minimum guaranteed rate on the sum-rate of the approaches is shown in Fig. \ref{fig.minRate}. Here, two users are supported by a $16$-antenna BS, and the minimum guaranteed rate is varied from $1$ to $4$ in steps of $0.5$. As represented in Fig. \ref{fig.minRate}, the sum-rate decreases as the minimum guaranteed rate increases. This is caused by the increasing power that must be devoted to the user with the worse channel in order to maintain its minimum rate, leaving less power for the other user; this is the main reason for the decrease of the sum-rate with the increase of the minimum guaranteed rate. 
\begin{figure} \includegraphics[width=0.99\linewidth]{Fig6-eps-converted-to.pdf} \caption{Sum-rate comparison for the minimum guaranteed rate of the users}\label{fig.minRate} \end{figure} Finally, the impact of the number of antennas is shown in Fig. \ref{fig.antenna} for $16$, $32$, and $64$ antennas. As depicted, increasing the number of antennas smoothly decreases the sum-rate of the two users at $30$-dB SNR: more antennas increase the inter-user interference, which lowers the sum-rate, whereas in the TDMA case the interference is absent owing to the orthogonal access to the resources. \begin{figure} \includegraphics[width=0.99\linewidth]{Fig7-eps-converted-to.pdf} \caption{Sum-rate comparison for several number of antennas}\label{fig.antenna} \end{figure} \section{Concluding Remarks} In this paper, we have considered the joint design of power allocation and hybrid beamforming strategies. We have exploited a DRL approach, borrowed from control theory, to optimize the sum-rate of the users in a mmW-NOMA system. We modelled the optimization problem in the DRL context and, due to the continuity of the problem, utilized an SAC-based approach. The system comprises two DL frameworks acting as critic and actor. To optimize the joint power allocation and hybrid beamforming, we defined a soft sum-rate reward based on the received rate of each user and the consumed power. In the simulations, the proposed approach outperforms the benchmark approaches, for two main reasons: its joint optimization behaviour and its independence of the channel model. 
The joint optimization helps the BS design power allocation and hybrid beamforming strategies that cooperate with each other to optimize the sum-rate, whereas in successive approaches the beamformer is designed for a given power allocation, and the power allocation is in turn designed for the selected beamformer. Channel estimation error is not considered in this work and will be addressed in our future work. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Statement of Main Results} The purpose of this note is to announce and sketch certain results of a forthcoming paper by the current authors \cite{CDRRZ}. We start with the notion of bounded generation. An abstract group $\Gamma$ is said to have the {\bf bounded generation} property (BG) if it can be written in the form $$\Gamma=\langle \gamma_1 \rangle \cdots \langle \gamma_r \rangle$$ for certain fixed $\gamma_1,\dots, \gamma_r\in \Gamma$, where $\langle \gamma_i \rangle$ is the cyclic subgroup generated by $\gamma_i$. We refer the interested reader to the discussion in Section 1 of \cite{CRRZ} and the references therein for the motivation behind (BG). In \cite{CRRZ}, it was shown that a linear group $\Gamma \subset \mathrm{GL}_n(K)$ over a field $K$ of characteristic zero ``usually lacks (BG) by semi-simple elements'', i.e., (BG) in which all the $\gamma_i$ are diagonalizable. More precisely, it was shown in \cite{CRRZ} that if a linear group $\Gamma$ over a field of characteristic zero consists entirely of semi-simple elements, then $\Gamma$ has (BG) if and only if it is finitely generated and virtually abelian. In particular, if $K$ is a number field and $S$ is a finite set of places including all infinite ones, then \underline{infinite} $S$-arithmetic subgroups of absolutely almost simple $K$-anisotropic groups never have (BG). The current paper significantly strengthens the above results by providing quantitative statements that describe the extent of the failure of (BG) by semi-simple elements. In fact, we consider the following more flexible question in terms of purely exponential polynomial parametrizations (PEP). \vskip2mm \noindent {\bf Definition.} {\it Let $\Sigma $ be a subset of a variety $V\subset \mathbb{A}_K^n$ ($K$ is a field). 
Then $\Sigma$ is said to have a {\bf Purely Exponential Parametrization} (PEP) in $r$ variables if $\Sigma$ has the shape $$\Sigma=\Big\{ (f_1(\mathbf{n}),\dots,f_s(\mathbf{n}));\mathbf{n}\in \mathbb{Z}^r \Big\},$$ where each $f_i(\mathbf{x})=f_i(x_1,\dots, x_r)$ is a {\bf Purely Exponential Polynomial}, i.e. an expression of the form $$f_i(\mathbf{x})=\sum_{j=1}^e a_j \lambda_1^{l_{1,j}(\mathbf{x})}\cdots \lambda_k^{l_{k,j}(\mathbf{x})},$$ for certain constants $a_1,\dots,a_e,\lambda_1,\dots, \lambda_k\in \overline{K}^{\times}$ and linear forms $l_{i,j}(\mathbf{x})$ in $r$ variables whose coefficients are {\bf rational integers}. Here we refer to the elements $\lambda_1,\dots,\lambda_k$ as the {\bf bases} of $\mathbf{f}\colon =(f_1,\dots,f_s)$, to the linear forms $l_{i,j}$ as the {\bf exponents} of $\mathbf{f}$, and to the constants $a_j$ as the {\bf coefficients} of $\mathbf{f}$.} \begin{rema} In the definition above, we do not require that all coefficients and bases lie in $K$. Also, it is easy to see that any finite union of (PEP) sets is again a (PEP) set. \end{rema} \begin{exam} The classical Pell equations naturally produce (PEP) sets. For example, the set of integer solutions of $x^2-2y^2=1$, which corresponds to the integer points of the special orthogonal group of the quadratic form $h=x^2-2y^2$, is given by $$ \left\{ \left( (-1)^m \left(\frac{(3-2\sqrt{2})^n+(3+2\sqrt{2})^n}{2} \right), \left( \frac{(3-2\sqrt{2})^n-(3+2\sqrt{2})^n}{2\sqrt{2}} \right) \right) ; m,n\in \mathbb{Z} \right\}. $$ \end{exam} \begin{exam} Linear groups $\Gamma$ admitting (BG) by semi-simple elements, the main objects of study of \cite{CRRZ}, are typical examples of (PEP) sets. 
In fact, if $\Sigma=\Gamma\subset \mathrm{GL}_n(K)$ with $\Gamma=\langle \gamma_1\rangle\cdots \langle \gamma_r \rangle$ and the $\gamma_i$ semi-simple, then there exist $g_i\in \mathrm{GL}_n(\overline{K})$ and $\lambda_{i,j}$ for $i=1,\dots, r$, $j=1,\dots,n$ with $$g_i^{-1}\gamma_i g_i=\mathrm{diag}(\lambda_{i,1},\dots, \lambda_{i,n}),\text{ for all }i=1,\dots,r.$$ This implies that every $\gamma \in \Gamma$ has the shape $$\gamma=\prod_{i=1}^r g_i\left[\mathrm{diag}(\lambda_{i,1}^{a_i},\dots, \lambda_{i,n}^{a_i})\right]g_i^{-1}\text{ for some }a_1,\dots,a_r\in \mathbb{Z}.$$ Comparing the entries of the two sides of this relation, we realize $\Sigma$ as a (PEP) set $\subset \mathbb{A}_K^{n^2}$ in $r$ variables whose bases are the eigenvalues $\lambda_{i,j}$. \end{exam} In the current article, we provide sparseness results for (PEP) subsets of affine varieties $V\subset \mathbb{A}_K^n$ over a \underline{number field} $K$. The language we use to describe sparseness is the {\bf height function} on the affine space $K^n$, defined by $$H_{\text{aff}}(x_1,\dots,x_n)\colon =H(1:x_1:\dots:x_n)\colon =\Big( \prod_{v \in V_K}\max\{ 1,\| x_1 \|_v,\dots, \| x_n \|_v \}\Big)^{1\slash [K\colon \mathbb{Q}]}$$ where $V_K$ is the set of all places of $K$, and the $\|\cdot \|_v$ are normalized $v$-adic absolute values such that the product formula holds. We also use the corresponding logarithmic height $h_{\text{aff}}\colon = \log H_{\text{aff}}$. See \cite[\S B]{HindrySilverman} or \cite{BombieriGubler} for details about height functions. The first main result of this paper, concerning the distribution of (PEP) sets, can be stated as follows. 
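As a concrete sanity check, the Pell parametrization from the example above can be verified numerically, together with the linear growth of the logarithmic height along it. For an integer tuple over $\mathbb{Q}$, $H_{\mathrm{aff}}$ reduces to $\max(1,|x_1|,\dots,|x_n|)$, since the finite places contribute nothing for integers. The integer recurrence for the powers of $3+2\sqrt{2}$ is our own device, not from the paper:

```python
import math

def pell_point(m, n):
    """(x, y) from the PEP parametrization of x^2 - 2y^2 = 1, exactly.
    Writes (3 + 2*sqrt(2))^n = a + b*sqrt(2) with integer a, b."""
    a, b = 1, 0                           # (3 + 2*sqrt(2))^0
    c, d = (3, 2) if n >= 0 else (3, -2)  # inverse of 3+2sqrt2 is 3-2sqrt2
    for _ in range(abs(n)):
        a, b = c * a + 2 * d * b, c * b + d * a
    # Conjugate power is a - b*sqrt(2), so the displayed formulas give
    # x = (-1)^m * a and y = -b.
    return (-1) ** m * a, -b

def log_height_int(point):
    """h_aff of an integer tuple over Q: log max(1, |x_1|, ..., |x_n|)."""
    return math.log(max(1, *(abs(c) for c in point)))
```

The logarithmic height of `pell_point(0, n)` grows linearly in $|n|$ (with slope $\log(3+2\sqrt{2})$), which is consistent with the $(\log H)^r$ count of Theorem \ref{firstmainthm}: only $O(\log H)$ values of each exponent fit below height $H$.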
\begin{theo}[First Main Theorem: quantitative result] \label{firstmainthm}Let $\mathbb{A}_K^n$ be an affine space over a number field $K$. Then for any (PEP) set $\Sigma \subset \mathbb{A}_K^n$ in $r$ variables, we have $$\Big| \{P\in \Sigma;H_{\text{aff}}(P)\leq H\}\Big|=O((\log H)^r)\text{ when }H\to \infty.$$ In other words, any (PEP) set has at most {\bf logarithmic-to-the-}$r$ growth in terms of the height. \end{theo} \begin{rema} In order to interpret Theorem \ref{firstmainthm} as a \underline{sparseness} result, it should be emphasized that there is a highly involved but also well-developed topic about ``counting lattice points in Lie groups''. In particular, \cite[Corollary 1.1]{Maucourant07} (see also \cite[Theorem 2.7 and Theorem 7.4]{GW07}) informs us that the number of points of norm at most $H$ in any lattice of a non-compact semi-simple Lie group $\mathcal{G}$ with finite center grows like $cH^d(\log H)^e$ as $H\to \infty$, for certain $c,d>0$, $e\geq 0$, with respect to a Euclidean norm on $\mathbb{R}^{n^2}\supset \mathrm{GL}_n(\mathbb{R}) \supset \mathcal{G}$. As a consequence, we see that for a semi-simple algebraic group $G\subset \mathrm{GL}_n$ over $\mathbb{Q}$ of non-compact type, Theorem \ref{firstmainthm} provides sparseness, in terms of the height, for all (PEP) subsets of $\Gamma \colon =G(\mathbb{Z})\colon =\mathrm{GL}_n(\mathbb{Z})\cap G(\mathbb{R})$. As a more explicit example, according to \cite[Example 1.6]{DukeRudnickSarnak}, the set $\left\{\mathbf{s}\in \mathrm{SL}_n(\mathbb{Z});H_{\mathrm{aff}}(\mathbf{s})\leq H\right\}$ is of order $cH^{n^2-n}$ for some $c>0$; therefore any (PEP) set in $\Gamma=\mathrm{SL}_n(\mathbb{Z})$, having only logarithmic growth, is sparse in terms of the height. Verification of sparseness for (PEP) subsets of many other $S$-arithmetic groups, following strategies developed in \cite{GW07} and \cite{GN12}, will be available in \cite{CDRRZ}.
\end{rema} If we apply Theorem \ref{firstmainthm} to the particular situation of (BG) by semi-simple elements, we obtain the following consequence. \begin{coro}\label{corsparsebgsemisimple} Let $\Gamma \subset \mathrm{GL}_n(K)$ be a linear group over a number field $K$. Then for any semi-simple elements $\gamma_1,\dots, \gamma_r\in \Gamma$, we have $$\Big| \{P\in \langle \gamma_1\rangle\cdots \langle \gamma_r\rangle ;H_{\text{aff}}(P)\leq H\}\Big|=O((\log H)^r)\text{ when }H\to \infty.$$ \end{coro} The proof of Theorem \ref{firstmainthm} relies crucially on a key statement about the so-called ``minimal $r$-tuples'' with respect to a (PEP) set, which seems to be of independent interest. \begin{defi} Given a vector ${\mathbf{f}}=(f_1,\dots, f_s)$ of exponential polynomials in $r$ variables, i.e. each $f_j$ is an exponential polynomial in $r$ variables, an element ${\mathbf{n}}=(n_1,\dots,n_r)\in \mathbb{Z}^r$ is called $\mathbf{f}$-{\bf minimal} (or {\bf minimal} with respect to $\mathbf{f}$) if for all ${\mathbf{n'}}=(n_1',\dots, n_r')\in \mathbb{Z}^r$ with ${\mathbf{f}}({\mathbf{n'}})={\mathbf{f}}({\mathbf{n}})$ (i.e. $f_j({\mathbf{n'}})=f_j({\mathbf{n}})$ for all $j$), we have $\| \mathbf{n'} \|_{\infty} \colon =\max\{|n_1'|,\dots, |n_r'|\}\geq \max\{|n_1|,\dots, |n_r|\}=\colon \| \mathbf{n} \|_{\infty} $. \end{defi} \begin{theo}[Primary Height Inequality] \label{minspecial} Let $\mathbf{f}$ be a vector of purely exponential polynomials in $r$ variables. Then there exists a constant $C=C(\mathbf{f})>0$ such that for all $\mathbf{f}$-minimal vectors $\mathbf{n}\in \mathbb{Z}^r$, we have \begin{equation} h_{\mathrm{aff}}({\mathbf{f}}({\mathbf{n}}))\geq C\cdot \|\mathbf{n}\|_{\infty} \end{equation} except on some set of the form $\mathbf{f}^{-1}(A)$ with $A$ finite.
\end{theo} It should be emphasized that the constant $C$ above will be explicitly computable, while the cardinality of the set $A$ in Theorem \ref{minspecial} is non-effective in general, see Remark \ref{effective}. The first main theorem, Theorem \ref{firstmainthm}, being quantitative, leads us to the following qualitative theorem, which fully describes all linear groups admitting (BG) by semi-simple elements (or (PEP)). It is worth pointing out that, thanks to a specialization argument, the following result works for linear groups over {\bf arbitrary} fields of characteristic zero. \begin{theo}[Second Main Theorem: qualitative result]\label{secondmainthm} Let $K$ be a field of characteristic zero and let $\Gamma \subset \mathrm{GL}_n(K)$ be a linear group. Then the following three properties are equivalent. \begin{enumerate} \item $\Gamma$ has (PEP). \item $\Gamma$ consists only of semi-simple elements and has (BG). \item $\Gamma$ is finitely generated and the connected component $G^{\circ}$ of the Zariski closure $G$ of $\Gamma$ is a torus (in particular, $\Gamma$ is virtually abelian). \end{enumerate} \end{theo} This result extends \cite[Theorem 1.1]{CRRZ}, which claims that if a linear group over a field of characteristic zero has (BG) by semi-simple elements, then it is virtually solvable. More importantly, Theorem \ref{secondmainthm} gives a {\bf complete} answer to the questions asked in \cite[p. 3]{CRRZ}. \section{Brief outline of proofs} It is straightforward to verify that Theorem \ref{minspecial} implies Theorem \ref{firstmainthm}. For simplicity of argument, we only sketch the proof of Theorem \ref{minspecial} for $\mathbf{f}=f$ a single purely exponential polynomial. The sketch given here follows the lines of the proof in the general case, and already includes all the main ideas and ingredients of its counterpart in \cite{CDRRZ}.
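Before turning to the proof sketch, the logarithmic growth predicted by Theorem \ref{firstmainthm} can be observed numerically on the Pell set from the example above. The following Python sketch (purely illustrative, not part of the formal argument) generates the positive branch of solutions of $x^2-2y^2=1$ by the standard recurrence and counts those of height at most $H$; here the coordinates are integers, so $H_{\mathrm{aff}}(x,y)=\max\{1,|x|,|y|\}$.

```python
from math import log

def pell_solutions(bound):
    """Positive-branch solutions of x^2 - 2 y^2 = 1 with height
    max(1, x, y) <= bound, generated from (1, 0) by the recurrence
    (x, y) -> (3 x + 4 y, 2 x + 3 y)."""
    x, y, sols = 1, 0, []
    while x <= bound:          # here x >= y >= 0, so the height is max(1, x)
        sols.append((x, y))
        x, y = 3 * x + 4 * y, 2 * x + 3 * y
    return sols

H = 10**5
count = len(pell_solutions(H))
# x grows like (3 + 2*sqrt(2))^n, so count ~ log(H) / log(3 + 2*sqrt(2))
print(count, log(H) / log(3 + 2 * 2**0.5))
```

The count grows like $\log H/\log(3+2\sqrt{2})$, in agreement with the logarithmic bound of the theorem.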
\begin{proof}[Sketch of proof of Theorem \ref{minspecial}] The proof goes by induction on $r$, the number of variables in $\mathbf{n}$. The base case $r=0$ is trivial; now let $r\geq 1$. We write: \[ f(\mathbf{n})=\sum_{i=1}^e a_i u_i(\mathbf{n}), \] where $a_i,\lambda_j\in K^*$ and $u_i(\mathbf{n})=\lambda_{1}^{l_{1,i}(n_1,\ldots,n_r)} \cdots \lambda_{k}^{l_{k,i}(n_1,\ldots,n_r)}$ are purely exponential monomials. Some non-trivial but routine manipulations enable one to reduce to the case where $\lambda_1,\ldots,\lambda_k$ are multiplicatively independent, i.e. $\lambda_1^{\theta_1}\cdots \lambda_k^{\theta_k}=1$ (with $\theta_j\in \mathbb{Z}$) $\Longleftrightarrow \theta_1=\cdots=\theta_k=0$, and where the linear forms $l_{i,j}$ span the dual space of $\mathbb{Q}^{r}$ over $\mathbb{Q}$. We need the following crucial height inequality, which can be derived from a result of Evertse \cite[Theorem 6.1.1]{EG15} (which is itself a consequence of the Schlickewei-Schmidt Subspace Theorem, cf. \cite[Theorem 2.2]{CorvajaZannier}). \begin{theo}[Evertse] \label{evertseinequality} Let $S$ be a finite set of places of a number field $K$ containing all archimedean ones. Then there exists an effective $C>0$ such that the inequality \[ h_{\mathrm{aff}}(s_1+\cdots+s_e) < C \cdot (h_{\mathrm{aff}}(s_1)+\cdots+h_{\mathrm{aff}}(s_e))\text{ with }s_i\in \mathcal{O}_S^{\times} \] has only finitely many solutions such that the sum $s_1+\cdots+s_e$ is non-degenerate. \end{theo} Here {\bf non-degenerate} means that $\sum_{i \in I} s_i \neq 0$ for every nonempty proper subset $I\subsetneq \{1,\ldots,e\}$.
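The non-degeneracy condition is purely combinatorial and easy to test directly. A small Python helper (hypothetical, for illustration only) checks that no nonempty proper sub-sum vanishes:

```python
from itertools import combinations

def is_nondegenerate(terms):
    """True if no nonempty proper subset of `terms` sums to zero."""
    for r in range(1, len(terms)):          # proper subsets: size 1 .. len-1
        for subset in combinations(terms, r):
            if sum(subset) == 0:
                return False
    return True

print(is_nondegenerate([2, 3, -4]))   # no proper sub-sum vanishes
print(is_nondegenerate([2, -2, 5]))   # 2 + (-2) = 0, degenerate
```

Note that the full sum may vanish for a non-degenerate tuple: only \emph{proper} subsets are constrained.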
Applying Theorem \ref{evertseinequality} with $S$ a finite set of places such that all bases $\lambda_i$ and coefficients $a_j$ of $f$ are $S$-units, we obtain that for a certain $C'>0$ the inequality \begin{equation}\label{Eq:inequality} h_{\mathrm{aff}}(f(\mathbf{n})) \geq C'\cdot (h_{\mathrm{aff}}(u_1(\mathbf{n}))+ \ldots + h_{\mathrm{aff}}(u_e(\mathbf{n}))) \end{equation} holds for all but finitely many $\mathbf{n} \in \mathbb{Z}^r$ such that the sum defining $f(\mathbf{n})$ is non-degenerate. Recall the following standard fact \cite[p.118, Eq. (3.12)]{Zannier09}: \begin{prop} Let $\phi =(\phi_1,\ldots,\phi_e):\mathbb{Z}^r \to (K^*)^e$ be an injective group homomorphism. Then there are constants $C_2>C_1>0$ such that for every $\mathbf{n} \in \mathbb{Z}^r$ the following inequalities hold: \[ C_1 \norm{\mathbf{n}}_{\infty} \leq h_{\mathrm{aff}}(\phi_1(\mathbf{n}))+\ldots+h_{\mathrm{aff}}(\phi_e(\mathbf{n})) \leq C_2 \norm{\mathbf{n}}_{\infty}. \] \end{prop} Using the proposition above with $\phi=(u_1,\ldots,u_e)$, which is injective because the linear forms $l_{i,j}$ span $(\mathbb{Q}^r)^{\vee}$ and the $\lambda_j$'s are multiplicatively independent, one deduces that the right hand side of \eqref{Eq:inequality} is $\asymp \norm{\mathbf{n}}_{\infty}$. This completes the argument in the non-degenerate case. Now consider those $\mathbf{n} \in \mathbb{Z}^r$ such that the sum defining $f(\mathbf{n})$ is degenerate. Then we may take a nonempty proper subset $I=\{i_1,\dots,i_t\} \subset \{1,\dots, e\}$ ($t<e$) with $a_{i_1}u_{i_1}(\mathbf{n})+\cdots+a_{i_t}u_{i_t}(\mathbf{n})=0$. We are now in a position to use Laurent's theorem \cite[Theorem 10.10.1]{EG15}, which can also be deduced from Theorem \ref{evertseinequality}. \begin{theo}[Laurent]\label{Laurent} Let $K$ be a number field, $\Gamma \subseteq (K^*)^t$ be a finitely generated subgroup, and let $X$ be a subvariety of $(\mathbb{G}_m)^t$.
Then the Zariski closure of $\Gamma \cap X$ is a finite union of cosets of algebraic subgroups of $(\mathbb{G}_m)^t$. \end{theo} Applying Laurent's theorem to the subgroup $\Gamma=\mathrm{im} (\phi=(u_{i_1},\ldots,u_{i_t}):\mathbb{Z}^r \to (K^*)^t)$ and the hyperplane $X: a_{i_1}x_{i_1}+\ldots+a_{i_t}x_{i_t}=0$, and letting $I$ run through all proper subsets of $\{1,\dots,e\}$ (finitely many possibilities), we deduce that the set of such $\mathbf{n}$ is contained in a finite union of cosets of subgroups of $\mathbb{Z}^r$. Moreover, due to the assumption that the linear forms $l_{i,j}$ span $(\mathbb{Q}^r)^{\vee}$, we may assume these cosets are all translates of subgroups of rank $<r$. Taking the restriction of $f$ to each of the above proper cosets, and composing it with a suitable affine transformation, we produce finitely many (PEP) sets whose parametrizations all involve \underline{$<r$ variables}. Applying the induction hypothesis to these new (PEP) sets completes the proof. \end{proof} \begin{rema}[effectiveness]\label{effective} Taking more care in the proof above, one can actually make the constant $C$ in Theorem \ref{minspecial} effectively computable in terms of $\mathbf{f}$. However, our approach says little about the effectiveness of the exceptional set $A$ (and even less about the effectiveness of $f^{-1}(A)$) of Theorem \ref{minspecial}, not even about its cardinality. As a consequence, in the context of Theorem \ref{firstmainthm}, we are unable to explicitly compute a constant $a>0$ such that ${\Big| \{P\in \Sigma;H_{\text{aff}}(P)\leq H\}\Big|<a \cdot (\log H)^r}$ for sufficiently large $H$. This is in sharp contrast with the situation of $S$-unit equations, e.g. $x_1+\cdots+x_s=1, x_i\in \mathcal{O}_S^*$, whose {\bf number} of non-degenerate solutions can be effectively bounded from above, cf. the seminal paper \cite{effective02} and its refinement \cite{effective09}; see also \cite{effectiveremond} for another approach.
In fact, we prove in \cite{CDRRZ} that an effective bound for the cardinality of $A$ in Theorem \ref{minspecial} would yield an explicit bound for the {\bf size} of non-degenerate solutions to an arbitrary $S$-unit equation, which is still an open problem. Thus, the non-effectiveness of the exceptional set $A$ of Theorem \ref{minspecial} is deeply rooted in the open and difficult effectiveness problem for the Schlickewei-Schmidt Subspace Theorem. \end{rema} We now turn to the discussion of the second main result, Theorem \ref{secondmainthm}. The proof of Theorem \ref{secondmainthm}, though non-trivial, is roughly analogous to that of Theorem 1.1 and Corollary 1.2 of \cite{CRRZ}. In particular, the theory of generic elements, cf. \cite{PrR1},\cite{PrR2}, \cite{PrR3}, will be needed again. We will omit the full verification here for simplicity of presentation. In the following we will only highlight two new ingredients in the proof of Theorem \ref{secondmainthm} and postpone the detailed arguments to \cite{CDRRZ}. The first one is a consequence of Theorem \ref{firstmainthm}. \begin{coro}\label{Cor} Let $K$ be a number field, $\Sigma \subseteq \mathrm{GL}_n(K)$ be a (PEP) subset, and let $g\in \mathrm{GL}_n(K)$ be a non-semi-simple matrix. Then there is an $m \in \mathbb{N}$ such that $\Big| \{n\in \mathbb{N};n\leq N\text{ and }g^n\in \Sigma\}\Big| =O(\log^m N)$ as $N\to \infty$. \end{coro} \begin{proof} Write $g=g_ug_s$ for the Jordan decomposition of $g$, with $g_u$ unipotent, $g_s$ semi-simple and $[g_u,g_s]=1$. Note that the condition $g^n=(g_ug_s)^n \in \Sigma$ implies that $g_u^n \in \Sigma \cdot\langle g_s\rangle$, and that the subset $\Sigma'= \Sigma\cdot \langle g_s\rangle \subseteq \mathrm{GL}_n(K)$ is also a (PEP) set. So we reduce to proving the result for $g_u$. We may, therefore, assume that $g$ is unipotent.
By writing $g=\mathrm{id}+g_N$ with $g_N$ nilpotent, and considering the binomial expansion of ${g^n=(\mathrm{id}+g_N)^n}$, it is easy to check that the height of the entries of $g^n$ grows polynomially in $n$. By Theorem \ref{firstmainthm}, the number of elements of height $\leq H$ in the (PEP) set $\Sigma$ grows at most like a power of $\log H$ as $H \to \infty$. This proves the corollary. \end{proof} The second new ingredient requires a not entirely trivial argument which uses the finiteness of non-degenerate solutions to $S$-unit equations (cf. \cite{CDRRZ}). \begin{lemma} Let $f:\mathbb{Z}^r \to K^*$ be a purely exponential polynomial. If its image is a multiplicative subgroup of $K^*$, then this subgroup is finitely generated. \end{lemma} Details of the proofs in this section as well as relevant examples and remarks will appear in \cite{CDRRZ}. \vskip2mm \noindent {{\small {\bf Acknowledgements.} The first author is partially funded by the Italian PRIN 2017 ``Geometric, algebraic and analytic methods in arithmetic''. The second author was a guest at the Max Planck Institute for Mathematics while working on this article; he thanks the Institute for its hospitality and financial support. The fourth author is supported by the Institute for Advanced Study and the National Science Foundation under Grant No. DMS-1926686.}} \bibliographystyle{amsplain}
\section{Introduction} In the second half of the XIX century Maxwell summarized all known electromagnetic phenomena in four partial differential equations for the electric and magnetic fields. These equations contain a numerical constant, $c$, which has the dimension of a velocity and the value of the speed of light in vacuum. Far from the sources, the Maxwell equations also contain the wave equation $$ \left[ \nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right]\Phi = 0 $$ where the constant $c$ plays the role of the propagation velocity of the wave. This led to the conclusion that light is an EM wave which propagates with velocity $c$ with respect to a supporting medium, and that the Maxwell equations are valid in a frame attached to that medium. Moreover, as pointed out by Poincar\'e and Lorentz, the Maxwell equations are not invariant in form (\emph{covariant}) under Galilean transformations, which at that time were believed to connect inertial observers. This would mean that the Galilean principle of relativity, stating that Physics laws are the same for all inertial observers, would hold only for the laws of Mechanics. In his paper \cite{AE05} Einstein proposed a different solution, which proved to be the correct one. \section{Galilean transformations and Classical Mechanics} The quantitative description of physical phenomena needs a reference frame where the coordinates of the observed objects are specified, a ruler for measuring distances and a clock for describing the variation of the coordinates with time. Geometry says how coordinates in two different reference frames are related.
If we assume, for the sake of simplicity, two reference frames simply shifted along one of the axes\footnote{All other cases can be obtained by introducing a rotation of the axes and a shift of the origin.}, for instance by $x_0$ along $\hat x$, the relationships are (see Fig.~\ref{eliana-rel-f1}) $$ x'=x-x_0 \hspace*{6mm} y'=y \hspace*{6mm} z'=z $$ \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=56mm]{frames_new.png}} \caption{\label{eliana-rel-f1} The primed frame $S'$ is shifted by $x_0$ with respect to $S$.} \end{figure} If $S'$ is moving along the common $x$-axis with speed $\vec V$=$\hat x V$ with respect to $S$, assuming the origins coincide at $t$=0, we have \begin{eqnarray}\label{eliana-rel-eq1} x'=x-x_0=x-Vt \hspace*{7mm} y'=y \hspace*{6mm} z'=z \end{eqnarray} Eqs.(\ref{eliana-rel-eq1}) are the Galilean coordinate transformations. By differentiating with respect to time we get \begin{eqnarray}\label{eliana-rel-eq2} \dot x'=\dot x -V \hspace*{7mm} \dot y'=\dot y \hspace*{6mm} \dot z'= \dot z \end{eqnarray} where we have implicitly assumed that $t'$=$t$ and that the lengths are the same. From Eqs.(\ref{eliana-rel-eq2}) we see that velocities add. If the light from a source on a train propagates in the $x$-direction with velocity $\hat x c$, for an observer at rest on the railway platform it would propagate with velocity $\hat x (c+V)$ (see Fig.~\ref{eliana-rel-f2}). \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=46mm,height=26mm]{light_on_train.png}} \caption{\label{eliana-rel-f2} Light source on a train moving along the $x$-direction with uniform speed $\hat x V$ with respect to the railway platform.} \end{figure} By differentiating Eqs.(\ref{eliana-rel-eq2}) with respect to time we get \begin{eqnarray}\label{eliana-rel-eq3} \ddot x'=\ddot x \hspace*{7mm} \ddot y'=\ddot y \hspace*{6mm} \ddot z'= \ddot z \end{eqnarray} that is, the acceleration of a body is the same for all observers related by Galilean transformations.
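Eqs.(\ref{eliana-rel-eq2}) and (\ref{eliana-rel-eq3}) can be checked on a concrete trajectory. The Python sketch below (illustrative; the trajectory and the relative speed are chosen arbitrarily) applies the Galilean transformation to a uniformly accelerated motion and compares velocities and accelerations using finite differences, which are exact for quadratic trajectories when computed with rational arithmetic:

```python
from fractions import Fraction as F

V = F(5)                        # relative speed of S' along x (arbitrary)
v0, a = F(3), F(2)              # initial velocity and acceleration in S (arbitrary)

def x_S(t):                     # trajectory observed in S
    return v0 * t + a * t * t / 2

def x_Sp(t):                    # Galilean transform: x' = x - V t (with t' = t)
    return x_S(t) - V * t

def d1(f, t, h=F(1, 1000)):     # central first difference; exact for quadratics
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=F(1, 1000)):     # second difference; exact for quadratics
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

t = F(7)
print(d1(x_Sp, t), d1(x_S, t) - V)   # velocities differ exactly by V
print(d2(x_Sp, t), d2(x_S, t))       # accelerations coincide
```

The velocity in $S'$ differs from that in $S$ exactly by $V$, while the accelerations agree, as Eqs.(\ref{eliana-rel-eq2}) and (\ref{eliana-rel-eq3}) state.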
The basic laws of classical dynamics are \begin{enumerate} \item A \emph{free} body perseveres in its state of rest, or of uniform motion (principle of inertia). Reference frames where the principle of inertia holds are said to be inertial. \item In an inertial reference frame it is $\vec F=m\vec a$, that is, the acceleration, $\vec a$, is proportional to the applied force, $\vec F$, through a constant, $m$ (``inertial mass''). In other words, if in an inertial frame a body appears to be accelerated, it means that there must be something acting on it. Implicitly it is assumed that $m$ is a characteristic of the body which does not depend upon its state of motion. \item Whenever two bodies interact they apply equal and opposite forces to each other. \end{enumerate} The second and third laws combined give the total momentum conservation for an isolated system. The three laws of dynamics hold in inertial frames. If an inertial frame exists, all reference frames in uniform motion with respect to it are inertial. As they are all equivalent, it is reasonable to assume that all mechanics laws are the same for inertial observers (principle of relativity). More precisely, the principle states that the laws must have the \emph{same form} (covariance). If we chose a non-inertial frame for describing the motion of an object, the numerical results would be the same if the motion of the reference frame itself were accounted for correctly. However, the equation of motion for the observed object would take a different form. Are mechanics laws invariant under Galilean transformations? Suppose that Alex is studying the motion of a ball falling under the earth's gravitational force. Alex measures that the object is subject to a constant acceleration of $a\approx$ 9.8 m\,s$^{-2}$. By using different balls he finds that the acceleration is always the same, $g$.
He concludes that there must be a force acting on the balls which is directed towards the center of the earth and has magnitude $mg$. Betty is on a train moving uniformly with velocity $\vec V$=$\hat x V$ with respect to Alex (see Fig. \ref{eliana-rel-f3}). \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=46mm,height=26mm]{gravity_exp.png}} \caption{\label{eliana-rel-f3} Alex, at rest on the railway platform, studies the motion of objects under the gravitation force. Betty is on a train moving along the $x$-direction with uniform speed $\hat x V$ with respect to the railway platform.} \end{figure} From Eqs.(\ref{eliana-rel-eq3}) \begin{alignat}{3} \ddot x' & = \ddot x =0 & \qquad \ddot y' & = \ddot y \nonumber \end{alignat} and as the mass $m$ is a constant, she will agree with Alex on the magnitude and direction of the force. Classical mechanics laws are covariant under Galilean transformations. The relativity principle allows us to choose the most convenient frame for describing an event. We now want to show in a more formal way, using an example, that Newton's law is invariant under Galilean transformations. Let us suppose we have a system of particles and that the forces between them depend upon the reciprocal distances, $r_{ij}$. In the inertial reference frame $S$ it is $$\vec F_i = -\nabla_{r_i} \sum_j U(r_{ij})=m_i\vec a_i$$ In the moving frame $S'$ Newton's law must take the same form, with the potential $U$ having the same functional dependence upon the new variables as upon the old ones.
From the Galilean transformations Eqs.(\ref{eliana-rel-eq1}) and (\ref{eliana-rel-eq3}) we have $r'_{ij}=r_{ij}$, $\vec a'_i=\vec a_i$ and $\nabla_{r'_i}=\nabla_{r_i}$, and therefore, as the mass is a scalar invariant, it is indeed $$\vec F'_i = -\nabla_{r'_i} \sum_j U(r'_{ij})=m_i\vec a'_i$$ \section{Galilean relativity and EM wave equation} Using the chain rule\footnote{ $ {\partial \over \partial x_i}= \sum_j{\partial x'_j\over \partial x_i} {\partial \over \partial x'_j} $} the wave equation\footnote{For simplicity we have chosen the $x$-axis along the direction of propagation.} $$ \left[ \frac{\partial^2}{\partial x^2} - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right]\Phi = 0 $$ becomes under a Galilean transformation \begin{align*} \left[\frac{\partial^2}{\partial x'^2} - \frac{1}{c^2} \frac{\partial^2}{\partial t'^2} - \frac{V^2}{c^2} \frac{\partial^2}{\partial x'^2} + 2\frac{V}{c^2} \frac{\partial^2}{\partial x'\partial t'}\right]\Phi & = 0 \end{align*} and it is clearly not covariant. As anticipated, Maxwell equations would describe EM laws in a particular reference frame, and as such, a \emph{privileged} one. The existence of a medium, the \emph{luminiferous aether}, supporting the propagation of EM waves as the air supports sound waves, was conjectured. This medium had to be extremely rarefied, to be undetectable directly, and it would permeate the whole space. The speed of light would be $c$ with respect to the medium and, according to Eqs.(\ref{eliana-rel-eq2}), would be different for an observer moving with respect to the medium. Experiments aiming to demonstrate the existence of the aether by measuring the speed of light under different conditions were attempted, the most famous of them being those performed by Michelson and Morley using an interferometer. The arrangement is schematically shown in Fig.\ref{eliana-rel-f4}.
The light is split into two orthogonal paths of equal length by the partially silvered glass M, reflected back by mirrors M1 and M2 and recombined on a screen. If the earth is at rest in the aether, the recombined waves are in phase, but if the earth is moving, the times needed by the two waves to reach the screen would be different and an interference pattern should be observed on the screen S. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=46mm,height=30mm]{mm_exp.png}} \caption{\label{eliana-rel-f4} A schematic view of Michelson-Morley interferometer experiment.} \end{figure} To avoid errors due to incorrect mirror angles, or to the distances between the two mirrors and the partially silvered glass not being identical, the apparatus can be rotated, so that possible interference fringes would move. The result of the first experiment in 1887 was negative. It was repeated with higher accuracy apparatuses during the following 50 years; however, the result was always negative. Theories proposed to justify the negative result were contradicted by other experiments. A detailed quantitative description of these experiments may be found in \cite{RR68}. Attempts to modify the still relatively new EM laws in such a way that they would be invariant under Galilean transformations led to predictions of new phenomena which could not be confirmed experimentally. \section{Relativistic Kinematics} In 1905 Einstein \cite{AE05} proposed a solution to the dilemma based on two postulates: \begin{enumerate} \item Physics laws are the same in all inertial frames; there is no preferred reference frame. \item The speed of light in empty space has the same finite value $c$ in all inertial frames. \end{enumerate} At that time the existence of the aether was still widely accepted and not yet ruled out by experiments.
It is worth noting that Lorentz had found the coordinate transformations which leave Maxwell's equations invariant in 1904, before the publication of Einstein's paper, accompanied however by an erroneous interpretation. It is in Einstein's paper that such transformations are \emph{physically} justified and therefore extendable to the whole of Physics. In particular, the concept of time was critically addressed, and the fact that time is not universal comes as a consequence of light having a finite velocity. Let us summarize Einstein's reasoning. In order to describe the motion of an object we need to equip each point of our reference frame with identical clocks and rulers. It is possible to synchronize the clocks by sending light rays. For instance, we can imagine sending a light ray from a point $A$ to $B$ and $B$ reflecting it back to $A$ (see Fig.\ref{eliana-rel-f5}). The two observers sitting at $A$ and $B$ may agree on setting the clock in $B$ at the arrival of the signal to a given value $t_B$, while $A$ will set its own clock to 2$t_B$ when receiving back the signal. However, if we want the speed of light to be $c$=3$\times$10$^8$ m s$^{-1}$, we shall measure the distance, $L$, between $A$ and $B$ and set $t_B$=$L/c$. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=54mm,height=34mm]{clock_synch.png}} \caption{\label{eliana-rel-f5} Synchronization procedure of the clocks in $A$ and $B$.} \end{figure} Assuming the clocks are identical, they will stay synchronized. For this procedure we use light because we have assumed that it propagates in vacuum with constant velocity, so that we can be assured that the velocity is the same in both directions. Once all clocks within one frame are synchronized, we can establish the chronological sequence between events happening in different places within the same frame of reference. The observer $S'$ moving with respect to $S$ may synchronize its own clocks with the very same procedure.
However, as seen by the observer at rest, this synchronization procedure is not correct. Suppose $A$ and $B$ lie on the common $x$-axis with $B$ on the right of $A$ ($x_B>x_A$) as shown in Fig.\ref{eliana-rel-f6}: while the light moves towards $B$, $B$ moves further away, and once the light is reflected back to $A$, $A$ moves toward it. Therefore, as observed by $S$, the time needed to reach $B$ is obtained by setting $$c t_B=L+V t_B $$ ($L\equiv x_B-x_A$) which gives $$ t_B=L/(c-V)$$ while the time needed to reach $A$ is obtained from $$c t_A=L-V t_A$$ that is $$t_A=L/(c+V)$$ and $$ t_B-t_A=L\Bigl(\frac{1}{c-V}-\frac{1}{c+V}\Bigr)=\frac{2VL}{c^2[1-(V/c)^2]}\neq 0 $$ \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=54mm,height=34mm]{moving_clock_synch.png}} \caption{\label{eliana-rel-f6} Synchronization procedure of $S'$ clocks as seen by the ``resting" observer.} \end{figure} Therefore, for $S$, the clocks of $S'$ are not synchronized. If the clocks in the moving frame were synchronous with the stationary ones, they wouldn't be synchronous in their own frame. The ``stationary" frame would dictate the timing. However, stationarity is relative and the inertial frames are all equivalent: if there exists no privileged frame, we must abandon the idea of universal time. Relativity of time is a consequence of the speed of light being finite. As a consequence, events which may be simultaneous for $S$ are in general not simultaneous for $S'$, and the other way round. \section{Lorentz transformations} If the speed of light is assumed constant in all reference frames, the Galilean transformations, which imply the velocity addition rule, must be modified. The new transformations must reduce to the Galilean ones when the relative motion is slow ($V\ll c$).
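Before deriving the new transformations, the synchronization asymmetry computed above can be verified numerically. The following Python sketch (with arbitrary illustrative values of $L$, $c$ and $V$) checks the closed form $t_B-t_A=2VL/(c^2[1-(V/c)^2])$ exactly, and that the asymmetry vanishes as $V\to 0$, consistently with the Galilean limit:

```python
from fractions import Fraction as F

def sync_asymmetry(L, c, V):
    """Difference between the forward trip time t_B = L/(c - V) and the
    return trip time t_A = L/(c + V), as seen from the 'stationary' frame."""
    t_B = F(L) / (c - V)
    t_A = F(L) / (c + V)
    return t_B - t_A

L, c = 300, 3   # arbitrary units (e.g. metres and 10^8 m/s)
for V in (F(1), F(1, 10), F(1, 100)):
    closed_form = 2 * V * L / (c**2 * (1 - (V / c)**2))
    print(V, sync_asymmetry(L, c, V), closed_form)
```

The two expressions agree exactly, and the asymmetry shrinks roughly linearly with $V$ for $V\ll c$.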
According to the first Einstein postulate, the empty space is isotropic (all directions are equivalent) and homogeneous (all points are equivalent); it would make no sense to postulate that the laws are invariant in a space which is not homogeneous and isotropic. As time is not universal, it must be included in the coordinate transformation. Resorting to arguments of space homogeneity and isotropy, and to the Einstein postulates, it is relatively simple to work out the correct coordinate transformation. Homogeneity implies the relationship between the coordinates must be linear: \begin{align*} x' & = a_{11} x + a_{12} y + a_{13} z + a_{14} t \\ y' & = a_{21} x + a_{22} y + a_{23} z + a_{24} t \\ z' & = a_{31} x + a_{32} y + a_{33} z + a_{34} t \\ t' & = a_{41} x + a_{42} y + a_{43} z + a_{44} t \end{align*} where the coefficients $a_{ij}$ may depend upon the relative speed $V$. The points on the $x$-axis where $y$=$z$=0 must transform to $y'$=$z'$=0 at all times, which means that $a_{21}$=$a_{31}$=$a_{24}$=$a_{34}$=0. The points with $y$=0 (the $x$-$z$ plane) must transform into $y'$=0 and therefore it is also $a_{23}$=0. The points with $z$=0 (the $x$-$y$ plane) must transform into $z'$=0 and therefore it is also $a_{32}$=0. Because of isotropy, time must be invariant under a sign inversion of the coordinates $y$ and $z$, which means $a_{42}$=$a_{43}$=0. So we are left with 8 unknown coefficients: \begin{align*} x' & = a_{11} x + a_{12} y + a_{13} z + a_{14} t \\ y' & = a_{22} y \\ z' & = a_{33} z \\ t' & = a_{41} x + a_{44} t \end{align*} For a point on the $y$-axis ($x$=$z$=0) it is $$x'=a_{12}y+a_{14}t $$ and therefore the value of $x'$ would depend on the sign of $y$, which contradicts the hypothesis of isotropy. Therefore it must be $a_{12}$=0. The same argument can be used to set $a_{13}$=0.
We are left with \begin{align*} x' & = a_{11} x + a_{14} t \\ y' & = a_{22} y \\ z' & = a_{33} z \\ t' & = a_{41} x + a_{44} t \end{align*} The value of $a_{22}$ is found by observing that $$y'=a_{22}(V)y=a_{22}(V)a_{22}(-V)y'$$ that is $a_{22}(V)a_{22}(-V)$=1. Because $a_{22}\rightarrow 1$ for $V\rightarrow 0$, the correct choice is $a_{22}$=1. In the same way one finds $a_{33}$=1. The origin of the $S'$ frame is described in $S$ as $x=Vt$ and has by definition $x'$=0 at any time. Therefore $$0=x'_0=a_{11}x_0+a_{14}t=a_{11}Vt+a_{14} t $$ that is, $a_{14}$ and $a_{11}$ are related by $$a_{14}/a_{11}=-V$$ and the equation for $x'$ becomes $$x'=a_{11}(x+a_{14}t/a_{11})=a_{11}(x-Vt)$$ For finding the values of the remaining coefficients $a_{11}, a_{41}$ and $a_{44}$ we resort to the fact that the speed of light is the same in $S$ and $S'$ and that the wave equation is invariant in form. Suppose an EM spherical wave leaves the origin of the frame $S$ at $t$ = 0. The propagation is described in $S$ by the equation of a sphere whose radius increases with time as \begin{equation}\label{eliana-rel-eq4} R^2(t)=x^2+y^2+z^2=c^2t^2 \end{equation} In $S'$ the wave propagates with the same speed $c$ and therefore $$R'^2(t')=x'^2+y'^2+z'^2=c^2t'^2$$ which, writing the coordinates $x'$, $y'$, $z'$ and $t'$ in terms of $x$, $y$, $z$ and $t$, becomes $$a_{11}^2x^2 +a_{11}^2V^2t^2 - 2a_{11}^2 xVt+y^2+z^2=c^2a_{41}^2 x^2+ c^2a_{44}^2t^2 + 2c^2a_{41}a_{44}xt$$ Rearranging the terms it is $$(a_{11}^2 - c^2a_{41}^2)x^2 - 2(a_{11}^2V+c^2a_{41}a_{44})xt+y^2+z^2= (c^2a_{44}^2-a_{11}^2V^2)t^2 $$ Comparing this equation with Eq.(\ref{eliana-rel-eq4}), we get a system of 3 equations in the 3 unknowns $a_{11}$, $a_{41}$ and $a_{44}$ \begin{align*} a_{11}^2 - c^2a_{41}^2 & = 1 \\ a_{11}^2V+c^2a_{41}a_{44} & = 0 \\ c^2a_{44}^2-a_{11}^2V^2 & = c^2 \end{align*} which is solved by $$a_{11}=a_{44}=\frac{1}{\sqrt{1-(V/c)^2}}$$ $$a_{41}= - \frac{V/c^2}{\sqrt{1-(V/c)^2}} $$ The final coordinate transformation for a
uniform motion along the $x$-axis with relative speed $V$ is therefore \begin{equation}\label{eliana-rel-eq5} x' =\gamma(x - \beta ct) \hskip 1 cm y'=y \hskip 1 cm z'=z \hskip 1 cm ct' = \gamma (ct - \beta x) \end{equation} with $$\gamma\equiv\frac{1}{\sqrt{1-\beta^2}} \hspace*{8mm} \mbox{and} \hspace*{5mm}\beta\equiv V/c $$ The inverse transformation from $S'$ to $S$ is obtained by replacing $\beta$ with $-\beta$. The transformation can also be written in matrix form $$ \left ( \begin{matrix} ct' \\ x' \\ y' \\ z' \\ \end{matrix} \right )=\gamma \left ( \begin{matrix} 1 & -\beta & 0 & 0 \\ -\beta & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right ) \left ( \begin{matrix} ct \\ x \\ y \\ z \\ \end{matrix} \right ) \equiv {\cal L} \left ( \begin{matrix} ct \\ x \\ y \\ z \\ \end{matrix} \right ) $$ Successive collinear Lorentz transformations are obtained by matrix multiplication. The matrix elements may also be written as $${\cal L}_{\alpha\beta}=\frac{\partial x'^\alpha}{\partial x^\beta} $$ with $x^0$=$ct$, $x^1$=$x$, $x^2$=$y$ and $x^3$=$z$. It is worth noting that for $V\ll c$, that is $\beta\rightarrow$0 and $\gamma\rightarrow$1, the Lorentz transformations coincide with the Galilean ones, while if $V>c$, $\gamma$ becomes imaginary and the transformations are meaningless. The fact that $c$ is the limit velocity is not an Einstein postulate; it is a consequence of the Lorentz transformations. Fig.~\ref{eliana-rel-f7} shows $\gamma$ as a function of $\beta$.
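The matrix form lends itself to a quick numerical check: the product of two collinear boosts must again be a boost, whose speed combines the two as $(\beta_1+\beta_2)/(1+\beta_1\beta_2)$, anticipating the velocity addition law that follows from the velocity transformation below. A minimal sketch, assuming numpy is available:

```python
import numpy as np

def lorentz(beta):
    """4x4 Lorentz boost along x for relative speed beta = V/c,
    acting on the column (ct, x, y, z)."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.identity(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

b1, b2 = 0.6, 0.7
# successive collinear boosts compose by matrix multiplication ...
L12 = lorentz(b2) @ lorentz(b1)
# ... and equal a single boost with the relativistically combined speed
b12 = (b1 + b2) / (1.0 + b1 * b2)
assert np.allclose(L12, lorentz(b12))
# for V << c the transformation approaches the identity (Galilean limit)
assert np.allclose(lorentz(1e-9), np.identity(4))
```

The same function also exhibits the inverse transformation: replacing $\beta$ with $-\beta$ inverts the matrix.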
\begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=76mm]{gamma.pdf}} \caption{\label{eliana-rel-f7} $\gamma$ as a function of $\beta\equiv V/c$.} \end{figure} The general expression of the Lorentz transformation of parallel translation with arbitrary direction of the relative velocity reads~\cite{DJ75} \begin{equation}\label{eliana-rel-eq5a} ct' = \gamma \bigl(ct - \vec{\beta} \cdot \vec{r}\bigr) \hspace*{14mm} \vec{r}\hspace*{1mm}' = \vec{r} +\frac{\gamma-1}{\beta^2} \bigl(\vec{\beta} \cdot \vec{r}\bigr) \vec{\beta}-\gamma \vec{\beta} c t \end{equation} with $\vec{\beta} \equiv {\vec{V}}/{c}$. Time is one of the 4 coordinates describing an event and, like the spatial coordinates, it is subject to a (Lorentz) transformation between moving frames. For spatial coordinates it is always possible, if for instance $x_2 > x_1$, to find a new coordinate frame such that $x'_2 < x'_1$. Is it possible to find a Lorentz transformation which inverts the temporal order of events? Assume an event happens at time $t_1$ at the location $x_1$ in $S$, and a second event happens at $t_2$ in $x_2$ with $t_2>t_1$. Is it possible to find a Lorentz transformation such that $t'_2<t'_1$? In $S'$ it is $$ct'_{1}=\gamma(ct_{1}-\beta x_{1}) $$ $$ct'_{2}=\gamma(ct_{2}-\beta x_{2} ) $$ and therefore $$c(t'_2-t'_1)=\gamma[c(t_2-t_1) -\beta(x_2-x_1)]$$ Therefore it is $t'_2<t'_1$ if $\beta(x_2-x_1)>c(t_2-t_1)$, that is if $V(x_2-x_1)/(t_2-t_1)>c^2$. This may be possible depending on the values of $x_2-x_1$ and $t_2-t_1$. However if the first event in $S$ \emph{drives} the second one, $x_2$ and $t_2$ are not arbitrary. \newline If $w$ is the speed of the signal triggering the second event from the first one, it is $$x_2-x_1=w(t_2-t_1)$$ \vspace*{-8mm} $$c(t'_2-t'_1)=\gamma[c(t_2-t_1) -\beta w (t_2-t_1)]=\gamma c (t_2-t_1) \Bigl(1-\frac{V w}{c^2}\Bigr) $$ which is always positive as $w\leq c$. Causality is not violated.
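The causality argument can be checked numerically: sweeping over signal speeds $w\le c$ and frame speeds $|V|<c$, the sign of $t'_2-t'_1$ never flips, while for a pair of events with $\Delta x/\Delta t > c$ a frame inverting their order does exist. A small sketch in units where $c=1$:

```python
import math

def dt_prime(dt, dx, beta):
    """Time separation in S' (units c = 1): t'2 - t'1 = gamma*(dt - beta*dx)."""
    return (dt - beta * dx) / math.sqrt(1.0 - beta**2)

# cause and effect connected by a signal with w <= c keep their order
# in every frame with |V| < c
for w in (0.0, 0.5, 1.0):
    for beta in (-0.99, -0.5, 0.5, 0.99):
        assert dt_prime(1.0, w * 1.0, beta) > 0.0

# events with dx/dt > c can be inverted: here V*dx/dt = 1.8 c^2 > c^2
assert dt_prime(1.0, 2.0, 0.9) < 0.0
```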
\section{Some consequences of Lorentz transformations:\\ length contraction and time dilation} As a consequence of the Lorentz transformations, lengths are not invariant. Consider for instance a rod along the $x$-axis and at rest in the moving frame $S'$. The length of the rod in $S'$ is $L'$. The length in $S$ is determined by the positions of the rod ends at the \emph{same} time and therefore from Eq.(\ref{eliana-rel-eq5}) with $t_1$=$t_2$ $$L'=x'_2-x'_1= \gamma(x_2-x_1)=\gamma L$$ The moving rod is shorter than in the frame where it is at rest (\emph{length contraction}). However the length of a rod aligned with one of the two axes perpendicular to the direction of motion is invariant. For this reason angles are in general not invariant. Suppose a clock at rest in $S$ measures a time interval $t_2-t_1$ between two events happening at that same location in $S$. From Eq.(\ref{eliana-rel-eq5}) with $x_1$=$x_2$ the time interval in $S'$ between the two events is $$t'_2-t'_1=\gamma (t_2-t_1)$$ which is larger than measured in $S$ (\emph{dilation of time}). Moreover events happening at the same time but in \emph{different places} in $S$ will no longer be simultaneous in the moving frame $S'$. In fact using Eq.(\ref{eliana-rel-eq5}) with $t_1$=$t_2$ it is $$c(t'_2-t'_1)=\gamma\beta (x_1-x_2)$$ which is non-vanishing if $x_1\neq x_2$. In general, the interval (in space or time) measured in an inertial frame where the observed object is at rest is called \emph{proper}. \vspace*{4mm} Let us suppose that we have two synchronized clocks, $C1$ and $C2$, at the origin $O$ of $S$ and that at $t$=0 we set $C2$ in uniform motion along the $x$-axis with velocity $V$. After a time $t_{C1}$=$T$, when $C2$, according to time dilation, strikes $T/\gamma$, the clock $C2$ inverts its direction. When $C2$ arrives back in $O$, $C1$ strikes $2T$ and $C2$ instead 2$T/\gamma$.
This may look like a paradox because the notion of motion is relative: with respect to $C2$, it was $C1$ moving and therefore $C2$ should strike 2$T$ and $C1$ instead 2$T/\gamma$. However when they are both in $O$ we can compare their times and only one outcome is possible. The mistake is considering the two situations equivalent, while they are not. $C2$ has been set in motion by the action of some kind of force, and some kind of force is also responsible for changing its direction, while $C1$ has experienced no force. Indeed direct experiments involving clocks have shown that time dilation is real~\cite{HK72}. \section{Lorentz transformations for velocity and acceleration} The relativistic transformation of the velocity follows from the Lorentz transformations of the coordinates $$ v'_x \equiv \frac{dx'}{dt'} = \frac{dx-Vdt}{dt-Vdx/c^2} = \frac{v_x-V}{1-v_x\beta/c} $$ \begin{equation}\label{eliana-rel-eq5b} v'_y \equiv \frac{dy'}{dt'} = \frac{dy}{\gamma(dt-Vdx/c^2)} = \frac{v_y}{\gamma (1-v_x \beta/c)} \end{equation} $$ v'_z \equiv \frac{dz'}{dt'} = \frac{dz}{\gamma(dt-Vdx/c^2)} = \frac{v_z}{\gamma (1-v_x \beta/c)} $$ \noindent with $\beta\equiv V/c$, $v_x\equiv dx/dt$, $v_y\equiv dy/dt$ and $v_z\equiv dz/dt$. The inverse transformation is obtained by replacing $V$ with $-V$. \noindent Unlike the classical case, the components of the velocity perpendicular to the motion, when non-vanishing, are also affected by the motion. This is a consequence of the fact that time is not invariant and therefore, although the lengths perpendicular to the motion direction are unchanged, the time needed to cover them is changed. As an exercise, let us use these expressions for a light ray.
For $v_x$=$c$ and $v_y$=$v_z$=0 it is $$v'_x=\frac{c-V}{1-V/c}=c\hspace*{1mm}\frac{c-V}{c-V}=c \hspace*{4mm} \mbox{and } \hspace*{2mm} v'_y=v'_z=0 $$ For $v_y$=$c$ and $v_x$=$v_z$=0 it is $v'_x$=$-V$, $v'_y$=$c/\gamma$, $v'_z$=0 and $$v_x'^2+v_y'^2+v'^2_z=V^2+c^2[1-(V/c)^2]= c^2$$ As expected, the speed of light is invariant. In a similar way as for the velocity, it is possible to find the transformation for the acceleration~\cite{RR68} $$a'_x=\frac{a_x}{\gamma^3(1-v_x\beta/c)^3 }$$ \begin{equation}\label{eliana-rel-eq6} a'_y=\frac{a_y}{\gamma^2(1-v_x\beta/c)^2 } + \frac{a_x v_y\beta/c}{\gamma^2(1-v_x\beta/c)^3 } \end{equation} $$a'_z=\frac{a_z}{\gamma^2(1-v_x\beta/c)^2 } + \frac{a_x v_z\beta/c}{\gamma^2(1-v_x\beta/c)^3 }$$ Acceleration is not invariant under Lorentz transformations unless both $v$ and $V$ $\rightarrow$ 0. \section{Experimental evidence of relativistic kinematics} In his papers Einstein suggested possible experiments for confirming the validity of his theory. Here we give some examples: light aberration, the (transverse) Doppler effect and the lifetime of unstable particles. \subsection{Light aberration} Light aberration is the apparent motion of a light source due to the movement of the observer. It was first discovered in astronomy. Consider a source emitting photons at an angle $\theta$ with respect to the $x$-axis in the $S$ frame, where $v_y$=$c\sin{\theta}$ and $v_x$=$c\cos{\theta}$ (see Fig.\ref{eliana-rel-f8}). In $S'$ it is $v'_y$=$c'\sin{\theta'}$ and $v'_x$=$c'\cos{\theta'}$.
Using Galilean transformations for the velocity components $v_x$ and $v_y$ $$v'_y=v_y \hspace*{2mm} \mbox{and} \hspace*{2mm} v'_x=v_x-V$$ $$\tan{\theta'}=v'_y/v'_x=v_y/(v_x-V)$$ $$\tan{\theta'}=\frac{\sin{\theta}}{(\cos{\theta}-\beta)}$$ Using instead Lorentz transformations $$\tan{\theta'}=\frac{\sin{\theta} }{\gamma(\cos{\theta}-\beta)}$$ \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=78mm]{aberration2.png}} \caption{\label{eliana-rel-f8} Source emitting a light ray at an angle $\theta$ with respect to the $x$-axis in $S$.} \end{figure} \begin{figure}[htb] \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=78mm]{aberration.pdf}} \caption{\label{eliana-rel-f9} Angles observed in the moving frame for $\beta$=0.2 and $\beta$=0.9.} \end{minipage} \hfill \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=78mm]{pi2_vs_gamma.pdf}} \caption{\label{eliana-rel-f9a} $\theta'$ vs. $\gamma$ for $\theta$=90$^\circ$. } \end{minipage}% \end{figure} High energy experiments involving the emission of photons confirm the relativistic expression. Fig.\ref{eliana-rel-f9} shows the classical and relativistic relationships between the emission angles for $\beta$=0.2 and $\beta$=0.9. We see that $\theta'$=$\theta$ for 0 and 180 degrees for both the classical and the relativistic expressions. In all other cases care must be taken as to whether the angles are specified in the moving or in the rest frame. Fig.\ref{eliana-rel-f9a} shows how the angle $\theta$=90$^\circ$ transforms as a function of $\gamma$. \subsection{Doppler effect} In the following we give an alternative computation of the light aberration by using the undulatory description of light, which also allows us to treat the Doppler effect.
\noindent Consider a plane light wave propagating in the direction $\hat r=\hat x \cos{\theta}+\hat y \sin{\theta}$ \begin{eqnarray}\label{eliana-rel-eq8} A(x,y;t)=\cos{[k(x \cos{\theta}+ y \sin{\theta})-\omega t ]} \end{eqnarray} where $k$=$\omega/c$ is the wave number. The wave must have the same form when observed in $S'$ $$A(x',y';t')=\cos{[k'(x' \cos{\theta'}+ y' \sin{\theta'})-\omega' t']}$$ Expressing the coordinates in $S$ in terms of the coordinates in $S'$, Eq.(\ref{eliana-rel-eq8}) gives $$B(x',y';t')=\cos{\{k[\gamma (x'+\beta ct')\cos{\theta}+ y' \sin{\theta}]-\omega \gamma(t'+\beta x'/c )\}}$$ Comparing with the expression for $A(x',y';t')$ we get \begin{eqnarray}\label{eliana-rel-eq9} k' \cos{\theta'}=k\gamma \cos{\theta}-\omega\gamma\beta/c = k\gamma(\cos{\theta}-\beta) \end{eqnarray} \begin{eqnarray}\label{eliana-rel-eq10} k' \sin{\theta'}=k\sin{\theta} \end{eqnarray} \begin{eqnarray}\label{eliana-rel-eq11} \omega'=-k\gamma\beta c \cos{\theta}+\gamma \omega = \gamma \omega (1-\beta\cos{\theta}) \end{eqnarray} From Eqs.~(\ref{eliana-rel-eq9}) and (\ref{eliana-rel-eq10}) it is $$\tan{\theta'}=\frac{\sin{\theta} }{\gamma(\cos{\theta}-\beta)}$$ which is the result found previously. In addition Eq.~(\ref{eliana-rel-eq11}) gives the frequency measured by two observers in relative motion. Suppose that the source is at rest in $S$ so that $\omega$ is the proper frequency, $\omega_0$. Thus it is $$\omega'=\omega_0\gamma(1-\beta\cos{\theta})$$ where $\theta$ is the propagation angle in the source reference frame. For $\theta$=0 it is $$\omega'=\omega_0 \sqrt{\frac{1-\beta}{1+\beta}} $$ and therefore $\omega' < \omega_0$ for $\beta>0$ (receiver moving away from the source), while $\omega' > \omega_0$ for $\beta<0$ (receiver moving towards the source).
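Eqs.~(9)--(11) can be verified numerically: the transformed wave must still satisfy $\omega'=k'c$ (the speed of light is invariant), the aberration formula must follow from the ratio of Eqs.~(10) and (9), and for $\theta$=0 the longitudinal Doppler formula must be recovered. A minimal sketch in units where $c=1$:

```python
import math

def doppler_aberration(omega, theta, beta):
    """Transform (omega, theta) of a plane light wave to the frame S'
    moving with speed beta*c along x, using Eqs. (9)-(11)."""
    g = 1.0 / math.sqrt(1.0 - beta**2)
    k = omega            # k = omega/c with c = 1
    kx = g * k * (math.cos(theta) - beta)   # k' cos(theta')
    ky = k * math.sin(theta)                # k' sin(theta')
    omega_p = g * omega * (1.0 - beta * math.cos(theta))
    return omega_p, math.atan2(ky, kx)

omega0, beta, theta = 1.0, 0.6, 1.0
g = 1.0 / math.sqrt(1.0 - beta**2)

# theta = 0: longitudinal Doppler shift omega' = omega0*sqrt((1-beta)/(1+beta))
w_par, _ = doppler_aberration(omega0, 0.0, beta)
assert math.isclose(w_par, omega0 * math.sqrt((1 - beta) / (1 + beta)))

# the transformed wave still travels at c: omega' = |k'| (c = 1)
w_p, th_p = doppler_aberration(omega0, theta, beta)
kx = g * (math.cos(theta) - beta)
ky = math.sin(theta)
assert math.isclose(w_p, math.hypot(kx, ky))

# aberration: tan(theta') = sin(theta) / (gamma*(cos(theta) - beta))
assert math.isclose(math.tan(th_p), math.sin(theta) / (g * (math.cos(theta) - beta)))
```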
\noindent For $\theta$=90$^\circ$ (in the frame where the source is at rest) it is $$\omega'=\omega_0\gamma$$ Unlike the classical case, relativity thus predicts the existence of a transverse Doppler effect, a consequence of time not being invariant. Einstein, who predicted it, also suggested in 1907 an experiment using hydrogen ions for measuring it. The experiment, realized for the first time by Ives and Stilwell in 1938, proved the correctness of Einstein's prediction. \subsection{Lifetime of unstable particles} Besides $e^-$, $p$ and $n$, in nature there are particles which are produced in scattering processes and, unlike $e^+$, $\bar p$ and $\bar n$, are ``short-lived". Their number decays in time as $$N(t)=N_0 e^{-t/\tau}$$ Pions are produced by bombarding a suitable target with high energy protons and leave the target with $v\approx$ 2.97$\times$10$^{8}$ m/s, that is $\beta$=0.99 and $\gamma\approx$ 7. The lifetime of charged pions at rest is $\tau_0$=26$\times$10$^{-9}$ s. The time, $\bar t$, needed for the pions at rest to decay by half is $$N(\bar t)=N_0 e^{-\bar t/\tau}=\frac{N_0}{2} \hspace*{4mm} \rightarrow \hspace*{2mm} \bar t=18 \hspace*{1mm} \mathrm{ns}$$ It is observed that they are reduced to half after 37 m from the target. If their lifetime were the same as at rest, they would be reduced to half already after about 5 m. \noindent The experimental observation is explained if the pion lifetime in the laboratory frame is $$\tau=\gamma \tau_0$$ as predicted by time dilation. \noindent The decaying pions produce muons, which are unstable too. Their lifetime at rest is 2.2 $\mu$s, small but larger than the pion one. Time dilation may allow us to realize future colliders smashing muons, if we are able to accelerate them to high energy very quickly!
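The pion numbers quoted above are easily reproduced; a small numerical sketch using the rounded values $\beta$=0.99 and $\tau_0$=26 ns from the text:

```python
import math

c = 2.998e8          # m/s
tau0 = 26e-9         # charged-pion lifetime at rest, s
beta = 0.99
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # ~7, as quoted in the text

# at rest the population halves after t = tau0*ln(2) ~ 18 ns
t_half = tau0 * math.log(2.0)
assert abs(t_half - 18e-9) < 0.1e-9

# distance needed to halve the beam, without and with time dilation
d_classical = beta * c * t_half          # ~5 m
d_dilated = beta * c * gamma * t_half    # ~37 m
assert 5.0 < d_classical < 5.5
assert 37.0 < d_dilated < 38.5
```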
\section{Relativistic Dynamics} Assuming $\vec F$ invariant and $m$ constant, Newton's law, $\vec F=m\vec a$, is not invariant under Lorentz transformations because, as we have seen, $\vec a$ is not invariant. \noindent In addition the mass cannot be a constant because, by applying a constant force to an object, its speed would increase indefinitely, becoming larger than $c$. Classical mechanics must be modified to achieve invariance under Lorentz transformations, and the new expressions must reduce to the classical ones when the speed of the objects is much smaller than $c$. \subsection{The relativistic mass} In the 1905 paper, Einstein used the Lorentz force and the electro-magnetic field transformations to achieve the generalization of the definition of momentum and energy. We follow here instead the approach by Lewis and Tolman~\cite{LT09}. In 1909 the MIT professors of chemistry Lewis and Tolman suggested a reasoning more straightforward than Einstein's original one, involving purely mechanical arguments. Let us assume there are two observers, Alex and Betty, moving towards each other with the same velocity as seen by a third observer, Charlie (see Fig.~\ref{eliana-rel-f11}). \noindent Betty sits in $S$ and Alex in $S'$. Alex and Betty have identical elastic balls. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=72mm]{AB_dancers.png}} \caption{\label{eliana-rel-f11} Betty's and Alex's frames as seen by the third observer Charlie. They move towards each other with equal and opposite velocity along the $x$-axis.} \end{figure} Betty (see Fig.~\ref{eliana-rel-f12}) releases the red ball with $v^B_x$=0 and $v^B_z\neq 0$, while Alex (Fig.~\ref{eliana-rel-f13}) releases the green ball with speed $v'^A_x$=0 and $v'^A_z$ numerically equal and opposite to the red ball velocity, that is $$v'^A_z=-v^B_z $$ The experiment is set up so that the two balls collide and rebound as shown in Fig.~\ref{eliana-rel-f14}.
\newline Now let us consider Betty's point of view. For Betty it is $$\Delta p^B_x=0 \hspace*{4mm} \Delta p^B_z=2 m_B v^B_z $$ $$\Delta p^A_x=0 \hspace*{4mm} \Delta p^A_z=-2 m_A v^A_z$$ We recall the Lorentz transformation of the velocity $$ v'_z = \frac{v_z}{\gamma (1-v_x \beta/c)} $$ As we know the values of the velocity components in the moving frame, we need here the inverse of the velocity transformation Eq.~(\ref{eliana-rel-eq5b}), that is $$ v_x= \frac{v'_x+V}{1+v'_x \beta/c} $$ $$ v_z= \frac{v'_z}{\gamma (1+v'_x \beta/c)} $$ In our case $v'_x$=$v'^A_x$=0 and $v'_z$=$v'^A_z$=$-v^B_z$ and therefore $ v_x^A=V$ before and after the collision, and \begin{eqnarray}\label{eliana-rel-eq12} v^A_z=v'^A_z/\gamma=-v^B_z/\gamma \end{eqnarray} with $ \gamma=1/\sqrt{1-(v^A_x/c)^2}$, before the collision and \begin{eqnarray}\label{eliana-rel-eq12a} v^A_z=-v'^A_z/\gamma=v^B_z/\gamma \end{eqnarray} after. \begin{figure} \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=42mm]{B_view.png}} \caption{\label{eliana-rel-f12} The elastic collision as observed by Betty.} \end{minipage} \hfill \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=42mm]{A_view.png}} \caption{\label{eliana-rel-f13} The elastic collision as observed by Alex.} \end{minipage}% \end{figure} \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=42mm]{C_view.png}} \caption{\label{eliana-rel-f14} The elastic collision as observed by Charlie. The two balls are scattered under the same angle of incidence.} \end{figure} \noindent Momentum, classically defined as $\vec p$=$m\vec v$, is conserved if $\Delta(\vec p^A + \vec p^B)$=0. It is $\Delta p_x^A $=$\Delta p_x^B $=0.
In addition it must be $$ \Delta p^B_z=-\Delta p^A_z$$ which, using Eqs.~(\ref{eliana-rel-eq12}) and~(\ref{eliana-rel-eq12a}) and the definition of momentum, gives $$m_B v^B_z=-m_A v^A_z = \frac{1}{\gamma}m_A v^B_z \hspace*{4mm} \rightarrow \hspace*{2mm}m_A=\gamma m_B$$ \noindent We may assume that $v^B_z$ is small so that $m_B$ is the \emph{mass at rest}, $m_0$, and $m_A$=$m(v)$. So we have found that $$ m(v)=\gamma m_0$$ where $m_0$ is the mass in the reference frame where the object is at rest. We can keep the momentum definition, $\vec p$=$m \vec v$, from classical dynamics by giving up the invariance of mass. It is worth noting that in modern physics language $m$ is used for denoting the rest mass. An approach similar to the Lewis and Tolman one is used in~\cite{RR68}, where the elastic scattering of two identical particles is observed in the center of mass and in the frame of one of the two particles. The assumption made here (and in~\cite{RR68}) is that the scattering angle is equal to the incidence one, which is a possible realization of an elastic scattering. \newpage \subsection{The relativistic energy} As the mass depends upon $v$, let us write Newton's law $\vec F$=$m d\vec v/dt$ as $$\vec F= \frac{d\vec p}{dt} =\frac{d }{dt}(\gamma m_0 \vec v)=m_0 \vec v \frac{d\gamma}{dt} + m_0\gamma \frac{d\vec v}{dt} $$ \noindent By scalar multiplication of the r.h.s. and l.h.s. by $ \vec{v}$, and using $\vec v\cdot d\vec v/dt=v\,dv/dt$ and $d\gamma/dt=\gamma^3 (v/c^2)\, dv/dt$, it is $$ \vec F\cdot {\vec{v}}={\vec{v}}\cdot\frac{d\vec p}{dt} $$ $$ {\vec{v}}\cdot\frac{d\vec p}{dt}= m_0\gamma{\vec{v}}\cdot\frac{d{\vec{v}}}{dt}+m_0\frac{v^2}{c^2}\gamma^3v \frac{dv}{dt}= m_0\gamma v\frac{dv}{dt}\Bigl(1+\frac{v^2\gamma^2}{c^2}\Bigr)=m_0\gamma^3v\frac{dv}{dt} $$ that is $$ \frac{dE}{dt}=\vec F\cdot {\vec{v}}=m_0\gamma^3v\frac{dv}{dt}$$ It is easy to verify that this equation is satisfied if we define the energy as $$E=mc^2=\gamma m_0c^2$$ For $v$=0, it is $E_0$=$m_0 c^2$, which has the meaning of the \emph{energy at rest}.
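That $E=\gamma m_0c^2$ indeed satisfies $dE/dt=m_0\gamma^3 v\,dv/dt$ can be spot-checked numerically, since the relation is equivalent to $dE/dv=m_0\gamma^3 v$; a finite-difference sketch in units where $m_0=c=1$:

```python
import math

def E(v):
    """Relativistic energy E = gamma*m0*c^2 in units m0 = c = 1."""
    return 1.0 / math.sqrt(1.0 - v**2)

# central-difference check of dE/dv = m0*gamma^3*v at a few speeds
for v in (0.1, 0.5, 0.9):
    h = 1e-6
    dEdv = (E(v + h) - E(v - h)) / (2.0 * h)
    gamma = 1.0 / math.sqrt(1.0 - v**2)
    assert math.isclose(dEdv, gamma**3 * v, rel_tol=1e-6)
```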
Relativistically the energy of a free particle at rest is non-vanishing. \noindent The (relativistic) kinetic energy is obtained by subtracting the rest energy from the total energy $$T=mc^2-m_0c^2=m_0c^2(\gamma-1)\neq \frac{1}{2} \gamma m_0 v^2 $$ which gives the classical kinetic energy $T\simeq m_0v^2/2$ for $v\ll c$. Experiments have confirmed the validity of the relativistic relationship between $\vec p$ and $\vec v$. In particular Bertozzi's experiment~\cite{WB64} measured directly the velocity of an $e^-$ beam accelerated in a linear accelerator. The experimental arrangement is shown in Fig.~\ref{eliana-rel-f15}. The $e^-$ speed was measured through the time of flight. The kinetic energy was computed from the accelerating field and from the measurement of the heat deposited in the aluminum target. \noindent The results in Fig.~\ref{eliana-rel-f16} confirm Einstein's prediction and also show clearly the presence of a limit speed, $c$. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=72mm]{Bertozzi_apparatus.png}} \caption{\label{eliana-rel-f15} Bertozzi apparatus (from~\cite{WB64}).} \end{figure} \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=72mm]{Bertozzi_fig.png}} \caption{\label{eliana-rel-f16} Bertozzi results (from~\cite{WB64}). The solid line is the calculation using classical formulas, while the dashed line is the relativistic prediction. The dots are the experimental results.} \end{figure} Relativity is of fundamental importance for accelerators, where the particles may be accelerated to speeds near $c$. In a ring accelerator dipole magnets keep the particles on the design orbit and longitudinal radio-frequency electric fields boost their energy. The relationship between momentum and speed dictates how the frequency of the accelerating electric field and the dipole field must be varied with energy.
The dipole field must be ramped up according to the momentum for keeping the particles on the design orbit ($\rho$=$p/eB$). The electric field frequency, which is a multiple of the revolution frequency, is $$f_{rf}= h f_{rev} = h\frac{\beta c} {L}=h\frac{c} {\gamma L}\sqrt{\gamma^2-1} $$ which for large $\gamma$ becomes $$ f_{rf} \approx h \frac{c}{L}(1-\frac{1}{2\gamma^2}) $$ At large $\gamma$ the revolution frequency is almost constant. This is particularly true for $e^\pm$, which have a $\gamma$ about 1836 times larger than protons of the same energy. Fig.~\ref{eliana-rel-f17} shows the CERN PS Booster case, where the proton kinetic energy is ramped from 160 MeV to 2 GeV. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=68mm]{booster_n.pdf}} \caption{\label{eliana-rel-f17} Momentum and revolution frequency in the CERN PS Booster as a function of the kinetic energy. The ring is about 157 m long.} \end{figure} \newpage While in classical mechanics the mass is an invariant scalar conserved in physics processes, relativistically the rest mass alone is not conserved. To show that the rest mass is not conserved we consider an inelastic scattering (kinetic energy is not conserved) between two identical particles, $A$ and $B$, with rest mass $m_0$. In the center of mass, $S'$, it is $\vec v'^A$=$-\vec v'^B$. We may assume $\vec v'^A$=$-\vec v'^B$=$\hat x v'$ (see Fig.~\ref{eliana-rel-f18}). After colliding, the two particles stick together in a new particle, $C$, at rest in $S'$ (see Fig.\ref{eliana-rel-f19}), so that momentum is conserved. In the reference frame, $S$, where $A$ is at rest, the particle $B$ moves before the collision with speed $v^B_x$=2$v'/(1+v'^2/c^2)$ while after the collision $C$ moves with speed $v^C_x$=$v'$ (see Figs.~\ref{eliana-rel-f20}, \ref{eliana-rel-f21}).
The mass of $B$ in $S$ is $$ m_B = \frac{m_0}{\sqrt{1-(v^B_x/c)^2}} = \frac{m_0 [1+(v'/c)^2]}{1-(v'/c)^2} $$ Momentum conservation in $S$ requires \begin{align*} \underbrace{p^A_{x}+p^B_{x}}_{\text{before}} & = \underbrace{p^C_{x}}_{\text{after}} \quad \rightarrow \quad \frac{m_0 v^B_x}{\sqrt{1-(v^B_x/c)^2}} = \frac{m^C_0 v'}{\sqrt{1-(v'/c)^2}} \end{align*} Using the value found for $v^B_x$ and solving for $m^C_0$ we get $$m^C_0 = \frac{2 m_0}{\sqrt {1-(v'/c)^2}}$$ $$m^C_0-2m_0 = 2m_0\biggl(\frac{1}{\sqrt{1-(v'/c)^2}}-1\biggr)$$ The rest mass of the product particle $C$ is \emph{larger} than the sum of the starting particle rest masses, and the difference, multiplied by $c^2$, is just the initial total kinetic energy in $S'$, $T'_A+T'_B$. The kinetic energy in $S'$ has been completely converted into mass. Although the kinetic energy is not conserved, the total energy, kinetic plus energy at rest, is conserved. That the sum of the rest masses is not conserved is well known to every high energy particle physicist. An example is the annihilation of an $e^+ e^-$ pair into 2 photons.
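The algebra of the inelastic collision can be verified numerically for a sample value of $v'$; a small sketch in units where $m_0=c=1$:

```python
import math

m0 = 1.0
vp = 0.6                                  # v': speed of A and B in S'
gp = 1.0 / math.sqrt(1.0 - vp**2)         # gamma in S'

# speed of B in the frame S where A is at rest (velocity addition)
vB = 2.0 * vp / (1.0 + vp**2)
gB = 1.0 / math.sqrt(1.0 - vB**2)

# momentum conservation in S: m0*gB*vB = mC*gp*vp fixes the rest mass of C
mC = m0 * gB * vB / (gp * vp)
assert math.isclose(mC, 2.0 * m0 / math.sqrt(1.0 - vp**2))

# the rest-mass increase equals the kinetic energy T'_A + T'_B in S'
assert math.isclose(mC - 2.0 * m0, 2.0 * m0 * (gp - 1.0))
```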
\begin{figure}[htb] \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=56mm]{eli_rel2c.pdf}} \caption{\label{eliana-rel-f18} Identical particles colliding head-on, observed in the center-of-mass frame $S'$.} \end{minipage} \hfill \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=56mm]{eli_rel3c.pdf}} \caption{\label{eliana-rel-f19} After the inelastic collision the two particles stick together in the particle $C$, at rest in $S'$.} \end{minipage}% \end{figure} \begin{figure}[htb] \begin{minipage}[c]{0.4\linewidth} \rotatebox{0}{ \includegraphics*[width=56mm]{eli_rel4c.pdf}} \caption{\label{eliana-rel-f20} Particles observed before the collision in the frame $S$ where $A$ is at rest.} \end{minipage} \hfill \begin{minipage}[c]{0.4\linewidth} \centering \rotatebox{0}{ \includegraphics*[width=56mm]{rel8c.png}} \caption{\label{eliana-rel-f21} Particle $C$ after the collision as seen in $S$.} \end{minipage}% \end{figure} \section{4-vectors} While relativistically lengths and times depend upon the motion of the observer, the interval defined as $$ \bigl(ds\bigr)^2 \equiv \bigl[d\bigl(ct\bigr)\bigr]^2 -\bigl(dx\bigr)^2-\bigl(dy\bigr)^2-\bigl(dz\bigr)^2 $$ is invariant under Lorentz transformations. Indeed \begin{align*} ds'^2 & = c^2dt'^2-\bigl(dx'^2+dy'^2+dz'^2\bigr) \\ & = \gamma^2\bigl(c^2dt^2+\beta^2dx^2 -2\beta c \, dt \, dx-\beta^2 c^2dt^2-dx^2+2\beta c \, dt \, dx\bigr)-dy^2-dz^2 \\ & = \gamma^2\bigl[\bigl(1-\beta^2\bigr)\bigl(c^2dt^2 -dx^2\bigr)\bigr]-dy^2-dz^2 \\ & = c^2dt^2-dx^2 -dy^2-dz^2 = ds^2 \end{align*} If $(ct_1,x_1,y_1,z_1)$ and $(ct_2,x_2,y_2,z_2)$ are the coordinates of two events in $S$ we ask whether it is possible to find an inertial frame $S'$ where the two events happen in the \emph{same place}.
As the interval is invariant, this means that $$(\Delta s')^2 = (\Delta s)^2$$ and therefore $$(c\Delta t')^2 = (c\Delta t )^2-(\Delta x^2 +\Delta y^2 +\Delta z^2)$$ where we have set $\Delta t \equiv t_2-t_1$, $\Delta x \equiv x_2-x_1$ and so on. The l.h.s. of this equation is never negative. Therefore the answer is affirmative if $(\Delta s)^2 >$0. Such intervals are called \emph{time-like} intervals. The time in $S'$ between the two events is $$\Delta t'=\frac{1}{c}\sqrt{c^2\Delta t^2-(\Delta x^2 +\Delta y^2 +\Delta z^2)}=\frac{\Delta s}{c}$$ By using the Lorentz transformations we find that the speed of the frame $S'$ with respect to $S$ is $V$=$\sqrt{\Delta x^2+\Delta y^2 +\Delta z^2}/\Delta t$, which is smaller than $c$ because $(\Delta s)^2 >$0. Now we ask if it is possible to find an inertial frame where the two events happen at the \emph{same time}. In this case $(\Delta s')^2 = (\Delta s)^2$ implies that $$ (c\Delta t )^2-(\Delta x^2 +\Delta y^2 +\Delta z^2)= -(\Delta x'^2 +\Delta y'^2 +\Delta z'^2) < 0$$ that is, $(\Delta s)^2$ must be negative. The distance between the two events in $S'$ is $$\sqrt{\Delta x'^2 +\Delta y'^2 +\Delta z'^2}=\sqrt{\Delta x^2 +\Delta y^2 +\Delta z^2-(c\Delta t)^2 } $$ which is a \emph{real} number as the argument of the square root on the r.h.s. is positive. By using the Lorentz transformations we find $$0=c\Delta t'=\gamma(c\Delta t-\beta\Delta x)$$ that is, the speed $V$ of the frame $S'$ with respect to $S$ is $V$=$c^2\Delta t/\Delta x$. The constraint $V<c$ imposes $\Delta x/\Delta t>c$. This means that no causal connection may exist between the two events. These intervals are called \emph{space-like} intervals. \vspace*{0mm} \noindent Finally the case $\Delta s'$=$\Delta s$=0 corresponds to events connected by a light ray. \noindent Let us consider a particle moving with velocity $\vec v(t)$, not necessarily uniform, in $S$.
The time interval $d\tau$ evaluated in an inertial frame $S'$ where the particle is instantaneously at rest is called proper time. It is related to the time measured in $S$ by \begin{align*} d\tau & = \sqrt{1-v^2/c^2} dt \equiv \frac{dt}{\gamma} \\ \intertext{and for a finite time interval} t_2-t_1 & = \int_{\tau_1}^{\tau_2} \frac{d\tau}{\sqrt{1-v^2/c^2}} \end{align*} The proper time is \emph{by definition} an invariant. This results also from the fact that $c^2d\tau^2$ is the invariant $ds^2$ evaluated in the frame where the particle is instantaneously at rest. This definition of proper time contains the definition given in Section VI as a particular case, when the particle is not accelerated. \vspace*{4mm} An object whose 4 components transform as $(ct,x,y,z)$ is a 4-vector. In the same way as done for intervals, one can prove that for any pair of 4-vectors $A$ and $B$ the quantities \begin{align*} A^\nu B_\nu & \equiv A_0 B_0 -\bigl(A_x B_x + A_y B_y +A_z B_z\bigr) \\ \intertext{and in particular} A^\nu A_\nu & = A_0^2 -\bigl(A_x^2 + A_y^2 + A_z^2\bigr) \nonumber \end{align*} are invariant. Classically the scalar products, $\vec A\cdot\vec B$, and in particular the lengths of vectors, $\vec A\cdot\vec A$, are invariant. \section{Minkowski space-time} In 1907 the mathematician Hermann Minkowski, who had been Einstein's professor at the Z\"urich Polytechnic, showed that the special theory of relativity can be formulated by using a 4-dimensional space with metric tensor\footnote{Here it is a 4$\times$4 matrix defining the scalar product.}, $g$, given by $$g= \left ( \begin{matrix} +1 & 0 & 0 & 0 \\ 0 &-1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ \end{matrix} \right ) $$ Lorentz frames are those where the metric tensor takes this special form; they are connected by Lorentz transformations. The points of Minkowski space-time are the \emph{events}, and the vectors in this space have 4 components (\emph{4-vectors}) which transform according to Lorentz transformations.
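The invariance of the scalar product is equivalent to the defining property ${\cal L}^T g\,{\cal L}=g$ of Lorentz transformations, which can be checked numerically; a minimal sketch assuming numpy:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric tensor

def boost(beta):
    """Boost along x acting on (ct, x, y, z)."""
    gam = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.identity(4)
    L[0, 0] = L[1, 1] = gam
    L[0, 1] = L[1, 0] = -gam * beta
    return L

A = np.array([2.0, 0.3, -1.0, 0.5])    # two arbitrary 4-vectors
B = np.array([1.5, -0.2, 0.7, 0.1])
L = boost(0.8)

# the scalar product A^nu B_nu = A.g.B is invariant under the boost
assert np.isclose((L @ A) @ g @ (L @ B), A @ g @ B)
# equivalently L^T g L = g
assert np.allclose(L.T @ g @ L, g)
```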
The transformations for momentum and energy may be found directly from the definitions and the Lorentz transformations for the velocity, without introducing the notion of 4-vectors. The result is $$p'_x=\gamma_V(p_x-E \hspace*{1mm}V/c^2) \hspace*{10mm} p'_y=p_y \hspace*{10mm} p'_z=p_z$$ $$E'=\gamma_V (E-V\hspace*{1mm} p_x)$$ with $\gamma_V\equiv1/\sqrt{1-V^2/c^2}$. \emph{A posteriori} we notice that the transformations have the same form as the coordinate transformations with $\vec r \rightarrow \vec p$ and $t \rightarrow E/c^2 $, and therefore $(E/c,\vec p)$ is a 4-vector. A more elegant way of reaching the same result is by showing directly that $(E/c,\vec p)$ is a 4-vector, so that it \emph{must} transform according to the Lorentz transformations. Indeed, owing to the fact that the proper time interval $d \tau=dt/\gamma$ is an invariant and that $(cdt,dx,dy,dz)$ obviously transforms as $(ct,x,y,z)$, the quantity (\emph{4-velocity}) defined as \begin{eqnarray}\label{eliana-rel-eq12b} \Bigl(\frac{c \, dt}{d\tau},\frac{dx}{d\tau}, \frac{dy}{d\tau},\frac{dz}{d\tau}\Bigr) =\Bigl(\gamma\frac{c \, dt}{dt},\gamma\frac{dx}{dt}, \gamma\frac{dy}{dt},\gamma\frac{dz}{dt}\Bigr) \equiv (\gamma c,\gamma \vec{v}) \end{eqnarray} transforms according to the Lorentz transformations. Multiplying the 4-velocity by the rest mass we get $$ m_0(\gamma c,\gamma \vec{v})=(E/c,\vec p) $$ which is also a 4-vector (energy-momentum or 4-momentum vector) and therefore transforms according to the Lorentz transformations. Relativistically energy and momentum are closely connected. If in one inertial reference frame energy and momentum are conserved ($\Delta \vec p$=0 and $\Delta E$=0), for example in a collision between particles, they are conserved in every other inertial frame, because a 4-vector having all components vanishing in one reference frame has vanishing components in any other one too.
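A quick numerical check of the energy-momentum transformation: for an arbitrary particle velocity and boost, the transformed components must leave the combination $(E/c)^2-\vec p\cdot\vec p=(m_0c)^2$ unchanged. A sketch in units where $c=1$:

```python
import math

m0 = 1.0
v = (0.5, 0.2, -0.1)                  # particle velocity in S
V = 0.6                               # boost speed along x
gv = 1.0 / math.sqrt(1.0 - sum(u * u for u in v))
gV = 1.0 / math.sqrt(1.0 - V**2)

E = gv * m0                           # E = gamma*m0*c^2, c = 1
p = [gv * m0 * u for u in v]          # p = gamma*m0*v

# transformation of the energy-momentum 4-vector
px_p = gV * (p[0] - E * V)
E_p = gV * (E - V * p[0])
py_p, pz_p = p[1], p[2]

# invariant mass: (E/c)^2 - p.p = (m0*c)^2 in both frames
assert math.isclose(E**2 - sum(u * u for u in p), m0**2)
assert math.isclose(E_p**2 - (px_p**2 + py_p**2 + pz_p**2), m0**2)
```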
Similarly if momentum is conserved for two inertial observers ($\Delta \vec p$=$\Delta {\vec p}\hspace*{1mm}'$=0), the energy too must be conserved. \section{ Newton and Minkowski force and their relativistic transformation} We may write the relativistic Newton law $\vec F$=$d\vec p/dt$ in terms of 4-vectors. In the particle proper frame \begin{eqnarray}\label{eliana-rel-eq13} \frac{dp^\nu}{d\tau}=f^\nu \end{eqnarray} with $(p^0,p^1,p^2,p^3)$=$(E/c, p_x,p_y,p_z)$ and $(f^0,f^1,f^2,f^3)$=$(f^0,F_x,F_y,F_z)$. The l.h.s. is a 4-vector and therefore also $\vec f$, the Minkowski force, on the r.h.s. must be a 4-vector related to the Newton force $\vec F$. The space part of the equation of motion is \begin{gather*} \frac{d \vec{p}}{d \tau} = \vec{f} \qquad \rightarrow \qquad \gamma \frac{d \vec{p}}{dt} = \vec{f} \qquad \rightarrow \qquad \vec{f} =\gamma \vec{F}% \end{gather*} The time part of Eq.~(\ref{eliana-rel-eq13}) is \begin{eqnarray}\label{eliana-rel-eq14} f^0 = \frac{d p^0}{d \tau} = \frac{1}{2p^0} \frac{d (p^0)^2}{d \tau}=\frac{1}{2p^0} \frac{d (E/c)^2}{d \tau} \end{eqnarray} The invariance of $(E/c)^2 -\vec p\cdot\vec p=(m_0c)^2$ implies that $$ \frac{d}{d \tau} \biggl[ \left(\frac{E}{c}\right)^2 -\vec{p} \cdot \vec{p} \biggr]=0 $$ $$ \frac{d}{d \tau} \left(\frac{E}{c}\right)^2 = 2\vec{p} \cdot \frac{d \vec{p}}{d \tau} $$ which inserted in Eq.~(\ref{eliana-rel-eq14}) gives \begin{eqnarray} f^0 = \frac{1}{2p^0} \frac{d (E/c)^2}{d \tau} = \frac{1}{2 p^0} 2 \vec{p} \cdot \frac{d \vec{p}}{d \tau} = \frac{m_0 \gamma\vec{v}}{E/c}\cdot \bigl(\gamma \vec{F}\bigr) = \gamma \vec{\beta} \cdot \vec{F} \end{eqnarray} The Minkowski force is therefore $$(f^0,f^1,f^2,f^3)=(\gamma\vec\beta\cdot \vec F, \gamma \vec F)$$ We notice that \begin{alignat*}{3} \frac{dE}{dt} = \frac{1}{\gamma}\vec{v} \cdot \vec{f} & = \frac{1}{\gamma}\frac{d\vec{\ell}}{dt} \cdot \vec{f} \qquad & \rightarrow && \hspace*{4mm}dE & = \frac{1}{\gamma} d \vec{\ell} \cdot \vec{f} \end{alignat*} which is the 
expression of the work done by a force $\vec{F}=\vec{f} /\gamma$. In the absence of external forces ($\vec{F}$=0) it is $\vec f$=0, and momentum and energy are conserved. Being a 4-vector, the Minkowski force transforms according to the Lorentz transformations. Care must be taken to distinguish between the particle velocity $\vec v$ in the $S$ frame and the relative velocity of the frames, which we will denote by $\vec V$. Using the general expression of the Lorentz transformations, Eq.~(\ref{eliana-rel-eq5a}), we have \begin{align*} f'^0 & = \gamma_V \bigl(f^0 - \vec{\beta}_V \cdot \vec{f}\bigr) \\ \vec{f}\hspace*{1mm}' & = \vec{f} +\frac{\gamma_V-1}{\beta_V^2} \bigl(\vec{\beta}_V \cdot \vec{f}\,\bigr) \vec{\beta}_V-\gamma_V f^0 \vec{\beta}_V \end{align*} The Newton force transformation reads \begin{equation}\label{eliana-rel-eq16} \gamma\hspace*{0.5mm}' \vec{F}' = \gamma \vec{F} +\frac{\gamma_V-1}{\beta_V^2} \bigl[\vec{\beta}_V \cdot \bigl(\gamma\vec{F}\bigr)\bigr] \vec{\beta}_V-\gamma_V \vec{\beta}_V \bigl(\gamma \vec{\beta} \cdot \vec{F}\bigr) \end{equation} The inverse transformation is obtained by replacing $\vec{\beta}_V$ with $-\vec{\beta}_V$ \begin{equation}\label{eliana-rel-eq17} \gamma \vec{F} = \gamma\hspace*{0.2mm}' \vec{F}' +\frac{\gamma_V-1}{\beta_V^2} \bigl[\vec{\beta}_V \cdot \bigl(\gamma\hspace*{0.2mm}'\vec{F}'\bigr)\bigr] \vec{\beta}_V+\gamma_V \vec{\beta}_V \bigl(\gamma\hspace*{0.2mm}' \vec{\beta}' \cdot \vec{F}'\bigr) \end{equation} \section{Relativistic transformation of EM fields} The force acting on a charged particle moving in an EM field is the Lorentz force $$ \vec F = q \vec E + q \vec v \times \vec B $$ The corresponding Minkowski force is \begin{align*} f^\nu & = \bigl(\gamma \vec{\beta} \cdot \vec{F}, \gamma \vec{F}\bigr) = q \, \bigl[\gamma \vec{\beta} \cdot \bigl(\vec{E} + \vec{v} \times \vec{B}\bigr), \gamma \, \bigl(\vec{E} + \vec{v} \times \vec{B}\bigr)\bigr] \end{align*} with $\gamma=1/\sqrt{1-(v/c)^2}$.
This equation can be written in matrix form as $$ \left ( \begin{matrix} f^0 \\ f^1\\ f^2\\ f^3 \\ \end{matrix} \right )=\frac{q}{c} \left ( \begin{matrix} 0 & E_x & E_y & E_z \\ E_x & 0 & cB_z & -cB_y \\ E_y & -cB_z & 0 & cB_x \\ E_z & cB_y & -cB_x & 0 \\ \end{matrix} \right ) \left ( \begin{matrix} \gamma c \\ \gamma v_x\\ \gamma v_y \\ \gamma v_z\\ \end{matrix} \right ) $$ The Minkowski force and the 4-velocity $(\gamma c, \gamma \vec v)$ (Eq.~\ref{eliana-rel-eq12b}) are 4-vectors. Using the Lorentz transformation ${\cal L}$ from $S$ to $S'$ and ${\cal L}^{-1}$ from $S'$ to $S$ we get $$ \left ( \begin{matrix} f'^0 \\ f'^1\\ f'^2\\ f'^3 \\ \end{matrix} \right )= {\cal L} \left ( \begin{matrix} f^0 \\ f^1\\ f^2\\ f^3 \\ \end{matrix} \right )=\frac{q}{c}{\cal L} \left ( \begin{matrix} 0 & E_x & E_y & E_z \\ E_x & 0 & cB_z & -cB_y \\ E_y & -cB_z & 0 & cB_x \\ E_z & cB_y & -cB_x & 0 \\ \end{matrix} \right ) {\cal L}^{-1} \left ( \begin{matrix} \gamma' c \\ \gamma' v'_x\\ \gamma' v'_y \\ \gamma' v'_z\\ \end{matrix} \right ) $$ Requiring that the Minkowski force in $S'$ has the same form as in $S$, it must be $$ \left ( \begin{matrix} 0 & E'_x & E'_y & E'_z \\ E'_x & 0 & cB'_z & -cB'_y \\ E'_y & -cB'_z & 0 & cB'_x \\ E'_z & cB'_y & -cB'_x & 0 \\ \end{matrix} \right )= {\cal L} \left ( \begin{matrix} 0 & E_x & E_y & E_z \\ E_x & 0 & cB_z & -cB_y \\ E_y & -cB_z & 0 & cB_x \\ E_z & cB_y & -cB_x & 0 \\ \end{matrix} \right ) {\cal L}^{-1} $$ which yields the field components in $S'$ \begin{alignat*}{2} E'_x & = E_x & \qquad B'_x & = B_x \\ E'_y & =\gamma_V (E_y-VB_z) & \qquad B'_y & = \gamma_V \Bigl(B_y +{V\over c^2} E_z\Bigr) \\ E'_z & =\gamma_V (E_z+VB_y) & \qquad B'_z & = \gamma_V \Bigl(B_z -{V\over c^2} E_y\Bigr) \end{alignat*} As an alternative to this formal approach, we now derive the field transformations directly from physical considerations.
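The conjugation ${\cal L}\,F\,{\cal L}^{-1}$ can be verified numerically. The following Python sketch (boost along $x$, with arbitrary illustrative field values) builds the field matrix, conjugates it with the boost, and reads off the primed fields:

```python
import math

def matmul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

c = 1.0
V = 0.6 * c
b = V / c
g = 1.0 / math.sqrt(1 - b * b)

# Boost along x and its inverse (beta -> -beta)
L    = [[g, -g*b, 0, 0], [-g*b, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
Linv = [[g,  g*b, 0, 0], [ g*b, g, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Hypothetical field values in S
Ex, Ey, Ez = 1.0, 2.0, -1.5
Bx, By, Bz = 0.5, -0.3, 0.8

# The field matrix appearing in the Minkowski-force equation
M = [[0,     Ex,    Ey,    Ez  ],
     [Ex,    0,     c*Bz, -c*By],
     [Ey,   -c*Bz,  0,     c*Bx],
     [Ez,    c*By, -c*Bx,  0   ]]

Mp = matmul(L, matmul(M, Linv))   # M' = L M L^{-1}

# Read off the primed fields from the positions they occupy in M'
Ex_p, Ey_p, Ez_p = Mp[0][1], Mp[0][2], Mp[0][3]
Bz_p = Mp[1][2] / c
By_p = -Mp[1][3] / c
Bx_p = Mp[2][3] / c
```

The values read off from $M'$ reproduce the closed-form transformations listed above, e.g. $E'_y=\gamma_V(E_y-VB_z)$ and $B'_z=\gamma_V(B_z-VE_y/c^2)$.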
The force acting on a charged particle moving in an EM field is the Lorentz force $$ \vec F = q \vec E + q \vec v \times \vec B $$ The corresponding Minkowski force is (note that $\vec\beta \cdot (\vec v \times \vec B)$=0) \begin{align*} f^\nu & = \bigl(\gamma \vec{\beta} \cdot \vec{F}, \gamma \vec{F}\bigr) = q \, \bigl[\gamma \vec{\beta} \cdot \vec{E} , \gamma \, \bigl(\vec{E} + \vec{v} \times \vec{B}\bigr)\bigr] \end{align*} In a second reference frame, $S'$, the force must have the same form \begin{align*} f'^\nu & = \bigl(\gamma' \vec{\beta}' \cdot \vec{F}', \gamma' \vec{F}'\bigr) = q \, \bigl[\gamma' \vec{\beta'} \cdot \vec{E'} , \gamma' \, \bigl(\vec{E'} + \vec{v'} \times \vec{B'}\bigr)\bigr] \end{align*} where we assumed $q'=q$, a fact that has been experimentally verified to high precision. Knowing how the Minkowski force transforms, it is possible to obtain the expressions for the field transformations. Let us consider the case of a particle at rest in $S$ subject to the fields $\vec E$ and $\vec B$. In $S$ it is $$ \vec F = q \vec E $$ In the frame $S'$, moving with translational motion along the common $\hat x$-axis with velocity $V$ with respect to $S$, it is $v_x'$=$-V$ and $v'_y$=$v'_z$=0. The force components in $S'$ are \begin{align*} F'_x & = q(E'_x+v'_yB'_z-v'_zB'_y)=qE'_x \\ F'_y & = q(E'_y-v'_xB'_z+v'_zB'_x)=qE'_y+qVB'_z \\ F'_z & = q(E'_z+v'_xB'_y-v'_yB'_x)=qE'_z-qVB'_y \end{align*} From Eq.~(\ref{eliana-rel-eq16}) the force components transform as \begin{align*} \gamma_V F'_x & = F_x +{\gamma_V-1 \over \beta_V^2} \beta_V^2 F_x = \gamma_V F_x \\ \gamma_V F'_y & = F_y \\ \gamma_V F'_z & = F_z \end{align*} with $\gamma_V\equiv1/\sqrt{1-(V/c)^2}$.
Writing explicitly the force in terms of the fields we get \begin{alignat*}{3} E_x & = E'_x & \qquad E_y & =\gamma_V (E'_y+VB'_z) & \qquad E_z & =\gamma_V (E'_z-VB'_y) \end{alignat*} The inverse transformations are obtained by replacing $V$ with $-V$ \begin{alignat*}{3} E'_x & = E_x & \qquad E'_y & = \gamma_V (E_y-VB_z) & \qquad E'_z & = \gamma_V (E_z+VB_y) \end{alignat*} Finding the transformation for the magnetic field is more complicated, because the electric force cannot be made to vanish by a convenient choice of the reference frame. We consider again the two frames $S$ and $S'$, with $S'$ moving with velocity $V$ along the common $\hat x$-axis. For a charged particle moving in $S'$ along the $\hat y'$-axis it is $v_x$=$V$, $v_y$=$v'_y/\gamma_V$ and $v_z$=$v'_z$=0. The force in $S'$ is $$ F'_x = q(E'_x+v'_yB'_z) \qquad F'_y = qE'_y \qquad F'_z = q(E'_z-v'_yB'_x) $$ Using the force transformation we get \begin{align*} \gamma'F'_x & = \gamma' q(E'_x+v'_yB'_z) = \gamma F_x+ {\gamma_V -1 \over \beta_V^2 } \beta_V^2\gamma F_x -\gamma_V \gamma \beta_V({v_x \over c}F_x+{v_y \over c} F_y) \\ & = {\gamma \over \gamma_V}F_x-\gamma_V \gamma{v_y V \over c^2} F_y = {\gamma \over \gamma_V}q(E_x+v_yB_z)-\gamma_V \gamma{v_y V \over c^2} q(E_y-v_xB_z) \\ \gamma'F'_y & = \gamma' qE'_y = \gamma F_y = \gamma q(E_y-v_xB_z) \\ \gamma'F'_z & = \gamma' q(E'_z-v'_yB'_x) = \gamma F_z = \gamma q(E_z+v_xB_y-v_yB_x) \end{align*} Using the transformations already found for the electric field and the fact that in this case $\gamma' \gamma_V=\gamma$, we notice that the equation for $F'_y$ is an identity, while the other two equations give $$ \gamma v_y B'_z = {\gamma \over \gamma_V} v_yB_z- \gamma \gamma_V {v_y V \over c^2}(E_y-VB_z) $$ $$ \gamma' \gamma_V(E_z+VB_y)-\gamma_Vv_yB'_x = \gamma (E_z+VB_y-v_yB_x) $$ The magnetic field component transformations are therefore \begin{align*} B'_z & = \gamma_V \Bigl(B_z -{V\over c^2} E_y\Bigr) \\ B'_x & = B_x \end{align*} The transformation for $B_y$ is
obtained by considering a particle moving along the $\hat z'$-axis, and reads \begin{align*} B'_y & = \gamma_V \Bigl(B_y +{V\over c^2} E_z\Bigr) \end{align*} The expressions found are valid for a translational motion along the $x$-axis. In the general case, when $\vec V$ has an arbitrary direction, the field transformations read~\cite{DJ75} \begin{equation} \begin{aligned} \vec{E}' & = \gamma_V\bigl(\vec{E}+\vec{V} \times \vec{B}\bigr) -\frac{\gamma_V^2}{\gamma_V+1} (\vec{\beta}_V \cdot \vec{E}) \vec{\beta}_V \\ \vec{B}' & = \gamma_V\Bigl(\vec{B} -\frac{\vec{V}}{c^2} \times \vec{E}\Bigr) - \frac{\gamma_V^2}{\gamma_V+1}\bigl(\vec{\beta}_V \cdot \vec{B}\bigr) \vec{\beta}_V \end{aligned} \end{equation} Decomposing the fields into their components parallel and perpendicular to the relative velocity $\vec V$, these relations may also be written as \begin{align*} \vec{E}' & = \vec{E}_{\parallel}+\gamma_V\bigl(\vec{E}_\bot + \vec{V} \times \vec{B}\bigr) \\ \vec{B}' & = \vec{B}_{\parallel}+\gamma_V\Bigl(\vec{B}_\bot - \frac{\vec{V}}{c^2}\times \vec{E}\Bigr) \end{align*} where we made use of the identity $$ \gamma_V-\frac{\gamma^2_V \beta^2_V}{\gamma_V +1} = 1 $$ \subsection{Transformation of a charge distribution} Let us consider a distribution of charges at rest in $S'$. The charge density is given by $$ \rho'(x',y',z',t') = \frac{qN}{dx' dy' dz'} $$ In $S$, which moves with velocity $-V$ with respect to $S'$ (see Fig.~\ref{eliana-rel-f25}), the volume element is $$dx\, dy \, dz= \frac{dx'}{\gamma} dy' dz' $$ where we have taken into account the length contraction in the $x$ direction.
\begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=42mm]{charges_in_box2.png}} \caption{\label{eliana-rel-f25} Charge distribution at rest in $S'$.} \end{figure} The charge density in $S$ is therefore \begin{gather*} \rho = \frac{qN}{dx\, dy\, dz} = \gamma \rho' \end{gather*} As the charge distribution moves in $S$ with velocity $+\hat x V$, in $S$ there is a current in the $x$ direction with density \begin{gather*} j_x = \rho V = \gamma \rho' V \end{gather*} and in general $$\vec{j} = \rho\vec{V} = \gamma\rho'\vec{V}$$ There is an analogy with the energy-momentum 4-vector $(E/c, \vec p)$ or $(mc,\vec p)$ with $ \rho \hspace*{1mm} \rightarrow \hspace*{1mm} m$ and $\vec j \hspace*{1mm} \rightarrow \hspace*{1mm} \vec p$. In fact, renaming the charge density in the rest frame, $\rho'$, as $\rho_0$, we may write $$\rho=\rho_0 \frac{m}{m_0}$$ $$\vec j = \rho_0 \frac{\vec p}{m_0} $$ since $m/m_0$=$\gamma$. Therefore $(\rho c,\vec{j} \hspace*{1mm})$ is a 4-vector. Indeed the transformations we have found are the (inverse) Lorentz transformations for the particular case $\vec j\hspace*{1mm}'$=0. \noindent In the Lorentz gauge $$\nabla \cdot\vec A=- \frac{1}{c^2}\frac{\partial \Phi}{\partial t} $$ the equations for the scalar and vector potential take the form $$ \frac{1}{c^2}\frac{\partial^2 \Phi}{\partial t^2} - \nabla^2\Phi = \frac{\rho}{\epsilon_0} $$ $$ \frac{1}{c^2}\frac{\partial^2 \vec A}{\partial t^2} - \nabla^2\vec A = \frac{\vec j}{\epsilon_0c^2} $$ Using the d'Alembert operator $$\Box \equiv \frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \nabla^2 $$ these equations can be combined into a single one \begin{equation}{\label{eliana-rel-eq19}} \Box A^\alpha = \mu_0 J^\alpha \end{equation} with $\mu_0=1/(\epsilon_0c^2)$, $ A^0$=$\Phi/c$, $A^1$=$A_x$, $A^2$=$A_y$, $A^3$=$A_z$ and $J^0$=$c\rho$, $J^1$=$j_x$, $J^2$=$j_y$, $J^3$=$j_z$.
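The relations $\rho=\gamma\rho_0$ and $\vec j=\rho\vec V$ can be checked against the 4-vector property of $(c\rho,\vec j\,)$ with a few lines of Python (hypothetical values; the combination $(c\rho)^2-j^2$ must equal its rest-frame value $(c\rho_0)^2$):

```python
import math

c = 1.0
rho0 = 2.0                 # hypothetical charge density in the rest frame S'
V = 0.8 * c                # speed of the distribution in the lab frame S
g = 1.0 / math.sqrt(1 - (V / c) ** 2)

rho = g * rho0             # density seen in S (length contraction)
jx = rho * V               # current density in S

# For a 4-vector, (c rho)^2 - j^2 must be invariant, equal to (c rho0)^2
inv = (c * rho) ** 2 - jx ** 2
```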
We know now that $(c\rho,\vec j \hspace*{1mm})$ is a 4-vector, and it is easy to verify that the d'Alembert operator is invariant under Lorentz transformations. Therefore, by requiring covariance of Eq.~(\ref{eliana-rel-eq19}), $(\Phi/c,\vec A)$ must also be a 4-vector. \subsection{Direct proof of invariance of Maxwell Equations} Knowing how fields and sources transform, one can prove that the Maxwell equations are invariant under Lorentz transformations. \noindent For example let us prove that $$ \nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0} \qquad \Rightarrow \qquad \nabla' \cdot \vec{E}' = \frac{\rho'}{\epsilon_0} $$ The partial derivatives in $S'$ and in $S$ are related by the chain rule \begin{alignat*}{2} \frac{\partial}{\partial ct'} & = \frac{\partial ct}{\partial ct'} \frac{\partial}{\partial ct}+\frac{\partial x}{\partial ct'} \frac{\partial}{\partial x}+\frac{\partial y}{\partial ct'} \frac{\partial}{\partial y}+\frac{\partial z}{\partial ct'} \frac{\partial}{\partial z} & = \gamma \left(\frac{\partial}{\partial ct}+ \beta\frac{\partial}{\partial x} \right) & \\ \frac{\partial}{\partial x'} & = \frac{\partial ct}{\partial x'}\frac{\partial}{\partial ct}+ \frac{\partial x}{\partial x'}\frac{\partial}{\partial x}+ \frac{\partial y}{\partial x'}\frac{\partial}{\partial y}+ \frac{\partial z}{\partial x'}\frac{\partial}{\partial z} & = \gamma\left(\beta\frac{\partial}{\partial ct} + \frac{\partial}{\partial x}\right) & \end{alignat*} $$ \frac{\partial}{\partial y'} = \frac{\partial}{\partial y} \qquad\qquad \frac{\partial}{\partial z'} = \frac{\partial}{\partial z} $$ By using the chain rule, the EM field transformations, and the fact that the Maxwell equations hold in $S$, we find \begin{align*} \nabla' \cdot \vec{E}' & = \frac{\partial E'_x}{\partial x'} +\frac{\partial E'_y}{\partial y'} +\frac{\partial E'_z}{\partial z'} \\ & = \gamma\frac{\partial E'_x}{\partial x} +\frac{\partial E'_y}{\partial y} +\frac{\partial E'_z}{\partial z} +\gamma\beta
\frac{\partial E'_x}{\partial ct} \\ & = \gamma \frac{\partial E_x}{\partial x} +\gamma\frac{\partial E_y}{\partial y} +\gamma\frac{\partial E_z}{\partial z} -\gamma V\frac{\partial B_z}{\partial y} +\gamma V\frac{\partial B_y}{\partial z} +\gamma\beta\frac{\partial E_x}{\partial ct} \\ & = \gamma \nabla \cdot \vec{E} -\gamma V \left(\frac{\partial B_z}{\partial y} -\frac{\partial B_y}{\partial z}\right) +\gamma\beta\frac{\partial E_x}{\partial ct} \\ & = \gamma\frac{\rho}{\epsilon_0} -\gamma V \biggl(\nabla \times \vec{B} -\frac{1}{c^2}\frac{\partial \vec{E}}{\partial t}\biggr)_x \\ & = \gamma\frac{\rho}{\epsilon_0} -\gamma V\frac{j_x}{\epsilon_0 c^2} \\ & = \frac{\gamma}{\epsilon_0 c} \left(\rho c - \beta j_x \right) \\ &= \frac{\rho'}{\epsilon_0} \end{align*} which proves the invariance of the first Maxwell equation under Lorentz transformations. \section{Some geometrical aspects of special relativity} Let us consider our observer $O$ at the origin of the inertial frame $S$. We can represent the $x$ and $w\equiv ct$ coordinates\footnote{For simplicity only the space coordinate $x$ is considered.} measured by $O$ on two orthogonal axes (see Fig.~\ref{eliana-rel-f22}). This graphical representation was introduced by Minkowski. Any event is represented by a point in the Minkowski diagram, and the trajectory of a particle is a sequence of points called a ``world line''. The angle between the tangent to a material particle world line and the $w$-axis is always smaller than $45^\circ$, as the particle speed is always smaller than $c$. The world line of a light ray is a straight line at $45^\circ$. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=56mm]{world_line.png}} \caption{\label{eliana-rel-f22} $(x,w)$ diagram relative to the inertial reference frame $S$.} \end{figure} Let us consider the $(x,w)$ diagram relative to an inertial reference frame $S$.
The world lines of light rays delimit the grey area in Fig.~\ref{eliana-rel-f24} and define the so-called \emph{light cone}. For any event point $P$ inside the grey area it is $w^2-x^2>$0. That is, the interval $\Delta s$ between such events and $O$ is time-like and, as seen in the previous section, it is always possible to find a Lorentz transformation to a frame in which the two events happen at the same place; therefore their chronological sequence can be established. The events in the upper part of the grey region, for which $t>$0, happen after the event $O$; this region is called the \emph{future} with respect to $O$. The events represented by points in the lower part of the grey region, for which $t<$0, happen before $O$ (\emph{past}). As the interval $\Delta s^2$ is invariant, the fact that $P$ is a future event with respect to $O$ does not depend upon the reference frame. All points like $Q$ outside the grey area correspond to space-like intervals, because $\Delta s^2$=$(ct )^2-x^2<$0. As previously shown, for space-like intervals it is not possible to find a reference frame where the events happen at the same space point, and therefore it is not possible to establish a chronological sequence between them. This region is called \emph{elsewhere}. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=56mm]{cone.png}} \caption{\label{eliana-rel-f24} The light cone relative to the observer $O$. $P$ is an event in the future, while $Q$ is an ``elsewhere'' event.} \end{figure} \section{SOME APPLICATIONS OF THE EM FIELD TRANSFORMATIONS} The fact that the laws of physics are the same in any inertial reference frame allows us to solve problems in the most convenient frame. Here we show two typical examples which are relevant in accelerator physics.
\subsection{The field of a moving charge} The EM fields generated by a charge at rest in the origin of the $S'$ frame are \begin{align*} \vec{E}' & = \frac{q}{4 \pi \epsilon_0} \frac{\vec{r}'}{r'^3} \\ \vec{B}' & = 0 \end{align*} \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=42mm]{moving_charge.pdf}} \caption{\label{eliana-rel-f26} Particle moving along the $x$-axis of reference $S$.} \end{figure} We can use the field transformations derived above for computing the fields in the frame where the particle is uniformly moving. We choose the frame so that the particle moves along the $x$-axis (see Fig.~\ref{eliana-rel-f26}). The electric field in $S$ is \begin{alignat*}{3} E_x = E'_x & = \frac{q}{4 \pi \epsilon_0} \frac{x'}{r'^3} & \qquad E_y = \gamma E'_y & = \gamma\frac{q}{4 \pi \epsilon_0} \frac{y'}{r'^3} & \qquad E_z = \gamma E'_z & = \gamma\frac{q}{4 \pi \epsilon_0} \frac{z'}{r'^3} \end{alignat*} Expressing the primed particle coordinates in terms of the coordinates in $S$, that is $x'=\gamma (x-vt)$, $y'$=$y$ and $z'$=$z$, the electric field components are \begin{align*} E_x & = \frac{q}{4 \pi \epsilon_0} \frac{\gamma (x-vt)}{[\gamma^2 (x-vt)^2+y^2+z^2]^{3/2}} \\ E_y & = \frac{q}{4 \pi \epsilon_0} \frac{\gamma y}{[\gamma^2 (x-vt)^2+y^2+z^2]^{3/2}} \\ E_z & = \frac{q}{4 \pi \epsilon_0} \frac{\gamma z}{[\gamma^2 (x-vt)^2+y^2+z^2]^{3/2}} \end{align*} As the particle is moving, in $S$ there is also a magnetic field.
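A small numerical check of the components just derived (a Python sketch with arbitrary illustrative values, prefactor $q/4\pi\epsilon_0$ set to 1 and $c=1$): the field at any point is parallel to the vector joining the \emph{instantaneous} charge position $(vt,0,0)$ to the field point, i.e. it is radial from the present position of the charge:

```python
import math

v, t = 0.8, 0.3                  # hypothetical particle speed and time
gamma = 1.0 / math.sqrt(1 - v**2)
x, y, z = 1.1, 0.4, -0.6         # hypothetical field point

denom = (gamma**2 * (x - v*t)**2 + y**2 + z**2) ** 1.5
E = [gamma * (x - v*t) / denom,  # E_x
     gamma * y / denom,          # E_y
     gamma * z / denom]          # E_z

# Vector from the instantaneous charge position (v t, 0, 0) to the field point
r = [x - v*t, y, z]

# If E is radial, its cross product with r must vanish
cross = [E[1]*r[2] - E[2]*r[1],
         E[2]*r[0] - E[0]*r[2],
         E[0]*r[1] - E[1]*r[0]]
```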
Using the magnetic field transformations we have $$ \vec B' =0 = \vec{B}_{\parallel}+\gamma_V\Bigl(\vec{B}_\bot -\frac{\vec{V}}{c^2} \times \vec{E}\Bigr) $$ which means \begin{align*} \vec{B}_{\parallel} & = 0 \\ \vec{B}_\bot & = \frac{1}{c^2} \vec{v} \times \vec{E} \end{align*} We may evaluate the electric field at the time $t=0$\footnote{At a different time $\bar t$ the fields take at $(x,y,z)$ the same values as at $(x-v\bar t,y,z)$ for $t$=0.} $$ \vec{E} = \frac{q}{4 \pi \epsilon_0} \frac{\gamma \vec{r}}{[\gamma^2 x^2+y^2+z^2]^{3/2}} $$ Denoting by $\theta$ the angle between the $\hat x$-axis and $\vec{r}$ and using the relationship $$\gamma^2 x^2+y^2+z^2=\gamma^2r^2(1-\beta^2\sin^2\theta) $$ we get $$ \vec{E}=\frac{q}{4 \pi \epsilon_0} \frac{1-\beta^2}{r^2(1-\beta^2 \sin^2\theta)^{3/2}}\frac{\vec{r}}{r} $$ The electric field is still radial and follows the $1/r^2$ law, but it no longer has spherical symmetry. The magnetic field is perpendicular to the plane defined by $\vec{r}$ and $\vec{v}$. In accelerators, particles are often ``ultra-relativistic'', that is, their speed in the laboratory frame is almost $c$. For $\beta \rightarrow 1$ it is $\vec E \rightarrow 0$, except near $\theta=90^\circ$ or $270^\circ$, where the field is enhanced by a factor $\gamma$. \subsection{Forces between moving charges} Let us consider a uniform cylindrical beam of radius $R$ of equally charged particles moving with velocity $v$ along $\hat{x}$ (see Fig.~\ref{eliana-rel-f27}). Each of them experiences a repulsive electric force and an attractive magnetic force. \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=42mm]{current.pdf}} \caption{\label{eliana-rel-f27} Uniform cylindrical charge distribution.} \end{figure} In the reference frame $S'$ where the particles are at rest there is no magnetic field.
Inside the beam ($r'\le R$) it is \begin{align*} F'_{r'} = q E'_{r'} = \frac{1}{2\pi \epsilon_0 R^2} {q^2 \lambda' r'} \qquad \lambda' & = N/L' \end{align*} that is, the force acting on each charge is purely radial. By using Eq.~(\ref{eliana-rel-eq17}) for the Newton force transformation, with $\gamma'$=1, $\beta'$=0, $\gamma_V$=$\gamma$, $\beta_V$=$\beta$ and $\vec{\beta}_V \cdot \vec{F}'$=0, we get \begin{align*} F_\parallel = 0 \qquad F_r = \frac{1}{\gamma} F'_{r'} = \frac{1}{2 \pi \epsilon_0 R^2} {q^2 \lambda r} \frac{1}{\gamma^2} \end{align*} where the line density in $S$, $\lambda$, is related to the line density in $S'$, $\lambda'$, by $\lambda = \gamma\lambda' = \gamma N/L'$. In the reference frame $S$ the force is still radial and repulsive, but it is reduced by a factor $1/\gamma^2$. Beams in accelerators may be approximated by a uniform cylindrical charge distribution. We see that the repulsive force between the equally charged particles becomes smaller at high energy. \section{THE CM ENERGY} The center of momentum frame, usually referred to as the center of mass (CM) frame, for an isolated ensemble of particles is defined as the inertial frame where $$ \sum_i \vec p_i = \sum_i \frac{m_{0,i} \vec{v}_i}{\sqrt{1-v_i^2/c^2}} = 0 $$ where $\vec v_i$ are the particle velocities measured in that frame. We have seen that $(E/c)^2-|\vec p|^2$ is an invariant with value $m_0^2c^2$. For the total energy and momentum of the ensemble $$E= \sum_i E_i \hspace*{6mm} \mbox{and} \hspace*{6mm} \vec P=\sum_i \vec p_i $$ the invariant evaluated in the CM frame is \vspace*{-4mm} $$ \biggl(\sum_iE_i/c\biggr)^2-\sum_i\vec{p}_i \cdot\sum_i\vec{p}_i = \biggl(\sum_i E'_i/c\biggr)^2 $$ where $E'_i$ is the energy of the $i$-th particle in the CM frame. Let us consider two simple cases: \begin{itemize} \item[a)] two ultra-relativistic particles colliding ``head-on''; \vspace*{-2mm} \item[b)] one ultra-relativistic particle hitting a particle at rest.
\end{itemize} For the system of two particles it is \begin{align*} \frac{(E'_1+E'_2)^2}{c^2} & = \frac{(E_1+E_2)^2}{c^2}-(\vec{p}_1+\vec{p}_2) \cdot (\vec{p}_1+\vec{p}_2) \\ & = \frac{(E_1+E_2)^2}{c^2}-p_1^2 -p_2^2 -2\vec{p}_1 \cdot \vec{p}_2 \end{align*} Moreover, for ultra-relativistic particles it is $$ p = mv\simeq mc = \frac{E}{c} $$ Case a): $\quad \vec{p}_1/p_1 = -\vec{p}_2/p_2$ (see Fig.~\ref{eliana-rel-f28}). \vspace*{4mm} \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=42mm]{rel11c.pdf}} \caption{\label{eliana-rel-f28} Two particles colliding head-on.} \end{figure} \begin{gather*} \frac{(E'_1+E'_2)^2}{c^2} = \frac{E_1^2}{c^2}+\frac{E_2^2}{c^2} +2\frac{E_1 E_2}{c^2}-\frac{E_1^2}{c^2} -\frac{E_2^2}{c^2}+2\frac{E_1 E_2}{c^2} = 4\frac{E_1 E_2}{c^2} \end{gather*} and thus $$ E'_1+E'_2 = 2\sqrt{E_1 E_2} $$ For instance, for the LHC $pp$ collider with $E_1$=$E_2$=6.5 TeV, the energy in the center of mass is $E_1'+E_2'$=2$\times$6.5=13 TeV. For the $p/e^\pm$ HERA collider, which was in operation until 2007, with $E_1$=920 GeV and $E_2$=27.5 GeV, it is $E'_1+E'_2$=318 GeV. \vspace*{4mm} Case b): $\quad \vec p_2 = 0$ and $E_2=m_{0,2}c^2$ (see Fig.~\ref{eliana-rel-f29}). \vspace*{4mm} \begin{figure}[htb] \centering \rotatebox{0}{ \includegraphics*[width=42mm]{rel12c.pdf}} \caption{\label{eliana-rel-f29} Particle hitting a second particle at rest in the laboratory frame $S$.} \end{figure} \begin{align*} \frac{(E'_1+E'_2)^2}{c^2} & = \frac{(E_1+E_2)^2}{c^2}-p_1^2 -p_2^2 -2\vec{p}_1 \cdot \vec{p}_2 \end{align*} \begin{align*} \frac{(E'_1+E'_2)^2}{c^2} & = \frac{E_1^2}{c^2}+\frac{E_2^2}{c^2} +2\frac{E_1 E_2}{c^2}-\frac{E_1^2}{c^2} = \frac{E_2^2}{c^2}+2\frac{E_1 E_2}{c^2} \\ (E'_1+E'_2) & = \sqrt{E_2(E_2+2E_1)} = \sqrt{E_2(m_{0,2}c^2+2E_1)} \simeq \sqrt{2 E_1 E_2} \end{align*} For example, with $E_2=0.938$~GeV (the proton rest energy), to obtain a CM energy of 318~GeV one needs $E_1\simeq 54$~TeV.
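The numbers quoted in the two cases can be reproduced with a few lines of Python (energies in GeV):

```python
import math

def ecm_collider(E1, E2):
    # Head-on, ultra-relativistic collision: E'_1 + E'_2 = 2 sqrt(E1 E2)
    return 2 * math.sqrt(E1 * E2)

def ecm_fixed_target(E1, m2c2):
    # Beam of energy E1 on a target of rest energy m2c2 (exact expression)
    return math.sqrt(m2c2 * (m2c2 + 2 * E1))

E_lhc  = ecm_collider(6500.0, 6500.0)       # LHC pp: 13000 GeV = 13 TeV
E_hera = ecm_collider(920.0, 27.5)          # HERA: about 318 GeV
E_ft   = ecm_fixed_target(54000.0, 0.938)   # 54 TeV beam on a proton at rest
```

The fixed-target case indeed requires a 54 TeV beam to reach the same ~318 GeV available at HERA.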
From this example we see the advantage of collider experiments with respect to fixed-target ones in terms of available energy. \section{THE RELATIVISTIC HAMILTONIAN OF A PARTICLE IN AN EM FIELD} Let us consider a physical system in the presence of generalized forces which can be derived from a function $U=U(q_i, \dot q_i)$ (\emph{generalized potential}). It is possible to associate to such a system a lagrangian function $\mathcal{L}=T-U$, $T$ being the kinetic energy. The dynamics of the system is described by the Lagrange equations $$ \frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial\dot{q}_j}\right)- \frac{\partial \mathcal{L}}{\partial q_j} = 0 $$ The coordinates $(q_{i},\dot{q}_{i})$ may simply be the components of $(\vec r_j,\dot{\vec r}_j)$, but it can be convenient, or even necessary, to use other variables. The generalized forces are related to the function $U$ by~\cite{HG65} \begin{align*} F_\alpha & = \sum_i\frac{\partial q_i}{\partial r_\alpha} \left[-\frac{\partial U}{\partial q_i}+\frac{d}{dt} \frac{\partial U}{\partial \dot q_i}\right] \end{align*} which, if the coordinates $(\vec r, \dot{\vec r})$ are used ($\partial q_i/\partial r_\alpha$=$\delta_{i\alpha}$), gives \begin{align*} F_\alpha & = -\left(\nabla U\right)_\alpha+\frac{d}{dt} \frac{\partial U}{\partial v_\alpha} \hspace*{6mm} \rightarrow \hspace*{4mm} \vec F = -\nabla U + \frac{d}{dt} \nabla_v U \end{align*} Very often physics problems are described using the Lagrange or the Hamilton formalism. It is therefore useful to derive the relativistic lagrangian function for a particle in an EM field. The Hamilton principle states that among all paths connecting the point $(q_{i,1},\dot{q}_{i,1};t_1)$ to the point $(q_{i,2},\dot{q}_{i,2};t_2)$, the system actually follows the one for which the integral (action) $$ S = \int_{t_1}^{t_2} dt\, \mathcal{L}(q_i,\dot{q_i};t) $$ is stationary (an extremum). This principle specifies the dynamics as well as the Lagrange equations do.
First we find the lagrangian function for a \emph{free} particle, $\mathcal{L}_{free}$, by requiring the action to be a Lorentz invariant. We rewrite the action using the proper time $d\tau=dt/\gamma$ $$ S = \int_{t_1}^{t_2} dt\, \mathcal{L}_{free} = \int_{\tau_1}^{\tau_2} d\tau\,\gamma \mathcal{L}_{free} $$ For the action to be invariant, $\gamma \mathcal{L}_{free}$ must be an invariant, and therefore $\mathcal{L}_{free}$ must be proportional to $1/\gamma$. Let us then write $\mathcal{L}_{free}=\alpha/\gamma$. For a free particle the Lagrange equation becomes $$ \frac{d}{dt}\frac{\partial \mathcal{L}_{free}}{\partial v} = 0 $$ Inserting our expression for $\mathcal{L}_{free}$ we have $$ \frac{d}{dt}\frac{\partial}{\partial v}\frac{\alpha}{\gamma} = -\frac{1}{c^2}\frac{d}{dt}\alpha\gamma v = 0 $$ which reduces to the Newton law $d(m_0\gamma v)/dt=0$ if we set $\alpha=-m_0 c^2$. Therefore the relativistic lagrangian function of the free particle is $$ \mathcal{L}_{free} = -\frac{m_0c^2}{\gamma} $$ \noindent Now let us compute the lagrangian function related to the EM fields.
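Before doing so, the free-particle result can be sanity-checked numerically: differentiating $\mathcal{L}_{free}=-m_0c^2/\gamma$ with respect to $v$ (here by a central finite difference, with illustrative values) must reproduce the relativistic momentum $m_0\gamma v$:

```python
import math

m0, c = 1.0, 1.0                 # hypothetical rest mass, units with c = 1

def L_free(v):
    # -m0 c^2 / gamma = -m0 c^2 sqrt(1 - (v/c)^2)
    return -m0 * c**2 * math.sqrt(1 - (v / c) ** 2)

v = 0.6 * c
h = 1e-6

# Canonical momentum dL/dv by central finite difference
p_numeric = (L_free(v + h) - L_free(v - h)) / (2 * h)

# Expected relativistic momentum m0 gamma v
gamma = 1.0 / math.sqrt(1 - (v / c) ** 2)
p_expected = m0 * gamma * v
```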
The Lorentz force is $$ \vec{F} = q \vec{E} + q \vec{v} \times \vec{B} $$ The EM fields in terms of the scalar and vector potentials are (MKSA units) \begin{gather*} \vec{B} = \nabla \times \vec{A} \qquad\qquad \vec{E} =-\nabla\Phi-\frac{\partial \vec{A}}{\partial t} \end{gather*} Thus the Lorentz force can be written as $$ \vec{F} = q\Bigl(-\nabla\Phi-\frac{\partial \vec{A}}{\partial t}+ \vec{v}\times\nabla\times\vec{A} \Bigr) $$ We use the identity $$ \nabla(\vec{a} \cdot \vec{b})=\bigl(\vec{a} \cdot \nabla\bigr) \, \vec{b}+ \bigl(\vec{b} \cdot \nabla\bigr) \, \vec{a}+ \vec{a}\times \bigl(\nabla \times\vec{b}\bigr)+ \vec{b}\times \bigl(\nabla \times\vec{a}\bigr) $$ for transforming the term $\vec{v}\times\nabla\times\vec{A}$; since $\vec v$ does not depend on the position, the terms containing spatial derivatives of $\vec v$ vanish, and $$ \vec{v}\times\nabla\times\vec{A}=\nabla \, \bigl(\vec{A} \cdot \vec v\bigr) - \bigl(\vec{v} \cdot \nabla\bigr) \, \vec{A} $$ Thus the Lorentz force is \begin{gather*} \vec{F} = q\nabla\bigl(-\Phi + \vec{A} \cdot \vec{v} \bigr) -q\frac{\partial \vec{A}}{\partial t} -q\bigl(\vec{v} \cdot \nabla\bigr)\vec{A} = q\nabla\bigl(-\Phi + \vec{A} \cdot\vec{v}\bigr) -q\frac{d\vec{A}}{dt} \end{gather*} We recognize that the generalized potential is $U=q\Phi -q\vec{A}\cdot \vec{v}$. Indeed $$\frac{d}{dt} \nabla_v U = \frac{d}{dt} \nabla_v \bigl(q\Phi -q\vec{A}\cdot \vec{v}\bigr) = -q\frac{d\vec{A}}{dt} $$ because the EM potentials do not depend upon the particle velocity.
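The identity used above for $\vec v\times(\nabla\times\vec A)$, with $\vec v$ constant, can be verified numerically on an arbitrary smooth test field (a Python finite-difference sketch; the test field and all values are hypothetical):

```python
import math

h = 1e-5

def A(x, y, z):
    # An arbitrary smooth test field, chosen only for this check
    return [math.sin(y) * z, x * x, math.cos(x) * y]

def partial(i, comp, x, y, z):
    # Central finite difference of component `comp` of A along axis i
    p = [x, y, z]
    p[i] += h; up = A(*p)[comp]
    p[i] -= 2 * h; dn = A(*p)[comp]
    return (up - dn) / (2 * h)

x, y, z = 0.3, -0.7, 1.2
v = [0.2, -0.5, 0.4]              # constant velocity vector

# curl A by finite differences
curl = [partial(1, 2, x, y, z) - partial(2, 1, x, y, z),
        partial(2, 0, x, y, z) - partial(0, 2, x, y, z),
        partial(0, 1, x, y, z) - partial(1, 0, x, y, z)]

# Left side: v x (curl A)
lhs = [v[1]*curl[2] - v[2]*curl[1],
       v[2]*curl[0] - v[0]*curl[2],
       v[0]*curl[1] - v[1]*curl[0]]

# Right side: grad(A . v) (v held fixed) minus (v . grad) A
def AdotV(x, y, z):
    return sum(a * b for a, b in zip(A(x, y, z), v))

grad_Av = []
for i in range(3):
    p = [x, y, z]
    p[i] += h; up = AdotV(*p)
    p[i] -= 2 * h; dn = AdotV(*p)
    grad_Av.append((up - dn) / (2 * h))

rhs = [grad_Av[comp] - sum(v[i] * partial(i, comp, x, y, z) for i in range(3))
       for comp in range(3)]
```

The two sides agree to the accuracy of the finite differences, confirming the simplification used in the derivation.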
In conclusion, the Lorentz force for a particle in an EM field may be written in terms of a generalized potential $U$ as $$ \vec{F} = -\nabla U + \frac{d}{dt} \nabla_v U $$ with $$U=q \, \bigl(\Phi - \vec{A} \cdot \vec{v}\bigr) $$ and the particle lagrangian related to the EM field is $$ \mathcal{L}_{int} = -U = -q\Phi+q\vec{A}\cdot \vec{v}$$ The total lagrangian is obtained by adding the lagrangian of the free particle \begin{equation}\label{eliana-rel-eq18} \mathcal{L} = -\frac{m_0c^2}{\gamma}-q\Phi+q\vec{A}\cdot \vec{v} \end{equation} The hamiltonian function is related to the lagrangian function by \begin{equation}\label{eliana-rel-eq19a} \mathcal{H}(q_i,P_i) = \sum_iP_i\dot{q}_i-\mathcal{L} \end{equation} with $$P_i\equiv\frac{\partial \mathcal{L}}{\partial \dot{q}_i}=p_i+qA_i$$ The hamiltonian must be a function of $q_i$ and $P_i$, and therefore we must express $\dot q_i$ (or $v_i$) in terms of $P_i$. From $$ p_i= P_i-qA_i$$ and the relationship between momentum and energy $$c^2p^2=E^2-E_0^2=m_0^2\gamma^2 c^4-m_0^2c^4$$ we get $$ m_0^2\gamma^2 c^2-m_0^2 c^2=p^2=(\vec P-q\vec A)\cdot (\vec P-q\vec A)$$ and therefore $$ \vec{v}=\frac{\vec p}{m_0\gamma} = c\frac{\vec{P}-q\vec{A}}{\sqrt{m_0^2c^2+(\vec{P}-q\vec{A})^2}} $$ Inserting this expression in Eqs.~(\ref{eliana-rel-eq18}) and (\ref{eliana-rel-eq19a}) we finally find the hamiltonian function $$ \mathcal{H}(q_i,P_i) = \sum_iP_i\dot{q}_i-\mathcal{L}=c\sqrt{(\vec P-q\vec A)^2+m_0^2c^2} + q\Phi $$ \section{Some relationships} \begin{gather*} \gamma \equiv \frac{1}{\sqrt{1-(v/c)^2}} \qquad\qquad \beta \equiv \frac{v}{c}=\sqrt{1-\frac{1}{\gamma^2}} \\ m = \gamma m_0 \qquad \vec{p} = \gamma m_0 \vec{v} = \frac{m_0 \vec{v}}{\sqrt{1-(v/c)^2}} \qquad \left(\frac{v}{c}\right)^2 = \frac{p^2}{(m_0c)^2+p^2 } \\ E = mc^2 \qquad\qquad E_0 = m_0c^2 \qquad\qquad \frac{E}{E_0} = \frac{m_0\gamma c^2}{m_0 c^2} = \gamma \\ T = E-E_0 = m_0\gamma c^2-m_0c^2 = m_0c^2(\gamma -1) \\ \begin{aligned} E^2 & = (T+E_0)^2 = m^2c^4 = m_0^2\gamma^2 c^4 = \frac{m_0^2c^4}{1-(v/c)^2} = \frac{m_0^2c^4}{1-p^2/(m_0^2c^2+p^2)} \\ & = \frac{m_0^2c^4}{m_0^2c^2}(m_0^2c^2+p^2) = m_0^2c^4+c^2p^2 \end{aligned} \\ cp = c\gamma m_0 v = \frac{E}{E_0}c m_0 v = \frac{E}{m_0c^2}c m_0 v = \beta E \qquad\qquad cp\simeq E \quad \text{for} \quad \beta\rightarrow 1 \end{gather*} A table of relationships between $\beta$, $\gamma$, momentum and relativistic energy, together with their relative variations, may be found in~\cite{BGGR70}.
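The relationships collected above are mutually consistent, as a short numerical check shows (an arbitrary illustrative $\gamma$, units with $m_0=c=1$):

```python
import math

m0, c = 1.0, 1.0
gamma = 2.5                                   # hypothetical Lorentz factor

beta = math.sqrt(1 - 1 / gamma**2)
v = beta * c
p = m0 * gamma * v
E = gamma * m0 * c**2
E0 = m0 * c**2
T = E - E0

# Each entry pairs a quantity with the alternative expression listed above
checks = {
    "(v/c)^2":  (beta**2, p**2 / ((m0 * c) ** 2 + p**2)),
    "E^2":      (E**2, (m0 * c**2) ** 2 + (c * p) ** 2),
    "cp=betaE": (c * p, beta * E),
    "T":        (T, m0 * c**2 * (gamma - 1)),
}
```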
\section{Appendix} \input{samples/appendix/system_details} \input{samples/appendix/corpus_collection} \input{samples/appendix/crowd_experiment} \clearpage \subsection{Additional Figures} \input{figures/crowd_appendix} \input{figures/irscreen1} \input{figures/irscreen2} \clearpage \input{figures/prototype_1} \input{figures/prototype_2} \clearpage \subsection{Additional Tables} \input{tables/participants} \input{tables/field_study_historians} \input{tables/interviewees} \input{tables/libraryscience} \section{Introduction}\label{s:intro} \input{samples/introv2} \section{Related work}\label{s:related} \input{samples/related_work} \section{Needfinding study: defining practices and requirements}\label{s:needs} \input{samples/user_needs} \section{System}\label{s:system} \input{samples/system} \section{Expert interview study procedure}\label{s:usabilitystudy} \input{samples/expert_interview_procedure} \section{Expert interview study results}\label{s:qualresults} \input{samples/expert_interview_results} \section{Field study}\label{s:fieldstudy} \input{samples/field_study} \section{A quantitative comparison with keyword document search~tools}\label{s:crowdstudy} \input{samples/crowd_study} \section{Discussion}\label{s:discussion} \input{samples/discussion} \section{Limitations and future work}\label{s:limits_and_future} \input{samples/limitations_and_future} \section{Conclusion}\label{s:conclusion_cc} \input{samples/conclusion} \begin{acks} We thank Mahmood Jasim for detailed feedback throughout our research process. We also thank Su Lin Blodgett, Javier Burroni, Katherine A. Keith and Kalpesh Krishna for helpful discussions, suggestions and edits. Finally, we thank Alyx Burns, Lucy Cousins, Prachi Modi and Ali Sarvghad for offering feedback on the user interface. 
\end{acks} \bibliographystyle{ACM-Reference-Format} \subsection*{Field study: additional details} To help $H1$ answer their question using \textsc{ClioQuery}, we gathered a custom corpus of articles from \textit{The New York Times} (NYT). To gather the corpus, we searched for ``El Salvador'' on \textit{The New York Times} website \cite{nytwebsite}, and then automatically downloaded all query-matching articles published between 1980 and 1985 in the World News and Week in Review sections of the newspaper. We filtered downloaded articles to create a corpus of NYT articles containing the word ``Salvador,'' and we loaded this corpus into \textsc{ClioQuery}~for $H1$. To help $H2$ answer their research question, we similarly gathered a second custom corpus of articles by searching for ``astronaut'' on the \textit{New York Times} website \cite{nytwebsite}, and then automatically downloading all query-matching articles published between 1980 and 1985. We then similarly filtered the documents to ensure that all query-matching articles mentioned ``astronaut'' and loaded the corpus into \textsc{ClioQuery}~for $H2$. \subsection*{The \textsc{ClioQuery}~system: additional details} \subsubsection*{Implementation details} \textsc{ClioQuery}~is a web application written in Python 3, using the Flask and React libraries.\footnote{\url{https://flask.palletsprojects.com/en/1.1.x/} and \url{https://reactjs.org/}} The text simplification methods in the paper use Stanford CoreNLP \cite{corenlppipeline} for tokenization, dependency parsing, and part-of-speech tagging.\footnote{Eisenstein \cite{eisenstein2019introduction} offers a detailed introduction to these NLP techniques.} \textsc{ClioQuery}'s relationship span extraction method also employs logistic regression; we use the implementation from Scikit-learn \cite{Pedregosa:2011:SML:1953048.2078195}.
In the future, rewriting our Python-based prototype~in a faster language like Java or C would reduce our system's latency, helping \textsc{ClioQuery}~scale to larger corpora. It might also be possible to further improve performance by employing time- and space-efficient IR methods for indexing and retrieving the locations of query words in documents \cite{irbook}. \subsubsection*{Time Series View: additional details} \textsc{ClioQuery}'s {Time Series View} shows a single rug point (small vertical line) for each document mentioning the query. These markings both help explain aggregated count statistics encoded in the time series plot (more rug points mean a higher annual count), and help link the Time Series View with the Document Feed. If a user hovers over a rug point, \textsc{ClioQuery}~displays the headline of the corresponding news story using a tooltip; if the user clicks a rug point, \textsc{ClioQuery}~updates so that the story is displayed in the Document Feed and in the Document Viewer. When a user hovers over some year in the Time Series View, \textsc{ClioQuery}~displays a tooltip showing the total count of documents containing the query for that year. \subsubsection*{Default system behaviors} If a user has not yet entered a query, \textsc{ClioQuery}'s time series plot simply shows the overall counts of documents by year across the entire corpus, shown with a neutral black line. In this case, the Document Feed also shows all documents in the corpus. Moreover, when filter-by-date is not used, \textsc{ClioQuery}~shows documents from the time span of the corpus. \subsubsection*{Choosing colors} We {chose colors for \textsc{ClioQuery}} using Colorbrewer \cite{colorbrewer}, a common resource, which offers colorblind safe and print-friendly palettes. Hall and Hanna \cite{HallAndHanna} test how foreground and background color affects how people read, retain, and experience text on screen.
Our study focuses on testing the utility of in-text highlighting and text simplification for expert social researchers; future work might test the effect of varying the foreground or background color. \subsubsection*{Handling token gaps during clause deletion} In some cases, there may be gaps between tokens in simplified mentions, where tokens have been removed from the middle of a sentence. (These are shown with ellipses in the Document Feed.) In these cases, in performing {automatic in-text highlighting to link the Document Feed and Document Viewer}, we highlight the span in the Document Viewer that begins with the first and ends with the last token of the corresponding simplified mention, shown in the Document Feed. \subsubsection*{Computing tf-idf scores during iterative clause deletion} To compute tf-idf scores during iterative clause deletion, we assign each word in each possible output candidate shortening a word-level tf-idf score, and average the word-level tf-idf scores of all words in each possible candidate shortening to compute an overall, sentence-level tf-idf score. We assign each word a tf score equal to the total occurrences of the word among all documents that contain $Q$, and an idf score equal to 1 divided by the count of documents containing the word across the corpus. We then multiply each word's tf score by its idf score to get a word-level tf-idf score. We then select the candidate shortening with the highest overall tf-idf score for display in the Document Feed. \subsubsection*{Choosing among possible sentence shortening methods} In the System section, we describe three different sentence shortening techniques, which are applied in the \textsc{ClioQuery}~interface. Below, we describe how \textsc{ClioQuery}~chooses to apply the three different methods.
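As a concrete illustration of the candidate-scoring scheme described above, consider the following sketch. Whitespace tokenization and the helper names are our own simplifying assumptions here, not the actual \textsc{ClioQuery}~implementation:

```python
from collections import Counter

def best_shortening(candidates, docs_with_q, corpus):
    """Pick the candidate shortening whose words have the highest mean
    tf-idf score. tf(w) = total occurrences of w among documents that
    contain the query; idf(w) = 1 / document frequency of w in the corpus."""
    tf = Counter(w for doc in docs_with_q for w in doc.split())
    df = Counter()
    for doc in corpus:
        df.update(set(doc.split()))

    def word_score(w):
        return tf[w] / df[w] if df[w] else 0.0

    def sentence_score(sent):
        toks = sent.split()
        # Mean word-level tf-idf = sentence-level tf-idf score.
        return sum(word_score(w) for w in toks) / len(toks)

    return max(candidates, key=sentence_score)
```

Note that common words (appearing in many documents) receive low idf weight, so candidates retaining query-associated content words score higher.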
After a user enters a query $Q$, for each document mentioning $Q$, \textsc{ClioQuery}'s Document Feed displays the first sentence within the document mentioning $Q$ that can be shortened via query-focused clause deletion. If no such sentence exists, \textsc{ClioQuery}~resorts to shortening the first sentence mentioning $Q$ via {character windowing}. (Character windowing is only used as a last resort because it does not attempt to create well-formed output containing salient words from the input.) In cases when a user has entered both a query and subquery, for each document mentioning the query or subquery, \textsc{ClioQuery}~will attempt to display the first sentence in the document that can be shortened via relationship span extraction. This is because we assume the user is interested in the relationship between the query and subquery. If there is no sentence that can be shortened via relationship span extraction, \textsc{ClioQuery}~will display the first sentence that can be shortened via query-focused clause deletion. If no sentence can be shortened via clause deletion, it will resort to shortening the first sentence mentioning the query or subquery via character windowing. \textsc{ClioQuery}~also allows the user to click ``expand'' to see all sentences mentioning the query within the document, as described in the System section. In this case, \textsc{ClioQuery}~will first attempt to shorten each sentence mentioning $Q$ via query-focused clause deletion, before resorting to shortening the sentence with character windowing. If the user has also set a subquery (in addition to $Q$), \textsc{ClioQuery}~will first try to shorten each sentence mentioning the query and subquery using relationship span extraction (and then attempt clause deletion, and character windowing).
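The selection logic above amounts to a priority-ordered fallback chain. A minimal sketch follows; the callable interface for the three shorteners is a hypothetical simplification of our implementation:

```python
def shorten_first_match(sentences, query, subquery=None,
                        span_extract=None, clause_delete=None,
                        char_window=None):
    """Return the first sentence shortened by the highest-priority
    applicable method: relationship span extraction (only when a
    subquery is set), then query-focused clause deletion, then
    character windowing as a last resort. Each shortener is a callable
    returning a shortened string, or None if it cannot shorten."""
    methods = []
    if subquery is not None and span_extract is not None:
        methods.append(span_extract)   # 1. relationship span extraction
    if clause_delete is not None:
        methods.append(clause_delete)  # 2. query-focused clause deletion
    if char_window is not None:
        methods.append(char_window)    # 3. character windowing (last resort)
    for method in methods:
        for sent in sentences:
            if query in sent:
                short = method(sent)
                if short is not None:
                    return short
    return None
```

Because each method is tried over all sentences before falling back, a document is only shortened by character windowing when no sentence admits a better-formed shortening.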
\subsection*{Quantitative comparison study: additional details} \subsubsection*{Additional details regarding creation of reading comprehension questions} We used a semi-automated procedure to create reading comprehension questions for our quantitative crowd study. Specifically, we first collected all editorials from The New York Times Annotated Corpus \cite{SandhausNYT} which included the words ``Zimbabwe'' and ``Mugabe''. We then used the TfidfVectorizer class from scikit-learn \cite{Pedregosa:2011:SML:1953048.2078195} with default settings to construct tf-idf vectors for all 1,689 sentences in the editorials. We also similarly constructed tf-idf vectors for all 597 sentences from the Wikipedia page on Robert Mugabe \cite{wikimugabe}. We then computed the cosine similarity of each sentence pair in the Cartesian product of Wikipedia and \textit{New York Times} sentences. We manually reviewed the 200 sentence pairs with the highest cosine similarities, and manually labeled 37 total sentences from \textit{New York Times} editorials which reported a fact described in some sentence from Wikipedia. This process identified 37 facts about Mugabe from Wikipedia reported in editorials in \textit{The New York Times}. We selected 8 of these facts to create reading comprehension questions for our task. \subsubsection*{Additional details regarding tuning of IR baseline} We implemented the IR baseline using Whoosh, an open-source Python search engine. Like many search engines, Whoosh shows small document snippets from ranked documents on the search engine results page (Figure \ref{f:appendix_ir_serp}). To encourage fair comparison between Whoosh and \textsc{ClioQuery}, we tuned Whoosh so that document snippets contained roughly as much text as the shortened sentences in the \textsc{ClioQuery}~Document Feed. Specifically, Whoosh allows snippet customization by setting the \texttt{maxchars} and \texttt{surround} parameters in its \texttt{Highlighter} module. 
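Tuning these two snippet parameters can be framed as a constrained grid search. The following sketch is generic: \texttt{mean\_snippet\_len} is a hypothetical stand-in (with a purely illustrative toy model) for measuring average Whoosh snippet length at given settings, so it will not reproduce our reported final parameter values:

```python
def tune_snippet_params(mean_snippet_len, lo=10, hi=100, budget=90):
    """Grid search over (maxchars, surround) in [lo, hi], maximizing
    mean snippet length subject to the mean staying <= budget chars.
    mean_snippet_len(maxchars, surround) -> float is supplied by the
    caller (e.g., measured over real search-engine snippets)."""
    best, best_len = None, -1.0
    for maxchars in range(lo, hi + 1):
        for surround in range(lo, hi + 1):
            mean_len = mean_snippet_len(maxchars, surround)
            if mean_len <= budget and mean_len > best_len:
                best, best_len = (maxchars, surround), mean_len
    return best, best_len

# Toy stand-in, purely illustrative (not a model of Whoosh behavior):
toy = lambda maxchars, surround: min(maxchars, 3 * surround) + 5
```

The budget of 90 characters corresponds to the longest-possible shortened sentence in the \textsc{ClioQuery}~Document Feed.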
We set these parameters by performing a grid search over all possible values from 10 to 100 (for each parameter), in order to maximize the average number of characters per Whoosh document snippet, under the constraint that the average was less than or equal to 90 characters (the length of the longest-possible shortened sentence in the \textsc{ClioQuery}~Document Feed). The final setting for the \texttt{surround} parameter was 27 characters and the final setting for the \texttt{maxchars} parameter was 10 characters. Using these settings, we observe a mean snippet length of exactly 90 characters using the IR system on the crowd task. Beyond tuning these parameters, we use default settings for the Whoosh search engine. \subsubsection*{Additional details regarding the crowd study pretest} Before beginning the main task in our crowd study, participants in each condition used their interface to complete a three-minute pretest using a small corpus of six \textit{New York Times} editorials mentioning ``Iraq''. The pretest was very similar to the main task; each interface was hard-coded to use the query ``Falluja'' and participants were instructed to ``find and remember everything the \textit{New York Times} wrote about Falluja'' using their tool. After participants typed this exact phrase into a text box to confirm they understood the instructions, they conducted research using their assigned interface. After 3 minutes, participants were then presented with a screen with four facts about U.S.\ involvement in Falluja (included in supplemental materials), and asked to identify which facts were reported in the six articles. Because only one fact from the list was reported in the articles, to get a perfect score of 4 out of 4 on the pretest, workers had to both correctly identify the reported fact, and refrain from guessing any of the other three facts. The pretest was designed to be very easy for attentive workers.
\subsubsection*{Additional details regarding data collection phases for the crowd task} Data collection for the crowd task proceeded in two phases: an initial pilot phase and a main data collection phase. After the small pilot, we added two training screens for \textsc{ClioQuery}~participants (shown in supplemental materials) to help \textsc{ClioQuery}~users gain practice using unfamiliar features. We also fixed a bug in the pilot in which \textsc{ClioQuery}~users were shown an extra two editorials. We emphasize that these two editorials did not contain any facts about Mugabe which could be used to answer the reading comprehension questions, and also note that the two extra editorials would have made the task harder for \textsc{ClioQuery}~participants (because they would have had to read extra text during the task, which was not relevant to the reading comprehension questions). Finally, after the pilot, we adjusted the random assignment mechanism so that participants were assigned to conditions in an alternating fashion following an initial random draw (i.e.\ first \textsc{ClioQuery}, then IR, then \textsc{ClioQuery}...). In the pilot, participants were assigned to conditions at random when they loaded the first screen in the task. \subsubsection*{Additional details regarding task payment} Because we had trouble recruiting qualified Masters workers for our lengthy and complex task, we increased payment during data collection. The first 18 participants were paid \$2.50 to complete the pilot. After the pilot, we increased payment to \$3.00 and collected data from 75 more participants. Because data collection was still very slow (e.g.\ 10 workers over a 24-hour period), we further increased payment to \$4.00 for the task and collected data from 26 more workers. Finally, we increased payment to \$5.00 for the task. When only 2 workers signed up over a half-day period at the \$5.00 rate, we ended data collection.
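The sentence-pair matching used earlier in this subsection for question creation (tf-idf vectors plus cosine similarity over the Cartesian product of sentences) can be illustrated without dependencies. Raw term-count cosine below is a simplified stand-in for the scikit-learn \texttt{TfidfVectorizer} weighting used in the actual procedure:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sentences over raw term counts
    (a simplified stand-in for tf-idf-weighted vectors)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_pairs(news_sents, wiki_sents, k=200):
    """Rank every sentence pair in the Cartesian product by similarity
    and keep the top k for manual review."""
    pairs = [(cosine(n, w), n, w) for n in news_sents for w in wiki_sents]
    pairs.sort(key=lambda t: t[0], reverse=True)
    return pairs[:k]
```

High-similarity pairs surface candidate facts reported in both sources, which are then verified by hand.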
\subsection{Formalizing historians' current practice of mention gathering and analysis}\label{s:needs_formal_problem} In Section \ref{s:intro}, we document and informally describe historians' current practice of mention gathering and mention analysis. We now define this work more formally. During mention gathering, a historian investigates a unigram query $Q$~in a newspaper archive. $Q$~is a word type and each mention of the query, \specificmention, is a word token. For instance, a historian might investigate $Q$=``Falluja'' by gathering specific mentions of the word ``Falluja'' in individual documents, published on particular dates. Using $d$ to refer to the text of a specific document and $t$ to refer to its publication date, we can formally define mention gathering as the task of finding all \mentions~in an archive \archive=$\{(d_1, t_1), (d_2, t_2), \ldots, (d_N, t_N)\}$, which is an unordered set of $N$ timestamped documents. The task of mention analysis consists of manually reviewing one or more query mentions in context. We use the notation $\mathcal{C}(i)$~to refer to a specific passage showing a particular query mention in context, where $\mathcal{C}(i)$~is a token span. For instance, if ``Falluja'' occurs in document $d$, then $\mathcal{C}(i)$~might be a paragraph from $d$ that contains the string ``Falluja.'' We denote this using $\mathcal{C}_{\text{paragraph}}(i)$. Note that different systems may define $\mathcal{C}(i)$~in different ways. For example, keyword document search~systems return whole documents. Thus a keyword document search~system defines $\mathcal{C}(i)$~as the whole document $d$ containing \specificmention. We denote this using $\mathcal{C}_{\text{full doc.}}(i)$.
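Under these definitions, mention gathering reduces to locating every token matching $Q$ in the archive. A minimal sketch follows; whitespace tokenization and naive punctuation stripping are simplifying assumptions of this illustration:

```python
def gather_mentions(query, archive):
    """Find all mentions i of a unigram query Q in an archive
    {(d_1, t_1), ..., (d_N, t_N)} of timestamped documents.
    Each mention is returned as a (doc_index, token_index) pair."""
    mentions = []
    for doc_idx, (text, _timestamp) in enumerate(archive):
        for tok_idx, token in enumerate(text.split()):
            # Case-insensitive match of the word type Q against each token.
            if token.strip('.,;:!?"\'').lower() == query.lower():
                mentions.append((doc_idx, tok_idx))
    return mentions
```

Each returned pair identifies one word token \specificmention; mention analysis then reviews some context span $\mathcal{C}(i)$ around each pair.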
Having now formally defined historians' current practice of mention gathering and mention analysis, and explained the limitations of baseline tools for these tasks (Section \ref{s:intro}), we now describe an investigation into the needs of historians (Section \ref{s:needs_protocol}) which informs our design requirements for a text analytics system (Section \ref{s:needs_results}). \subsection{Observing and analyzing needs from heterogeneous data}\label{s:needs_protocol} We identified user needs by collecting and analyzing two different sources of data, described below. \subsubsection{Observing needs from existing literature} First, we studied historians' needs by reviewing a large literature from history, library science, and information science devoted to the systematic study of the digital and non-digital information-seeking behavior of historians. To identify this literature, we followed citations starting from Allen and Sieczkiewicz's paper ``How Historians use Historical Newspapers'' \cite{allen}, which we first found via a search on Google Scholar. In total, we reviewed and took notes on six prior studies describing surveys and interviews with 1002 historians (shown in a table in the Appendix). We consider our synthesis of this prior literature to be part of the contribution of our work, as we translate these prior descriptive findings (focused on how historians find information) into actionable design requirements for an interface. The studies we review are largely unknown in computer science disciplines like NLP, IR, VIS, and HCI. 
\subsubsection{Observing needs from interviews and feedback on prototypes} We additionally supplemented, contextualized, and validated existing studies by conducting five of our own one-on-one needfinding interviews with five interviewees (I1 to I5) on Zoom video chat over a period of three months.\footnote{Our needfinding interviews, expert interviews, and field study (Sections \ref{s:needs_protocol}, \ref{s:usabilitystudy}, and \ref{s:fieldstudy} respectively) were approved as exempt from review by our institution's human subjects IRB office. All participants received a \$50 Amazon gift card for their time.} The Appendix describes the backgrounds of interviewees in detail. All but one interview was 60 minutes long. (We met with I4~for 30 minutes, due to limited availability.) Interviews proceeded in two phases. During \textit{Phase A}, in the initial exploratory stage of our work, one researcher from our group interviewed I2, I4, and I5, whom we recruited through convenience sampling \cite{given_sage_2008}. The interviewer asked open-ended, exploratory questions about needs and practices, and solicited feedback on early prototypes. The researcher also took detailed notes. Later, when we better understood how historians find information in archives, we began \textit{Phase B}. During this phase, the same researcher conducted two one-on-one, video-recorded, semi-structured interviews with I1~and I3, who also provided feedback on later prototypes. We recruited I1~and I3~via email outreach.\footnote{ We emailed five PhD students in history at a nearby university. Each student expressed interest in media, archives or science in describing their work on their department's web page. We also emailed all members of the editorial board at a history journal. We do not list the name of the university or journal to ensure interviewees remain anonymous. } The researcher again took detailed notes. We include the interview script in supplemental material.
In total, each of the five interviewees across \textit{Phase A} and \textit{Phase B} reviewed a different iterative prototype. In the interest of space, we only present feedback on what we consider to be the two most important prototypes, shown in the Appendix. \subsubsection{Analyzing observations of historians' needs} Following data collection, one researcher qualitatively analyzed and organized notes and transcripts to articulate four overall needs, and translate these needs into four corresponding design requirements (described in Section \ref{s:needs_results}). In general, we found that feedback from needfinding interviews and feedback on early prototypes was very consistent with findings from prior work. Nevertheless, our own needfinding interviews helped to contextualize and translate prior descriptive findings on historians' information-seeking behaviors into actionable guidelines for system design. \subsection{Needfinding results and design requirements}\label{s:needs_results} Following data collection and data analysis, we defined four high-level design requirements (R1-R4), based on four needs. We describe each requirement below. \subsubsection{\textbf{R1: A system should show a navigable overview of change over time}}\label{s:exploration} Prior study of the information-seeking behavior of historians emphasizes the theoretical importance of \textit{``the dimension of time''} \cite{Case} in historical research, and also emphasizes historians' practical need to perform \textit{``searching and narrowing by date''} \cite{allen}. In our needfinding interviews, historians and archivists also stressed the theoretical and practical importance of time-based investigation. \textit{``Time is always a historian's first move},'' I3~explained. \textit{``It's about change over time as the fundamental thing.''} I5~noted: \textit{``Historians are often trying to find articles within a specific date range and about a specific topic} ... 
\textit{research often starts with a keyword and a date range and a source or list of sources}.'' Because historical research involves studying change across time, I2~explained how time series plots showing the frequency of query words by time period (see Figure \ref{f:time_series_plus_family}) are often useful for gaining a temporal overview of a corpus. \textit{``Bar charts [or line charts] by time are really helpful},'' I2~explained, \textit{``because news has these peaks where a topic becomes important and then dies down.''} Such charts \textit{``help people trace an idea or series of ideas or terminology over time}.'' Observing the centrality of temporal analysis in historical research, we assert a design requirement (R1): a system designed for historical mention gathering and analysis should show some kind of navigable, visual overview of query mentions \mentions~across the time span of a corpus. Showing such a visual ``overview first'' \cite{HeerShneiderman, The_Eyes_Have_It} is a known best practice in visual analytics. \subsubsection{\textbf{R2: A system should help people comprehensively review all query mentions in a corpus}}\label{s:needs_comprehensive} Prior work often emphasizes the importance of gathering comprehensive evidence during historical research. \textit{``Comprehensiveness is clearly the highest priority in searching a database,''} one study concludes \cite{DaltonCharnigo}, explaining that 70\% of 278 survey respondents would prefer to spend time filtering out irrelevant material than run the risk that relevant material \textit{``might fall through the cracks''} in a limited search. Nevertheless, some historians in prior work acknowledge that truly comprehensive search is an impossible goal. \textit{``I never think I'm going to be able to read every record,''} one reports \cite{DuffJohnson}. 
\textit{``I'm always creating priority orders of what I think is going to be most useful.''} Our interviewees similarly emphasized the importance of comprehensiveness in gathering and evaluating historical evidence. \textit{``The most important thing for historical researchers is to be confident that they are being exhaustive,''} said~I4. \textit{``I want to know I can be confident I have been able to access everything relevant. Did my search cast a wide enough net?''} I4~also praised an early prototype (see Appendix) for displaying a very large number of potentially relevant passages. \textit{``The biggest fear is Type II error,''} he explained. \textit{``In doing searches, am I missing something that is crucial but I don't know because I never looked?''} Similarly, I5~explained that it is important to \textit{``be as completist as possible''} in historical research. \textit{``The thing about historians....they want to be as comprehensive as possible with their topics}.'' Citing the importance of comprehensiveness, I5~expressed deep skepticism (see Appendix) about an early prototype which omitted some information to form a summary. However, like some interviewees in prior work, I2~pointed out that truly comprehensive investigation may not be possible. \textit{``Ultimately,''} she noted, \textit{``there is a limit in terms of time and money for any given project.''} We translate the need for comprehensive archival search into a second design requirement {(R2): a system for mention gathering and analysis should help people comprehensively review all query mentions in a corpus.} Expressed more formally, a system should help historians easily navigate to and review every single \specificmention~in an archive. \subsubsection{\textbf{R3: A system should present as much context as possible for any given record in an archive}}\label{s:needs_context} Prior work emphasizes the importance of context in historical research. 
\textit{``Building context is the {\normalfont sine qua non [indispensable condition]} of historical research,''} Duff and Johnson write \cite{DuffJohnson}. \textit{``Without it historians are unable to understand or interpret the events or activities they are examining.''} In a separate study, another historian explains, \textit{``You can't have the specific facts without the context ... Where an article is in the paper, and what surrounds it, matters.''} During our own needfinding interviews, historians and archivists also repeatedly emphasized the importance of contextual information in archive news search. The job of a historian is to \textit{``put facts in context,''} I5~said. A historian will need to \textit{``contextualize''} facts from a periodical by examining its publishers and audience. Similarly, I4~noted that \textit{``as an archivist I do research to give context to collections.''} Finally, I2~stressed the importance of contextualizing evidence in archive search software. \textit{``Who does the New York Times have writing this?''} I2~asked, while examining an early \textsc{ClioQuery}~prototype. \textit{``Where does each sentence occur in the document? What section of the newspaper? You need to show more context.''} Observing the importance of context in historical research, we assert a design requirement (R3): a system for mention gathering and analysis should show each query mention amid as much surrounding context as possible. Formally, R3~implies that $\mathcal{C}(i)$~should be as large as possible (in token length) for each \specificmention. However, R3~must be balanced against other requirements, which impose competing demands. In particular, because screen space and human attention are limited resources, if $\mathcal{C}(i)$~is large (e.g., a full document) this will make it harder for a historian to comprehensively review all mentions in a corpus. Balancing a need for context and comprehensiveness is a challenge in designing for historians. 
\subsubsection{\textbf{R4: A system should be as transparent, trustworthy and neutral as possible}}\label{s:needs_trust} Prior studies of the information-seeking behavior of historians underscore the need for trustworthy tools that transparently present digital archival materials in a neutral manner. For instance, in one study \cite{DuffCraigCherry}, a historian reports that they prefer original sources because they can trust such sources to be \textit{``accurate, undistorted and complete.''} Similarly, in another study \cite{Chassanoff}, another historian explains that direct \textit{``access to the original image of the primary source rather than to a transcribed version''} is important, \textit{``especially when there is no description of what rules they used to transcribe documents.''} This historian reports that they do not trust and can not interpret electronic transcription, and thus must rely on direct observation of digitized images to draw conclusions. In our interviews, historians and archivists similarly described the importance of transparently presenting digitized archives in a neutral manner. \textit{``When I see something that is trying to decide or curate for me that is a worry. That is a red flag},'' I4~explained. Similarly, I2~added, \textit{``I think the system should be as transparent as possible. I need to distinguish between what some primary source is saying versus what the computer thinks a primary source is saying.''} I5~also cited the importance of transparency and trust in expressing deep skepticism about an early prototype, shown in the Appendix.\footnote{Even as some interviewees stressed the importance of unbiased, transparent and trustworthy presentation of archive evidence, I3~reported that, in practice, historical researchers do trust ranked results from keyword document search systems. 
She explained that many historians might not realize that black-box document rankings from a~keyword document search~tool will affect conclusions from archival research.} Because historians frequently expressed commitments to direct and neutral observation of archival evidence, we assert a design requirement {(R4): search software should show evidence in a maximally transparent and trustworthy manner.} One consequence of R4 is that systems for mention gathering and analysis should not attempt to create a curated summary of the most ``important'' \mentions~in an archive (see Section \ref{s:discussion_NLP}). \subsection{High-level system description}\label{s:highlevell} The \textsc{ClioQuery}~web interface presents results from a Boolean search \cite[Chapter 1]{irbook}, which returns the unranked set of documents containing one or more mentions~of a unigram query term $Q$~in an archive \archive. (This notation and terminology is defined in Section \ref{s:needs_formal_problem}; Section \ref{s:limits_and_future} discusses possible extensions to exact string matching.) When a user enters $Q$ into the search bar at the top of the interface (Figure \ref{f:system_cc}A), \textsc{ClioQuery}~identifies all documents containing $Q$ and presents the documents using three linked views \cite{BujaLinking}. First, \textsc{ClioQuery}~includes a \textbf{Time Series View}, showing a graphical overview of the count of documents mentioning the query by year (Figure \ref{f:system_cc}D). Second, \textsc{ClioQuery}~includes a \textbf{Document Feed} view, presenting all query mentions from across all documents in a single scrollable window (Figure \ref{f:system_cc}H). Finally, \textsc{ClioQuery}~includes a \textbf{Document Viewer}, which shows the full text of a single document from the corpus, with individual query mentions from the document highlighted in context (Figure \ref{f:system_cc}I). 
\textsc{ClioQuery}~also includes a \textbf{filtering system} to help users narrow the set of query mentions shown in the interface (Figure \ref{f:system_cc}B, C and F), and a \textbf{history tracking system} to automatically monitor and display reading history during comprehensive search (Figure \ref{f:system_cc}G). All features in the interface also follow a coordinated \textbf{color coding} scheme. For instance, the user's query word is always displayed using the purple query color {\includegraphics[scale=0.06]{figures/CCPurple.pdf}} in the Document Feed and Document Viewer, and the Time Series View also uses a {purple} line to represent query frequency (Figure \ref{f:system_cc}D). We consider the color-coded bolding of query terms to be one form of \textbf{automatic in-text highlighting} \cite{Handler2016VisualizingTM} throughout the \textsc{ClioQuery}~interface. Automatic in-text highlighting draws user attention to some word, phrase, or passage in text by automatically setting the text's foreground color, background color or decoration (e.g., bolding). The Appendix describes our process for selecting a colorblind safe and print-friendly palette. It also provides additional engineering details about our implementation of \textsc{ClioQuery}. \subsection{Overview first: a Time Series View for temporal context (R1)}\label{s:system_ts} Because change across time is central to historical research (R1),~\textsc{ClioQuery}~presents a navigable Time Series View (Figure \ref{f:system_cc}D) showing query frequency by year across a corpus. The component's x-axis represents time (binned by year), and its y-axis represents the annual count of all documents containing the query $Q$~published during a given year. If a user also enters a subquery (Section \ref{s:dont_rank_filter}), \textsc{ClioQuery}'s Time Series View also shows the annual count of documents mentioning both the query and subquery. 
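The annual counts behind the Time Series View amount to a simple aggregation over the archive. The sketch below assumes whitespace tokenization and ``YYYY-\ldots'' timestamp strings, which are simplifications of our implementation:

```python
from collections import Counter

def annual_counts(archive, query, subquery=None):
    """Count documents mentioning the query per year and, if a subquery
    is set, documents mentioning both query and subquery -- the two
    quantities plotted as lines in the Time Series View."""
    q_counts, both_counts = Counter(), Counter()
    for text, timestamp in archive:
        year = int(timestamp[:4])
        tokens = {t.strip('.,;:!?"\'').lower() for t in text.split()}
        if query.lower() in tokens:
            q_counts[year] += 1
            if subquery is not None and subquery.lower() in tokens:
                both_counts[year] += 1
    return q_counts, both_counts
```

Documents counted toward the subquery line are a subset of those counted toward the query line, matching the nested semantics described above.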
In Figure \ref{f:system_cc}D, \textsc{ClioQuery}~displays one line showing the count of documents mentioning the query term in the purple query color, and another line showing the count of documents mentioning the subquery term (as well as the query term) in the green subquery color \includegraphics[scale=0.06]{figures/CCGreen.pdf}. \textsc{ClioQuery}'s time series plot also shows a single rug point (small vertical line) for each document mentioning the query, just beneath the temporal x-axis (Figure \ref{f:system_cc}E). Such rug points allow the user to easily preview and navigate to individual news stories; we describe these possible interactions in detail in the Appendix. \subsection{A Document Feed for comprehensive search (R2)}\label{s:feed_and_viewer} During needfinding, we found that experts often emphasized the importance of gathering comprehensive evidence (Section \ref{s:needs_comprehensive}), and also often search for specific query terms in news archives (Section \ref{s:intro}). We thus designed \textsc{ClioQuery}'s {Document Feed} to help such users easily gather and analyze the comprehensive set of every single mention of a query term in a collection of news stories (R2). We assume the user is working with a small corpus (or small set of documents from a larger corpus), where such comprehensive review is possible. This assumption is appropriate for our use case; for instance, Black reviews roughly 500 documents to analyze the racial history of ``watermelon,'' \cite{watermelon} and MacNamara reviews 605 documents to analyze ``race suicide'' \cite{racesuicide}. After a user issues a query $Q$, \textsc{ClioQuery}~populates the Document Feed to show a comprehensive, skimmable summary that includes every single \specificmention~across the corpus (Figure \ref{f:system_cc}J).
To create the summary, \textsc{ClioQuery}~selects each sentence containing some \specificmention, and then automatically simplifies the sentence (without removing query words) so that historians can quickly read over the mention $i$ in context. We use the notation $\mathcal{C}_{s^{\prime}}(i)$~to refer to a specific query mention \specificmention~shown within the context of a sentence $s$ that is simplified to $s^\prime$ for display in the Document Feed. Section \ref{s:text_simplification_overview} provides details on how \textsc{ClioQuery}~shortens sentences to create~ $\mathcal{C}_{s^{\prime}}(i)$. To the best of our knowledge, such text simplification is new to the literature on text analytics. In presenting each~ $\mathcal{C}_{s^{\prime}}(i)$, \textsc{ClioQuery}~bolds and highlights the query word using the purple query color so that each $\mathcal{C}_{s^{\prime}}(i)$~is shown in a visually consistent format designed for skimming (Figure \ref{f:compressioncartoon}). Note that by default, \textsc{ClioQuery}~displays a single~ $\mathcal{C}_{s^{\prime}}(i)$~from each document beneath the document's headline (the Appendix describes how the sentence is chosen). To see each \specificmention~from a document within a simplified sentence, the user can click an ``expand'' button (Figure \ref{f:field_study_loop}). The user can also click a star to bookmark a document in the red bookmark color \includegraphics[scale=0.06]{figures/CCRed.pdf}. \input{figures/lex_summary_overview} \input{tables/reading_volume} \textsc{ClioQuery}'s Document Feed is designed to directly address two of the limitations of baseline keyword document search~systems, described in Section \ref{s:intro}. First, by summarizing documents mentioning $Q$, \textsc{ClioQuery}~is able to fit more query mentions in limited screen space, reducing the need for context switching across individual windows or tabs.
For instance, in Figure \ref{f:system_cc}, the Document Feed saves the user from having to open 239 separate documents during comprehensive review. Second, by selecting sentences mentioning the query from documents, and removing tokens from those sentences, \textsc{ClioQuery}~reduces the user's reading burden. For instance, in Figure \ref{f:system_cc}, the user has queried for documents mentioning Reagan and Duarte (in this example, Reagan is a subquery; subqueries are described in detail in Section \ref{s:dont_rank_filter}). By selecting and simplifying sentences, \textsc{ClioQuery}~removes 87.0\% of the tokens in all documents mentioning these two words (Table \ref{t:tokens}). We include a detailed description of \textsc{ClioQuery}'s text simplification techniques in Section \ref{s:simplification}. \subsection{A linked Document Viewer for necessary context (R3, R4)}\label{s:documentviewer} Because historians need to evaluate evidence in context without black-box algorithmic influence (R3, R4), we anticipated that \textsc{ClioQuery}~users would need to quickly review each \specificmention~from the Document Feed within the context of full underlying news articles. Therefore \textsc{ClioQuery}'s Document Feed is closely linked with a corresponding \textbf{Document Viewer}, which shows the complete text of a single selected document from the corpus (Figure \ref{f:system_cc}I). The Document Viewer satisfies R3~because it shows each \specificmention~within the context of a full document, denoted $\mathcal{C}_{\text{full doc.}}(i)$. After a user clicks a shortened sentence $\mathcal{C}_{s^{\prime}}(i)$~in the Document Feed, the Document Viewer updates to show the entire document containing $\mathcal{C}_{s^{\prime}}(i)$. \textsc{ClioQuery}~also automatically scrolls the document so that the (just clicked) simplified sentence is visible on screen. 
\textsc{ClioQuery}~also makes it easy for users to locate simplified sentences by using {automatic in-text highlighting to further link the Document Feed and Document Viewer}. Each simplified sentence $\mathcal{C}_{s^{\prime}}(i)$~from the Document Feed is shown with yellow background highlighting \includegraphics[scale=0.06]{figures/CCHighlight.pdf} in the document shown in the Document Viewer. Additionally, if a user hovers over a sentence in the Document Feed or Document Viewer, the sentence is highlighted in a dark yellow hover color \includegraphics[scale=0.06]{figures/CCHover.pdf} in each component (shown in Figure \ref{f:system_cc}J and \ref{f:system_cc}K). We hypothesize that linking between shortened text and full documents helps build user trust (R4) because it helps experts transparently see and understand how shortened mentions are drawn from underlying text. This feature is inspired by CommunityClick \cite{CommunityClick}. \subsection{Color-coded history tracking for systematic review of evidence (R2)}\label{s:tracking} Some historical researchers emphasize the importance of comprehensively examining all available evidence during research (R2). To support historians in this work, \textsc{ClioQuery}~keeps track of which documents the analyst clicks in the Document Feed and opens in the Document Viewer. \textsc{ClioQuery}~also keeps track of bookmarked news stories (Figure \ref{f:system_cc}J), and displays a simple stacked horizontal bar chart (Figure \ref{f:system_cc}G) showing the proportions and counts of read, unread and bookmarked documents. The bar chart uses the read \includegraphics[scale=0.06]{figures/CCSilver.pdf}, unread \includegraphics[scale=0.06]{figures/CCBlack.pdf}, and bookmarked \includegraphics[scale=0.06]{figures/CCRed.pdf} color scheme employed across the color-coordinated interface. (\textsc{ClioQuery}~considers all documents to be either read but not bookmarked, unread or bookmarked. We do not allow intersection between these sets.)
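The three-way partition of documents just described (read, unread, bookmarked, with no overlap) can be sketched as a small state tracker. This is an illustrative sketch of the behavior the paper describes, not the system's code; the class and method names are hypothetical:

```python
class HistoryTracker:
    """Query-dependent reading-history state: documents are partitioned
    into unread, read, and bookmarked sets with no intersection, matching
    the paper's description. A new query creates a fresh tracker."""

    def __init__(self, doc_ids):
        self.unread = set(doc_ids)       # everything starts unread
        self.read, self.bookmarked = set(), set()

    def open(self, doc_id):
        """User clicks a document in the Document Feed."""
        if doc_id in self.unread:
            self.unread.discard(doc_id)
            self.read.add(doc_id)

    def bookmark(self, doc_id):
        """Starring moves a document into the bookmarked set only."""
        self.unread.discard(doc_id)
        self.read.discard(doc_id)
        self.bookmarked.add(doc_id)

    def counts(self):
        """Data behind the stacked bar chart: (read, unread, bookmarked)."""
        return len(self.read), len(self.unread), len(self.bookmarked)
```

For example, after opening one document and bookmarking another in a ten-document result set, `counts()` returns `(1, 8, 1)`.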
For instance, Figure \ref{f:system_cc}G shows 5 read, 89 unread, and 5 bookmarked documents. The user can click check marks (Figure \ref{f:system_cc}G) to show or hide documents in each category. \textsc{ClioQuery}'s Document Feed and Time Series View use the same color scheme to help users quickly identify opened and unopened documents. Stories that a user has already clicked appear with grey read text in the Document Feed, and their corresponding rug points are shown in grey in the Time Series View. For instance, in Figure \ref{f:system_cc}, the user has read the story published on Jan. 9, 1985. The story is greyed out in the Document Feed, and its corresponding rug point is shown in grey beneath the time series plot. Similarly, there are five red rug points in Figure \ref{f:system_cc}E because the user has bookmarked five documents. Note that \textsc{ClioQuery}'s history tracking is query-dependent; tracking resets each time a user issues a new query (unlike the history tracking mechanism in some prior work \cite[Section 6]{Footprints}). Such query-dependent tracking is appropriate for \textsc{ClioQuery}~because the system is designed to help historians review all mentions of some specific keyword in a corpus. We hypothesize that this feature offers experts assurance they have comprehensively reviewed all \specificmention. We leave exploration of other forms of history tracking for future work. \subsection{A filtering system to review many results in a neutral manner (R4)}\label{s:dont_rank_filter} Some prior text analysis systems designed for historians (e.g., Expedition \cite{expedition}) attempt to answer keyword queries by ranking documents to direct users towards most-relevant news articles. Because such ranked retrieval might introduce unwanted algorithmic influence over the expert search process (R4), \textsc{ClioQuery}~responds to queries with Boolean search, which returns the unranked set of all documents containing $Q$. 
(The Document Feed shows such documents in chronological order.) \textsc{ClioQuery}~then allows users to narrow down unranked search results with a filtering system, consisting of three filter controls. The \textbf{filter-by-date} control selects documents by time period. After users select a start date and end date from date pickers at the top of the interface (Figure \ref{f:system_cc}B), \textsc{ClioQuery}~updates to show only those documents mentioning the query published during the selected interval. (Historians are often interested in specific time periods; see Section \ref{s:needs}.) In Figure \ref{f:system_cc}B, the user has filtered to documents published in 1983--1985. The \textbf{filter-by-subquery} control allows users to select documents that contain some additional word, called a subquery. For instance, after a user queries for the Salvadoran leader ``{Duarte},'' they might wish to further narrow results to understand the relationship between ``{Duarte}'' and his ally U.S.\ President Ronald Reagan. To investigate, the user can enter the subquery ``{Reagan}'' to select all documents mentioning the word ``{Duarte}'' which also mention the subquery word ``{Reagan}'' (Figure \ref{f:system_cc}C). We included this feature because complex Boolean queries are often popular with experts \cite[Section 1.4]{irbook}. More complex Boolean expressions are possible in future work. The \textbf{filter-by-count} control filters results based on the number of times a query term is mentioned in a document. When a user adjusts the filter-by-count slider to some value $K \in \{1,2,3,4,5\}$, all components of the interface update to show only those documents with $K$ or more \specificmention. In cases where a user has set a subquery, the filter-by-count control allows the user to select documents which contain the subquery word at least $K$ times. For instance, in Figure \ref{f:system_cc}F, the user selects documents which mention ``{Reagan}'' at least 3 times.
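The combination of Boolean search with the three filter controls can be sketched as a single pass over the corpus. This is an illustrative sketch under stated assumptions ($(year, text)$ document pairs, whitespace tokenization), not the system's implementation:

```python
def filter_docs(docs, query, start=None, end=None, subquery=None, k=1):
    """Narrow an unranked Boolean result set, ClioQuery-style.
    Counts the subquery if one is set, else the query, for the
    filter-by-count threshold `k`, per the paper's description."""
    q = query.lower()
    sq = subquery.lower() if subquery else None
    out = []
    for year, text in docs:
        tokens = [t.lower() for t in text.split()]
        if q not in tokens:
            continue                          # Boolean search on the query
        if start is not None and year < start:
            continue                          # filter-by-date
        if end is not None and year > end:
            continue
        if sq is not None and sq not in tokens:
            continue                          # filter-by-subquery
        counted = sq if sq is not None else q
        if tokens.count(counted) < k:
            continue                          # filter-by-count (>= k mentions)
        out.append((year, text))
    return sorted(out)                        # chronological, never ranked

docs = [(1983, "Duarte met Reagan and Reagan spoke"),
        (1984, "Duarte spoke"),
        (1986, "Duarte met Reagan")]
hits = filter_docs(docs, "Duarte", start=1983, end=1985, subquery="Reagan", k=2)
# hits == [(1983, "Duarte met Reagan and Reagan spoke")]
```

Note that results are returned in chronological order rather than by a relevance score, reflecting the design rationale above.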
\textsc{ClioQuery}~also helps users quickly see the count of query terms within documents using square-shaped, query-colored \textbf{count markers}, shown beside each document headline. Count markers use brightness to encode the count of a query term within a document. For instance, count markers for documents with more mentions of a query term have a darker purple color than count markers for documents with fewer mentions. If a user enters a subquery, count markers show the count of the subquery within each document, using shades of the subquery color (as in Figure \ref{f:system_cc}F and \ref{f:system_cc}H). This feature is inspired by TileBars \cite{TileBars}. \subsection{Sentence simplification to help summarize a query across a corpus}\label{s:simplification} \textsc{ClioQuery}~introduces text simplification methods from NLP to the literature on text analytics. We describe these methods below. \subsubsection{Overview of sentence simplification in \textsc{ClioQuery}}\label{s:text_simplification_overview} \textsc{ClioQuery}'s Document Feed displays a query-focused summary of a user's query and subquery, by first extracting and then simplifying sentences mentioning query (or subquery) words. To simplify sentences, we turn to sentence compression techniques from the text summarization literature in NLP (introduced in Section \ref{s:textual_summary_family}). These methods try to summarize naturally-occurring input sentences by removing words, to create shorter and well-formed output sentences which contain the most salient information from the input. (A well-formed sentence is one that sounds natural, rather than garbled or choppy \cite{sprouseschutzeintro}.) In particular, we turn to a specific class of sentence compression methods, which can ensure that simplified sentences both (A) fit within limited screen space in a user interface and (B) mention the user's query term or subquery term. 
Such methods are appropriate for \textsc{ClioQuery}~because each line in the Document Feed has a fixed width, and must include some mention of the user's query or subquery. More concretely, we use a \textit{query-focused clause deletion} \cite{Handler2019HumanAJ,Handler2019Query} method to shorten sentences in cases when a user has entered a query (Section \ref{s:clause_deletion}), and also use a \textit{relationship span extraction} method \cite{handler-oconnor-2018-relational} in cases when a user has entered both a query and subquery (Section \ref{s:rsum_extraction}). We also employ a final fallback approach, \textit{character windowing}, when it is not possible to shorten a sentence using other techniques (Section \ref{s:clause_deletion}). In the next sections, we describe each sentence shortening method in greater detail. The Appendix provides additional details on how \textsc{ClioQuery}~chooses between possible sentence shortening methods.\footnote{ In Figure \ref{f:system_cc}, \textsc{ClioQuery}~uses relationship span extraction to shorten and display some sentence from 31 out of 239 documents which mention ``Duarte'' and ``Reagan.'' It uses query-focused clause deletion to shorten and display some sentence from 85 documents, and it resorts to character windowing for the remaining 123 documents.\label{footnote_counts}} \subsubsection{Query-focused clause deletion and character windowing}\label{s:clause_deletion} \textsc{ClioQuery}'s Document Feed requires shortened sentences that mention $Q$ and fit within available screen space. We assume that such shortenings should also be well-formed and contain the most salient information from longer source sentences. Prior research in IR suggests that users prefer well-formed snippets \cite{ryenwhitesnippets}, and prior work in sentence compression \cite{Knight2000StatisticsBasedS,filippova-altun-2013-overcoming, filippova2015sentence} strives for both well-formedness and salience.
We also assume that methods for constructing shortenings must run with low latency, which is known to be important in user-facing analytics systems \cite{latencyliu}. Different sentence shortening techniques might optimize for and manage tradeoffs between such requirements. But in this work we turn to a simple \textit{query-focused clause deletion} method to meet such criteria, allowing us to focus on how to apply text summarization methods in user interfaces for historical research. Query-focused clause deletion exploits the fact that natural language sentences are sequences of words, which exhibit hierarchical and nested grammatical structure \cite{bender_linguistic_2013}. For instance, the sequence ``She swims in the pool'' can be divided into interrelated word groups, with specific grammatical relationships; the words ``in the pool'' form a prepositional phrase that modifies the verb ``swims.'' To represent such linguistic structure, clause deletion employs a dependency parse tree \cite{Nivre2016UniversalDV} grammatical formalism. A dependency parse is a directed tree graph with one vertex for each word in the sentence, along with a latent root vertex.\footnote{We use the UD (v1) dependency formalism \cite{Nivre2016UniversalDV}; other related formalisms allow for non-tree parses \cite{schuster-manning-2016-enhanced}. Eisenstein \cite[Chapter 11]{eisenstein2019introduction} offers a broad introduction to dependencies. We perform dependency parsing using Stanford CoreNLP \cite{corenlppipeline,chen-manning-2014-fast}.} Each subtree in the parse corresponds to a constituent subsequence in the sentence. The sentence simplification literature sometimes describes such subtrees as \textit{clauses} \cite{filippova-strube-2008-dependency}. Figure \ref{f:clause_deletion_1} shows an example dependency parse. 
Sentence simplification via {clause deletion} shortens sentences by iteratively deleting clauses from a dependency parse.\footnote{Tokens from the remaining tree are then printed in left-to-right order, based on their position in the original sentence.} Figure \ref{f:clause_deletion} shows how one sentence is shortened by iteratively deleting two clauses. Unlike sentence compression techniques which consider individual tokens for removal (e.g., Filippova et al.\ \cite{filippova2015sentence}), deleting clauses naturally identifies and removes groups of related words. For example, a single deletion could remove the prepositional phrase ``after the election,'' or a much longer word group with more modifiers and embedded clauses: ``after the previous election last year, which went poorly.'' Shortening sentences via clause deletion also makes it easy to ensure that output sentences must include $Q$; clauses that contain query mentions are not allowed to be removed during deletion.\footnote{ It is also possible to enforce such query constraints using integer linear programming (ILP). However, ILP-based sentence compression techniques (e.g., Clarke and Lapata \ \cite{clarke2008}) are NP-hard and have been shown to be orders of magnitude slower than other iterative approaches to query-focused sentence compression \cite{Handler2019Query}. } \input{figures/clause_deletion} To try and create well-formed output sentences, \textsc{ClioQuery}~turns to prior work on clause deletion \cite[Section 6]{Handler2019HumanAJ}, which has found that in general removing more clauses from an input sentence makes it less likely that the resulting output sentence will be well-formed. Thus, to shorten an input sentence, \textsc{ClioQuery}'s clause deletion first identifies those candidate output shortenings that can be constructed by removing at most $K$ clauses from the input (without removing $Q$), and are also short enough to fit in one line of text within the Document Feed. 
Because in practice it is often possible to dramatically shorten an English news sentence by removing only one or two large clauses (for example, a lengthy relative clause, such as ``Reagan met with the envoy \sout{who was sent by the ...}''), \textsc{ClioQuery}~only considers shortenings which can be constructed with $0 < K \leq 2$ clause deletions.\footnote{ In addition to encouraging well-formed output, this strict limit ensures low latency for the user. For a sentence $M$ words long, the worst case for performance is a tree where all words are leaf vertexes, resulting in $M+M(M-1)/2$ possible outputs of $K=1$ or $2$ deletions. But in typical trees, there are far fewer possible deletions because: (1) the query word and all its ancestors are not allowed to be deleted, (2) after the first deletion of a clause of length $C$ (i.e., the size of the deleted subtree) only $M-C$ candidates remain for the second deletion, and (3) if \textsc{ClioQuery}~finds any candidate shortenings using $K$=1, it won't search for candidates using $K$=2, as shortenings which remove fewer clauses are more likely to be well-formed. We do not consider cases where $K=0$, as most unshortened news sentences are too long to fit within the Document Viewer.} To try and ensure that output shortenings include the most salient information from input sentences, \textsc{ClioQuery}~then returns the candidate output shortening with the highest tf-idf score \cite{irbook}. Tf-idf scores are often used in extractive sentence compression \cite{clarke2008,filippova-strube-2008-dependency} and text summarization \cite{das2007survey} to identify salient information for inclusion in summary output; this metric identifies words which occur with unusual frequency (relative to the overall corpus), which is an important signal of salience in summarization \cite{sumbasic}.
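The search just described (candidate shortenings from at most two clause deletions, with the query protected and the highest-scoring candidate kept) can be sketched roughly as follows. This is a simplified illustration and not the authors' implementation: the parse is given as parent pointers over tokens, and the per-token `scores` list stands in for tf-idf weights:

```python
def subtree(parent, root):
    """Token indices in the clause (dependency subtree) headed at `root`."""
    out, changed = {root}, True
    while changed:
        changed = False
        for i, p in enumerate(parent):
            if p in out and i not in out:
                out.add(i)
                changed = True
    return out

def clause_deletion(tokens, parent, q_idx, max_chars, scores):
    """Query-focused clause deletion, sketched from the paper's
    description. `parent[i]` is the head of token i (-1 = root); the
    query token and its ancestors may never be deleted. Tries K=1
    deletions before K=2, and returns the highest-scoring candidate
    shortening that fits in `max_chars`, or None (the caller would then
    fall back to character windowing)."""
    protected, j = {q_idx}, q_idx
    while parent[j] != -1:                 # query token plus its ancestors
        j = parent[j]
        protected.add(j)
    deletable = [i for i in range(len(tokens)) if i not in protected]

    def render(removed):
        return " ".join(t for i, t in enumerate(tokens) if i not in removed)

    for k in (1, 2):                       # fewer deletions preferred
        if k == 1:
            plans = [(i,) for i in deletable]
        else:
            plans = [(a, b) for a in deletable for b in deletable if a < b]
        candidates = []
        for plan in plans:
            removed = set().union(*(subtree(parent, r) for r in plan))
            text = render(removed)
            if len(text) <= max_chars:     # must fit one Document Feed line
                kept = sum(s for i, s in enumerate(scores) if i not in removed)
                candidates.append((kept, text))
        if candidates:
            return max(candidates)[1]      # highest salience (tf-idf proxy)
    return None

# "Reagan sent congratulations after the election" (UD-style heads assumed)
tokens = ["Reagan", "sent", "congratulations", "after", "the", "election"]
parent = [1, -1, 1, 5, 5, 1]
short = clause_deletion(tokens, parent, q_idx=0, max_chars=27, scores=[1.0] * 6)
# short == "Reagan sent congratulations"
```

In the example, deleting the single prepositional clause headed at ``election'' removes ``after the election'' in one step, yielding a shortening that keeps the query ``Reagan'' and fits the width budget.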
The Appendix includes details of how we compute tf-idf in \textsc{ClioQuery}~to identify words which occur frequently in documents mentioning a query. In some cases, there is no way to shorten a sentence by removing one or two clauses while ensuring that the output sentence mentions $Q$ and will fit in the Document Feed. In these circumstances, \textsc{ClioQuery}~resorts to shortening the sentence by extracting the span of $N$ characters to the left and right of $Q$ in the sentence, where we maximize $N$ under the constraint that the resulting character span will both fit in the Document Feed and respect word boundaries. We use this \textit{character windowing} method only as a last resort because it may cut off syntactic constituents (e.g., show only a portion of a prepositional phrase), which may create awkward-sounding output. \Cref{footnote_counts} describes how often \textsc{ClioQuery}~uses this fallback during an example run of \textsc{ClioQuery}. In the future, it might be possible to shorten more sentences with query-focused clause deletion by considering candidate output shortenings that are created using more than $K=2$ deletions. (Prior work on query-focused clause deletion does not yet offer an efficient solution for considering such candidates \cite{Handler2019HumanAJ}.) Because the number of candidates grows with $K$, developing algorithms which efficiently search over possible outputs or learn greedy deletion policies based on data (e.g., with reinforcement learning) might offer useful starting points. \subsubsection{Relationship span extraction}\label{s:rsum_extraction} \textsc{ClioQuery}~users who search for a query term $Q$ can also filter query results by a subquery. When a user enters both a query and a subquery term, we assume that they are broadly interested in how these two terms are related in the corpus.
For instance, a user might query for the Salvadoran leader $Q$=``Duarte'' and apply a subquery for the then U.S.\ President ``Reagan,'' in order to understand Duarte's relationship to Reagan (Figure \ref{f:system_cc}). \input{figures/rsum/rsum} To meet this information need, \textsc{ClioQuery}~attempts to simplify long and complex sentences mentioning both the query and subquery terms into short sentences which concisely describe the {relationship} between the query and subquery. We describe the process of shortening sentences in this manner as \textit{relationship span extraction} because each shortened sentence is a token span (i.e., sequence of tokens) extracted from a longer sentence. For instance, in Figure \ref{f:system_cc}, we extract the span ``Reagan sent congratulations to Mr.\ Duarte'' from the longer sentence ``{\color{gray}\sout{President}} Reagan sent congratulations to Mr. Duarte {\color{gray}\sout{and Ambassador Thomas R. Pickering pledged United States support for further meetings}}.'' \textsc{ClioQuery}~relies on a known natural language processing technique to perform relationship span extraction, which is specified in detail in prior literature \cite[Sec.\ 4]{handler-oconnor-2018-relational}. At a high-level, this method employs logistic regression to determine if an input sentence $s$ containing two input query words can be shortened to express a relationship between those two query words. To make this determination, the method first extracts a vector of linguistic features $\bm{x}$ containing information about the query words in the sentence (e.g., is there a verb token between the query words in $s$?), and then passes the dot product of $\bm{x}$ and a learned weight vector $\bm{\theta}$ through a logistic function $\sigma$. This returns a predicted probability that the token span between the query and subquery will sound natural when removed from the sentence (Figure \ref{f:rsum}). 
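The logistic decision just described amounts to a dot product passed through a sigmoid. Below is a minimal sketch; the feature values and weights are hypothetical stand-ins (the actual feature set and learned $\bm{\theta}$ come from the cited prior work), and the threshold of $0.5$ matches the system's configuration:

```python
import math

def relationship_span_score(x, theta, bias=0.0):
    """Predicted probability that the token span between the query and
    subquery reads as a well-formed standalone summary: sigma(theta.x + b).
    Illustrative only; not the trained model from prior work."""
    z = bias + sum(xi * wi for xi, wi in zip(x, theta))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical features: [verb between the query words?, scaled span length]
p = relationship_span_score([1.0, 0.3], theta=[2.0, -1.5], bias=-0.2)
keep = p > 0.5          # extract the span only if it clears the threshold
```

Here the sketch predicts a probability of roughly 0.79, so the span would be extracted and shown in the Document Feed.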
In our case, the input query words are the user's query and subquery; we shorten a sentence $s$ to a relationship span if the predicted probability that the span sounds natural is greater than a threshold $T=0.5$.\footnote{We implement with Scikit-learn \cite{Pedregosa:2011:SML:1953048.2078195}. Note that setting a lower threshold $T$ might increase the total number of shortened sentences, at the cost of creating fewer well-formed extractions (and vice versa).} In our implementation of \textsc{ClioQuery} (as in prior literature \cite{handler-oconnor-2018-relational}), relationship span extraction is supervised using a benchmark corpus from \citet{filippova-altun-2013-overcoming}, consisting of single sentences paired with single-sentence summaries, which are automatically generated from news headlines. In principle, a technically-oriented \textsc{ClioQuery}~user would be able to retrain relationship span extraction on their own corpus, using the technique from Filippova and Altun to automatically generate training data from headlines in their own news archive. \subsection{\textsc{ClioQuery}~suggests new features and directions for interactive text analysis}\label{s:discussion_simplification} Much prior work in interactive text analysis focuses on helping people investigate bodies of documents by presenting textual units like topics \cite{tiara}, events \cite{eventriver}, or thematic hierarchies \cite{overview}. But motivated by the needs of historians and archivists, \textsc{ClioQuery}~instead proposes and tests a new approach to interactive corpus investigation, organized around the analysis of query mentions in context (see Section \ref{s:related_comparison}). To help people investigate this ``unit of analysis'' \cite{chuangheer}, \textsc{ClioQuery}~employs new text summarization techniques from the NLP literature to create a summary of a query term across a corpus. 
The system then presents summaries alongside more traditional features from the text analytics literature, such as linked views and in-text highlighting, to help people easily and transparently review summary text in underlying documents. During expert interview and field study evaluations, many historians said that they found such features helpful for archival research. They reported skimming over query mentions in the Document Feed to gain a sense of a query's use across a corpus, and then reading highlighted mentions in the Document Viewer for more context and detail. Several specifically mentioned that these components helped with mention gathering~and analysis. This query-oriented approach suggests new directions for interactive text analytics in other query-oriented settings. For instance, some marketing applications identify salient words and phrases in online forums \cite[Section 4.1]{marketingkdd}; \textsc{ClioQuery}'s query-focused summaries and linked views might help marketing analysts using such systems understand what people say about products online. Features based on \textsc{ClioQuery}~might also be applied in existing text analytics systems. For instance, people might formulate a query using overview-oriented features such as word clusters, and then investigate this query word using a \textsc{ClioQuery}-style Document Feed and Document Viewer. \subsection{\textsc{ClioQuery}~tests an idea: {``Text and its affordances should be taken seriously''}\label{s:text_seriously}} Researchers have proposed many approaches to text visualization, which map high-dimensional text to two-dimensional graphical representations like time series plots (e.g., ThemeRiver \cite{ThemeRiver}) or bubble diagrams (e.g., EventRiver \cite{eventriver}). By contrast, \textsc{ClioQuery}'s Document Feed and Document Viewer do not map text data to a graphical representation. 
Instead, \textsc{ClioQuery}~uses text summarization methods from NLP to extract and present spans of text from a corpus for people to read, using automatic in-text highlighting to facilitate skimming. In this sense, \textsc{ClioQuery}~follows the advice of \citet{moretextplease}, who suggest that {``text and its affordances should be taken seriously''} in text analytics by making text itself ``a central piece of the visualization.'' Viewed through the lens of this recommendation, \textsc{ClioQuery}~reflects one strategy for a text analytics system fundamentally organized around displaying spans from a corpus. Other work from \citet{storifier} also explores this ``reading-centered approach.'' Some authors of prior text analytics systems have later noted the importance of showing underlying documents during interactive analysis. Authors of the Jigsaw system found that ``interactive visualization cannot replace the reading of reports'' \cite{Gorg2013JigsawReflections}. Similarly, creators of both Overview \cite[Sec.\ 5]{stray} and ThemeRiver \cite[Sec.\ 7]{ThemeRiver} also describe finding that people need to read underlying text. \subsection{User feedback on summarization has implications for natural language processing}\label{s:discussion_NLP} \textsc{ClioQuery}~applies particular ideas from query-focused text summarization for interactive text analysis. However, building and evaluating a user-facing system forced us to reexamine several core assumptions from the text summarization literature. In particular, early versions of \textsc{ClioQuery}~applied standard optimization-based summarization methods \cite{McDonald} to select ``important'' information from a corpus. This approach was reminiscent of prior temporally-oriented language engineering systems such as HistDiv \cite{histdiv}, TimeMine \cite{allen}, and TimeExplorer \cite{TimeExplorer}, which each attempt to automatically identify most-relevant information based on a query. 
However, during needfinding and prototyping, we found that some historians and archivists strongly disliked this approach. Experts reported that they needed to understand why the computer was showing particular summaries, before they could actually draw conclusions from the output (see prototypes in the Appendix). Based on this feedback, in later versions of \textsc{ClioQuery}, we stopped trying to extract ``important'' mentions of a query term in search results. Instead, we decided to shorten and present every single sentence mentioning a user's query in the Document Feed, and allow people to easily examine such shortenings in context in the Document Viewer. During our expert interview and field study evaluations, we found that this approach was more successful. We hypothesize that experts liked this format because they could understand {why} \textsc{ClioQuery}~showed query shortenings, and thus use \textsc{ClioQuery}~output in their research. Our experiences might have implications for NLP, where research in summarization typically focuses on generating summaries which best match ``gold'' references \cite{das2007survey,nenkova2012survey} without worrying about explaining how summaries are formed. In particular, much recent work on abstractive summarization in NLP \cite{rush-etal-2015-neural,Hermann2015TeachingMT} seeks to generate summary passages that do not occur in the input text. Because such abstractive output cannot be checked against underlying sources, and because such methods also currently suffer from frequent factual errors \cite{kryscinski-etal-2019-neural}, much more research may be required before abstractive approaches might be applied towards social research.
\subsection{Comprehensive and unbiased search costs time; transparency might help}\label{s:discussion_relevance} During needfinding interviews, historians and archivists often emphasized the importance of directly and comprehensively examining all evidence relevant to a given research question, without allowing black-box algorithms to influence their conclusions. We thus designed \textsc{ClioQuery}~to minimize potential bias from algorithmic ranking. Yet feedback on these aspects of \textsc{ClioQuery}~was mixed (Sections \ref{s:expert_interview_comprehensivenesss} and \ref{s:relevance_model_feedback}). Some appreciated how \textsc{ClioQuery}~used filters instead of ranking to narrow down search results. But others reported that truly forgoing algorithmic curation required the researcher to spend too much time reading irrelevant documents. For instance, some admitted that they often have no choice but to trust computer models of relevance to find evidence in archives because keyword search often turns up far more documents than they can possibly review. While historians do sometimes work with smaller corpora (Section \ref{s:feed_and_viewer}), this issue would be particularly problematic in larger archives, where some queries will be mentioned many times. Why did some express deep commitment to full manual review of evidence during needfinding interviews, while others admitted that they had to trust search engines to select evidence during system evaluation? There are at least two possibilities. One possibility is that historians and archivists might express commitment to comprehensive review when describing their ideal practices, but remember the limitations of this ideal when faced with a real task during system evaluation.
Some approaches to needfinding in HCI emphasize the limits of user interviews \cite{ethographic} because ``what people say and what they do can vary significantly.'' Another possibility is that there is variation in historians' commitment to comprehensiveness. Some but not all historians may feel required to comprehensively review all evidence during research, possibly based on intellectual background or subfield. (Other authors find similar variation among doctors \cite[Sec.\ 4.3.5]{doccurate}.) Better understanding this apparent contradiction between experts' stated commitments to comprehensive review and the realities of inevitable tradeoffs between recall and time \cite[Fig.\ 6]{pirolli2005sensemaking} will require further research. Nevertheless, future researchers might resolve the contradiction with improved user interfaces. Specifically, systems might transparently show which documents are selected or hidden by an algorithm, and allow people to easily override and investigate any document ranking decisions from a machine. Such features would be particularly important for larger corpora, where historians would not be able to review all query mentions in context. Research on tools for visually and interactively refining search results \cite{TiisVizIR} might offer a useful starting point. Features which help groups of historians to collaborate during search could also enable teams of researchers to comprehensively review evidence from larger corpora. \subsection{A crowdsourced historical reading comprehension task} We designed a crowdsourced historical reading comprehension task to compare \textsc{ClioQuery}~with a keyword document search~system (IR), which we consider to be a baseline tool for historical research (Section \ref{s:intro} and \ref{s:related}). 
Our task is designed to reflect historians' common practice of mention gathering and analysis, in which expert social researchers find and review occurrences of a query $Q$ in an archive (Section \ref{s:intro}) in order to draw conclusions about society. In our crowdsourced adaptation of this common historical research process, we tasked non-specialists with finding and reviewing occurrences of a query in a newspaper corpus.\footnote{We did not ask participants to take the next step of drawing substantive historical conclusions from their findings, which would have required deep historical knowledge and specialized training.} We then used reading comprehension questions to measure how well participants performed at finding and reviewing information about the query; many common educational assessments use similar reading comprehension questions to assess how well people learn information from documents \cite[Chp.\ 7]{reading_comp}. To ensure we presented an ecologically-valid research prompt, we modeled our crowd task after a real historical question from P2. In our interview study (Section \ref{s:usabilitystudy}), P2 used \textsc{ClioQuery}~to investigate whether \textit{The New York Times} portrayed the controversial figure Robert Mugabe as a corrupt authoritarian or as a hero of Zimbabwe's fight for independence. In our crowd study, we presented participants with one of two text analytics tools loaded with the same small corpus of 12 \textit{New York Times} editorials mentioning Robert Mugabe, published from January 2001 to June 2003. We then asked participants to ``find and remember everything \textit{The New York Times} wrote about Robert Mugabe'' using their tool. Because historians have only so much time for a given research project (Section \ref{s:needs_comprehensive}), we limited participants to exactly six minutes to conduct their research using their assigned interface. 
After six minutes, we presented eight true/false reading comprehension questions about \textit{New York Times} coverage of Mugabe, and observed the total number of correct answers for each participant. Because scoring well on this test of reading comprehension requires finding and reviewing information about a query in a corpus, we believe our crowd task measures how well people perform mention gathering and analysis using a particular interface. \subsubsection{Details: reading comprehension questions and scoring} To ensure that our task was as objective and neutral as possible, we created reading questions using the Wikipedia page for Robert Mugabe \cite{wikimugabe}. Specifically, we used a semi-automated procedure based on tf-idf sentence vectors (described in detail in the Appendix) to identify Mugabe facts from Wikipedia reported in \textit{New York Times} editorials about Mugabe. We then selected four facts from Wikipedia reported in the 12 editorials in the corpus, and four facts from Wikipedia that were not reported in the 12 editorials. The latter four facts were each reported in some other \textit{New York Times} editorial that was not presented to participants (because that editorial was published outside the January 2001 to June 2003 window). In total, this process created a list of eight Mugabe facts from Wikipedia. To evaluate reading comprehension, we presented all eight facts in randomized order, and asked participants to select those facts which appeared in the articles they had reviewed during the task. To get a perfect score of eight out of eight correct answers without guessing,\footnote{In this task, a participant would have a $0.5^8 \times 100 \approx 0.391\%$ chance of correctly guessing all 8 answers.} a participant would have to find and remember the four Mugabe facts reported in the editorials shown during the task, without selecting any of the four facts that were not reported in the editorials. 
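The core idea of the tf-idf matching step can be illustrated with a short sketch. This is not the exact procedure from the Appendix; the tokenization, the example sentences, and the similarity threshold below are all illustrative assumptions, and a real pipeline would add stemming and stopword handling:

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """Map each sentence to a sparse tf-idf vector (term -> weight)."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def fact_in_corpus(fact_vec, editorial_vecs, threshold=0.3):
    """A fact 'appears' in the corpus if some editorial sentence is
    sufficiently similar to it (threshold chosen for illustration only)."""
    return max(cosine(fact_vec, v) for v in editorial_vecs) >= threshold
```

Under this scheme, Wikipedia facts whose best cosine score clears the threshold would be candidates for the ``reported in the corpus'' set, and those below it for the held-out set, with a human verifying each match.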
The Appendix includes a screenshot showing the reading comprehension questions. \subsection{Experiment design and experiment details} We compared \textsc{ClioQuery}~with a keyword document search~system using a between-subjects experiment design with U.S.\ masters workers recruited via Amazon Mechanical Turk. (Amazon confers the master designation on crowdworkers with a record of success in crowd tasks.) Participants were randomly assigned to complete the reading comprehension task using either \textsc{ClioQuery}~or a baseline keyword document search~interface (IR). We then measured the difference in the mean number of total correct answers from workers in each group to determine if \textsc{ClioQuery}~helped people find and remember information about Robert Mugabe (as compared to the IR system). \subsubsection{Implementation of the IR baseline} We implemented the IR system using Whoosh, an open-source Python keyword document search~tool which ranks results using the common BM25 metric.\footnote{Whoosh is similar to other traditional keyword document search~tools like Lucene. \url{https://whoosh.readthedocs.io/}} The Appendix contains a screenshot of this baseline interface. To ensure fair comparison, we tuned Whoosh to be most similar to \textsc{ClioQuery}. Specifically, Whoosh accepts a number of configuration parameters which govern how the system creates snippets on the results page (see Section \ref{s:related_work_search}). Because such snippets are similar to the snippets in the \textsc{ClioQuery}~Document Feed, we adjusted the Whoosh snippet parameters so that Whoosh snippets were as close as possible in length to the shortened sentences in the \textsc{ClioQuery}~Document Feed. Further details about the tuning procedure are described in the Appendix. We also adjusted the IR system to use the same font size as \textsc{ClioQuery}. 
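For readers unfamiliar with BM25, the ranking metric can be sketched in a few lines of pure Python. This is an illustrative implementation of the standard formula with its usual $k_1$ and $b$ defaults, not Whoosh's internal code:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each tokenized document in `docs` against a tokenized `query`."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n   # average document length
    df = Counter()                          # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            # Smoothed idf, as in the standard BM25 formulation.
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            # Term-frequency saturation with length normalization.
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores
```

Documents mentioning the query term more often (relative to their length) score higher, which is the ranking behavior the tuned Whoosh baseline exposes to participants.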
To minimize possible variation in worker behavior, we hard-coded the IR system (and the \textsc{ClioQuery}~system) to use the query ``Mugabe'' during the experiment.\footnote{During the experiment, we also removed \textsc{ClioQuery}~interface elements which are not relevant for the task, such as the corpus selection control, the filter-by-date feature, and the filter-by-subquery feature.} To rank the 12 documents in the corpus using the IR system, we loaded the IR tool with all \textit{New York Times} editorials published between 1987 and 2007 that include the word ``Zimbabwe,'' and then queried for ``Mugabe'' while applying a date filter to select only those results which were published from January 2001 to June 2003. We did not implement the IR system using proprietary black-box search tools like Google or \textit{New York Times} search \cite{nytwebsite}, because it is not possible to load such systems with a custom corpus. Loading a custom corpus is crucial for two reasons. (1) Our broader goal is to design an open-source software system that can be deployed and used by historians, who are often interested in corpora that are not published on the web (Section \ref{s:limits_and_future}) and thus inaccessible to Google (or to any other web search engine product). Comparing to an open-source search engine is thus more appropriate than comparing to a black-box system like Google. A historian could use an open-source search engine to index and analyze the documents they collect during their work. (2) For a controlled experimental comparison of two user interfaces, it is necessary to fix the dataset used for both interfaces. In our experiment, users were evaluated based on what information they found (which would change based on the dataset). 
\subsubsection{Experiment sequence, experiment pretest, and phases of data collection} At the start of the experiment, participants in each condition watched a roughly one minute training video describing how to use their randomly assigned interface. They also read several screens with task instructions, where they entered short phrases into text boxes to confirm they understood the task and were paying attention. After these preliminaries, participants took an easy pretest which was very similar to the main task (but was about Iraq instead of Zimbabwe). We describe the details of the pretest in the Appendix. After the pretest, participants proceeded to the main Robert Mugabe task, conducted their research, and answered the eight reading comprehension questions. The task concluded with qualitative questions, including questions about the strengths or weaknesses of the assigned interface. Qualitative questions are provided in the Appendix. In total, the task took 20 minutes. Data collection for the task proceeded in two phases. We first collected data from 18 participants in a small initial pilot. Following the initial pilot, we made adjustments to the task described in the Appendix, including fixing a bug which was favorable to the IR baseline. Following these changes, we collected data from the remaining 103 participants. We decided to include data from the pilot in our analysis because collecting data from crowdworkers was expensive, and because we had trouble recruiting participants from the limited pool of masters workers. (Pooling data is common in settings where data is sparse). Because we struggled with recruitment, we had to increase task payment from \$2.50 to \$5.00 during data collection. We include details in the Appendix. 
\subsubsection{Detecting engaged and not-engaged workers} In their highly-cited study on crowdsourcing for HCI, \citet{Kittur} emphasize the importance of detecting suspect responses from crowdworkers who may not be completing tasks in good faith. We thus measure worker engagement in two different ways. First, because the pretest was designed to be very easy, we assume that participants who did not score perfectly on the pretest were less engaged in the crowd task than other participants. Second, we also assume that participants who made mistakes on task instructions were also less engaged. For instance, some participants made a mistake on task instructions by trying to skip ahead without watching the training video (we logged this and similar behaviors). In subsequent analysis, we refer to participants who both completed the pretest correctly and did not make any mistakes on task instructions as \textbf{engaged participants}. Engaged participants are a subset of \textbf{all participants}, the set of all people who completed the task. \begin{figure}[h] \centering \subfloat{ \includegraphics[width=8cm]{figures/faceted_histogram.pdf} }\\ \subfloat{ \input{tables/crowd_table} } \caption{Total correct questions by interface, among all participants, and among engaged participants. A star* indicates a significant difference between the means of the \textsc{ClioQuery}~and IR groups.}\label{f:crowdresults} \end{figure} \subsection{Results and analysis} We found that participants assigned to complete the historical reading comprehension task with \textsc{ClioQuery}~averaged more total correct answers than participants assigned to complete the same task with the IR system (Figure \ref{f:crowdresults}). 
Among the \nengaged~engaged participants, workers in the \textsc{ClioQuery}~group averaged \deltaCQengaged~more correct answers than workers in the IR group (Cohen's $d$=\cohensengaged).\footnote{Computed with v0.8.1 of the \texttt{effsize} package in R.} \textsc{ClioQuery}'s effect was weaker among all participants, where \textsc{ClioQuery}~workers averaged $0.399$~more correct answers than IR workers (Cohen's $d$=\cohensALL). We hypothesize that this weaker effect may be due to inattention among non-engaged participants, which may have introduced data collection noise. For instance, participants who did not read task instructions carefully or who failed the pretest may have been more inclined to guess on the Mugabe task.\footnote{The number of workers in each group is not exactly equal. This is common in crowdsourced settings, where some workers may not finish a task. For instance, we used an alert to ask workers attempting to complete our task on a phone rather than a computer to not proceed with the survey.} \begin{figure}[h] \centering \includegraphics[width=8cm]{figures/bootstrap_means.pdf} \caption{Distribution of sample means, across 2500 bootstrap samples of scores from the 59 IR participants, and 2500 separate bootstrap samples of scores from the 62 \textsc{ClioQuery}~participants.}\label{f:samples} \end{figure} We tested for possible equality of means using bootstrap hypothesis testing \cite{Efron} (Algorithm 16.2). Using 100,000 samples, we found that the difference in means among all workers in each condition was significantly different ($p=\pAll$). We also found that the difference in means among the subset of engaged workers in each condition was also significant ($p=\pAttentive$). We show separate bootstrapped distributions of sample means in Figure \ref{f:samples}. 
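The two statistics reported above can be computed mechanically. The sketch below shows both building blocks: Cohen's $d$ with a pooled standard deviation, and a two-sample bootstrap test of equal means in the style of Efron and Tibshirani's Algorithm 16.2. The score lists in the usage example are hypothetical, not our experimental data:

```python
import math
import random
import statistics

def cohens_d(a, b):
    """Cohen's d effect size with pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = math.sqrt(((na - 1) * statistics.variance(a) +
                           (nb - 1) * statistics.variance(b)) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

def bootstrap_mean_test(a, b, n_boot=10_000, seed=0):
    """Two-sample bootstrap test of H0: equal means.

    Both samples are shifted to the pooled mean so that H0 holds in the
    resampling world; the p-value is the fraction of resampled mean
    differences at least as extreme as the observed difference.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = statistics.mean(a + b)
    a0 = [x - statistics.mean(a) + pooled for x in a]
    b0 = [x - statistics.mean(b) + pooled for x in b]
    extreme = 0
    for _ in range(n_boot):
        diff = (statistics.mean(rng.choices(a0, k=len(a0))) -
                statistics.mean(rng.choices(b0, k=len(b0))))
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_boot
```

For example, with hypothetical scores \texttt{cq = [8, 7, 8, 6, 7, 8, 7, 8, 6, 7]} and \texttt{ir = [5, 4, 5, 6, 4, 5, 5, 4, 6, 5]}, \texttt{bootstrap\_mean\_test(cq, ir)} returns a large positive observed difference and a p-value near zero.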
\subsubsection{Qualitative analysis}\label{s:turk_qual} Our experiment suggested that some properties of the \textsc{ClioQuery}~interface helped participants on the historical reading comprehension task. To better understand which aspects of \textsc{ClioQuery}~may have been helpful, we reviewed qualitative feedback from the 8~\textsc{ClioQuery}~participants who achieved a perfect score in reading comprehension. Each of these participants praised one or more \textsc{ClioQuery}~features when offering qualitative feedback on the system. \textit{``I liked that I could expand the articles and filter them by the number of times the key word was mentioned,''} one top performer wrote. Said another, \textit{``I liked being able to control the number of mentions so I could determine relevance rather than trust a search engine.''} A third liked that \textsc{ClioQuery}~\textit{``made it easy to see which articles I have already read and which ones I have yet to read.''} One high scorer did note that while in-text highlighting was in general helpful, \textit{``the part of this highlighting that I didn't like is that ... it was hard to gain context without reading the unhighlighted text before or after the highlighted sections.''} On the other hand, the 4~IR users who got perfect scores offered scattered feedback. One liked how the results did not show \textit{``a bunch of random stuff or products to buy,''} while two others disagreed about whether the snippets were useful (one praised them, one said they did not help). Comments from the final high-scoring IR participant suggested that the IR system offered a realistic baseline. \textit{``There wasn't much to like or dislike,''} they said. 
\textit{``I really didn't find any differences how I would normally do it.''} \subsection{Recruitment, participants and corpora} We recruited five participants (P1--P5) from two universities in the U.S., by emailing students, faculty, and staff listed on history and library department web pages. All participants had advanced degrees (master's or PhD) in history or library science, much like the expected users of our system. We provide more details on the backgrounds of participants in the Appendix. Interviewees from our needfinding study (Section \ref{s:needs}) did not participate in our expert interview study, to avoid what Sedlmair et al.\ describe as a potential form of bias \cite{Sedlmair}. Each participant in the interview study had an established research or curatorial interest in some topic related to late 20th century or early 21st century history, which we express as a single topic word (see Appendix). We identified this designated topic word based on each participant's publication record and professional web presence. Before each interview, we then loaded \textsc{ClioQuery}~with a corpus of \textit{New York Times} (NYT) editorials\footnote{Social researchers sometimes study editorials to better understand media sources \cite{gay_rights,Lule}.} published between 1987 and 2007 \cite{SandhausNYT} mentioning the designated topic word. \subsection{Data collection}\label{s:datacollection} To administer the study, one researcher from our group conducted five one-on-one, sixty-minute interviews over Zoom video chat. (See supplemental materials for a detailed script.) During each interview, the researcher asked each participant to brainstorm and then articulate a high-level research question, based on the participant's prior work (10 minutes). They then introduced the participant to \textsc{ClioQuery}~via a tutorial (7 minutes), and asked them to investigate their research question using \textsc{ClioQuery}~(30 minutes). 
They concluded with a semi-structured interview (13 minutes). Throughout, the researcher observed and recorded participant reactions and invited participants to think aloud \cite{thinkaloud} as they used the system. If a participant offered feedback on some portion of the interface during their investigation (e.g., offered detailed feedback on the Time Series View), the researcher did not ask about this topic again during the semi-structured interview. \subsection{Thematic coding}\label{s:data_analysis} The researcher who conducted the interviews analyzed automatic Zoom transcriptions for each of the video recordings, and corrected transcription errors. The researcher then extracted 183 quotes from across the five interview transcripts. Each quote consisted of a few sentences on a focused topic, along with the preceding question or comment to provide context (e.g., a quote might discuss the Document Feed). The researcher attempted to extract as many quotes as possible, while excluding irrelevant quotes (e.g., tutorial instructions). The researcher then developed a codebook of six high-level codes (described in Section \ref{s:qualresults}), by grouping and re-grouping the 183 quotes to identify common themes, much like the codebook-based approach described in Miles, Huberman and Salda\~na \cite[Chp. 4]{miles_qualitative_2014}.\footnote{Miles, Huberman and Salda\~na \cite[Chp. 4]{miles_qualitative_2014} describe assigning codes in two phases; we assign codes in a single phase.} After assigning each quote to exactly one of the six codes, the researcher shared the codebook with an undergraduate coder with training and experience in qualitative coding (who was not involved with the development of \textsc{ClioQuery}). The second coder independently assigned codes to the same quotes, using the codebook. The second coder was also invited to add new codes to the codebook if needed, but reported that no new codes were necessary. (Thus we did not modify the codebook.) 
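Agreement between two independent coders of this kind is conventionally quantified with Cohen's $\kappa$, which corrects raw agreement for the agreement expected by chance. A minimal sketch (the label sequences in the usage example are hypothetical, not our study data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two coders labeling the same items.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_exp is the chance
    agreement implied by each coder's marginal label distribution.
    """
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    p_obs = sum(x == y for x, y in zip(codes_a, codes_b)) / n
    counts_a, counts_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)   # undefined if p_exp == 1
```

Perfect agreement yields $\kappa = 1$, while agreement no better than chance yields $\kappa = 0$.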
We include a copy of the codebook in supplemental materials. Following independent coding of each of the 183 quotes, the two coders met for 1 hour over Zoom video chat to discuss 41 disagreements, and attempted to reach consensus via discussion. In 21 cases, the two coders were able to reach agreement regarding the appropriate code. In 20 cases, the coders determined that disagreement reflected genuine ambiguity in qualitative data, and agreed to disagree. McDonald et al.\ \cite[Section 2.2]{McDonaldCoding} use the term \emph{reliability} to describe the extent to which coders reach the same result from independent work, and the term \emph{agreement} to describe the extent to which coders reach consensus after discussion. Adopting this terminology, we measure the reliability of the two coders by computing Cohen's $\kappa=0.724$ (using the R \texttt{psych} 2.0 package \cite{psych}), and we measure the agreement of the two coders by computing $\kappa=0.855$. \subsection{Procedure} We recruited two historians, $H1$ and $H2$, through convenience sampling \cite{given_sage_2008}. The Appendix includes details on their backgrounds. $H1$ and $H2$ did not participate in the initial design or development of \textsc{ClioQuery}, to avoid what Sedlmair et al.\ \cite{Sedlmair} describe as a potential source of bias. During the field study, one member of our research team conducted three one-on-one meetings with each historian over Zoom video chat. The first meeting was 30 minutes long and the subsequent meetings were 60 to 70 minutes long, with 1 to 3 weeks between each meeting. Each meeting in the three-meeting sequence had a distinct focus. During the first meeting, the researcher presented a tutorial of the software, described the field study process, and invited the historian to describe a question related to their research. 
After the first meeting, a member of our research team gathered the data needed to answer the historian's research question and loaded it into \textsc{ClioQuery}~(the Appendix describes this data gathering). During the second meeting, each historian learned to use the \textsc{ClioQuery}~software and performed a preliminary exploration of the data. Then, during the final meeting, each historian investigated some specific query by analyzing the comprehensive set of all mentions using the Document Feed and Document Viewer. During each meeting, the researcher observed each historian and invited the historian to think aloud \cite{thinkaloud} as they used the system. The researcher also asked the historian to describe their findings and explain how \textsc{ClioQuery}~helped or did not help answer their research question. \subsection{\textsc{ClioQuery}~helps experts investigate by {skimming}, an advantage over baselines} During the field study, $H1$ and $H2$ each used \textsc{ClioQuery}~to reach substantive historical conclusions, offering additional evidence for our hypothesis (Section \ref{s:system}) that \textsc{ClioQuery}~can help experts answer research questions from news archives. ${H1}$ used \textsc{ClioQuery}~to verify a well-known claim from Herman and Chomsky, who argue that for-profit news organizations in the United States shape public opinion towards the interests of political and economic elites \cite{MC}. To offer evidence for this theory, in their work, Herman and Chomsky assert that \textit{The New York Times} wrote five articles in February and March of 1984 describing the Salvadoran army as a protector of El Salvador's election. To verify this result, $H1$ searched a \textit{New York Times} corpus (see Appendix) for the query ``{election}'' and then used the filter-by-date feature to select articles from February and March of 1984. 
$H1$ then used the filter-by-subquery feature to identify those query results which contained the subquery ``{army}.'' $H1$ then systematically reviewed all 32 matching documents, through what $H1$ described as \textit{``skimming highlighted parts''} in the Document Viewer. By using \textsc{ClioQuery}~in this manner, $H1$ said that they were \textit{``able to find what might be the five articles''} Herman and Chomsky used to partially support their conclusions. $H1$ explained, \textit{``The tool is great for exactly this.''} $H1$ found \textsc{ClioQuery}'s in-text highlighting helpful for their research task, drawing a comparison with a baseline keyword document search~system (Section \ref{s:baseline}). \textit{``I like how you have the bold highlighted and colored words in the text itself,''} they said. \textit{``That is the advantage that this interface has over the New York Times website.''} $H1$ also explained how such highlighting reduced reading burden (compared to a keyword document search). \textit{``What I need to know is the army described as a protector of the election [in an article]},'' he said. \textit{``I don't need to read every word of the article to find that out. I can look at the paragraphs where they are describing the army and I see what they are saying in those paragraphs. That is pretty useful.''} ${H2}$ chose to use \textsc{ClioQuery}~to study how the United States media represented female astronauts Svetlana Savitskaya and Sally Ride in the early 1980s. (${H2}$ needed to answer this question to research a planned book.) To investigate, $H2$ used \textsc{ClioQuery}'s Document Feed and Document Viewer to review the portrayal of Sally Ride in \textit{The New York Times}. $H2$ queried for the word {``Ride''} and then scrolled through the Document Feed to skim over mentions of Ride in the 63 matching documents, sometimes also clicking to open individual news stories in the Document Viewer. 
\textit{``I have some hypotheses that I was able to develop very quickly through the experience of using this [system]},'' $H2$ reported. \textit{``One is that Ride was presented to the American public [in The New York Times] ... first as a woman and second as a scientist.''} $H2$ asked us to continue to provide access after the study, so she could {continue researching her book using the tool}. \textsc{ClioQuery}'s Document Feed was particularly helpful for $H2$, who found that query-focused summarization offered an advantage over a baseline keyword document search~system. Ride was a PhD astrophysicist turned astronaut, and $H2$ wanted to understand how the media portrayed her scientific credentials. The Document Feed helped $H2$ quickly review this information. \textit{``[Here] she's called a flight engineer},'' $H2$ said, pointing to the Document Feed. \textit{``I can see this already [without opening the document].''} $H2$ then scrolled through the Document Feed to find shortened sentences where Sally Ride was described with her academic title (Dr. Ride), and sentences where Ride was described (or not described) as a physicist. $H2$ explained that she could identify this information \textit{``just doing the quick scan [in the Document Feed].''} She went on to explain how she would normally research this question with \textit{The New York Times} archive (by opening and reading individual news stories using a web browser). \textit{``The question is},'' she said, {\textit{``what can I do here [with \textsc{ClioQuery}] that I can't do there [i.e.\ on \textit{The New York Times website]?''}}} $H2$ continued, \textit{``It's exploring the left hand Document Feed here. This is awesome ... I am liking these short contextual pieces [i.e., shortened sentences].''} We illustrate this comparison in Figure \ref{f:field_study_loop}; by using \textsc{ClioQuery}, $H2$ was able to easily gather and analyze mentions of Ride across the corpus. 
\subsection{\textsc{ClioQuery}~helps with historical sensemaking}\label{s:sensemaking} While using \textsc{ClioQuery}, each of the five participants formed a question and then collected and interpreted evidence to start to answer that question. We observed that \textsc{ClioQuery}~helped historians with this investigative process, which Dalton and Charnigo \cite{DaltonCharnigo} describe as historical sensemaking. Our observations offered partial validation for our hypothesis that \textsc{ClioQuery}~features can aid historians in their work (Section \ref{s:system}). For instance, as part of his research, P1 studies \textit{New York Times} news coverage from journalists embedded with United States military units in the Iraqi city of Falluja during the second U.S.--Iraq war. From prior study, P1 understood that embedded U.S.\ journalists often published news stories reflecting the perspectives of U.S.\ military leaders. But while examining mentions of $Q$=``Falluja'' in \textit{New York Times} editorials using the \textsc{ClioQuery}~interface, P1 expressed surprise when finding a more nuanced perspective from the opinion desk. \textit{``I didn't see nearly as much of the sort of sensational depiction of Falluja, and the militants in Falluja [in editorials] that I expect from embedded journalists [in news stories],''} he reported. Similarly, P2 used~\textsc{ClioQuery}~to find confirming evidence of shifting U.S.\ perspectives towards Robert Mugabe. As P2 expected, early \textit{New York Times} editorials from the corpus praised $Q$=``Mugabe'' as a liberator, but then began to criticize \textit{``him as a bad statesman, as a tyrant and a dictator.''} P3 was likewise able to partially answer a research question with \textsc{ClioQuery}. She explained that while she had \textit{``a deep knowledge of [women in combat]. 
I don't have a deep knowledge of what the [NYT] editorial board has to say about it.''} Using \textsc{ClioQuery}, she found evidence of editorials using \textit{``the gendered trope that women are supposed to be wives and mothers}.'' P5 also discovered an unexpected connection with musical copyright, while researching a hypothesis surrounding literary copyright. \textit{``The parity [with the music service] Napster that's that's really interesting ... That's not something I thought about ... I was thinking ... definitely more in literary items because that's what I deal with.''} \subsection{\textsc{ClioQuery}~features offer a corpus overview, alongside complementary context}\label{s:features_feedback} Participants offered detailed feedback on \textsc{ClioQuery}~features during interviews, which often matched our design goals for particular components of the interface. To begin, three participants reported that \textsc{ClioQuery}'s \textbf{Time Series View} offered a useful overview of the entire corpus, by directing their attention to salient time periods. P1 said the Time Series View was an \textit{``easy way of visualizing''} corpus trends, and P5 suggested that the Time Series View might be helpful \textit{``when students are kind of in that exploratory phase ... as a way of ... coming up with research questions.''} P4 offered similar feedback. \textit{``I really like this,''} she said. \textit{``This looks really functional and really useful. I like how there is quite a lot of information packed in.''} P1, P2 and P5 reported that the \textbf{Document Feed} was a useful feature of \textsc{ClioQuery}~because it helped summarize query mentions. The Document Feed \textit{``condenses all of the essential information and sort of leaves out all the extra stuff},'' said P1. Said P2, \textit{``I found [the Document Feed] useful, especially the expand button. 
If I click expand I can see a rundown of the mentions right after the title without seeing the article}.'' P5 reported using the Document Feed to \textit{``do some ... simple kind of topic modeling in my own head ... just to see if I could pull out any ... themes there.''} P5 added that, \textit{``having this here [i.e., Document Feed] is really helpful to kind of see what they're talking about.''} P3 and P4 discussed the Document Feed while describing the importance of context in historical research; we include their feedback on this feature in Section \ref{s:feedback_context}. Several participants also reported that the \textbf{Document Viewer} helped during their research. For instance, P3 reported that automatic in-text highlighting in the feed was very helpful. \textit{``I'm a visual person. So I'm looking for the words. I like that they're in purple and green ... the words that you've given me the pop out ... and I can see if it's a pro or con article pretty quickly just from that}.'' P2 said he used the Document Viewer to \textit{``provide detail.''} P1, P2 and P5 noted that \textsc{ClioQuery}'s linked \textbf{Document Feed and Document Viewer} served complementary purposes. They described how the Document Feed provided a summary of the query term, while the Document Viewer provided necessary and complementary details. \textit{``You need both [the Document Feed and Viewer]},'' said P2. \textit{``With just the Document Feed I won't be able to get the full picture of the story. And with just the Document Viewer I will not be able to trace the mentions quite comprehensively and specifically}.'' P2 then added, \textit{``as a researcher, it's important to see things in detail. If you just conclude from what you see in the Document Feed you are not going to get an objective picture of the context of the story line. 
But if you see the Document Feed, see the mentions, see what they imply, and then you want to understand the context of the story you are going to get to the Document Viewer}.'' P1 said, \textit{``I like having both the Document Feed and the Document Viewer side by side. [The Document Viewer helps with] reading for more depth when I want more depth and [the Document Feed] helps with ... quick scans pretty easy.''} Similarly, P5 explained, \textit{``I see [the Document Feed and Viewer] working together really well ... I start by looking at the feed to kind of pick out the articles that would want to kind of dive into deeper and then I go into the Document Viewer.''} P4 and P5 specifically mentioned that complementary linked views from the Document Feed and Document Viewer helped with \textbf{mention gathering~and analysis}, as compared to a baseline keyword document search~system. \textit{``A lot of a lot of databases that we work with do something similar to this [i.e., the Document Feed]},'' said P5, while describing a search engine results page. \textit{``But you often then have to click on the article to go into the article to get to that reading ... here it is nice that it was just kind of next to it and you can scroll through it.''} Similarly, P4 described the difficulties of context switching between documents from the Google search engine results page. \textit{``Obviously, it's is a time saver,''} she said, comparing \textsc{ClioQuery}~to the keyword document search~system. \textit{``You can tell ... just using the editorials at one newspaper.''} Two participants relied on \textsc{ClioQuery}'s \textbf{filtering system} to investigate their research topics. P1 investigated the \textit{NYT} editorial board's discussion of the query term ``Falluja'' using the filter-by-subquery feature (e.g., searching for ``Falluja'' and ``resistance'' or ``Falluja'' and ``terrorist''). 
\textit{``It's pretty interesting to me that I get three hits with the words Falluja and resistance and only one with the word terrorist,''} he said. \textit{``That would suggest a certain orientation from the editorial board that will be unexpected}.'' P2 found the filter-by-count feature very helpful. \textit{``Oh, this is good},'' he said, while testing out the slider. \textit{``It gets us through to the most important, the most critical pieces that we want to read}.'' \subsection{Some disavow obligation to perform comprehensive review, noting high costs}\label{s:expert_interview_comprehensivenesss} During needfinding, interviewees emphasized the importance of comprehensively reviewing all available evidence. However, to our surprise, during the expert interview study, P4 explicitly disavowed an obligation to search comprehensively. \textit{``I don't feel like I have an obligation to look at everything,''} she said. \textit{``I have an obligation to get an overview and I think you know, with a completely unscientific measure of, oh, I think I've got enough now.''} Similarly, P1 commented that, \textit{``I don't think anyone actually does it [search comprehensively].''} He went on \textit{``A lot of people pretend they do it ... [but] in terms of like visiting archives ... everyone's skimming ... they already know what they're looking for and they're just trying to find it.''} P2 pointed out that comprehensive manual review was desirable but ultimately had high costs. \textit{``I am not saying we should get rid of personal scrutiny, the way you do it yourself. [But] you want to save time. If you do it [i.e., read] one-by-one it wastes too much time}.'' We discuss ambiguity surrounding comprehensive review in Section \ref{s:discussion}. 
\subsection{Context is crucial in historical research, so some are wary of text summarization}\label{s:feedback_context} Like during needfinding (Section \ref{s:needs_context}), participants often emphasized the importance of context in historical research. For instance, P3 described extensive research to prepare for oral history interviews in order to \textit{``get that context to be able to ask them the questions that I asked them}.'' P2 also reported that context is \textit{``very important''} for historians, as it \textit{``helps you understand why things are what they are.''} Some historians' emphasis on context informed their feedback on the Document Feed. While P1, P2 and P5 found the Document Feed useful (Section \ref{s:features_feedback}), P3 and P4 expressed reservations because they felt they needed more context to reach conclusions. P3 took the more extreme position. \textit{``For me, I don't know if [the Document Feed] is necessary},'' she said. \textit{``As a history scholar, you can't take things out of context. You need to know the bigger context.''} On the other hand, P4 reported that she would need more context (i.e., longer extractions from news stories) before the feature would be useful. \textit{``The more context I can take in within as compact a time frame and compact a format, but sufficiently informative [the better]''} she said. \textit{``But I think these [shortened sentences in the Document Feed] might have to be longer for that to work}.'' \subsection[Tradeoffs between neutral review and limited time]{Some users recognize a tradeoff between neutral review and limited time}\label{s:relevance_model_feedback} During needfinding and prototyping, interviewees often stressed the importance of avoiding possible bias from software in historical research. But during our expert interview study, P4 reported that she relied on black-box relevance models to direct her attention while searching archives. 
\textit{``I do try to use the chronological sorting [when using ProQuest],''} said P4. \textit{``But it is ... too much to wade through. If your corpus is reasonably big then you have to have a relevance kind of algorithm in there. Otherwise, it's just going to be too frustrating.''} P4 also recognized that reliance on ranking introduces confounds. \textit{``I think it would be appropriate to make people look at all of the irrelevant stuff},'' she said. \textit{``So they realize the algorithm is pulling the relevant stuff for you ... but you can't make the search s*** for people just to sort of make that point.''} On the other hand, P5 liked how \textsc{ClioQuery}~used filtering to avoid potential bias. \textit{``I think it's better that its just showing everything,''} he explained. \textit{``I prefer having everything there to kind of whittle down ... as opposed to having certain things like cherry-picked ... I guess it's never super clear to me why certain things might be moved to the top of results ... it raises questions about how things are ordered and how they're brought to light.''} As I3~predicted (Section \ref{s:needs_trust}), P1 described relying on the search function of the \textit{New York Times} website \cite{nytwebsite}, without understanding how the site was ranking search results by relevance. \textit{``I wasn't super aware of how they were pulling up articles for me ... They rank it in terms of views right?''} he said. He added, \textit{``I just don't, you know, have the knowledge of how to navigate these ... search engines well enough.''} We discuss mixed feedback on algorithmic bias in Section \ref{s:discussion_relevance}. \subsection{Access, integrity and integration are important to current practices}\label{s:current_practices} Many participants commented on the importance of access, integrity and integration in describing their current practices with newspaper archives (see also Section \ref{s:limits_and_future}). 
{P1} reported gathering news articles on U.S.-Iraqi relations from around the web ``for years'' by using search engines like Google or the \textit{New York Times} website \cite{nytwebsite}, saving these articles to the Internet Archive \cite{InternetArchive}, and then organizing this collection using the software program Omeka \cite{omeka}. This participant pointed out that \textsc{ClioQuery}~\textit{``assumes you have found all the stuff you want to work with,''} which is not true for his current research. {P2} said that he had to rely on physical archives of print newspapers in Zimbabwe, which required burdensome international travel. {P3} said that she rarely used newspapers in her own research because many newspaper archives are often inaccessible behind paywalls, and {P4} emphasized the need for better optical character recognition technology to improve search over printed newspapers. {P5} reported that he \textit{``used Zotero a lot''} to store and organize archival sources; he liked that Zotero is open source and integrates with Microsoft Word. \subsection{Overview design patterns}\label{s:related_work_overview} \subsubsection{\textbf{Word clustering}}\label{s:word_clustering_family} Because people often cannot review every document in a large corpus, many prior text analytics tools such as Termite \cite{termite}, TIARA \cite{tiara}, Overview \cite{overview}, RoseRiver \cite{HierarchicalTopics}, TextFlow \cite{textflow}, Serendip \cite{serendip}, HierarchicalTopics \cite{HierarchicalTopics_Dou}, and ConVisIT \cite{tiisclusterthree} try to suggest overall themes in a body of text by identifying and displaying groups of thematically-related words in a user interface. We describe this approach as the word clustering design pattern.
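The essence of the word clustering pattern can be sketched without any topic-model machinery: group words whose corpus co-occurrence profiles are similar. The greedy procedure and similarity threshold below are illustrative simplifications, not how any of the cited systems actually work:

```python
from collections import Counter, defaultdict
from itertools import combinations
import math

def cooccurrence_vectors(docs):
    """Map each word to a Counter of the words it co-occurs with in documents."""
    vecs = defaultdict(Counter)
    for doc in docs:
        words = set(doc.lower().split())
        for a, b in combinations(sorted(words), 2):
            vecs[a][b] += 1
            vecs[b][a] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(u[w] * v[w] for w in set(u) & set(v))
    den = (math.sqrt(sum(c * c for c in u.values()))
           * math.sqrt(sum(c * c for c in v.values())))
    return num / den if den else 0.0

def cluster_words(docs, threshold=0.5):
    """Greedily group words whose co-occurrence profiles are similar.
    A word joins the first cluster whose head word is similar enough."""
    vecs = cooccurrence_vectors(docs)
    clusters = []
    for word in sorted(vecs):
        for cluster in clusters:
            if cosine(vecs[word], vecs[cluster[0]]) >= threshold:
                cluster.append(word)
                break
        else:
            clusters.append([word])
    return clusters
```

Real implementations substitute topic models or word embeddings for the raw co-occurrence counts, but the interface-level output, lists of thematically related words, has the same shape.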
Many systems which implement the word clustering pattern are based on prior work from NLP, information retrieval, and text mining, focused on identifying and representing patterns of co-occurring words using methods such as topic models \cite{blei2003latent} and word embeddings \cite{word2vec}.\footnote{The system Themail \cite{themail} clusters words by time, instead of by co-occurrence statistics. Because this system shows lists of related words (related by time period), we say the system implements word clustering. Similarly, VisGets shows clusters (of document tags) defined by a user's selection in the interface \cite{visgets}, which we consider to be a form of clustering.} Researchers in HCI and Visualization extend this work by considering how to present such patterns in a graphical interface; some systems show changes in cluster patterns across time \cite{tiara,HierarchicalTopics,textflow} (e.g., Figure \ref{f:wordclustering_family}), others do not show time-based topics \cite{termite,overview}. Because automatic clusters may not match human mental models of a corpus, one line of work investigates human-in-the-loop techniques, which allow people to modify word clusters through interactions with a GUI \cite{Interactive_Topic_Modeling, tiisclusterone, tiisclustertwo, tiisclusterthree, architext, topiclens, starspire}. Word clustering has a clear role in historical research. In query-oriented settings, clustering methods may help people formulate queries they had not considered \cite{Underwood}. Moreover, specialized and computationally-oriented digital humanists \cite{poetics_issue} and historians \cite{programminghistorianldatutorial} have used word clusters from topic models for corpus analysis. Nevertheless, successful application of topic modeling requires specialized knowledge and extensive interpretive effort \cite{Baumer,schmidt2012words}, making this method less accessible to a broader audience of historians. 
Additionally, many historians approach archives looking for mentions of what Allen and Sieczkiewicz describe as ``specific keywords'' \cite{allen} rather than looking to explore word cluster overviews from a topic model API. Because we design for historians investigating known query terms (Section \ref{s:intro}), we do not employ the word clustering pattern in the \textsc{ClioQuery}~interface. \subsubsection{\textbf{Textual and visual summaries}}\label{s:textual_summary_family} Rather than showing lists of related words to offer a corpus overview, a large body of work on text summarization from NLP \cite{das2007survey} instead attempts to create short paragraphs which convey the most ``important'' information in a corpus, by selecting a collection of sentences or sentence fragments from input documents to form an output summary. (This is sometimes described as extractive summarization \cite{das2007survey} because the output text is extracted from input text.) User-facing systems such as Newsblaster \cite{NewsblasterMain} and NSTM \cite{bambrick-etal-2020-nstm} apply this research by showing such textual summaries in a graphical interface. We say that such tools implement the textual summary design pattern (Figure \ref{f:textual_summary_family}). Other closely related work from text visualization considers how to present summary text in specialized visual layouts such as Document Cards \cite{DocumentCards}, Phrase Nets \cite{phrasenet}, or Word Trees \cite{wordtree}. We say that these interfaces offer structured visual summaries, as they place summary text within some structured visual format (e.g., a directed graph \cite{phrasenet}). Like word clusters, both traditional text summaries and structured visual summaries do not seem to help with mention gathering and analysis. A user can't turn to these forms of summaries to find and review query mentions because ``important'' sentences selected for inclusion in summary output may or may not contain a given query word. 
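The extractive step underlying such summarizers can be sketched with a classic frequency heuristic (Luhn-style scoring; the sentence splitter and scoring function below are simplifying assumptions). Note that nothing in the procedure guarantees that a given query word appears in the output:

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Return the top-scoring sentences, in original order.
    Sentences are scored by the average corpus-wide frequency of
    their words -- a crude stand-in for real extractive methods."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)
    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in top]
```

Because selection is driven purely by word statistics, a sentence containing a user's query term may score poorly and be dropped, which is exactly the mismatch with mention gathering~and analysis~described above.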
Moreover, traditional approaches typically do not explain \textit{how} ``important'' information is chosen, which may be important in the history domain (Section \ref{s:discussion_NLP}). However, two ideas from the text summarization literature may help historians perform mention gathering and analysis. First, work in query-focused summarization tries to identify the most salient information in a corpus, based on a user's query \cite{nenkova2012survey}. Historians might use such query-focused summaries to review keywords in text. Query-focused summaries which define \textit{all} query mentions as important enough to warrant inclusion in summary output may be especially helpful (see Section \ref{s:needs_comprehensive}). Second, work on sentence compression \cite{Knight2000StatisticsBasedS,filippova-strube-2008-dependency,filippova2015sentence} tries to shorten individual sentences by removing words, usually for the purpose of including more (shortened) sentences in a fixed-length summary. These methods, or closely-related sentence fusion techniques \cite{barzilay-mckeown-2005-sentence}, might be used to shorten passages containing query terms to help people quickly review many mentions of a query in context. We apply these two ideas from text summarization in \textsc{ClioQuery}~(see Section \ref{s:feed_and_viewer} and \ref{s:simplification}). \input{figures/families2} \subsubsection{\textbf{Time series plot}}\label{s:time_series_family} Instead of showing text to summarize corpus contents, time series plots present the frequency of words or documents across time to offer a visual (rather than textual) corpus overview. This pattern is often implemented in text analysis tools \cite{voyant,twitinfo,diamonds,featurelens} and keyword search systems \cite{expedition, TimeExplorer, newspapers.com}. 
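The computation behind such a plot is simple: count query mentions per time bucket. A minimal sketch, assuming documents arrive as (date, text) pairs (an assumed schema, with naive substring matching):

```python
from collections import Counter
from datetime import date

def mentions_per_year(docs, query):
    """Count occurrences of `query` per publication year.
    `docs` is an iterable of (date, text) pairs; matching is
    case-insensitive substring counting, for illustration only."""
    counts = Counter()
    q = query.lower()
    for pub_date, text in docs:
        counts[pub_date.year] += text.lower().count(q)
    return dict(sorted(counts.items()))
```

The resulting year-to-count mapping is what a line chart of query frequency over time would render.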
Some time series visualizations \cite{Michel176, histdiv, TimeExplorer} show the frequency of a single query term across time (e.g., Figure \ref{f:time_series_plus_family}), often using a line chart. Others show the frequency of multiple terms (e.g., highest-count words) using a stacked area chart \cite{ThemeRiver,sotu}, and may not require a user-supplied query. While time series plots alone cannot be used for mention gathering and analysis (because they do not show underlying text from a corpus), such visualizations can hint at important events or changes across documents (e.g., Michel et al.\ \cite{Michel176}). We thus implement this design pattern in \textsc{ClioQuery}~(Section \ref{s:system_ts}). \subsection{Search design patterns}\label{s:related_work_search} \subsubsection{Keyword document search~(baseline)}\label{s:baseline} Traditional keyword document search~tools return relevance-ranked lists of documents on a search engine results page (SERP) in response to a free-text query \cite{irbook}. Because historians often use such tools in practice (Section \ref{s:intro}), we consider these systems to be baselines for mention gathering and analysis.\footnote{ One strand of humanities scholarship critically investigates how widespread adoption of keyword document search~tools might be distorting traditional humanistic research \cite{Putnam,FrancesMaule,Underwood}. } Although keyword document search~tools are widely used (Table \ref{t:baselines}), these systems have clear downsides for finding and reviewing query mentions. First, keyword document search~systems impose unnecessary burdens from reading and context switching. This is described in detail in Section \ref{s:intro}. Additionally, keyword document search~systems rank documents according to a computational model of relevance.
This may be undesirable for historians because relevance-ranking introduces opaque algorithmic influence over qualitative conclusions (by guiding people towards particular documents). Section \ref{s:needs} describes the importance of neutral and comprehensive review in historical research. \input{tables/baselines} Ranking aside, keyword document search~tools may also shape user perceptions of the contents of the individual documents in an archive, through displaying single-document summaries (also called query-biased snippets \cite{querybiased}) on the search engine results page.\footnote{ Google sometimes shows complex results snippets on the SERP, using proprietary techniques. Brin and Page briefly mention the need for such ``Result Summarization'' in their original paper \cite[Section 6.1]{pagerank}. } For example, Figure \ref{f:keywordsearch_family} displays three sample single-document summaries, showing what a computer deems to be the most important information from three different search results. Such single-document summaries may be inappropriate for historical research, as some historians may be skeptical of opaque models which select ``important'' information for their review (search engines try to include keywords in snippets, but do not try to explain summaries \cite[Section 6.3.1]{croft2010search}). Prototypes shown in the Appendix describe our own experiences attempting to apply similar document summarization techniques for historians without success. \subsubsection{Multi-document snippet}\label{s:snippets_family} Where keyword document search~systems return links to single documents in response to a user query, other systems return collections of smaller units like paragraphs, sentences, or character spans, which are often drawn from multiple documents (see Figure \ref{f:kwic_family}). We observe two different implementations of this multi-document snippet design pattern in interactive text analytics. 
First, multi-document snippet features can be used in word clustering systems to help people investigate mentions of particular clustered words in context. For example, TIARA \cite{tiara} allows analysts to review individual words from a cluster in underlying text. However, because TIARA is designed for showing broad themes rather than for reviewing query mentions, it does not comprehensively show all mentions of a given word in its multi-document snippet. Instead, TIARA chooses some selection of mentions for display, optimizing for diversity \cite[Section 6]{tiara}. Such curation may introduce unwanted algorithmic bias (Section \ref{s:needs_trust}), because the system chooses some but not all query mentions for display. Additionally, other text analysis systems which are not necessarily focused on clustering sometimes include keyword-in-context (KWIC) views \cite{voyant,oconnor-2014-mitextexplorer}, showing each mention of a query word (or a selection of such mentions) on its own line of text amid immediately surrounding tokens or characters (e.g., Figure \ref{f:kwic_family}). While this form of multi-document snippet can be used for mention gathering and analysis, KWIC views have some limitations for historical research. First, in many cases, historians need to investigate particular query mentions within the context of full documents (Section \ref{s:needs_context}). While KWIC views may include links to underlying sources, jumping from KWIC views to documents requires context switching into new windows or tabs to gather and analyze evidence. We explain why this is undesirable in Section \ref{s:intro}. Second, KWIC views always show some number of pixels, characters or words immediately surrounding each query mention. This may result in awkward-sounding or choppy snippets that do not include the most salient information in source sentences; evidence suggests that people dislike awkward-sounding snippets \cite{ryenwhitesnippets}. 
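This fixed-window behavior is easy to see in a minimal KWIC implementation (the character window, naive matching, and formatting below are illustrative assumptions, not any cited system's method):

```python
import re

def kwic(docs, query, width=30):
    """Return one line per query mention: the matched text with up to
    `width` characters of surrounding context on each side."""
    lines = []
    for text in docs:
        for m in re.finditer(re.escape(query), text, re.IGNORECASE):
            left = text[max(0, m.start() - width):m.start()]
            right = text[m.end():m.end() + width]
            # Right-align the left context so all matches line up in a column.
            lines.append(f"{left:>{width}}[{m.group()}]{right}")
    return lines
```

Because each line is cut at a fixed character offset, snippets routinely begin and end mid-word or mid-clause, producing exactly the choppy fragments described above.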
Finally, KWIC views do not offer a way to keep track of which mentions have been reviewed during analysis, which may be important in historical research (Sections \ref{s:needs_comprehensive} and \ref{s:tracking}). Noting these shortcomings, it is possible to interpret certain \textsc{ClioQuery}~features (Sections \ref{s:feed_and_viewer} and \ref{s:documentviewer}) as a particular form of KWIC view, addressing some of these limitations. \subsection{Situating \textsc{ClioQuery}~within the broader literature on visual analytics for text}\label{s:related_comparison} Researchers have proposed many approaches to interactive text analytics \cite{tiara,overview,Gorg2013JigsawReflections,serendip,HierarchicalTopics,chuangheer,termite,tiisclusterone,tiisclustertwo,tiisclusterthree,starspire,pivotpaths,eventriver,jasim2021communitypulse}. Within this broad literature, \textsc{ClioQuery}~is unique because it is designed to help people find and analyze all occurrences of a query word across a corpus (see Section \ref{s:needs_formal_problem}). Using terminology from Chuang et al., \textsc{ClioQuery}~differs from prior work because its central ``unit of analysis'' \cite{chuangheer} is the query word in context; the system's ``visual encodings,'' ``modeling decisions'' \cite{chuangheer}, and user interactions were all designed to help people quickly and comprehensively review mentions of a query word across an archive. For instance, \textsc{ClioQuery}~includes a textual summary feature that presents a synopsis of all occurrences of a query word in a corpus (Section \ref{s:feed_and_viewer}). Focusing on query words in context is a departure from prior work in text analytics, which emphasizes other latent and observable textual units of analysis, such as topics \cite{tiara}, events \cite{eventriver}, document metadata \cite{pivotpaths}, interrelated entities \cite{Gorg2013JigsawReflections}, or thematic hierarchies \cite{overview}.
It is also a departure from keyword document search~systems \cite{nytwebsite,newspapers.com}, which focus on guiding people to ranked documents (Section \ref{s:baseline}). Our atypical decision to design and build a text analytics system for analyzing occurrences of query words across a corpus followed from our systematic investigation into the tasks and expectations of historians and archivists (Section \ref{s:needs}), who often review queries in text. Chuang et al.'s highly-cited guidance for text analytics \cite{chuangheer} stresses the importance of choosing units of analysis which are best ``aligned'' to the ``tasks, expectations and background knowledge'' of intended users. Because~\textsc{ClioQuery}~uses query words in context as its central unit of analysis, the system differs from prior tools in several key ways. We highlight the most important differences below. \subsubsection{\textsc{ClioQuery}~displays query words in underlying text} Unlike \textsc{ClioQuery}, some prior systems for interactive text analysis are designed for analyzing high-level textual units such as corpus themes or temporal trends. As a result, these systems sometimes do not allow people to review occurrences of query words in underlying documents. For instance, structured visual summaries like the Word Tree \cite{wordtree} and Phrase Net visualizations \cite{phrasenet}, or time series displays like Theme River \cite{ThemeRiver} do not show query words in underlying text. 
Similarly, some text analytics systems such as early versions of Overview \cite{overview} experiment with alternatives to query-based paradigms.\footnote{The Overview authors describe the importance of adding query features in discussing their work \cite{overview,stray}.} By contrast, \textsc{ClioQuery}~is designed to help people find and read query words in underlying documents; the tool's central units of analysis (i.e., query words in context) are spans of text from documents in the corpus (see Section \ref{s:needs_formal_problem}). This design choice was informed by prior work, which emphasizes the importance of displaying underlying text in interactive text analysis (see Section \ref{s:text_seriously}). \subsubsection{\textsc{ClioQuery}~offers complete access to all query mentions in context, without extraneous mediating abstractions} Like \textsc{ClioQuery}, some text analysis tools do include features to help people navigate to query words in context. However, because these systems are chiefly designed for analyzing other textual units (e.g., topics \cite{tiara}, events \cite{eventriver}, or document hierarchies \cite{overview}), they offer only indirect and incomplete access to query words in underlying text, via extraneous mediating abstractions. By contrast, \textsc{ClioQuery}~is designed to help people directly review all occurrences of a query in a corpus. For instance, EventRiver \cite{eventriver} helps people review temporal document clusters, which serve as the system's primary unit of analysis. In principle, a historian could use EventRiver to find and analyze query mentions in context by (1) finding document clusters containing a query word $Q$ using the tool's search-by-keyword feature, (2) clicking such clusters, and (3) using the tool's Shoebox and Storyboard features to review those occurrences of $Q$~which happen to fall within documents from selected clusters. However, such a workflow would have two downsides. 
First, the workflow would be \textit{indirect}; the historian would have to navigate through clusters to access query words in context. Such indirect navigation would force the historian to attend to what Chuang et al.\ describe as ``extraneous information that might confuse or hamper interpretation'' \cite{chuangheer}. Second, the workflow would be \textit{incomplete}; a historian would have no way to navigate to occurrences of query words which do not happen to fall within algorithmically-defined clusters. As we describe in Section \ref{s:needs}, this would likely pose a problem for historians, who need to directly and comprehensively observe all mentions of their query in context, with minimal confounding algorithmic influence. Our focus on EventRiver merely serves as one illustrative example of a broader phenomenon. TIARA's mediating topic abstraction \cite{tiara}, Overview's mediating hierarchy abstraction \cite{overview}, StarSpire's mediating cluster abstraction \cite{starspire}, and Jigsaw's mediating entity abstraction \cite{Gorg2013JigsawReflections} (some queries are not entities, e.g., ``race suicide'' \cite{racesuicide}) would also force historians and archivists to navigate to query mentions in context via similarly confounding and extraneous abstractions. \subsubsection{\textsc{ClioQuery}~employs query-focused summarization to ease the burden of reading and context switching} Like \textsc{ClioQuery}, some text analytics systems include a snippet feature which shows words extracted from documents in a corpus. Examples include TIARA \cite{tiara} Snippets, the Overview and Footprints Document List \cite{overview,Footprints}, and results snippets from keyword document search~systems \cite[Chp.\ 6.3]{croft2010search}. While such snippets are visually-similar to \textsc{ClioQuery}'s Document Feed (Section \ref{s:feed_and_viewer}), they differ in important ways which are crucial to the work of historians and archivists. 
Most importantly, many existing snippet components select and display only some occurrences of a query in a corpus. For instance, TIARA's Snippets feature displays a selection of occurrences of a topic word in underlying documents \cite{tiara}, and the Footprints \cite{Footprints} Document List displays the first sentence in a document (regardless of whether the sentence contains a query word). Similarly, keyword document search~systems create and display snippets based on heterogeneous criteria \cite[Chp.\ 6]{croft2010search}, rather than display all occurrences of a query word in ranked documents. Prior systems also do not attempt to help people understand how text is selected for a snippet. For instance, the Overview Document List \cite{overview} selects and displays keywords based on an opaque clustering algorithm. These properties make prior snippet features poorly suited to historians, who need to review all occurrences of a query word in a corpus with minimal algorithmic influence. Therefore, instead of relying on prior snippet features, a historian who wished to review all occurrences of a query word in a corpus would likely have to click through from snippets to underlying documents, which are often \cite{tiara, Gorg2013JigsawReflections, nytwebsite} shown in individual windows or tabs. In Section \ref{s:intro} and Figure \ref{f:field_study_loop}, we explain how this workflow imposes unnecessary reading and context switching costs. By contrast, \textsc{ClioQuery}'s snippet-like Document Feed employs novel query-focused summarization techniques in order to allow historians and archivists to quickly scroll through and examine every single occurrence of a query term in a corpus. We offer qualitative and quantitative evidence of the importance of this feature in Sections \ref{s:qualresults}, \ref{s:fieldstudy} and \ref{s:crowdstudy}. 
\section*{Codebook for expert interview study} \subsection*{Historical sensemaking} The quote explains, articulates, or offers backstory about a hypothesis regarding specific historical events or processes. The user finds evidence for or against their historical hypothesis, or discovers new evidence that casts a prior historical hypothesis in a different light. The user explores and analyzes historical information based on knowledge of the domain. \subsection*{\textsc{ClioQuery}~Features} The quote addresses some specific feature of the \textsc{ClioQuery}~system including the Document Feed, Document Viewer, Time Series Plot, rug points, filtering system, history tracking system, color coding, in-text highlighting and text simplification. This theme also encompasses suggested features for future versions of \textsc{ClioQuery}~(e.g. complex Boolean queries or wildcards). \subsection*{Comprehensiveness} This quote addresses the role of comprehensive search in the historical research process. Comprehensive search refers to gathering all available evidence and reviewing all evidence to reach conclusions. \subsection*{Context in historical research} This quote addresses the role of context in the historical research process. Historians and archivists sometimes say that they need context to evaluate evidence. For instance, some historians may say they need to understand a quote from a news story in context. Context includes things like when a story was published, and where a story appeared in the paper. \subsection*{Bias and transparency} This quote addresses the role of bias and transparency in historical research. Some users sometimes stress the importance of directly and neutrally observing evidence in archives, without computers exerting any sway over their research process. \subsection*{Current practices} This quote addresses the tools and methods that historians and archivists currently use to search archives. 
For instance, quotes describing reading historical books, visiting physical archives, using search engines, using microfilm, using microfiche, and using specific services like Google, the Internet Archive or ProQuest are all coded as current practices. \end{document}
\newcommand\oursection[1]{\section{#1}} \newcommand\oursubsection[1]{\subsection{#1}} \AtlasTitle{Search for new phenomena in final states with large jet multiplicities and missing transverse momentum with ATLAS using $\sqrt{s} = 13$~\TeV\ proton--proton collisions} \author{The ATLAS Collaboration} \AtlasAbstract{ Results are reported of a search for new phenomena, such as supersymmetric particle production, that could be observed in high-energy proton--proton collisions. Events with large numbers of jets, together with missing transverse momentum from unobserved particles, are selected. The data analysed were recorded by the ATLAS experiment during 2015 using 13\,\TeV~centre-of-mass proton--proton collisions at the Large Hadron Collider, and correspond to an integrated luminosity of 3.2 fb$^{-1}$. The search selected events with various jet multiplicities from $\ge7$ to $\ge 10$ jets, and with various $b$-jet multiplicity requirements to enhance sensitivity. No excess above Standard Model expectations is observed. The results are interpreted within two supersymmetry models, where gluino masses up to 1400\,\GeV~are excluded at 95\%\,confidence level, significantly extending previous limits. } \AtlasJournal{Phys.\ Lett.\ B.} \PreprintIdNumber{CERN-PH-EP-2016-026} \AtlasRefCode{SUSY-2015-07} \hypersetup{pdftitle={ATLAS draft},pdfauthor={The ATLAS Collaboration}} \begin{document} \oursection{Introduction}\label{sec:introduction} New strongly interacting particles, if present at the \TeV~energy scale, may be produced in high-energy proton--proton ($pp$) collisions and decay to final states with large jet multiplicities. If their decay produces stable particles which only interact weakly, it will also result in a momentum imbalance in the plane transverse to the beam (\etmissvec). 
Such particles are present in supersymmetry (SUSY)~\cite{Golfand:1971iw,Volkov:1973ix,Wess:1974tw,Wess:1974jb,Ferrara:1974pu,Salam:1974ig}, a theoretically favoured extension of the Standard Model (SM) that predicts partner fields for each of the SM particles. These fields combine into physical superpartners of the SM particles. The scalar partners of quarks and leptons are known as squarks ($\tilde{q}$) and sleptons ($\tilde{\ell}$). The fermionic partners of gauge and Higgs bosons are the gluinos ($\tilde{g}$), the charginos ($\tilde{\chi}^{\pm}_{i}$, with $i$ = 1,2) and the neutralinos ($\tilde{\chi}^{0}_{i}$ with $i$ = 1,2,3,4), with $\tilde{\chi}^{\pm}_{i}$ and $\tilde{\chi}^{0}_{i}$ being the mass eigenstates, ordered from the lightest to the heaviest, formed from the linear superpositions of the SUSY partners of the Higgs and electroweak gauge bosons. Under the hypothesis of $R$-parity conservation~\cite{Farrar:1978xj}, SUSY partners are produced in pairs and decay to the lightest supersymmetric particle (LSP), which is stable and in a large variety of models is assumed to be the lightest neutralino (\ensuremath{\tilde{\chi}_{1}^{0}}\xspace), which escapes detection. The undetected \ensuremath{\tilde{\chi}_{1}^{0}}\xspace would result in missing transverse momentum, while the rest of the cascade can yield final states with multiple jets and possibly leptons and/or photons. The strongly interacting gluinos and squarks can have large production cross-sections at the Large Hadron Collider (LHC), but no evidence of their existence has been observed to date. This \paper{} presents the results of a search for new phenomena, such as supersymmetry, in final states with large jet multiplicities (from $\ge$7 to $\ge$10 jets) in association with \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace{}. 
This signature is exhibited, for example, by squark and gluino production followed by cascade decay chains, and/or decays to heavy SM particles, such as top quarks or $W$, $Z$ or Higgs bosons, each of which can produce multiple jets in their decays. In contrast to many other searches for the production of strongly interacting SUSY particles, the requirement made here of large jet multiplicity means that the requirement on \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace{} can be modest. Previous searches~\cite{Aad:2011qa,Aad:2012hm,Aad:2013wta} in similar final states have been performed by the ATLAS Collaboration at the lower centre-of-mass energies of $\sqrt{s}=7\TeV$ and $8\TeV$, with integrated luminosities up to $20.3\,$fb$^{-1}$. The larger energy of the present dataset provides increased sensitivity, particularly to particles with higher masses. This paper closely follows the strategy of those previous studies. In particular, data are collected using an online selection relying only on high jet multiplicity and the signal regions (SR) are designed such that the dominant multijet{} background can be determined from the data using regions of lower \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace{} and/or lower jet multiplicity. The data were collected by the ATLAS detector~\cite{Aad:2008zzm} in $pp$ collisions at the LHC at a centre-of-mass energy of $13\TeV$, from \nth{16}~August to \nth{3}~November 2015. The detector covers the pseudorapidity\footnote{ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the beam pipe. 
The transverse momentum of a four-momentum is $\ptvec = (p_{x}, p_{y})$, its rapidity is $y=\frac{1}{2} \ln \frac{E+p_z}{E-p_z}$, and the pseudorapidity is defined in terms of the polar angle~$\theta$ as $\eta=-\ln\tan(\theta/2)$.} range of $|\eta| < 4.9$ and is hermetic in azimuth. It consists of an inner tracking detector surrounded by a superconducting solenoid, electromagnetic and hadronic calorimeters, and an external muon spectrometer incorporating large superconducting toroidal magnets. After applying beam-, data- and detector-quality criteria, the integrated luminosity was $3.2 \pm 0.2$\,fb$^{-1}$. The uncertainty was derived using beam-separation scans, following a methodology similar to that detailed in Ref.~\cite{Aad:2013ucp}. \oursection{Physics object definition}\label{sec:objectselection} Jets are reconstructed using the \akt{} clustering algorithm~\cite{Cacciari:2008gp,Cacciari:2005hq} with jet radius parameter $R=0.4$ and starting from clusters of calorimeter cells~\cite{Lampl:1099735}. The effects of coincident $pp$ interactions (`pileup') on jet energies are accounted for by an event-by-event $\ensuremath{p_{\text{T}}}\xspace$-density correction~\cite{ATLASpileup}. The energy resolution of the jets is improved by using global sequential calibrations~\cite{ATLAS-CONF-2015-002,ATL-PHYS-PUB-2015-015}. Events with jets originating from cosmic rays, beam background and detector noise are vetoed using the `loose' requirements of Ref.~\cite{ATLAS-CONF-2015-029}. Jets containing $b$-hadrons ($b$-jets\xspace) are identified using an algorithm exploiting the long lifetime, high decay multiplicity, hard fragmentation and large mass of $b$-hadrons~\cite{ATL-PHYS-PUB-2015-022}. 
The $b$-tagging\xspace{} algorithm tags $b$-jets\xspace with an efficiency of approximately 70\% in simulated \ttbar{} events, and mis-tags $c$-jets, $\tau$-jets and light-quark or gluon jets with probabilities of approximately 10\%, 4\% and 0.2\% respectively~\cite{ATL-PHYS-PUB-2015-039}. The primary vertex (PV) in each event is the vertex with the largest value of $\Sigma p_{\mathrm{T}}^{2}$ for all tracks associated with it. To reduce the effect of pileup, a jet having $20\GeV<\ensuremath{p_{\text{T}}}\xspace<50\GeV$ and $|\eta|<2.4$ is disregarded when the \ensuremath{p_{\text{T}}}\xspace-weighted sum of its associated tracks indicates that it originated from a pileup collision and not the PV, based on a jet vertex tagger as described in Ref.~\cite{ATLASpileup}. Electron candidates are identified according to the likelihood-based `loose' criterion described in Ref. \cite{ATL-PHYS-PUB-2015-041}, formed from e.g.~calorimeter shower shape and inner-detector track properties. Muon candidates are identified according to the `medium' criterion described in Ref.~\cite{ATL-PHYS-PUB-2015-037}, based on combined tracks from the inner detector and muon spectrometer. These candidates (which may cause an event to be rejected from the signal regions) are required to have $\ensuremath{p_{\text{T}}}\xspace>10\GeV$, $|\eta|<2.47$ for $e$ and $|\eta|<2.5$ for $\mu$. To avoid double-counting of reconstructed objects, electron candidates sharing an inner-detector track with a muon candidate are removed. Next, jet candidates separated from an electron candidate by \mbox{$\Delta R_y < 0.2$} are removed, where $\Delta R_y = \sqrt{(\Delta y)^2 + (\Delta \phi)^2} $. Jet candidates with fewer than three tracks and with $\Delta R_y < 0.4$ from a muon candidate are then removed. Following this, any lepton candidate separated from a surviving jet candidate by $\Delta R_y < 0.4$ is removed. 
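As a minimal illustration of the sequential overlap-removal procedure just described, the four steps can be sketched as below (a toy sketch with hypothetical dictionary inputs and function names, not the ATLAS reconstruction software):

```python
import math

def delta_r_y(a, b):
    """Delta R_y = sqrt((dy)^2 + (dphi)^2) in rapidity-azimuth space."""
    dy = a["y"] - b["y"]
    dphi = math.remainder(a["phi"] - b["phi"], 2 * math.pi)  # wrap to [-pi, pi]
    return math.hypot(dy, dphi)

def overlap_removal(electrons, muons, jets):
    """Sequential overlap removal, in the order given in the text:
    1) drop electrons sharing an inner-detector track with a muon,
    2) drop jets within Delta R_y < 0.2 of a surviving electron,
    3) drop jets with fewer than 3 tracks within Delta R_y < 0.4 of a muon,
    4) drop leptons within Delta R_y < 0.4 of a surviving jet."""
    electrons = [e for e in electrons
                 if not any(e["track"] == m["track"] for m in muons)]
    jets = [j for j in jets
            if not any(delta_r_y(j, e) < 0.2 for e in electrons)]
    jets = [j for j in jets
            if not (j["ntrk"] < 3 and any(delta_r_y(j, m) < 0.4 for m in muons))]
    leptons = [l for l in electrons + muons
               if not any(delta_r_y(l, j) < 0.4 for j in jets)]
    return leptons, jets
```

The order of the steps matters: a jet removed by the electron criterion can no longer cause a lepton to be discarded in the final step.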
The missing transverse momentum, \Ptmiss, is the negative two-vector sum of the calibrated \ptvec{} of reconstructed jets with $\ensuremath{p_{\text{T}}}\xspace > 20\GeV$ and $|\eta| < 4.5$, electrons, muons and photons~\cite{MET2015}. It includes an additional contribution from inner-detector tracks, matched to the PV, that are not associated with these reconstructed objects. Photons are not considered beyond their contribution to the \Ptmiss unless they are reconstructed as jets. To reduce the effect of pileup, jets do not contribute to the \Ptmiss{} calculation when they are disregarded based on the jet vertex tagger as described above. Additionally, when a jet having $50\GeV< \ensuremath{p_{\text{T}}}\xspace < 70\GeV$, $|\eta|<2.0$ and azimuth relative to the missing momentum $\Delta\phi(\vec{p}_{\mathrm T},\,\Ptmiss) > 2.2$ meets the same vertex-tagging criterion, the event is discarded. Events in which the jet closest in $\phi$ to the \Ptmiss is found in or near an inactive region in the hadronic calorimeter barrel (i.e. $-0.1 < \eta < 1.0$, $0.8 < \phi < 1.1$) are also discarded, in order to reduce the impact of this source of \Ptmiss mismeasurement. These data-quality requirements reduce the expected acceptance of typical SUSY models by approximately 5\%. When defining leptons for control regions (Section~\ref{sec:systematics}), the candidates defined above are required to be isolated, to have a longitudinal impact parameter $z_0$ (with respect to the PV) satisfying $|z_{0}\sin\theta| < 0.5$\,mm, and to have the significance of their transverse impact parameter $|d_{0}/\sigma(d_{0})|$ (with respect to the measured beam position) be less than five for electrons and less than three for muons. Additionally, electrons must satisfy the `tight' criterion of Ref. \cite{ATL-PHYS-PUB-2015-041}. 
\begin{table}[b] \centering \renewcommand\arraystretch{1.35} \addtolength{\tabcolsep}{-1pt} \subfigure[Signal regions using \ensuremath{{n_\mathrm{50}}}\xspace{}.]{ \begin{small} \hspace*{-2mm}\begin{tabular}{|l||c|c|c||c|c|c||c|c|c|} \hline &\boldSR{8j50} & \boldSR{8j50-1b} & \boldSR{8j50-2b} & \boldSR{9j50} & \boldSR{9j50-1b} & \boldSR{9j50-2b} & \boldSR{10j50} & \boldSR{10j50-1b} & \boldSR{10j50-2b} \\ \hline \ensuremath{{n_\mathrm{50}}}\xspace & \multicolumn{3}{c||}{$\geq 8$} & \multicolumn{3}{c||}{$\geq 9$} & \multicolumn{3}{c|}{$\geq 10$} \\ \hline \ensuremath{{n_{b\mathrm{-jet}}}}\xspace & --- & $\geq 1$ & $\geq 2$ & --- & $\geq 1$ & $\geq 2$ & --- & $\geq 1$ & $\geq 2$ \\ \hline \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace & \multicolumn{9}{c|}{$> 4\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$}\\ \hline \end{tabular}\end{small} \ } \addtolength{\tabcolsep}{+1pt} \subfigure[Signal regions using \ensuremath{{n_\mathrm{80}}}\xspace{}.]{ \begin{small} \begin{tabular}{|l||c|c|c||c|c|c|} \hline & \boldSR{7j80} & \boldSR{7j80-1b} & \boldSR{7j80-2b} & \boldSR{8j80} & \boldSR{8j80-1b} & \boldSR{8j80-2b}\\ \hline \ensuremath{{n_\mathrm{80}}}\xspace & \multicolumn{3}{c||}{$\geq 7$} & \multicolumn{3}{c|}{$\geq 8$} \\ \hline \ensuremath{{n_{b\mathrm{-jet}}}}\xspace & --- & $\geq 1$ & $\geq 2$ & --- & $\geq 1$ & $\geq 2$ \\ \hline \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace & \multicolumn{6}{c|}{$> 4\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$}\\ \hline \end{tabular}\end{small} \ } \caption{\label{tab:SRdefs}% Definition of the signal regions. The selection variables are described in Sections~\ref{sec:objectselection} and~\ref{sec:eventselection}. A long dash `---' indicates that no requirement is made. Events with leptons are vetoed. 
} \end{table} \oursection{Event selection}\label{sec:eventselection} The signal regions are defined using two jet multiplicity counts: either \ensuremath{{n_\mathrm{50}}}\xspace{}, the number of jets having $\ensuremath{p_{\text{T}}}\xspace>50\GeV$ and $|\eta|<2.0$, or \ensuremath{{n_\mathrm{80}}}\xspace{}, the number of such jets which additionally satisfy the higher requirement $\ensuremath{p_{\text{T}}}\xspace>80\GeV$. The online selection (trigger) for \ensuremath{{n_\mathrm{50}}}\xspace{}-based regions requires events to have at least six jets each with $\ensuremath{p_{\text{T}}}\xspace > 45\GeV$ and $|\eta| < 2.4$, while that for \ensuremath{{n_\mathrm{80}}}\xspace{}-based regions requires at least five jets each with $\ensuremath{p_{\text{T}}}\xspace > 70$\,\GeV. The trigger efficiency is greater than 99.5\% for events satisfying the signal selection described below. Jets with a looser definition -- those having $\ensuremath{p_{\text{T}}}\xspace>40\GeV$ and $|\eta|<2.8$ -- are used to construct the scalar sum $\ensuremath{H_{\textrm{T}}}\xspace = \sum \ensuremath{p_{\text{T}}}\xspace^{\mathrm{jet}}$, while those having $\ensuremath{p_{\text{T}}}\xspace>40\GeV$ and $|\eta|<2.5$ are candidates for $b$-tagging\xspace{}, contributing to the number \ensuremath{{n_{b\mathrm{-jet}}}}\xspace{} of $b$-tagged jets. The signal selection requires large jet multiplicity, which depends on the signal region (SR), as shown in Table~\ref{tab:SRdefs}. Fifteen different SRs are defined, providing wide-ranging sensitivity to models with different final states and mass spectra. There are three different triplets of regions defined in terms of the jet multiplicity \ensuremath{{n_\mathrm{50}}}\xspace{} and two different triplets of regions defined in terms of \ensuremath{{n_\mathrm{80}}}\xspace{}. Within each triplet, different requirements are made on \ensuremath{{n_{b\mathrm{-jet}}}}\xspace{}, from no requirement to the requirement of at least two $b$-jets\xspace. 
In all cases the final selection is on the ratio of \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace{} to $\sqrt{\ensuremath{H_{\text{T}}}\xspace}$, with the choice of a threshold at 4\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace being a good balance between background rejection and signal efficiency while maintaining the effectiveness of the background estimation. Events containing electron or muon candidates with $\ensuremath{p_{\text{T}}}\xspace>10\GeV$ are vetoed to reduce background from SM processes. The SRs have events in common, for example all events in \SR{9j50-1b} also appear in \SR{9j50}, which does not require the $b$-jet\xspace, and in \SR{8j50} and \SR{8j50-1b}, which have a looser requirement on \ensuremath{{n_\mathrm{50}}}\xspace{}. Events may also appear in both the \ensuremath{{n_\mathrm{50}}}\xspace{} and the \ensuremath{{n_\mathrm{80}}}\xspace{} categories. \oursection{Background and simulation}\label{sec:bg} Standard Model processes contribute to the event counts in the SRs. The dominant background contributions are multijet{} production, including those from purely strong interaction processes and fully hadronic decays of \ttbar{}; partially leptonic decays of \ttbar; and leptonically decaying $W$ or $Z$ bosons produced in association with jets. Top-quark, $W$- and $Z$-boson decays that are not fully hadronic are collectively referred to as `leptonic' backgrounds. They can contribute to the signal regions when no $e$ or $\mu$ leptons are produced, for example $Z\to\nu\nu$ or hadronic $W \to \tau\nu$ decays, or when they are produced but are out of acceptance, lie within jets, or are not reconstructed. The most significant \leptonic{} backgrounds are \ttbar and $W$ boson production in association with jets. The contribution of these two backgrounds to the signal regions is determined from a combined fit as described later in Section~\ref{sec:fit}. 
The yields for the other, generally subdominant, \leptonic{} backgrounds are taken from the simulations as described below. Monte Carlo simulations are used in the determination of the \leptonic{} backgrounds and to assess sensitivity to specific SUSY signal models. All simulated events are overlaid with multiple $pp$ collisions simulated with the soft QCD processes of \texttt{PYTHIA}\xspace 8.186 \cite{pythia8.1} using the A2 set of parameters (tune)~\cite{ATLAS:2012uec} and the MSTW2008LO parton distribution functions (PDF) \cite{Martin:2009iq}. The simulations are weighted such that the pileup conditions match those of the data. The response of the detector to particles is modelled with an ATLAS detector simulation \cite{:2010wqa} based fully on \textsc{Geant4} \cite{Agostinelli:2002hh}, or using fast simulation based on a parameterisation of the performance of the ATLAS electromagnetic and hadronic calorimeters \cite{atlfast} and on \textsc{Geant4} elsewhere. \Leptonic{} background samples use full simulation, while signal samples (described below) use the fast simulation option. Corrections are applied to the simulated samples to account for differences between data and simulation for the lepton identification and reconstruction efficiencies, and for the efficiency and misidentification rate of the $b$-tagging\xspace algorithm. \oursubsection{Leptonic background simulation}\label{sec:bg:leptonic} For the generation of \ttbar and single top quarks in the $Wt$ and $s$-channels \cite{ATLAS:simul:top} \texttt{Powheg-Box}\xspace~v2~\cite{powheg-box} is used with the CT10 PDF sets \cite{CT10pdf} in the matrix element calculations. Electroweak $t$-channel single-top-quark events are generated using \texttt{Powheg-Box}\xspace~v1. This generator uses the four-flavour scheme for the next-to-leading order (NLO) matrix element calculations together with the fixed four-flavour PDF set CT10f4 \cite{CT10pdf}. 
For this process, the top quarks are decayed using \texttt{MadSpin}\xspace~\cite{madspin} preserving all spin correlations, while for all processes the parton shower, fragmentation, and the underlying event are simulated using \texttt{PYTHIA}\xspace~v6.428~\cite{pythia6} with the CTEQ6L1 PDF sets \cite{Pumplin:2002vw} and the corresponding Perugia 2012 tune (P2012) \cite{perugia}. The top quark mass is set to $172.5\GeV$. The \texttt{EvtGen}\xspace~v1.2.0 program~\cite{evtgen} models the bottom and charm hadron decays, as it does for all non-\texttt{SHERPA}\xspace-simulated processes mentioned below. The \ttbar{} simulation is normalised to the cross-section calculated to next-to-next-to-leading order (NNLO) in perturbative QCD, including soft-gluon resummation to next-to-next-to-leading-log (NNLL) accuracy \cite{top++}. Events containing \ttbar and additional heavy particles -- comprising three-top, four-top, \ttW, \ttZ and $\ttbar + WW$ production \cite{ATLAS:simul:ttV} -- are simulated at leading order in the strong coupling constant $\alpha_\mathrm{s}$, using \texttt{MadGraph}\xspace~v2.2.2 \cite{Alwall:2014hca} with up to two additional partons in the matrix element, interfaced to the \texttt{PYTHIA}\xspace~8.186 \cite{pythia6,pythia8.1} parton shower model. The A14 tune of the \texttt{PYTHIA}\xspace parameters is used~\cite{a14}, together with the NNPDF2.3LO PDF set~\cite{NNPDF}. The predicted production cross-sections are calculated to NLO as described in Ref.~\cite{Alwall:2014hca} for all processes other than three-top, for which it is calculated to LO. Events containing $W$ bosons or $Z$ bosons with associated jets \cite{ATLAS:simul:V+jets} are likewise simulated using \texttt{MadGraph}\xspace, but with up to four additional final-state partons in the matrix element, and interfaced to \texttt{PYTHIA}\xspace, using the same tunes and particle decay programs. The $W$ + jets and $Z$ + jets events are normalised to NNLO cross-sections \cite{Gavin:2010az}. 
Diboson processes with at least one boson decaying leptonically \cite{ATLAS:simul:VV} are simulated using the \texttt{SHERPA}\xspace~v2.1.1 generator~\cite{sherpa2}. The matrix element calculations contain all diagrams with four electroweak vertices. They are calculated for up to one (for 4$\ell$, 2$\ell$+2$\nu$, semileptonic $ZZ$) or no additional partons (for 3$\ell$+1$\nu$, other semileptonic processes) at NLO and up to three additional partons at LO using the \texttt{Comix}\xspace \cite{comix} and \texttt{OpenLoops}\xspace \cite{openloops} matrix element generators and interfaced with the \texttt{SHERPA}\xspace parton shower \cite{sherpashower} using the ME+PS@NLO prescription \cite{mepsnlo}. The CT10 PDF set is used in conjunction with dedicated parton shower tuning developed by the \texttt{SHERPA}\xspace authors. Theoretical uncertainties are considered for all these simulated samples. Production of \ttbar is by far the most important process simulated in this analysis and to evaluate the uncertainty on this background several samples are compared. Samples are produced with the factorisation and renormalisation scales varied coherently, along with variations of the resummation damping parameter and with more/less radiation tunes of the parton shower \cite{ATL-PHYS-PUB-2015-011}. Additionally the nominal sample is compared to one with \texttt{Powheg}\xspace interfaced with \texttt{Herwig++}\xspace \cite{Bahr:2008pv} and \texttt{SHERPA}\xspace~v2.1.1 samples with up to one additional jet at next-to-leading order using \texttt{OpenLoops}\xspace and up to four additional jets at leading order, to account for uncertainties in the parton shower and the generator respectively. The comparison with the \texttt{SHERPA}\xspace sample dominates the uncertainty in the signal region prediction. \oursubsection{SUSY signal models}\label{sec:signals} Two classes of SUSY signal models are used when interpreting the results. 
The first is a simplified model, in which gluinos are pair-produced and then decay via the cascade \begin{align*} \ensuremath{\tilde{g}}\xspace & \to q + \bar{q}' + \ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace \hspace*{0.3cm} (q = u,d,s,c) \\ \ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace & \to W^{\pm} + \ensuremath{\tilde{\chi}_{2}^{0}}\xspace \\ \ensuremath{\tilde{\chi}_{2}^{0}}\xspace & \to Z + \ensuremath{\tilde{\chi}_{1}^{0}}\xspace. \end{align*} The parameters of the model are the masses of the gluino, $m_{\ensuremath{\tilde{g}}\xspace}$, and the lightest neutralino, $m_{\ensuremath{\tilde{\chi}_{1}^{0}}\xspace}$. The mass of the $\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace$ is constrained to be $\frac{1}{2} (m_{\ensuremath{\tilde{g}}\xspace} + m_{\ensuremath{\tilde{\chi}_{1}^{0}}\xspace})$ and the mass of the $\ensuremath{\tilde{\chi}_{2}^{0}}\xspace$ to be $\frac{1}{2} (m_{\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace} + m_{\ensuremath{\tilde{\chi}_{1}^{0}}\xspace})$. All other sparticles are kinematically inaccessible. This model is labelled in the following figures as `2-step'. A second set of SUSY models is drawn from a two-dimensional subspace (a `slice') of the 19-parameter phenomenological Minimal Supersymmetric Standard Model (pMSSM)~\cite{Djouadi:1998di,Berger:2008cq}. The selection is motivated in part by models not previously excluded in the analysis presented in Ref.~\cite{Aad:2015baa}. The models are selected to have a bino-dominated \ensuremath{\tilde{\chi}_{1}^{0}}\xspace{}, kinematically accessible gluinos, and a Higgsino-dominated multiplet at intermediate mass. The Higgsino multiplet contains two neutralinos (the \ensuremath{\tilde{\chi}_{2}^{0}}\xspace{} and \ensuremath{\tilde{\chi}_{3}^{0}}\xspace{}) and a chargino (the \ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace). 
The mass of these particles is varied by changing the soft SUSY-breaking parameters $M_3$ (for the gluino), $M_1$ (for the \ensuremath{\tilde{\chi}_{1}^{0}}\xspace, set to $60\GeV$), and $\mu$ (for the Higgsinos). In order that other SUSY particles remain kinematically inaccessible, the other parameters, defined in Ref.~\cite{Aad:2015baa}, are set to $M_A=M_2=3\TeV,$ $A_\tau=0$, $\tan\beta=10$, $A_t=A_b=m_{\tilde{L}_{(1,2,3)}}=m_{(\tilde{e},\tilde{\mu},\tilde{\tau})}=m_{\tilde{Q}_{(1,2,3)}}=m_{(\tilde{u},\tilde{c},\tilde{t})}=m_{(\tilde{d},\tilde{s},\tilde{b})}=5\TeV$. Mass spectra with consistent electroweak symmetry breaking are generated using \texttt{softsusy}~3.4.0~\cite{Allanach:2001kg}. The decay branching ratios are calculated with \texttt{SDECAY/HDECAY}~1.3b/3.4~\cite{Djouadi:2006bz}, and when $m_{\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace} \lesssim 500\GeV$ and $m_{\ensuremath{\tilde{g}}\xspace} \lesssim 1200\GeV$ the predominant decays are $\ensuremath{\tilde{g}}\xspace \to t + \bar{t} + \ensuremath{\tilde{\chi}_{2,3}^{0}}\xspace$ and $\ensuremath{\tilde{g}}\xspace \to t + \bar{b} + \ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace$, with \ensuremath{\tilde{\chi}_{2,3}^{0}}\xspace decaying to $Z / h + \ensuremath{\tilde{\chi}_{1}^{0}}\xspace$ and \ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace to $W^{\pm} + \ensuremath{\tilde{\chi}_{1}^{0}}\xspace$ (numerical values are provided in Ref. \cite{hepdata}). When these decays dominate they lead to final states with many jets, several of which are $b$-jets\xspace, but relatively little \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace. This renders the search particularly sensitive compared to most other SUSY searches, which tend to require high \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace. 
At higher $m_{\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace}$ and lower $m_{\ensuremath{\tilde{g}}\xspace}$, the decay $\ensuremath{\tilde{g}}\xspace \to q + \bar{q} + \ensuremath{\tilde{\chi}_{1}^{0}}\xspace$ becomes dominant and this search starts to lose sensitivity. This model is labelled in the following figures as `pMSSM'. The signal events are simulated using \texttt{MadGraph}\xspace~v2.2.2 at LO interfaced to \texttt{PYTHIA}\xspace~8.186, as for those of $W$+jets\xspace and $Z$+jets\xspace. The signal cross-sections are calculated at NLO in the strong coupling constant, adding the resummation of soft gluon emission at next-to-leading-logarithmic (NLL) accuracy \cite{Beenakker:1996ch,Kulesza:2008jb,Kulesza:2009kq,Beenakker:2009ha,Beenakker:2011fu}. The nominal cross-section is taken from an envelope of cross-section predictions using different PDF sets and factorisation and renormalisation scales, as described in Ref.~\cite{Kramer:2012bx}. For the model points shown later in Figures \ref{fig:multijet_templates}--\ref{fig:metsig_final}, with $m_{\ensuremath{\tilde{g}}\xspace}=1300\GeV$ slightly beyond the Run-1 exclusion limits, the SR selection efficiencies are around 8\% in the SRs most sensitive to those models. \oursubsection{Multijet background}\label{sec:bg:multijet} The signal regions were chosen such that the background from the multijet{} process can be determined from the data. The method relies on the observation~\cite{Aad:2011qa} that where \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace{} originates predominantly from calorimeter energy mismeasurement, as is the case for the multijet{} contributions, the distribution of the ratio \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} is almost invariant under changes in jet multiplicity. 
This invariance, which is illustrated in Figure~\ref{fig:multijet_templates}, occurs because the calorimeter resolution that produces the momentum imbalance in these events is dominated by stochastic processes which have variance proportional to \ensuremath{H_{\text{T}}}\xspace{}, and is largely independent of the jet multiplicity. \newcommand{\binwidths}{Variable bin sizes are used with bin widths (in units of \ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace) of 0.25 (up to $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace = 2$), 0.5 (from 2 to 6), 1 (from 6 to 8), 2 (from 8 to 12) and 4 thereafter, with the last bin additionally containing all events with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace > 20\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$.} \begin{figure}\begin{center} \hfill \subfigure[$\ensuremath{{n_\mathrm{50}}}\xspace=7$, using a template with $\ensuremath{{n_\mathrm{50}}}\xspace=6$.]{ \includegraphics[width=0.49\textwidth]{figures/fig_01a}} \hfill \subfigure[$\ensuremath{{n_\mathrm{80}}}\xspace=6$, using a template with $\ensuremath{{n_\mathrm{80}}}\xspace=5$.]{ \includegraphics[width=0.49\textwidth]{figures/fig_01b}} \hfill\\ \caption{\label{fig:multijet_templates} Example distributions of the ratio \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} in the validation region with jet multiplicities (a) $\ensuremath{{n_\mathrm{50}}}\xspace=7$, and (b) $\ensuremath{{n_\mathrm{80}}}\xspace=6$. The templates, which are in each case built from a jet multiplicity one smaller than that of the data, are normalised to the data in the region with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace<1.5\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$. 
The templates are weighted by \ensuremath{H_{\text{T}}}\xspace{} as described in the text. No requirement is made on $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace$. Variable bin sizes are used with bin widths (in units of \ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace) of 0.25 (up to $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace = 2$), 0.5 (from 2 to 6), 1 (from 6 to 8), 2 (from 8 to 12) and 4 thereafter, with the last bin additionally containing all events with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace > 20\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$.{} The total background can lie below the \leptonic{} background contribution in individual bins, since the template can give a negative contribution. The dashed lines labelled `pMSSM' and `2-step' show the (small) signal contamination from example SUSY models described in Section~\ref{sec:signals} -- a pMSSM model point with ($m_{\ensuremath{\tilde{g}}\xspace}$,~$m_{\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace}$) = (1300,~200)\,\GeV~and a cascade decay model with ($m_{\ensuremath{\tilde{g}}\xspace}$,~$m_{\ensuremath{\tilde{\chi}_{1}^{0}}\xspace}$) = (1300,~200)\,\GeV. The sub-plots show the ratio of the data to the SM prediction, with the blue hatched band showing the statistical uncertainty arising from a finite number of MC events and limited data in the templates and $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace<1.5$ normalisation regions. } \end{center}\end{figure} The shape of the \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} distribution is measured in control regions (CR) with lower jet multiplicities than the signal regions, and correspondingly much higher multijet contributions. For the \ensuremath{{n_\mathrm{50}}}\xspace{} signal regions, the CR contains events with exactly six jets having $\ensuremath{p_{\text{T}}}\xspace>50\GeV$. For the \ensuremath{{n_\mathrm{80}}}\xspace{} signal regions, the CR requires exactly five jets with $\ensuremath{p_{\text{T}}}\xspace>80\GeV$. 
For each SR jet selection, an appropriate \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} distribution template is normalised to the data in a further CR having the same jet multiplicity as the SR but with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace<1.5\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$. That normalised template then provides the background prediction for the SR multiplicity in the region with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace>4\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$. Since semileptonic $b$-hadron decays can contribute to \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace{}, these \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} template distributions are built separately for each $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace$ requirement. For example, the multijet{} contribution to the \SR{9j50-1b} signal region is determined using a template built from events with exactly six jets with $\ensuremath{p_{\text{T}}}\xspace>50\GeV$, and $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace \ge 1$. That template is normalised to \SR{9j50-1b} in the region with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace<1.5\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$. When constructing and normalising the \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} templates, the same lepton veto is used as for the signal regions. However, some \leptonic{} background contributions persist, and so the expected \leptonic{} backgrounds to those templates (normalised according to their theoretical cross-sections, as described in Section~\ref{sec:bg:leptonic}) are subtracted from the data distributions. The uncertainties associated with the \leptonic{} backgrounds are included in the systematic uncertainty in the prediction. 
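As an illustration of the normalisation-and-extrapolation logic described above, the following is a minimal numerical sketch (not ATLAS analysis code); the function name, the coarse binning and all histogram contents are invented for the example:

```python
import numpy as np

# Illustrative met/sqrt(HT) bin edges in GeV^(1/2): the normalisation region
# lies below 1.5 and the signal region above 4.
edges = np.array([0.0, 0.75, 1.5, 2.5, 4.0, 8.0, 20.0])

def multijet_prediction(template, data, leptonic, edges):
    """Normalise a lower-multiplicity template to (data - leptonic background)
    below 1.5 GeV^(1/2), then integrate the scaled template above 4 GeV^(1/2)."""
    norm_bins = edges[1:] <= 1.5   # bins fully inside the normalisation region
    sr_bins = edges[:-1] >= 4.0    # bins fully inside the signal region
    target = data - leptonic       # leptonic contribution is subtracted first
    scale = target[norm_bins].sum() / template[norm_bins].sum()
    return scale * template[sr_bins].sum()

template = np.array([4000., 2500., 900., 200., 30., 2.])  # e.g. 6-jet CR shape
data     = np.array([ 820.,  515., 190.,  45.,  9., 3.])  # e.g. 9-jet selection
leptonic = np.array([  20.,   15.,  10.,   5.,  3., 1.])  # simulated leptonic bkg
pred = multijet_prediction(template, data, leptonic, edges)  # -> 6.4 events
```

In the analysis this is done separately for each $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace$ requirement and with the \ensuremath{H_{\text{T}}}\xspace-weighted templates described below.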
Non-stochastic contributions to calorimeter resolution, which lead to a residual dependence of the \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} distribution on \ensuremath{H_{\text{T}}}\xspace{} (at the $\mathcal{O}(10\%)$ level), are reduced by constructing the templates in four bins of \ensuremath{H_{\text{T}}}\xspace{} in the kinematic region of interest. Those proto-templates are combined with weights which reflect the \ensuremath{H_{\text{T}}}\xspace{} distribution of the CR with the same jet multiplicity as the target SR but with $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace<1.5\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$. The effect of changing the \ensuremath{H_{\text{T}}}\xspace binning is included in the systematic uncertainty. The validity of assuming \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} invariance is tested with data, using a series of validation regions (VR) with smaller jet multiplicities or smaller \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} (between $1.5\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$ and $3.5 \ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$) than the SRs, or both. These VRs are found to be described by the templates, constructed as described above, mostly to within 10\%--20\%. However, for the tightest regions (with very few events) the discrepancy reaches 60\%. The tests are performed separately for each of the three $b$-jet requirements, and the largest difference for each set, including VRs with jet multiplicity up to and including that of the SR in question, is included as an overall `closure' systematic uncertainty associated with the method. \oursection{Statistical treatment and systematic uncertainties}\label{sec:systematics} Systematic uncertainties specific to the multijet{} and \leptonic{} background contributions are described in Sections~\ref{sec:bg:multijet} and \ref{sec:bg:leptonic} respectively. 
Further uncertainties that apply to signal processes and all simulated backgrounds include those on the jet energy scale, jet resolution, integrated luminosity, the $b$-tagging\xspace efficiency (for correct and incorrect identifications of both the $b$- and non-$b$-jets), and the lepton identification efficiency and energy scale. They are in general small compared to the aforementioned ones, being at most one third the size of the largest of those. The effect of the systematic uncertainties on the SM background calculations is reduced by constraining the normalisations of the \ttbar{} and $W$+jets backgrounds using dedicated control regions kinematically close to, but distinct from, the signal regions, as shown in Table~\ref{tab:CRdefs}. Each \leptonic{} control region contains events with one electron or muon that meets the stricter requirements described in Section~\ref{sec:objectselection} and has transverse momentum $\ensuremath{p_{\text{T}}}\xspace^\ell>20\GeV$. There must be no additional lepton candidates with $\ensuremath{p_{\text{T}}}\xspace^\ell>10\GeV$. Each such region uses the same multijet trigger as its corresponding SR. To reduce generic background from new particles which may decay to a final state with leptons and \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace, a modest upper bound of 120\,\GeV\ is placed on the transverse mass $\ensuremath{m_\mathrm{T}}\xspace = \left(2\, \ensuremath{E_{\text{T}}^{\text{miss}}}\xspace\ensuremath{p_{\text{T}}}\xspace^\ell - 2\,\Ptmiss\cdot\vec{p}_\mathrm{T}^\ell\right)^\frac{1}{2}$. Since it is predominantly through hadronic $\tau$ decays that $W$ bosons and \ttbar{} pairs contribute to the signal regions, the corresponding control regions are created by recasting the muon or electron as a jet. 
If that lepton has sufficient \ensuremath{p_{\text{T}}}\xspace{} (without any additional calibration) it may contribute to the jet multiplicity count (denoted \ensuremath{{n_\mathrm{50}^\mathrm{CR}}}\xspace or \ensuremath{{n_\mathrm{80}^\mathrm{CR}}}\xspace), as well as to \ensuremath{H_{\text{T}}}\xspace{} and hence to \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{}. In order to yield sufficient numbers of events in these CRs, the requirement on the jet multiplicity in each CR is one fewer than that in the corresponding SR, and a somewhat less stringent requirement is made on \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{} compared to the SRs. For each SR (regardless of its own requirement on \ensuremath{{n_{b\mathrm{-jet}}}}\xspace) there are two CRs, which require either exactly zero or at least one $b$-jet\xspace. These help constrain the combination of \ttbar{} and $W$+jets backgrounds, with the \ttbar{} background being enhanced in the CR that requires a $b$-jet\xspace. Figure~\ref{fig:CR:njet} shows the resulting \ensuremath{{n_\mathrm{50}^\mathrm{CR}}}\xspace{} jet multiplicity distributions in these control regions. 
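Expanding the vector product, the transverse mass above reduces to $m_\mathrm{T}=\sqrt{2\,E_\mathrm{T}^\mathrm{miss}\,p_\mathrm{T}^{\ell}\,(1-\cos\Delta\phi)}$, where $\Delta\phi$ is the azimuthal angle between the lepton and the missing transverse momentum. A minimal sketch (function name and kinematics are illustrative):

```python
import math

def transverse_mass(met, met_phi, pt_lep, phi_lep):
    """m_T = sqrt(2*met*pt - 2*ptmiss.pt_vec) = sqrt(2*met*pt*(1 - cos(dphi)))."""
    dphi = met_phi - phi_lep
    return math.sqrt(2.0 * met * pt_lep * (1.0 - math.cos(dphi)))

# A lepton back to back with the missing momentum maximises m_T:
mt = transverse_mass(100.0, math.pi, 50.0, 0.0)  # = 2*sqrt(100*50) ~ 141 GeV
```

The $\ensuremath{m_\mathrm{T}}\xspace<120\,\GeV$ requirement thus retains $W$-like leptonic decays while rejecting the high-$m_\mathrm{T}$ tail.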
\begin{table} \centering\renewcommand\arraystretch{1.3} \begin{tabular}{|l||c|c|c|c|} \hline \textbf{SR name} & \multicolumn{2}{c|}{\hfill \boldSR{$n$j50} or \boldSR{$n$j50-1b} or \boldSR{$n$j50-2b} \hfill} & \multicolumn{2}{c|}{\hfill \boldSR{$n$j80} or \boldSR{$n$j80-1b} or \boldSR{$n$j80-2b} \hfill} \\ \hline \textbf{CR name} & \boldSR{CR$(n-1)$j50-0b} & \boldSR{CR$(n-1)$j50-1b} & \boldSR{CR$(n-1)$j80-0b} & \boldSR{CR$(n-1)$j80-1b} \\ \hline\hline $\ensuremath{p_{\text{T}}}\xspace^{\ell}$ ($\ell\in\{e,\mu\}$) & \multicolumn{4}{c|}{$>20\GeV$}\\ \hline \ensuremath{m_\mathrm{T}}\xspace & \multicolumn{4}{c|}{$<120\GeV$}\\ \hline \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace & \multicolumn{4}{c|}{$>3\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$} \\ \hline \ensuremath{{n_\mathrm{50}^\mathrm{CR}}}\xspace & \multicolumn{2}{c|}{$\geq \ensuremath{{n_\mathrm{50}}}\xspace-1$} & \multicolumn{2}{c|}{---} \\ \hline \ensuremath{{n_\mathrm{80}^\mathrm{CR}}}\xspace & \multicolumn{2}{c|}{---} & \multicolumn{2}{c|}{$\geq \ensuremath{{n_\mathrm{80}}}\xspace-1$} \\ \hline \ensuremath{{n_{b\mathrm{-jet}}}}\xspace & 0 & $\geq 1$ & 0 & $\geq 1$\\ \hline \end{tabular} \caption{\label{tab:CRdefs} Leptonic control region definitions for each of the signal regions. In the names, the symbols $n$ and $n-1$ refer to the corresponding jet multiplicity requirements. For example, the three signal regions \SR{9j50}, \SR{9j50-1b} and \SR{9j50-2b} are each independently controlled by both the \SR{CR8j50-0b} and \SR{CR8j50-1b} control regions.
} \end{table} \begin{figure} \centering \subfigure[$\ensuremath{{n_{b\mathrm{-jet}}}}\xspace=0$] {\includegraphics[width=0.49\textwidth]{figures/fig_02a}} \subfigure[$\ensuremath{{n_{b\mathrm{-jet}}}}\xspace \geq 1$]{\includegraphics[width=0.49\textwidth]{figures/fig_02b}} \caption{ Control regions -- requiring one lepton -- showing the \ensuremath{{n_\mathrm{50}}}\xspace{} jet multiplicity distributions after all selections aside from \ensuremath{{n_\mathrm{50}}}\xspace{}. That lepton is permitted to contribute to the jet multiplicity count and to \ensuremath{H_{\text{T}}}\xspace{}. The sub-plots show the ratio of the data to the Standard Model prediction. The blue hatched bands on those sub-plots show MC statistical uncertainties. The dashed lines labelled `pMSSM' and `2-step' refer to benchmark signal points -- a pMSSM slice model with ($m_{\ensuremath{\tilde{g}}\xspace}$, $m_{\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace}$) = (1300, 200)\,\GeV~and a cascade decay model with ($m_{\ensuremath{\tilde{g}}\xspace}$, $m_{\ensuremath{\tilde{\chi}_{1}^{0}}\xspace}$) = (1300, 200)\,\GeV. All backgrounds are normalised according to their theoretical (pre-fit) cross-sections. } \label{fig:CR:njet} \end{figure} \label{sec:fit} For each signal region, a simultaneous fit is performed to the number of events found in the corresponding two CRs, using the HistFitter package~\cite{Baak:2014wma}. For the purpose of exclusion, the simultaneous fit also includes data in the SR. In the fit the normalisations of the \ttbar{} and $W$+jets background contributions are allowed to float, while the other \leptonic{} backgrounds, which are generally subdominant, are determined directly from their yields using the corresponding theoretical cross-sections. The event yields in each CR and SR are assumed to be Poisson distributed. The systematic uncertainties are treated as Gaussian-distributed nuisance parameters, and are assumed to be correlated within each fit. 
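A deliberately oversimplified picture of how the two control regions determine the two floating normalisations may help; the analysis itself performs a full profile-likelihood fit with HistFitter, and all yields below are invented:

```python
import numpy as np

# Pre-fit expected yields in the (0b, >=1b) control regions; toy numbers.
ttbar    = np.array([10.0, 40.0])   # scaled by the floating mu_tt
wjets    = np.array([30.0,  5.0])   # scaled by the floating mu_w
other    = np.array([ 2.0,  3.0])   # subdominant leptonic backgrounds, fixed
observed = np.array([45.0, 52.0])

# With two free normalisations, two CR counts and no nuisance parameters,
# the maximum of the Poisson likelihood solves the linear system exactly:
A = np.column_stack([ttbar, wjets])
mu_tt, mu_w = np.linalg.solve(A, observed - other)
```

The $b$-jet\xspace requirement is what makes this system well conditioned: \ttbar{} dominates one CR and $W$+jets the other.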
The multijet{} background yield in the SR is determined separately from the data using the methods described in Section~\ref{sec:bg:multijet}. The normalisations for the \ttbar{} and $W$+jets backgrounds are generally found to be consistent with their corresponding theoretical predictions when uncertainties are considered. Systematic uncertainties are larger than statistical uncertainties for the regions with looser selection criteria, with the situation reversed for those with tighter selection criteria. The systematic uncertainties with the largest impact include theoretical uncertainties on the \ttbar{} background, the impact of limited numbers of events in the control regions, the closure of the multijet background estimation method and the jet energy scale. The overall post-fit background uncertainties range from 14\% to 42\%, with the theoretical uncertainties on the \ttbar{} backgrounds typically being the most significant contribution. \oursection{Results}\label{sec:results} \begin{figure}\begin{center} \subfigure[$\ensuremath{{n_\mathrm{50}}}\xspace\ge10$.]{ \includegraphics[width=0.49\textwidth]{figures/fig_03a}} \subfigure[$\ensuremath{{n_\mathrm{50}}}\xspace\ge10$ and $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace\geq2$.]{ \includegraphics[width=0.49\textwidth]{figures/fig_03b}} \subfigure[$\ensuremath{{n_\mathrm{80}}}\xspace\ge8$.]{ \includegraphics[width=0.49\textwidth]{figures/fig_03c}} \subfigure[$\ensuremath{{n_\mathrm{80}}}\xspace\ge8$ and $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace\geq2$.]{ \includegraphics[width=0.49\textwidth]{figures/fig_03d}} \caption{\label{fig:metsig_final} Example distributions of the selection variable \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{}, for the largest multiplicities required of the number of jets with \ensuremath{p_{\text{T}}}\xspace{} larger than 50\,\GeV{} (top) or 80\,\GeV{} (bottom).
The plots on the left have no selection on the number of $b$-tagged jets, while those on the right are for events with $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace\geq2$. $W$+jets\xspace and \ttbar are normalised to their post-fit values, while the other \leptonic{} backgrounds are normalised to their theoretical cross-sections. The multijet{} templates are normalised to data at lower jet multiplicities in the region $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{}<1.5\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$, in the manner described in Section~\ref{sec:bg:multijet}. The SRs lie where $\ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace{}>4\ensuremath{\,{\textrm{GeV}}^{1/2}}\xspace$. The dashed lines labelled `pMSSM' and `2-step' refer to benchmark signal points -- a pMSSM slice model with ($m_{\ensuremath{\tilde{g}}\xspace}$, $m_{\ensuremath{\tilde{\chi}_{1}^{\pm}}\xspace}$) = (1300, 200)\,\GeV~and a cascade decay model with ($m_{\ensuremath{\tilde{g}}\xspace}$, $m_{\ensuremath{\tilde{\chi}_{1}^{0}}\xspace}$) = (1300, 200)\,\GeV. Other details are as described in Figure~\ref{fig:multijet_templates}. 
} \end{center}\end{figure} \begin{table} \centering \begin{tabular}{@{}>{\small\bfseries}lr@{${}\pm{}$}lr@{${}\pm{}$}lr@{${}\pm{}$}lc@{}} \toprule \multirow{2}{*}{Signal region} & \multicolumn{6}{c}{Fitted background} & \multirow{2}{*}{Obs events} \\ \cline{2-7} & \multicolumn{2}{c}{Multijet} & \multicolumn{2}{c}{Leptonic} & \multicolumn{2}{c}{Total} &\\ \midrule \SR{8j50} & $109.3$ & $6.9$ & $80$ & $25$ & $189$ & $26$ & $157$ \\ \SR{8j50-1b} & $76.7$ & $2.7$ & $62$ & $21$ & $138$ & $21$ & $97$ \\ \SR{8j50-2b} & $33.8$ & $2.1$ & $33$ & $13$ & $67$ & $13$ & $39$ \\ \SR{9j50} & $16.8$ & $1.3$ & $12.8$ & $5.4$ & $29.6$ & $5.6$ & $29$ \\ \SR{9j50-1b} & $13.5$ & $2.0$ & $10.2$ & $4.9$ & $23.8$ & $5.3$ & $21$ \\ \SR{9j50-2b} & $6.4$ & $1.6$ & $5.8$ & $3.3$ & $12.1$ & $3.6$ & $9$ \\ \SR{10j50} & $2.61$ & $0.61$ & $1.99$ & $0.62$ & $4.60$ & $0.87$ & $6$ \\ \SR{10j50-1b} & $2.42$ & $0.62$ & $1.44$ & $0.49$ & $3.86$ & $0.79$ & $3$ \\ \SR{10j50-2b} & $1.40$ & $0.87$ & $0.83$ & $0.37$ & $2.23$ & $0.94$ & $1$ \\ \addlinespace \SR{7j80} & $40.0$ & $5.3$ & $30$ & $13$ & $70$ & $14$ & $70$ \\ \SR{7j80-1b} & $29.1$ & $3.4$ & $20.8$ & $10$ & $50$ & $11$ & $42$ \\ \SR{7j80-2b} & $11.5$ & $1.6$ & $11.0$ & $5.0$ & $22.5$ & $5.2$ & $19$ \\ \SR{8j80} & $4.5$ & $1.9$ & $4.9$ & $2.2$ & $9.3$ & $2.9$ & $8$ \\ \SR{8j80-1b} & $3.9$ & $1.5$ & $3.8$ & $2.1$ & $7.6$ & $2.6$ & $4$ \\ \SR{8j80-2b} & $1.72$ & $0.93$ & $2.3$ & $1.1$ & $4.1$ & $1.5$ & $2$ \\ \bottomrule \end{tabular} \caption{For each signal region, the expected SM background (and separately the multijet and leptonic contributions) and the observed number of data events. The SM background normalisations are obtained from fits to the data in control regions, as described in Sections~\ref{sec:bg} and \ref{sec:systematics}. The signal regions are as defined in Table~\ref{tab:SRdefs}. 
} \label{tab:fitResults} \end{table} \begin{figure}\begin{center} \centering \subfigure[\ensuremath{{n_\mathrm{50}}}\xspace]{\includegraphics[width=0.75\textwidth]{figures/fig_04a}} \\ \subfigure[\ensuremath{{n_\mathrm{80}}}\xspace]{\includegraphics[width=0.75\textwidth]{figures/fig_04b}} \caption{\label{fig:pies:SR} Post-fit signal region compositions. The area of each pie chart is scaled to log$_{10}$ of the total expected yield (as printed above each one). } \end{center}\end{figure} Figure~\ref{fig:metsig_final} shows the post-fit \ensuremath{\met/\sqrt{H_{\mathrm{T}}}}\xspace\xspace distributions in the most sensitive signal regions (see below), while Figure~\ref{fig:pies:SR} shows the background composition in all fifteen SRs. The background is split between multijet and \leptonic{} processes, with the latter being 60--90\% \ttbar{}. The yields in each of the 15 signal regions are reported in Table~\ref{tab:fitResults}. No significant excess is observed above the SM expectations in any SR, and most have confidence levels for the background-only hypothesis larger than 10\%, as shown in Table~\ref{tab:upperLimits}. The table also shows the model-independent limits -- 95\% confidence level (CL) limits on the maximum contribution of new physics processes to the event yields in the various SRs, assuming zero signal contamination in control regions. 
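The model-independent limits can be illustrated with a bare counting experiment. The sketch below computes a one-sided Poisson (CL$_{s+b}$) upper limit with no systematic uncertainties, which is simpler than the CL$_s$ procedure with profiled uncertainties actually used; the input numbers are illustrative:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n_obs, bkg, cl=0.95, step=0.01):
    """Smallest signal yield s with P(N <= n_obs | bkg + s) <= 1 - cl."""
    s = 0.0
    while poisson_cdf(n_obs, bkg + s) > 1.0 - cl:
        s += step
    return s

# E.g. a region observing 6 events with 4.6 expected background:
s95 = upper_limit(6, 4.6)   # roughly 7 signal events excluded at 95% CL
```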
\begin{table} \begin{center} \setlength{\tabcolsep}{0.0pc} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}lccccc} \noalign{\smallskip}\hline\noalign{\smallskip} {\bf Signal region} & $\langle\epsilon\sigma\rangle_{\rm obs}^{95}$ [fb] & $S_{\rm obs}^{95}$ & $S_{\rm exp}^{95}$ & $CL_{B}$ & $p(s=0)$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} \SR{8j50} & $11$ & $36$ & ${49}^{+19}_{-13}$ & $0.14$ & $0.50$ \\ \SR{8j50-1b} & $6.8$ & $22$ & ${37}^{+13}_{-10}$ & $0.04$ & $0.50$ \\ \SR{8j50-2b} & $3.8$ & $12$ & ${22}^{+8}_{-6}$ & $0.03$ & $0.50$ \\ \SR{9j50} & $5.8$ & $19$ & ${19}^{+4}_{-5}$ & $0.49$ & $0.50$ \\ \SR{9j50-1b} & $5$ & $16$ & ${17}^{+2}_{-6}$ & $0.38$ & $0.50$ \\ \SR{9j50-2b} & $2.6$ & $8$ & ${10}^{+3}_{-2}$ & $0.31$ & $0.50$ \\ \SR{10j50} & $2.5$ & $8$ & ${6}^{+3}_{-1}$ & $0.74$ & $0.26$ \\ \SR{10j50-1b} & $1.6$ & $5$ & ${6}^{+2}_{-1}$ & $0.37$ & $0.50$ \\ \SR{10j50-2b} & $1.1$ & $4$ & ${4}^{+2}_{-1}$ & $0.27$ & $0.50$ \\ \addlinespace \SR{7j80} & $10$ & $32$ & ${32}^{+11}_{-9}$ & $0.51$ & $0.50$ \\ \SR{7j80-1b} & $6.2$ & $20$ & ${24}^{+6}_{-5}$ & $0.29$ & $0.50$ \\ \SR{7j80-2b} & $4.2$ & $14$ & ${14}^{+6}_{-2}$ & $0.33$ & $0.50$ \\ \SR{8j80} & $3.2$ & $10$ & ${11}^{+2}_{-4}$ & $0.41$ & $0.50$ \\ \SR{8j80-1b} & $1.7$ & $5$ & ${7}^{+3}_{-2}$ & $0.20$ & $0.50$ \\ \SR{8j80-2b} & $1.4$ & $4$ & ${5}^{+2}_{-1}$ & $0.24$ & $0.50$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} \end{tabular*} \end{center} \caption{ The results of a fit to the control and signal region data, assuming no signal contamination in the control regions. Left to right: 95\% CL upper limits on the visible cross-section ($\langle\epsilon\sigma\rangle_{\rm obs}^{95}$) and on the number of signal events ($S_{\rm obs}^{95}$). Convergence and stability tests of the fits suggest uncertainties of order 5\% on $S_{\rm obs}^{95}$ resulting from these effects.
The third column ($S_{\rm exp}^{95}$) shows the 95\% CL upper limit on the number of signal events, given the expected number (and $\pm 1\sigma$ excursions on the expectation) of background events. The last two columns indicate the $CL_B$ value, i.e. the confidence level observed for the background-only hypothesis, and the discovery $p$-value ($p(s = 0)$). The test is one-sided, so the $p$-value is 0.50 when the observed number of events is smaller than the prediction. Yields are not statistically independent, since there are correlated systematic uncertainties and since signal regions overlap. \label{tab:upperLimits}} \end{table} The results are interpreted in the context of the two supersymmetric models described in Section~\ref{sec:signals}. The limit for each signal region is obtained by comparing the observed event count with that expected from Standard Model background plus SUSY signal processes; the signal contamination of the leptonic control regions, typically below 10\% for points close to the exclusion contour, is accounted for. All uncertainties on the Standard Model expectation are considered, including those which are correlated between signal and background (for instance, jet energy scale uncertainties). All uncertainties on the signal expectation are considered, except the theoretical cross-section uncertainties (PDF and scale). The resulting exclusion regions, shown in Figure~\ref{fig:contour:both}, are obtained using the CL\textsubscript{s}\xspace prescription~\cite{clsread}. For each signal model point, the signal region with the best expected limit is used. Signal regions defined by \ensuremath{{n_\mathrm{50}}}\xspace{} and those defined by \ensuremath{{n_\mathrm{80}}}\xspace{} both contribute to the best expected limit. The most sensitive signal regions are found to be those with no requirement on $\ensuremath{{n_{b\mathrm{-jet}}}}\xspace{}$ for the simplified model decay.
For the pMSSM slice, which has large branching ratios for gluinos to third-generation quarks, the best signal regions are those requiring either one or two $b$-jets\xspace. In both cases, gluino masses up to $1400\GeV$ are excluded at 95\% confidence level, significantly extending previous limits. \begin{figure} \centering \subfigure[pMSSM slice] {\includegraphics[width=0.65\textwidth]{figures/fig_05a}} \subfigure[Simplified cascade decay (`2-step') model]{\includegraphics[width=0.65\textwidth]{figures/fig_05b}} \caption{ 95\% CL exclusion curves for the two supersymmetric models described in the text. The solid red and dashed blue curves show the 95\% CL observed and expected limits, respectively, including all uncertainties except the theoretical signal cross-section uncertainty (PDF and scale). The dotted red lines bracketing the observed limit represent the result produced when moving the signal cross-section by $\pm1\sigma$ (as defined by the PDF and scale uncertainties). The shaded yellow band around the expected limit shows the $\pm 1\sigma$ variation of the expected limit. The shaded grey area shows the observed exclusion from the combination of ATLAS $\sqrt{s}=8\TeV$ analyses performed in Ref.~\cite{Aad:2015iea} (Figure~25 therein). Excluded regions are below and to the left of the relevant lines. } \label{fig:contour:both} \end{figure} \FloatBarrier \oursection{Conclusions}\label{sec:conclusions} A search is presented for new phenomena with large jet multiplicities (from $\ge 7$ to $\ge 10$) and missing transverse momentum. The search used 3.2\,fb$^{-1}$ of $\sqrt{s}=13\TeV$ $pp$ collision data collected by the ATLAS experiment at the Large Hadron Collider. The increase in the LHC centre-of-mass energy provided increased sensitivity to higher-mass sparticles compared with previous searches. Further sensitivity was gained by considering separately final states with $\ge 0$, $\ge 1$ and $\ge 2$ $b$-tagged\xspace{} jets.
The Standard Model predictions are found to be consistent with the data. The results are interpreted in the context of a simplified supersymmetry model and a slice of the pMSSM, each of which predicts cascade decays of supersymmetric particles and hence large jet multiplicities. The data exclude gluino masses up to $1400\GeV$ at the 95\% CL in these models, significantly extending previous bounds. Model-independent limits were presented which allow the results to be reinterpreted for other models that also predict decays into multijet final states in association with invisible particles. \begin{small} \section*{Acknowledgements} \input{Acknowledgements} \end{small} \FloatBarrier \printbibliography \newpage \input{atlas_authlist} \clearpage \end{document}
\section{Introduction} When a homogeneous mixture, prepared at a high starting temperature ($T_s$), is quenched to a final temperature ($T_f$) that falls inside the miscibility gap, it becomes unstable to fluctuations and separates into regions or domains rich in particles of similar type \cite{binder,bray,onuki,ral,gunton}. Kinetics of such phase separation is of immense interest from both scientific and technological viewpoints. To probe aging during such evolution, one often studies the decay of the two-time autocorrelation function \cite{mz} \begin{eqnarray}\label{auto_corr_fn} C_{\textrm{ag}}(t,t_w)=&\langle\psi(\vec{r},t)\psi(\vec{r},t_w)\rangle-\nonumber\\ &\langle{\psi(\vec{r},t)}\rangle \langle\psi(\vec{r},t_w)\rangle. \end{eqnarray} Here $\psi$, chosen to be scalar in keeping with the content of the paper, is a space- ($\vec{r}$) and time-dependent order parameter, while $t$ and $t_w$ ($\leq t$) are referred to, respectively, as the observation and waiting times. \par Due to the violation of time-translation invariance in nonequilibrium systems, $C_{\textrm{ag}} (t,t_w)$ for different $t_w$ are not equivalent to each other. In other words, if this correlation function is plotted versus $t-t_w$, there will be no collapse of data for different values of $t_w$. However, it is found that in many systems $C_{\textrm{ag}}(t,t_w)$ exhibits the scaling behavior \cite{mz,fisher_huse,liu,yrd,henkel,corberi,lorenz,midya_jpcm,midya_pre,nv,paul,bera,corberi_villa,humayun,bray_humayun,dutta,puri_kumar} \begin{equation}\label{scaling_auto_corr} C_{\textrm{ag}}(t,t_w) \sim (\ell/\ell_w)^{-\lambda}, \end{equation} where $\ell$ and $\ell_w$ are the average sizes of domains at times $t$ and $t_w$, respectively. Note that $\ell$ typically has a power-law time dependence \cite{binder,bray,onuki,ral,mz} \begin{equation} \ell \sim t^\alpha, \end{equation} in phase-ordering systems. Here $\lambda$ and $\alpha$ are referred to as the aging and growth exponents.
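In a simulation, the averages in Eq.~(\ref{auto_corr_fn}) are taken over lattice sites and over independent initial configurations. A minimal sketch for a $\pm 1$ scalar order parameter (illustrative code, not that used for the results discussed here):

```python
import numpy as np

def autocorrelation(psi_t, psi_tw):
    """C_ag(t,t_w) = <psi(r,t) psi(r,t_w)> - <psi(r,t)><psi(r,t_w)>,
    with <.> taken here as an average over lattice sites; in practice one
    also averages over independent initial configurations."""
    a = psi_t.ravel().astype(float)
    b = psi_tw.ravel().astype(float)
    return (a * b).mean() - a.mean() * b.mean()

rng = np.random.default_rng(0)
psi = rng.choice([-1, 1], size=(64, 64))   # uncorrelated (T_s = infinity) snapshot
c_equal_time = autocorrelation(psi, psi)   # equals 1 - <psi>^2 for psi = +-1
```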
Values of these exponents, along with a few other properties \cite{bray,bray_majumdar}, define the nonequilibrium universality classes \cite{bray,humayun}. \par It has been argued that, for the same model, there can be different universality classes depending upon the spatial correlation in the initial configurations \cite{humayun,bray_humayun,dutta} -- one for $T_s=\infty$ and the other for $T_s=T_c$, the latter being the critical temperature. Note here that at $T_{s}=\infty$ a system, in the standard picture, has a correlation length $\xi=0$, and at $T_s=T_c$, $\xi=\infty$, when the system is of thermodynamically large size \cite{bray,onuki,fisher}. For ordering in uniaxial ferromagnets \cite{bray,fisher}, this fact of universality has been demonstrated \cite{humayun,bray_humayun}. There the understanding is that even though $\alpha$ remains the same \cite{humayun}, $\lambda$ and other dynamic and structural quantities are different in the two classes \cite{humayun,bray_humayun,saikat_epjb,saikat_pre,blanchard}. \par In contrast to the magnetic case, for which there is no constraint on the conservation of the system-integrated order parameter during evolution \cite{bray}, the task of understanding coarsening phenomena is known to be significantly more difficult, at least theoretically and computationally, for conserved order-parameter dynamics, which applies to the kinetics of phase separation in multi-component mixtures \cite{bray}. Computational difficulty \cite{amar,sm_2010,sm_2011}, to a certain extent, arises from the significantly slower dynamics in the latter case. Note that for the nonconserved case \cite{bray,allen_cahn} $\alpha = 1/2$, whereas for the conserved case \cite{bray,lifshitz,huse} $\alpha=1/3$.
Furthermore, irrespective of the type of dynamics, conserved or nonconserved, quantitative understanding of aging behavior, even for simple models, still remains difficult, with convergence on open issues being rather slow \cite{fisher_huse,liu,yrd,henkel,corberi,lorenz,midya_jpcm,midya_pre,nv,paul,bera,corberi_villa,humayun,bray_humayun,bray_majumdar,marko,sm_2013,ahmad,yeung_jasnow}, despite the availability of huge computational resources. \par Nevertheless, significant progress has recently been made, following the adoption of methods of analysis that are analogous to the popular techniques used for extracting information about equilibrium systems. In a recent work \cite{midya_pre} we quantified the values of $\lambda$ for phase-separating binary mixtures (A+B) in different space dimensions $d$, via formulation and application of a finite-size scaling technique \cite{midya_jpcm,fisher_barber} to Monte Carlo (MC) simulation results, for quenches with initial $\xi=0$. For this and a number of other situations, including the ferromagnetic case, we have demonstrated \cite{midya_jpcm,midya_pre,nv,paul,bera} that $\lambda$ satisfies certain bounds. Here note that Fisher and Huse (FH) argued \cite{fisher_huse}: \begin{equation}\label{fh_bound} \lambda \geq \frac{d}{2}. \end{equation} Later, Yeung, Rao and Desai (YRD) \cite{yrd} provided a more accurate and generic bound: \begin{equation}\label{yrd_bound} \lambda \geq \frac{d+\beta}{2}, \end{equation} where $\beta$ is an exponent related to the short-wave-number ($k$) behavior of the structure factor \cite{yeung}, viz., \begin{equation}\label{powerlaw_beta} S(k \rightarrow 0, t_w) \sim k^\beta. \end{equation} For random initial configurations ($\xi=0$), $\beta=0$, and so the YRD bound coincides with that of FH. For nonconserved order parameter, when $T_s=\infty$, $\beta=0$ even in the long-time limit. The latter, however, is not true for the conserved case \cite{yeung,furukawa,majumdar_huse}.
This is one of the reasons for our observation of vastly different $\lambda$ values in the two cases, irrespective of space dimension, for quenches with $\xi=0$. When started from $T_s=T_c$, it is expected that one will have different structural scaling \cite{humayun, bray_humayun}. If so, the bounds for both conserved and nonconserved order parameters will differ from those for quenches from $T_s = \infty$. This provides an intuitive understanding that $\lambda$ for both the conserved and nonconserved classes will be different for $T_s = \infty$ and $T_s = T_c$, giving rise to different universalities. This has been demonstrated, as already stated, theoretically and computationally, for the nonconserved case \cite{humayun,bray_humayun}. In this paper we focus on the conserved case, i.e., we take up the task of estimating $\lambda$ for binary (A+B) mixtures with $T_s = T_c$. Note that nonequilibrium universality classes are also decided \cite{bray} by the space dimension, the order-parameter symmetry and the presence of hydrodynamics. In this paper we focus on $d=2$ and a scalar order parameter, in the absence of hydrodynamics, i.e., in our model system coarsening occurs due to simple diffusive transport, as expected in solid alloys. To validate our method, and thus the result, we also estimate $\lambda$ for the nonconserved case, which can be readily compared with the existing results from other approaches \cite{humayun,bray_humayun}. We show that the obtained values of $\lambda$ are consistent with the YRD bound. These numbers are discussed with reference to the corresponding numbers \cite{midya_pre} for $T_s=\infty$. It transpires that for conserved order parameter also, $\lambda$ for $T_s = T_c$ is hugely different from that for $T_s=\infty$. Another recent study of ours \cite{nv_2} suggests that in both cases the growth exponent remains the same, as in the nonconserved case.
Thus, there exists a qualitative similarity between the conserved and nonconserved cases with respect to relaxation following quenches into the ordered region. \section{Models and Methods} We study nonequilibrium dynamics in solid binary mixtures and uniaxial ferromagnets, via the Kawasaki exchange \cite{kawasaki} and Glauber spin-flip \cite{glauber} Monte Carlo methods \cite{binder_heermann,landau,dan_frenkel}, respectively, using the Ising model \cite{fisher} on a 2D square lattice with periodic boundary conditions \cite{landau} in both directions. The Hamiltonian of the model is given by \cite{fisher,landau} \begin{equation}\label{hamilton} H=-J\sum\limits_{<ij>} S_i S_j; \, S_i=\pm 1; \,J>0, \end{equation} where the values $+1$ and $-1$ correspond to particles of types A and B, respectively, in the case of the binary mixture, and up and down spins in the case of the ferromagnet. The critical temperature of this model \cite{fisher,landau} in $d=2$ is $\simeq 2.269J/k_B$, where $J$ is the interaction strength and $k_B$ is the Boltzmann constant. A trial move in the Kawasaki exchange Ising model (KIM) is the interchange of particles between randomly selected nearest-neighbor sites, whereas in the Glauber Ising model (GIM) a move is performed by flipping a randomly selected spin. In both cases the probability of acceptance of a trial move from state $i$ to state $j$ is given by \cite{binder_heermann,landau,dan_frenkel} \begin{equation}\label{metropolis} P(i \rightarrow j) = \textrm{min} (1,\exp(-(E_j-E_i)/k_B T_f)), \end{equation} where $E_{i(j)}$ is the energy of the state $i(j)$. Time in our simulations is measured in units of MC steps (MCS), where one MCS is equivalent to $L^2$ trial moves, $L$ being the linear dimension of the square box in units of the lattice constant $a$. In the rest of the paper, we set $J$, $k_B$ and $a$ to unity.
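The two update rules can be sketched as follows; a minimal illustration with $J=k_B=a=1$ and periodic boundary conditions (not the production code behind the results reported here):

```python
import math
import random

import numpy as np

def local_field(spins, i, j):
    """Sum of the four nearest-neighbour spins of site (i, j)."""
    L = spins.shape[0]
    return (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
            + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])

def glauber_step(spins, T):
    """GIM: flip a randomly selected spin with Metropolis acceptance."""
    L = spins.shape[0]
    i, j = random.randrange(L), random.randrange(L)
    dE = 2.0 * spins[i, j] * local_field(spins, i, j)   # E_new - E_old
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i, j] *= -1

def kawasaki_step(spins, T):
    """KIM: exchange a randomly selected nearest-neighbour pair."""
    L = spins.shape[0]
    i, j = random.randrange(L), random.randrange(L)
    di, dj = random.choice([(0, 1), (1, 0)])
    k, l = (i + di) % L, (j + dj) % L
    if spins[i, j] == spins[k, l]:
        return                                  # exchange changes nothing
    # Fields seen by each site, excluding the partner (they are neighbours):
    ha = local_field(spins, i, j) - spins[k, l]
    hb = local_field(spins, k, l) - spins[i, j]
    dE = 2.0 * spins[i, j] * ha + 2.0 * spins[k, l] * hb
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i, j], spins[k, l] = spins[k, l], spins[i, j]
```

One MCS corresponds to $L^2$ such trial moves; the Kawasaki move conserves the system-integrated order parameter by construction, while the Glauber move does not.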
Unless otherwise mentioned, we quench the systems from $T_s = T_c^L$, $T_c^L$ being the system-size-dependent critical temperature \cite{fisher_barber,luijten}, to a final temperature $T_f=0.6T_c$. In order to obtain the equilibrium configurations at $T_c^L$, we have performed simulations using the Wolff algorithm \cite{wolff}, which, to a good degree, helps avoid critical slowing down \cite{hohenberg}. Here, instead of a single spin, a randomly selected cluster of similar particles or spins is flipped. The average domain length of a system during evolution has been calculated via \cite{sm_2010,sm_2011} \begin{equation} \ell(t)=\int P(\ell_d,t)\ell_d d\ell_d, \end{equation} where $P(\ell_d,t)$ is the domain-size distribution function and $\ell_d$ is the distance between two successive interfaces in a specific direction. In the calculation of the autocorrelation functions [see Eq. (\ref{auto_corr_fn})], the order parameter $\psi$ at a space point corresponds to the value of the spin in Eq. (\ref{hamilton}) at a lattice site. All the presented results, for both KIM and GIM, are averaged over a minimum of $100$ independent initial configurations. \section{Results} \begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig1.eps} \caption{\label{fig1} Snapshots for the Kawasaki Ising model during evolution following quenches from $T_s=\infty$ (upper frames) and $T_s = T_c^L$ (lower frames), with $L=128$. In each case, pictures from three different times are shown. The dots represent A particles and the rest of the space is occupied by B particles. Here and elsewhere, all results are from quenches to $T_f = 0.6T_c$.} \end{figure} We start by presenting results from the KIM. In Fig. \ref{fig1} snapshots during the evolutions for different $T_s$ values are presented. The upper frames are for $T_s = \infty$ and the lower ones are for $T_s = T_c^L$. All the pictures are for $L=128$.
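The domain-length measure defined above can be illustrated by the following sketch, which gathers interface-to-interface distances along all rows and columns of a configuration and returns the first moment of their distribution (a simplified version with our own function names; the treatment of the periodic wrap-around and the binning of $P(\ell_d,t)$ in the actual analysis may differ):

```python
import numpy as np

def domain_lengths_1d(line):
    """Distances between successive interfaces (sign changes) along one lattice line."""
    pos = np.where(line[:-1] != line[1:])[0]
    return np.diff(pos) if len(pos) > 1 else np.array([], dtype=int)

def average_domain_length(spins):
    """First moment of the domain-size distribution, ell(t) = sum_l P(l, t) * l,
    collected from all rows and columns of the spin configuration."""
    sizes = []
    for row in spins:
        sizes.extend(domain_lengths_1d(row))
    for col in spins.T:
        sizes.extend(domain_lengths_1d(col))
    # a fully ordered configuration has no interface; the domain then spans the box
    return float(np.mean(sizes)) if sizes else float(spins.shape[0])
```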
The difference in structure between the two cases is recognizable, even though there exist strong finite-size effects in the initial configurations \cite{landau,fisher_barber} for $T_s=T_c^L$. The latter is in addition to the standard finite-size effects \cite{sm_2010,sm_2011,heermann} that are observed for $T_s=\infty$, when $\ell$ approaches $L$. As is well known \cite{fisher}, \begin{equation}\label{powerlaw_xi} \xi \sim \epsilon^{-\nu};\, \epsilon = \frac{T_s-T_c}{T_c}, \end{equation} $\nu$ being a static critical exponent. For a true phase transition, achievable in thermodynamically large systems, of course, $\xi =\infty$ at the critical point. However, for $L<\infty$, which is always the case in computer simulations, $\xi$ is finite, the maximum attainable value being $\xi=L$. Because of that, for finite $L$, following quenches from $T_s=T_c^L$ the systems quickly deviate from the desired scaling form \cite{humayun,bray_humayun} of the nonequilibrium structure, which is different from that for quenches with $T_s=\infty$. This can be realized by taking a closer look at the snapshots for $T_s = T_c^L$ in Fig. \ref{fig1} -- the fractality is changing with time. This additional finite-size effect must be taken care of via appropriate extrapolation of the size-affected quantitative data to the $L=\infty$ limit. This requires knowledge of $T_c^L$ for various values of $L$. We present the related results next, before showing data for the autocorrelation functions. \begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig2.eps} \caption{\label{fig2} Plot of the finite-size critical temperature $T_c^L$ as a function of the inverse system size $1/L$. These results were obtained for the GIM. The continuous line is a fit of the data set to the scaling form in Eq. (\ref{powerlaw_tcl}), obtained by fixing $T_c$ and $\nu$ to their 2D Ising values.
Unless otherwise mentioned, all the results below will correspond to $T_s=T_c^L$.} \end{figure} \par The phase behavior of a model can be obtained via computer simulations by calculating the temperature-dependent, appropriately defined, order-parameter distribution functions \cite{landau,luijten}. Such a phase diagram or coexistence curve will always suffer from finite-size effects, due to the fact that, as mentioned above, in simulations we always have $L < \infty$. Nevertheless, via the application of well-established scaling principles, the phase behavior, including the critical point, can be satisfactorily obtained in the thermodynamic limit \cite{luijten,roy_das,midya_jchem}. In the two-phase or coexistence region the order-parameter distribution will have a double-peak structure, the locations of the peaks representing points along the coexistence curve. On the other hand, in the homogeneous (one-phase) region these distributions will have a single-peak shape (with temperature-dependent width). The temperature at which the crossover from the double-peak to the single-peak structure occurs is identified as the value of $T_c^L$. A plot of $T_c^L$ versus $1/L$ is shown in Fig. \ref{fig2}. These results were obtained from the GIM. Given that static critical universality is very robust, we will use the same data for the study of nonequilibrium phenomena in the KIM as well. Note that the results for $T_c^L$ are expected to satisfy the scaling form \cite{landau,luijten,roy_das,midya_jchem} \begin{equation}\label{powerlaw_tcl} T_c^L - T_c \sim L^{-1/\nu}, \end{equation} the validity of which can be checked from its consistency with Eq. (\ref{powerlaw_xi}). For the Ising model (universality class) $\nu = 1$ in $d=2$. The data set in Fig. \ref{fig2} is thus in agreement with this expected critical-point behavior. Note that the continuous line in Fig. \ref{fig2} is a fit of the simulation data set to the scaling form in Eq. (\ref{powerlaw_tcl}), obtained by fixing $\nu$ and $T_c$ to the 2D Ising values.
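Once $T_c$ and $\nu$ are fixed to their 2D Ising values, the fit in Fig. \ref{fig2} reduces to a one-parameter least-squares problem for the amplitude of $L^{-1/\nu}$. A sketch (with synthetic data standing in for the measured $T_c^L$ values, which are not tabulated here):

```python
import numpy as np

T_C = 2.0 / np.log(1.0 + np.sqrt(2.0))   # exact 2D Ising critical temperature, ~2.269 (J = k_B = 1)
NU = 1.0                                  # 2D Ising correlation-length exponent

def fit_amplitude(L, TcL):
    """Least-squares amplitude a in T_c^L = T_c + a * L**(-1/nu), with T_c and nu held fixed."""
    x = np.asarray(L, dtype=float) ** (-1.0 / NU)
    y = np.asarray(TcL, dtype=float) - T_C
    return float(np.sum(x * y) / np.sum(x * x))
```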
\begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig3a.eps} \vskip 0.3cm \includegraphics*[width=0.4\textwidth]{fig3b.eps} \caption{\label{fig3} (a) Log-log plots of the autocorrelation function, $C_{\textrm{ag}} (t,t_w)$, for the KIM, versus $\ell/\ell_w$. Data for a few different $t_w$ are shown. All results are for $L=256$. (b) Same as (a), but here we have fixed $t_w$ to $5$ and presented results for a few values of $L$. Inside both frames the solid lines represent power laws. The values of the exponents are mentioned next to the lines.} \end{figure} \par Following the discussion and presentation of results relevant for the scaling analysis of the aging data for the critical starting point, we now focus on the primary objective. In Fig. \ref{fig3} we present results for $C_{\textrm{ag}}(t,t_w)$, versus $\ell/\ell_w$, for the KIM. In part (a) we fix the system size and include data for a few different $t_w$ values. In part (b), on the other hand, $t_w$ is fixed and $L$ is varied. In neither case is a collapse of the data observed. This should be contrasted with the available literature \cite{midya_jpcm,midya_pre} for quenches from $T_s = \infty$. Such non-scaling behavior for quenches from the critical point arises because, for $L<\infty$, the structure quickly starts deviating from the desired scaling during the evolution, as already mentioned. To overcome this problem we will perform an extrapolation exercise to obtain the value of $\lambda$ in the $L=\infty$ limit. Note that the very early-time structural change brings non-monotonicity in the length. This is reflected in the plots of Fig. \ref{fig3} for smaller values of $t_w$. During this period, we believe, the system is trying to arrive at the scaling structure. However, with increasing time a departure from this structure occurs, earlier for smaller systems. A feature common to both Figs. \ref{fig3}(a) and (b) is the following.
Each of the data sets tends to stabilize to a power-law decay over a certain range of $\ell/\ell_w$, but deviates from it when $\ell$ approaches $L$, i.e., $\xi$. These stabilized exponent values are, however, different from each other in part (a) as well as in part (b). In part (a), this is because the structure for each $t_w$ is different. Recall, we have already mentioned above that this is a nonequilibrium feature related to a finite system of any particular size. On the other hand, even though in part (b) $t_w$ is fixed, here one has different finite-size effects for different $L$ to start with, owing to the different initial $\xi$ for different $L$. \begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig4.eps} \caption{\label{fig4} Instantaneous exponent $\lambda_i$, plotted versus $\ell/\ell_w$, for the KIM, for two values of $L$. In each case we have $t_w =5$. We extract the $L$-dependent value, $\lambda_L$, from the flat regions of these plots.} \end{figure} \par Nevertheless, for a fixed $t_w$, with increasing system size the exponent stays stable over a longer range. Also, the rate of change of the exponent with increasing $L$ keeps decreasing. One may therefore be tempted to consider a very large system, to obtain a $\lambda$ value that is very close to that for $L=\infty$. We, however, prefer to rely on an extrapolation method using relatively smaller systems. An advantage of using smaller systems is that one can obtain better statistics by running simulations with many independent initial configurations, using the same computational power that is needed to run a single large system. Note here that the reduction of error is not directly proportional to the size of a system \cite{sm_2010,heermann}. \par For the purpose of extrapolation, we need to obtain the exponent values in the stabilized regions accurately.
For this we make use of the instantaneous exponent \cite{midya_pre,huse,amar} \begin{equation}\label{lambda_i} \lambda_i = -\frac{d\ln C_{\textrm{ag}} (t,t_w)}{d\ln x};\,\, x=\frac{\ell}{\ell_w}. \end{equation} In Fig. \ref{fig4}, as an illustration, we plot this quantity as a function of $x$, for the KIM, for two values of $L$, fixing $t_w$ to $5$. We obtain the $L$-dependent exponent, $\lambda_L$, from the flat regions of the plots, which also correspond to the minima. One can justify this by taking a closer look at the behavior of $C_{\textrm{ag}} (t,t_w)$ in Fig. \ref{fig3}. We expect that $\lambda_L$ in the limit $L=\infty$ will converge to the same value for all $t_w$, for the following reason. For meaningful scaling evolution, in the $L=\infty$ limit the structure should obey a certain self-similarity all along \cite{humayun,bray_humayun}. If so, the value of $\lambda$ should not be affected by the choice of $t_w$. Note that in such a situation the bound of Eq. (\ref{yrd_bound}) does not change. For finite $L$, of course, the situation is different, as discussed and observed. However, the intended extrapolation is expected to lead us to the thermodynamic $\lambda$, the same for all $t_w$. If this is the case, and the corresponding $\lambda$ is different from that for $T_s=\infty$, as in the ferromagnetic case, it should provide indirect evidence that different structural scalings for $T_s=\infty$ and $T_s=T_c$ exist in the conserved case as well. \begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig5.eps} \caption{\label{fig5} We have plotted $\lambda_L$ as a function of $1/L$, for the KIM. Results for a few values of $t_w$ are included. The solid lines are linear fits for extracting $\lambda=\lambda_{L=\infty}$, the value of which is marked by an arrow-headed line.} \end{figure} \par Finally, to obtain the thermodynamic limit value, in Fig.
\ref{fig5} we have plotted $\lambda_L$, as a function of $1/L$, for a few values of $t_w$, again from the KIM. These multiple plots provide a good sense of the convergence. From this exercise we quote \begin{equation}\label{lambda_kim_tcl} \lambda = \lambda_{L=\infty} = 0.155 \pm 0.025. \end{equation} Since all the data sets appear linear, we have obtained the above quoted number from linear fits. We compare this number with \cite{midya_pre} $\lambda$ for the KIM when $T_s = \infty$ in $d=2$, viz., \begin{equation}\label{lambda_kim} \lambda \simeq 3.6. \end{equation} There exists a huge difference between the values quoted in Eqs. (\ref{lambda_kim_tcl}) and (\ref{lambda_kim}). \begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig6.eps} \caption{\label{fig6} Same as Fig. \ref{fig5}, but here we have presented results from the GIM.} \end{figure} \begin{figure} \centering \includegraphics*[width=0.4\textwidth]{fig7.eps} \caption{\label{fig7} Plots of the equal-time structure factor $S(k,t_w=0)$ as a function of the wave number $k$. We have shown results for $T_s = \infty$ and $T_s = T_c^L$. In each case, we have used $L=128$. The solid lines represent the values of $\beta$.} \end{figure} \par To validate our result of Eq. (\ref{lambda_kim_tcl}), we have applied the same method to the simulation data for the GIM. For this case, plots of $\lambda_L$ versus $1/L$, for different $t_w$ values, are shown in Fig. \ref{fig6}. Here also one can appreciate the nice convergence of the data sets for different values of $t_w$. The corresponding value is \begin{equation}\label{lambda_gim_tcl} \lambda = 0.13 \pm 0.01. \end{equation} This is certainly in extremely good agreement with the theoretical prediction \cite{humayun,bray_humayun}, viz., $\lambda = 0.125$. We mention here that in previous simulation studies \cite{corberi_villa,humayun,bray_humayun} no such attempt was made to estimate $\lambda$ for $T_s=T_c$, even for the GIM.
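The two numerical steps used above, evaluating the instantaneous exponent of Eq. (\ref{lambda_i}) from discrete autocorrelation data and extrapolating the resulting $\lambda_L$ linearly in $1/L$, can be sketched as follows (a simplified illustration with our own function names; the differentiation scheme actually used for Fig. \ref{fig4} is not specified in the text):

```python
import numpy as np

def instantaneous_exponent(x, C):
    """lambda_i = -d ln C_ag / d ln x, via centered finite differences in log-log space."""
    return -np.gradient(np.log(C), np.log(x))

def extrapolate_lambda(L, lambda_L):
    """Linear fit of lambda_L versus 1/L; the intercept is the L -> infinity estimate."""
    slope, intercept = np.polyfit(1.0 / np.asarray(L, dtype=float),
                                  np.asarray(lambda_L, dtype=float), 1)
    return float(intercept)
```

For a pure power-law decay the instantaneous exponent is flat, which is the behavior looked for in the stabilized regions of Fig. \ref{fig4}.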
Only checks of the consistency with the analytical theory were performed. The outcome of this exercise certainly lends confidence to the number quoted in Eq. (\ref{lambda_kim_tcl}). The number in Eq. (\ref{lambda_gim_tcl}), for the GIM in $d=2$, should be compared with the corresponding value for $T_s=\infty$, which is $\simeq 1.3$. Thus, for both KIM and GIM, the values of $\lambda$ for the $T_s=\infty$ and $T_s=T_c$ universality classes are vastly different. Next we aim at checking whether these numbers satisfy the YRD bound. For that purpose, in Fig. \ref{fig7} we have plotted $S(k,0)$ as a function of $k$, on a log-log scale. We have included data for both $T_s=\infty$ and $T_s = T_c^L$, with $L=128$. In the case of $T_s=\infty$, flat behavior over the whole range of $k$ is observed. So, we have $\beta=0$. Naturally, $\lambda \simeq 3.6$ (for the KIM) and $\lambda \simeq 1.3$ (for the GIM) satisfy the corresponding bound, which is $\lambda\geq1$. However, for the conserved case with $T_s=\infty$, by the time scaling (overlap of data from different $t_w$) is observed, i.e., for large $t_w$, the value of $\beta$ changes \cite{yeung,ahmed} to approximately $4$. In that case the bound becomes $\lambda \ge 3$. So, the estimate $\lambda \simeq 3.6$ still satisfies the modified bound in the scaling regime of $t_w$. In the nonconserved case, however, as already stated, $\beta$ remains zero throughout the evolution. It appears that the bounds are satisfied for $T_s = T_c^L$ also. In this case $\beta$ assumes a negative value, viz., $\beta\simeq -2$. Thus, the corresponding lower bound lies below both the above quoted values, i.e., $\lambda\simeq0.16$ (for the KIM) and $\lambda\simeq0.13$ (for the GIM). We have verified that no violation occurs even with the progress of time. \section{Conclusion} We have presented results for aging phenomena in the two-dimensional Ising model \cite{fisher}.
The results were obtained from Monte Carlo simulations \cite{binder_heermann,landau,dan_frenkel} implementing two different mechanisms. Our primary focus was on the kinetics of phase separation in solid binary mixtures. For this we have used the Kawasaki exchange kinetics \cite{kawasaki}. To verify the adopted scaling method, and thus the outcome for the binary mixture, we have presented results for ordering in uniaxial ferromagnets as well, for which a theoretical prediction exists for comparison. In this case the results were obtained via the implementation of Glauber kinetics \cite{glauber}. Our objective was to estimate the aging exponent $\lambda$, related to the power-law decay of the order-parameter autocorrelation function \cite{fisher_huse} $C_{\textrm{ag}} (t,t_w)$, corresponding to the universality class \cite{humayun,bray_humayun} decided by quenches from $T_s=T_c$, for which one has infinitely correlated configurations \cite{fisher}. For quenches from the critical point, simulation results suffer significantly from finite-size effects. This problem was appropriately taken care of by implementing the finite-size scaling technique of equilibrium critical phenomena and devising an extrapolation method for the analysis of the out-of-equilibrium data. We believe that our results are quite accurate for thermodynamically large systems. \par It appears that for both types of systems, viz., phase-separating binary mixtures and ordering ferromagnets, the values of $\lambda$ for $T_s=T_c$ are drastically smaller than those for the universality class corresponding \cite{liu,midya_jpcm,midya_pre} to $T_s=\infty$. Nevertheless, the obtained values for $T_s=T_c$ satisfy the lower bounds predicted by Yeung, Rao and Desai \cite{yrd}. To the best of our knowledge, these are the first such results for solid mixtures, as far as quenches from $T_c$ are concerned.
\par In the case of ferromagnets it was already shown that the growth exponent remains the same for the two above-mentioned universality classes \cite{humayun,bray_humayun}. Our recent work \cite{nv_2} on growth for the KIM also points towards the same possibility. Overall, thus, it appears that there exists a strong qualitative similarity between the cases with conserved and nonconserved dynamics, as far as the universalities with respect to quenches from correlated and decorrelated initial configurations are concerned. \par Another important exponent that can be calculated for the binary mixture, for both $T_s=\infty$ and $T_s=T_c$, is the one related to the decay of the persistence probability \cite{bray_majumdar}. For this exponent, however, due to certain technical reasons \cite{derrida}, quenches to very low temperature become necessary. In that case, for conserved dynamics, there exists a severe problem with metastability. This makes the problem rather challenging.
\section{Introduction} The extent of chemically mixed regions associated with stellar convective cores is notoriously uncertain. Several physical processes that remain challenging to describe theoretically are known to extend convective cores beyond the theoretical Schwarzschild limit. The most often cited among them is core overshooting. According to Schwarzschild's criterion, the boundary of a convective core corresponds to the layer above which upward-moving convective blobs are braked. However, this criterion neglects the inertia of the ascending blobs, which are expected to penetrate over a certain distance (overshoot) inside the radiative zone. The theoretical complexity of this phenomenon is well illustrated by the large number of developments that were proposed to describe it (e.g. \citealt{saslaw65}, \citealt{shaviv71}, \citealt{roxburgh78}, \citealt{zahn91} to quote only a few) and by the diversity of the predicted distances $d_{\rm ov}$ over which convective eddies are expected to overshoot in the stable region (predicted values for $d_{\rm ov}$ range from 0 to 2 $H_P$, where $H_P$ is the local pressure scale height). Current numerical simulations of overshooting are encouraging, but they are still far from reproducing the very high turbulence of stellar convection and cannot yet be used to obtain reliable prescriptions for core overshooting (see \citealt{dintrans09} for a review). Another complication arises from the fact that convective cores can also be extended due to rotationally-induced mixing (see \citealt{maeder09} and references therein). As a result, it is still an open issue to determine (1) over which distance convective cores are extended, (2) what the temperature stratification is like in these core extensions, and (3) how chemical elements are mixed in these regions.
Since convective cores constitute reservoirs for nuclear reactions, the uncertainty on their sizes generates significant uncertainties on stellar ages, especially near the end of the main sequence (MS). \cite{lebreton14b}, for instance, estimated that an extension of convective cores over a typical distance of 0.2 $H_P$ can generate errors on stellar ages as large as 30\% at the turnoff. It also affects the isochrones that have turnoff masses above $\sim1.1M_\odot$, and thus the age of rather young clusters. To account for the combined effects of core overshooting and rotational mixing, 1D stellar models often consider an ad-hoc extra mixing at the edge of the convective core, which is either modeled as an \textit{instantaneous} mixing (simple extension of the mixed core) or as a diffusion process (\citealt{ventura98}), i.e. as a \textit{non-instantaneous} mixing (see \citealt{noels10} for a review). In both cases, the extent of the extra mixing (usually known as the \textit{overshooting distance} $d_{\rm ov}$, even though overshooting may not be the only mechanism at work) depends on one free parameter. These models are clearly overly simplistic, but current observations have not yet made it possible to constrain more complex models. The overshooting distance has been observationally constrained by fitting isochrones to the color-magnitude diagrams of open clusters (e.g. \citealt{maeder81}, \citealt{vandenberg06}) and by performing calibrations using eclipsing binaries (e.g. \citealt{claret07}, \citealt{stancliffe15}). These studies typically pointed toward an instantaneous mixing over a distance $d_{\rm ov}\sim0.2 H_P$ (where $H_P$ is the local pressure scale height) with rather large star-to-star variations. The case of low-mass stars (typically $M \lesssim 1.5\,M_\odot$) is known to be problematic within this formalism.
Indeed, for stars with small convective cores the overshooting region becomes unrealistically large because $H_P(r)\rightarrow\infty$ when $r$ goes to zero. This prompted several authors to consider an overshoot parameter $\alpha\ind{ov}$ that increases with stellar mass in the approximate mass range $1.1M_\odot\lesssim M\lesssim1.5M_\odot$ (e.g. \citealt{pietrinferni04}, \citealt{bressan12}). In these cases, an ad-hoc linear increase of $\alpha\ind{ov}$ as a function of $M$ was chosen, with some success in reproducing the turnoff of clusters with turnoff masses around 1.3 $M_\odot$ (\citealt{pietrinferni04}). The problem remains however poorly constrained in this range of mass, and the use of eclipsing binary systems for this purpose is unfortunately of little help (\citealt{valle16}). Recently, constraints on the extent of the extra mixing beyond convective cores have been obtained from asteroseismology. Sharp variations in the mean molecular weight profile at the boundary of the mixed core create a glitch to which oscillation modes are sensitive, which can be used to measure the extent of the mixed region associated with convective cores. This approach has been successfully applied to solar-like pulsators on the main sequence (\citealt{deheuvels10a}, \citealt{goupil11}, \citealt{silva13}, \citealt{guenther14}, \citealt{appourchaux15}), in the subgiant phase \citep{lanzarote,deheuvels11}, and to several main-sequence B stars (e.g. \citealt{degroote10}, \citealt{neiner12}, \citealt{moravveji15}). All these studies reported the need for extended convective cores and confirmed the great potential of asteroseismology to measure this extension.
However, we are still lacking consistent seismic studies of larger samples of stars, which are needed to better understand how the overshooting distance varies with stellar parameters. In this paper, we took advantage of the detection of solar-like oscillations in hundreds of solar-like pulsators with an unprecedented level of precision by the space mission \textit{Kepler}\ (\citealt{borucki10}) to consistently measure the extent of the convective core in a larger sample of stars. We have focused on stars whose masses lie around the mass limit for having a convective core ($M\gtrsim1.1M_\odot$ at solar metallicity). For these stars, a large part of the core luminosity comes from the burning of $^3$He outside of equilibrium. Core overshooting can considerably increase the abundance of $^3$He in the core, and therefore also the core luminosity, size, and lifetime (\citealt{roxburgh85}, \citealt{deheuvels10a}). For instance, an instantaneous overshooting over a distance of $0.1\,H_P$ in a 1.3-$M_\odot$ star generates an increase of as much as 50\% in the convective core radius during the main sequence\footnote{This is shown in Fig. \ref{fig_cc_cesam_mesa} of this paper, but note that this depends on the exact prescription that is adopted for core overshooting, as is discussed in Sect. \ref{sect_calibrate}.}. As a consequence, these stars are particularly good tracers of the existence and amount of core overshooting. It has been shown in previous studies that the small separations built with $l=0$ and $l=1$ modes are particularly sensitive to the structure of the core (\citealt{provost05}, \citealt{deheuvels10a}, \citealt{silva11}), and that their ratios $r_{010}$ to the large separations are nearly insensitive to the so-called near-surface effects (\citealt{roxburgh03}). In Sect.
\ref{sect_test_d01}, we show that this diagnostic can be used to obtain a model-dependent estimate of the extent of the mixed core by building a grid of models with the evolution code \textsc{Cesam2k}\ (\citealt{morel08}). We then select a subsample of 24 solar-like pulsators among \textit{Kepler}\ targets that are the most likely to provide constraints on the amount of core overshooting, based on the results of our grid of models, and we extract their mode frequencies from their oscillation spectra in Sect. \ref{sect_analysis}. In Sect. \ref{sect_grid}, we confront the observed ratios $r_{010}$ with those of two grids of models computed with \textsc{Cesam2k}\ and \textsc{MESA}\ (\citealt{paxton11}). We consistently detect convective cores in eight of the selected targets and we obtain measurements of the extent of the mixed core in these stars. In Sect. \ref{sect_calibrate} we discuss the different existing prescriptions for core overshooting in low-mass stars, we show how our results can be used to calibrate the prescription used in the code \textsc{Cesam2k}, and we address the question of whether such a calibration can be adapted to \textsc{MESA}. \section{Estimating the core size with seismology \label{sect_test_d01}} \subsection{Asteroseismic diagnostics} A sharp gradient of the mean molecular weight $\mu$ builds up at the boundary of the homogeneous convective core, which induces rapid variations in the sound speed profile, and even makes it discontinuous in the case of a growing core without microscopic diffusion. It is well known that such a glitch in $c(r)$ adds an oscillatory modulation to the expression of the mode frequencies as a function of the radial order. The period of this modulation is directly related to the depth of the glitch (\citealt{gough90}). This is not specific to the boundary of convective cores, and such acoustic glitches can also be produced by the base of convective envelopes or the helium ionization regions.
When the period of this oscillation is smaller than the frequency range of the observed frequencies, the acoustic depth of the glitch can be estimated in a model-independent way. It has recently been shown that the depth of the second helium ionization zone and the base of the convective envelope can be estimated with such a diagnostic (\citealt{lebreton12}, \citealt{mazumdar14}). Unfortunately, the glitch caused by convective cores induces a longer-period oscillation and only a fraction of the period can be observed. This makes it much more difficult to obtain model-independent information about the boundary of convective cores. \cite{cunha11} and \cite{brandao14} showed that the amplitude of the sound speed discontinuity at the core edge may be recovered in some favorable cases. It is not clear whether a model-independent estimate of the extent of the mixed core can be obtained. However, it has been shown by several studies that a model-dependent measurement of the core size can be obtained through seismology. Combinations of mode frequencies built with $l=0$ and $l=1$ modes are well suited for this type of study because they are particularly sensitive to the core structure (\citealt{provost05}, \citealt{deheuvels10a}). \cite{roxburgh03} advocated the use of the five-point separations $d_{01}$ and $d_{10}$ defined as \begin{align} d_{01}(n) & = \frac{1}{8} \,(\nu_{0,n-1}-4\nu_{1,n-1}+6\nu_{0,n}-4\nu_{1,n}+\nu_{0,n+1}) \label{eq_d01} \\ d_{10}(n) & = -\frac{1}{8} \, (\nu_{1,n-1}-4\nu_{0,n}+6\nu_{1,n}-4\nu_{0,n+1}+\nu_{1,n+1}) \label{eq_d10}. \end{align} They showed that the ratios between these small separations and the large separations, constructed as \begin{eqnarray} r_{01}(n) & = & \frac{d_{01}(n)}{\Delta\nu_1 (n)} \\ r_{10}(n) & = & \frac{d_{10}(n)}{\Delta\nu_0 (n+1)} \end{eqnarray} where $\Delta\nu_l (n) = \nu_{l,n}- \nu_{l,n-1}$, are largely insensitive to the structure of the outer layers, which makes them almost immune to the so-called near-surface effects.
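Given arrays of $l=0$ and $l=1$ frequencies at the same consecutive radial orders, the five-point separations above and the corresponding ratios translate directly into code. A sketch (the function name is ours; we assume no missing orders in either array):

```python
import numpy as np

def ratios_r010(nu0, nu1):
    """Five-point separations d01, d10 and their ratios r01 = d01/Dnu1(n),
    r10 = d10/Dnu0(n+1), evaluated at interior radial orders."""
    nu0 = np.asarray(nu0, dtype=float)
    nu1 = np.asarray(nu1, dtype=float)
    n = np.arange(1, len(nu0) - 1)                 # interior orders only
    d01 = (nu0[n - 1] - 4 * nu1[n - 1] + 6 * nu0[n] - 4 * nu1[n] + nu0[n + 1]) / 8.0
    d10 = -(nu1[n - 1] - 4 * nu0[n] + 6 * nu1[n] - 4 * nu0[n + 1] + nu1[n + 1]) / 8.0
    r01 = d01 / (nu1[n] - nu1[n - 1])              # divide by Delta nu_1(n)
    r10 = d10 / (nu0[n + 1] - nu0[n])              # divide by Delta nu_0(n+1)
    return r01, r10
```

As a sanity check, for a purely asymptotic spectrum $\nu_{l,n} = \Delta\nu\,(n + l/2 + \epsilon)$ both ratios vanish identically, so any non-zero signal reflects departures from the asymptotic pattern such as the core glitch discussed here.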
These ratios, referred to as $r_{010}$ when combined together, have been used e.g. to estimate the depth of the convective envelope and the second helium ionization zone in the Sun (\citealt{roxburgh09a}) or to establish the existence of a convective core in a \textit{Kepler}\ target (\citealt{silva13}). We note that \cite{cunha07} proposed to use a combination of frequencies using modes of degrees up to 3 ($dr_{0213}$), which can interestingly be related to the intensity of the sound speed jump at the edge of growing cores. However, $l=3$ modes have low amplitudes in stars other than the Sun, and although several detections of such modes have been obtained (e.g. \citealt{deheuvels10b}, \citealt{metcalfe10}), it remains exceptional to reliably estimate their frequencies over several consecutive radial orders. In this study, we have tested and used the diagnostic based on the $r_{010}$ ratios. Fig. \ref{fig_ratio_parabola} shows the behavior of the $r_{010}$ ratios for a model of 1.2 $M_\odot$ evolved from the zero-age main sequence (ZAMS) to the beginning of the subgiant phase. The ratios are represented only in the frequency range where modes are expected to be observed, i.e. over about 12 radial orders around the frequency of maximum power of the oscillations $\nu_{\rm max}$. As mentioned above, only a fraction of the period of the oscillation induced by the edge of the core can be observed, and the $r_{010}$ ratios can in fact be well approximated by second-order polynomials throughout the MS, as can be seen in Fig. \ref{fig_ratio_parabola}. Several studies have shown that the slope and mean value of the $r_{010}$ ratios are a good indicator of the size of the mixed core (\citealt{popielski05}, \citealt{deheuvels10a}, \citealt{silva11}). However, these previous studies either focused on a particular star or worked with models that shared the same physical properties other than the mixing at the edge of the core.
We know that several other parameters, such as the abundance of heavy elements, have a significant impact on the size of the convective core. We here aimed at testing the efficiency of this diagnostic tool. \begin{figure} \begin{center} \includegraphics[width=9cm]{fig_ratio_parabola.ps} \end{center} \caption{Variations in the ratio $r_{010}$ around $\nu_{\rm max}$ as a function of frequency for models of 1.2 $M_\odot$ from the ZAMS (dark blue) to the beginning of the post main sequence (dark red). The dashed lines correspond to fits of $2^{\rm nd}$ order polynomials. \label{fig_ratio_parabola}} \end{figure} \subsection{Testing the diagnostic of $r_{010}$ ratios \label{sect_test_diagnostic}} \subsubsection{Description of the grid \label{sect_descript_grid}} To determine in which circumstances the extent of the core can be estimated with the $r_{010}$ ratios, we computed a grid of models using the stellar evolution code \textsc{Cesam2k}\ (\citealt{morel08}). We used the OPAL 2005 equation of state and opacity tables as described in \cite{lebreton08}. The nuclear reaction rates were computed using the NACRE compilation (\citealt{angulo99}) except for the $^{14}N(p,\gamma)^{15}O$ reaction where we adopted the revised LUNA rate (\citealt{formicola04}). The atmosphere was described by Eddington's gray law. We assumed the classical solar mixture of heavy elements of \cite{asplund09} (hereafter AGSS09). Convection was treated using the Canuto-Goldman-Mazzitelli (CGM) formalism (\citealt{canuto96}). This description involves a free parameter, the mixing length, which is taken as a fraction $\alpha\ind{CGM}$ of the pressure scale height $H_P$. We here assumed a value of $\alpha\ind{CGM}$ calibrated on the Sun ($\alpha_\odot=0.64$, \citealt{samadi06}). 
To account for the physical processes that could increase the size of convective cores, we considered an instantaneous mixing beyond convective cores over a distance $d_{\rm ov}$ taken as a fraction $\alpha\ind{ov}$ of the pressure scale height $H_P$. The free parameter $\alpha_{\rm ov}$ is often referred to as the \textit{overshoot parameter}. To prevent the overshooting region from extending unrealistically over a distance as large as the core itself, \textsc{Cesam2k}\ models define the overshooting distance as \begin{equation} d_{\rm ov} = \alpha\ind{ov} \times\min(H_P, r_{\rm{s}}) \label{eq_dov} \end{equation} where $r_{\rm{s}}$ is the Schwarzschild limit of the core. We note that $r_{\rm{s}} < H_P$ during most of the main sequence for stars with masses $\lesssim1.5M_\odot$, as shown by Fig. \ref{fig_rc_hp}, so that the overshooting distance is then limited by the size of the core itself. We have imposed the adiabatic temperature gradient in the overshoot region. \begin{figure} \begin{center} \includegraphics[width=9cm]{fig_rc_hp.ps} \end{center} \caption{Variations in the pressure scale height $H_P$ (dashed line) and the radius of the extended convective core $R_{\rm c}$ (solid blue line) with age for a 1.3 $M_\odot$ \textsc{Cesam2k}\ model with solar metallicity, a solar-calibrated value for the mixing length, $Y_0=0.26$, and $\alpha\ind{ov}=0.1$. The gray solid line indicates the Schwarzschild limit. \label{fig_rc_hp}} \end{figure} Microscopic diffusion is known to increase the abundance of heavy elements in the core as the star evolves, and thus to increase the size of convective cores. In this section, microscopic diffusion is not included in the models, so that part of the core extension imposed by the overshoot parameter $\alpha_{\rm ov}$ can be attributed to its effects. The contribution from microscopic diffusion is addressed in Sect. \ref{sect_grid}.
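The prescription of Eq. \ref{eq_dov} is simple enough to be stated in a few lines. Below is a minimal Python sketch (the function name and the numerical values are ours, for illustration only, not taken from \textsc{Cesam2k}):

```python
def overshoot_distance(alpha_ov, h_p, r_s):
    """Overshooting distance d_ov = alpha_ov * min(H_P, r_s) (Eq. 1).

    alpha_ov -- overshoot parameter (dimensionless, here between 0 and 0.3)
    h_p      -- pressure scale height at the edge of the convective core
    r_s      -- Schwarzschild radius of the core (same units as h_p)
    """
    return alpha_ov * min(h_p, r_s)

# During most of the MS of a low-mass star, r_s < H_P, so the extension
# is limited by the core size itself (illustrative values, in solar radii):
d_ov = overshoot_distance(0.1, h_p=0.14, r_s=0.06)  # -> 0.006
```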
The grid was computed with masses ranging from 0.9 to 1.5 $M_\odot$ (step 0.05 $M_\odot$), metallicities from $-0.4$ to 0.4 dex (step 0.1 dex), and two values of the initial helium abundance (0.26 or 0.30). Models were computed for values of $\alpha\ind{ov}$ ranging from 0 to 0.3 (step 0.05). For each evolutionary sequence, the mode frequencies were computed with the oscillation code \textsc{losc}\ (\citealt{losc}) for about 60 models between the ZAMS and the beginning of the subgiant phase. We stopped the evolution as soon as mixed modes appear around $\nu_{\rm max}$, because these modes cause abrupt variations in the $r_{010}$ ratios and prevent them from being directly used as a diagnostic for the core size. \begin{figure*} \begin{center} \includegraphics[width=8cm]{evol_ratio.ps} \includegraphics[width=8cm]{evol_cc.ps} \end{center} \caption{\textbf{Left}: Evolutionary tracks of stellar models of 1.2 $M_\odot$ in the $(a_1,a_0)$ plane for different amounts of core overshooting: $\alpha\ind{ov}=0$ (gray), 0.1 (blue), 0.15 (cyan), 0.2 (green), 0.25 (red), and 0.3 (purple). Full (resp. dashed) lines indicate that the model has a convective (resp. radiative) core. \textbf{Right}: Variations in the size of the convective core as a function of age for the same models. \label{fig_evol_ratio}} \end{figure*} For each of the models along the evolutionary tracks, we fitted 2$^{\rm nd}$ order polynomials of the type \begin{equation} P(\nu) = a_0 + a_1 (\nu-\beta) + a_2(\nu-\gamma_1)(\nu-\gamma_2) \label{eq_poly} \end{equation} to the $r_{010}$ ratios. The parameters $\beta$, $\gamma_1$, and $\gamma_2$ were chosen to ensure that $P(\nu)$ is a sum of orthogonal polynomials for each model. The fits were performed in the approximate frequency range where modes are expected to be observed, i.e. about 12 orders around the frequency of maximum power of oscillations $\nu_{\rm max}$.
This latter frequency was estimated for stellar models by assuming that it scales as the acoustic cutoff frequency. This assumption, which is the basis of the so-called seismic scaling relations, has been observationally verified to hold at the level of a few percent (\citealt{stello08}, \citealt{huber11}, \citealt{silva12}), and is gaining theoretical support (\citealt{belkacem11}). We note that during most of the MS, the $r_{010}$ ratios vary roughly linearly with frequency in the range of observed frequencies, so that the coefficient $a_2$ of the fit is negligible. \subsubsection{Evolutionary tracks in the $(a_1, a_0)$ plane} Before commenting on the results of the grid, we show as an example the evolutionary tracks in the $(a_1, a_0)$ plane (slope versus mean value) of 1.2-$M_\odot$ models for different amounts of core overshooting (Fig. \ref{fig_evol_ratio}a). For comparison, the variations in the size of the convective core for the same models are shown as a function of age in Fig. \ref{fig_evol_ratio}b. As mentioned by \cite{silva11}, the trajectory of models in the $(a_1, a_0)$ plane depends in a complex way on the evolutionary stage, the size of the convective core, and the amplitude of the glitch in the sound speed. However, we can still broadly understand it. At the beginning of the MS, the stars with different $\alpha_{\rm ov}$ start roughly at the same point in the $(a_1, a_0)$ plane (bottom right corner in Fig. \ref{fig_evol_ratio}a). Indeed, the $\mu$-gradient at the edge of the core has not had time to build up yet, so the $r_{010}$ ratios are still nearly independent of the size of the convective core. As the star evolves, the glitch in the sound speed profile builds up, which causes the amplitude of the oscillations of the $r_{010}$ ratios to increase. Therefore both the mean value $a_0$ and the absolute value of the slope $|a_1|$ of the ratios increase. Moreover, as the star evolves, its frequency of maximum power $\nu_{\rm max}$ decreases.
As a result, the range of observable frequencies shifts to a different part of the oscillation produced by the glitch. As can be seen in Fig. \ref{fig_ratio_parabola}, when stars reach the end of the MS, the $r_{010}$ ratios lie around a maximum of this oscillation, which results in a decrease of the absolute value of the slope $|a_1|$. For post-main sequence stars, the mean slope $a_1$ even becomes positive. This explains why the evolutionary tracks of models in the $(a_1, a_0)$ plane are roughly circular, as can be seen in Fig. \ref{fig_evol_ratio}. For models with larger amounts of overshooting, the convective core is larger. Therefore, the period of the oscillation caused by the glitch is shorter and the absolute mean slope $|a_1|$ of the $r_{010}$ ratios is larger. As a result, stars with larger $\alpha\ind{ov}$ are shifted to the left in the $(a_1, a_0)$ plane, and they describe larger circles. This confirms previous statements that the position in the $(a_1,a_0)$ plane is a good discriminant of the size of the core if all parameters other than $\alpha\ind{ov}$ are fixed. \subsubsection{Results of the grid} \begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_trend_110.ps} \includegraphics[width=8cm]{fig_ratio_trend_95.ps} \includegraphics[width=8cm]{fig_ratio_trend_80.ps} \includegraphics[width=8cm]{fig_ratio_trend_65.ps} \end{center} \caption{Location of models in the $(a_1,a_0)$ plane at fixed $\Delta\nu$. Colors indicate the amount of core overshooting: $\alpha\ind{ov}=0$ (gray), 0.1 (blue), 0.15 (cyan), 0.2 (green), 0.25 (red), 0.3 (magenta). Open squares indicate models with a convective core, and crosses, models with radiative cores. The black open circles indicate models that are in the post-main-sequence ($X_{\rm c} < 10^{-2}$). \label{fig_ratio_trend}} \end{figure*} When solar-like oscillations are detected in a star, it is usually straightforward to estimate the mean large separation of its acoustic modes $\Delta\nu$.
We thus chose to show the results of the grid at fixed values of $\Delta\nu$. This time, each evolutionary sequence of our grid is represented as a dot in the $(a_1,a_0)$ plane, provided its large separation matches the chosen value of $\Delta\nu$ at some point along the evolution. Fig. \ref{fig_ratio_trend} shows the location of the models in the $(a_1,a_0)$ plane for four values of $\Delta\nu$: 110, 95, 80, and 65 $\mu$Hz. For $\Delta\nu= 110\,\mu$Hz (top left plot), there is a relative degeneracy of the models in the $(a_1,a_0)$ plane. This can be understood because only low-mass unevolved stars reach such a high value of $\Delta\nu$. Higher-mass stars begin the MS with a lower $\Delta\nu$, and this quantity further decreases as the star evolves\footnote{For instance, 1.25-$M_\odot$ stars at solar metallicity reach the ZAMS with $\Delta\nu\sim110\,\mu$Hz, so more massive stars never reach this value.}. For this reason, few of the stars with $\Delta\nu=110\,\mu$Hz have a convective core, and those that do are still close to the ZAMS, so the $\mu$-gradient has not had time to build up yet and the $r_{010}$ ratios are still insensitive to it. The diagnostic is thus less efficient for $\Delta\nu\gtrsim110\,\mu$Hz. For lower values of $\Delta\nu$, different populations are represented: (1) evolved low-mass stars (in the PoMS for the lowest masses) and (2) MS higher-mass stars. In these cases, Fig. \ref{fig_ratio_trend} clearly shows that the location of a model in the $(a_1,a_0)$ plane can be used to estimate: \begin{itemize} \item \textbf{the evolutionary state}: as mentioned before, when stars leave the MS, the mean slope $a_1$ of the ratios increases and becomes positive. As a result, PoMS models occupy a place in the $(a_1,a_0)$ plane that is increasingly distinct from that of MS models as the large separation decreases. This opens the possibility of determining the evolutionary status of a star from its location in the $(a_1,a_0)$ plane.
\item \textbf{the existence and the size of the convective core}: for stars with large separations below $\sim$ 95 $\mu$Hz, models with $\alpha\ind{ov}=0$, 0.1, 0.2, and 0.3 occupy distinct regions in the $(a_1,a_0)$ plane, which suggests that it should be possible to measure the size of the mixed core by using the location of the star in this plane. \end{itemize} We stress that the effects of metallicity on the size of the core are here taken into account in a very conservative way, since the models of Fig. \ref{fig_ratio_trend} include a wide range of metallicities ($-0.4$ to 0.4 dex). In practice, the metallicity of an observed star is usually known with a much better accuracy if spectroscopic measurements are available. We thus conclude that the $r_{010}$ ratios are in principle an efficient tool to measure the size of convective cores, provided the observed star is evolved enough to have developed a glitch in the sound speed at the edge of the core. \section{Extracting the $r_{010}$ ratios from \textit{Kepler}\ targets \label{sect_analysis}} \subsection{Selection of targets} Based on the tests performed on stellar models in Sect. \ref{sect_test_diagnostic}, we established a set of criteria to select \textit{Kepler}\ targets for which the $r_{010}$ ratios should provide a good diagnostic for the core structure. We selected stars for which \begin{itemize} \item the mean large separation is below $110\,\mu$Hz, so that the diagnostic tool is efficient \item no mixed modes are contaminating the $r_{010}$ ratios \item a long enough data set is available, so that a good precision can be attained in the estimates of the parameters $a_i$. Even with 9 months of \textit{Kepler}\ data, the $r_{010}$ ratios of a target studied by \cite{silva13} were contaminated by a spurious increase in the low-signal-to-noise part of the spectrum. 
To avoid these features that might bias our estimates of the $a_i$ parameters, we selected only stars that were observed for at least 9 months. \item the observed modes are narrow enough: we excluded F stars, whose modes are too wide to unambiguously distinguish the $l=1$ ridge from the $l=0$ and $l=2$ ridges in an \'echelle diagram. We note that Bayesian methods have been proposed to identify the degree of the modes and extract the mode frequencies even in these cases (e.g. \citealt{benomar09a}). However, this type of analysis requires a dedicated effort, which could be undertaken as an interesting follow-up to this work to explore the sizes of convective cores in higher-mass stars. \end{itemize} We applied these criteria to the solar-like pulsators whose global parameters were determined by \cite{chaplin14} and obtained a list of 24 targets, which are given in Table \ref{tab_param}. Most of these stars were also observed spectroscopically from the ground, which yielded estimates of the effective temperature and of the surface metallicity. When available, these measurements are specified in Table \ref{tab_param}. \begin{table*} \begin{center} \caption{Global parameters of the selected targets.
\label{tab_global_param} \label{tab_param}} \begin{tabular}{l c c c c c c} \hline \hline \T \B KIC ID & $\Delta\nu$ ($\mu$Hz) & $\nu\ind{max}$ ($\mu$Hz) & $T_{\rm eff}^{\rm photo}$ (K)$^{\rm a}$ & $T_{\rm eff}^{\rm spectro}$ (K)$^{\rm b}$ & $[$Fe/H$]$ (dex)$^{\rm b}$ & $M/M_\odot$ \\ \hline \T 8394589 & $109.44 \pm 0.04$ & $2373 \pm 39$ & $6251 \pm 54$ & $6114 \pm 60$ & $-0.36 \pm 0.06$ & $1.18 \pm0.08$ \\ 9098294 & $108.92 \pm 0.03$ & $2282 \pm 26$ & $6020 \pm 51$ & $5840 \pm 60$ & $-0.13 \pm 0.06$ & $1.00 \pm0.05$ \\ 9410862 & $107.21 \pm 0.08$ & $2278 \pm 42$ & $6230 \pm 53$ & - & - & $1.17 \pm0.08$ \\ 6225718 & $106.00 \pm 0.03$ & $2316 \pm 38$ & - & $6230 \pm 60$ & $-0.17 \pm 0.06$ & $1.29 \pm0.08$ \\ 10454113 & $105.55 \pm 0.07$ & $2394 \pm 75$ & $6197 \pm 45$ & $6120 \pm 60$ & $-0.06 \pm 0.06$ & $1.41 \pm0.16$ \\ 6106415 & $104.20 \pm 0.02$ & $2224 \pm 25$ & - & $5990 \pm 60$ & $-0.09 \pm 0.06$ & $1.15 \pm0.06$ \\ 10963065 & $103.15 \pm 0.03$ & $2180 \pm 22$ & $6316 \pm 45$ & $6060 \pm 60$ & $-0.20 \pm 0.06$ & $1.15 \pm0.05$ \\ 6116048 & $100.72 \pm 0.02$ & $2081 \pm 22$ & $6072 \pm 49$ & $5935 \pm 60$ & $-0.24 \pm 0.06$ & $1.07 \pm0.05$ \\ 5184732 & $ 95.64 \pm 0.02$ & $2080 \pm 21$ & $5841 \pm 290$ & $5840 \pm 60$ & $ 0.38 \pm 0.06$ & $1.28 \pm0.06$ \\ 3656476 & $ 93.16 \pm 0.02$ & $1910 \pm 10$ & $5684 \pm 56$ & $5710 \pm 60$ & $ 0.34 \pm 0.06$ & $1.06 \pm0.03$ \\ 7296438 & $ 88.68 \pm 0.04$ & $1848 \pm 16$ & $5749 \pm 56$ & - & - & $1.18 \pm0.05$ \\ 4914923 & $ 88.58 \pm 0.02$ & $1800 \pm 15$ & - & $5905 \pm 60$ & $ 0.17 \pm 0.06$ & $1.14 \pm0.05$ \\ 12009504 & $ 88.38 \pm 0.04$ & $1848 \pm 22$ & $6270 \pm 61$ & $6065 \pm 60$ & $-0.09 \pm 0.06$ & $1.30 \pm0.07$ \\ 8938364 & $ 85.59 \pm 0.02$ & $1652 \pm 10$ & $5965 \pm 62$ & $5630 \pm 60$ & $-0.20 \pm 0.06$ & $0.94 \pm0.03$ \\ 7680114 & $ 85.18 \pm 0.02$ & $1697 \pm 10$ & $5800 \pm 56$ & $5855 \pm 60$ & $ 0.11 \pm 0.06$ & $1.11 \pm0.04$ \\ 10516096 & $ 84.43 \pm 0.03$ & $1666 \pm 12$ & $6123 \pm 48$ & 
$5940 \pm 60$ & $-0.06 \pm 0.06$ & $1.11 \pm0.04$ \\ 7206837 & $ 79.10 \pm 0.07$ & $1653 \pm 23$ & $6392 \pm 59$ & $6304 \pm 60$ & $ 0.14 \pm 0.06$ & $1.54 \pm0.09$ \\ 8176564 & $ 77.86 \pm 0.08$ & $1518 \pm 10$ & $6109 \pm 51$ & - & - & $1.21 \pm0.05$ \\ 8694723 & $ 75.22 \pm 0.04$ & $1431 \pm 11$ & $6351 \pm 62$ & $6120 \pm 60$ & $-0.59 \pm 0.06$ & $1.17 \pm0.05$ \\ 12258514 & $ 74.96 \pm 0.02$ & $1491 \pm 13$ & $5990 \pm 85$ & $5990 \pm 60$ & $ 0.04 \pm 0.06$ & $1.29 \pm0.06$ \\ 6933899 & $ 72.26 \pm 0.02$ & $1377 \pm 8$ & $5841 \pm 56$ & $5860 \pm 60$ & $ 0.02 \pm 0.06$ & $1.14 \pm0.04$ \\ 11244118 & $ 71.50 \pm 0.02$ & $1376 \pm 7$ & $5618 \pm 64$ & $5745 \pm 60$ & $ 0.35 \pm 0.06$ & $1.15 \pm0.04$ \\ 7510397 & $ 62.43 \pm 0.04$ & $1183 \pm 17$ & $6211 \pm 67$ & $6110 \pm 60$ & $-0.23 \pm 0.06$ & $1.39 \pm0.09$ \\ \B 8228742 & $ 62.29 \pm 0.04$ & $1170 \pm 7$ & $6130 \pm 51$ & $6042 \pm 60$ & $-0.14 \pm 0.06$ & $1.33 \pm0.05$ \\ \hline \end{tabular} \end{center} {\small \textbf{References:} $^{\rm a}$\cite{pinsonneault12}, $^{\rm b}$\cite{bruntt12}} \end{table*} \subsection{Extraction of the mode frequencies} The mode frequencies of 13 out of the 24 selected targets were already extracted from \textit{Kepler}\ observations by \cite{appourchaux12b}. However, this study was performed with nine months of \textit{Kepler}\ data, whereas at current time almost three years of data are available in the most favorable cases. We thus decided to reanalyze all the targets of the selected sample using the full \textit{Kepler}\ data sets available (until Q16) to date. For this purpose, we used a maximum likelihood estimation (MLE) method in the same way as previously applied to \corot\ and \textit{Kepler}\ targets (e.g. \citealt{appourchaux08}, \citealt{deheuvels10a}). For each star, we adjusted Lorentzian profiles to all the modes simultaneously (global fits). 
We here neglected the rotational splitting of the modes and fitted only one component for each multiplet of degree $l$ and radial order $n$. Since the stars of the sample are expected to be slow rotators, the rotational multiplets should be approximately symmetrical with respect to their $m=0$ component. As a result, we expect negligible bias due to rotation in our estimates of the mode frequencies. We stress that in this work, we were only interested in estimating the $a_i$ parameters of a polynomial fit to the $r_{010}$ ratios of the observed stars. As a result, we did not seek to estimate the frequencies of lower signal-to-noise modes around the edges of the frequency range of observed modes. We obtained estimates of the mode parameters over 9 to 15 overtones for the 24 targets. The results are given in Tables \ref{tab_freq0} to \ref{tab_freq5} in Appendix \ref{app_tabfreq}. Our results are in good agreement with those obtained by \cite{appourchaux12b} for the targets that are among our sample. We indeed found that 31\% (resp. 8\%, 3\%) of the fitted mode frequencies differ by more than 1 (resp. 2, 3) $\sigma$ from the results of \cite{appourchaux12b}, which is close to what is statistically expected. We used the estimated mode frequencies to evaluate the global seismic parameters of the selected targets. A linear regression of the frequencies of $l=0$ modes as a function of the radial order $n$ provided an estimate of the mean large separation $\Delta\nu$. The obtained values are given in Table \ref{tab_param}. We then performed a Gaussian fit to the mode amplitudes as a function of frequency. The central frequency of the fitted Gaussian provides an estimate of the frequency of maximum power of the oscillations $\nu_{\rm max}$ (see Table \ref{tab_param}). Seismic scaling relations were then used to relate the global seismic parameters $\Delta\nu$ and $\nu_{\rm max}$, and the effective temperature $T_{\rm eff}$ to the stellar mass and radius.
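The two steps just described, a linear regression of the $l=0$ frequencies against radial order to estimate $\Delta\nu$, followed by the seismic scaling relations, can be sketched as follows (a minimal Python sketch; the solar reference values are common choices from the literature and are our assumptions, not necessarily those used in this work):

```python
import numpy as np

def mean_large_separation(n, freqs_l0):
    """Mean large separation Delta_nu from a linear regression of the
    l=0 mode frequencies against their radial order n."""
    slope, _ = np.polyfit(n, freqs_l0, 1)
    return slope

def scaling_mass_radius(dnu, numax, teff,
                        dnu_sun=135.1, numax_sun=3050.0, teff_sun=5777.0):
    """Mass and radius (solar units) from the usual seismic scaling
    relations, assuming nu_max scales as the acoustic cutoff frequency."""
    radius = (numax / numax_sun) * (dnu / dnu_sun) ** -2 * (teff / teff_sun) ** 0.5
    mass = (numax / numax_sun) ** 3 * (dnu / dnu_sun) ** -4 * (teff / teff_sun) ** 1.5
    return mass, radius
```

As a sanity check, plugging in the values of Table \ref{tab_param} for KIC~8394589 ($\Delta\nu=109.44\,\mu$Hz, $\nu_{\rm max}=2373\,\mu$Hz, $T_{\rm eff}=6114$ K) returns a mass consistent with the tabulated $1.18\pm0.08\,M_\odot$.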
The underlying assumption behind seismic scaling relations was already mentioned in Sect. \ref{sect_descript_grid}. Whenever it was available, we used the spectroscopic $T_{\rm eff}$ obtained by \cite{bruntt12}. For the three stars of the sample that were not observed by \cite{bruntt12}, we used a photometric estimate obtained from the recipe proposed by \cite{pinsonneault12}, which was applied to the \textit{griz} photometry available from the \textit{Kepler}\ input catalogue (KIC). We thus obtained stellar masses ranging from 0.94 to 1.54 $M_\odot$ (see Table \ref{tab_param}). We note that for all the stars for which both spectroscopic and photometric estimates of $T_{\rm eff}$ were available, the agreement on the stellar masses obtained with both sets of $T_{\rm eff}$ is excellent (below 1 $\sigma$ for all stars except one at 1.7 $\sigma$). \subsection{Polynomial fit to $r_{010}$ ratios \label{sect_fit_r010}} \begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_trunc_006106415.ps} \includegraphics[width=8cm]{fig_ratio_trunc_012258514.ps} \end{center} \caption{Ratios $r_{010}$ computed for KIC~6106415 (left) and KIC~12258514 (right) using the mode frequencies extracted from the \textit{Kepler}\ oscillation spectra (see text). The colored dashed lines correspond to $2^{\rm nd}$-order polynomial fits to the observed ratios using either the raw covariance matrix (gray lines) or the covariance matrix modified through truncated SVD (see Sect. \ref{sect_fit_r010}). \label{fig_ratio_MLE}} \end{figure*} We used the fitted mode frequencies listed in Tables \ref{tab_freq0} to \ref{tab_freq5} in Appendix \ref{app_tabfreq} to compute the $r_{010}$ ratios of all the stars of the sample. Two representative examples are shown in Fig. \ref{fig_ratio_MLE}. KIC~6106415 (left plot) is still in a phase where the $r_{010}$ ratios are roughly linear in the range of observed frequencies, while the ratios of KIC~12258514 have a more parabolic shape.
As predicted by stellar models, we found that the observed ratios are well reproduced by 2$^{\rm nd}$ degree polynomials. For several targets, the $r_{010}$ ratios deviate from a mere parabola because of a short-period oscillation around the parabolic general trend. This is expected and corresponds to the signature of the base of the convective envelope. In this work, the polynomial fit that we applied to the $r_{010}$ ratios filters out this contribution. This is to our advantage, since we are interested here only in probing the core properties. However, we stress that these signatures of the bottom of the convective envelope can potentially yield valuable model-independent constraints on the stellar structure (\citealt{mazumdar14}) and deserve further investigation. The dip in the profile of the adiabatic index $\Gamma_1$ corresponding to the region of second ionization of helium can also create a short-period oscillation in seismic indices; however, the $r_{010}$ ratios are almost insensitive to these shallow regions, and the amplitude of the corresponding oscillation is expected to be negligible. To fit polynomials to the observed $r_{010}$ ratios, one needs to take into account the high level of correlation between the data points. Indeed, each mode frequency enters several data points. The covariance matrix between linear combinations of the mode frequencies (e.g. between the $d_{01}$ and $d_{10}$ separations as defined by Eq. \ref{eq_d01} and \ref{eq_d10}) can easily be computed analytically, but it is much harder to obtain for the $r_{010}$ ratios because of the division by the large separations. We therefore resorted to Monte Carlo simulations using the observed mode frequencies and their associated error bars to estimate the covariance matrix $\vect{C}$ for each star.
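A minimal sketch of such a Monte Carlo estimate is given below, assuming the standard five-point definitions of the $r_{010}$ ratios (\citealt{roxburgh03}) and normally distributed, independent frequency errors; the function names and array layout (ratios interleaved in order of increasing frequency) are ours:

```python
import numpy as np

def r010(nu0, nu1):
    """Five-point r_01 and r_10 ratios from l=0 and l=1 frequencies of
    consecutive radial orders (standard five-point smoothed differences
    divided by the local large separations)."""
    r01 = [(nu0[n-1] - 4*nu1[n-1] + 6*nu0[n] - 4*nu1[n] + nu0[n+1])
           / (8.0 * (nu1[n] - nu1[n-1])) for n in range(1, len(nu0) - 1)]
    r10 = [-(nu1[n-1] - 4*nu0[n] + 6*nu1[n] - 4*nu0[n+1] + nu1[n+1])
           / (8.0 * (nu0[n+1] - nu0[n])) for n in range(1, len(nu1) - 1)]
    # interleave r01(n), r10(n), r01(n+1), ... in order of increasing frequency
    out = np.empty(len(r01) + len(r10))
    out[0::2], out[1::2] = r01, r10
    return out

def mc_covariance(nu0, nu1, sig0, sig1, n_draws=5000, seed=1):
    """Covariance matrix of the r010 ratios from Monte Carlo draws of the
    frequencies, assumed normally distributed and independent."""
    rng = np.random.default_rng(seed)
    draws = np.array([r010(nu0 + sig0 * rng.standard_normal(nu0.size),
                           nu1 + sig1 * rng.standard_normal(nu1.size))
                      for _ in range(n_draws)])
    return np.cov(draws, rowvar=False)
```

For a purely asymptotic spectrum (no glitch), the five-point differences vanish and the ratios are identically zero, which provides a convenient check of the implementation.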
This approach supposes that the errors in the mode frequency estimates are normally distributed, which has been shown to be a valid approximation (\citealt{benomar09a}), except for low signal-to-noise-ratio modes, which we have excluded here\footnote{When fitting the modes following a Bayesian approach coupled with a Markov chain Monte Carlo algorithm, the covariance matrix can be estimated without having to assume normally distributed errors (e.g. \citealt{davies15}).}. The optimal parameters $a_0$, $a_1$, and $a_2$ of the polynomial described in Eq. \ref{eq_poly} were then obtained by a least-square minimization of the residuals weighted by the coefficients of the inverse $\vect{W}$ of the covariance matrix, as described in Appendix \ref{app_polyfit}. This type of fitting is now routinely applied to fit stellar models constrained by combinations of mode frequencies (e.g. \citealt{silva13}, \citealt{lebreton14}). However, when applied directly to our simple case of a polynomial fit of the $r_{010}$ ratios, we obtained poor fits to the observed ratios (see gray dashed lines in Fig. \ref{fig_ratio_MLE}). After careful inspection of the results, we found that the covariance matrix $\vect{C}$ is in fact ill-conditioned, with a condition number of the order of $10^5$ or $10^6$. As a result, the covariance matrices are nearly non-invertible, which explains the poor agreement obtained by direct fitting. This property is not specific to our particular case, and we expect any covariance matrix built with combinations of frequencies to show similar behavior as the number of points increases. The condition number of the matrix $\vect{C}$ increases as the number of modes involved in the combinations of frequencies increases, which explains why the problem is so obvious for the $r_{010}$ ratios, but with a large enough number of points, it also arises for three-point separations. To remedy this problem, we applied truncated SVD to the covariance matrix as explained in Appendix \ref{app_polyfit}.
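In practice, this amounts to replacing $\vect{W}=\vect{C}^{-1}$ by a pseudo-inverse in which the smallest eigenvalues are suppressed. A minimal Python sketch is given below; the simple Euclidean orthogonalization used here to fix $\beta$, $\gamma_1$, and $\gamma_2$ is our own choice for illustration, the exact procedure being the one described in Appendix \ref{app_polyfit}:

```python
import numpy as np

def truncated_inverse(C, n_drop=5):
    """Pseudo-inverse of a covariance matrix after suppressing its n_drop
    smallest eigenvalues (truncated SVD), to tame ill-conditioning."""
    U, s, Vt = np.linalg.svd(C)
    s_inv = np.zeros_like(s)
    keep = len(s) - n_drop
    s_inv[:keep] = 1.0 / s[:keep]
    return (Vt.T * s_inv) @ U.T

def fit_ratios(nu, y, C, n_drop=5):
    """Weighted least-squares fit of P(nu) = a0 + a1(nu-beta) + a2(nu-g1)(nu-g2),
    with beta, g1, g2 fixed by orthogonality of the three basis polynomials
    over the observed frequencies."""
    beta = nu.mean()
    p1 = nu - beta
    # quadratic nu^2 - c1*nu + c0 orthogonal to 1 and (nu - beta):
    A = np.array([[nu.sum(), -len(nu)], [(nu * p1).sum(), -p1.sum()]])
    b = np.array([(nu**2).sum(), (nu**2 * p1).sum()])
    c1, c0 = np.linalg.solve(A, b)
    g1, g2 = np.roots([1.0, -c1, c0]).real  # roots are real for well-spread nodes
    X = np.column_stack([np.ones_like(nu), p1, (nu - g1) * (nu - g2)])
    W = truncated_inverse(C, n_drop)
    a = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a  # a0, a1, a2
```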
We found that suppressing the 5 smallest eigenvalues of matrix $\vect{C}$ is generally enough to obtain satisfactory fits to the observed $r_{010}$ ratios (red dashed lines in Fig. \ref{fig_ratio_MLE}). \section{Measuring the size of mixed cores in \textit{Kepler}\ targets \label{sect_grid}} Since the $r_{010}$ ratios have been shown to efficiently cancel out the contribution from the outer layers (\citealt{roxburgh03}), the observed ratios can be directly compared with those of models. We thus compared the observed $r_{010}$ ratios with those of two grids of models: the one computed with \textsc{Cesam2k}, which was described in Sect. \ref{sect_test_diagnostic}, and a second equivalent grid that was built with the evolutionary code \textsc{MESA}\ (\citealt{paxton11}, \citealt{paxton13}), which is described below. Obviously, these grids are too coarse to provide by themselves statistically reliable estimates of the stellar parameters, and in particular of the amount of core overshooting. However, based on the tests performed in Sect. \ref{sect_test_diagnostic}, these grids can be used to identify stars with a convective core and obtain a rough estimate of the extension of the mixed core in these stars. As a second step, presented in Sect. \ref{sect_calibrate}, these estimates were refined using a more sophisticated optimization procedure.
\begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_kepler_006225718_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_010454113_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_005184732_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_012009504_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_012258514_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_007510397_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_008228742_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_007206837_seb.ps} \end{center} \caption{Location in the $(a_1,a_0)$ plane (star symbols and black error bars) of the stars of the sample that were found to be on the MS with a convective core in this study. Models that reproduce the observed large separation, the spectroscopic estimate of metallicity, and the stellar mass derived from scaling laws within 3 $\sigma$ errors are overplotted. The symbols have the same meaning as in Fig. \ref{fig_ratio_trend}. \label{fig_ratio_kepler1}} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_kepler_008394589_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_009098294_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_009410862_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_006106415_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_010963065_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_006116048_seb.ps} \end{center} \caption{Location in the $(a_1,a_0)$ plane of the MS stars for which the presence of a convective core is uncertain. The symbols have the same meaning as in Fig. \ref{fig_ratio_kepler1}. 
\label{fig_ratio_kepler2}} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_kepler_003656476_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_004914923_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_007296438_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_008938364_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_007680114_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_010516096_seb.ps} \end{center} \caption{Location in the $(a_1,a_0)$ plane of the first six PoMS stars of the sample. The symbols have the same meaning as in Fig. \ref{fig_ratio_kepler1}. \label{fig_ratio_kepler3}} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_kepler_008176564_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_008694723_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_006933899_seb.ps} \includegraphics[width=8cm]{fig_ratio_kepler_011244118_seb.ps} \end{center} \caption{Location in the $(a_1,a_0)$ plane of the last four PoMS stars of the sample. The symbols have the same meaning as in Fig. \ref{fig_ratio_kepler1}. \label{fig_ratio_kepler4}} \end{figure*} \subsection{\textsc{Cesam2k}\ models \label{sect_cesam}} For each star of the sample, we selected among the grid described in Sect. \ref{sect_descript_grid} the models that have a surface metallicity within 3 $\sigma $ of the spectroscopic $[$Fe/H$]$ (all metallicities were included in the cases where no spectroscopic measurement was available), and a stellar mass within 3 $\sigma$ of the estimate obtained from scaling laws (see Table \ref{tab_param}). Among the selected evolutionary sequences, we retained only the models whose mean large separations bracket the observed $\Delta\nu$. We note that for both models and observations, the mean value of $\Delta\nu$ was estimated using only the modes below $\nu_{\rm max}$ so that the corresponding large separations are only slightly affected by near-surface effects. 
For the selected models, we fitted polynomials to the $r_{010}$ ratio as defined by Eq. \ref{eq_poly}. For this purpose, we used the same modes and the same values of $\beta$, $\gamma_1$, and $\gamma_2$ (see Eq. \ref{eq_poly}) as those found from the observations, so that the parameters $a_i$ of the models can be directly compared to the observed ones. Since the models that we retained do not exactly match the observed large separation, we performed an interpolation to obtain the parameters $a_i$ that correspond exactly to the observed $\Delta\nu$. This process was repeated for all the stars of the sample. Fig. \ref{fig_ratio_kepler1} through \ref{fig_ratio_kepler4} show the location of the selected models and the observations in the $(a_1,a_0)$ plane. A first reassuring observation is that all the observed stars occupy a place in the $(a_1,a_0)$ plane that is populated by models. This shows that in all cases, there exist models that simultaneously reproduce the observed trend of the $r_{010}$ ratio and the other global observational constraints. Secondly, as anticipated in the previous section, the evolutionary status of the observed stars can be unambiguously established in most cases using the diagnostic from the $r_{010}$ ratios. For 13 stars of the sample, the profile of the $r_{010}$ ratio is only compatible with MS models, the PoMS models lying at least several $\sigma$ away in the $(a_1,a_0)$ plane (see Fig. \ref{fig_ratio_kepler1} and \ref{fig_ratio_kepler2}). Conversely, 10 stars are clearly in the PoMS phase judging by their location in the $(a_1,a_0)$ plane (see Fig. \ref{fig_ratio_kepler3} and \ref{fig_ratio_kepler4}). We stress that it was not obvious at first sight that these 10 stars are in the subgiant phase. Indeed, the PoMS status of solar-like pulsators is generally established by the presence of mixed modes in their oscillation spectrum.
However, at the beginning of the subgiant phase, the lowest order g modes have not yet reached the frequency range of observed modes and such a diagnostic cannot be applied. This is the case for these 10 stars of the sample, and we have shown here that the general trend of the $r_{010}$ ratios is a powerful diagnostic for the evolutionary status in this situation. The evolutionary status remains ambiguous only for one star of the sample, KIC~9410862, which is either at the end of the main sequence or at the beginning of the subgiant phase (Fig. \ref{fig_ratio_kepler2}). Among the 13 MS targets, eight have values of the parameters $a_0$ and $a_1$ that can be reproduced only by models that have a convective core. These stars are listed in Table \ref{tab_convcore} and their locations in the $(a_1,a_0)$ plane are shown in Fig. \ref{fig_ratio_kepler1}. As predicted in Sect. \ref{sect_test_diagnostic}, we were able to use the position in the $(a_1,a_0)$ plane of the stars that have a convective core to obtain an estimate of the amount of core overshooting. Interestingly, the eight stars draw a quite consistent picture of the extension of convective cores in low-mass stars. \begin{itemize} \item {All the targets require an extended core compared to the classical Schwarzschild criterion}. Indeed, all the stars that have a convective core lie several $\sigma$ away from models computed without overshooting. \item {None of the targets were found to be consistent with a core overshooting above $\alpha\ind{ov}=0.2$}. \item The only target that is consistent with a core overshooting around $\alpha\ind{ov}=0.2$ (KIC~7206837) corresponds to the highest-mass star of the sample ($1.54\pm0.09\,M_\odot$ according to seismic scaling relations). This raises the question of a potential mass-dependence of the amount of core overshooting as implemented in the evolution code \textsc{Cesam2k}, which is addressed in more detail in Sect. \ref{sect_calibrate}.
\end{itemize} We stress that seismology provides information about the size of the mixed core at the current age of the star. The amounts of overshooting quoted above are those required for the evolution code \textsc{Cesam2k}\ to produce cores with the appropriate size. We caution that the values obtained for $\alpha\ind{ov}$ hold only for the prescription of core overshooting that is implemented in \textsc{Cesam2k}\ and should not be directly applied to other codes. We discuss this point in detail in Sect. \ref{sect_calibrate}. \begin{figure} \begin{center} \includegraphics[width=9cm]{compare_cc_kepler.eps} \end{center} \caption{Fractional mass of the convective core for the eight stars that were found to have a convective core in this study. For each star, the open symbols correspond to the core size of the five models of the two grids (blue squares for \textsc{Cesam2k}\ models, red circles for \textsc{mesa} models) that yield the lowest values of $\chi^2$ as defined by Eq. \ref{eq_chi2}. The filled squares give the core sizes obtained from a Levenberg-Marquardt optimization and the evolution code \textsc{Cesam2k}\ (see Sect. \ref{sect_optim}). \label{fig_compare_cc}} \end{figure} A more relevant quantity to quote is the extent of the mixed core obtained from seismic constraints.
To determine this for each of the stars for which a convective core was detected, we selected a subset of optimal models from the grid of models, defined as those that minimize the quantity \begin{equation} \chi^2 = \sum_{i=1}^N \frac{(\mathcal{O}_i^{\rm mod}-\mathcal{O}_i^{\rm obs})^2}{\sigma_i^2} \label{eq_chi2} \end{equation} where the $\mathcal{O}_i^{\rm obs}$ correspond to the $N$ observables used to constrain the models, namely the effective temperature $T_{\rm eff}$, the surface metallicity $(Z/X)$ (if available), the asteroseismic $\log g$, and the parameters $a_0$ and $a_1$ of the $2^{\rm nd}$ order polynomial fit of the observed $r_{010}$ ratio. The $\sigma_i$ are the measurement errors, and the $\mathcal{O}_i^{\rm mod}$ are the values corresponding to the observables computed from the models. We note that the observables can be regarded as independent (since we fitted a sum of orthogonal polynomials to the observed ratios) so that Eq. \ref{eq_chi2} holds. For each star, the fractional mass of the convective core $M_{\rm c}/M_\star$ for the five best models is shown in Fig. \ref{fig_compare_cc} (blue squares for \textsc{Cesam2k}\ models). We note that the spreads in $M_{\rm c}/M_\star$ observed in Fig. \ref{fig_compare_cc} cannot be interpreted as uncertainties on this quantity. Indeed, to estimate proper uncertainties one should have chosen the set of optimal models based on the variations of the $\chi^2$ function compared to the lowest value of $\chi^2$ in the grid ($\Delta\chi^2=1$, 4, and 9 provide 1, 2, and 3~$\sigma$ errors, respectively) but the grid computed here is too coarse to make such an approach possible\footnote{Proper uncertainties on the core sizes are obtained from optimizations in Sect. \ref{sect_optim}}. 
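The model-selection criterion of Eq. \ref{eq_chi2} can be sketched as follows; the observable values, error bars, and grid models below are all invented for illustration:

```python
import numpy as np

# Hypothetical observables: T_eff (K), (Z/X)_s, log g, a_0, a_1,
# with their 1-sigma errors (invented numbers).
obs   = np.array([6050.0, 0.022, 4.28, 0.037, -1.8e-4])
sigma = np.array([  70.0, 0.002, 0.03, 0.002,  2.0e-5])

def chi2(model):
    """Sum of squared, error-normalized residuals (Eq. chi2)."""
    return np.sum(((model - obs) / sigma) ** 2)

# Rank a few hypothetical grid models by chi^2 and order them
models = np.array([
    [6020.0, 0.021, 4.29, 0.036, -1.7e-4],
    [6150.0, 0.025, 4.25, 0.042, -2.5e-4],
    [6060.0, 0.022, 4.28, 0.038, -1.9e-4],
])
scores = np.array([chi2(m) for m in models])
best = np.argsort(scores)  # indices of models by increasing chi^2
```

In the study, the same ranking over the full grid selects the five lowest-$\chi^2$ models per star.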
\subsection{\textsc{MESA} models \label{sect_mesa}} \begin{figure*} \begin{center} \includegraphics[width=8cm]{fig_ratio_kepler_010454113_isa.ps} \includegraphics[width=8cm]{fig_ratio_kepler_007206837_isa.ps} \includegraphics[width=8cm]{fig_ratio_kepler_006106415_isa.ps} \includegraphics[width=8cm]{fig_ratio_kepler_006933899_isa.ps} \end{center} \caption{Location in the $(a_1,a_0)$ plane of four stars of the sample compared to the location of models computed with the evolution code \textsc{MESA}. Symbols are the same as in Fig. \ref{fig_ratio_kepler1}, except for the colors, which indicate diffusive overshooting parameters of: $f=0.004$ (gray), 0.010 (blue), 0.016 (cyan), 0.022 (green), 0.028 (red), 0.035 (magenta). \label{fig_ratio_kepler_isa}} \end{figure*} As mentioned above, we have also computed a second grid of models with the evolution code \textsc{MESA}\ (\citealt{paxton11}, \citealt{paxton13}). The \textsc{MESA}\ models were computed using the OPAL 2005 equation of state from the tables of \cite{rogers02}, which are completed at lower temperatures by the tables of \cite{saumon95}. \textsc{MESA}\ opacity tables are constructed by combining radiative opacities with the electron conduction opacities from \cite{cassisi07}. Radiative opacities are taken from \cite{ferguson05} for $2.7<\log T<4.5$ and OPAL opacities (\citealt{iglesias93,iglesias96}) for $3.75<\log T<8.7$. The low-temperature opacities of \cite{ferguson05} include the effects of molecules and grains on the radiative opacity. The nuclear reaction rates module from \textsc{MESA}\ contains the rates computed by \cite{caughlan88} and \cite{angulo99} (NACRE), with preference given to the NACRE rates when available. The atmosphere was described by Hopf's gray law. We used the solar mixture from \cite{grevesse93}.
Convection was treated using the classical mixing-length theory (MLT, \citealt{bohm58}) with a fixed mixing length parameter $\alpha_{\rm MLT}=1.9$, which corresponds to a solar calibration (\citealt{paxton11}). Core overshooting is included and described as a diffusive process, following \cite{herwig00}. For this purpose, an extra diffusion is added at the edge of the core, with a coefficient \begin{equation} D_{\rm ov}(r) = D_0 \exp \left[ -\frac{2(r-r_{\rm s})}{f H_P} \right] \end{equation} where $D_0$ is the MLT-derived diffusion coefficient near the Schwarzschild boundary, $H_P$ is the pressure scale height at this location, and $f$ is the adjustable overshooting parameter. To avoid unrealistically large extensions of convective cores, the current version of \textsc{MESA}\ uses a modified value $\widetilde{H}_P$ for the pressure scale height, defined as \begin{equation} \widetilde{H}_P = r_{\rm s}/\alpha_{\rm MLT} \label{eq_hptilde} \end{equation} in the case where the mixing length $\ell_{\rm MLT} = \alpha_{\rm MLT} H_P$ becomes larger than the Schwarzschild limit $r_{\rm s}$ of the core. This prescription is different from the one adopted in the \textsc{Cesam2k}\ code. When using the same prescription for core overshooting (instantaneous or diffusive) and the same overshooting parameter at the boundary of small convective cores, the approach followed by \textsc{MESA}\ is expected to yield core extensions that are smaller by a factor $\alpha_{\rm MLT}$ compared to the extensions produced with the \textsc{Cesam2k}\ approach in the saturated regime. Gravitational settling and chemical diffusion are taken into account by solving the equations of \cite{burgers69} using the method and diffusion coefficients of \cite{thoul94}. For each star of the sample, we performed the same model selection as was done with \textsc{Cesam2k}\ models, and for each selected model we fitted $2^{\rm nd}$-order polynomials to the $r_{010}$ ratios in the same way as described in Sect. 
\ref{sect_cesam}. This allowed us to compare the location of the observed stars in the $(a_1,a_0)$ plane to that of \textsc{MESA}\ models. Fig. \ref{fig_ratio_kepler_isa} shows the results obtained for four stars of the sample, which are representative of the different cases identified in Sect. \ref{sect_cesam}: KIC8228742 and KIC7206837 are in the MS and have a convective core, KIC6106415 is in the MS but has no convective core, and KIC6933899 is in the PoMS. The \textsc{MESA}\ grid agrees with the \textsc{Cesam2k}\ grid on the evolutionary status of all the stars of the sample. The star KIC9410862, whose evolutionary status was uncertain based on \textsc{Cesam2k}\ models, was found to be more consistent with models shortly after the end of the MS using the \textsc{MESA}\ grid. Additionally, the eight stars identified as having a convective core with the \textsc{Cesam2k}\ grid were also found to have one with the \textsc{MESA}\ grid. The locations of two of these stars in the $(a_1,a_0)$ plane are shown in the upper panels of Fig. \ref{fig_ratio_kepler_isa}. It is clear that the core extension can be estimated from the $a_0$ and $a_1$ parameters, as was claimed in Sect. \ref{sect_cesam}. Interestingly, all the conclusions reached with \textsc{Cesam2k}\ models about the amount of overshooting that is required are confirmed. The eight stars with convective cores all require an extended core with overshooting parameters ranging from 0.010 to 0.035, and the star that requires the largest amount of overshooting corresponds to the highest-mass star of the sample (KIC7206837), as was found in Sect. \ref{sect_cesam}. Obviously, the overshooting parameters obtained from the \textsc{MESA}\ models are not directly comparable to those found from the \textsc{Cesam2k}\ grid because a diffusive overshooting was chosen in \textsc{MESA}\ models. A more detailed comparison is provided in Sect.
\ref{sect_cesam_vs_mesa}, but we can already compare directly the absolute sizes of the extended cores found with both evolution codes. For all the stars that have a convective core, we selected the five models of the \textsc{MESA}\ grid that minimize the $\chi^2$ function as defined by Eq. \ref{eq_chi2}. The fractional mass of the mixed core $M_{\rm c}/M_\star$ in these models is shown in Fig. \ref{fig_compare_cc}. Interestingly, there is quite good agreement on the size of the extended cores obtained with both evolution codes, in spite of the different prescriptions for core overshooting. This is a further indication that the seismic diagnostic based on $r_{010}$ ratios can provide a measurement of the size of the mixed core mostly independently of the input physics, as was already suggested by \cite{silva11}. \subsection{Instantaneous vs diffusive mixing beyond convective cores \label{sect_diff_vs_step}} In this study, we have chosen to adopt two different prescriptions for core overshooting, an instantaneous overshooting (\textsc{Cesam2k}\ models) and a diffusive overshooting (\textsc{MESA}\ models), with the aim of confronting the two most frequently used prescriptions with \textit{Kepler}\ data. It is interesting to ask whether we can distinguish between these two types of mixing beyond convective cores using $r_{010}$ ratios. The \textsc{MESA}\ code offers the possibility to test this since both treatments have been implemented. We computed a 1.3-$M_\odot$ \textsc{MESA}\ model including diffusive overshooting with a parameter $f=0.020$, which we evolved until $X_{\rm c}$ had dropped to 0.2 (chosen arbitrarily). We also computed a \textsc{MESA}\ model including a step overshooting with $\alpha\ind{ov}=0.22$ and a slightly higher mass (1.31 $M_\odot$) evolved until it reached the same large separation as the diffusive-overshooting model.
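For illustration, the diffusive coefficient $D_{\rm ov}(r)$ with the $\widetilde{H}_P$ limiting of Eq. \ref{eq_hptilde} can be sketched as follows (units and numbers are arbitrary; this is a schematic of the prescription quoted in the text, not \textsc{MESA}'s implementation):

```python
import math

def d_ov(r, d0, r_s, h_p, f, alpha_mlt=1.9):
    """Sketch of the diffusive overshooting coefficient beyond the
    Schwarzschild boundary r_s (arbitrary units). When the mixing
    length alpha_mlt * h_p exceeds the core radius r_s, the pressure
    scale height is replaced by r_s / alpha_mlt, which limits the
    extension of small convective cores."""
    if alpha_mlt * h_p > r_s:
        h_p = r_s / alpha_mlt
    return d0 * math.exp(-2.0 * (r - r_s) / (f * h_p))
```

The exponential decay means the mixing efficiency drops rapidly over a fraction $f$ of the (possibly limited) pressure scale height beyond the core.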
We found that both models are indistinguishable from an observational point of view (within typical observational errors), and they also share a very similar behavior of the $r_{010}$ ratios, as is shown in Fig. \ref{fig_r010_diff_vs_step}. This shows that the seismic diagnostic based on the $r_{010}$ ratios is unfortunately not capable of distinguishing between the two scenarios regarding the nature of the extra mixing beyond the core. \begin{figure} \begin{center} \includegraphics[width=9cm]{fig_r010_diff_vs_step.ps} \end{center} \caption{Profile of $r_{010}$ for two \textsc{MESA}\ models: one with a diffusive overshooting ($f=0.02$) and a 1.3-$M_\odot$ mass (blue squares) and the other with a step overshooting ($\alpha\ind{ov}=0.22$) and a 1.31-$M_\odot$ mass (red circles). Both models have the same mean large separation. \label{fig_r010_diff_vs_step}} \end{figure} \section{Toward a calibration of core overshooting for low-mass stars \label{sect_calibrate}} In Sect. \ref{sect_grid}, we were able to measure the sizes of mixed cores in eight low-mass stars using seismology. The question is then how these results can be used to estimate the efficiency of the extra-mixing beyond convective cores. Answering this question is not straightforward. One could consider simply comparing the convective core masses obtained in Sect. \ref{sect_grid} to the convective core masses that would be obtained with identical stellar parameters but no mixing beyond the core. This is, however, inapplicable in practice because increasing the size of the convective core at the beginning of the main sequence has large subsequent effects on its composition and evolution. For stars in the mass range that we considered here, the main effect is that the abundance of $^3$He in the core increases, which increases its luminosity, and thus also its size because the Schwarzschild radius increases.
For instance, extending the convective core of a 1.3-$M_\odot$ star by 10\% of the Schwarzschild radius in fact results in an increase of the core radius by as much as 50\% during the main sequence. Another consequence is that the lifetime of small convective cores can be dramatically extended (see \citealt{roxburgh85}, \citealt{deheuvels10a}). For instance, a stellar model of KIC6225718 computed with the same stellar parameters as those found in Sect. \ref{sect_grid} but without including any extra-mixing beyond the core has lost its convective core at the current age. It therefore seems that the problem of the efficiency of convective core extensions cannot be studied independently from the evolution of the star, even though asteroseismology only tells us about the size of the mixed core at the current age. We thus chose to estimate the efficiency of the extra-mixing beyond the core by adjusting the overshooting parameter ($\alpha\ind{ov}$ for instantaneous mixing or $f$ for diffusive mixing), assumed constant throughout the evolution, so that stellar models have the right convective core size at the current age. As mentioned in the introduction, so far we have had to model convective core extensions using such simplistic parametric models because we lack observational constraints that would justify using more complex models. Our aim in this section is to search for correlations between the efficiency of overshooting and properties of stellar interiors, which might eventually give us better insight into the physical processes that are responsible for core extensions, and lead us to prefer more realistic models of this phenomenon. In the shorter term, this type of study can enable us to propose a calibration of the overshooting parameter, which can later be used in 1D stellar models.
\subsection{Calibration of core overshooting in \textsc{Cesam2k} \label{sect_optim}} To calibrate core overshooting in \textsc{Cesam2k}\ models, we needed to obtain more quantitative estimates of the amounts of core overshooting that are required for the stars of the sample. \subsubsection{Stars with a convective core \label{sect_convcore}} We performed optimizations for the eight stars that were found to have a convective core in Sect. \ref{sect_grid}. For this purpose, we used the Levenberg-Marquardt algorithm, which is an appealing alternative to grid-search minimization when the number of free parameters is large. This algorithm combines the low sensitivity to initial guesses of the gradient search method and the rapidity of convergence of the Newton-Raphson method. Its use for stellar modeling was first suggested by \cite{miglio05}. The main drawback of such an optimization technique is the risk of converging toward a secondary minimum of the cost function if the initial guesses are too far from the optimal set of parameters. In our particular case, this risk is minimized since we used the best models of the grid computed in Sect. \ref{sect_cesam} as initial guesses. To find optimal models, we minimized the quantity $\chi^2$ as defined in Eq. \ref{eq_chi2}. We used the same observables as those listed in Sect. \ref{sect_mesa}, to which we added the frequency of the lowest-order observed radial mode. This observable is preferred to the observed mean large separation because of its lower dependence on the structure of the outer layers. We note that the $a_2$ parameter of the $2^{\rm nd}$ order polynomial fit of the observed $r_{010}$ ratio was included here as a constraint. This parameter becomes constraining for evolved stars, for which the observed $r_{010}$ ratios depart from a simple linear relation (see Fig. \ref{fig_ratio_parabola}).
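The Levenberg-Marquardt scheme described above can be sketched on a toy linear least-squares problem (a pure-numpy illustration; the actual optimization evaluates a full stellar model at each iteration, and the toy data below are invented):

```python
import numpy as np

# Minimal Levenberg-Marquardt iteration: solve the damped normal
# equations (J^T J + lam*I) delta = -J^T r, accepting only steps that
# lower the summed squared residuals.
def lm_fit(residuals, jacobian, b0, n_iter=50, lam=1e-3):
    b = np.asarray(b0, dtype=float)
    for _ in range(n_iter):
        r = residuals(b)
        J = jacobian(b)
        delta = np.linalg.solve(J.T @ J + lam * np.eye(b.size), -J.T @ r)
        b_new = b + delta
        if np.sum(residuals(b_new) ** 2) < np.sum(r ** 2):
            b, lam = b_new, lam * 0.3   # accept step, relax damping
        else:
            lam *= 10.0                 # reject step, increase damping
    return b

# Toy problem: recover intercept and slope of y = 2 + 3x
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x
b_fit = lm_fit(lambda b: b[0] + b[1] * x - y,
               lambda b: np.column_stack([np.ones_like(x), x]),
               [0.0, 0.0])
```

Large damping makes the step resemble gradient descent; small damping recovers the Gauss-Newton step, which is what gives the method its fast convergence near the minimum.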
To reproduce these observables five parameters were left free: the stellar mass, age, initial helium abundance $Y_{\rm i}$, initial metallicity $(Z/X)_{\rm i}$, and the parameter of core overshooting $\alpha_{\rm ov}$. We imposed a lower limit of 0.24 for $Y_{\rm i}$ in order to exclude models with initial helium abundances significantly below the standard big bang nucleosynthesis (SBBN) values of $Y_0=0.248\pm0.007$ (\citealt{steigman10}). To limit the number of free parameters, we kept the mixing length fixed to $\alpha_{\rm CGM}=0.64$, which was obtained from a solar calibration. As a consequence, the fit that we performed has two degrees of freedom and a reduced value $\chi^2_{\rm red}$ was thus obtained by dividing the regular $\chi^2$ by two. For each star, two types of optimizations were performed, one where the effects of microscopic diffusion are neglected, and another that includes these effects following the formalism of \citealt{burgers69}. This procedure enabled us to test the influence of microscopic diffusion on the amount of core overshooting that is required. As mentioned above, diffusion increases the abundance of heavy elements in the core and thus the opacity, which results in an increase in the size of the convective core. We therefore expected to require less core overshooting when microscopic diffusion is included. Since \textsc{Cesam2k}\ does not include the computation of radiative accelerations of chemical elements, their effect was neglected in this study. Since radiative levitation acts against gravitational settling in the interior of stars with masses above about 1.2 $M_\odot$, our models including microscopic diffusion likely overestimate the sinking of heavy elements in this mass range. We thus expect our models computed with microscopic diffusion and stellar masses above $1.2\,M_\odot$ to provide us with an upper limit to the effects of diffusion, in particular on the sizes of convective cores. 
\begin{figure*} \begin{center} \includegraphics[width=9cm]{fig_mass_ov.ps} \includegraphics[width=9cm]{fig_zsx_ov.ps} \end{center}\caption{Amount of core overshooting found for the stars of the sample that have a convective core as a function of the fitted stellar mass (\textbf{left}) and as a function of the fitted initial metallicity (\textbf{right}). Blue squares indicate models computed without microscopic diffusion and grey circles, models where microscopic diffusion is included following \cite{burgers69}. The vertical arrows indicate upper limits of $\alpha\ind{ov}$ (see Sect. \ref{sect_noconvcore}). \label{fig_mass_ov}} \end{figure*} The parameters of the best-fit models are given in Table \ref{tab_convcore}. The quoted error bars were obtained as the square roots of the diagonal coefficients of the inverse of the Hessian matrix. The results confirm that the amount of core overshooting can be well constrained by using the parameters $a_i$. The values obtained for $\alpha\ind{ov}$ range from 0.07 to 0.18 in the case without diffusion, which is in good agreement with the results of the grids of models (Sect. \ref{sect_cesam}). As foreseen, the models that include microscopic diffusion require lower amounts of core overshooting to reproduce the seismic observations, with values ranging from 0.05 to 0.15. However, our results show that the effects of diffusion cannot by themselves account for the entire extension of convective cores since core overshooting was required for all eight stars of the sample. We note that for several stars of the sample, the fitted value of the initial helium abundance $Y_{\rm i}$ coincides with the lower limit of 0.24 that we have imposed to avoid sub-SBBN helium abundances. Similar results have been found in several studies where seismic modeling was performed (e.g. \citealt{metcalfe14}, \citealt{silva15}). This is potentially the consequence of the well-known correlation between stellar mass and helium abundance (\citealt{lebreton12}).
For these stars, we have performed additional fits imposing a higher lower limit of 0.26 on $Y_{\rm i}$ and found results that agree within 1-$\sigma$ errors with the values quoted in Table \ref{tab_convcore} (in particular, we found very little difference in the sizes of convective cores, which is our main interest here). The optimizations also provided estimates of the stellar mass, which are given in Table \ref{tab_convcore}. The agreement with estimates from scaling laws is quite good (below 1.3~$\sigma$ for all the stars). We note that KIC12009504 was already modeled by \cite{silva13}, who found that this star possesses a convective core that extends beyond the Schwarzschild limit. Our results for this star are in good agreement with those of \cite{silva13}. The values of $\chi^2_{\rm red}$ for some of our fits are significantly larger than 1, which in principle indicates either disagreements between models and observations, or underestimated error bars for the observables. Table \ref{tab_convcore} gives the level of agreement with observations for each fitted parameter normalized by observational 1-$\sigma$ errors. It shows that a very good level of agreement is reached for the $a_0$ and $a_1$ parameters, as was expected based on the results of Sect. \ref{sect_grid}. In contrast, disagreements above the 3-$\sigma$ level arise for the $a_2$ parameter. This occurs mainly for stars where the $r_{010}$ ratios vary nearly linearly with frequency, so that the $a_2$ coefficient is small. In this case, the observational estimate of $a_2$ can be altered by the short-period oscillation that arises because of the glitch at the base of the convection zone (see Sect. \ref{sect_fit_r010}). Disagreements above the 2-$\sigma$ level also arise for the effective temperature and the surface metallicity. We note that we have used the error bars of \cite{bruntt12} for these quantities, which have been deemed somewhat underestimated in previous studies (\citealt{silva13}).
This might at least partly explain this disagreement. Also, we note that the agreement with the observed surface metallicities improves when including microscopic diffusion in the models. Using our optimizations, we could also obtain estimates of the total size of the mixed core in the eight stars. Since the size of the core is not a fitted parameter, the optimization algorithm does not directly provide error bars on the obtained values. However, they can be deduced from the relation \begin{equation} \sigma_{M_{\rm c}} = \sqrt{\sum_{j=1}^{P} \sigma_j^2\left(\frac{\partial M_{\rm c}}{\partial b_j}\right)^2} \end{equation} where the $b_j$ terms correspond to the $P$ free parameters and the derivatives $\left(\partial M_{\rm c}/\partial b_j\right)$ can be evaluated with the models used to compute the Hessian matrix. The fractional masses of the convective cores for the eight stars are plotted along with their error bars in Fig. \ref{fig_compare_cc}. The refined estimates of the amount of core overshooting in the eight stars that have a convective core enabled us to test correlations between the overshooting parameter $\alpha\ind{ov}$ and other stellar parameters. Fig. \ref{fig_mass_ov}a shows the obtained values of $\alpha\ind{ov}$ as a function of the stellar mass. We observe that there seems to be a tendency for core overshooting to increase with stellar mass in this mass range. This tendency is less clear for the models where microscopic diffusion was included (grey circles), but we still found in this case that the three least massive stars of the sample require less core overshooting than the five more massive ones. Clearly, more data points are required to be conclusive, but if such a tendency is confirmed, then an empirical law could be derived and implemented in the \textsc{Cesam2k}\ code in order to better model the extent of mixed cores for stars in this mass range.
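The error propagation formula above can be illustrated numerically; the parameter errors and finite-difference derivatives below are invented for the example:

```python
import numpy as np

# Sketch of the propagation of parameter errors sigma_j into an error
# on the core mass, using the derivatives dM_c/db_j evaluated by
# finite differences (all numbers invented).
sigma_b = np.array([0.03, 200.0, 0.01])     # e.g. errors on M, age, Y_i
dMc_db  = np.array([0.08, 1.0e-5, -0.15])   # dM_c/db_j per parameter

sigma_Mc = np.sqrt(np.sum((sigma_b * dMc_db) ** 2))
```

This quadrature sum neglects parameter correlations, consistent with keeping only the diagonal terms of the covariance.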
We also note that we have found no apparent dependence of the required amount of core overshooting on stellar metallicity (see Fig. \ref{fig_mass_ov}b). \afterpage{ \begin{landscape} \begin{table} \begin{center} \caption{Fitted parameters and characteristics of the best-fit models obtained for the stars of the sample that have a convective core. \label{tab_convcore}} \begin{tabular}{| l c | c c c c c | c | c c c c c c | c |} \hline \T & & \multicolumn{5}{ c |}{Free parameters} & Mixed core size & \multicolumn{6}{ c |}{Level of agreement normalized} & \\ \B & & & & & & & & \multicolumn{6}{ c |}{by observational 1-$\sigma$ errors} & \\ \hline \T\B KIC ID & diffusion & $M$ $(M_\odot)$ & $(Z/X)_0$ & $Y_0$ & Age (Myr) & $\alpha\ind{ov}$ & $M_{\rm c}/M_\star$ & $T_{\rm eff}$ & $\log g$ & $(Z/X)_{\rm s}$ & $a_0$ & $a_1$ & $a_2$ & $ \chi^2_{\rm red}$\\ \hline \T 6225718 & no & $1.26 \pm 0.03$ & $0.019 \pm 0.001$ & $0.24 \pm 0.01$ & $ 1603 \pm 225$ & $0.14 \pm 0.01$ & $0.06 \pm 0.03$ & $ 2.1$ & $ 0.5$ & $ 2.9$ & $ 0.2$ & $ 0.3$ & $ 2.4$ & $ 9.4$ \\ \B & B69 & $1.27 \pm 0.04$ & $0.019 \pm 0.002$ & $0.24 \pm 0.03$ & $ 1311 \pm 209$ & $0.14 \pm 0.01$ & $0.05 \pm 0.03$ & $ 2.2$ & $ 0.6$ & $ 2.0$ & $ 0.0$ & $ 0.0$ & $ 2.7$ & $ 8.3$ \\ \T 10454113 & no & $1.27 \pm 0.02$ & $0.023 \pm 0.002$ & $0.24 \pm 0.02$ & $ 1761 \pm 145$ & $0.17 \pm 0.01$ & $0.06 \pm 0.02$ & $ 1.3$ & $ 0.6$ & $ 2.4$ & $ 0.2$ & $ 0.2$ & $ 1.3$ & $ 4.9$ \\ \B & B69 & $1.29 \pm 0.03$ & $0.023 \pm 0.002$ & $0.24 \pm 0.02$ & $ 1413 \pm 423$ & $0.15 \pm 0.01$ & $0.06 \pm 0.03$ & $ 2.6$ & $ 0.7$ & $ 1.7$ & $ 0.5$ & $ 1.1$ & $ 0.9$ & $ 6.2$ \\ \T 5184732 & no & $1.20 \pm 0.01$ & $0.057 \pm 0.001$ & $0.31 \pm 0.01$ & $ 3957 \pm 416$ & $0.07 \pm 0.01$ & $0.05 \pm 0.01$ & $ 0.2$ & $ 0.2$ & $ 1.4$ & $ 0.1$ & $ 0.2$ & $ 0.9$ & $ 1.7$ \\ \B & B69 & $1.17 \pm 0.02$ & $0.055 \pm 0.002$ & $0.32 \pm 0.01$ & $ 3770 \pm 351$ & $0.05 \pm 0.01$ & $0.05 \pm 0.03$ & $ 0.2$ & $ 0.0$ & $ 0.5$ & $ 0.1$ & $ 0.1$ & $ 0.9$ & $ 0.6$ \\
\T 12009504 & no & $1.20 \pm 0.01$ & $0.021 \pm 0.002$ & $0.25 \pm 0.02$ & $ 4275 \pm 939$ & $0.11 \pm 0.02$ & $0.06 \pm 0.02$ & $ 2.0$ & $ 0.2$ & $ 2.0$ & $ 0.1$ & $ 1.9$ & $ 0.6$ & $ 6.0$ \\ \B & B69 & $1.24 \pm 0.01$ & $0.023 \pm 0.001$ & $0.24 \pm 0.01$ & $ 3977 \pm 365$ & $0.05 \pm 0.03$ & $0.03 \pm 0.06$ & $ 1.1$ & $ 0.4$ & $ 1.4$ & $ 0.1$ & $ 1.1$ & $ 3.8$ & $ 9.4$ \\ \T 7206837 & no & $1.44 \pm 0.04$ & $0.035 \pm 0.002$ & $0.25 \pm 0.02$ & $ 2250 \pm 147$ & $0.18 \pm 0.02$ & $0.12 \pm 0.02$ & $ 0.9$ & $ 0.5$ & $ 1.9$ & $ 1.1$ & $ 1.9$ & $ 1.0$ & $ 5.2$ \\ \B & B69 & $1.40 \pm 0.08$ & $0.036 \pm 0.001$ & $0.26 \pm 0.03$ & $ 2179 \pm 276$ & $0.13 \pm 0.01$ & $0.10 \pm 0.02$ & $ 0.1$ & $ 0.3$ & $ 0.1$ & $ 0.0$ & $ 0.2$ & $ 0.0$ & $ 0.1$ \\ \T 12258514 & no & $1.24 \pm 0.02$ & $0.028 \pm 0.002$ & $0.28 \pm 0.02$ & $ 4472 \pm 138$ & $0.10 \pm 0.02$ & $0.07 \pm 0.01$ & $ 0.1$ & $ 0.5$ & $ 2.0$ & $ 0.5$ & $ 0.5$ & $ 3.3$ & $ 7.9$ \\ \B & B69 & $1.12 \pm 0.02$ & $0.027 \pm 0.001$ & $0.35 \pm 0.01$ & $ 3640 \pm 132$ & $0.07 \pm 0.01$ & $0.07 \pm 0.01$ & $ 2.1$ & $ 0.1$ & $ 0.6$ & $ 0.1$ & $ 0.2$ & $ 2.4$ & $ 5.4$ \\ \T 7510397 & no & $1.36 \pm 0.04$ & $0.017 \pm 0.001$ & $0.24 \pm 0.02$ & $ 3385 \pm 129$ & $0.15 \pm 0.01$ & $0.08 \pm 0.02$ & $ 2.5$ & $ 0.8$ & $ 3.0$ & $ 0.6$ & $ 0.6$ & $ 1.1$ & $ 8.8$ \\ \B & B69 & $1.35 \pm 0.01$ & $0.017 \pm 0.001$ & $0.24 \pm 0.01$ & $ 3397 \pm 31$ & $0.09 \pm 0.02$ & $0.07 \pm 0.01$ & $ 1.6$ & $ 0.7$ & $ 1.2$ & $ 0.2$ & $ 0.4$ & $ 0.2$ & $ 2.4$ \\ \T 8228742 & no & $1.33 \pm 0.05$ & $0.018 \pm 0.001$ & $0.24 \pm 0.03$ & $ 3968 \pm 88$ & $0.17 \pm 0.01$ & $0.08 \pm 0.01$ & $ 1.3$ & $ 0.2$ & $ 1.7$ & $ 0.3$ & $ 0.3$ & $ 0.6$ & $ 2.5$ \\ \B & B69 & $1.28 \pm 0.05$ & $0.019 \pm 0.002$ & $0.27 \pm 0.03$ & $ 3716 \pm 80$ & $0.13 \pm 0.02$ & $0.08 \pm 0.03$ & $ 1.2$ & $ 0.0$ & $ 1.0$ & $ 0.4$ & $ 0.5$ & $ 0.5$ & $ 1.6$ \\ \hline \end{tabular} \end{center} \end{table} \end{landscape} } \subsubsection{Stars without a convective core 
\label{sect_noconvcore}} Information about core overshooting can also be drawn from stars that have no convective core but lie just below the mass limit for having one. Indeed, above a certain amount of core overshooting, the models all develop a convective core and the profile of the $r_{010}$ ratios becomes at odds with the observations. These targets can thus be used to obtain an upper limit on the amount of core overshooting. For these targets, we performed optimizations using the Levenberg-Marquardt algorithm as before, except that we fixed the parameter of core overshooting to predefined values ranging from 0 to 0.3. The result of this procedure is shown as an example for KIC~10516096. For $\alpha\ind{ov}=0$, the fit converges toward a PoMS model with a mass of 1.12 $M_\odot$, an age of about 6.4 Gyr, a metallicity of $(Z/X)=0.0229$, and no convective core. For $0\leqslant\alpha\ind{ov}\leqslant0.15$, the fits converge toward roughly the same model. The only difference between the best-fit models is that the initial convective core survives longer for higher values of $\alpha\ind{ov}$ (about 1 Gyr for $\alpha\ind{ov}=0.15$ compared to 30 Myr for $\alpha\ind{ov}=0$). However, even with $\alpha\ind{ov}=0.15$ the convective core vanishes long before the end of the MS and its effect on the core structure has been washed out by the age of 6.4 Gyr. In contrast, for $\alpha\ind{ov}=0.2$ the model keeps a convective core until the end of the MS. As a result, the duration of the MS is extended and by the time the model reaches the observed large separation, it is still in the MS with a convective core and the $r_{010}$ ratio of this model is in poor agreement with the observations. Therefore, to decrease the $\chi^2$, the fit converges toward a model with higher metallicity ($Z/X = 0.0281$) for which the convective core vanishes before the end of the MS. However, this latter model is in poorer agreement with the observations, as can be seen in Fig.
\ref{fig_ov_chi2}. We thus obtained an upper limit on the overshooting parameter of about 0.19 for this star (the value of $\alpha\ind{ov}$ above which the obtained $\chi^2$ is larger than $\min(\chi^2)+9$). Similar results were found for one other PoMS star (KIC~6933899) and three MS stars (KIC~6106415, KIC~6116048, and KIC~8394589). For all these stars, the agreement deteriorates above a limiting value $\alpha_{\rm lim}$, with $0.16<\alpha_{\rm lim}<0.20$. These constraints were added as vertical arrows in Fig. \ref{fig_mass_ov}. Unfortunately, they are too loose to confirm the tendency of $\alpha\ind{ov}$ to increase with mass that was found in Sect. \ref{sect_convcore}. \begin{figure} \begin{center} \includegraphics[width=9cm]{fig_ov_chi2_010516096.ps} \end{center} \caption{Value of the $\chi^2$ of the best-fit model as a function of the (fixed) amount of core overshooting for the star KIC~10516096. \label{fig_ov_chi2}} \end{figure} \begin{table*} \begin{center} \caption{Characteristics of the best-fit models obtained for KIC5184732 when modifying the chosen input physics. The first column gives the alternate choices adopted in each new optimization. As mentioned in the text, the reference models have \textsc{OPAL05} equation of state, NACRE+LUNA reaction rates, no microscopic diffusion, and AGSS09 solar mixture.
\label{tab_sensitivity}} \begin{tabular}{l c c c c c c c} \hline \hline \T \B Tested input physics & $M$ $(M_\odot)$ & $(Z/X)_0$ & $Y_0$ & Age (Myr) & $\alpha\ind{ov}$ & $M_{\rm c}/M_\star$ & $\chi^2_{\rm red}$\\ \hline \T \textbf{Reference} & $1.20 \pm 0.01$ & $0.057 \pm 0.001$ & $0.31 \pm 0.01$ & $ 3957 \pm 416$ & $0.07 \pm 0.01$ & $0.05 \pm 0.01$ & $ 1.7$ \\ \T \textbf{Equation of State} & & & & & & & \\ \B \textsc{OPAL01} & $1.21 \pm 0.03$ & $0.056 \pm 0.005$ & $0.30 \pm 0.02$ & $ 3879 \pm 124$ & $0.08 \pm 0.02$ & $0.05 \pm 0.02$ & $ 1.6$ \\ \T \textbf{Nuclear reaction rates} & & & & & & & \\ \B NACRE & $1.21 \pm 0.02$ & $0.059 \pm 0.002$ & $0.32 \pm 0.01$ & $ 3408 \pm 555$ & $0.03 \pm 0.01$ & $0.06 \pm 0.03$ & $ 2.2$ \\ \T \textbf{Solar mixture} & & & & & & & \\ \B GN93 & $1.20 \pm 0.02$ & $0.055 \pm 0.004$ & $0.30 \pm 0.01$ & $ 3867 \pm 318$ & $0.07 \pm 0.01$ & $0.06 \pm 0.02 $ & $3.8$ \\ \hline \end{tabular} \end{center} \end{table*} \subsubsection{Sensitivity to input physics \label{sect_sensitivity}} Here, we briefly address the question of the sensitivity of our results to some of the choices of model input physics. We focused on one star (KIC5184732), chosen arbitrarily among the stars that were found to have a convective core and were modeled in Sect. \ref{sect_convcore}. We performed additional optimizations of this target, each time modifying one assumption about the model input physics. We note that the influence of microscopic diffusion, in particular on the size of the mixed core, was already addressed in Sect. \ref{sect_convcore}. We did not expect the measurement of the mixed core size to be modified because we have confirmed in this study that its inference is mostly independent of the model physics. However, the amount of overshooting required to produce the appropriate core size at the current age does depend on the input physics.
\paragraph{Equation of state} Our reference \textsc{Cesam2k}\ models were computed using the OPAL05 equation of state (\citealt{rogers02}). To estimate uncertainties linked to the choice of EoS, we performed a new optimization for the target KIC5184732 using the OPAL01 EoS instead. As can be seen in Table \ref{tab_sensitivity}, the fitted parameters all lie within the 1-$\sigma$ errors of the results obtained with the OPAL05 EoS. \paragraph{Nuclear reaction rates} We also calculated models of KIC5184732 retaining the NACRE nuclear reaction rate for the $^{14}$N$({\rm p},\gamma)^{15}$O reaction instead of the revised rate obtained from the LUNA facility (\citealt{formicola04}), which was used in Sect. \ref{sect_optim}. Table \ref{tab_sensitivity} gives the obtained fitted parameters. The amount of overshooting that is required to produce a mixed core with the appropriate size is significantly reduced. This is understandable, since the NACRE cross section for the $^{14}$N$({\rm p},\gamma)^{15}$O reaction was about 30\% higher than the revised LUNA rate. As a consequence, models computed with this previous cross section have a larger luminosity in the core, and thus a larger mixed core. The other fitted parameters are only slightly modified compared to the reference fit. In particular, the size of the mixed core is unchanged, within statistical errors. \paragraph{Solar mixture} We adopted the solar mixture of AGSS09, for which $(Z/X_\odot) = 0.0181$, in our reference models in Sect. \ref{sect_optim}. Here, we explored the impact of considering instead the solar mixture of \cite{grevesse93} (GN93), for which $(Z/X_\odot) = 0.0244$. As can be seen from Table \ref{tab_sensitivity}, this new optimization converged toward a solution with roughly the same abundance of heavy elements as in the reference fit using AGSS09. As a result, the fitted parameters are very similar to the reference case.
\subsection{Applicability to \textsc{MESA}\ models \label{sect_cesam_vs_mesa}} We now address the question of whether the prescription obtained for the \textsc{Cesam2k}\ code in Sect. \ref{sect_optim} can be applied to \textsc{MESA}\ models. We found in Sect. \ref{sect_diff_vs_step} that an instantaneous overshooting with $\alpha\ind{ov}$ is roughly equivalent to a diffusive overshooting with $f\sim\alpha\ind{ov}/10$, as was already pointed out in several previous studies (e.g. \citealt{noels10}). At first sight, this correspondence leads one to believe that the \textsc{MESA}\ models require core extensions larger than the \textsc{Cesam2k}\ models. For instance, for the target KIC7206837, an instantaneous overshooting with $\alpha\ind{ov}=0.18$ was found necessary with \textsc{Cesam2k}\ (see Table \ref{tab_convcore}), while \textsc{MESA}\ models required a diffusive overshoot parameter of $f=0.035$, which would translate into $\alpha\ind{ov}\approx0.35$ according to the established correspondence. However, we mentioned in Sect. \ref{sect_mesa} that when using the same prescription for core overshooting (instantaneous or diffusive) and the same overshooting parameter, \textsc{MESA}\ yields core extensions that are smaller by a factor $\alpha_{\rm MLT}$ compared to the extensions produced with \textsc{Cesam2k}. Since the \textsc{MESA}\ models were computed with $\alpha_{\rm MLT}=1.9$, the overshooting parameters obtained with \textsc{MESA}\ should be divided by a factor of 1.9 to be compared to the \textsc{Cesam2k}\ overshoot parameters. By doing this, we find that the diffusive overshooting parameter of $f=0.035$ obtained with \textsc{MESA}\ for KIC7206837 is equivalent to an instantaneous overshooting with $\alpha\ind{ov}=0.35/1.9\approx0.18$ in the \textsc{Cesam2k}\ formalism, which agrees with the value of $\alpha\ind{ov}$ obtained with \textsc{Cesam2k}\ for this star.
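The parameter translation described above can be condensed into a small helper. This is a hedged sketch that assumes only the two rough equivalences quoted in the text (the $f\sim\alpha\ind{ov}/10$ correspondence and the extra $\alpha_{\rm MLT}$ factor in \textsc{MESA}); the function name and defaults are ours:

```python
# Sketch of the overshoot-parameter conversion discussed above (assumptions:
# the rough equivalence f ~ alpha_ov/10 between diffusive and instantaneous
# overshooting, and the extra factor alpha_MLT in MESA's core extensions).

def mesa_f_to_cesam_alpha(f_mesa, alpha_mlt=1.9):
    """Translate a MESA diffusive overshoot parameter f into the roughly
    equivalent Cesam2k instantaneous overshoot parameter alpha_ov."""
    alpha_equiv = 10.0 * f_mesa       # diffusive -> instantaneous (f ~ alpha_ov/10)
    return alpha_equiv / alpha_mlt    # remove MESA's alpha_MLT factor

# KIC 7206837 case quoted in the text: f = 0.035 -> alpha_ov ~ 0.18
alpha_cesam = mesa_f_to_cesam_alpha(0.035)
```

With $f=0.035$ and $\alpha_{\rm MLT}=1.9$ this reproduces the $\alpha\ind{ov}\approx0.18$ value found independently with \textsc{Cesam2k}\ for this star.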
To push the comparison between \textsc{Cesam2k}\ and \textsc{MESA}\ further in terms of convective core size, we checked that if the exact same formalism is used for core overshooting (and therefore the same prescription for small convective cores), the two codes provide similar sizes for the extended convective cores. For this purpose, we evolved a 1.3-$M_\odot$ model with both \textsc{Cesam2k}\ and \textsc{MESA}, either without or with overshooting. In the latter case we used an instantaneous overshooting with $\alpha\ind{ov}=0.1$ in both codes, and redefined in \textsc{MESA}\ the overshooting distance for small convective cores using Eq. \ref{eq_dov} instead of Eq. \ref{eq_hptilde}. As shown by Fig. \ref{fig_cc_cesam_mesa}, the variations in the core size with age are very similar for \textsc{Cesam2k}\ (solid lines) and for \textsc{MESA}\ (dashed lines), both in the case without overshooting (black lines) and in the case with overshooting (red lines). The only slight differences occur right after the exhaustion of the initial $^{12}$C in the core, whose out-of-equilibrium burning creates the sharp peak in the core size between 15 and 25 Myr, and at the end of the main sequence, whose duration is slightly different in the two codes because of small differences in the input physics. \begin{figure} \begin{center} \includegraphics[width=9cm]{fig_cc_cesam_mesa.ps} \end{center} \caption{Variations in the size of the convective core with age for a 1.3 $M_\odot$ model without overshooting (black lines) and with $\alpha\ind{ov}=0.1$ (red lines). The gray lines indicate the Schwarzschild limit for the case with overshooting. \textsc{Cesam2k}\ models are shown as solid lines, while \textsc{MESA}\ models are represented by dashed lines.
\label{fig_cc_cesam_mesa}} \end{figure} We thus conclude that the prescription for the overshooting parameter as a function of stellar mass obtained with \textsc{Cesam2k}\ models should also be applicable to \textsc{MESA}\ models, provided the exact same formalism is considered for core overshooting. Consistency tests such as the one presented above should be performed before applying this prescription to other stellar evolution codes. \section{Conclusion} The main result of this paper is the detection of a convective core in eight main-sequence solar-like pulsators observed with the \textit{Kepler}\ space mission, and the asteroseismic measurement of the extent of the core in these stars. For this purpose, we tested the seismic diagnostic for the size of the core based on the $r_{010}$ ratios, which had been successfully applied to isolated targets before (e.g. \citealt{silva13}) but whose general validity had not been addressed. By computing a grid of stellar models with varying mass, age, helium abundance, metallicity, and core overshooting, we established that the slope and mean value of the $r_{010}$ ratios can be used to estimate (1) whether the star has left the main sequence or not, (2) whether it has a convective core or not, and (3) the extent of the convective core if the star possesses one. The efficiency of this diagnostic stems from the presence of a sharp $\mu$-gradient at the boundary of the mixed core, which adds an oscillatory component to the $r_{010}$ ratios. Since unevolved stars have not yet built up such a $\mu$-gradient, the diagnostic is ineffective for these targets. Based on this, we selected a subset of 24 G and late-F solar-like pulsators among \textit{Kepler}\ targets, avoiding stars that are too unevolved. We extracted the oscillation mode frequencies of these stars using the complete \textit{Kepler}\ data set (nearly four years) and fitted second-order polynomials to the observed $r_{010}$ ratios.
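A minimal numerical sketch of this analysis step follows, under stated assumptions: the five-point $r_{01}$/$r_{10}$ definitions below are the common Roxburgh \& Vorontsov-style construction (the paper does not spell them out here), the frequencies form an illustrative regular comb, and the covariance of the ratios is propagated assuming uncorrelated unit-variance frequency errors. The ill-conditioning of that covariance is handled with a truncated-SVD pseudo-inverse, the same remedy invoked in the text:

```python
import numpy as np

# Hedged sketch: build r010 ratios from l=0/l=1 frequencies, fit a second-order
# polynomial, and form a chi^2 with a truncated-SVD pseudo-inverse of the
# (typically ill-conditioned) covariance of the ratios. Frequencies, errors and
# the exact ratio definitions are illustrative assumptions, not the paper's data.

def r010(nu0, nu1):
    """Five-point smoothed frequency ratios (Roxburgh & Vorontsov style)."""
    freqs, ratios = [], []
    for n in range(1, len(nu0) - 1):
        d01 = (nu0[n-1] - 4*nu1[n-1] + 6*nu0[n] - 4*nu1[n] + nu0[n+1]) / 8.0
        freqs.append(nu0[n]); ratios.append(d01 / (nu1[n] - nu1[n-1]))
        if n + 1 < len(nu1):
            d10 = -(nu1[n-1] - 4*nu0[n] + 6*nu1[n] - 4*nu0[n+1] + nu1[n+1]) / 8.0
            freqs.append(nu1[n]); ratios.append(d10 / (nu0[n+1] - nu0[n]))
    return np.array(freqs), np.array(ratios)

dnu = 100.0                                   # large separation (muHz), illustrative
nu0 = 1000.0 + dnu * np.arange(12)            # l=0 frequencies: regular comb
nu1 = nu0 + dnu / 2                           # l=1 frequencies
f, r = r010(nu0, nu1)                         # a pure comb gives r010 = 0
poly = np.polyfit(f, r, 2)                    # slope and mean value come from this fit

# Propagate (assumed uncorrelated, unit-variance) frequency errors to the ratios
# through a finite-difference Jacobian: overlapping five-point combinations make
# the resulting covariance matrix strongly correlated.
x0 = np.concatenate([nu0, nu1])
ratios_vec = lambda x: r010(x[:len(nu0)], x[len(nu0):])[1]
J = np.empty((len(r), len(x0)))
for j in range(len(x0)):
    xp, xm = x0.copy(), x0.copy()
    xp[j] += 1e-4; xm[j] -= 1e-4
    J[:, j] = (ratios_vec(xp) - ratios_vec(xm)) / 2e-4
cov = J @ J.T

cov_inv = np.linalg.pinv(cov, rcond=1e-10)    # truncated SVD: drop tiny singular values
resid = r - np.polyval(poly, f)
chi2 = float(resid @ cov_inv @ resid)
```

For a perfectly regular comb the ratios and polynomial coefficients vanish; on real stars with a convective core the fit picks up the slope and mean value exploited as a diagnostic above.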
In the course of this work, we realized that the covariance matrix of the observables is very ill-conditioned, which in some cases leads the fit astray. We therefore resorted to truncated SVD to solve the problem. This issue should be kept in mind, as it can be expected to occur in any seismic modeling where combinations of mode frequencies are used as observables, as is now frequently done (e.g. \citealt{lebreton14}). By confronting the slope and mean value of the $r_{010}$ ratios of the 24 selected targets with those of a grid of models computed with the \textsc{Cesam2k}\ code, we were able to establish that \begin{itemize} \item 10 of these targets are in the post-main-sequence phase and therefore do not possess convective cores, \item 13 targets are on the main sequence (the evolutionary status of the remaining target is uncertain) and among them eight stars have a convective core, \item the convective cores of these eight targets extend beyond the classical Schwarzschild boundary. \end{itemize} Interestingly, identical conclusions were reached using a similar grid of models computed with the \textsc{MESA}\ code. We were able to obtain measurements of the extent of the convective cores of the eight targets that possess one, with good agreement between the values obtained with \textsc{Cesam2k}\ and \textsc{MESA}. We also produced precise estimates of the stellar parameters of these eight stars, obtained through seismic modeling. Consequently, these stars are ideal targets to test and potentially calibrate theoretical models of the physical processes that could be responsible for the extension of convective cores, such as core overshooting itself or rotational mixing. Until realistic models of these processes become available, the results obtained in this paper can be used to calibrate the simple parametric models of convective core extensions that are included in most 1D stellar evolution codes.
We addressed this question using the code \textsc{Cesam2k}, in which cores are extended over a fraction $\alpha\ind{ov}$ of either the pressure scale height $H_P$, or the radius of the core in the sense of the Schwarzschild limit if it is smaller than $H_P$. We were able to efficiently constrain $\alpha\ind{ov}$ for the eight stars, obtaining values ranging from 0.07 to 0.18. We showed that microscopic diffusion is responsible for only a small fraction of the core extension. Interestingly, we observed a tendency of $\alpha\ind{ov}$ to increase with stellar mass, which opens the possibility of deriving an empirical law for $\alpha\ind{ov}(M)$ in the mass range of the observed targets ($1.1\leqslant M/M_\odot\leqslant 1.5$), and thus of calibrating what is usually referred to as core overshooting but in fact encompasses the effects of all non-standard processes that extend convective cores. One must bear in mind that such a calibration necessarily depends on the prescription chosen to model the extension of convective cores in 1D stellar models. We can also suspect that it depends on the evolution code itself. We have, however, verified in this study that the sizes of the convective cores produced by the code \textsc{MESA}\ are very similar to those produced by the code \textsc{Cesam2k}, provided the same prescription for core overshooting is adopted. This study thus constitutes a first step towards the calibration of the extension of convective cores in low-mass stars. Constraints on the extent of the convective cores of more stars will be required to confirm and enrich our results. In that respect, the \textsc{PLATO}\ mission (\citealt{rauer14}), which was recently selected by ESA, will be particularly helpful.
Reciprocally, obtaining a calibration of the distance over which convective cores extend will reduce the uncertainties on stellar ages, which will be useful to stellar physics in general, and in particular to the \textsc{PLATO}\ mission, for which the precise determination of stellar ages is crucial. \begin{acknowledgements} The authors wish to thank the anonymous referee for suggestions that helped clarify the paper. This work was performed using HPC resources from CALMIP (Grant 2015-P1435). We acknowledge support from the Centre National d'\'Etudes Spatiales (CNES, France). IMB and MSC are supported by Funda\c{c}\~ao para a Ci\^encia e a Tecnologia (FCT) through the Investigador FCT contract of reference IF/00894/2012 and POPH/FSE (EC) by FEDER funding through the program COMPETE. Funds for this work were provided also by the FCT research grant UID/FIS/04434/2013 and by EC, under FP7, through the project FP7-SPACE-2012-312844. Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106). The research is supported by the ASTERISK project (ASTERoseismic Investigations with SONG and Kepler) funded by the European Research Council (Grant agreement no.: 267864). V.S.A. acknowledges support from VILLUM FONDEN (research grant 10118). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The nanoscale measurement of electrical signals in liquid is of great practical importance in electrochemistry (batteries, supercapacitors), biosensing (ion channels, nanopores) and molecular electronics. The underlying challenges arise from the presence of parasitic capacitances and from the fact that, under typical measurement conditions, the current scales with the sensor area, leading to difficulties in retrieving the signal with micro- and nanoscale electrodes. Several approaches have been explored to tackle the challenge, using redox cycling \cite{fan_single_1996,kang_electrochemical_2013,byers_single_2015,chennit_electrochemical_2017,li_redox-labelled_2022}, high frequency measurements \cite{grall_attoampere_2021} and fluorescence \cite{moerner_methods_2003,huang_high-throughput_2015,hao_single-molecule_2020}.\\ We propose here to exploit and formalize the shot-noise induced by reversible single electron transfers of electroactive molecules attached to an electrode. Shot-noise has been extensively studied in solid-state physics \cite{machlup_noise_1954,ghibaudo_theory_1989,hung_unified_1990} and more recently in molecular electronics \cite{djukic_shot_2006,clement_1_2007,chan_reversal_2009,sung_scanning_2011,kim_noise_2010,song_origin_2016,karimi_shot_2016,guo_molecular_2016,lumbroso_electronic_2018}, but not in electrochemistry, except for the shot-noise due to a variation of the number of molecules in a nanogap \cite{kang_electrochemical_2013}. Such measurements are challenging because of the ubiquitous $1/f$ noise (e.g. in solid-state physics \cite{karnatak_1_2017}, quantum transport \cite{paladino_1_2014}, molecular electronics \cite{adak_flicker_2015,kim_noise_2021} or in liquid \cite{hladky_measurement_1982,cottis_interpretation_2001,wen_generalized_2017,fragasso_comparing_2020}), which is typically circumvented by low-temperature measurements and by measurements at higher relative frequencies.
\\ The $1/f$ noise is avoided here thanks to the well-defined energy level of the redox molecules of the monolayer, making it possible to study its low-frequency shot-noise. A simple, straightforward expression for the shot-noise is proposed, giving direct access to the charge transfer rates and the number of charge carriers. This approach provides clearly readable signals even when faradaic currents become unmeasurable, avoids the parasitic capacitance issue, and allows for measurements without any excitation other than thermal noise.\\ \begin{figure*} \includegraphics[trim={2cm 0 2cm 0},clip,width=\textwidth]{Figure_1.pdf}% \caption{Illustration of current and noise behavior versus time and voltage, considering a slow scan rate compared to the electron transfer rates ($k_{sum}\gg 1/t_{step}$). (a) $I$ vs $E$, with the evolution of the current as the time after the voltage step is increased. (b) Voltnoisograms (PSD vs $E$) taken at low frequency (Eq. \ref{S_f0}) corresponding to the same conditions as in (a). (c) Sampled staircase voltammetry example, with the raw current data (black dots), a double exponential decay fit of the current (red) and the voltage steps (yellow). (d) Raw currents subtracted with exponential fits (blue). (e) PSD spectrum of one current timetrace obtained using the raw current subtracted with the exponential fit. \label{figure1}} \end{figure*} Electroactive redox molecules can be seen as single-electron quantum dots with an extremely small energy dispersion, even in liquid and at ambient temperature \cite{trasobares_estimation_2017}.
The equilibrium reaction of an ideally reversible redox couple M$^+$/M attached to a metallic electrode and held at a distance $z$ from the electrode (insets Figure \ref{figure1} (a)) can be written as: \begin{center} \ce{M <=>[k_{ox}][k_{red}] M^+ + e^-} \end{center} \begin{align} k_{ox} = & k_0 e^{-\beta z} e^{\alpha \frac{q}{k_B T}(E-E^0)}\\ k_{red} = & k_0 e^{-\beta z} e^{-(1-\alpha) \frac{q}{k_B T}(E-E^0)} \end{align} \noindent with $k_{ox}$ the oxidation rate, $k_{red}$ the reduction rate, $E$ the potential at the electrode, $E^0$ the standard potential of the molecule, $q$ the elementary charge, $T$ the temperature, $k_B$ the Boltzmann constant, $\beta$ the tunneling decay coefficient (1 Å$^{-1}$) and $k_0$ the standard electron transfer rate at a distance $z$ = 0 (in s$^{-1}$). This is written assuming Butler-Volmer formalism, equivalent to Marcus formalism under the present conditions (reorganization energy of 0.85 eV and $|E-E^0|<0.3$~V) \cite{chidsey_free_1991}. Sampled current staircase voltammetry (SCV) is the electrochemical technique used to interrogate the surface-attached redox species \cite{heering_using_1999,huang_random_2013,rodriguez_electron_2021}. The electrode potential is raised in small steps of height $E_{step}$, and the current is recorded as a function of time, up to a time $t_{step}$, corresponding to the step duration (Figure \ref{figure1} (c)). The probability $P_{ox}(E)$ for a molecule to be oxidized at a given voltage $E$ after time $t$ can be written as : \begin{equation} \label{pox} P_{ox}=\frac{k_{ox}}{k_{sum}}(1-e^{-k_{sum}t}) \end{equation} with $k_{sum}=k_{ox}+k_{red}$ and $\nu=E_{step}/t_{step}$ the voltage scan rate.
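These rate expressions and the resulting occupation probability can be evaluated directly. The following is a hedged sketch with illustrative values for $k_0$, $E^0$, $\beta$ and $z$ (of the same order as those discussed later in the text); only the formulas above are assumed:

```python
import math

# Sketch of the Butler-Volmer rates and oxidation probability defined above.
# Assumptions: alpha = 0.5, T = 298 K, beta = 1 A^-1, z = 1 nm, and illustrative
# values of k0 and E0. Potentials in volts, times in seconds.

kT_over_q = 0.0257            # thermal voltage kB*T/q at ~298 K (V)
beta_z = 10.0                 # beta*z = (1 A^-1) * (1 nm), dimensionless
k0 = 6.3e7                    # standard rate at z = 0 (s^-1), illustrative
E0 = 0.35                     # standard potential (V), illustrative

def rates(E, alpha=0.5):
    pref = k0 * math.exp(-beta_z)
    k_ox = pref * math.exp(alpha * (E - E0) / kT_over_q)
    k_red = pref * math.exp(-(1.0 - alpha) * (E - E0) / kT_over_q)
    return k_ox, k_red

def p_ox(E, t):
    """Probability for a molecule to be oxidized after time t at potential E."""
    k_ox, k_red = rates(E)
    k_sum = k_ox + k_red
    return (k_ox / k_sum) * (1.0 - math.exp(-k_sum * t))

p_eq = p_ox(E0, 1.0)          # at E = E0 and long times: oxidized half the time
```

At $E=E^0$ the molecule is oxidized half of the time, and the probability saturates to nearly 0 or 1 a few hundred millivolts away from $E^0$.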
In the case where the electron transfer is fast compared to the time spent at each step ($k_{sum}\gg 1/t_{step}$), $P_{ox}$ simplifies to: \begin{equation} \label{pox_eq} P_{ox}=\frac{k_{ox}}{k_{sum}} \end{equation} The proportion of attached molecules that are effectively oxidized at a given time approaches $P_{ox}$ following an exponential decay, as all molecules end up in the equilibrium state. Defining the current $I$ as the number of transferred charges per unit of time, the current is: \begin{align} I =& Nq \frac{dP_{ox}}{dt} =Nq\nu \frac{dP_{ox}}{dE} \label{I} \end{align} \noindent with $N$ the total number of molecules. For the rest of the study, unless mentioned otherwise, we consider the case of relatively slow scan rates, where electron transfer rates are large compared to the inverse of the time spent at each voltage step ($k_{sum}\gg 1/t_{step}$). $t$ is defined here as the sampling time, with $0\leq t\leq t_{step}$. In this case, for long sampling times (i.e. $k_{sum}\gg 1/t$), $P_{ox}=k_{ox}/k_{sum}$ and $I$ simplifies to: \begin{equation} \label{I_simple} I = \frac{N q^2\nu}{4k_BT}\frac{1}{\cosh^2(\frac{q}{2k_BT}(E-E^0))} \end{equation} with a full width at half maximum (FWHM): \begin{equation} \label{FWHM_I} E^I_{FWHM} = 4\operatorname{acosh}(\sqrt{2})\frac{k_BT}{q}\approx 90.6 \text{~mV} \end{equation} where $\operatorname{acosh}$ is the inverse of the hyperbolic cosine (considering $T=$ 298 K for the numerical value) \cite{laviron_general_1979}. Figure \ref{figure1} (a) shows $I$ versus applied voltage $E$ at a given scan rate and at different times $t$ after the voltage step, exhibiting a rapid decrease in amplitude.\\ One way to consider the noise of the current versus time (Figure \ref{figure1} (b)) is to look at its power spectral density (PSD, denoted $S$ in equations).
The PSD (Figure \ref{figure1} (b) and (e)) can be seen as a description of how the variance of the measured signal is spread in the frequency domain \cite{cottis_interpretation_2001}. Considering the results of Machlup \cite{machlup_noise_1954} for the PSD of a two-state system corresponding to the oxidized/reduced molecular states, using $k_{ox}$ and $k_{red}$ as charge transfer rates and assuming $N$ independent molecules, the PSD can be expressed as: \begin{equation} \label{S_deltaI} S(f,E,N) = 4N\Delta I^2 \frac{k_{ox}k_{red}}{k_{ox}+k_{red}} \frac{1}{(k_{ox}+k_{red})^2+(2\pi f)^2} \end{equation} \noindent with $f$ the frequency and $\Delta I$ the current corresponding to the oxidation (or reduction) of one molecule. If we consider $\Delta I$ as one electron charge $q$ divided by the average time taken to transfer one electron (i.e., $\Delta I=\frac{q}{\frac{1}{k_{ox}} +\frac{1}{k_{red}}}$), $S$ can be rewritten as: \begin{equation} \label{S_full} S(f,E,N) = 4N q^2 \frac{(k_{ox}k_{red})^3}{(k_{ox}+k_{red})^3} \frac{1}{(k_{ox}+k_{red})^2+(2\pi f)^2} \end{equation} \noindent which becomes at low frequency (assuming $\alpha=0.5$): \begin{equation} \begin{split} \lim_{f \rightarrow 0} S(E,N) &=4N q^2 \frac{(k_{ox}k_{red})^3}{(k_{ox}+k_{red})^5}\\ &=\frac{1}{8}Nq^2\frac{k_0e^{-\beta z}}{\cosh^5(\frac{q}{2k_BT}(E-E^0))} \label{S_f0} \end{split} \end{equation} This equation expresses the dependence of the low-frequency electrochemical shot-noise of the redox self-assembled monolayer (SAM) on the electrode potential. The corresponding curve is plotted in Figure \ref{figure1} (b).
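These expressions can be checked numerically. The following hedged sketch uses the same illustrative $k_0$, $\beta z$ and $E^0$ as before ($\alpha=0.5$, $T\approx298$~K) and verifies the closed-form limits and the two peak widths quoted in the text ($\approx 90.6$~mV for the current, $\approx 56$~mV for the noise):

```python
import math

# Evaluate the two-state shot-noise PSD derived above and verify that at E = E0
# and f -> 0 it reduces to N*q^2*k0*exp(-beta*z)/8, with corner frequency
# f_c = k0*exp(-beta*z)/(2*pi). k0, beta*z, E0 and N are illustrative.

q = 1.602176634e-19                       # elementary charge (C)
kB = 1.380649e-23                         # Boltzmann constant (J/K)
T = 298.0                                 # temperature (K)
kT_over_q = kB * T / q                    # thermal voltage (V)
kappa = 6.3e7 * math.exp(-10.0)           # k0 * exp(-beta*z), s^-1

def psd(f, E, N, E0=0.35):
    x = (E - E0) / (2.0 * kT_over_q)
    k_ox, k_red = kappa * math.exp(x), kappa * math.exp(-x)
    return (4.0 * N * q**2 * (k_ox * k_red)**3 / (k_ox + k_red)**3
            / ((k_ox + k_red)**2 + (2.0 * math.pi * f)**2))

N = 7.5e10
S_peak = psd(0.0, 0.35, N)                # low-frequency PSD at E = E0
S_limit = N * q**2 * kappa / 8.0          # closed-form limit
f_c = kappa / (2.0 * math.pi)             # corner frequency (Hz)

# Peak widths: the current profile (sech^2) halves where cosh^2(x) = 2, the
# noise profile (cosh^-5) where cosh^5(x) = 2.
fwhm_current = 4.0 * math.acosh(math.sqrt(2.0)) * kT_over_q   # ~90.6 mV
fwhm_noise = 4.0 * math.acosh(2.0 ** 0.2) * kT_over_q         # ~56 mV
```

The noise peak is markedly narrower than the current peak, which is the signature exploited in the voltnoisograms below.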
Similarly to SCV signals, it presents a peak at $E^0$, but narrower than the SCV peak, with a full width at half-maximum (FWHM): \begin{equation} E^S_{FWHM}=4\operatorname{acosh}(\sqrt[5]{2})\frac{k_BT}{q}\approx 56 \text{~mV} \label{FWHM_S} \end{equation} Note that unlike the current, $S$ does not depend on $\nu$, as the PSD is considered for a system at equilibrium. $S$ is also independent of the potential scan direction. Interestingly, the limiting cases of $E=E^0$ and $f\rightarrow 0$ give access to the electron transfer rate $k_0$ and the total number of molecules $N$. \begin{equation} \label{S_V0} \lim_{E \rightarrow E^0} S(f,N) = \frac{1}{2}N q^2 k_0 e^{-\beta z} \frac{1}{4+(\frac{2\pi f}{k_0 e^{-\beta z}})^2} \end{equation} \noindent with the corner frequency $f_c$: \begin{equation} \label{fc} f_c = \frac{1}{2\pi}k_0 e^{-\beta z} \end{equation} \begin{equation} \label{S_V0_f0} \lim_{E\rightarrow E^0, f\rightarrow 0} S(N) = \frac{1}{8}N q^2 k_0 e^{-\beta z} \end{equation} The main result of the present work is Eq. \ref{S_V0_f0}, linking directly and simply $k_0$ and $N$ to the noise measured at low frequency for $E=E^0$. Provided that the corner frequency of the PSD, $f_c$ (Figure \ref{SM_FFT} (b)), can be measured, the individual values of $k_0$ and $N$ are obtained from Eqs. \ref{S_V0} and \ref{S_V0_f0}. Alternatively, if $N$ is known independently, $k_0$ can be straightforwardly derived from $S$ at $E=E^0$ (Eq. \ref{S_V0_f0}).\\ \begin{figure} \includegraphics[trim={6cm 1cm 4.5cm 1cm},clip,width=0.5\textwidth]{CV_I.pdf}% \caption{Example of current CVs obtained at different $\nu$ (electrode area $\approx 45$ mm$^2$, $E$ vs Ag/AgCl (3 M NaCl), electrolyte: [NaClO$_4$]=0.5 M). \label{CV_current}} \end{figure} To demonstrate the validity of the previous analysis, an experiment is set up using ferrocene undecanethiol Fc(CH$_2$)$_{11}$SH self-assembled on a gold microelectrode.
A two-electrode electrochemical cell setup is used in a Faraday cage, using a [NaClO$_4$]=0.5 M aqueous electrolyte and a Ag/AgCl electrode (3 M NaCl) acting as both reference and counter electrode. Details about the sample preparation and the measurement setup can be found in the Supplementary Materials (Figures \ref{map_electrodes} and \ref{SM_measurement_setup}). The system is interrogated using staircase voltammetry (Figure \ref{figure1} (c)), which is equivalent to linear cyclic voltammetry (CV) at slow scan rates \cite{christie_theory_1965}. Our motivation is to offer a comparison of the well-known technique of cyclic voltammetry with the results obtained by looking at the shot-noise of the system. Figure \ref{CV_current} shows an example of current CVs at different (low) scan rates $\nu$. The signal is centered around a potential value of $E^0=0.35 \pm 0.02$ V vs Ag/AgCl, which corresponds to the expected standard potential for such surface-attached Fc molecules \cite{tian_modulated_2013,nerngchamnong_nonideal_2015,gupta_role_2021}. The surface density is estimated here at $4.2\times 10^{-10}$ mol/cm$^2$, close to the values reported in the literature for packed SAMs ($4.4$--$4.9\times 10^{-10}$ mol/cm$^2$) \cite{nijhuis_molecular_2009,trasobares_17_2016}. The peak current of the CV exhibits the usual behavior for a surface-confined reversible couple, with a linear dependence of the current on $\nu$ (example data in Figure \ref{SM_sweeprates}). The shot-noise measurements are only meaningful in the case where the fluctuations of the current are due to a faradaic charge transfer at equilibrium. To avoid the contribution of any transient current, two types of measurements are performed. The first one corresponds to staircase sampled current experimental conditions (``SCV conditions'').
For each potential step $E_{step}$ the whole timetrace of the current is recorded and fitted with a double exponential decay (Figure \ref{figure1} (c), details in Figures \ref{SM_trimming} and \ref{SM_timetraces}): \begin{equation}\label{exp_fit} I=A_ce^{\frac{-t}{\tau_c}}+A_fe^{\frac{-t}{\tau_f}} \end{equation} \noindent with $(A_c,\tau_c)$ the amplitude and time constant of the capacitive current, due to the relaxation of the double layer, and $(A_f,\tau_f)$ the amplitude and time constant of the faradaic current, due to the charge transfer between Fc molecules and the electrode. The PSD spectra are extracted from ``flattened'' currents, obtained by subtracting the exponential decay contribution (Figure \ref{figure1} (d), details in Figures \ref{SM_FFT} $-$ \ref{SM_timetraces}). For $t_{step} > 5\times\max(\tau_c,\tau_f)$ (here corresponding to $\nu <10$~mV/s), the transients were well-resolved and the low-frequency noise could be acquired using this method (Figure \ref{figure1} (e)). The second type of measurement, called here ``Noise conditions'', consists of using very slow scan rates ($\nu < 0.1$~mV/s) in order to bring the system as close as possible to equilibrium at each electrode potential value (Figure \ref{SM_FFT}). This allows the PSD obtained from each timetrace to be averaged at least 10 times for each voltage step, with no transient contribution to the PSD (details in Figure \ref{SM_averaging}).\\ \begin{figure} \includegraphics[trim={11cm 2cm 11cm 1cm},clip,width=0.5\textwidth]{CV_PSD_2.pdf} \caption{(a) PSD of the current versus $E$ obtained at different $\nu$, at $f \approx$ 20 Hz. (b) PSD at $f \approx$ 20 Hz and $E=$ 0.35 V versus $\nu$.\label{CV_PSD}} \end{figure} PSD signals were measured at several scan rates, with their magnitude at 20 Hz versus $E$ (called a ``voltnoisogram'' for concision) shown in Figure \ref{CV_PSD} (a) (full set in Figure \ref{SM_sweeprates_PSD}).
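The transient-removal step described above can be sketched numerically. For brevity, this hedged example fits a single exponential by log-linear least squares on synthetic data, whereas the analysis in the text fits the double exponential of Eq. \ref{exp_fit} with a nonlinear solver:

```python
import math

# Hedged sketch: estimate and subtract an exponential transient from a current
# timetrace before computing the PSD. A single decay A*exp(-t/tau) is fitted by
# a straight-line fit to ln(I) vs t (the paper fits a double exponential).

def fit_exponential(t, i):
    """Least-squares fit of i(t) = A*exp(-t/tau) via linear regression on ln(i)."""
    n = len(t)
    y = [math.log(v) for v in i]
    sx, sy = sum(t), sum(y)
    sxx = sum(v * v for v in t)
    sxy = sum(a * b for a, b in zip(t, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept), -1.0 / slope       # A, tau

# Synthetic transient (A and tau illustrative), sampled every 1 ms:
t = [k * 1e-3 for k in range(1, 200)]
i = [2.0e-9 * math.exp(-tk / 0.02) for tk in t]

A_fit, tau_fit = fit_exponential(t, i)
flattened = [iv - A_fit * math.exp(-tk / tau_fit) for tk, iv in zip(t, i)]
```

On noise-free synthetic data the flattened current is essentially zero; on measured data, what remains after the subtraction is the fluctuation signal whose PSD is analyzed.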
The PSD voltnoisograms behave as expected, with a peak-shaped curve centered around $E^0\approx 0.35$ V, close to the standard potential of Fc. As predicted from Eq. \ref{S_f0}, the peak value of the PSD voltnoisograms (Figure \ref{CV_PSD} (b)) remains quasi-constant for $\nu<3$ mV/s. The small dependence for $\nu>3$ mV/s can be attributed to the contribution of transient currents, which are difficult to remove entirely at high scan rates, and to a noise source not taken into account, such as dielectric losses varying with frequency (dielectric-loss noise discussed in SM Figure \ref{SM_PSD_full_spectrum}). The very small scaling of the PSD peak value with $\nu$ confirms that the measurement of the noise is well-suited for the study of systems with small numbers of charge carriers at very slow scan rates, where the parasitic capacitive currents can be suppressed. The FWHM of the PSD peaks in Figure \ref{CV_PSD} is $\approx 80$ mV, which is slightly broader than the 56 mV predicted by Eq. \ref{FWHM_S}, due to a slight shift between the oxidation and the reduction potentials. This can be understood since voltnoisograms measure a superposition of the oxidation and reduction peaks, with any shift between the two potentials resulting in an apparent broadening of the peak in the noise.\\ \begin{figure} \includegraphics[trim={11cm 0 12cm 0},clip,width=0.55\textwidth]{noise_spectrum_2.pdf}% \caption{(a) CV of a FcC$_{11}$SH SAM on gold. (b) PSD measured at 0.04~mV/s at $E<E^0$, $E\approx E^0$ and $E>E^0$. (c) PSD versus potential at 10 Hz. The dashed line is fitted using Eq. \ref{S_f0_g} ($g=0.31$).\label{noise_spectrum}} \end{figure} Figure \ref{noise_spectrum} (a) shows a CV acquired in the Noise conditions ($\nu=0.04$ mV/s). At such a slow scan rate no clear faradaic current signal can be identified in the CV.
Simultaneously with the CV data, PSDs were acquired at each voltage step, with only the PSDs obtained on the forward scan at $E<E^0$, $E\approx E^0$ and $E>E^0$ at low frequencies shown for clarity in Figure \ref{noise_spectrum} (b) (full spectrum and details in SM Figures \ref{SM_PSD_full_spectrum}, \ref{SM_sweeprates_smoothing} and \ref{SM_sweeprates}). Figure \ref{noise_spectrum} (c) shows a PSD slice (dashed line indicated in Figure \ref{noise_spectrum} (b)) obtained at 10 Hz versus $E$, where a peak is clearly visible. The dashed line in Figure \ref{noise_spectrum} (c) is obtained using a modified version of Eq. \ref{S_f0}: \begin{equation} \lim_{f \rightarrow 0} S(E,N) =\frac{1}{8}Nq^2\frac{k_0e^{-\beta z}}{\cosh^5(g\frac{q}{2k_BT}(E-E^0))} \label{S_f0_g} \end{equation} \noindent with $g$ a parameter taking into account the broadening of the peak. The above results illustrate the gain in sensitivity brought by the shot-noise measurements, allowing the detection of an electrochemical reaction at an electrode under conditions where the average current (CV) signal shows nothing. The number of molecules $N=7.5\times 10^{10}$ is obtained from the current CV data at higher scan rates (Figure \ref{SM_sweeprates}). Using this $N$ value and Eq. \ref{S_V0_f0}, the peak amplitude of the PSD data shown in Figure \ref{CV_PSD} yields $k_0 = 6.3\times10^{7}$~s$^{-1}$ ($z=1$~nm), which is in good agreement with the literature \cite{chidsey_free_1991,zevenbergen_fast_2009}.\\ In conclusion, we demonstrated the measurement of the shot-noise generated by an ensemble of surface-attached Fc redox molecules, which can be seen as identical single-electron boxes, in liquid and under ambient conditions, along with the formalism to understand it. This constitutes a further step toward nanoelectrochemistry and single-molecule measurements, which could be practically achieved using our technique combined with a transducer such as a nanotransistor.
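The extraction of $k_0$ quoted above amounts to inverting Eq. \ref{S_V0_f0}. Below is a hedged numerical sketch, in which the peak PSD value is forward-modeled from the quoted numbers rather than taken from the measurement:

```python
import math

# Invert Eq. (S_V0_f0): k0 = 8 * S_peak * exp(beta*z) / (N * q^2).
# N, k0 and beta*z are the values quoted in the text; S_peak is forward-modeled
# here for self-consistency (illustrative, not the measured PSD level).

q = 1.602176634e-19       # elementary charge (C)
N = 7.5e10                # number of molecules, from the CV data
beta_z = 10.0             # beta*z = (1 A^-1) * (1 nm)
k0_true = 6.3e7           # s^-1, value reported in the text

S_peak = N * q**2 * k0_true * math.exp(-beta_z) / 8.0   # Eq. (S_V0_f0), forward
k0_recovered = 8.0 * S_peak * math.exp(beta_z) / (N * q**2)
```

The round trip recovers $k_0$ exactly, illustrating that a single peak-PSD value plus an independent estimate of $N$, $\beta$ and $z$ suffices to determine the transfer rate.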
On the methodological side, our technique makes it possible to measure electron transfer rates from data acquired at low frequencies, without the need for highly time-resolved instrumentation to measure fast electron transfer rates. Although we compared our technique with traditional voltammetry techniques, exhibiting a clear signal in the PSD when $I$ tends to zero, the very concept of ``potential scan'' is actually not required to perform noise measurements. As few as two points at potentials far from $E^0$ and one at $E^0$ can suffice to separate the background noise of the experiment from the noise due to the attached molecules, yielding $k_0$ and $N$ provided that $\beta$ and $z$ are known. This opens perspectives in the field of biosensors \cite{li_redox-labelled_2022}, where the limit of detection of existing techniques could be further extended by shot-noise analysis. Concurrently, since the measurement is carried out at equilibrium, capacitive contributions are avoided altogether, improving the signal and drastically simplifying the interpretation of the data. \section{Acknowledgments} \begin{acknowledgments} This work has been supported by the JSPS Core-to-Core Program (JPJSCCA20190006), the EU-ATTRACT project (Unicorn-Dx) and the French ``Agence Nationale de la Recherche'' (ANR) through the ``SIBI'' project (ANR-19-CE42-0011-01).\\ S.G. designed the acquisition system, conducted the experiments and data analysis and developed the theory, S.L. fabricated the devices, L.J. designed the acquisition system, SH. K. and A.C. contributed to the scientific interactions on electrochemistry, C.D. and N.C. conceived and supervised the whole project. All authors actively contributed to the discussions and the writing of the paper. \end{acknowledgments} \section{Figure 1 plotting} \clearpage \section{Sample preparation} The electrode is fabricated by gold sputtering on a silicon wafer.
Plain electrodes (from 45 mm$^2$ down to 0.78 mm$^2$) and microelectrodes (as described in Figure \ref{map_electrodes}) were used. The latter were preferred for noise measurements, since in a two-electrode electrochemical setup, keeping currents low to avoid potential drops is critical. For the microelectrode configuration, a layer of SU8 (thickness = 25 \textmu m) is spin-coated over the entire wafer except the designated gold areas, to prevent unwanted reactions with silicon dioxide and limit the parasitic capacitance (Figure \ref{map_electrodes}). The electrode is incubated in a 1 mM ethanol solution of ferrocene undecanethiol for at least one day, with further incubation in a 1 mM undecanethiol solution for 2 h. CV curves (Figure \ref{SM_sweeprates}) were used to extract $N = (7.5\pm 0.05)\times10^{10}$. The electrochemical experiments are all carried out in 0.1 M NaClO$_4$ aqueous electrolyte and after N$_2$ bubbling. The electrochemical cell (SEC-3F, ALS Co., Japan) is made of a small silicon gasket delimiting a channel ($3\times 16 \times 0.3$ mm) over the microelectrodes and connected to the Ag/AgCl electrode. The cell can be sealed with microfluidic switches after filling with the electrolyte. \begin{figure} \includegraphics[trim={0cm 0 3cm 0},clip,width=\textwidth]{SM_map_electrodes.pdf}% \caption{Layout of the microelectrodes used, with a zoom on the right indicating the exposed area and the SU8 mask.\label{map_electrodes}} \end{figure} \clearpage \section{Measurement setup} The microelectrodes are connected to a current amplifier (CA5351, NF Corporation, Japan), itself connected to a digital-to-analog converter (USB-4431, National Instruments) (Figure \ref{SM_measurement_setup}). The voltage is applied using a Ag/AgCl (3 M NaCl) reference electrode. Though only two electrodes are used, the very low currents expected make the voltage drop across the electrochemical cell negligible.
As a control, we verified that experiments carried out in a three-electrode configuration (using a platinum wire counter electrode) yielded similar results to those obtained with our two-electrode setup, with no significant differences in the cyclic voltammograms. The whole experiment is controlled using a Labview program. \begin{figure} \includegraphics[trim={1cm 0 1cm 0},clip,width=\textwidth]{SM_measurement_setup.pdf}% \caption{Schematics of the setup used to scan the voltage over the Fc SAM and recover the current and PSD data. \label{SM_measurement_setup}} \end{figure} \clearpage \begin{figure} \includegraphics[trim={8cm 2cm 8cm 0},clip,width=\textwidth]{SM_FFT.pdf}% \caption{(a) Computer-generated time traces of the current for illustrative purposes, with waiting times long enough that $I$ reaches zero at each voltage step (black). Voltage steps $E_{step}$ are represented in yellow. The right panel represents the distribution of the current around its average, with $\Delta I$ as described in the main text. (b) Noise versus frequency for different $E$, with the corner frequency indicated for $E=E^0$. The noise data are obtained at each $E_{step}$ by Fourier transform of the autocorrelation of the current data once the average current has reached zero (yellow window in (a)). \label{SM_FFT}} \end{figure} \begin{figure} \includegraphics[trim={7cm 1cm 6cm 0},clip,width=\textwidth]{staircase.pdf}% \caption{(a) Staircase voltammetry example, with the raw current data (black dots), a double exponential decay fit of the current (red) and the voltage steps (yellow). (b) Flattened currents (raw current corrected by subtraction of the exponential fits). (c) PSD spectrum of one current time trace obtained from the raw current after subtraction of the exponential fit (blue).
The black line represents the PSD obtained from raw data, and the red one from the exponential fits. \label{SM_timetrace_trimming}} \end{figure} \begin{figure} \includegraphics[trim={7cm 0 6cm 0},clip,width=\textwidth]{SM_trimming.pdf}% \caption{(a) Raw current data trace; a double exponential decay fit of the current is shown (red). (b) Flattened current (raw current corrected by subtraction of the exponential fit) with only the last 4/5$^{th}$ kept for PSD calculation. (c) PSD spectra obtained from raw data (black), exponential fits (red), flattened current (green) and trimmed flattened current (blue). The latter is used throughout this work for processing ``CV conditions'' data. \label{SM_trimming}} \end{figure} \section{CV conditions versus Noise conditions} \subsection{CV conditions} \begin{figure} \includegraphics[trim={4cm 5cm 5cm 4cm},clip,width=1\textwidth]{SM_timetraces.pdf}% \caption{Time traces recorded at $E=E^0$ at (a) 16.1 mV/s and (b) 7.7 mV/s. The fitting lines are obtained using Eq. \ref{exp_fit}. \label{SM_timetraces}} \end{figure} \begin{figure} \includegraphics[trim={8cm 0cm 8cm 0cm},clip,width=\textwidth]{SM_averaging.pdf}% \caption{Illustration of the ``Noise conditions'' measurements. For each voltage step, the current is measured $n$ times ($n=7$ in this example, $n>10$ during experiments) during a time window of $t$ and converted into a PSD (plotted versus the frequency $f$). After $n$ measurements, the PSDs obtained are averaged over $n$ (bottom right graph) and the measurement restarts at the next voltage step. \label{SM_averaging}} \end{figure} Eq. \ref{pox} shows the probability of oxidation at a sweep rate $\nu$ for a measuring time $t$. ``SCV conditions'' corresponds here to the conditions where $k_{sum} \gg 1/t$. Experimentally, we used $t>5\times \tau$, which corresponds to $\nu<10$~mV/s, with $\tau=\max(\tau_c,\tau_f)$ (obtained from Eq.
\ref{exp_fit}) the largest time constant experimentally measured at $E\approx E^0$ (where $k_{sum}$ is minimum), as indicated in Figure \ref{SM_timetraces}. Noise measurements at scan rates higher than 10 mV/s were not reliable, as the transient could not be fully resolved and it was difficult to estimate its full extent (Figure \ref{SM_timetraces}). For $\nu< 10$~mV/s, the time trace was fitted, straightened and trimmed, keeping only the last $4/5^{th}$ of the time trace to calculate the PSD (Figure \ref{SM_timetrace_trimming}). The resulting PSD does not vary significantly, as shown in Figure \ref{CV_PSD}, which supports the $t>5\times \tau$ criterion being reasonable. \subsection{Noise conditions} The second type of measurement, called here ``Noise conditions'', aims at addressing the transient problem more satisfactorily than the trimming method. Several time traces are recorded at each voltage step, giving the system time to reach equilibrium at a given voltage ($k_{sum}\gg 1/t$), meaning that after the initial time trace recorded just after the voltage change, no transient phenomenon occurs. In this case, the mean current tends to zero (Figure \ref{figure1} (a) and (c)), but not the noise (Figures \ref{figure1} (b) and \ref{SM_FFT}). This allows averaging the PSD obtained from each time trace at least 10 times (Figure \ref{SM_averaging}), and this for each voltage step, with no transient contribution to the PSD. The other consequence is that the effective voltage scan rates are very low, on the order of 0.1 mV/s and below. The measurement of the noise out of equilibrium (if it can still be called ``noise'' in such a case) is possible but is beyond the scope of the present article.\\ In Figure \ref{CV_PSD}, PSD measurements were carried out from $\nu=$ 1 mV/s up to 16.1 mV/s only, due to transients interfering at higher scan rates. Lower scan rates were measured using the Noise conditions.
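The ``Noise conditions'' processing chain (a periodogram of each time trace, then averaging over the $n$ repetitions) can be sketched as follows (Python; synthetic white noise stands in for the measured current, and the sampling rate and trace lengths are illustrative, not the experimental values). By the Wiener-Khinchin theorem, the periodogram is equivalent to the Fourier transform of the sample autocorrelation used in Figure \ref{SM_FFT}.

```python
import numpy as np

def one_sided_psd(x, fs):
    """Single-trace periodogram (one-sided), equivalent by the Wiener-Khinchin
    theorem to the Fourier transform of the sample autocorrelation."""
    n = len(x)
    X = np.fft.rfft(x - x.mean())
    psd = 2.0 * np.abs(X)**2 / (fs * n)   # one-sided normalization (A^2/Hz)
    return np.fft.rfftfreq(n, d=1.0/fs), psd

def averaged_psd(traces, fs):
    """'Noise conditions' estimate: average the PSDs of n repeated traces
    recorded at the same voltage step (n > 10 in the experiments)."""
    psds = [one_sided_psd(tr, fs)[1] for tr in traces]
    f = np.fft.rfftfreq(len(traces[0]), d=1.0/fs)
    return f, np.mean(psds, axis=0)

# Synthetic white current noise: expected one-sided PSD level is 2*sigma^2/fs.
rng = np.random.default_rng(0)
fs, sigma, n_traces, n_pts = 1000.0, 1e-12, 16, 4096
traces = [sigma * rng.standard_normal(n_pts) for _ in range(n_traces)]
f, S_avg = averaged_psd(traces, fs)
print(S_avg[1:].mean())   # ~ 2*sigma^2/fs
```

Averaging the $n$ periodograms does not change the mean level but reduces the bin-to-bin scatter by roughly $\sqrt{n}$, which is the point of the repeated acquisitions at each voltage step.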
\clearpage \section{Dielectric losses} The PSD spectra exhibit a $\tan(\delta)$ dielectric loss contribution: \begin{equation} \label{S_dielectric} S_D(f) = 8k_BT\pi C\tan(\delta)f \end{equation} \noindent with $C$ the capacitance and $\tan(\delta)$ the dielectric loss tangent \cite{westerlund_capacitor_1994,wen_generalized_2017}. Being negligible at low frequency, the dielectric noise leaves the low-frequency shot noise accessible, but unfortunately not $f_c$ (Eq. \ref{fc}). The origin of the dielectric loss noise can be attributed to the SAM's capacitance. Considering the area of the exposed electrode ($A=7.84\times10^{-9}$ m$^2$) and the capacitance of the SAM ($C=1.5\times10^{-8}$ F), PSDs acquired far from $E^0$ can be fit with Eq. \ref{S_dielectric} (Figure \ref{SM_PSD_full_spectrum}), with dielectric losses around 1, slightly higher than measured in dry conditions \cite{clement_relaxation_2010} and on the order of those of common polymers \cite{prabha_study_2015}. However, this assumes a $\tan(\delta)$ constant with frequency, which is not necessarily the case if, for example, water molecules penetrate the SAM. The curve obtained at $E^0$ in Figure \ref{SM_PSD_full_spectrum} seems to suggest a non-constant loss with frequency. Though beyond the scope of the current work, further investigations could allow such noise measurements to be used as a probe of the effective capacitance of the SAM \cite{israeloff_dielectric_1996}. \clearpage \begin{figure} \includegraphics[trim={8cm 5cm 9cm 3cm},clip,width=0.8\textwidth]{SM_PSD_full_spectrum.pdf}% \caption{Full PSD spectrum with partial fits (dashed lines) of dielectric noise losses (red) at $E\ll E^0$ and electrochemical shot noise (green) at $E=E^0$. \label{SM_PSD_full_spectrum}} \end{figure} \begin{figure}[h] \includegraphics[trim={11cm 0cm 11cm 0cm},clip,width=0.7\textwidth]{SM_sweeprates_smoothing.pdf}% \caption{Smoothing procedure used to obtain Figure \ref{CV_PSD} in the main text.
The noise is taken at 20 Hz (yellow square on the spectrum), averaging with neighboring frequencies (blue lines, average in black). A rolling median (window $\approx$ 7\%, green line) followed by an adjacent averaging (window $\approx$ 7\%, red line) were used.\label{SM_sweeprates_smoothing}} \end{figure} \begin{figure}[h] \includegraphics[trim={4cm 0 4cm 0},clip,width=\textwidth]{SM_sweeprates.pdf}% \caption{Current CV (a) with zoomed data (b) corresponding to the data shown in Figure \ref{CV_PSD} of the main paper. (c) shows the dependence of the current on the scan rate. From the area of the oxidation peak on the current CV, we estimate the number of molecules $N \approx 7.5\times10^{10}$. \label{SM_sweeprates}} \end{figure} \begin{figure}[h] \includegraphics[trim={4cm 5cm 4cm 2cm},clip,width=\textwidth]{SM_sweeprates_PSD.pdf}% \caption{PSD of the current versus $E$ obtained at different $\nu$, at $f \approx$ 20 Hz, corresponding to the data shown in Figure \ref{CV_PSD} of the main paper. \label{SM_sweeprates_PSD}} \end{figure} \clearpage
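The extraction of $N$ from the oxidation peak area can be sketched as follows (Python; an ideal Langmuir surface-confined CV peak is assumed here as an idealization of the measured voltammograms). The charge under the peak equals $Nq$, so $N = (\int I\,dE)/(\nu q)$.

```python
import numpy as np

q = 1.602e-19    # elementary charge (C)
kBT = 4.11e-21   # thermal energy at ~298 K (J)

def surface_cv_current(E, E0, N, nu):
    """Ideal surface-confined (Langmuir) CV peak:
    I = nu * q * N * d(theta)/dE, with
    d(theta)/dE = (q / 4 kB T) / cosh^2( q (E - E0) / (2 kB T) )."""
    x = q * (E - E0) / (2.0 * kBT)
    return nu * q * N * (q / (4.0 * kBT)) / np.cosh(x)**2

def molecules_from_peak_area(E, I, nu):
    """N = Q / q, with Q = (area under I(E)) / nu the charge under the peak."""
    Q = np.sum(0.5 * (I[1:] + I[:-1]) * np.diff(E)) / nu   # trapezoidal rule
    return Q / q

N_true, nu, E0 = 7.5e10, 0.01, 0.0   # nu in V/s; illustrative values
E = np.linspace(-0.3, 0.3, 20001)
I = surface_cv_current(E, E0, N_true, nu)
print(molecules_from_peak_area(E, I, nu))   # recovers ~N_true
```

In practice the baseline (capacitive) current must be subtracted before integrating; the synthetic peak above has no baseline, so the round trip is exact up to discretization.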
\section{INTRODUCTION} Phase transitions in the early Universe occur with the formation of vacuum bubbles of a new phase. During the expansion and mutual intersection of new-phase bubbles, old-phase bubbles completely surrounded by the new phase can also be formed \cite{Kirznits,Kirznits2,ZeldovKobzarOcyn,KobzOkunVoloshin,Col, CalCol,ColLuc}. Investigating the evolution of such bubbles and their subsequent fate is of interest in connection with the problem of primordial black holes. The evolution of vacuum bubbles was considered in many papers, mainly under the assumption of a de Sitter metric for the bubble interior. The region that separates the bubble interior and exterior is a domain wall. The thin-shell formalism suggested by Israel \cite{Israel} and subsequently developed in detail by Berezin, Kuzmin, and Tkachev \cite{BerKuzTkach}, as applied to cosmological problems, is commonly used to describe the latter. Various special cases of this problem were considered in many papers on cosmological phase transitions (see, e.g., the early papers \cite{ZeldovKobzarOcyn,KobzOkunVoloshin,Col,ColLuc,Sato, SatoSasKodMae1,Sato1,Sato2}). The end result of the evolution of vacuum bubbles can be the formation of primordial black holes and various types of wormholes with baby universes inside \cite{Sato,BerKuzTkach,BerKuzTkach1,IpserSikivie,Aurilia,Aurilia1, Aurilia2,Aguirre,BlGuenGuth,BerKuzTkach2,BerKuzTkach6,BerKuzTkach4, BerKuzTkach5,Kardashev}. The evolution of vacuum bubbles in the Schwarzschild-de Sitter world was investigated in \cite{BlGuenGuth,BerKuzTkach,DokCher,DokCher1}. The dynamics of a bubble in the Friedman-Schwarzschild world was investigated in \cite{Sato,SatoSasKodMae1} but without including the surface tension of the bubble (shell) wall. In this paper, we analyze the full dynamics of a vacuum shell in the Friedman-Schwarzschild world in the thin-wall approximation by taking into account the surface energy density of the shell.
Such a configuration can result from the production of particles during the vacuum decay inside the bubble, by analogy with the final stage of inflation (see also \cite{Rubakov}). As a result of our analysis, we found all of the possible types of evolution of vacuum shells in the Friedman-Schwarzschild world and constructed the corresponding global geometries. We also found approximate asymptotic solutions to the equation of motion of vacuum shells in the Friedman-Schwarzschild world. \section{A SHELL IN THE FRIEDMAN-SCHWARZSCHILD WORLD} Let us consider a spherically symmetric shell whose interior (far from the boundary) is described by the Friedman metric \begin{equation} ds^2=dt_{\rm in}^2-a^2(t_{\rm in})\left[\frac{dq^2} {1-kq^2}+q^2d\Omega^2\right], \end{equation} where $t_{\rm in}$ is the time of an observer in the Friedman world, $a=a(t_{\rm in})$ is the scale factor, $q$ is the inner radial coordinate, $d\Omega$ is an element of the solid angle, and $k=1$, $k=0$, and $k=-1$ for closed, flat, and open worlds, respectively. The exterior of the shell is described by the Schwarzschild metric \begin{equation} ds^2=\left(1-\frac{2m}{r}\right)dt_{\rm out}^2- \left(1-\frac{2m}{r}\right)^{-1}dr^2-r^2d\Omega^2, \end{equation} where $m$ is the total outer mass of the shell and $r$ is the outer radial coordinate. In what follows, the subscripts ``in'' and ``out'' pertain to the inner Friedman and outer Schwarzschild worlds, respectively. The metric of the transition region is modelled in the form of a thin shell $\Sigma$: \begin{equation} ds^2|_{\Sigma}=d\tau^2-\rho^2(\tau)d\Omega^2, \end{equation} where $\tau$ is the proper time of an observer on the shell and $\rho=\rho(\tau)$ is the shell radius.
The inner and outer metrics are joined on the shell using the thin-shell method \cite{Israel} to give the equations of motion of the shell \cite{BerKuzTkach} \begin{equation} 4\pi S_{0}^{0}=[K_{2}^{2}], \quad \frac{dS_{0}^{0}}{d\tau}+2(S_{0}^{0}- S_{2}^{2})\frac{\dot{\rho}}{\rho}+[T_{0}^{n}]=0, \label{eqmotion} \end{equation} where $S_{\alpha}^{\beta}$ is the surface energy density tensor on the shell, $K_{\alpha}^{\beta}$ is the external curvature tensor, $T_{\alpha}^{\beta}$ is the fluid energy-momentum tensor, and $[A] = A_{out} - A_{in}$. For a vacuum shell, $S_{0}^{0}=S_{2}^{2}=S=const$. Let us first consider the interior of our bubble. The Friedman equations are \begin{equation} \frac{\dot{a}^2+k}{a^2}=\frac{8\pi}{3}\varepsilon, \qquad \frac{\ddot{a}}{a}=-\frac{4\pi}{3}(\varepsilon+3P), \label{friedman} \end{equation} where the dot denotes differentiation with respect to the time $t_{in}$, $\varepsilon$ is the energy density, and $P$ is the pressure. We will make the classification for an arbitrary equation of state, but we will always keep in mind a linear equation of state, where $P=\alpha\varepsilon$ and $\alpha=const\neq-1$ (for the classification of solutions in the case of $\alpha=-1$ corresponding to the de Sitter vacuum metric, see \cite{DokCher,DokCher1}). For a linear equation of state, the solution to the Friedman equations is ($k = 0$) \begin{equation} a=At_{in}^n, \quad \varepsilon=\frac{3n^2}{8\pi t_{in}^2}, \quad \label{frsolution} \end{equation} where $A$ is a constant and $n=2/(3(1+\alpha))$. For the Friedman metric and using the condition for joining the Friedman metric and the shell, $\rho=aq$, Berezin et al. 
\cite{BerKuzTkach} calculated (for any k) the invariants \begin{equation} \Delta\equiv g^{\alpha\beta}\rho_{,\alpha}\rho_{,\beta}= \frac{8\pi}{3}\varepsilon\rho^2-1 \label{Delta} \end{equation} and the external curvature tensor component (since the problem is spherically symmetric, we will need only one component) \begin{equation} K_{2}^{2}= -\frac{\sigma}{\rho}\sqrt{\left(\frac{d\rho}{d\tau}\right)^2 +1-\frac{8\pi}{3}\varepsilon\rho^2}, \end{equation} where $\sigma=\pm1$; $\sigma=1$ if the radius of the two-dimensional sphere increases in the direction of the outward normal and $\sigma=-1$ in the opposite case. In turn, depending on the sign of the invariant $\Delta$, the shell moves either in the space-time region $R$ or in $T$ \cite{BerKuzTkach3}. The boundary that separates the space-time regions $R$ and $T$ for the Friedman metric is located at the radius \begin{equation} \rho_{\Delta}=\sqrt{\frac{3}{8\pi\varepsilon}}, \label{rhodelta} \end{equation} which is the root of the equation $\Delta(\rho)=0$. For the outer Schwarzschild metric, the external curvature tensor component $K_{2}^{2}$ is \begin{equation} K_{2}^{2}= -\frac{\sigma}{\rho}\sqrt{\left(\frac{d\rho}{d\tau}\right)^2+1 -\frac{2m}{\rho}}\,, \end{equation} and the radius at which $\Delta$ changes its sign coincides with the radius of the event horizon $r_h=2m$. As a result, the main equation of motion of the shell (\ref{eqmotion}) that arises as a condition for joining the outer and inner metrics can be written for any $k$ as \cite{BerKuzTkach} \begin{equation} 4\pi S= \frac{\sigma_{\rm in}}{\rho}\sqrt{\left(\frac{d\rho}{d\tau}\right)^2 +1-\frac{8\pi}{3}\,\varepsilon\rho^2} -\frac{\sigma_{\rm out}}{\rho}\sqrt{\left(\frac{d\rho}{d\tau}\right)^2 +1-\frac{2m}{\rho}}\,. 
\label{eqmot} \end{equation} In this equation, the radius $\rho$ depends on the proper time $\tau$ of an observer on the shell and the energy density $\varepsilon$ depends on the time $t_{\rm in}$ for an observer inside the shell in the Friedman world. Therefore, the equation of motion should be supplemented with another equation obtained when the inner Friedman metric and the metric on the shell are joined: \begin{equation} dt_{\rm in}^2-a^2dq^2=d\tau^2. \label{shifka} \end{equation} In Eq.~(\ref{eqmot}), the shell radius $\rho$ may be considered as a function of the time $t_{\rm in}$ (below, we omit the subscript to save space). More specifically, $\tau=\tau(t)$ can be expressed from Eq.~(\ref{shifka}) and substituted into Eq.~(\ref{eqmot}), i.~e., $\rho(\tau)=\rho(\tau(t))$. We will begin our analysis with the full classification of the solutions to the equation of motion of the shell (\ref{eqmot}) and the construction of the corresponding global geometries. Subsequently, we will find an approximate solution to the equation of motion of the shell in some special cases. \section{ANALYSIS OF THE EQUATION OF MOTION OF THE SHELL} For the subsequent analysis, it is convenient to represent the equation of motion of the shell (\ref{eqmot}) as an equation for the effective energy, \begin{eqnarray} \left(d\rho/d\tau\right)^2/2+U(\rho)=0 \nonumber \end{eqnarray} (see also \cite{DokCher,DokCher1}), where the effective potential is \begin{widetext} \begin{equation} U(\rho)=\frac{1}{2}\Bigg[1-\left(2\pi S+ \frac{\varepsilon}{3S}\right)^2\rho^2 -\frac{m}{\rho}\left(1-\frac{\varepsilon}{6\pi S^2}\right)- \frac{m^2}{16\pi^2 S^2\rho^4}\Bigg]. \label{potential} \end{equation} \end{widetext} Its graph is presented in Fig.~1.
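As a consistency check, Eq.~(\ref{potential}) can be verified numerically against the junction condition (\ref{eqmot}): the sketch below (Python; the values of $m$, $S$, $\varepsilon$ and $\rho$ are arbitrary illustrative numbers in geometrized units) computes $(d\rho/d\tau)^2=-2U(\rho)$ and confirms that the matching equation is satisfied with the sign factors $\sigma_{\rm in}$ and $\sigma_{\rm out}$ of Eqs.~(\ref{sigmain}) and (\ref{sigmaout}).

```python
import math

def U(rho, m, S, eps):
    """Effective potential, Eq. (potential)."""
    return 0.5 * (1.0
                  - (2*math.pi*S + eps/(3*S))**2 * rho**2
                  - (m/rho) * (1.0 - eps/(6*math.pi*S**2))
                  - m**2 / (16*math.pi**2 * S**2 * rho**4))

def junction_residual(rho, m, S, eps):
    """Residual of Eq. (eqmot):
    sigma_in*sqrt(drho^2 + 1 - (8 pi/3) eps rho^2)
      - sigma_out*sqrt(drho^2 + 1 - 2 m/rho) - 4 pi S rho,
    with (d rho/d tau)^2 = -2 U(rho)."""
    drho2 = -2.0 * U(rho, m, S, eps)   # must be >= 0 on a trajectory
    s_in = math.copysign(1.0, m - (4*math.pi/3)*eps*rho**3
                              + 8*math.pi**2*S**2*rho**3)
    s_out = math.copysign(1.0, m - (4*math.pi/3)*eps*rho**3
                               - 8*math.pi**2*S**2*rho**3)
    a = drho2 + 1.0 - (8*math.pi/3)*eps*rho**2
    b = drho2 + 1.0 - 2.0*m/rho
    return s_in*math.sqrt(a) - s_out*math.sqrt(b) - 4*math.pi*S*rho

# Arbitrary illustrative parameters (geometrized units)
m, S, eps = 1.0, 0.01, 0.1
print(junction_residual(1.0, m, S, eps))   # ~0: (potential) is the squared junction condition
```

The residual vanishes to machine precision, confirming that the double squaring that produces Eq.~(\ref{potential}) introduces no spurious terms for this sign branch.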
Equation (\ref{potential}) for the effective potential $U(\rho)$ should be supplemented with the following conditions on the signs of the quantity $\sigma$ present in the original equation of motion of the shell (\ref{eqmot}): \begin{eqnarray} \label{sigmain} \sigma_{\rm in}&=& sign\left[m-\frac{4\pi}{3}\varepsilon\rho^3+8\pi^2 S^2\rho^3\right], \\ \sigma_{\rm out}&=&sign\left[m-\frac{4\pi}{3}\varepsilon\rho^3 -8\pi^2 S^2\rho^3\right]. \label{sigmaout} \end{eqnarray} It is easy to show that the second derivative of this potential with respect to the radius for any $\rho=\rho(\tau(t))$ is negative: \begin{widetext} \begin{equation} \frac{\partial^2U}{\partial\rho^2}= -\frac{1}{2}\Bigg[\frac{2m}{\rho^3}+ \frac{m^2}{\pi^2S^2\rho^6}+8\pi^2S^2+ \frac{8\pi}{3}\varepsilon+\frac{\varepsilon^2}{9S^2} +\left(\frac{\varepsilon}{3S}- \frac{m}{2\pi S\rho^3}\right)^2\Bigg]<0\,. \end{equation} \end{widetext} Thus, there are no static solutions in this problem \cite{IshakLake}. Setting the first derivative of the potential with respect to the radius equal to zero, we find the point of maximum potential $\rho_{\rm max}^3=my_{\rm max}$, where \begin{widetext} \begin{equation} y_{\rm max}=\left[1-\frac{\varepsilon}{6\pi S^2}+ \sqrt{\left(1-\frac{\varepsilon}{6\pi S^2}\right)^2+ 8\left(1+\frac{\varepsilon}{6\pi S^2}\right)^2}\,\right] \left(4\pi S+\frac{2\varepsilon}{3S}\right)^{-2}>0. \label{ymax} \end{equation} \end{widetext} Note that the point of maximum potential is a function of time, $\rho_{\rm max}=\rho_{\rm max}(t)$. As will be shown below, the total mass $m$ (Schwarzschild mass) of the shell measured by an observer at spatial infinity is a convenient parameter for the classification of the possible types of its evolution. This mass, which includes the gravitational mass defect, comprises the total energy of the inner Friedman world and the total energy of the shell, with its surface tension energy and its kinetic energy.
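The concavity of $U(\rho)$, and hence the absence of static solutions, can be spot-checked numerically. A minimal sketch (Python; $m$, $S$, $\varepsilon$ are arbitrary illustrative values) evaluates $\partial^2U/\partial\rho^2$ by central finite differences over a range of radii:

```python
import math

def U(rho, m, S, eps):
    """Effective potential, Eq. (potential)."""
    return 0.5 * (1.0
                  - (2*math.pi*S + eps/(3*S))**2 * rho**2
                  - (m/rho) * (1.0 - eps/(6*math.pi*S**2))
                  - m**2 / (16*math.pi**2 * S**2 * rho**4))

def d2U(rho, m, S, eps, h=1e-5):
    """Central finite-difference estimate of d^2 U / d rho^2."""
    return (U(rho+h, m, S, eps) - 2*U(rho, m, S, eps)
            + U(rho-h, m, S, eps)) / h**2

m, S, eps = 1.0, 0.05, 0.2   # arbitrary illustrative values
curvatures = [d2U(0.3 + 0.05*i, m, S, eps) for i in range(60)]
print(max(curvatures))   # strictly negative: U is concave everywhere sampled
```

Since every term in the closed-form expression for $\partial^2U/\partial\rho^2$ above is manifestly non-negative inside the bracket, the finite-difference check is merely a sanity test of the algebra.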
Substituting $\rho=\rho_{\rm max}$ into Eq.~(\ref{potential}) for the potential, we find the first important mass parameter of the shell, $m_{\rm max}=m_{0}$, at which a contracting or expanding shell passes through the point of maximum potential: \begin{widetext} \begin{equation} m_{0}=\sqrt{y_{\rm max}}\Bigg[1-\frac{\varepsilon}{6\pi S^2}+ \frac{1}{16\pi^2S^2y_{\rm max}}+\left(2\pi S+ \frac{\varepsilon}{3S}\right)^2y_{\rm max}\Bigg]^{-3/2}>0. \label{m0} \end{equation} \end{widetext} The potential $U(\rho_{\rm max})<0$ for $m>m_{0}$ and, conversely, $U(\rho_{\rm max})>0$ for $m<m_{0}$. Thus, depending on the total mass $m$ of the shell, the potential either intersects the $U = 0$ axis or does not. In other words, this means that the presence or absence of a bounce point during the temporal evolution of the shell radius depends on the total mass $m$ of the shell. It should be kept in mind that the mass parameter $m_{0}$ (and all of the mass parameters introduced below) is a function of time $t$, because the energy density $\varepsilon$ depends on $t$ in accordance with the Friedman equations (\ref{friedman}). Therefore, at a fixed total mass $m$, inequalities of the form $m>m_{0}$ or $m<m_{0}$ can change with time to the opposite ones. Accordingly, the bounce point can appear and/or disappear as the shell evolves. Note also that the energy density $\varepsilon$ decreases with time for a linear equation of state, $P=\alpha\varepsilon$, at $\alpha>-1$ and $k=0$. Therefore, on fairly long time scales, we have the following asymptotic for the mass parameter: $m_{0}(t\rightarrow\infty)=4/(27\pi S)$. Accordingly, the inequality $U(\rho_{\rm max})\lessgtr0$ will hold on fairly long time scales for $m\gtrless m_{0}(t\rightarrow\infty)$.
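Both the stationarity of $\rho_{\rm max}$ in Eq.~(\ref{ymax}) and the asymptotic value $m_{0}(t\rightarrow\infty)=4/(27\pi S)$ can be checked numerically. The sketch below (Python; parameter values are arbitrary illustrative numbers) differentiates $U$ at $\rho_{\rm max}^3=m\,y_{\rm max}$ and evaluates Eq.~(\ref{m0}) in the limit $\varepsilon\to0$:

```python
import math

def U(rho, m, S, eps):
    """Effective potential, Eq. (potential)."""
    return 0.5 * (1.0
                  - (2*math.pi*S + eps/(3*S))**2 * rho**2
                  - (m/rho) * (1.0 - eps/(6*math.pi*S**2))
                  - m**2 / (16*math.pi**2 * S**2 * rho**4))

def y_max(S, eps):
    """Eq. (ymax): rho_max^3 = m * y_max."""
    u = eps / (6*math.pi*S**2)
    return ((1 - u + math.sqrt((1 - u)**2 + 8*(1 + u)**2))
            / (4*math.pi*S + 2*eps/(3*S))**2)

def m0(S, eps):
    """Eq. (m0): mass at which the shell passes through the barrier top."""
    y = y_max(S, eps)
    u = eps / (6*math.pi*S**2)
    bracket = 1 - u + 1/(16*math.pi**2*S**2*y) + (2*math.pi*S + eps/(3*S))**2 * y
    return math.sqrt(y) * bracket**(-1.5)

S, eps, m = 0.05, 0.2, 1.0
rho_max = (m * y_max(S, eps))**(1/3)
h = 1e-6
dU = (U(rho_max + h, m, S, eps) - U(rho_max - h, m, S, eps)) / (2*h)
print(dU)                               # ~0: rho_max is the stationary point
print(m0(S, 1e-12), 4/(27*math.pi*S))   # m0 -> 4/(27 pi S) as eps -> 0
```

In the limit $\varepsilon\to0$ one has $y_{\rm max}\to1/(4\pi^2S^2)$, the bracket in Eq.~(\ref{m0}) tends to $9/4$, and $m_0\to(1/2\pi S)(9/4)^{-3/2}=4/(27\pi S)$, which the numerical evaluation reproduces.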
Next, it follows from Eq.~(\ref{sigmain}) that $\sigma_{\rm in}$ changes its sign ($\sigma_{\rm in}=0$) at the shell radius $\rho=\rho_{1}$, where \begin{equation} \rho_{1}^3= \frac{3}{4\pi}\,\frac{m}{\varepsilon-6\pi S^2}. \label{rho1} \end{equation} The radius $\rho_{1}$ exists only at $\varepsilon(t,k)>6\pi S^2$. For a linear equation of state at $k=0$, this corresponds to $t<t_{1}=n/(4\pi S)$. The radius $\rho_{1}$ does not exist ($\rho_{1}<0$) at $t>t_{1}$. We see from these relations that on time scales $t<t_{1}$ there exists a radius in a flat universe at which $\sigma_{\rm in}$ changes its sign; the latter, in turn, is related to the regions $R_+$ (where $dr/dq > 0$) and $R_-$ (where $dr/dq < 0$). The solution of the Friedman equations determines the time scales at which the radius $\rho_{1}$ will exist for other equations of state. Note also that a periodic function can be the solution of the Friedman equations for a closed universe. Therefore, the radius $\rho_{1}$ can appear and disappear an infinite number of times. We will assume that the radius appears only once. This is a very rough approximation that can subsequently lead to contradictions on the Carter-Penrose diagrams if this condition is disregarded. In turn, it follows from Eq.~(\ref{sigmaout}) that $\sigma_{\rm out}$ changes its sign ($\sigma_{\rm out}=0$) at the shell radius $\rho=\rho_{2}$, where \begin{equation} \rho_{2}^3= \frac{3}{4\pi}\,\frac{m}{\varepsilon+6\pi S^2}. \label{rho2} \end{equation} The relations between $\rho_{1}$, $\rho_{2}$ and $\rho_{\rm max}$ follow from Eqs. (\ref{ymax}), (\ref{rho1}) and (\ref{rho2}): \begin{eqnarray} \rho_{2}&<&\rho_{\rm max}; \\ \rho_{1}&>&(\rho_{2},\rho_{\rm max}) \quad \hbox{for} \quad \varepsilon>6\pi S^2. \end{eqnarray} Substituting the radius $\rho=\rho_{1}$ into Eq.~(\ref{potential}) for the potential $U(\rho)$ and solving the equation \begin{equation} U(\rho_{1})\equiv\frac{1}{2}\left[1- 2\!\left(\frac{4\pi}{3}\right)^{1/3}\!\!
\left(\frac{m\varepsilon^{3/2}}{\varepsilon- 6\pi S^2}\right)^{2/3}\right]=0, \end{equation} we find the mass parameter $m=m_1$, where \begin{equation} m_{1}=\frac{1}{4}\sqrt{\frac{3}{2\pi}} \frac{\varepsilon-6\pi S^2}{\varepsilon^{3/2}}. \end{equation} We see from this relation that $U(\rho_{1})\lessgtr0$ for $m\gtrless m_{1}$. This parameter exists, just like $\rho_{1}$, only at $\varepsilon>6\pi S^2$. Similarly, substituting the radius $\rho=\rho_{2}$ into Eq.~(\ref{potential}) for the potential $U(\rho)$ and solving the equation \begin{equation} U(\rho_{2})\!\equiv\!\frac{1}{2}\left[1\!- \!2\!\left(\frac{4\pi}{3}\right)^{1/3}\!\!\!m^{2/3} (\varepsilon\!+\!6\pi S^2)^{1/3}\right]=0, \end{equation} we find another mass parameter, $m=m_2$, where \begin{equation} m_{2}= \frac{1}{4}\sqrt{\frac{3}{2\pi(\varepsilon+6\pi S^2)}}. \label{m2} \end{equation} According to Eq.~(\ref{frsolution}), the energy density in the Friedman world decreases as $\varepsilon\propto t^{-2}$ (for a linear equation of state and at $k=0$). Therefore, the mass parameter $m_{2}$ increases with time: \begin{equation} dm_{2}/dt>0, \quad m_{2}(t\rightarrow\infty)\rightarrow(8\pi S)^{-1}. \end{equation} For $m<m_{2}$, the potential is always positive at the point with $\rho=\rho_{2}$, i.~e., $U(\rho_{2})>0$. In contrast, for $m>m_{2}$, the potential is always negative at the point with $\rho=\rho_{2}$, i.~e., $U(\rho_{2})<0$. In other words, the point with coordinates $(\rho_{2},0)$ lies under the graph of $U(\rho)$ for $m<m_{2}$ and above the graph of $U(\rho)$ for $m>m_{2}$. At $m=m_{2}$, the radius $\rho_{2}$ intersects the potential. Note also that $m_{2}>m_{1}$. For the potential on the event horizon of the Schwarzschild metric, $\rho_{h}=2m$, we find \begin{equation} U(\rho_h)=-\frac{1}{2}\left[4\pi Sm\left(1+ \frac{\varepsilon}{6\pi S^2}\right)-\frac{1}{16\pi Sm}\right]^2\leq0.
\end{equation} We see that the point with coordinates $(\rho_h,0)$ always lies either above the graph of $U(\rho)$ or touches the graph of the potential at $m=m_{2}$ ($U(\rho_{h})=0$). At $m=m_2$, the radius of the event horizon $\rho_{h}=2m$ coincides with the radius $\rho_{2}$ at which $\sigma_{\rm out}$ changes its sign. Using Eqs. (\ref{m0}), (\ref{rho2}) and (\ref{m2}) for $m_{0}$, $\rho_{2}$ and $m_{2}$, we find that $m_{0}>m_{2}$ and \begin{equation} \rho_{h} \gtrless \rho_{2} \quad \hbox{for} \quad m \gtrless m_{2}. \end{equation} Finally, substituting the radius of the boundary between the space-time regions $R$ and $T$ in the Friedman world, $\rho=\rho_{\Delta}$ from (\ref{rhodelta}), into Eq.~(\ref{potential}) for the potential $U(\rho)$ yields \begin{widetext} \begin{equation} U(\rho_{\Delta})=-\frac{1}{2\rho_{\Delta}}\left[2\pi S\left(1- \frac{\varepsilon}{6\pi S^2}\right) \left(\frac{3}{8\pi\varepsilon}\right)^{3/4}+\frac{m}{4\pi S} \left(\frac{8\pi\varepsilon}{3}\right)^{3/4}\right]^2 \leq0. \end{equation} \end{widetext} We see that the point with coordinates $(\rho_{\Delta},0)$ cannot be under the graph $U(\rho)$. Only at $m=m_{1}$ does the point with coordinates $(\rho_{\Delta},0)$ lie on the graph of the potential and, in this case, $\rho_{\Delta}=\rho_{1}$. Accordingly, $\rho_{1}\gtrless\rho_{\Delta}$ for $m\gtrless m_{1}$. Note that the radius $\rho_{\Delta}$ at which the regions $R$ and $T$ are interchanged has the same properties as the radius of the event horizon in the Schwarzschild metric, $r_h=2m$, with respect to our potential. It can also be shown that $\rho_{\Delta}>\rho_{\rm max}$ at $m=m_{\rm max}$. Finally, let us introduce the last mass parameter \begin{equation} m_{3}=\frac{1}{4}\sqrt{\frac{3}{2\pi\varepsilon}}, \end{equation} which is the root of the equation $\rho_{\Delta}=\rho_{h}$. As a result, we will obtain the relations $\rho_{h}\lessgtr\rho_{\Delta}$ for $m\lessgtr m_{3}$. It can be shown that $m_{3}>m_{0}$. 
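The orderings stated above ($m_{1}<m_{2}$, $m_{2}<m_{0}$, $m_{0}<m_{3}$) can be verified numerically for sample parameters with $\varepsilon>6\pi S^2$ (so that $m_{1}$ exists). A minimal sketch (Python; $S$ and $\varepsilon$ are arbitrary illustrative values):

```python
import math

def y_max(S, eps):
    u = eps / (6*math.pi*S**2)
    return ((1 - u + math.sqrt((1 - u)**2 + 8*(1 + u)**2))
            / (4*math.pi*S + 2*eps/(3*S))**2)

def m0(S, eps):
    y = y_max(S, eps)
    u = eps / (6*math.pi*S**2)
    return math.sqrt(y) * (1 - u + 1/(16*math.pi**2*S**2*y)
                           + (2*math.pi*S + eps/(3*S))**2 * y)**(-1.5)

def m1(S, eps):
    """Exists only for eps > 6 pi S^2."""
    return 0.25 * math.sqrt(3/(2*math.pi)) * (eps - 6*math.pi*S**2) / eps**1.5

def m2(S, eps):
    return 0.25 * math.sqrt(3 / (2*math.pi*(eps + 6*math.pi*S**2)))

def m3(eps):
    return 0.25 * math.sqrt(3 / (2*math.pi*eps))

S, eps = 0.05, 0.2            # illustrative values satisfying eps > 6 pi S^2
masses = [m1(S, eps), m2(S, eps), m0(S, eps), m3(eps)]
print(masses)                 # strictly increasing: m1 < m2 < m0 < m3
```

These four parameters are exactly the thresholds used in the classification of the next section, so the hierarchy fixes the order in which the cases below appear as $m$ decreases.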
For a linear equation of state, $m_{3}(t\rightarrow\infty)\rightarrow\infty$ at $k = 0$. We now have all of the necessary parameters to construct a full classification of the possible types of solutions to the equation of motion of the shell in the Friedman-Schwarzschild world and to find the corresponding global geometries. \section{GLOBAL GEOMETRIES OF THE FRIEDMAN-SCHWARZSCHILD WORLD} Let us consider all of the possible types of solutions to the equation of motion of a vacuum shell in the Friedman-Schwarzschild world and then give a physical interpretation of these solutions. \subsection{The Case of $m>m_{3}$} We will begin our consideration with the case where $m>m_{3}$, i.~e., where the shell has a large mass that exceeds all of the characteristic masses in our problem, and will sequentially consider shells with an increasingly small mass. In this case, the relations $\rho_{h}>\rho_{\Delta}$, $\rho_{1}>\rho_{2}$ (if $\rho_{1}$ exists), $\rho_{1}>\rho_{\Delta}$ and $\rho_{h}>\rho_{2}$ hold. The potential and the location of the characteristic radii for this case are shown in Fig.~2a. As we see from Fig. 2a, there is no bounce point for the vacuum shell in this case. Let us consider the special case where the vacuum shell initially expands. To determine the type of space-time regions $R$ and $T$, it is important to know which signs $\sigma_{\rm in}$ and $\sigma_{\rm out}$ will have when the shell intersects the radii $\rho_{\Delta}$ and $\rho_{h}$, respectively. When the radius $\rho_{\Delta}$ is intersected, $\sigma_{\rm in}=1$ for any function $\varepsilon(t,k)$. Consequently, the shell initially moves in the region $R_{+}$. When the radius $\rho_{h}$ is intersected, $\sigma_{\rm out}=-1$ and, hence, the shell is in the region $R_{-}$. The Carter-Penrose diagram (global geometry) corresponding to this case is shown in Figs.~2b-2d (at $k = 0,-1$) for various equations of state (see also \cite{BerKuzTkach3}). 
Below, we will give the Carter-Penrose diagrams only for the case where $P=\varepsilon/3$, i.~e., $\alpha=1/3$, since the corresponding diagrams for other equations of state with $\alpha=const$ are constructed in a similar way. The Carter-Penrose diagrams for a closed universe will be the same as those for an open one (see Figs.~2b-2d). For a contracting shell, the signs of $\sigma$ at which the shell intersects the radii $\rho_{h}$ and $\rho_{\Delta}$ will remain the same. The corresponding Carter-Penrose diagram for a closed geometry is shown in Fig.~2e. Additional peculiarities appear for a closed universe, i.~e., at $k = 1$. For example, the expansion of a closed universe changes to its contraction, while we see from the Carter-Penrose diagram for a closed universe (see Fig.~2b) that the expansion of the universe cannot change to its contraction, since there is no region $T_{-}$ for a closed Friedman world. In particular, this is because a time interval will always be found when this diagram will not be valid or, more specifically, the condition $m>m_{3}$ will be violated. We will give an answer to this question (and to similar questions for other diagrams) at the end of this section. \subsection{The Case of $m_{0}<m<m_{3}$} This case differs from the previous one only in that the radii $\rho_{h}$ and $\rho_{\Delta}$ are interchanged. The potential and the Carter-Penrose diagram for an expanding shell for an equation of state with $\alpha=1/3$ are shown, respectively, in Figs.~2f and 2g ($k = 0,\pm1$). The Carter-Penrose diagrams for a contracting shell and for other values of $\alpha$ are constructed in much the same way as in the previous case. \subsection{The Case of $m_{2}<m<m_{0}$} The effective potential in this case, shown in Fig.~2h, has a region where $U(\rho)>0$. In this case, the shell bounces, i.~e., the contraction and expansion are interchanged, at $U(\rho)=0$.
If the shell begins its motion from the coordinate origin ($\rho(0)=0$), then the expansion of the shell changes to its contraction and, in the long run, it will contract into a singularity. The Carter-Penrose diagram for a closed world is shown in Fig.~3a. If, alternatively, the shell begins to contract from infinity, then this contraction will change to its expansion and the shell will again expand to infinity. The corresponding Carter-Penrose diagram for a closed world is shown in Fig.~3b. \subsection{The Case of $m_{1}<m<m_{2}$} For a shell contracting from infinity, the situation will not change compared to the previous case. However, for a shell expanding from the coordinate origin, the situation will change radically. Now, $\sigma_{\rm out}=+1$. The corresponding graph of the potential and the Carter-Penrose diagram for a closed world are shown in Figs.~3c and 3d. \subsection{The Case of $m<m_{1}$} The potential for this case is shown in Fig.~3e. The situation where the shell expands from the coordinate origin will not change compared to the previous case, while the situation where the shell contracts from infinity differs in that the radius $\rho_{1}$ will be under the graph of the potential (if it still exists at all by that time). Two alternatives are possible. If the radius $\rho_{1}$ is absent at the time when the shell intersects the radius $\rho_{\Delta}$, then the situation is reduced to the previous case. If, alternatively, the radius $\rho_{1}$ exists at the time when the shell intersects the radius $\rho_{\Delta}$, then $\sigma_{\rm in}$ will change its sign or, more specifically, $\sigma_{\rm in}=-1$. The Carter-Penrose diagram for this alternative is shown in Fig.~3f. The classification under consideration allows the dynamics of the vacuum shell to be completely described without restricting generality to a short time interval $t$. Indeed, let the mass $m$ be fixed and the condition $m>m_{3}$ be satisfied for some short time interval.
The parameter $m_{3}$ increases with time $t$ and will become larger than $m$ at some time. The condition $m_{0}<m<m_{3}$ will then be satisfied and the vacuum shell will satisfy the corresponding solution for this new inequality depending on whether it intersected other characteristic radii or not. The entire subsequent dynamics of the shell can be traced in a similar way. The evolution of a contracting vacuum shell can be considered just as the evolution of an expanding one, since the Friedman Eqs.~(5) are invariant with respect to the change of sign of the time $t$ to $-t$. Let the vacuum shell begin its motion from the coordinate origin at $t = 0$. The parameter $m$ will then be larger than all of the other mass parameters $m_{i}=(m_{0},m_{1},m_{2},m_{3})$, because $\varepsilon\to\infty$ and $m_{i}\to0$ when $t\to0$. Let an open or flat universe initially exist inside the vacuum bubble. Nothing will hinder the expansion of the vacuum shell. Depending on the relation between the parameters, several situations can arise. Either the shell will intersect the radius $\rho_{\Delta}$ and then the radius $\rho_{h}$ or an exchange between the radii will first take place, $\rho_{\Delta}\lessgtr\rho_{h}$ (since $m_{3}$ increases linearly with time, the parameter $m_{3}$ will become larger than a given m at some time). If $m>m_{0}(t\rightarrow\infty)=4/(27\pi S)$, then the shell will just go to infinity. If, alternatively, $m<m_{0}(t\rightarrow\infty)=4/(27\pi S)$, then the inequality $m<m_{0}$ will hold after some time (i.~e., the potential will intersect the $U=0$ axis and a bounce point will emerge). However, since the universe inside the bubble is open, the expansion cannot change to contraction, i.~e., the vacuum shell can pass only into the region to the right of the graph of the potential (if the shell passed into the region to the left of the potential, then it would bounce at the bounce point and would contract). 
In the region to the right of the potential, the vacuum shell would continue its expansion, going to infinity. There could also be other special cases during the expansion. For example, $m<m_{2}(t\rightarrow\infty)$, but this case is similar to the previous one. Thus, generally, the Carter-Penrose diagram evolves with time and this evolution is described in different time intervals by the above diagrams. If, alternatively, a closed universe exists inside the bubble, then there always comes a time when $m<m_{0}(t\rightarrow\infty)=4/(27\pi S)$, since the expansion should change to contraction. The vacuum shell can then be located only to the left of the potential and the reflection from the bounce point is possible (i.~e., the expansion will change to contraction). In the long run, such a shell will contract (collapse) into a singularity. Qualitatively, the embedding diagrams \cite{Zeldov} for vacuum shells in the Friedman-Schwarzschild world are shown in Fig.~4a for open and flat Friedman worlds and in Figs.~4b and 4c for a closed Friedman world (see also \cite{FrolMarMuk,Novik,Novik1}). We can see from the Carter-Penrose diagrams that semiclosed worlds are formed in almost all cases of shell evolution. This is because the shell moves in the region $R_{-}$ of the Schwarzschild world, while the regions $R_{+}$ and $R_{-}$ are connected by a tunnel (wormhole). \section{APPROXIMATE SOLUTION} In certain limiting cases, the equation of motion of a vacuum shell in the Friedman-Schwarzschild world can be solved approximately. When the shell contracts from infinity and the effective potential intersects the $U = 0$ axis, the term \begin{equation} -\frac{m}{\rho}\left(1-\frac{\varepsilon}{6\pi S^2}\right)-\frac{m^2}{16\pi^2 S^2\rho^4} \label{potential2} \end{equation} can be neglected in the effective potential (13).
The equation of motion of the shell (13) will then be significantly simplified and can be reduced to \begin{equation} \left(\frac{d\rho}{d\tau}\right)^2=\phi^2\rho^2-1, \label{eqmot2} \end{equation} where we denote $\phi=2\pi S+\varepsilon/(3S)$. In this case, Eq.~(12) for joining the inner Friedman metric and the metric on the shell can be rewritten as \begin{widetext} \begin{equation} \left[1+\left(\frac{d\rho}{d\tau}\right)^2\right] \dot\rho^2-2\left(\frac{d\rho}{d\tau}\right)^2H\rho \dot\rho+\left(\frac{d\rho}{d\tau}\right)^2 \left(H^2\rho^2-1\right)=0, \label{shifka2} \end{equation} \end{widetext} where $H=\dot a/a=(8\pi\varepsilon/3)^{1/2}$ is the Hubble constant. From two equations, (\ref{eqmot2}) and (\ref{shifka2}), we obtain \begin{equation} \dot\rho=(H\sqrt{\phi^2\rho^2-1} \pm|\phi-4\pi S|)\frac{\sqrt{\phi^2\rho^2-1}}{\phi^2\rho}\,. \label{34} \end{equation} In this equation, we can already assume that $\rho$ depends only on $t$. Integrating this equation, we will obtain the function $\rho(t)$ and then can find the function $\tau(t)$ using Eq. (\ref{shifka}) at $\rho=aq$: \begin{equation} \dot\tau^2=1-\left(\dot\rho-H\rho\right)^2. \label{dottau} \end{equation} Finding the inverse function $t=t(\tau)$ from this equation, we will ultimately obtain the function $\rho(\tau)$. For a linear equation of state ($k = 0$) and taking into account the solution of the Friedman equations (\ref{frsolution}), we have the relation $H=\dot{a}/a=n/t$. Let us find an asymptotic solution to Eq.~(\ref{34}) for $t\rightarrow\infty$. In this limit, Eq.~(\ref{34}) can be rewritten as \begin{equation} \frac{d\rho}{dt}=\frac{\sqrt{\left(2\pi S\rho\right)^2-1}} {\left(2\pi S\rho\right)^2}\left[\frac{n}{t} \sqrt{\left(2\pi S\rho\right)^2-1}\pm2\pi S\right]. \end{equation} Integrating the latter equation at $n\neq1$ yields \begin{equation} \rho=\frac{1}{2\pi S}\sqrt{1+ \frac{(2\pi S)^2[t-Bn(n-1)t^n]^2}{(n-1)^2}}, \end{equation} where $B$ is the constant of integration. 
Accordingly, at $n=1$, we obtain \begin{equation} \rho=\frac{1}{2\pi S}\sqrt{1+(2\pi S)^2t^2(B+\ln{t})^2}. \end{equation} In the limit of $t\rightarrow\infty$ under consideration, we ultimately obtain \begin{equation} \rho\simeq\left\{ \begin{array}{lr} [t+B(1-n)nt^n]/(1-n), & n\neq1; \\ t\ln{t}, & n=1. \end{array} \right. \label{rho39} \end{equation} It follows from this solution and Eq.~(\ref{dottau}) that $d\tau/dt=0$, i.~e., the dependence of $t$ on $\tau$ vanishes in the limit under consideration. The equation of motion of the shell can also be solved in the other limiting case where $t\rightarrow0$. In this limit, the equation of motion of the shell is \begin{eqnarray} \frac{d\rho}{dt}=\frac{n}{t}\rho\pm1. \end{eqnarray} The solution to this equation is \begin{equation} \rho\simeq\left\{ \begin{array}{lr} Ct^n\pm t/(1-n), & n\neq1; \\ Ct\pm t\ln{t}, & n=1, \end{array} \right. \label{rho41} \end{equation} where $C$ is the constant of integration. In this limiting case, there is no dependence of $t$ on $\tau$ either. In a similar way, we can find an approximate solution to the equation of motion of the vacuum shell when it moves while being located to the left of the potential. In this case, terms of the form \begin{equation} -\frac{\varepsilon^2\rho^2}{9S^2}-4\pi^2 S^2\rho^2- \frac{4\pi}{3}\varepsilon\rho^2 \end{equation} can be neglected in the potential. The corresponding solutions to the equation of motion of the vacuum shell in the limits $t\to\infty$ and $t\rightarrow0$ are similar in form to (\ref{rho39}) and (\ref{rho41}). \section{CONCLUSIONS} We considered the dynamics of a thin vacuum shell in the Friedman-Schwarzschild world. The total mass $m$ (Schwarzschild mass) of the shell measured by an observer at spatial infinity is a convenient parameter for the classification of the possible types of its evolution.
This mass, with the gravitational mass defect taken into account, includes the total energy of the inner Friedman world and the total energy of the shell, with its surface tension energy and its kinetic energy. The classification under consideration allows the dynamics of the vacuum shell to be completely described without restricting generality to a short time interval. The end result of the evolution of the vacuum shells under consideration in the Friedman-Schwarzschild world was shown to be the formation of black holes and wormholes with baby universes inside in a wide range of initial conditions parameterized by the total initial shell mass. The interior of this world can be a closed, flat, or open Friedman universe. In the same way, more complex configurations can be investigated using the method of an effective potential, for example, where another bubble, inside which a world other than the Friedman one can be located, is formed within the first bubble \cite{Sato2}. Such configurations, where the evolution of the inner and outer bubbles is determined by the metrics inside and outside the shells, can be analyzed by the method of an effective potential individually. It should be noted that the method of an effective potential is inapplicable in the situation where the bubbles intersect. In the case of very small bubbles, where the bubble interior is inhomogeneous due to edge effects and the Friedman equations are inapplicable, we go beyond the scope of the formalism under consideration.
\section{Introduction} \label{sec:introduction} Programming languages and techniques based on logic and constraints \cite{intro_constraints_stuckey} provide programmers with powerful high-level, declarative abstractions that are well suited for a wide spectrum of applications where the computational problem can be represented as a search for some, all, or an optimal solution (i.e., a model) that satisfies a set of logical formulas and constraints on variables \cite{Dechter03Constraints,Apt03}. Over time, several ecosystems of such languages, tools and programming practices have evolved, each with a slightly different focus and features, better suited or more specialized for one application area or another. Prolog \cite{SterlingShapiro94,bratko-short,hermenegildo10:ciao-design-tplp-tr} is probably the basis for the best-known family of Constraint Logic Programming (CLP) language implementations, and has influenced many others, such as Mercury \cite{mercury-manual}, Oz \cite{mozart-oz-tutorial}, and Erlang \cite{erlang}. In this paper, we are concerned with replicating and reimplementing the essential features of Constraint Handling Rules (CHR) \cite{fruehwirth09:CHR_book}, a language that has been developed for writing constraint solvers -- i.e., the CLP tools themselves. CHR is actually a rule language layer on top of a host language. While in principle the choice of the host language is not restrictive, the reference implementation of CHR (and most of the current CHR code) works on top of Prolog \cite{schrijvers2005constraint}. However, there is nothing intrinsically dependent on Prolog in the semantics of CHR rules. Several CHR systems have been implemented on top of Java, C and Haskell \cite{kuleuven-chr-impl}.
While there are arguably many situations where developers using mainstream programming environments and tools, such as those for Java, would benefit from using CHR techniques for writing custom constraint solvers, developing the CHR code together with the ``main'' application / library code is still a difficult and cumbersome process. Even if the CHR host language is the same as the main application language (e.g., Java), this still calls for additional intermediate compilation tools and steps, frequently disrupts the normal development workflow, and offers little if any rule debugging support. These practical problems, at both the unit and integration levels, often discourage the use of CHR (and CLP) based techniques in mainstream programming environments -- ironically exactly for the problems for which these approaches are best suited. We argue that an effective way to address most of these problems is to express the declarative CHR-based solver logic directly in the host language -- in this case Java -- without introducing an additional language layer and the intermediate compilation tools and steps. In the proposed approach, the CHR-based code is written in a domain-specific language (DSL) which is a subset of Java, and the key constraint handling components are exposed as Java objects with well-defined interfaces that support transactional behavior, event notifications, tracing and debugging. The paper is based on an implementation of the proposed system. In the remainder of the paper, we give a motivating example in Section~\ref{sec:motivating-example}, present the DSL for the constraint handlers in Sections~\ref{sec:chr-as-java} and~\ref{sec:lazy-compilation-dsl}, and briefly explain their semantics in Section~\ref{sec:semantics}. Section~\ref{sec:impl-notes-advanc} presents some implementation notes with the advanced transactional, debugging, and safe termination features. Finally, Section~\ref{sec:conclusions} offers some conclusions.
\section{Motivating Example} \label{sec:motivating-example} Configuration management is one of the traditional CLP fields of application, starting from the early systems where it was used for querying a (static) database of available components, planning installation steps and building the dependency chains \cite{Dart:1991:CCM:111062.111063,amos-cbd}, to automatic configuration of autonomous network devices \cite{5188808}, to automated synthesis of complex components that meet some functional requirements \cite{DBLP:conf/ijcai/PistoreMBT05}. A significant part of the effort to build workable cloud application platforms is related to configuration management, and relies on rich cloud software component models \cite{DBLP:conf/sefm/CosmoZZ12}. Additional complexities in the cloud configuration management include: \begin{compactitem} \item Components that are controlled and hosted by third parties that publish only their interfaces and descriptions. \item Different component granularity -- from libraries to separately deployed virtual machines and servers. \item Multiple configurations for coarse-grained deployable components that implement the same or a similar functionality. \item Quality of Service (QoS) attributes and requirements, related to performance, cost, availability, and other quality concerns. \end{compactitem} In many cases, these aspects can be naturally addressed using constraint models that involve not only the traditional Boolean, finite and numeric domains, but also much richer and extensible ones. For instance, QoS values and their ranges can be quantified using a variety of floating point, fixed or arbitrary precision numbers with units of measurement attached. QoS value distributions as mathematical objects can be represented using data sets or analytic functions. Regular expressions can be used to restrict service identifiers and attributes. 
Textual version information can be converted into objects that keep hierarchic version numbering, time-stamps and release tags. Subsumption and compatibility constraints can also be placed on service interfaces based on their operations, argument and return types. This clearly calls for constraint-solving capabilities as a part of the runtime cloud programming environment \cite{DBLP:conf/esocc/PredaGGMM12}. For most of the rich constraint domains mentioned above, there are well tested libraries and optimized algorithms already in place, and the objects themselves are accessed through their interfaces, without looking at the data structure implementation. Therefore, from the interoperability point of view, the constraint solving components should ideally behave as the standard host language -- e.g. Java -- components, which are packaged and deployed in the standard way, as \emph{.jar} libraries, OSGi bundles, or Web/application server packages. Obviously, that is difficult to achieve if the constraint solver implementation language is different from the host language. But even where that is not the case, the current limitations and maturity levels of the systems that compile CHR into Java (e.g. JCHR \cite{vanWeert+2005}), call for simpler solutions that are more closely integrated with Java. \section{Constraint Rules as a Java DSL} \label{sec:chr-as-java} CHR units that implement constraint solving functionality over some domain are called \emph{handlers}, and contain constraint declarations and rule definitions, typically written in a CHR-specific syntax, which admits a subset of the host language expressions and data/object notation. One problem with that approach is the need to translate the rules from the CHR-specific syntax into the host language, before integrating them with the rest of the application and libraries. When testing and debugging, it can be difficult to trace the solver behavior back to the CHR source code.
Another problem is that the CHR syntax has to be updated from time to time to keep up with the innovations in the host language, such as the introduction of generics and enumerations in Java 5, enhanced type inference in Java 7, or the forthcoming introduction of lam\-bda-expressions in Java 8. While these new features normally do not deprecate the old ones, keeping up-to-date is certainly desirable.%
\footnote{This is not just a matter of experimental features. In a language like Java, where each language innovation comes after a prolonged process of drafting and discussion, inclusion of a new language feature usually means that most coders will start using it very soon.}
To simplify and streamline the integration with the host language, we propose to express handlers, constraints, and rules in a domain-specific language which is a subset of Java. This is not unlike the inversion-of-control design pattern: instead of the CHR level controlling Java classes, we let Java code construct and configure CHR handlers using specific APIs. Instead of a static CHR-to-Java compilation, we use a transparent runtime compilation of the handler logic into the back-end Java objects that fire rules and update the store. And instead of imposing restrictions on Java constructs that are recognized by CHR, we define CHR-specific APIs callable from arbitrary Java code.
\begin{figure}[tb]
\centering
\begin{minipage}[t]{0.6\textwidth}
\begin{lstlisting}
import cr.core.Symbol;
import cr.core.Handler;
// Other imports

public class MyHandler<...> extends Handler {
  // Symbol declarations

  public MyHandler(...) {
    // Constructor code
    initialize();
  }

  public void setup() {
    // Constraint declarations
    // Constraint rules
  }

  // Guard methods
  // Other methods
}
\end{lstlisting}
\end{minipage}
\caption{The general form of a constraint handler.}
\label{fig:template}
\end{figure}
Figure \ref{fig:template} shows the general shape of a constraint handler in our approach.
Each handler class, which may have type parameters, extends the abstract class \texttt{cr.core.Handler}, which is part of the CHR-in-Java library. The other imported class, \texttt{cr.core.Symbol}, is used to name constraints and data elements. The four main DSL-specific parts of the handler are: symbol declarations, constraint declarations, rule definitions, and guard methods.
\begin{figure}[tb]
\centering
\begin{minipage}[t]{0.75\textwidth}
\tt\footnotesize
\begin{tabbing}
\nonterm{SymbolDecl} ::= \=\textbf{public} Symbol \nonterm{Name}[, \nonterm{Name}]$^\ast$ ;\\
\>\nonterm{Name} ::=\'\angled{\mbox{\it a valid Java field name}}
\end{tabbing}
\end{minipage}
\caption{Syntax for symbol declarations.}
\label{fig:symdecl}
\end{figure}
Symbol declarations follow the simple scheme from Figure~\ref{fig:symdecl}. Two predefined symbols in \texttt{cr.core.Handler} are \texttt{fail} (representing the unsatisfiable constraint) and \texttt{\char95} (the underscore, used to represent an arbitrary object). Note that the symbol fields are public, but not initialized: the \texttt{initialize()} method of \texttt{cr.core.Handler}, which needs to be called at the end of a custom handler constructor, uses Java reflection to initialize each public field of type \texttt{cr.core.Symbol} to a fresh symbol with the same name. No particular naming strategy is enforced, but it is customary to use names starting with a lowercase letter for constraints, and those starting with an uppercase letter for data objects. One advantage of declaring symbols using public fields is that one can use the usual Java refactoring tools in modern IDEs, such as Eclipse, NetBeans, or IntelliJ/IDEA, to perform project-wide consistent renaming of constraints.
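The reflective initialization described above can be sketched in plain Java. The \texttt{Symbol} and handler classes below are a minimal toy reconstruction (our assumption for illustration, not the actual \texttt{cr.core} implementation):

```java
import java.lang.reflect.Field;

// Toy Symbol: a named holder; set()/get() model the by-reference use in guards.
class Symbol {
    private final String name;
    private Object value;
    Symbol(String name) { this.name = name; }
    public String name() { return name; }
    public void set(Object v) { value = v; }
    public Object get() { return value; }
}

// Sketch of how an initialize() method can bind every public Symbol field
// to a fresh symbol carrying the field's own name, via reflection.
abstract class MiniHandler {
    protected final void initialize() {
        for (Field f : getClass().getFields()) {   // public fields, incl. inherited
            if (f.getType() == Symbol.class) {
                try {
                    if (f.get(this) == null) f.set(this, new Symbol(f.getName()));
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
    }
}

class LeqHandler extends MiniHandler {
    public Symbol leq, X, Y;            // symbol declarations
    LeqHandler() { initialize(); }      // called at the end of the constructor
}
```

Because the symbol name is taken from the field itself, an IDE-driven rename of the field automatically carries over to the constraint name, which is the refactoring advantage mentioned above.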
\begin{figure}[tb]
\centering
\begin{minipage}[t]{0.75\textwidth}
\tt\footnotesize
\begin{tabbing}
\qquad~\nonterm{Decl} ::= \= constraint(\nonterm{Name}[, \nonterm{KeyClass}]$^\ast$) \\
\> [.with(\nonterm{DataClass}[, \nonterm{DataClass}]$^\ast$)] ; \\[3pt]
\> \nonterm{KeyClass} ::=%
\'%
\angled{\mbox{\it a Java expression of type~~\rm\tt Class<Comparable>}} \\
\> \nonterm{DataClass} ::=%
\' \angled{\mbox{\it a Java expression of type~~\rm\tt Class<?>}}
\end{tabbing}
\end{minipage}
\caption{Syntax for constraint declarations.}
\label{fig:const-decl}
\end{figure}
The handler class needs to implement the method \texttt{setup()}, which is called from \texttt{initialize()} and whose task is to declare the constraints and define the rules. Constraints need to be declared before being used in a rule, and Figure~\ref{fig:const-decl} shows the corresponding DSL syntax. Each constraint is uniquely identified with its \textit{Name} (a declared symbol), and may have zero or more key and data fields, whose classes are given in the declaration. The key fields hold \texttt{Comparable} objects, and they uniquely identify a constraint literal instance, while the data fields (introduced after ``\texttt{.with}'') carry additional information (arbitrary objects) associated with the constraint literal instance, which may vary over time. Nulls are allowed in both the key and data fields. For instance, the following statements in \texttt{setup()}:
\begin{lstlisting}
constraint(leq, String.class, String.class);  // less-than-or-equal
constraint(lt, String.class, String.class);   // less-than
constraint(eq, String.class, String.class);   // equal
constraint(neq, String.class, String.class);  // not equal
\end{lstlisting}
declare constraints named \texttt{leq}, \texttt{lt}, \texttt{eq}, and \texttt{neq} (all declared symbols) between two string keys (constrained variable names).
Also:
\begin{lstlisting}
constraint(dom, String.class).with(Integer.class, Integer.class);
\end{lstlisting}
declares a constraint \texttt{dom} which associates a range of integer values (between the limits in the data fields) to a variable whose name is given as the key.
\begin{figure}[tb]
\centering
\begin{minipage}[t]{0.9\textwidth}
\tt\footnotesize
\begin{tabbing}
\qquad\qquad\qquad\qquad\llap{\nonterm{Rule} ::=} \= \nonterm{Head} [\nonterm{Guard}] [\nonterm{Body}] ; \\[8pt]
\> \nonterm{Head} ::=%
\' when(\nonterm{Name}[, \nonterm{Pattern}]) [.with(\nonterm{Pattern})] \nonterm{Modifiers} \\
\> \repeatz{.and(\nonterm{Name}[, \nonterm{Pattern}]) [.with(\nonterm{Pattern})] \nonterm{Modifiers}} \\[3pt]
\> \nonterm{Guard} ::=%
\' .where(\nonterm{GuardName}[, \nonterm{Pattern}])\\
\>\repeatz{.and(\nonterm{GuardName}[, \nonterm{Pattern}])} \\[3pt]
\> \nonterm{Body} ::=%
\' .then(\nonterm{Name}[, \nonterm{Pattern}]) \\
\> \repeatz{.and(\nonterm{Name}[, \nonterm{Pattern}])} \\[6pt]
\> \nonterm{Pattern} ::=%
\' \angled{\mbox{\it a Java expression}} [, \nonterm{Pattern}] \\
\> \nonterm{Modifiers} ::=%
\' [.passive()] [.keep()]
\end{tabbing}
\end{minipage}
\caption{Syntax for rule definitions.}
\label{fig:rule-def}
\end{figure}
The syntax for rules is more complex, and is given in Figure~\ref{fig:rule-def}. In this section we present different aspects of the rule definitions with an informal explanation of their intended meaning. A more detailed discussion of the rule semantics is given in Section~\ref{sec:semantics}. The simplest rule may have only a head, as in the following two examples:
\begin{lstlisting}
when(leq, X, X);
when(eq, X, X);
\end{lstlisting}
which (with \texttt{X} a declared symbol) simply consume or throw away the trivial (in)equalities. The fields are compared on the basis of the \texttt{equals()} method. Most often, rules have a body.
An example of a simplification rule is:
\begin{lstlisting}
when(lt, X, Y).then(leq, X, Y).and(neq, X, Y);
\end{lstlisting}
which converts the strict inequality $x<y$ into the equivalent conjunction of $x\leq y$ and $x\neq y$. Another simplification example is:
\begin{lstlisting}
when(neq, X, X).then(fail);
\end{lstlisting}
which detects inconsistencies. An example of the \texttt{.passive()} modifier is:
\begin{lstlisting}
when(leq, X, Y)
  .and(leq, Y, X).passive()
  .then(eq, X, Y);
\end{lstlisting}
which simplifies $x \leq y \land y \leq x$ into $x=y$ but, since the case is completely symmetric, avoids firing a second time on $y\leq x$. Another use of \texttt{.passive()} is to prevent the proliferation of non-informative facts. For instance:
\begin{lstlisting}
when(eq, X, Y)
  .and(eq, X, Y).passive().keep();
\end{lstlisting}
consumes $x=y$ if that fact is already known. Note also the modifier \texttt{.keep()}, which prevents the known fact from being consumed too. In fact, the modifier \texttt{.keep()} is the mechanism for implementing propagation and simpagation rules. For instance, the following rule ensures the symmetry of \texttt{eq}:
\begin{lstlisting}
when(eq, X, Y).keep()
  .then(eq, Y, X);
\end{lstlisting}
and the next one propagates the domains of the equal variables:
\begin{lstlisting}
when(eq, X, Y).keep()
  .and(dom, X).with(A, B).keep()
  .and(dom, Y).with(C, D).keep()
  .where("!equals", X, Y)    // avoid the trivial case
  .then(dom, X).with(C, D)
  .and(dom, Y).with(A, B);
\end{lstlisting}
In the last example, we have seen an example of a guard, introduced with ``\texttt{.where}'', whose first argument is a string that points to the corresponding guard method, with the leading bang (``\texttt{!}'') signifying negation. The corresponding guard method:
\begin{lstlisting}
public boolean equalsGuard(Object x, Object y) {
  return (x == null ? y == null : y != null && x.equals(y));
}
\end{lstlisting}
is built into \texttt{cr.core.Handler}.
A non-negated guard succeeds when all of the arguments have the correct type, and the returned value is \texttt{true} (or the guard method return type is \texttt{void}). A negated guard succeeds exactly when the non-negated guard would fail. The guard mechanism is very flexible and powerful, handles the automatic conversion between Java primitive values and objects, accepts variable argument lists, and allows the guard methods to compute new information that can be used in the body of the rule. For instance, the following rule detects inconsistencies:
\begin{lstlisting}
when(dom, X).with(A, B)
  .and("!lessOrEqual", A, B)
  .then(fail);
\end{lstlisting}
using the guard method:
\begin{lstlisting}
public boolean lessOrEqualGuard(int a, int b) {
  return a <= b;
}
\end{lstlisting}
(Note that the \texttt{"!lessOrEqual"} guard succeeds if either of the two arguments is \texttt{null}.) This rule ignores non-informative bounds:
\begin{lstlisting}
when(dom, X).with(A, B)                     // newly told
  .and(dom, X).with(C, D).passive().keep()  // already known, kept
  .where("includes", A, B, C, D);           // [C,D] already included in [A,B]
\end{lstlisting}
with the guard method:
\begin{lstlisting}
public boolean includesGuard(int a, int b, int c, int d) {
  return (a <= c) && (d <= b);
}
\end{lstlisting}
The following rule treats the informative bounds:
\begin{lstlisting}
when(dom, X).with(A, B)              // newly told
  .and(dom, X).with(C, D).passive()  // already known
  .where("!includes", A, B, C, D)    // [A,B] does not include [C,D]
  .and("isect", A, B, C, D, E, F)    // [E,F] is the intersection
  .then(dom, X).with(E, F);          // update the domain to [E,F]
\end{lstlisting}
with the new guard method that computes the intersection:
\begin{lstlisting}
public void isectGuard(int a, int b, int c, int d,
                       @NotNull Symbol e, @NotNull Symbol f) {
  e.set(Math.max(a, c));
  f.set(Math.min(b, d));
}
\end{lstlisting}
This guard always succeeds (if no argument is \texttt{null}), and stores the results in \texttt{e} and \texttt{f}, used in the rule
body as the updated value range. Guard method parameters of type \texttt{cr.core.Symbol} are passed not by value, but by reference. \section{Runtime rule compilation and DSL expressiveness} \label{sec:lazy-compilation-dsl} At this point, before proceeding to the semantics, it is useful to comment on some aspects of the proposed approach and its implementation, and to highlight and motivate the choices these are based on. First and foremost, the elimination of the static rule compilation phase, as mentioned at the beginning of the previous section, comes at the cost of a runtime compilation of rules into Java objects. In the current implementation, this is done every time a new instance of the handler is created (i.e., during and after the execution of the \texttt{setup()} method), but in a slightly improved implementation, most of this overhead can be dealt with on a once-per-class basis, provided that \texttt{setup()} does not depend on the handler constructor parameters. The first runtime rule compilation phase is building the internal rule object representations, which is done using the \texttt{constraint()}, \texttt{when()}, \texttt{where()}, \texttt{then()}, and other API methods. The most complex part here is the treatment of guards, which relies on Java reflection to ensure that the corresponding methods exist, and to create adapters that take care of the correct argument count, types, conversion, variable-argument list passing, return-value interpretation, etc. The use of strings for guard names, while not as elegant as the other parts of the DSL, allows the use of the negation prefix (``\texttt{!}'') and avoids the need to declare guard names as symbols, and thus avoids cluttering the code.%
\footnote{Note that Java method names and field names populate different namespaces.}
The second runtime compilation phase is weaving the compiled rules into a per-instance index structure that is used for firing rule heads.
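The guard-name resolution just described can be illustrated with a small stand-alone sketch (our assumption about the mechanism, not the library's actual code): the string is stripped of an optional leading \texttt{!}, suffixed with \texttt{Guard}, and looked up reflectively; the adapter then applies the negation and null-handling conventions described earlier.

```java
import java.lang.reflect.Method;

// Toy guard adapter: resolves "name" / "!name" to a method nameGuard(...)
// and evaluates it with the conventions from the text: a void return or
// `true` means success, null arguments make a non-negated guard fail,
// and '!' flips the result.
class GuardAdapter {
    private final Object handler;
    private final Method method;
    private final boolean negated;

    GuardAdapter(Object handler, String guardName) {
        this.negated = guardName.startsWith("!");
        String base = negated ? guardName.substring(1) : guardName;
        Method found = null;
        for (Method m : handler.getClass().getMethods()) {
            if (m.getName().equals(base + "Guard")) { found = m; break; }
        }
        if (found == null)
            throw new IllegalArgumentException("no guard method: " + base + "Guard");
        this.handler = handler;
        this.method = found;
    }

    boolean test(Object... args) {
        for (Object a : args) if (a == null) return negated;
        try {
            Object r = method.invoke(handler, args); // reflection auto-unboxes ints
            boolean ok = (r == null) || Boolean.TRUE.equals(r); // void => success
            return negated != ok;
        } catch (ReflectiveOperationException | IllegalArgumentException e) {
            return negated; // wrong argument types: the plain guard fails
        }
    }
}

// A handler-like object carrying one guard method, as in the examples above.
class DomGuards {
    public boolean lessOrEqualGuard(int a, int b) { return a <= b; }
}
```

A real implementation would cache the resolved `Method` per rule and do the argument-count and type checks once at handler-construction time, which is exactly the "adapter" work attributed to the first compilation phase.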
A handler instance can be created only if the runtime rule compilation succeeds. Otherwise, a \texttt{cr.core.HandlerException} is thrown with a fault description. Not having all the errors in the handler detected statically is arguably the greatest drawback of our scheme, although it is less critical in the context of agile development methodologies. Any runtime rule compilation errors would be weeded out early on during the handler's unit testing phase, before integrating it with the rest of the application modules.
It should also be noted that JVM-based languages such as Scala \cite{scala-lang} provide much better facilities for the development of DSLs than ``pure'' Java. In particular, Scala's flexible system for defining operators, together with a functional representation of methods as first-class objects (on the same level as the variable and value fields), may eliminate the need for strings as guard names and for run-time argument number and type checking. This makes implementing a Scala interface for the constraint rules library an interesting next step. Scala can also be used as the implementation language, but since it introduces its own object (reference / value) hierarchy on top of Java, this would be more suitable when the client code is also written in Scala.
\section{Semantics} \label{sec:semantics} The semantics of the constraint rules introduced in Section~\ref{sec:chr-as-java} as a Java DSL follows the general lines of CHR, but differs from its standard semantics with respect to the organization of the store, the firing of rules, and the absence of special built-in solvers.
Each instance of the handler class (i.e., the one that extends \texttt{cr.core.Handler}) encapsulates four key elements: the symbols, the store, the goal (or the queue), and the rules, which are explained below.
As mentioned in the previous section, symbols are just objects with an immutable name, and are used to name the constraints and data field values in rules.
(Using the same symbol for both purposes is not forbidden, but the resulting code may look confusing.) When testing guards and firing rules, symbols denoting data fields also store field values as objects. Since the rules operate on a committed-choice basis, this is done using destructive updates, by calling their \texttt{.set()} and \texttt{.get()} methods. When used as data objects outside the rules, the symbols' values should be treated as volatile.
\sloppy The handler state is a tuple $(G,S)$, where $G$ is the goal, and $S$ is the store. The \emph{store} is an object that keeps the \emph{known facts} about the declared constraints. Unlike standard CHR, where the store is a multi-set of constraint literals, we take the approach where each declared constraint $c$ is a partial function of the form: \begin{displaymath} c : K_1 \times K_2 \times \cdots \times K_n \pto D_1 \times D_2 \times \cdots \times D_m \end{displaymath} where $n,m\geq 0$. Each $K_i$ corresponds to a Java class implementing the \texttt{java.lang.Comparable} interface, and each $D_j$ to an arbitrary Java class. (Each $K_i$ and $D_j$ is also implicitly extended to include the null reference.) If $n$ or $m$ is zero, the corresponding product degenerates to a singleton set containing only the unit tuple $()$. Each constraint literal (or \emph{fact}) is a statement of the form: \begin{displaymath} c : (k_1,k_2,\ldots,k_n) \mapsto (d_1,d_2,\ldots,d_m) \end{displaymath} where $k_i\in K_i$ and $d_j\in D_j$, which states that $c$ is defined at $(k_1,k_2,\ldots,k_n)$ and has the value $(d_1,d_2,\ldots,d_m)$. Initially, the store is typically empty, which means that all constraints are undefined for all possible keys.
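The partial-function view of the store can be sketched in a few lines. The sketch below is Python for brevity (the Java implementation can use structures such as \texttt{java.util.SortedMap}); the class and method names are illustrative only. It uses the \textit{dom} interval constraint from the rules shown earlier:

```python
# Illustrative store in which each declared constraint is a partial function
# from key tuples to data tuples, so telling new data for an existing key
# overwrites the old value: at most one fact per (constraint, key) pair.

class Store:
    def __init__(self):
        self._facts = {}  # (constraint name, key tuple) -> data tuple

    def put(self, c, keys, data):
        self._facts[(c, keys)] = data  # partial-function update

    def get(self, c, keys):
        # None models "the constraint is undefined at this key"
        return self._facts.get((c, keys))

s = Store()
s.put("dom", ("X",), (0, 10))  # dom : X -> [0, 10]
s.put("dom", ("X",), (3, 7))   # narrowing X's bounds replaces the old pair
```

After the two \texttt{put} calls, only the latest pair of bounds for \texttt{X} remains, mirroring the at-most-one-value-per-key property of a partial function.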
For instance, if the \textit{dom} constraint is declared as: \begin{displaymath} \mathit{dom} : \mathbb{V} \pto \mathbb{Z} \times \mathbb{Z} \end{displaymath} where $\mathbb{V}$ is a set of variable labels (as strings), and the two integers are the pair of min/max bounds, then the partial function representation ensures that we may have at most one pair of bounds in the store for any variable label. \fussy
The partial function representation of constraints is chosen over multi-sets as a more structured solution which can leverage efficient Java data structures such as \texttt{java.util.SortedMap}, and is more powerful than the set semantics. Note that the set semantics can be simulated by taking $m=0$ and keeping all constraint data in the key fields. Similarly, the multi-set semantics can be simulated by taking, e.g., $n=1$ and $K_1\equiv\mathtt{java.lang.Integer}$, putting all constraint information into the data part, and making sure that $k_1$ is always ignored in the \texttt{when()} part of the rules (using symbol ``\texttt{\_}''), as well as initialized to a fresh value for each new fact inserted into the store.
In contrast to the store, which contains the already known facts, the \emph{goal} is a conjunction of \emph{newly told} facts that await processing. The goal is processed one fact at a time, in a chronological (or left-to-right) order, and new facts produced by firing rules are appended to it. For these reasons, the goal is also known as the \emph{queue}.
Note that our proposal does not make the distinction between \emph{built-in} and \emph{user-defined} (relational) constraints. All constraints used in the solver have to be declared, and their rules explicitly specified.%
\footnote{This does not mean writing huge monolithic solvers.
The developers can use subclassing, delegation and other usual Java techniques for software modularization and reuse.} Also, any object inspection and matching has to be done explicitly by invoking the accessor methods in guards.
\begin{figure}[tb] \begin{minipage}[t]{0.40\linewidth} \begin{algorithmic}[1] \Function{MainLoop}{} \State $\mathit{forcedExit} \gets \mathtt{false}$ \While{$\neg\mathit{forcedExit} \land |G|>0$} \State $\phi \gets \mathit{first}(G)$; $G \gets \mathit{rest}(G)$ \If{$\phi\equiv \mathtt{fail} : () \mapsto ()$} \State $\mathit{signal\ failure}$ \Else \State \Call{FireAllRules}{$\phi$} \If{$|G| > \mathit{limit}$} \State $\mathit{forcedExit} \gets \mathtt{true}$ \EndIf \EndIf \EndWhile \State\Return{$|G|>0$} \EndFunction \State \Function{Tell}{$\phi$} \State {\it append $\phi$ to $G$} \State \Return{\Call{MainLoop}{}} \EndFunction \end{algorithmic} \end{minipage}% \begin{minipage}[t]{0.58\linewidth} \begin{algorithmic}[1] \Procedure{FireAllRules}{$\phi$} \State $U\gets\emptyset$ \For{$\mathit{each\ active\ head\ element}\ H\ \mathit{matching}\ \phi$} \State $\bar H' \gets \mathit{all\ head\ elements\ in\ the\ rule\ except}\ H$ \State $\mathit{fired} \gets \mathtt{false}$ \ForAll{$\mathit{facts}\ \bar\phi'\ \mathit{from}\ S\ \mathit{matched\ by}\ \bar H'$} \If{\textit{the rule guard succeeds}} \State $\mathit{fired} \gets \mathtt{true}$ \State {\it append the rule body to $G$} \State $U\gets U\cup\left\{ \bar\phi_{i}' \:|\: \bar H_{i}'\ \mathit{without}\ \mathtt{.keep()}\right\}$ \EndIf \EndFor \If{$\mathit{fired} \land H\ \mathit{without}\ \mathtt{.keep()}$} \State $S\gets S\setminus U$ \State\Return{} \EndIf \EndFor \State $S\gets (S\setminus U)\cup \{\phi\}$ \EndProcedure \end{algorithmic} \end{minipage} \caption{The main loop and the rule firing algorithms.} \label{fig:main-loop} \end{figure}
For simplicity, we present here the operational semantics of the rules using the algorithms from Figure~\ref{fig:main-loop}, which destructively update
the state. The \textsc{MainLoop} starts from some initial state $(G,S)$ -- where $G$ is normally non-empty -- and tries to reach a fixpoint state $(G',S')$ where $G'$ is empty, i.e., all possible rules have been fired and nothing else remains to be done. This is achieved by successively reading facts from the goal (in a FIFO fashion), and firing all applicable rules (or signaling a failure). \textsc{MainLoop} is typically initiated with a \textsc{Tell} operation which communicates a new fact to the handler.
\textsc{MainLoop} can return before reaching a fixpoint in two cases: when explicitly asked to do so from a rule guard (using the \texttt{forceExit()} handler method), or when it detects that the goal size has exceeded an optional, pre-configured safety limit set to prevent uncontrolled memory consumption. In both cases such an early termination is safe, in the sense that no information is lost, and that the computation can always be resumed.
Procedure \textsc{FireAllRules} uses an internal index structure to iterate through all head elements of all rules that are active (i.e., not marked with \texttt{.passive()}). For each such active head element, an attempt is made to fire the rule for each combination of facts from the store that corresponds to the remaining head elements in the same rule (and for which the rule guard, if present, succeeds). The facts consumed in each firing (not marked with \texttt{.keep()}) are marked for removal, but are not removed immediately, to give all other applicable rules a chance to fire. If the firing head element is marked with \texttt{.keep()}, the next firing head (in the same rule or one of the following rules) is tried; otherwise the processed fact is consumed, and the processing stops. If the processed fact is not consumed by any rule, it is added to the store.
The order of rules is significant. The rules are fired in the same order in which they are defined.
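The control flow of the two algorithms can be condensed into a runnable sketch. It is a deliberately simplified Python rendering of the \textsc{MainLoop}/\textsc{FireAllRules} pattern: index structures, \texttt{.passive()} handling and multi-headed matching are elided, and the \texttt{CountDown} rule class is invented purely to exercise the loop:

```python
from collections import deque

def main_loop(goal, store, rules, limit=10_000):
    goal = deque(goal)
    while goal:
        if len(goal) > limit:      # optional safety limit: forced, resumable exit
            return True            # goal still non-empty
        phi = goal.popleft()       # facts are processed in FIFO order
        if phi == ("fail",):
            raise RuntimeError("failure signalled")
        fire_all_rules(phi, goal, store, rules)
    return False                   # fixpoint reached

def fire_all_rules(phi, goal, store, rules):
    removed = set()                # deferred removals: fired rules see one snapshot
    for rule in rules:             # definition order is significant
        if not rule.matches(phi):
            continue
        fired = False
        for partners in rule.partner_facts(phi, store):
            if rule.guard(phi, *partners):
                fired = True
                goal.extend(rule.body(phi, *partners))
                removed |= rule.consumed(partners)
        if fired and not rule.keeps_active_head:
            store -= removed       # phi is consumed: stop processing it
            return
    store -= removed
    store.add(phi)                 # not consumed by any rule: becomes known

class CountDown:
    """Toy single-headed rule: consume ('n', k) with k > 0, tell ('n', k-1)."""
    keeps_active_head = False
    def matches(self, phi): return phi[0] == "n" and phi[1] > 0
    def partner_facts(self, phi, store): return [()]  # no partner heads
    def guard(self, phi): return True
    def body(self, phi): return [("n", phi[1] - 1)]
    def consumed(self, partners): return set()

store = set()
exited_early = main_loop([("n", 3)], store, [CountDown()])
```

Telling \texttt{("n", 3)} makes the toy rule fire three times, after which the unconsumed fact \texttt{("n", 0)} is added to the store and the fixpoint is reached.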
When an earlier rule consumes the processed fact, it effectively cuts the remaining rules off. However, those rules that do fire behave as if they do so simultaneously, since the removal of the consumed facts is performed at the end.
While the handler is informed of new facts through its \texttt{tell()} method, which adds them to the goal (i.e., the queue), the results of the computation are held in the store, and can be inspected using the \texttt{select()} method.
\section{Implementation Notes and Advanced Features} \label{sec:impl-notes-advanc} The current implementation is a set of Java classes and interfaces packaged in a lightweight standalone \emph{.jar} file, without external dependencies.%
\footnote{An archive with the binaries and the documentation can be downloaded from \texttt{http://software.imdea.org/\~{ }idragan/cr}} It contains an example numerical interval solver that can be tested using the visual debugger based on the advanced features described below.
\subsection{Event notifications, tracing and debugging} \label{sec:event-notif-trac} \begin{figure}[tb] \centering \includegraphics[width=1.0\textwidth]{Figs/debugger} \caption{A screenshot of a debugging session.} \label{fig:debug} \end{figure} Insufficient support for debugging is one of the key disadvantages of the current CHR implementations. In our implementation, both tracing and debugging are achieved using a publish-subscribe mechanism by which one or more event listeners can be attached to a handler instance and can observe different events, such as a fact being added to the store or a rule being fired. In the latter case, the information from Java reflection relates the firing point to the handler source. An example of a full GUI debugging session is given in Figure~\ref{fig:debug}, with the debugging console, source code tracing, breakpoints, and constraint views.
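The publish-subscribe hook behind the tracing facility follows the standard observer pattern; the sketch below (Python for brevity) conveys the idea, but the event names and the attach/notify methods are illustrative only and do not reproduce the library's actual listener interface:

```python
# Generic observer-pattern sketch: listeners attached to a handler instance
# are notified of events such as a fact being told or a rule being fired.

class TracedHandler:
    def __init__(self):
        self._listeners = []

    def add_listener(self, listener):      # attach an observer (e.g. a debugger)
        self._listeners.append(listener)

    def _emit(self, event, payload):
        for listener in self._listeners:
            listener(event, payload)

    def tell(self, fact):
        self._emit("fact-told", fact)      # a console listener could log this
        # ...goal processing would follow, emitting e.g. "rule-fired" events

events = []
h = TracedHandler()
h.add_listener(lambda e, p: events.append((e, p)))
h.tell(("dom", "X", 1, 5))
```

A GUI debugger is then just one more listener that renders the event stream instead of appending it to a list.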
\subsection{Transactional state behavior} \label{sec:trans-state-behav} It is often useful to save the state of the handler before telling more constraints, and to revert to the previous state if the problem turns out to be over-constrained or unsatisfiable. A typical use would be checking if a solution to the problem exists under some additional assumptions, and, if not, reverting to the previous state and trying something else.
Our implementation enables arbitrarily nested state savepoints, analogous to savepoints in transactional databases, using the following operations: \begin{itemize} \item \texttt{begin()} -- saves the current state and begins a new, nested transaction. \item \texttt{commit()} -- closes the current nested transaction and saves its current state to the parent transaction. \item \texttt{partialCommit()} -- saves the current state to the parent transaction, while keeping the nested transaction open. \item \texttt{rollback()} -- discards the current nested transaction and returns to the parent transaction and its saved state. \end{itemize} These operations work on both components of the state (the goal and the store), and are orthogonal to the \texttt{tell()} and \texttt{select()} handler operations.
The default store that is created for each new handler instance is a map-based in-memory store. We are working on an implementation where the in-memory store can be replaced with a persistent store kept in the file system.
\section{Conclusions} \label{sec:conclusions} Implementing CHR as a domain-specific language embedded in Java has several advantages over the classical approach, where CHR handlers are written in an additional language layer on top of the host language, here Java.
These advantages include avoiding the additional compilation steps that disrupt the usual development cycle, better leverage of the host language features, support for tracing and debugging, and the application of the existing powerful refactoring tools in modern Java IDEs. Overall, this can help improve the acceptance of CHR and CLP programming techniques in the component-based, Java-centric, cloud programming environment.
Future work will be directed towards more robust implementations, integration with persistent transactional store back-ends, development of a spectrum of ready-to-use constraint handlers, introduction of some CHR$^\lor$ features \cite{chrd-98}, and exploring applications in distributed event processing. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative} have achieved impressive results in image generation. By taking inspiration from the Turing test, a generator function is asked to fool a discriminator function which, in turn, tries to distinguish real samples from generated ones. GANs are known to generate very realistic images when trained properly.
A special generation task is image-to-image translation, which learns to map each image from an input domain into an image in a (possibly different) output domain. In most real-world domains, there are no pairs of examples showing how to translate an image into a corresponding one in another domain, yielding the so-called UNsupervised Image-to-image Translation (UNIT) problem. In a UNIT problem, two independent sets of images belonging to two different domains (e.g. cats-dogs, male-female, summer-winter, etc.) are given and the task is to translate an image from one domain into the corresponding image in the other domain, even though there exist no paired examples showing this mapping. Unfortunately, estimating a joint distribution of the images in the two domains from the distributions in the original single domains is known to have infinitely many possible solutions. Therefore, one possible strategy consists in mapping pairs of corresponding images to the same latent space using auto-encoders and then learning to reconstruct an image from its representation in latent space. Combining auto-encoders with GANs has been proposed in~\cite{rosca2017variational,li2017alice} and outstanding results on image translation have been reported by~\cite{zhu2017unpaired,liu2016coupled,liu2017unsupervised}.
This paper proposes a general approach to visual generation and translation that combines learning capabilities with logic descriptions of the images that are generated.
The generation problem is translated into a constraint satisfaction problem, where each constraint forces the generated image to have some predefined feature. A main advantage of this approach is that it decouples the logic description level from the generative models. The logic layer is architecture agnostic, allowing any deep-learning-based generator model to be injected into it. In particular, expressing the task using logic knowledge makes it easy to extend the involved classes to additional translation categories, as well as yielding an easier-to-understand learning scheme. The translations are then interleaved and jointly learned using the constraints generated by the framework, which makes it possible to obtain truly realistic images for different translation types.
The integration of learning and logic reasoning has been studied in the past few years, but no framework has emerged as a generic interface layer. For example, Minervini et al.~\cite{minervini2017adversarial} correct the inconsistencies of an adversarial learner, but the employed methodology is limited in scope and defined ad hoc for the task. A fuzzy generalization of First Order Logic is used both by Hu et al.~\cite{hu2016harnessing} and by Logic Tensor Networks~\cite{serafini2016learning} to integrate logic and learning, but both approaches are limited to universally quantified FOL clauses with specific forms. Another line of research~\cite{rocktaschel2015injecting,demeester2016lifted} attempts to use logical background knowledge to improve the embeddings for Relation Extraction. These works, too, are based on ad hoc solutions that lack a common declarative mechanism that can be easily reused.
Markov Logic Networks (MLN)~\cite{richardson2006markov} and Probabilistic Soft Logic (PSL)~\cite{kimmig2012short,bach2015hinge} are two probabilistic logics, whose parameters are trained to determine the strength of the available knowledge in a given universe.
MLN and PSL with their corresponding implementations have received a lot of attention, but they provide only a shallow integration with the underlying learning processes working on the low-level sensorial data. In MLN and PSL, a low-level learner is trained independently, then frozen and stacked with the AI layer providing a higher-level inference mechanism. The framework proposed in this paper instead makes it possible to directly improve the underlying learner, while also providing the higher-level integration with logic.
TensorLog~\cite{cohen2016tensorlog} is a recent framework that reuses the deep-learning infrastructure of TensorFlow (TF) to perform probabilistic logical reasoning. However, TensorLog is limited to reasoning and does not allow optimizing the learners while performing inference.
This paper utilizes a novel framework, called LYRICS~\cite{marra2019lyrics} (Learning Yourself Reasoning and Inference with ConstraintS)\footnote{URL: https://github.com/GiuseppeMarra/lyrics .}, which is a TensorFlow~\cite{abadi2016tensorflow} environment based on a declarative language for integrating prior knowledge into machine learning. The proposed language generalizes frameworks like Semantic Based Regularization~\cite{diligenti2012bridging,diligenti2015semantic} to any learner trained using gradient descent. The presented declarative language provides a uniform platform to face both learning and inference tasks by requiring the satisfaction of a set of rules on the domain of discourse. The presented mechanism provides a tight integration of learning and logic, as any computational graph can be bound to a FOL predicate.
In the experimental section, an image-to-image task is formulated using logic, including adversarial tasks with cycle consistency. The declarative approach makes it easy to interleave and jointly learn an arbitrary number of translation tasks.
\section{Constrained Learning and Reasoning} \label{sec:clare} \begin{table}[b] \centering \begin{tabular}{|c|c|c|c|} \hline \diagbox{formula}{t-norm} & {\bf G$\ddot{\mbox{o}}$del} & {\bf \L ukasiewicz} & {\bf Product} \\ \hline $\neg x$ & $1-x$ & $1-x$ & $1-x$ \\ \hline $x\wedge y$ & $\min\{x,y\}$ & $\max\{0,x+y-1\}$ & $x\cdot y$ \\ \hline $x\vee y$ & $\max\{x,y\}$ & $\min\{1,x+y\}$ & $x+y-x\cdot y$ \\ \hline $x\Rightarrow y$ & $x\leq y?1:y$ & $\min\{1,1-x+y\}$ & $x\leq y?1:y/x$ \\ \hline $x\Leftrightarrow y$ & $x=y?1:\min\{x,y\}$ & $1-|x-y|$ & $x=y?1:\min\{x/y,y/x\}$ \\ \hline \end{tabular} \vspace{0.1cm} \caption{Fundamental t-norms and their algebraic semantics.} \label{tab:tnorms} \end{table}
In this paper, we consider a unified framework where both learning and inference tasks can be seen as constraint satisfaction problems. In particular, the constraints are assumed to be expressed by First-Order Logic (FOL) formulas and implemented in LYRICS, a software environment we developed that automatically converts FOL expressions into TensorFlow computational graphs.
Given a set of task functions to be learned, the logical formalism allows expressing high-level statements among the outputs of such functions. For instance, given a certain dataset, if any pattern $x$ has to belong to either class $A$ or class $B$, we may impose that $\forall x:\,f_A(x) \lor f_B(x)$ has to hold true, where $f_A$ and $f_B$ denote two classifiers. As shown in the remainder of this section, there are several ways to convert FOL into real-valued functions.
Exploiting the fuzzy generalization of FOL originally proposed by Novak~\cite{novak1987first}, any FOL knowledge base is translated into a set of real-valued constraints by means of fuzzy logic operators. A \emph{t-norm fuzzy logic} \cite{hajek1998} can be used to transform these statements into algebraic expressions, where a t-norm is a commutative, monotone, associative $[0,1]$-valued operation that models the logical AND.
Assuming that the logical negation $\neg x$ is converted by means of $1-x$, the algebraic semantics of the other connectives is determined by the choice of a certain t-norm. Different t-norm fuzzy logics have been proposed in the literature, and we report in Table~\ref{tab:tnorms} the algebraic operations corresponding to the three fundamental continuous t-norm fuzzy logics: G$\ddot{\mbox{o}}$del, \L ukasiewicz and Product logic. In the following, we will indicate by $\Phi(f({\cal X}))$ the algebraic translation of a certain logical formula involving the task functions collected in a vector $f$, and by ${\cal X}$ the available training data.
The constraints are aggregated over a set of data by means of FOL quantifiers. In particular, the universal and existential quantifiers can be seen as a logic AND and OR applied over each grounding of the data, respectively. Therefore, different quantifiers can be obtained depending on the selection of the underlying t-norm. For example, for a given logic expression $E\big(f({\cal X})\big)$ using the function outputs $f({\cal X})$ as atoms, the product t-norm defines: \begin{equation}\label{eq:forall} \forall x_i\, E\big(f({\cal X})\big) \longrightarrow \displaystyle\prod_{x_i \in {\cal X}_i} \Phi_E\big(f({\cal X})\big) \ , \end{equation} where ${\cal X}_i$ denotes the available sample for the $i$-th task function $f_i$. In the same way, the expression of the existential quantifier when using the G$\ddot{\mbox{o}}$del t-norm becomes the \textit{maximum} of the expression over the domain of the quantified variable: \[ \exists x_i\,E\big(f({\cal X})\big) \longrightarrow \displaystyle\max_{x_i \in {\cal X}_i} \; \Phi_E\big(f({\cal X}) \big) \ . \] Once the translations of the quantifiers are defined, they can be arbitrarily nested and combined in more complicated expressions.
The conversion of formulas into real-valued constraints is carried out automatically in the framework we propose.
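The translations above can be checked numerically with a small sketch, assuming truth values are plain floats in $[0,1]$ (LYRICS itself builds the corresponding TensorFlow graphs; the plain-Python functions below are only illustrations of the algebra):

```python
# Product t-norm connectives from Table "tnorms", the universal quantifier
# as a product over groundings (Eq. "forall"), and the Goedel existential
# quantifier as a maximum over the domain of the quantified variable.

def product_and(x, y):
    return x * y

def product_or(x, y):
    return x + y - x * y

def product_implies(x, y):
    return 1.0 if x <= y else y / x

def forall_product(values):
    out = 1.0
    for v in values:          # product over all groundings of the variable
        out *= v
    return out

def exists_goedel(values):
    return max(values)        # maximum over the domain of the variable

# Truth of "forall x: f_A(x) or f_B(x)" on a three-pattern sample:
f_a = [0.9, 0.2, 1.0]
f_b = [0.1, 0.9, 0.0]
truth = forall_product([product_or(a, b) for a, b in zip(f_a, f_b)])
```

The quantified truth degrades multiplicatively with every grounding on which the disjunction is not fully satisfied, which is exactly what makes the translation usable as a differentiable training signal.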
Indeed, LYRICS takes as input the expressions defined using a declarative language and builds the constraints once we decide the conversion functions to be exploited. This framework is very general and it accommodates learning from examples as well as the integration with FOL knowledge. In general terms, the learning scheme we propose can be formulated as the minimization of the following cost function: \begin{equation} \begin{array}{rcl} C(f( {\cal X} )) &=& \displaystyle\sum_{h=1}^H \lambda_h \mathcal{L}\Big(\Phi_h \big(f({\cal X})\big) \Big) \ , \end{array} \label{eq:empirical_objective_function} \end{equation} where $\lambda_h$ denotes the weight for the $h$-th logical constraint and the function $\mathcal{L}$ represents any monotonically decreasing transformation of the constraints, conveniently chosen according to the problem under investigation. In particular, in this paper we exploit the following mappings: \begin{equation} \begin{array}{l} \label{eq:L} {\bf (a)}\;\;\mathcal{L}\Big(\Phi_h \big(f({\cal X})\big) \Big)=1-\Phi_h \big(f({\cal X})\big),\\ {\bf (b)}\;\;\mathcal{L}\Big(\Phi_h \big(f({\cal X})\big) \Big)=-\log\Big(\Phi_h \big(f({\cal X})\big)\Big) \ . \end{array} \end{equation} When the mapping defined in Equation~\ref{eq:L}-{\bf (b)} is applied to a universally quantified formula as in Equation~\ref{eq:forall}, it yields the following constraint: \[ \mathcal{L}\left( \displaystyle\prod_{x_i \in {\cal X}_i} \Phi_E\big(f({\cal X})\big)\right) = -\log \left(\displaystyle\prod_{x_i \in {\cal X}_i} \Phi_E\big(f({\cal X})\big)\right) = \displaystyle\sum_{x_i \in {\cal X}_i} -\log\left( \displaystyle \Phi_E\big(f({\cal X})\big) \right) \ , \] which generalizes to generic fuzzy-logic expressions the cross-entropy loss commonly used to force the fitting of the supervised data in deep learners.
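The identity between the two forms of the constraint can be verified numerically; the sketch below uses arbitrary example truth values for the groundings:

```python
import math

# Under the Product t-norm, applying the -log mapping of Eq. (L)-(b) to a
# universally quantified formula turns the product over groundings into a
# sum of per-grounding -log terms, i.e. the cross-entropy form.

def loss_a(phi):            # Eq. (L)-(a)
    return 1.0 - phi

def loss_b(phi):            # Eq. (L)-(b)
    return -math.log(phi)

groundings = [0.9, 0.8, 0.99]   # Phi_E(f(X)) evaluated on each grounding

product = 1.0
for g in groundings:
    product *= g

lhs = loss_b(product)                        # -log of the quantified formula
rhs = sum(loss_b(g) for g in groundings)     # sum of per-grounding terms
```

Since the loss decomposes into a sum over groundings, it can be minimized by stochastic gradient descent over mini-batches, just like an ordinary supervised cross-entropy loss.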
\begin{example}[From logic formulas to constraints] Let us consider the rule \[ \forall x \forall y ~ Married(x,y)\Rightarrow (Republican(x)\Leftrightarrow Republican(y)) \] where $Republican$ and $Married$ are a unary and a binary predicate, indicating whether a certain person $x$ votes republican and whether $x$ is married to a certain person $y$, respectively. The rule states that, if two persons are married, then they vote for the same party. From a learning point of view, enforcing such a rule allows us to exploit the manifold defined by the (possibly known) predicate $Married$ to improve the classification performance on the $Republican$ predicate, by correlating the predictions for married pairs. In this case, the input of the predicates can be any vector of features representing a person (e.g. pixels of images, personal data), while the predicates are generally implemented as deep neural models (e.g. a convolutional neural network).
The rule can be converted into a continuous loss function using e.g. the Product t-norm, as reported in Table~\ref{tab:tnorms}, and the previously reported semantics for the quantifiers: \[ \prod_{x,y \in {\cal X}} \min\left\{1, \frac{\min\{f_R(x)/f_R(y),f_R(y)/f_R(x)\}}{f_M(x,y)}\right\} \ , \] where $f_R,f_M$ are the functions approximating the predicates $Republican$ and $Married$, respectively, and ${\cal X}$ is the set of patterns representing the available sample of people\footnote{For simplicity we do not consider here the case $f_R(x)=f_R(y)=0$.}. The corresponding loss is obtained by applying Equation~\ref{eq:L}-{\bf (b)}: \[ \sum_{x,y \in {\cal X}} \max\left\{0,-\log\left( \frac{\min\{f_R(x)/f_R(y),f_R(y)/f_R(x)\}}{f_M(x,y)}\right)\right\} \ . \] \end{example}
\section{Generative Learning with Logic} \label{sec:generative} This section shows how the discriminative and generative parts of an image-to-image translation system can be formulated by merging logic and learning, yielding a more understandable and easier-to-extend setup.
Let us assume we are given a set of images $\mathcal{I}$. There are two components of a translator framework. First, a set of \textit{generator} functions $g_{j}: \mathcal{I} \rightarrow \mathcal{I}$, which take as input an image representation and generate a corresponding image in the output domain, depending on the semantics given to the task. Second, a set of \textit{discriminator} functions $d_i: \mathcal{I} \rightarrow [0,1]$ determining whether an input image $x\in \mathcal{I}$ belongs to class $i$ (i.e. stating whether or not an image has a given property); thus, they must be understood in a more general way than in traditional GANs. Interestingly, all learnable FOL functions (i.e. functions mapping input elements into an output element) can be interpreted as generator functions, and all learnable FOL predicates (i.e. functions mapping input elements into a truth value) can be interpreted as discriminator functions.
The {\bf discriminator training} corresponds to enforcing the fitting of the supervised examples as: \begin{equation}\label{eq:discr1} \forall x\, S_i(x) \Rightarrow d_i(x), ~~ i = 1,2,\ldots\ \end{equation} where $S_i(x)$ is a given function returning true if and only if an image is a positive example for the $i$-th discriminator. These constraints allow transferring the knowledge provided by the supervision (i.e. the $S_i(x)$) into the discriminators, which play a similar role. However, the $d_i(x)$ functions are differentiable and can be exploited to train the generator functions. To this end, assuming that a given function has to generate an image with a certain property, we can force the corresponding discriminator function for such a property to positively classify it.
The {\bf generator training} for the $j$-th class can be performed by enforcing the generator to produce images that look like images of class $j$; this can be compactly expressed by the rule: \begin{equation}\label{eq:gen1} \forall x\, d_j(g_j(x)), ~~ j = 1,2,\ldots \end{equation} The logical formalism provides a simple way to describe complex behaviors of the generator functions by interleaving multiple positive or negative discriminative atoms (i.e. $d_i(g(x))$). By requiring that a given image be classified as realistic, the GAN framework implements a special case of these constraints, where the required property is similarity with real images.
Cycle consistency~\cite{zhu2017unpaired} is also commonly employed to impose that, by translating an image from one domain to another and then translating it back to the first one, we should recover the input image. Cycle consistency allows further restricting the number of possible translations. Assuming the semantics of the $i$-th generator is to generate images of class $i$, {\bf cycle consistency} can be naturally formulated as: \begin{equation}\label{eq:cycle} \forall x ~ S_i(x) \Rightarrow g_{i}(g_{j}(x)) = x~~ i=1,2,\ldots, ~~ j=1,2,\ldots \end{equation} Clearly, in complex problems, the chain of functions intervening in these constraints can be longer.
\begin{figure}[th] \centering \includegraphics[width=0.3\linewidth]{3x3.jpeg} \caption{The pictures in the first column represent the input images. The pictures in the second and third columns show the outputs of the functions \texttt{next} and \texttt{previous}, respectively, computed on the input image.} \label{fig:generation} \end{figure}
The images in different domains are typically required to share the same latent space. Let $e:\mathcal{I} \rightarrow \mathbb{R}^n$ be an encoding function mapping an image into the latent space. This encoding function must be jointly learned during the learning phase.
In this special case, the generators must be re-defined as decoder functions taking as input the latent representation of the images, namely: $g_{j}: \mathbb{R}^n \rightarrow \mathcal{I}$. The {\bf auto-encoding} constraints can be expressed using FOL as follows: \begin{equation} \label{eq:identity} \forall x~ S_i(x)\Rightarrow g_{i}(e(x)) = x, ~~ i=1,2,\ldots \end{equation} Up to now, the described constraints are very general and can be exploited in almost all generative translation tasks. However, the logical formalism (and the LYRICS environment) allows the enforcement of any complex knowledge available about the task at hand. We will see some examples in the following experiment.
\subsubsection{Next and Previous Digits Generation} As a toy example, we show a task in which we are asked to learn two generative functions, $next$ and $previous$, which, given an image of a $0,1,2$ digit, will produce an image of the next and previous digit, respectively. In order to give each image a next and a previous digit in the chosen set, a circular mapping was used, such that $0$ is the next digit of $2$ and $2$ is the previous digit of $0$.
The functions $next$ and $previous$ are implemented by feedforward neural networks with one hidden layer of 50 neurons. Since the outputs of such functions are still images, the output size of the networks is equal to the input size. An RBF network with one hidden layer and a $3$-way softmax output layer is used to implement the $zero$, $one$ and $two$ discriminators, bound to the three outputs of the network, respectively. The RBF model, by constructing closed decision boundaries, allows the generated images to resemble the input ones. Finally, let $isZero$, $isOne$ and $isTwo$ be three given functions, defined on the input domain, returning $1$ only if an image is a $0$, $1$ or $2$, respectively. They play the role of the $S_i(x)$ in the general description.
The idea behind this task is to learn the generative functions without giving any direct supervision to them, but simply by requiring that the generation is consistent with the classification performed by some jointly learned classifiers. The problem can be described by the following constraints to learn the discriminators \[ \forall x\,isZero(x) \Rightarrow zero(x), \quad \forall x\,isOne(x) \Rightarrow one(x), \quad \forall x\,isTwo(x) \Rightarrow two(x) \] and the following constraints expressing that the generative functions must return images that are correctly recognized by the discriminators: \begin{equation*} \begin{array}{l} \forall x~zero(x) \Rightarrow one(next(x)) \land two(previous(x)) \\ \forall x~one(x) \Rightarrow two(next(x)) \land zero(previous(x)) \\ \forall x~two(x) \Rightarrow zero(next(x)) \land one(previous(x)) \end{array} \end{equation*} In addition, in order to force the generated images to be similar to at least one digit in the domain, we enforce the following constraints: \begin{equation*} \begin{array}{l} \forall x~\exists y~(isZero(x) \land isOne(y)) \Rightarrow next(x) = y \\ \forall x~\exists y~(isZero(x) \land isTwo(y))\Rightarrow previous(x) = y \\ \forall x~\exists y~(isOne(x) \land isTwo(y))\Rightarrow next(x) = y \\ \forall x~\exists y~(isOne(x) \land isZero(y))\Rightarrow previous(x) = y \\ \forall x~\exists y~(isTwo(x) \land isZero(y))\Rightarrow next(x) = y \\ \forall x~\exists y~(isTwo(x) \land isOne(y))\Rightarrow previous(x) = y \ . \end{array} \end{equation*} Finally, the cycle consistency constraints can be expressed by: \[ \forall x\, next(previous(x)) = x \qquad \forall x\, previous(next(x)) = x \ . \] We test this idea on a set of around $15000$ images of handwritten characters, obtained by extracting only the $0$, $1$ and $2$ digits from the MNIST dataset. The above constraints have been expressed in LYRICS and the model computational graphs have been bound to the predicates.
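To make the compilation of such constraints into losses concrete, the following minimal sketch (illustrative code only, not the actual LYRICS API) shows how a universally quantified implication such as $\forall x\, zero(x) \Rightarrow one(next(x))$ can be turned into a differentiable penalty under the product t-norm, with the implication realized by its residuum and the universal quantifier aggregated as a mean of negative log truth degrees:

```python
import math

def t_implies(a, b):
    """Goguen residuum of the product t-norm: fuzzy implication."""
    return 1.0 if a <= b else b / a

def forall_loss(degrees):
    """Universal quantifier over a sample of truth degrees,
    aggregated as the mean of -log (product t-norm over the batch)."""
    eps = 1e-12
    return -sum(math.log(max(d, eps)) for d in degrees) / len(degrees)

# toy truth degrees standing in for network outputs on a batch
zero_x     = [0.90, 0.80, 0.95]   # degrees of zero(x)
one_next_x = [0.85, 1.00, 0.50]   # degrees of one(next(x))

# loss for the constraint: forall x  zero(x) => one(next(x))
loss = forall_loss([t_implies(a, b) for a, b in zip(zero_x, one_next_x)])
```

A perfectly satisfied constraint contributes zero loss, while violations are penalized logarithmically; in LYRICS this conversion is generated automatically from the declarative constraint.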
Figure~\ref{fig:generation} shows an example of image translation using this schema, where the image on the left is an original MNIST image and the two images on the right are the outputs of the $next$ and $previous$ generators. Now that an example has been provided, it is worth dwelling on the possibilities opened by this approach. The declarative nature of the logical formalism and its translation into real-valued constraints, exploited as loss functions of an optimization problem, enable the construction of very complex generative problems by means of a purely high-level semantic description. By exploiting models inherited from the literature, an end user can tackle very different problems with minimal implementation effort. In the following section, we show a real image-to-image translation task applying the general setup described in this section, including auto-encoders, GANs and cycle consistency. The declarative nature of the formulation makes it very easy to add an arbitrary number of translation problems and to learn them jointly. \section{Experiments on Image Translation} \label{sec:gan} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{male_to_female.jpg} \caption{\textbf{Face Gender Translation: male to female.} The top row shows input male images, the bottom row shows the corresponding generated female images.} \label{fig:m2f} \end{figure*} \begin{figure*}[t] \includegraphics[width=0.98\textwidth]{female_to_male.jpg} \caption{\textbf{Face Gender Translation: female to male.} The top row shows input female images, the bottom row shows the corresponding generated male images.} \label{fig:f2m} \end{figure*} UNIT (unsupervised image-to-image translation) tasks assume that there are no pairs of examples showing how to translate an image into a corresponding one in another domain. Combining auto-encoders with GANs is the state-of-the-art solution for tackling UNIT generation problems~\cite{zhu2017unpaired,liu2016coupled,liu2017unsupervised}.
In this section, we show how this adversarial setting can be naturally described and extended by the proposed logical and learning framework. Furthermore, we show how the logical formulation allows a straightforward extension of this application to a greater number of domains. The CelebFaces Attributes dataset~\cite{liu2015faceattributes} was used to evaluate the proposed approach; its celebrity face images are labeled with various attributes, such as gender, hair color, smiling, and eyeglasses. Images are defined as 3D pixel tensors with values belonging to the $[0,1]$ interval. The first two dimensions represent width and height coordinates while the last dimension indexes the RGB channels. In particular, we used the \emph{Male} attribute to divide the entire dataset into the two input categories, namely male and female images. In the following, $S_M(x)$ and $S_F(x)$ (such that $\forall x ~ S_F(x) \Leftrightarrow \lnot S_M(x)$) are two given predicates holding true if and only if an image $x$ is, respectively, tagged or not tagged with the \emph{male} tag. Let $e$ be an encoding function mapping images into the latent domain ${\mathcal Z}=\mathbb{R}^n$. The encoders are implemented as multilayer convolutional neural networks with resblocks~\cite{he2016deep}, leaky-ReLU activation functions and instance normalization at each layer (see \cite{liu2017unsupervised} for a detailed description of the architecture). The generative functions $g_M$ and $g_F$ map vectors of the domain $\mathcal Z$ into images. These functions are implemented as multilayer transposed convolutional neural networks (also called ``deconvolutions'') with resblocks, leaky-ReLU activation functions and instance normalization at each layer. To implement the shared latent space assumption, $g_M$ and $g_F$ share the parameters of the first layer. The functions $d_M$ and $d_F$ are trained to discriminate whether an image is real or has been generated by the $g_M$ and $g_F$ generator functions.
For example, if $x$ and $y$ are two images such that $S_M(x), S_F(y)$ hold true, then $d_M(x)$ should return $1$ while $d_M(g_M(e(y)))$ should return $0$. The problem can be described by the logical constraints that have been introduced in a general form in Section \ref{sec:generative} and that the encoding and generation functions need to satisfy. First, Equation~\ref{eq:identity} is used to enforce the encoder and generator of the same domain to be circular, that is, to map the input into itself: \begin{align} \forall x ~ S_M(x) \Rightarrow g_M(e(x)) = x \label{eq:l11} \\ \forall x ~ S_F(x) \Rightarrow g_F(e(x)) = x \label{eq:l12} \end{align} where the equality operator comparing two images in Equations \ref{eq:l11} and \ref{eq:l12} is bound to a continuous and differentiable function computing a pixel-by-pixel similarity between the images, defined as $1 -\tanh( \frac{1}{P}\sum_p |x_p - y_p|)$, where $x_p$ and $y_p$ are the $p$-th pixels of the $x$ and $y$ images and $P$ is the total number of pixels. Cycle consistency is also imposed, as described by Equation~\ref{eq:cycle}: \begin{align} \forall x ~ S_M(x) \Rightarrow g_M(e(g_F(e(x)))) = x \label{eq:cycle1} \\ \forall x ~ S_F(x) \Rightarrow g_F(e(g_M(e(x)))) = x \label{eq:cycle2} \end{align} where the same equality operator is used to compare the images.
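For concreteness, the pixel-wise equality operator $1-\tanh\big(\frac{1}{P}\sum_p |x_p-y_p|\big)$ can be implemented in a few lines; in this sketch images are flattened to lists of pixel values (the actual implementation acts on tensors, but the computation is the same):

```python
import math

def image_equality(x, y):
    """Differentiable equality between two images (flat pixel lists):
    1 - tanh of the mean absolute pixel difference."""
    p = len(x)
    return 1.0 - math.tanh(sum(abs(a - b) for a, b in zip(x, y)) / p)

img = [0.2] * 64
same = image_equality(img, img)                     # identical -> 1.0
far  = image_equality(img, [v + 1.0 for v in img])  # shifted by 1 everywhere
```

The operator saturates smoothly: identical images score exactly $1$, while a uniform difference of $1$ gives $1-\tanh(1)$.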
Finally, according to Equation~\ref{eq:gen1}, the generated images must fool the discriminators, so that they are detected as real: \begin{align} \forall x ~ S_M(x) \Rightarrow d_F(g_F(e(x)))\label{eq:adv_g1}\\ \forall x ~ S_F(x) \Rightarrow d_M(g_M(e(x)))\label{eq:adv_g2} \end{align} On the other hand, the discriminators must correctly discriminate real images from generated ones through the satisfaction of the following constraints, as stated by Equation~\ref{eq:discr1}: \begin{align} \forall x ~ S_M(x) \Rightarrow d_M(x) \land \lnot d_F(g_F(e(x)))\label{eq:adv_d1}\\ \forall x ~ S_F(x) \Rightarrow d_F(x) \land \lnot d_M(g_M(e(x))) \label{eq:adv_d2} \end{align} Using logical constraints allows us to give a clean and compact formulation of the adversarial setting. These constraints force the generation functions to generate samples that are categorized in the desired class by the discriminators. Moreover, the decoupling between the models implementing the functions, which can be inherited from the previous literature, and the description of the problem makes it straightforward to extend or transfer this setting. We implemented this mixed logical and learning task using LYRICS. The Product t-norm was selected to define the underlying fuzzy logic problem. This choice of t-norm is particularly suited for this task because, as shown earlier, it defines a cross-entropy loss on the output of the discriminators, which is the loss that was used to train these models in their original setup. The $e$, $g_M$, $g_F$ functions are trained to satisfy the constraints defined in \Cref{eq:l11,eq:l12,eq:cycle1,eq:cycle2,eq:adv_g1,eq:adv_g2}, while $d_M$ and $d_F$ are trained to satisfy \Cref{eq:adv_d1,eq:adv_d2}. Weight learning for the models was performed using the Adam optimizer with a fixed learning rate equal to $0.0001$. Some male-to-female and female-to-male translations are shown in Figures~\ref{fig:m2f} and \ref{fig:f2m}, respectively.
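The remark that the Product t-norm turns these discriminator constraints into a cross-entropy loss can be verified directly: aggregating the truth degrees of ``hold on real samples, be falsified on generated ones'' with negative logs gives exactly the standard GAN binary cross-entropy. A minimal numerical sketch, with toy discriminator outputs standing in for the real networks:

```python
import math

def neg(a):
    """Strong fuzzy negation."""
    return 1.0 - a

def forall_loss(degrees):
    """Mean of -log truth degrees (product t-norm aggregation)."""
    return -sum(math.log(d) for d in degrees) / len(degrees)

# toy discriminator outputs on a real batch and a generated batch
d_real = [0.90, 0.95, 0.80]   # e.g. d_M(x) on real male images
d_fake = [0.10, 0.20, 0.05]   # e.g. d_M on generated images

# constraint: d holds on real samples and is falsified on fakes
loss = forall_loss(d_real) + forall_loss([neg(d) for d in d_fake])

# the same quantity written as the usual GAN discriminator loss:
# -E[log d(real)] - E[log(1 - d(fake))]
bce = (-sum(math.log(d) for d in d_real) / len(d_real)
       - sum(math.log(1.0 - d) for d in d_fake) / len(d_fake))
```

The two expressions coincide term by term, which is why the fuzzy translation reproduces the original adversarial training objective.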
\subsubsection{Adding Eyeglasses} Given this setting, we can integrate a third domain in the overall problem by adding the corresponding constraints for this class. Let $S_E(x)$ be a given predicate holding true if and only if an image $x$ is tagged with the \emph{eyeglasses} tag in the dataset. Let $g_E(x)$ be the corresponding generator and $d_E(x)$ the corresponding discriminator for this property. The same network architectures as in the previous description are employed to implement $d_E$ and $g_E$. The addition of this third class requires adding the following constraints for the generators, to be integrated with those of the male and female classes, \begin{align*} \forall x& ~ S_M(x) \Rightarrow d_E(g_E(e(x)))\\ \forall x& ~ S_F(x) \Rightarrow d_E(g_E(e(x)))\\ \forall x& ~ S_E(x) \Rightarrow g_E(e(x)) = x \\ \forall x& ~ S_M(x) \wedge S_E(x) \Rightarrow d_E(g_F(e(x))) \\ \forall x& ~ S_F(x) \wedge S_E(x) \Rightarrow d_E(g_M(e(x))) \\ \forall x& ~ S_M(x) \wedge S_E(x) \Rightarrow g_E(e(g_F(e(x)))) = g_F(e(x)) \\ \forall x& ~ S_F(x) \wedge S_E(x) \Rightarrow g_E(e(g_M(e(x)))) = g_M(e(x)) \\ \forall x& ~ S_M(x) \wedge \neg S_E(x) \Rightarrow g_M(e(g_E(e(x)))) = g_E(e(x)) \\ \forall x& ~ S_F(x) \wedge \neg S_E(x) \Rightarrow g_F(e(g_E(e(x)))) = g_E(e(x)) \end{align*} and the following for the discriminator: \begin{align*} \forall x& ~ S_E(x) \Rightarrow d_E(x)\\ \forall x& ~ S_M(x)\wedge\neg S_E(x) \Rightarrow \neg d_E(g_E(e(x)))\\ \forall x& ~ S_F(x)\wedge\neg S_E(x) \Rightarrow \neg d_E(g_E(e(x))) \end{align*} We note that in this case the eyeglasses class is mutually exclusive with neither the male nor the female class. This is why some constraints carry a conjunction in their premises. In addition, we have to distinguish how the male and female generators behave in the presence of the eyeglasses attribute. In particular, we enforce that translating the gender attribute does not affect the presence of eyeglasses.
Figure~\ref{fig:eyeglasses} shows some examples of the original face images and the corresponding generated images of the faces with added eyeglasses. As already noted, the proposed approach is very general and can be exploited to manage several attributes in a visual generation task, combining a high-level logical description with deep neural networks. \begin{figure}[t] \includegraphics[width=0.98\textwidth]{eyeglasses.jpg} \caption[Face Gender Translation: male/female to eyeglasses]{\textbf{Face Gender Translation: male/female to eyeglasses.} The top row shows input male/female images whereas the bottom row shows the corresponding generated faces with eyeglasses.} \label{fig:eyeglasses} \end{figure} \section{Conclusions} \label{sec:conclusion} This paper presents a new general approach to visual generation that combines logic descriptions of the target to be generated with deep neural networks. Its most distinguishing property is the flexibility of describing new generation problems by simple logic descriptions, which makes it possible to attack very different problems. Instead of looking for specific hand-crafted cost functions, the proposed approach offers a general scheme for their construction that arises from t-norm theory.
Moreover, the interleaving of different image translation tasks allows one to accumulate a knowledge base that can dramatically facilitate the construction of new translation tasks. The experimental results show the flexibility of the proposed approach, which makes it possible to deal with realistic face translation tasks. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:intro} Understanding the statistical properties of the extremes of stochastic processes is a task of paramount importance in a wide range of contexts, including the physics of disordered systems \cite{D81,BBP07}, computer science \cite{KM00,MK02,MK03}, and evolutionary biology \cite{SBA98,KJ05}. During the last century, these properties have been investigated systematically within the field of Extreme Value Statistics (EVS) -- for a recent review, see \cite{MP20}. Given a one-dimensional time series $x(\tau)$, where $0\leq \tau\leq T$ indicates time, one of the central quantities in EVS is the global maximum $M$ of the process up to time $T$, defined as \begin{equation} M=\max_{0\leq \tau\leq T}x(\tau)\,. \end{equation} A schematic representation of a stochastic process $x(\tau)$ is shown in Fig.~\ref{fig:tmax_schem}, where the global maximum $M$ is highlighted. Even though computing the distribution of $M$ is generally quite nontrivial, a few exactly solvable cases exist. In particular, one of the fundamental results in EVS deals with the case where the positions of the process at different times are independent and identically distributed (i.i.d.) random variables (meaning that $x(\tau)$ and $x(\tau')$ are i.i.d. if $\tau\neq \tau'$). In this i.i.d. case, one can show that for large $T$ the distribution of $M$ always belongs to one of three universality classes, independently of the specific distribution of the random variables $x(\tau)$ \cite{Gumbel_book}. 
This universal result can also be extended to the case where the process $x(\tau)$ is weakly correlated, meaning that the autocorrelation function of $x(\tau)$ decays exponentially in time as \begin{equation} \langle x(\tau) x(\tau')\rangle -\langle x(\tau) \rangle\langle x(\tau')\rangle \sim f\left(\frac{|\tau-\tau'|}{\xi}\right)\,, \label{autocorrelation_function} \end{equation} where $\xi$ is the correlation time of the process and $f(z)$ decays faster than any power law for large $z$. Indeed, using a ``block renormalization'' argument, one can still apply the same universal result as for i.i.d. variables when $T\gg \xi$ \cite{MP20}. Even though in many cases one is interested in the magnitude $M$ of the maximum, an equally important observable is the time $t_{\rm m}$ at which the maximum is attained (see Fig.~\ref{fig:tmax_schem}). Indeed, determining the time at which a time series will reach its global maximum is relevant in many different situations, from finance \cite{DW80,BC04,RM07,MB08} to sports \cite{CK15}. For instance, the time $t_{\rm m}$ at which a stock price in the financial market reaches its global maximum within a fixed time window $T$ (e.g., a trading day) is a quantity of clear practical importance. The distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ of the maximum has been investigated for a wide range of processes. For instance, when the variables $x(\tau)$ for $0\leq \tau\leq T$ are i.i.d., it is easy to show that $t_{\rm m}$ is uniformly distributed in the interval $[0,T]$, i.e., that \begin{equation} P(t_{\rm m}|T)=\frac1T\,, \label{uniform_mead_intro} \end{equation} for $0\leq t_{\rm m}\leq T$. When correlations are present, the probability density function (PDF) $P(t_{\rm m}|T)$ is usually more complicated.
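The uniform law in Eq.~\eqref{uniform_mead_intro} is easy to check numerically: drawing i.i.d. samples and recording the index at which the maximum occurs produces a flat histogram. A minimal sketch, with discrete time standing in for the continuum:

```python
import random

random.seed(0)
n_slots, n_samples = 50, 20000   # "time" slots and number of realizations
counts = [0] * n_slots
for _ in range(n_samples):
    xs = [random.gauss(0.0, 1.0) for _ in range(n_slots)]
    counts[xs.index(max(xs))] += 1   # slot holding the maximum

# each slot should collect about n_samples / n_slots = 400 hits
```

The same experiment with correlated samples (e.g. a random walk) departs strongly from this flat profile, which is precisely the point of the discussion that follows.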
For instance, in the paradigmatic case of an overdamped Brownian motion (BM) in one dimension the distribution of $t_{\rm m}$ was first computed by L\'evy, who showed that \cite{Levy,Feller,SA53} \begin{equation} P(t_{\rm m}|T)=\frac{1}{\pi\sqrt{t_{\rm m}(T-t_{\rm m})}}\,. \label{arcsine_intro} \end{equation} Since the corresponding cumulative distribution reads \begin{equation} P(t_{\rm m}\leq t|T)=\int_{0}^{t}dt_{\rm m}P(t_{\rm m}|T)=\frac{2}{\pi}\sin^{-1}\left(\sqrt{\frac{t}{T}}\right)\,, \end{equation} this distribution is known as L\'evy's arcsine law. More recently, the distribution $P(t_{\rm m}|T)$ has been studied for several generalizations of BM, including constrained BM \cite{She79,B03,RFM07,MB08,MRK08,SLD10,MY10,MMS19,MLD20}, BM with stochastic resetting starting from the origin \cite{SP21,MMSS21}, BM with drift \cite{She79,B03,MB08}, fractional BM \cite{DW16,SDW18}, Bessel process \cite{SLD10}, L\'evy flights \cite{SA53,M10}, random acceleration process \cite{MRZ10}, and heterogeneous diffusion \cite{S22}. The distribution of $t_{\rm m}$ has also been studied in the case of run-and-tumble particles (RTP) \cite{SK19,MLD20a,MLD20} and for $N$ vicious walkers \cite{RS11}. Moreover, the distribution of the time of the maximum plays a central role for computing the mean area of the convex hull of a two-dimensional process \cite{RMC09,MCR10,DMR13,HMSS20,MMSS21,SKMS22} and for determining the hitting probability for anomalous diffusion processes \cite{MRZ10}. However, to the best of our knowledge, before our recent Letter \cite{MMS21}, the time of the maximum had never been systematically investigated in the case of {\it stationary} stochastic processes. \begin{figure}[t] \includegraphics[scale=1]{tmax_schematic.pdf} \caption{\label{fig:tmax_schem} Schematic representation of a stochastic process $x(\tau)$ as a function of $\tau$, for $0\leq \tau\leq T$.
The global maximum $M=x(t_{\rm m})$ is reached at time $t_{\rm m}$.} \end{figure} Stationary processes, i.e., stochastic processes that are invariant under a time shift, can be observed at very different scales in nature, from Brownian motors inside the cell \cite{D97} to climate systems \cite{WFM20}. A fundamental step in characterizing a stationary system is to determine whether it is at equilibrium or out of equilibrium. In addition to being stationary, equilibrium processes satisfy a stronger condition, namely detailed balance, which requires all probability currents in phase space to vanish. As a consequence of the detailed balance condition, equilibrium processes are also invariant under time-reversal symmetry and their physical properties are generally well-understood within the framework of statistical physics. On the other hand, nonequilibrium processes are characterized by probability currents in the steady state. Moreover, even though in recent years several general results have been derived concerning the fluctuations in out-of-equilibrium systems \cite{jarzynski,K98,C1999,seifert05,seifert12,HG20}, it still remains challenging to characterize precisely the statistical properties of these fluctuations. For this reason, several techniques for detecting nonequilibrium fluctuations in steady states have been developed -- for a review see \cite{GMG18}. Notably, the case in which the autocorrelation function of the process decays over a typical timescale $\xi$ (as in Eq.~\eqref{autocorrelation_function}) has been recently investigated in the context of EVS \cite{EM_2011,MP20,MMSS21,MMSS21b}. In particular, both for the Ornstein-Uhlenbeck process \cite{MP20} and BM with resetting \cite{EM_2011,MP20,MMSS21}, it has been shown that the distribution of the maximum $M$, when properly rescaled, converges to the universal Gumbel form for $T\gg \xi$, where the correlation timescale $\xi$ depends on the details of the process. 
A similar late-time universality has also been observed for the record statistics of random walks with resetting \cite{MMSS21b}. The reason for these universal results is that, when $T\gg \xi$, one can apply a block renormalization argument which reduces the system to a collection of i.i.d.~variables (for the details of this argument, see \cite{MP20}). On the other hand, when $T\ll \xi$ the process is strongly correlated and one cannot apply the universal results, valid for i.i.d.~variables. Therefore, by changing the observation time $T$, the process interpolates between a strongly correlated state (for $T\ll\xi$) and an independent state (for $T\gg\xi$), as summarized in Fig.~\ref{fig:cross}. For this reason, stationary processes provide a natural laboratory to investigate the role of correlations in EVS. Since the time $t_{\rm m}$ is one of the central quantities in EVS, it is natural to ask whether the universality at late times also applies to the distribution $P(t_{\rm m}|T)$. Note that, even for short times, one could naively argue that for a stationary process the distribution $P(t_{\rm m}|T)$ should be given by the uniform measure in Eq.~\eqref{uniform_mead_intro}, as a consequence of the time-translational invariance. Interestingly, this is not the case due to the time-correlations of the process, as shown in \cite{MMS21}. Nevertheless, one expects the time-correlations of the process to become negligible for $T\gg \xi$, leading to the uniform distribution in Eq.~\eqref{uniform_mead_intro}. In our recent Letter \cite{MMS21}, using a path-decomposition technique, we have shown that this is the case only in the ``bulk'' of the distribution $P(t_{\rm m}|T)$, i.e., for $\xi\ll t_{\rm m}\ll (T-\xi)$. In the ``edge regimes'' for $t_{\rm m}\to 0$ and $t_{\rm m} \to T$ the distribution $P(t_{\rm m}|T)$ strongly deviates from the uniform distribution.
Moreover, we have also shown that for a large class of equilibrium processes the full distribution $P(t_{\rm m}|T)$, including the edge regimes, becomes universal at late times. These results were recently announced, albeit without any further details, in \cite{MMS21}. The present paper provides a detailed description of these derivations, which we believe could be useful to investigate other problems in EVS. \begin{figure}[t] \includegraphics[scale=1]{cross.pdf} \caption{\label{fig:cross} The behavior of stationary stochastic processes of total duration $T$ and correlation time $\xi$ is controlled by the dimensionless parameter $T/\xi$. When $T\ll \xi$ the process is strongly correlated, while one can map the process into a collection of i.i.d.~variables for $T\gg\xi$. } \end{figure} In this paper we study the distribution $P(t_{\rm m}|T)$ for several processes, both in and out of equilibrium. It turns out that computing $P(t_{\rm m}|T)$ analytically is very hard, except for a handful of processes. Specifically, we present analytical solutions for $P(t_{\rm m}|T)$ for two equilibrium processes, corresponding to an overdamped BM in a confining potential of the form (i) $V(x)= \alpha |x|$ and (ii) $V(x)=\alpha x^2$ (the latter is the standard Ornstein-Uhlenbeck process). Similarly, we also obtain analytical solutions of $P(t_{\rm m}|T)$ for two very different out-of-equilibrium processes: (iii) a resetting Brownian motion (RBM) in one dimension and (iv) an RTP moving in the presence of a confining potential $V(x)= \mu |x|$. In the case of RBM, previous results for $P(t_{\rm m}|T)$ were known only for the case in which the particle initially starts at the fixed position $x_0=0$, namely at the origin \cite{MMSS21,SP21}. In contrast, we show in this paper that when the initial position $x_0$ is sampled from the stationary distribution, $P(t_{\rm m}|T)$ is considerably different and is harder to compute.
In addition to these four cases, we also study several other examples using numerical simulations and we highlight some universal properties of $P(t_{\rm m}|T)$, in particular at the edges when $t_{\rm m}\to 0$ or $t_{\rm m}\to T$. We then provide a block renormalization group argument to compute some of these universal edge scaling functions. Finally, we also provide a rather general sufficiency test to decide whether a given stationary process is out of equilibrium without having any a priori knowledge of its underlying dynamics. This test turns out to be incredibly simple: if $P(t_{\rm m}|T)$ turns out to be asymmetric around $T/2$ (either from simulations or analytical computations), the underlying process is surely out of equilibrium. If $P(t_{\rm m}|T)$ turns out to be symmetric around $T/2$, the test is inconclusive. The rest of the paper is organized as follows. In Section \ref{sec:summary}, we provide a summary of our main results. In Section \ref{sec:eq}, we investigate the time $t_{\rm m}$ of the maximum in the case of equilibrium processes. We consider the paradigmatic model of an overdamped Brownian particle in one dimension subject to an external potential $V(x)$ such that $V(x)\approx\alpha|x|^p$ for large $|x|$. Using a path-decomposition technique, we derive an exact result in the cases $V(x)=\alpha |x|$ (subsection \ref{sec:p1}) and $V(x)=\alpha x^2$ -- corresponding to the Ornstein-Uhlenbeck process (subsection \ref{sec:p2}). Moreover, in subsection \ref{sec:univ} we show that for $p>0$ the distribution of $t_{\rm m}$ becomes universal at late times. In Section \ref{sec:neq}, we investigate the distribution of $t_{\rm m}$ for nonequilibrium processes, including RBM (subsection \ref{sec:res_BM}) and a confined RTP (subsection \ref{sec:rtp}). In addition, in subsection \ref{sec:criterion}, we formulate a simple criterion, based on the estimation of the distribution $P(t_{\rm m}|T)$, to detect nonequilibrium fluctuations in steady states. 
Finally, in Section \ref{sec:conclusion}, we conclude with a summary and we discuss possible perspectives. Some details of the computations are presented in the appendices. \section{Models and summary of the main results} \label{sec:summary} Since the paper is rather long, we provide a concise description of the models and a summary of our main results, so that the main mathematical formulae can be easily retrieved without a detailed search in the main body of the paper. We consider a one-dimensional stationary process $x(\tau)$ for $0\leq \tau\leq T$. We assume that at time $\tau=0$, the process has already reached a steady state. This is equivalent to assuming that the system is initialized in some arbitrary state at time $\tau=-\infty$ and that we start to observe it at time $\tau=0$. Our goal is to compute analytically the distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ at which the process reaches its global maximum up to time $T$. Note that the domain of $t_{\rm m}$ is the time interval $[0,T]$. We consider different stochastic models, both at equilibrium and out of equilibrium. \subsection{Equilibrium processes} We consider a class of equilibrium processes corresponding to an overdamped Brownian particle in a confining potential $V(x)$, such that $V(x)\approx\alpha |x|^p$, with $\alpha>0$ and $p>0$. The position $x(\tau)$ of the process evolves according to the Langevin equation \begin{equation} \frac{dx(\tau)}{d\tau}=-V'(x)+\sqrt{2D}\eta(\tau)\,, \end{equation} where $\eta(\tau)$ is a Gaussian white noise with zero mean and correlator $\langle \eta(\tau)\eta(\tau')\rangle=\delta(\tau-\tau')$, $D>0$ is the diffusion constant, and $V'(x)=dV(x)/dx$. The equilibrium stationary state of this process is given by the Boltzmann weight $P_{\rm st}(x)\propto e^{-V(x)/D}$. Computing the full distribution $P(t_{\rm m}|T)$ for any $p>0$ is challenging. Nevertheless, we are able to calculate this quantity in two exactly solvable cases.
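Although exact results are limited to the two cases below, for general $p$ the distribution $P(t_{\rm m}|T)$ is straightforward to estimate numerically, by integrating the Langevin equation with an Euler-Maruyama scheme and using a burn-in period so that the initial condition is effectively drawn from the stationary state. A minimal sketch for $p=1$ (the parameter values are illustrative):

```python
import math
import random

random.seed(1)
D, alpha = 1.0, 1.0              # diffusion constant, potential strength
dt, T, burn = 4e-3, 5.0, 10.0    # time step, observation window, burn-in

def force(x):
    """-V'(x) for V(x) = alpha*|x| (the p = 1 case)."""
    return -alpha if x > 0 else alpha

def sample_tmax():
    x, t = 0.0, -burn
    while t < 0.0:               # burn-in: relax to the stationary state
        x += force(x) * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
        t += dt
    best_x, best_t = x, 0.0
    while t < T:                 # track the time of the running maximum
        x += force(x) * dt + math.sqrt(2.0 * D * dt) * random.gauss(0.0, 1.0)
        t += dt
        if x > best_x:
            best_x, best_t = x, t
    return best_t

tmax = [sample_tmax() for _ in range(500)]
mean_tmax = sum(tmax) / len(tmax)   # close to T/2 for an equilibrium process
```

Histogramming the collected values of $t_{\rm m}$ reproduces the flat bulk and divergent edges derived below, and the sample mean provides a quick check of the equilibrium symmetry $\langle t_{\rm m}\rangle = T/2$.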
\subsubsection{The case $p=1$} In the case $V(x)=\alpha |x|$, we show that \begin{equation} P(t_{\rm m}|T)=\frac{\alpha^2}{4D}F_1\left(\frac{\alpha^2}{4D}t_{\rm m},\frac{\alpha^2}{4D}(T-t_{\rm m})\right)\,,\label{scaling_p1_summary} \end{equation} where the double Laplace transform of $F_1(T_1,T_2)$ is given by \begin{eqnarray} \label{LT_scaling_p1_summary} &&\int_{0}^{\infty}dT_1 e^{-s_1 T_1} \int_{0}^{\infty}dT_2 e^{-s_2 T_2} F_1(T_1,T_2)\\&=& \frac{1}{2(1+\sqrt{1+s_1})(1+\sqrt{1+s_2})}\Bigg[1+\int_{0}^{\infty}dz\,e^{-z}\frac{\left(\sqrt{1+s_1}+1-e^{-\sqrt{1+s_1}z}\right)\left(\sqrt{1+s_2}+1-e^{-\sqrt{1+s_2}z}\right)}{\left(\sqrt{1+s_1}-1+e^{-\sqrt{1+s_1}z}\right)\left(\sqrt{1+s_2}-1+e^{-\sqrt{1+s_2}z}\right)}\Bigg]\,.\nonumber \end{eqnarray} Inverting this double Laplace transform is highly nontrivial. Nevertheless, from this expression it is easy to check that $P(t_{\rm m}|T)$ is symmetric around the midpoint $t_{\rm m}=T/2$, i.e., that $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. This implies that the first moment of $t_{\rm m}$ is simply given by $\langle t_{\rm m} \rangle=T/2$. Interestingly, this property, which is a consequence of the time-reversal symmetry, is valid for any equilibrium process and is confirmed by numerical simulations (see Fig.~\ref{fig:comparis_eq-neq}{\bf a}). This observation will lead us to formulate the criterion discussed below to decide whether or not a stationary time series is at equilibrium. In addition, from the expression in Eq.~\eqref{LT_scaling_p1_summary}, it is possible to extract the asymptotic behavior of $P(t_{\rm m}|T)$ for small and large $T$. When $T\ll \xi$, where $\xi=(4D)/\alpha^2$ is the correlation time of the process, we find \begin{equation} P(t_{\rm m}|T)\approx\frac{1}{\pi\sqrt{t_{\rm m}(T-t_{\rm m})}}\,, \end{equation} which corresponds to the arcsine law, valid for free BM (see Eq.~\eqref{arcsine_intro}). Thus, for short times the process is strongly correlated and behaves as a BM.
On the other hand, in the late-time regime $T\gg\xi$, we find \begin{equation} P(t_{\rm m}|T)\approx \begin{cases}\frac1T G\left(\frac{\alpha^2}{4D}t_{\rm m}\right)\quad &\text{ for }\quad t_{\rm m}\lesssim 4D/\alpha^2\,,\\ \\ \frac1T \quad &\text{ for }\quad 4D/\alpha^2\ll t_{\rm m} \ll T-4D/\alpha^2\,,\\ \\ \frac1T G\left[\frac{\alpha^2}{4D}(T-t_{\rm m})\right]\quad &\text{ for }\quad t_{\rm m}\gtrsim T- 4D/\alpha^2\,,\\ \end{cases} \label{intro_PT_asymptotics_1} \end{equation} where \begin{equation} G(z)=\frac12 \left[1+\operatorname{erf}(\sqrt{z})+\frac{1}{\sqrt{\pi z}}e^{-z}\right]\,, \label{G_summary} \end{equation} and $\operatorname{erf}(z)=(2/\sqrt{\pi})\int_{0}^{z}du~e^{-u^2}$. This function $G(z)$ has the asymptotic behaviors \begin{equation} G(z)\approx \begin{cases} 1/(2\sqrt{\pi z})\quad &\text{ for } z\to 0\,,\\ \\ 1+e^{-z}/(4\sqrt{\pi}z^{3/2})\quad &\text{ for } z\to \infty\,.\\ \end{cases} \label{PT_asymp_p1_summary} \end{equation} Thus, the PDF $P(t_{\rm m}|T)$ becomes constant in the bulk regime where $4D/\alpha^2\ll t_{\rm m} \ll T-4D/\alpha^2$. The edge regimes $t_{\rm m}\to0$ and $t_{\rm m}\to T$ are instead described by the function $G(z)$ in Eq.~\eqref{G_summary}. In particular, $P(t_{\rm m}|T)$ diverges as $\sim 1/\sqrt{t_{\rm m}}$ for $t_{\rm m} \to 0$ and, by symmetry, as $\sim 1/\sqrt{T-t_{\rm m}}$ for $t_{\rm m} \to T$. The width of the edge regime is in this case $\mathcal{O}(1)$. \subsubsection{The case $p=2$} In the case $V(x)=\alpha x^2$, corresponding to the Ornstein-Uhlenbeck process, we obtain \begin{equation} P(t_{\rm m}|T)=\alpha F_{\rm OU}(\alpha t_{\rm m},\alpha(T-t_{\rm m}))\,, \label{summary_scaling_relation_OU} \end{equation} where \begin{equation} \int_{0}^{\infty}dT_1~\int_{0}^{\infty}dT_2~e^{-s_1 T_1-s_2 T_2}F_{\rm OU}(T_1,T_2)=\frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\frac{D_{-1-s_1/2}\left(-z\right)}{D_{-s_1/2}\left(-z\right)}\frac{D_{-1-s_2/2}(-z)}{D_{-s_2/2}(-z)}\,.
\label{summary_scaling_OU} \end{equation} Here, $D_p(z)$ is the parabolic cylinder function \cite{DLMF}. From this expression, we find that the distribution $P(t_{\rm m}|T)$ is symmetric around $t_{\rm m}=T/2$, implying $\langle t_{\rm m}\rangle=T/2$. This is in agreement with the fact that the process is at equilibrium. The asymptotic behaviors of $P(t_{\rm m}|T)$ for short and late times are qualitatively similar to the ones we obtained for $p=1$ (see Eq.~\eqref{intro_PT_asymptotics_1}). In particular, in the short-time regime $T\ll\xi$, where $\xi=1/\alpha$ for this model, we find that the distribution $P(t_{\rm m}|T)$ approaches the arcsine law in Eq.~\eqref{arcsine_intro}. Thus, for short times the process is strongly correlated and we find that the distribution of $t_{\rm m}$ approaches that of a BM. On the other hand, in the late-time regime $T\gg\xi$, we obtain \begin{equation} P(t_{\rm m}|T)\approx \begin{cases}\frac1T G\left(\alpha \ln(T)~t_{\rm m}\right)\quad &\text{ for }\quad t_{\rm m}\lesssim 1/(\alpha \ln(T))\,,\\ \\ \frac1T \quad &\text{ for }\quad 1/(\alpha \ln(T))\ll t_{\rm m} \ll T-1/(\alpha \ln(T))\,,\\ \\ \frac1T G\left(\alpha \ln(T)~(T-t_{\rm m})\right)\quad &\text{ for }\quad t_{\rm m}\gtrsim T-1/(\alpha \ln(T))\,,\\ \end{cases} \label{PT_asymp_p2_summary} \end{equation} where the function $G(z)$ is given again in Eq.~\eqref{G_summary}. Interestingly, we find that the late-time behavior of $P(t_{\rm m}|T)$ is the same for $p=1$ and $p=2$. The only difference is the width of the edge regimes, which is $\mathcal{O}(1)$ for $p=1$ and $\mathcal{O}(1/\ln(T))$ for $p=2$. This result is quite unexpected and led us to ask whether this universality extends to any $p>0$.
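Both the symmetry and the flat late-time bulk are easy to check by simulating the Ornstein-Uhlenbeck dynamics directly. The sketch below (ours, with illustrative parameters) uses the drift $-V'(x)=-2\alpha x$ and draws the initial condition from the Boltzmann state, which here is Gaussian with variance $D/(2\alpha)$:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, D = 1.0, 1.0          # V(x) = alpha x^2, correlation time xi = 1/alpha
T, dt = 20.0, 1e-2           # late-time regime T >> xi
n_steps, n_traj = int(T / dt), 3000

# Boltzmann start: P_st ~ exp(-alpha x^2 / D), i.e. Gaussian of variance D/(2 alpha)
x = rng.normal(scale=np.sqrt(D / (2 * alpha)), size=n_traj)
xmax, tm = x.copy(), np.zeros(n_traj)
for step in range(1, n_steps + 1):
    # Euler-Maruyama step for dx = -2*alpha*x dt + sqrt(2D) dW
    x += -2 * alpha * x * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_traj)
    hit = x > xmax
    xmax[hit], tm[hit] = x[hit], step * dt

mean_tm_frac = tm.mean() / T                          # ~ 1/2 (equilibrium symmetry)
bulk_frac = np.mean((tm > 0.3 * T) & (tm < 0.5 * T))  # ~ 0.2 if the bulk is flat
```

In a bulk window of width $0.2\,T$, well separated from both edges, a flat $P(t_{\rm m}|T)\approx 1/T$ predicts a probability close to $0.2$, which the simulation reproduces.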
\subsubsection{Universality at late times} \begin{figure*}[t] \includegraphics[scale=0.7]{ptm_late_times.pdf} \caption{\label{fig:ptm_asymp} Schematic representation of the late-time distribution $P(t_{\rm m}|T)$ as a function of $t_{\rm m}$ for Brownian motion in a potential $V(x)=\alpha |x|^p$ with diffusion constant $D$. The blue curve represents the universal result in Eq.~\eqref{universal_G_summary}. The distribution is flat in the bulk regime $\lambda(T)\ll t_{\rm m} \ll (T-\lambda(T))$, while it diverges in the edge regimes $t_{\rm m}\to 0$ and $t_{\rm m} \to T$. The width of the edge regimes is $\lambda(T)$, given in Eq.~\eqref{lambda_summary}. Since the process is at equilibrium, the distribution is symmetric around the midpoint $t_{\rm m}=T/2$, i.e., $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$.} \end{figure*} Using a ``block renormalization'' argument, we indeed show that the late-time behavior of $P(t_{\rm m}|T)$ is universal for any $p>0$. In particular, for $T\gg\xi$, we find that \begin{equation} P(t_{\rm m}|T)\approx \begin{cases} \frac{1}{T}G\left(\frac{t_{\rm m}}{\lambda(T)}\right) &~~\text{ for }~~ t_{\rm m}\lesssim \lambda(T)\\ \\ \frac1T &~~\text{ for } ~~\lambda(T)\ll t_{\rm m}\ll T- \lambda(T)\\ \\ \frac{1}{T}G\left(\frac{T-t_{\rm m}}{\lambda(T)}\right) &~~\text{ for } ~~ t_{\rm m}\gtrsim T- \lambda(T) \,, \end{cases} \label{universal_G_summary} \end{equation} where $G(z)$ is given in Eq.~\eqref{G_summary} and \begin{equation} \lambda(T)=\frac{4D}{\alpha^2 p^2}\left(\frac{D}{\alpha}\ln(T)\right)^{-2(p-1)/p}\,. \label{lambda_summary} \end{equation} Interestingly, at late times the distribution $P(t_{\rm m}|T)$, once appropriately scaled, becomes completely universal, i.e., independent of the specific details of the model. Note that the model parameters $\alpha$ and $p$ appear in the expression of $P(t_{\rm m}|T)$ only through the width $\lambda(T)$ of the edge regime.
For large $T$, this width is constant for $p=1$, shrinks as $\ln(T)^{-2(p-1)/p}$ for $p>1$, and grows as $\ln(T)^{2(1-p)/p}$ for $0<p<1$. Setting $p=1$ or $p=2$ in Eq.~\eqref{universal_G_summary}, we recover the results in Eqs.~\eqref{intro_PT_asymptotics_1} and \eqref{PT_asymp_p2_summary}. \subsection{Out-of-equilibrium processes} We also investigate the distribution of the time of the maximum in the case of out-of-equilibrium stationary processes. In this case, the system does not satisfy the time-reversal symmetry. Consequently, the distribution $P(t_{\rm m}|T)$ is generally not symmetric around the midpoint $t_{\rm m}=T/2$. The two exactly solvable processes that we consider are a single RBM and a confined RTP. \subsubsection{Resetting Brownian motion} \begin{figure*}[t]\includegraphics[scale=0.5]{little_T.pdf} \caption{\label{fig:comparis_eq-neq} {\bf a)} Probability density function $P(t_{\rm m}|T)$ as a function of the time $t_{\rm m}$ of the maximum for the Ornstein-Uhlenbeck process of duration $T=1$, with $\alpha=D=1$. The curve is symmetric around the midpoint $t_{\rm m}=T/2$ (vertical dashed line). {\bf b)} Probability density function $P(t_{\rm m}|T)$ versus $t_{\rm m}$ for Brownian motion with stochastic resetting, obtained from numerical simulations with $D=T=1$ and $r=10$. The curve is not symmetric around the midpoint $t_{\rm m}=T/2$ (see also Fig.~\ref{fig:avg_tmax}). } \end{figure*} We consider a Brownian particle with diffusion coefficient $D$. The particle is reset to the origin $x=0$ at a constant rate $r$.
In other words, in a small time interval $dt$, the position $x(t)$ of the particle evolves according to \cite{EM_2011,EMS20} \begin{equation} x(t+dt)=\begin{cases} x(t)+\sqrt{2D}\eta(t)dt&\quad\text{ with probability }1-rdt\,,\\ \\ 0&\quad\text{ with probability }rdt\,.\\ \end{cases} \label{resetting_rule_1} \end{equation} The resetting process admits the following nonequilibrium steady state \cite{EM_2011} \begin{equation} P_{\rm st}(x_0)=\frac{1}{2}\sqrt{\frac{r}{D}}\exp\left(-\sqrt{\frac{r}{D}}|x_0|\right)\,. \label{NEQSS_RES} \end{equation} Note that detailed balance is manifestly violated by the resetting move in Eq.~\eqref{resetting_rule_1}, which induces a nonzero current to the resetting point $x=0$, even in the stationary state. Consequently, the RBM is a nonequilibrium process. Using a path-decomposition technique, we show that $P(t_{\rm m}|T)$ has the scaling form \begin{equation} P(t_{\rm m}|T)=rF_R(rt_{\rm m},r(T-t_{\rm m}))\,, \label{scaling_res_summary} \end{equation} where \begin{eqnarray} \nonumber &&\int_{0}^{\infty}dT_1~e^{-s_1 T_1}\int_{0}^{\infty}dT_2~e^{-s_2 T_2}~F_R(T_1,T_2)=\frac{1}{2} \frac{1}{(1+\sqrt{1+s_1})\sqrt{1+s_2}}\\&+&\frac12 \frac{\sqrt{1+s_2}}{\sqrt{1+s_1}-1}\int_{0}^{\infty}dz~e^{-(1+\sqrt{1+s_1})z}\frac{e^{z\sqrt{1+s_1}} s_1-\sqrt{1+s_1}+1} {\left(s_1+ e^{-z\sqrt{1+s_1}}\right)\left(s_2+ e^{-z\sqrt{1+s_2}}\right)}\,. \label{FR_LT_summary} \end{eqnarray} Interestingly, in this case we find that $P(t_{\rm m}|T)\neq P(T-t_{\rm m}|T)$, as a consequence of the nonequilibrium nature of the process. This is confirmed by numerical simulations (see Fig.~\ref{fig:comparis_eq-neq}{\bf b}). Thus, the first moment of $t_{\rm m}$ deviates from the equilibrium value $T/2$. In particular, we find \begin{equation} \langle t_{\rm m}\rangle =T f(rT)\,, \end{equation} where $f(t)$ is given in Eq.~\eqref{foft} and is shown in Fig.~\ref{fig:avg_tmax}. We observe that $f(t)>1/2$ for any $t>0$.
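A direct simulation of the resetting rule in Eq.~\eqref{resetting_rule_1}, started from the stationary state in Eq.~\eqref{NEQSS_RES}, illustrates both the steady state ($\langle |x|\rangle=\sqrt{D/r}$ for the Laplace law above) and the bias of $\langle t_{\rm m}\rangle$ above $T/2$. This is our own sketch with illustrative parameters; $rT$ is chosen near the reported maximum of $f$:

```python
import numpy as np

rng = np.random.default_rng(3)
D, r, T, dt = 1.0, 1.0, 2.2, 2e-3     # rT close to the maximum of f
n_steps, n_traj = int(T / dt), 20000

# Stationary start: |x0| is exponential with mean sqrt(D/r), with a random sign
x = rng.laplace(scale=np.sqrt(D / r), size=n_traj)
xmax, tm = x.copy(), np.zeros(n_traj)
for step in range(1, n_steps + 1):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_traj)   # diffusion
    x[rng.random(n_traj) < r * dt] = 0.0                     # Poissonian resetting
    hit = x > xmax
    xmax[hit], tm[hit] = x[hit], step * dt

mean_abs = np.abs(x).mean()      # stays near sqrt(D/r) in the steady state
mean_tm_frac = tm.mean() / T     # exceeds 1/2, unlike the equilibrium cases
```

The sample mean of $t_{\rm m}/T$ comes out distinctly above $1/2$, in line with $f(rT)>1/2$.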
This function is nonmonotonic and has a maximum at $t^*\approx 2.218$ with $f(t^*)\approx0.519$. Note that this is different from the case where the RBM starts from a fixed position $x_0$ in space, as investigated in \cite{MMSS21,SP21}. In contrast, in our case the initial position $x_0$ is sampled from the nonequilibrium steady state in Eq.~\eqref{NEQSS_RES}. From Eq.~\eqref{FR_LT_summary} one can also extract the asymptotic behaviors of $P(t_{\rm m}|T)$. In the short-time regime $T\ll\xi$, where $\xi=1/r$ in this case, we obtain once again that the distribution $P(t_{\rm m}|T)$ approaches the arcsine law in Eq.~\eqref{arcsine_intro}. This is because for $T\ll 1/r$ the system typically does not reset and the process therefore reduces to a standard BM. On the other hand, for $T\gg\xi$, we get \begin{equation} P(t_{\rm m}|T)\approx\begin{cases} \frac1T G(rt_{\rm m})\quad &\text{ for }t_{\rm m}\lesssim 1/r\,,\\ \\ \frac1T\quad &\text{ for }1/r\ll t_{\rm m}\ll(T-1/r)\,,\\ \\ \frac1T\left[2 G(r(T-t_{\rm m}))-1\right]\quad &\text{ for }t_{\rm m}\gtrsim T-1/r\,,\\ \end{cases} \end{equation} where $G(z)$ is given in Eq.~\eqref{G_summary} (see Fig.~\ref{fig:comparis_eq-neq}b). Interestingly, the late-time behavior of $P(t_{\rm m}|T)$ is qualitatively similar to that of the equilibrium processes described above. Note however that in this case the distribution $P(t_{\rm m}|T)$ is not symmetric around $t_{\rm m}=T/2$. Indeed, for $t_{\rm m}\to 0$ the PDF diverges as $1/(2T\sqrt{\pi r t_{\rm m}})$ while it diverges as $1/(T\sqrt{\pi r (T-t_{\rm m})})$ for $t_{\rm m} \to T$. \subsubsection{Run-and-tumble particle in a potential $V(x)=\mu |x|$} We consider a single RTP moving in a one-dimensional potential $V(x)=\mu |x|$. The state $(x(\tau),\sigma(\tau))$ of the system at time $\tau$ is specified by the position $x(\tau)$ of the particle and its direction $\sigma(\tau)=\pm1$.
The position of the particle evolves according to the differential equation \begin{equation} \frac{dx(\tau)}{d\tau}=-V'(x)+v_0\sigma(\tau)=-\mu\operatorname{sign}(x)+v_0\sigma(\tau)\,, \end{equation} where $\operatorname{sign}(x)$ denotes the sign of $x$ and $v_0>\mu$ is the speed of the particle. The direction $\sigma(\tau)$ of the particle is flipped at a constant rate $\gamma$. As explained in Section \ref{sec:neq}, the persistent motion of the particle breaks the detailed balance condition and thus the system is out of equilibrium. In the steady state, the probability of finding the particle at $x_0$ with a positive (negative) velocity is given by \cite{SKM_19} \begin{equation} P_{\rm st}^{\pm }(x_0)=\frac12 \left(1\pm \frac{\mu}{v_0}\operatorname{sign}(x_0)\right)\frac{\gamma~\mu}{v_0^2-\mu^2}\exp\left(-\frac{2\gamma\mu}{v_0^2-\mu^2}|x_0|\right)\,. \label{joint_distribution_RTP} \end{equation} We assume that at the initial time $\tau=0$ the position $x_0$ (with a positive/negative velocity) is drawn from the distribution in Eq.~\eqref{joint_distribution_RTP}. Our goal is to compute the distribution of the time $t_{\rm m}$ at which the position of the particle becomes maximal. We show that the distribution of $t_{\rm m}$ can be written as \begin{equation} P(t_{\rm m}|T)=P_0(T)\delta(t_{\rm m})+P_{\rm bulk}(t_{\rm m}|T)+P_1(T)\delta(t_{\rm m}-T)\,, \end{equation} where $P_0(T)$, $P_{\rm bulk}(t_{\rm m}|T)$ and $P_1(T)$ are given in Eqs.~\eqref{PO_LT}, \eqref{P1_LT}, and \eqref{eq:PDF_RTP_LT_bulk}. Interestingly, the events ``$t_{\rm m}=0$'' and ``$t_{\rm m}=T$'' occur with finite probability, as highlighted by the two $\delta$ functions in the expression above. The function $P_{\rm bulk}(t_{\rm m}|T)$ has support in $0<t_{\rm m}<T$. Interestingly, even though the system is out of equilibrium, the bulk of the distribution is still symmetric around the midpoint $t_{\rm m}=T/2$, i.e., $P_{\rm bulk}(t_{\rm m}|T)=P_{\rm bulk}(T-t_{\rm m}|T)$.
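The stationary marginal in Eq.~\eqref{joint_distribution_RTP} implies $\langle |x_0|\rangle=(v_0^2-\mu^2)/(2\gamma\mu)$, which a crude simulation of the dynamics reproduces. The first-order update scheme below is our own discretization with illustrative parameters satisfying $v_0>\mu$:

```python
import numpy as np

rng = np.random.default_rng(7)
v0, mu, gamma, dt = 2.0, 1.0, 1.0, 5e-3
n_steps, n_part = 3000, 3000          # total time 15, well past relaxation

x = np.zeros(n_part)
sigma = rng.choice([-1.0, 1.0], size=n_part)
for _ in range(n_steps):
    x += (-mu * np.sign(x) + v0 * sigma) * dt      # drift plus active velocity
    flip = rng.random(n_part) < gamma * dt         # tumbling at rate gamma
    sigma[flip] *= -1.0

mean_abs = np.abs(x).mean()
expected = (v0**2 - mu**2) / (2 * gamma * mu)      # steady-state value of <|x|>
```

After the burn-in, the empirical $\langle |x|\rangle$ agrees with the exponential steady state to within a few percent (the small residual comes from the finite time step).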
Nevertheless, the amplitudes $P_0(T)$ and $P_1(T)$ of the $\delta$ functions are different, meaning that the distribution $P(t_{\rm m}|T)$ is overall not symmetric around $t_{\rm m}=T/2$. \subsubsection{Criterion to detect nonequilibrium dynamics} From the exact computations performed for different models of stationary processes, we have observed that, when the system is at equilibrium, the distribution of $t_{\rm m}$ is symmetric around the midpoint $t_{\rm m}=T/2$, i.e., $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. On the contrary, the PDF $P(t_{\rm m}|T)$ does not satisfy this symmetry for the nonequilibrium processes described above. This observation turns out to be quite general. Indeed, we show that if the process is at equilibrium then necessarily the distribution of $t_{\rm m}$ is symmetric around $t_{\rm m}=T/2$. This property is a consequence of the time-reversal symmetry of equilibrium processes. On the other hand, for nonequilibrium processes, $P(t_{\rm m}|T)$ is typically not symmetric. Note, however, that there exist nonequilibrium systems for which $P(t_{\rm m}|T)$ is symmetric. This result leads to a simple criterion to detect nonequilibrium fluctuations in stationary time series. Imagine that one has access to a long stationary time series $x(\tau)$ (for instance, this could result from some experimental measurement). Without knowing the specific details of the dynamics of the process, how can one determine whether or not the underlying system is out of equilibrium? Building on the observation that if the distribution of $t_{\rm m}$ is not symmetric then the process is necessarily out of equilibrium, we propose the following simple method. First, divide the time series into $N$ blocks, each of duration $T$ (assuming that the total duration of the time series is much larger than $T$), and measure the time $t_{\rm m}^i$ at which the maximum is reached within the $i$-th block.
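This block construction might look as follows in practice (a sketch; the helper name and the discretized Ornstein-Uhlenbeck test signal are ours, not prescribed by the text):

```python
import numpy as np

def block_tmax_fractions(series, block_len):
    """Split a stationary time series into blocks of block_len samples and
    return the within-block position of the maximum, rescaled to [0, 1]."""
    n_blocks = len(series) // block_len
    blocks = np.asarray(series[: n_blocks * block_len]).reshape(n_blocks, block_len)
    return blocks.argmax(axis=1) / (block_len - 1)

# Deterministic sanity check: the maximum of [0, 1, 2, 3, 0] sits at fraction 3/4
fracs = block_tmax_fractions(np.tile([0.0, 1.0, 2.0, 3.0, 0.0], 100), block_len=5)

# Equilibrium example: an AR(1) chain (a discretized Ornstein-Uhlenbeck process)
# is time reversible, so the block argmax times should be symmetric around 1/2
rng = np.random.default_rng(4)
noise = rng.standard_normal(200_000)
x = np.empty_like(noise)
x[0] = 0.0
for i in range(1, len(noise)):
    x[i] = 0.98 * x[i - 1] + noise[i]
mean_frac = block_tmax_fractions(x, block_len=100).mean()
```

For a nonequilibrium series the same routine would return a distribution of fractions whose mean deviates from $1/2$.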
From these $N$ values $t_{\rm m}^1\,,\ldots\,,t_{\rm m}^N$ one can build the empirical PDF $P(t_{\rm m}|T)$, with $0\leq t_{\rm m}\leq T$. If this distribution is not symmetric around the midpoint $t_{\rm m}=T/2$ (as in Fig.~\ref{fig:comparis_eq-neq}{\bf b}), then one can conclude that the system is out of equilibrium. However, if $P(t_{\rm m}|T)$ turns out to be symmetric (as in Fig.~\ref{fig:comparis_eq-neq}{\bf a}) our test is inconclusive. This test can also be applied to systems composed of many interacting degrees of freedom. Indeed, finding that the distribution of $t_{\rm m}$ for one of the variables describing the system is not symmetric is sufficient to conclude that the full system is out of equilibrium. \section{Equilibrium processes} \label{sec:eq} \begin{figure*}[t]\includegraphics[scale=0.7]{deco.pdf} \caption{\label{fig:deco} Stationary process $x(\tau)$ during the time interval $[0,T]$. The value of the global maximum is $M-\epsilon$, with $\epsilon>0$, and the time of the maximum is $t_{\rm m}$. The time interval $[0,T]$ is divided into the two subintervals $[0,t_{\rm m}]$ (I) and $[t_{\rm m} , T]$ (II). } \end{figure*} In this section, we investigate the distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ at which an equilibrium process reaches the global maximum. We focus on the paradigmatic case of an overdamped Brownian particle in a confining potential $V(x)$. The Langevin equation that describes the evolution of the position $x(\tau)$ of the particle reads \begin{equation} \frac{d x(\tau)}{d\tau}=-V'(x)+\sqrt{2D}\eta(\tau)\,, \label{langevin} \end{equation} where $V'(x)=dV(x)/dx$, $\eta(\tau)$ is a Gaussian zero-mean white noise with correlator $\langle\eta(\tau)\eta(\tau')\rangle=\delta(\tau-\tau')$, and $D>0$ is the diffusion constant.
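Trajectories of Eq.~\eqref{langevin} can be generated with a simple Euler-Maruyama scheme. The following generic sketch (our own discretization, not a prescription from the text) is checked in the noise-free limit, where the scheme must reproduce the deterministic gradient flow:

```python
import numpy as np

def euler_maruyama(v_prime, x0, D, T, dt, rng):
    """Integrate dx = -V'(x) dt + sqrt(2D) dW with the Euler-Maruyama scheme."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    noise = np.sqrt(2 * D * dt) * rng.standard_normal(n)
    for i in range(n):
        x[i + 1] = x[i] - v_prime(x[i]) * dt + noise[i]
    return x

# Noise-free check: for V(x) = x^2/2 the drift is -x, so x(t) = x0 * exp(-t)
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: x, x0=1.0, D=0.0, T=1.0, dt=1e-4, rng=rng)
```

The scheme is first order in $dt$, which is sufficient for the qualitative checks of the results below.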
If the potential $V(x)$ grows sufficiently fast with $|x|$, the process admits the Boltzmann equilibrium state \begin{equation} P_{\rm st}(x)=\frac{1}{Z}e^{-V(x)/D}\,, \label{boltzmann} \end{equation} where $Z$ is the normalization constant. Here we assume that $V(x)$ is sufficiently confining such that $P_{\rm st}(x)$ is normalizable. In particular, we focus on the class of potentials $V(x)$ such that $V(x)\approx \alpha |x|^p$ for large $|x|$, where $\alpha>0$ and $p>0$ are fixed constants. We investigate the distribution of the time $t_{\rm m}$ at which the position $x(\tau)$ of the particle reaches its maximal value before time $T$. We assume that at the initial time $\tau=0$ the process has already reached the equilibrium state, meaning that the initial position $x_0=x(0)$ is drawn from the PDF $P_{\rm st}(x)$ in Eq.~\eqref{boltzmann}. We will first identify two cases ($p=1$ and $p=2$) in which the distribution of $t_{\rm m}$ can be exactly computed. Then, we will show that this distribution $P(t_{\rm m}|T)$ becomes universal for any $p> 0$ at late times. We use a path-decomposition technique to compute analytically the distribution of the time $t_{\rm m}$ of the maximum. Doing so, we first obtain the joint distribution $P(t_{\rm m},M|T)$ of $t_{\rm m}$ and of the maximum $M=x(t_{\rm m})$. Then, integrating over $M$, we find $P(t_{\rm m}|T)$. This path-decomposition approach is similar to the one adopted in Refs.~\cite{MRK08,MMS19,MMS20} and can be described as follows. Using the Markov property of the process, we can write the joint probability of $t_{\rm m}$ and $M$ as the product of the probabilities of two independent segments: (I) $[0,t_{\rm m}]$ and (II) $[t_{\rm m}, T]$ (see Fig.~\ref{fig:deco}). In the first interval (I), the process starts from position $x_0=x(0)$, which is random and distributed according to the steady state in Eq.~\eqref{boltzmann}, and it reaches the global maximum $M$ at time $t_{\rm m}$. 
In the second interval (II), the walker starts from position $M$ at time $t_{\rm m}$ and has to remain below this position $M$ up to time $T$. To compute the probability weight of the first interval, one has to solve the Fokker-Planck equation of this process with absorbing boundary condition at $x=M$ (see details below). Moreover, one must also impose that the particle arrives exactly at $M$ at time $t_{\rm m}$. However, due to the continuous-time nature of the process, one cannot constrain the trajectory to arrive at the absorbing boundary at a given time. Indeed, if the process arrives exactly at $M$ at time $t_{\rm m}$, it will go above position $x=M$ infinitely many times in any time interval $[t_{\rm m}-\delta,t_{\rm m}]$ with $\delta>0$ \cite{Feller}. In other words, one cannot satisfy $x(\tau)<M$ for $\tau<t_{\rm m}$ while imposing $x(t_{\rm m})=M$. A possible solution to this issue is to introduce a cutoff $\epsilon>0$ and to impose that at time $t_{\rm m}$ the process reaches position $x(t_{\rm m})=M-\epsilon$ (see Fig.~\ref{fig:deco}). In this way, one can compute $P(t_{\rm m}|T)$ for fixed $\epsilon$ and then take the limit $\epsilon\to 0$ at the very end of the computation. This approach was, for instance, used in Refs.~\cite{MRK08,RS11}. Let us first consider the interval $[0,t_{\rm m}]$. It is useful to define the constrained propagator $G^{M}(x,t|x_0)$ as the probability that the process goes from position $x_0$ at time $\tau=0$ to position $x$ at time $t$, while always remaining below position $M$. The probability weight $P_{\rm I}$ of the first interval (I) is $G^M(M-\epsilon,t_{\rm m}|x_0)$. 
The constrained propagator satisfies the Fokker-Planck equation \cite{Feller} \begin{equation} \partial_t G^M(x,t|x_0)=D\partial_x^2 G^M(x,t|x_0)+\partial_x\left[V'(x) ~G^M(x,t|x_0)\right]\,, \label{forward_FP} \end{equation} valid for $x\in (-\infty,M]$ with initial condition \begin{equation} G^M(x,t=0|x_0)=\delta(x-x_0)\,.\label{initial_condition_fw} \end{equation} The first boundary condition is \begin{equation} G^M(M,t|x_0)=0\,, \label{absorbing_condition_fw} \end{equation} corresponding to an absorbing wall at $x=M$. This boundary condition selects only those trajectories that remain below position $M$. The second boundary condition is \begin{equation} \lim_{x\to-\infty}G^M(x,t|x_0)=0\,, \label{boundary_condition_fw} \end{equation} since the probability to find the particle infinitely far from its starting position after a finite amount of time vanishes. In the second interval $[t_{\rm m},T]$, the process starts from position $M-\epsilon$ and remains below position $M$ up to time $T$. The corresponding probability weight can be expressed in terms of the survival probability $Q^M(x,t)$, i.e., the probability that the process starts from $x$ and remains below position $M$ up to time $t$. The weight $P_{\rm II}$ of the second interval can be written as $Q^M(M-\epsilon,T-t_{\rm m})$. The survival probability satisfies the backward Fokker-Planck equation \cite{Feller} \begin{equation} \partial_t Q^M(x,t)=D\partial^2_x Q^M(x,t)-V'(x)\partial_x Q^M(x,t)\,, \label{backward_FP} \end{equation} with initial condition (for $x<M$) \begin{equation} Q^M(x,t=0)=1\,. \label{initial_condition_bw} \end{equation} The boundary conditions are \begin{equation} Q^M(M,t)=0\,, \label{absorbing_condition_bw} \end{equation} meaning that the particle at $x=M$ is immediately absorbed, and \begin{equation} \lim_{x\to-\infty}Q^M(x,t)=1\,, \label{boundary_condition_bw} \end{equation} since a particle starting infinitely far away from the absorbing wall will never be absorbed in a finite time.
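As a sanity check of this survival-probability setup, in the free case $V=0$ the solution of Eqs.~\eqref{backward_FP}-\eqref{boundary_condition_bw} is the classical Brownian result $Q^M(x,t)=\operatorname{erf}\left((M-x)/\sqrt{4Dt}\right)$, which can be compared against a direct Monte Carlo estimate (illustrative parameters; discrete-time monitoring of the barrier slightly overestimates survival):

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(5)
D, M, x0, t, dt = 1.0, 0.0, -1.0, 1.0, 1e-3
n_steps, n_paths = int(t / dt), 6000

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(n_paths)
    alive &= x < M          # a path is killed once it touches the level M

q_mc = alive.mean()
q_exact = erf((M - x0) / np.sqrt(4 * D * t))   # free-case survival probability
```

The Monte Carlo estimate matches the exact value to within the expected discretization and sampling errors.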
Then, the joint distribution of $M$ and $t_{\rm m}$ can be obtained as the product of the probability weights of the first ($[0,t_{\rm m}]$) and of the second ($[t_{\rm m} ,T]$) intervals. Since the starting position $x_0$ is also random, one also has to integrate over $x_0$ with the correct probability weight given in Eq.~\eqref{boltzmann}. Therefore, for a fixed value of $\epsilon$, we get \begin{equation} P(t_{\rm m},M|T,\epsilon)=\mathcal{N}(\epsilon) \int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)G^M(M-\epsilon,t_{\rm m}|x_0)Q^M(M-\epsilon,T-t_{\rm m})\,, \end{equation} where $\mathcal{N}(\epsilon)$ is a normalization constant, i.e., $\mathcal{N}(\epsilon)$ is chosen to satisfy \begin{equation} \int_{0}^{T}dt_{\rm m}~\int_{-\infty}^{\infty}dM~P(t_{\rm m},M|T,\epsilon)=1\,. \end{equation} This constant $\mathcal{N}(\epsilon)$ could, in principle, depend on the total time $T$, but we will show a posteriori that it does not. Integrating over $M$, one finds \begin{equation} P(t_{\rm m}|T,\epsilon)=\mathcal{N}(\epsilon) \int_{-\infty}^{\infty}dM~\int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)G^M(M-\epsilon,t_{\rm m}|x_0)Q^M(M-\epsilon,T-t_{\rm m})\,. \end{equation} Finally, taking the limit $\epsilon\to 0$, we obtain \begin{equation} P(t_{\rm m}|T)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon) \int_{-\infty}^{\infty}dM~\int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)G^M(M-\epsilon,t_{\rm m}|x_0)Q^M(M-\epsilon,T-t_{\rm m})\right]\,. \label{Ptm_integral} \end{equation} For a given potential $V(x)$, one first needs to compute the constrained propagator $G^M(x,t|x_0)$ and the survival probability $Q^M(x,t)$. Then, the distribution of $t_{\rm m}$ can be obtained using Eq.~\eqref{Ptm_integral}. As shown in the next sections, this can be done in the cases $V(x)=\alpha |x|$ (corresponding to $p=1$) and $V(x)=\alpha x^2$ (corresponding to $p=2$).
In general, it is easier to compute the propagator $G^M(x,t|x_0)$ and the survival probability $Q^M(x,t)$ in Laplace space (with respect to the time $t$). Therefore, it is useful to express the relation in Eq.~\eqref{Ptm_integral} in terms of the Laplace transforms of these quantities. To do this, we introduce the variables $t_1=t_{\rm m}$, corresponding to the time of the maximum, and $t_2=T-t_{\rm m}$, corresponding to the time after the maximum. Considering the double Laplace transform of Eq.~\eqref{Ptm_integral} with Laplace variables $s_1$ and $s_2$, corresponding to $t_1$ and $t_2$ respectively, we obtain \begin{eqnarray} \label{Ptm_integral_LT} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}P(t_{\rm m}=t_1|T=t_1+t_2)\\ &=&\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon) \int_{-\infty}^{\infty}dM~\int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)\tilde{G}^M(M-\epsilon,s_1|x_0)\tilde{Q}^M(M-\epsilon,s_2)\right]\,,\nonumber \end{eqnarray} where we have defined \begin{equation} \tilde{G}^M(x,s|x_0)=\int_{0}^{\infty}dt~e^{-st}G^M(x,t|x_0)\,, \end{equation} and \begin{equation} \tilde{Q}^M(x,s)=\int_{0}^{\infty}dt~e^{-st}Q^M(x,t)\,. \end{equation} In the next sections, we derive an exact expression for $P(t_{\rm m}|T)$ in the cases $p=1$ and $p=2$. \subsection{The case $p=1$} \label{sec:p1} We first consider the case $p=1$, corresponding to the potential $V(x)=\alpha |x|$. The associated equilibrium steady state is \begin{equation} P_{\rm st}(x)=\frac{\alpha}{2D}e^{-\alpha |x|/D}\,. \label{stat_p1} \end{equation} We start by computing the forward propagator for this process.
Setting $V(x)=\alpha |x|$ in Eq.~\eqref{forward_FP}, we obtain \begin{equation} \partial_t G^M(x,t|x_0)=D\partial_x^2 G^M(x,t|x_0)+2\alpha\delta(x)~G^M(x,t|x_0)+\alpha \operatorname{sign}(x)~\partial_x G^M(x,t|x_0)\,. \label{forward_FP_p1} \end{equation} Taking a Laplace transform of this equation with respect to $t$ and using the initial condition in Eq.~\eqref{initial_condition_fw}, we find that $\tilde{G}^M(x,s|x_0)$ satisfies the equation \begin{equation} s\tilde{G}^M(x,s|x_0)-\delta(x-x_0)=D\partial_x^2 \tilde{G}^M(x,s|x_0)+2\alpha\delta(x)~\tilde{G}^M(x,s|x_0)+\alpha \operatorname{sign}(x)~\partial_x \tilde{G}^M(x,s|x_0)\,. \label{forward_FP_p1_LT} \end{equation} The boundary conditions in Eqs.~\eqref{absorbing_condition_fw} and \eqref{boundary_condition_fw} become \begin{equation} \tilde{G}^M(M,s|x_0)=0\,, \label{absorbing_condition_fw_LT} \end{equation} and \begin{equation} \lim_{x\to-\infty}\tilde{G}^M(x,s|x_0)=0\,. \label{boundary_condition_fw_LT} \end{equation} Solving the differential equation \eqref{forward_FP_p1_LT} (see Appendix \ref{app:G_p1}), we obtain to leading order for small $\epsilon$ \begin{equation} \tilde{G}^M(M-\epsilon,s|x_0)\approx \begin{cases} \dfrac{\epsilon}{D}e^{(\alpha-k)(M-x_0)/(2D)}\quad &\text{ if }x_0<M<0\,,\\ \\ \dfrac{\epsilon}{D}\dfrac{(k-\alpha)e^{k x_0 /D}+\alpha}{(k-\alpha)e^{k M /D}+\alpha}e^{(-\alpha+k)(M-x_0)/(2D)}\quad &\text{ if }0<x_0<M\,,\\ \\ \dfrac{k \epsilon}{D}\dfrac{e^{(k-\alpha) x_0 /(2D)}e^{(-k-\alpha) M /(2D)}}{k-\alpha+\alpha e^{-k M /D}}\quad &\text{ if }x_0<0 \text{ and }M>0\,,\\ \end{cases} \label{G_p1_solution} \end{equation} where $k=\sqrt{\alpha^2+4sD}$. We next focus on the survival probability $Q^M(x,t)$. For the potential $V(x)=\alpha |x|$, the differential equation \eqref{backward_FP} becomes \begin{equation} \partial_t Q^M(x,t)=D\partial^2_x Q^M(x,t)-\alpha \operatorname{sign}(x)\partial_x Q^M(x,t)\,.
\label{backward_FP_LT_p1} \end{equation} Taking a Laplace transform with respect to $t$ on both sides and using the initial condition in Eq.~\eqref{initial_condition_bw}, we obtain that $\tilde{Q}^M(x,s)$ satisfies the equation \begin{equation} s\tilde{Q}^M(x,s)-1=D\partial^2_x \tilde{Q}^M(x,s)-\alpha \operatorname{sign}(x)\partial_x\tilde{Q}^M(x,s)\,, \label{backward_FP_LT} \end{equation} with boundary conditions (see Eqs.~\eqref{absorbing_condition_bw} and \eqref{boundary_condition_bw}) \begin{equation} \tilde{Q}^M(M,s)=0\,, \label{absorbing_condition_bw_LT} \end{equation} and \begin{equation} \lim_{x\to-\infty}\tilde{Q}^M(x,s)=\frac{1}{s}\,. \label{boundary_condition_bw_LT} \end{equation} Solving Eq.~\eqref{backward_FP_LT}, we find that to leading order in $\epsilon$ (see Appendix \ref{app:G_p1}) \begin{equation} \tilde{Q}^M(M-\epsilon,s)\approx \begin{cases} \dfrac{\epsilon}{s}\dfrac{k-\alpha}{2D}\dfrac{(k+\alpha)e^{kM/D}-\alpha}{(k-\alpha)e^{kM/D}+\alpha}\quad &\text{ if }M>0\,,\\ \\ \dfrac{\epsilon}{s}\dfrac{k-\alpha}{2D}\quad &\text{ if }M<0\,.\\ \end{cases} \label{Q_p1_solution} \end{equation} We now have all the ingredients to compute the distribution of $t_{\rm m}$. Substituting the expressions of $P_{\rm st}(x_0)$, $\tilde{G}^M(M-\epsilon,s|x_0)$, and $\tilde{Q}^M(M-\epsilon,s)$, respectively given in Eqs.~\eqref{stat_p1}, \eqref{G_p1_solution}, and \eqref{Q_p1_solution}, into Eq.~\eqref{Ptm_integral_LT}, we obtain \begin{eqnarray} \nonumber &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{k_2-\alpha}{4D \alpha s_2}\left\{ \int_{-\infty}^{0}dM~\int_{-\infty}^{M}dx_0~e^{\alpha x_0/D}\right. \\ &\times &\left.
e^{(\alpha-k_1)(M-x_0)/(2D)}+\int_{0}^{\infty}dM~\int_{0}^{M}dx_0~e^{-\alpha x_0/D}\frac{(k_1-\alpha)e^{k_1 x_0 /D}+\alpha}{(k_1-\alpha)e^{k_1 M /D}+\alpha}\right.\left.e^{(-\alpha +k_1)(M-x_0)/(2D)}\frac{(k_2+\alpha)e^{k_2M/D}-\alpha}{(k_2-\alpha)e^{k_2 M/D}+\alpha}\right. \nonumber \\&+&\left.\int_{0}^{\infty}dM~\int_{-\infty}^{0}dx_0~e^{\alpha x_0/D}k_1\frac{e^{(k_1-\alpha) x_0 /(2D)}e^{(-k_1-\alpha) M /(2D)}}{k_1-\alpha+\alpha e^{-k_1 M /D}}\right.\left.\frac{(k_2+\alpha)e^{k_2M/D}-\alpha}{(k_2-\alpha)e^{k_2 M/D}+\alpha} \right\}\,,\label{Ptm_integral_LT_p1} \end{eqnarray} where we have defined $k_1=\sqrt{\alpha^2+4Ds_1}$ and $k_2=\sqrt{\alpha^2+4Ds_2}$. The normalization constant $\mathcal{N}(\epsilon)$ can then be computed by setting $s_1=s_2=s$ on both sides of Eq.~\eqref{Ptm_integral_LT_p1}. Indeed, the left-hand side becomes \begin{equation} \int_{0}^{\infty}dt_1~\int_{0}^{\infty}dt_2~e^{-s (t_1+t_2)}~P(t_{\rm m}=t_1|T=t_1+t_2)=\int_{0}^{\infty}dT~e^{-s T}~\int_{0}^{T}dt_{\rm m}~P(t_{\rm m}|T)=\int_{0}^{\infty}dT~e^{-s T}=\frac1s\,, \label{lhs} \end{equation} where we have used the fact that $P(t_{\rm m}|T)$ is normalized to unity for $0\leq t_{\rm m} \leq T$. Setting $s_1=s_2=s$ on the right-hand side of Eq.~\eqref{Ptm_integral_LT_p1} and computing the integrals over $x_0$ and $M$ with Mathematica, we find \begin{equation} \lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{D}{\alpha^2 s}=\frac1s\,. \end{equation} Thus, we get \begin{equation} \lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]=\frac{\alpha^2 }{D}\,. 
\end{equation} Substituting this expression for the normalization constant in Eq.~\eqref{Ptm_integral_LT_p1} and computing the integrals over $x_0$, we obtain \begin{eqnarray}\nonumber &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)\\ &=&\frac{2\alpha}{(k_1+\alpha)(k_2+\alpha)}\left[\frac{D}{\alpha}+\int_{0}^{\infty}dM~e^{-\alpha M/D}\frac{(k_1+\alpha-\alpha e^{-k_1 M/D})(k_2+\alpha-\alpha e^{-k_2 M/D})}{(k_1-\alpha+\alpha e^{-k_1 M/D})(k_2-\alpha+\alpha e^{-k_2 M/D})}\right]\,, \label{Ptm_integral_LT_p1_2} \end{eqnarray} where we recall that $k_1=\sqrt{\alpha^2+4Ds_1}$ and $k_2=\sqrt{\alpha^2+4Ds_2}$. Notably, from Eq.~(\ref{Ptm_integral_LT_p1_2}) we can already observe that the distribution $P(t_{\rm m}|T)$ is symmetric around $t_{\rm m}=T/2$, i.e., $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. Indeed, it is clear from Eq.~(\ref{Ptm_integral_LT_p1_2}) that the Laplace transform of $P(t_{\rm m}=t_1|T=t_1+t_2)$ is symmetric under the exchange of $s_1$ and $s_2$. This symmetry is a signature of the equilibrium nature of the process. Interestingly, the PDF $P(t_{\rm m}|T)$ can be rewritten in the scaling form \begin{equation} P(t_{\rm m}|T)=\frac{\alpha^2}{4D}F_1\left(\frac{\alpha^2}{4D}t_{\rm m},\frac{\alpha^2}{4D}(T-t_{\rm m})\right)\,, \label{scaling_form_F1} \end{equation} where $F_1(T_1,T_2)$ is the scaling function and $\xi=4D/\alpha^2$ is the natural timescale of the process.
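The normalization can be cross-checked with standard numerical quadrature: setting $s_1=s_2=s$ in Eq.~\eqref{Ptm_integral_LT_p1_2} must reproduce $1/s$. A quick script of ours, with $\alpha=D=1$ and one probe value of $s$:

```python
import numpy as np
from scipy.integrate import quad

alpha = D = 1.0
s = 0.75                       # probe value for s1 = s2 = s
k = np.sqrt(alpha**2 + 4 * D * s)

def integrand(M):
    """Integrand of Eq. (Ptm_integral_LT_p1_2) at s1 = s2, so k1 = k2 = k."""
    e = np.exp(-k * M / D)
    return np.exp(-alpha * M / D) * ((k + alpha - alpha * e) / (k - alpha + alpha * e)) ** 2

integral, _ = quad(integrand, 0, np.inf)
lhs = 2 * alpha / (k + alpha) ** 2 * (D / alpha + integral)
gap = abs(lhs - 1 / s)         # vanishes if the distribution is normalized
```

For these values one finds $k=2$, the $M$-integral equals $5$, and the bracket evaluates to $4/3=1/s$ exactly.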
Plugging this expression into Eq.~(\ref{Ptm_integral_LT_p1_2}), we find that the double Laplace transform of $F_1(T_1,T_2)$ is given by (see also Eq.~\eqref{LT_scaling_p1_summary}) \begin{eqnarray}\label{eq:LT_F_V} \nonumber \tilde{F}_1(s_1,s_2)&=& \frac{1}{2(1+\sqrt{1+s_1})(1+\sqrt{1+s_2})}\\ &\times & \Bigg[1+\int_{0}^{\infty}dz\,e^{-z}\frac{\left(\sqrt{1+s_1}+1-e^{-\sqrt{1+s_1}z}\right)\left(\sqrt{1+s_2}+1-e^{-\sqrt{1+s_2}z}\right)}{\left(\sqrt{1+s_1}-1+e^{-\sqrt{1+s_1}z}\right)\left(\sqrt{1+s_2}-1+e^{-\sqrt{1+s_2}z}\right)}\Bigg]\,, \end{eqnarray} where we have defined \begin{equation} \tilde{F}_1(s_1,s_2)=\int_{0}^{\infty}dT_1 e^{-s_1 T_1} \int_{0}^{\infty}dT_2 e^{-s_2 T_2} F_1(T_1,T_2)\,. \end{equation} This scaling function manifestly satisfies the symmetry $F_1(T_1,T_2)=F_1(T_2,T_1)$, corresponding to $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. As a consequence of this symmetry, the first moment of $t_{\rm m}$ is given by \begin{equation} \langle t_{\rm m} \rangle=\frac{T}{2}\,. \end{equation} \subsubsection{Asymptotic behaviors} Even though it is challenging to invert the double Laplace transform in Eq.~\eqref{eq:LT_F_V} exactly, this expression can be used to extract the asymptotic behaviors of $P(t_{\rm m}|T)$ in the limits of short times ($T\ll \xi$) and late times ($T\gg \xi$). Here $\xi=4D/\alpha^2$ is the autocorrelation time of the process. The short-time limit ($T\ll\xi$) corresponds in Laplace space to the limit $s_1,s_2\to \infty$. Taking this limit in Eq.~\eqref{eq:LT_F_V}, we obtain to leading order \begin{equation} \tilde{F}_1(s_1,s_2)\approx \frac{1}{2\sqrt{s_1s_2}}\left(1+\int_{0}^{\infty}dz~e^{-z}\right)=\frac{1}{\sqrt{s_1s_2}}\,. \end{equation} This Laplace transform can now be inverted using the inversion formula \begin{equation} \mathcal{L}^{-1}_{s\to t}\left[\frac{1}{s^{\nu}}\right]=\frac{1}{\Gamma(\nu)~t^{1-\nu}}\,, \label{inv_lapl_sqrt} \end{equation} where $\Gamma(\nu)$ is the Gamma function.
Using this formula with $\nu=1/2$, we find that for small $T_1$ and $T_2$ \begin{equation} F_1(T_1,T_2)\approx \frac{1}{\pi\sqrt{T_1 T_2}}\,. \end{equation} Using Eq.~\eqref{scaling_form_F1}, we find that this corresponds to \begin{equation} P(t_{\rm m}|T)\approx \frac{1}{\pi\sqrt{t_{\rm m} (T-t_{\rm m})}}\,, \end{equation} which is valid for $T\ll 4D/\alpha^2$. This expression coincides with the well-known arcsine law of L\'evy \cite{Levy}, describing the distribution of the time of the maximum for a free Brownian motion. Indeed, for $T\ll 4D/\alpha^2$, the process does not have enough time to feel the confining potential. We next focus on the late-time limit $T\gg 4D/\alpha^2$. In this limit, three different regimes of the distribution $P(t_{\rm m}|T)$ can be investigated: the central ``bulk'' regime, corresponding to $t_{\rm m},T\to \infty$ with $t_{\rm m}/T$ fixed, the left ``edge'' regime, corresponding to $T\to \infty$ with $t_{\rm m}\sim \mathcal{O} (1)$, and the right edge regime, where $T\to \infty$ with $T-t_{\rm m}\sim \mathcal{O} (1)$. Note that, thanks to the symmetry $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$, it is sufficient to study the right edge regime. To investigate the right-edge regime ($t_{\rm m}\to T$), we expand Eq.~\eqref{eq:LT_F_V} to leading order for small $s_1$ while keeping $s_2\sim\mathcal{O}(1)$, yielding \begin{eqnarray} \tilde{F}_1(s_1,s_2)\approx \frac{1}{2(1+\sqrt{1+s_2})}\int_{0}^{\infty}dz\,e^{-z}\frac{2-e^{-z}}{s_1+2e^{-z}}~\frac{\left[\sqrt{1+s_2}+1-e^{-\sqrt{1+s_2}z}\right]}{\left[\sqrt{1+s_2}-1+e^{-\sqrt{1+s_2}z}\right]}\,. \end{eqnarray} Identifying the pole $s_1=-2 e^{-z}$ and inverting the Laplace transform with respect to $T_1$, one finds \begin{eqnarray} \int_{0}^{\infty}dT_2~F_1(T_1,T_2)e^{-s_2 T_2}\approx \frac{1}{2(1+\sqrt{1+s_2})}\int_{0}^{\infty}dz\,\exp\left[-2T_1e^{-z}\right]e^{-z}(2-e^{-z})~\frac{\left[\sqrt{1+s_2}+1-e^{-\sqrt{1+s_2}z}\right]}{\left[\sqrt{1+s_2}-1+e^{-\sqrt{1+s_2}z}\right]}\,.
\end{eqnarray} For large $T_1$, the integral on the right-hand side is dominated by large values of $z$ and can thus be approximated as \begin{eqnarray} \int_{0}^{\infty}dT_2~F_1(T_1,T_2)e^{-s_2 T_2}\approx \frac{1}{\sqrt{1+s_2}-1}\int_{0}^{\infty}dz\,e^{-z-2T_1\,e^{-z}}\,. \end{eqnarray} Performing the change of variable $z\to u=2T_1 e^{-z}$, we obtain \begin{eqnarray} \int_{0}^{\infty}dT_2~F_1(T_1,T_2)e^{-s_2 T_2}\approx \frac12 \frac{1+\sqrt{1+s_2}}{s_2}\frac{1}{T_1}\int_{0}^{2T_1}du\,e^{-u}\,, \end{eqnarray} where we have used the relation $(\sqrt{1+s}+1)(\sqrt{1+s}-1)=s$. When $T_1$ is large, we can replace the upper limit of integration with infinity, yielding \begin{eqnarray} \int_{0}^{\infty}dT_2~F_1(T_1,T_2)e^{-s_2 T_2}\approx \frac12 \frac{1+\sqrt{1+s_2}}{s_2}\frac{1}{T}\,, \end{eqnarray} where we have approximated $T_1\approx T$. The Laplace transform can be inverted by using the relation (see Appendix \ref{app:Laplace inversion}) \begin{equation} \mathcal{L}^{-1}_{s\to t}\left[\frac{1+\sqrt{1+s}}{s}\right]=1+\operatorname{erf}(\sqrt{t})+\frac{1}{\sqrt{\pi t}}e^{-t}\,, \label{G_LT} \end{equation} where $\operatorname{erf}(z)=(2/\sqrt{\pi})\int_{0}^{z}du~e^{-u^2}$. Therefore, we find that in the limit $T_1\to \infty$ with $T_2\sim O(1)$, \begin{equation} F_1(T_1,T_2)\approx \frac{1}{T}G(T_2)\,, \end{equation} where \begin{equation} G(z)=\frac12 \left[1+\operatorname{erf}(\sqrt{z})+\frac{1}{\sqrt{\pi z}}e^{-z}\right]\,. \label{G} \end{equation} This function $G(z)$ is shown in Fig.~\ref{fig:Gz} and has asymptotic behaviors \begin{equation} G(z)\approx \begin{cases} 1/(2\sqrt{\pi z})\quad &\text{ for } z\to 0\,,\\ \\ 1+e^{-z}/(4\sqrt{\pi}z^{3/2})\quad &\text{ for } z\to \infty\,.\\ \end{cases} \label{G_asym} \end{equation} Thus, for $t_{\rm m}\to T$, we find that $P(t_{\rm m}|T)$ diverges as $1/\sqrt{T-t_{\rm m}}$. On the other hand, for $T-t_{\rm m}\gg 4D/\alpha^2$ we find $P(t_{\rm m}|T)\approx 1/T$, smoothly connecting to the bulk regime.
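The asymptotic behaviors quoted in Eq.~\eqref{G_asym} are straightforward to confirm numerically; a minimal sketch (the helper name `G` is ours):

```python
import numpy as np
from scipy.special import erf

def G(z):
    # G(z) = [1 + erf(sqrt(z)) + exp(-z)/sqrt(pi z)] / 2
    return 0.5 * (1.0 + erf(np.sqrt(z)) + np.exp(-z) / np.sqrt(np.pi * z))

# small-z divergence: G(z) ~ 1/(2 sqrt(pi z))
z = 1e-6
assert abs(G(z) / (0.5 / np.sqrt(np.pi * z)) - 1.0) < 1e-2
# large-z saturation: G(z) -> 1
assert abs(G(20.0) - 1.0) < 1e-6
```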
Similarly, using the symmetry of $F_1(T_1,T_2)$ we also find that when $T_2\to \infty$ with $T_1\sim O(1)$ \begin{equation} F_1(T_1,T_2)\approx \frac{1}{T}G(T_1)\,. \end{equation} \begin{figure*}[t]\includegraphics[scale=0.7]{g.pdf} \caption{\label{fig:Gz} Log-log plot of the function $G(z)$, given in Eq.~\eqref{G}. For $z\ll 1$, the function diverges as $G(z)\approx1/(2\sqrt{\pi z})$ while $G(z)$ tends to the limit value $1$ as $G(z)\approx 1+e^{-z}/(4\sqrt{\pi}z^{3/2})$ for large $z$.} \end{figure*} In Laplace space the bulk regime is obtained in the limit $s_1,s_2\to 0$, with $s_1/s_2$ fixed. Taking this limit in Eq.~\eqref{eq:LT_F_V}, we find \begin{equation} \tilde{F}_1(s_1,s_2)\approx \frac{1}{2}\int_{0}^{\infty}dz\,e^{-z}\frac{\left(2-e^{-z}\right)^2}{\left(s_1+2e^{-z}\right)\left(s_2+2e^{-z}\right)}\,. \end{equation} Inverting the double Laplace transform, we get \begin{equation} F_1(T_1,T_2)\approx \frac{1}{2}\int_{0}^{\infty}dz\,e^{-z}\left(2-e^{-z}\right)^2 \exp[-2(T_1+T_2)e^{-z}]\,. \end{equation} Finally, performing the change of variables $z\to u=2(T_1+T_2)e^{-z}$, we obtain \begin{equation} F_1(T_1,T_2)\approx \frac{1}{4(T_1+T_2)}\int_{0}^{2(T_1+T_2)}du\,\left(2-\frac{u}{2(T_1+T_2)}\right)^2 e^{-u}=\frac{1}{T_1+T_2}+\mathcal{O}\left(\frac{1}{(T_1+T_2)^2}\right)\,. \end{equation} Thus, in the bulk regime the distribution $P(t_{\rm m}|T)$ can be approximated by the flat measure \begin{equation} P(t_{\rm m}|T)\approx \frac{1}{T}\,. \label{flat_p1_1} \end{equation} This uniform PDF for $t_{\rm m}$ is the distribution that one would obtain if the positions of the process at different times were independent random variables. Indeed, since the observation time $T$ is much larger than the correlation time $\xi=4D/\alpha^2$, these variables are approximately independent (this argument is made precise in Section \ref{sec:univ}).
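The flat bulk value in Eq.~\eqref{flat_p1_1} can be cross-checked by evaluating the intermediate bulk integral numerically; a short sketch with $S=T_1+T_2$ (the function name is ours):

```python
import numpy as np
from scipy.integrate import quad

def F1_bulk(S):
    # intermediate bulk integral, S = T1 + T2:
    # (1/2) \int_0^infty dz e^{-z} (2 - e^{-z})^2 exp(-2 S e^{-z})
    val, _ = quad(lambda z: 0.5 * np.exp(-z) * (2 - np.exp(-z))**2
                  * np.exp(-2 * S * np.exp(-z)), 0, 50)
    return val

# for large S the integral approaches 1/S, with O(1/S^2) corrections
S = 100.0
assert abs(F1_bulk(S) * S - 1.0) < 0.02
```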
However, note that this result in Eq.~\eqref{flat_p1_1} does not apply in the edge regimes for $t_{\rm m}\to 0$ and $t_{\rm m}\to T$. To summarize, we have shown that in the late-time limit, the distribution of $t_{\rm m}$ approaches the form \begin{equation} P(t_{\rm m}|T)\approx \begin{cases}\frac1T G\left(\frac{\alpha^2}{4D}t_{\rm m}\right)\quad &\text{ for }\quad t_{\rm m}\lesssim 4D/\alpha^2\,,\\ \\ \frac1T \quad &\text{ for }\quad 4D/\alpha^2\ll t_{\rm m} \ll T-4D/\alpha^2\,,\\ \\ \frac1T G\left[\frac{\alpha^2}{4D}(T-t_{\rm m})\right]\quad &\text{ for }\quad t_{\rm m}\gtrsim T- 4D/\alpha^2\,.\\ \end{cases} \label{PT_asymp_p1} \end{equation} Note that this expression in Eq.~\eqref{PT_asymp_p1} is asymptotically normalized to one for large $T$. \subsection{The case $p=2$: the Ornstein-Uhlenbeck process} \label{sec:p2} This section focuses on BM in a harmonic potential $V(x)=\alpha x^2$, corresponding to the Ornstein-Uhlenbeck process. The equilibrium state for this process reads \begin{equation} P_{\rm st}(x_0)=\sqrt{\frac{\alpha}{\pi D}}\exp\left(-\frac{\alpha}{D}x_0^2\right)\,. \label{stat_p2} \end{equation} As before, we first need to compute the constrained propagator and the survival probability. The propagator satisfies the forward Fokker-Planck equation \eqref{forward_FP}, which in this case reads \begin{equation} \partial_t G^M(x,t|x_0)=D\partial_x^2 G^M(x,t|x_0)+2\alpha G^M(x,t|x_0)+2\alpha x\partial_x G^M(x,t|x_0)\,. \end{equation} Taking a Laplace transform with respect to $t$ and using the initial condition in Eq. (\ref{initial_condition_fw}), we obtain \begin{equation}\label{eq:FP_LT_ou} D\partial_x^2 \tilde{G}^M(x,s|x_0)+2\alpha x\partial_x \tilde{G}^M(x,s|x_0)+(2\alpha-s) \tilde{G}^M(x,s|x_0)+\delta(x-x_0)=0\,. \end{equation} Eq.
(\ref{eq:FP_LT_ou}) can be exactly solved (see Appendix \ref{app:G_p2}) and one obtains to leading order in $\epsilon$ \begin{eqnarray}\label{eq:G_expanded_ou} \tilde{G}^M(M-\epsilon,s|x_0)\approx \frac{\epsilon}{D}e^{-(M^2-x_0^2)\alpha/(2D)}~\frac{D_{-s/(2\alpha)}\left(-\sqrt{2\alpha/D}~x_0\right)}{D_{-s/(2\alpha)}\left(-\sqrt{2\alpha/D}~M\right)}\,, \end{eqnarray} where $D_p(z)$ is the parabolic cylinder function. The backward Fokker-Planck equation for the survival probability, given in Eq.~\eqref{backward_FP} for a generic potential, in this case reads \begin{equation} \partial_t Q^M(x,t)=D\partial^2_x Q^M(x,t)-2\alpha x\partial_x Q^M(x,t)\,. \label{backward_FP_p2} \end{equation} Taking a Laplace transform and using the initial condition in Eq.~\eqref{initial_condition_bw}, we find \begin{equation} s \tilde{Q}^M(x,s)-1=D\partial^2_x \tilde{Q}^M(x,s)-2\alpha x\partial_x \tilde{Q}^M(x,s)\,, \label{backward_FP_LT_p2} \end{equation} with the boundary conditions in Eq.~\eqref{absorbing_condition_bw_LT} and \eqref{boundary_condition_bw_LT}. Solving this equation and imposing the boundary conditions (see Appendix \ref{app:G_p2}), we find, to leading order in $\epsilon$, \begin{equation}\label{eq:Q_ou_LT_expanded} \tilde{Q}^M(M-\epsilon,s)\approx \frac{\epsilon}{s}\left[\frac{2\alpha M}{D}+\sqrt{\frac{2\alpha}{D}}\frac{D_{1-s/(2\alpha)}\left(-\sqrt{\frac{2\alpha}{D}}~M\right)}{D_{-s/(2\alpha)}\left(-\sqrt{\frac{2\alpha}{D}}~M\right)}\right]\,.
\end{equation} Substituting the expressions for $P_{\rm st}(x_0)$, $\tilde{G}^M(M-\epsilon,s|x_0)$, and $\tilde{Q}^M(M-\epsilon,s)$, respectively given in Eqs.~\eqref{stat_p2}, \eqref{eq:G_expanded_ou}, and \eqref{eq:Q_ou_LT_expanded}, into the formula for $P(t_{\rm m}|T)$ in Eq.~\eqref{Ptm_integral_LT}, we get \begin{eqnarray} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\sqrt{\frac{2}{\pi}}\frac{1}{s_2}\frac{\alpha}{D^2}\int_{-\infty}^{\infty}dM~\int_{-\infty}^{M}dx_0~e^{-\alpha x_0^2/(2D)}\nonumber \\ &\times & e^{-M^2\alpha/(2D)}\frac{D_{-s_1/(2\alpha)}\left(-\sqrt{2\alpha/D}x_0\right)}{D_{-s_1/(2\alpha)}\left(-\sqrt{2\alpha/D}M\right)} ~\left[\sqrt{\frac{2\alpha}{D}}M+\frac{D_{1-s_2/(2\alpha)}\left(-\sqrt{\frac{2\alpha}{D}}M\right)}{D_{-s_2/(2\alpha)}\left(-\sqrt{\frac{2\alpha}{D}}M\right)}\right]\,. \end{eqnarray} To simplify this expression we first perform the change of variables $(x_0,M)\to (z=M\sqrt{2\alpha/D},w=x_0\sqrt{2\alpha/D})$, yielding \begin{eqnarray} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{1}{\sqrt{2\pi}s_2D}\nonumber \\ &\times &\int_{-\infty}^{\infty}dz~e^{-z^2/4}\left[z+\frac{D_{1-s_2/(2\alpha)}\left(-z\right)}{D_{-s_2/(2\alpha)}~\left(-z\right)}\right]\int_{-\infty}^{z}dw~e^{-w^2/4} \frac{D_{-s_1/(2\alpha)}\left(-w\right)}{D_{-s_1/(2\alpha)}\left(-z\right)} \,. 
\end{eqnarray} Moreover, the integral over $w$ can be computed using the following identity (see Appendix \ref{app:wronskian}) \begin{equation}\label{eq:relation2} \int_{-\infty}^{z}dw~e^{-w^2 /4}D_{-s}(-w)=\frac{1}{s}e^{-z^2/4}\left[z D_{-s}(-z)+D_{1-s}(-z)\right]\,, \end{equation} which yields \begin{eqnarray} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{\sqrt{2}\alpha}{\sqrt{\pi}Ds_1 s_2}\nonumber \\ &\times &\int_{-\infty}^{\infty}dz~e^{-z^2/2}\left[z+\frac{D_{1-s_1/(2\alpha)}\left(-z\right)}{D_{-s_1/(2\alpha)}\left(-z\right)}\right]\left[z +\frac{D_{1-s_2/(2\alpha)}(-z)}{D_{-s_2/(2\alpha)}(-z)}\right] \,. \end{eqnarray} This expression can be rewritten in a more compact form by using the identity \cite{Gradshteyn} \begin{equation}\label{eq:relation3} z + \frac{D_{1-s}(-z)}{D_{-s}(-z)}= s\frac{D_{-s-1}(-z)}{D_{-s}(-z)} \,, \end{equation} yielding \begin{eqnarray}\nonumber &&\int_{0}^{\infty}dt_1~\int_{0}^{\infty}dt_2~e^{-s_1t_1-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)\\ &=&\frac{\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]}{\sqrt{8\pi}D\alpha}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\frac{D_{-1-s_1/(2\alpha)}\left(-z\right)}{D_{-s_1/(2\alpha)}\left(-z\right)}~\frac{D_{-1-s_2/(2\alpha)}(-z)}{D_{-s_2/(2\alpha)}(-z)}\,. \label{N_p2} \end{eqnarray} Finally, to fix the constant $\mathcal{N}(\epsilon)$ we impose that $P(t_{\rm m}|T)$ must be normalized to unity. To do so, we set $s_1=s_2=s$ on both sides of Eq.~\eqref{N_p2}. As before (see Eq.~\eqref{lhs}) the left-hand side is simply equal to $1/s$, yielding \begin{eqnarray} \frac1s = \frac{A}{\sqrt{8\pi}\alpha}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\left[\frac{D_{-1-s/(2\alpha)}\left(-z\right)}{D_{-s/(2\alpha)}\left(-z\right)}\right]^2\,, \end{eqnarray} where $A=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]/D$ needs to be determined.
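The identity in Eq.~\eqref{eq:relation2} can be checked numerically with SciPy's parabolic cylinder functions; a minimal sketch (helper names are ours, test values arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import pbdv

def D(v, x):
    # parabolic cylinder function D_v(x)
    return pbdv(v, x)[0]

def rel_err(s, z):
    # relative mismatch between the two sides of the identity
    lhs, _ = quad(lambda w: np.exp(-w**2 / 4) * D(-s, -w), -np.inf, z)
    rhs = np.exp(-z**2 / 4) * (z * D(-s, -z) + D(1 - s, -z)) / s
    return abs(lhs - rhs) / abs(rhs)

for s, z in [(0.7, -0.5), (1.5, 0.7), (2.3, 1.2)]:
    assert rel_err(s, z) < 1e-5
```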
Introducing the dimensionless variable $q=s/(2\alpha)$, we find that $A$ satisfies \begin{eqnarray} \frac1q = \frac{A}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\left[\frac{D_{-1-q}\left(-z\right)}{D_{-q}\left(-z\right)}\right]^2\,. \end{eqnarray} Even though the integral on the right-hand side is hard to compute analytically, we verified numerically that \begin{equation}\label{eq:normalization_ou} \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dz\,e^{-z^2 /2}\left[\frac{D_{-1-q}(-z)}{D_{-q}(-z)}\right]^2=\frac1q\,, \end{equation} implying $A=1$ and thus \begin{equation} \label{mathcalN_1} \lim_{\epsilon\to 0}[\mathcal{N}(\epsilon)\epsilon^2]=D\,. \end{equation} Plugging the expression in Eq.~\eqref{mathcalN_1} into Eq.~\eqref{N_p2}, we find that the double Laplace transform of $P(t_{\rm m}|T)$ reads \begin{equation} \int_{0}^{\infty}dt_1~\int_{0}^{\infty}dt_2~e^{-s_1t_1-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\frac{1}{\sqrt{8\pi}\alpha}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\frac{D_{-1-s_1/(2\alpha)}\left(-z\right)}{D_{-s_1/(2\alpha)}\left(-z\right)}~\frac{D_{-1-s_2/(2\alpha)}(-z)}{D_{-s_2/(2\alpha)}(-z)}\,. \label{N_p2_final} \end{equation} Note that even though we have verified the validity of Eq.~(\ref{eq:normalization_ou}) numerically, a mathematical proof of this relation has eluded us so far and it remains an interesting exercise to prove this identity. Interestingly, the PDF $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ of the maximum can be rewritten in the scaling form \begin{equation} P(t_{\rm m}|T)=\alpha F_{\rm OU}(\alpha t_{\rm m},\alpha(T-t_{\rm m}))\,, \label{scaling_relation_OU} \end{equation} where $F_{\rm OU}(T_1,T_2)$ is the scaling function and $\xi=1/\alpha$ the natural timescale of the process.
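The numerical verification of Eq.~\eqref{eq:normalization_ou} mentioned above can be reproduced in a few lines with SciPy (a sketch; the Gaussian-weighted integral is truncated at $|z|=8$, where the integrand is negligible):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import pbdv

def D(v, x):
    # parabolic cylinder function D_v(x)
    return pbdv(v, x)[0]

def gaussian_ratio_integral(q):
    # (1/sqrt(2 pi)) \int dz e^{-z^2/2} [D_{-1-q}(-z)/D_{-q}(-z)]^2
    val, _ = quad(lambda z: np.exp(-z**2 / 2) * (D(-1 - q, -z) / D(-q, -z))**2,
                  -8, 8)
    return val / np.sqrt(2 * np.pi)

# the identity states that the integral equals 1/q
for q in (0.5, 1.0, 2.0):
    assert abs(q * gaussian_ratio_integral(q) - 1.0) < 1e-3
```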
Plugging this expression into Eq.~\eqref{N_p2_final}, we find that the scaling function $F_{\rm OU}(T_1,T_2)$ is given by \begin{equation} \int_{0}^{\infty}dT_1~\int_{0}^{\infty}dT_2~e^{-s_1 T_1-s_2 T_2}F_{\rm OU}(T_1,T_2)=\frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\frac{D_{-1-s_1/2}\left(-z\right)}{D_{-s_1/2}\left(-z\right)}~\frac{D_{-1-s_2/2}(-z)}{D_{-s_2/2}(-z)}\,. \label{scaling_OU} \end{equation} From this expression, we immediately find that the PDF $P(t_{\rm m}|T)$ is symmetric around the midpoint $t_{\rm m}=T/2$. This is a signature of the time-reversal symmetry of equilibrium processes. Consequently, the first moment of $t_{\rm m}$ is simply given by $\langle t_{\rm m}\rangle=T/2$. Moreover, it is interesting to notice that in this case the distribution of $t_{\rm m}$ is completely independent of the diffusion coefficient $D$. \subsubsection{Asymptotic behaviors} We next focus on the asymptotic behaviors of $P(t_{\rm m}|T)$ in the limit of small and large $T$. We expect these behaviors to be qualitatively similar to the ones derived for $p=1$ in Section \ref{sec:p1}. The double Laplace transform in Eq.~\eqref{scaling_OU} can be inverted in the short-time and late-time limits, corresponding to $T\ll \xi$ and $T\gg \xi$. Here the correlation time $\xi$ is given by $\xi=1/\alpha$. We first focus on the small $T$ behavior. This corresponds to the limit $s_1,s_2\to \infty $ in Eq.~\eqref{scaling_OU}. We start by performing the change of variable $z\to u=z\sqrt{2/s_1}$ in Eq.~\eqref{scaling_OU}, yielding \begin{equation} \int_{0}^{\infty}dT_1~\int_{0}^{\infty}dT_2~e^{-s_1 T_1-s_2 T_2}F_{\rm OU}(T_1,T_2)=\frac{\sqrt{s_1}}{4\sqrt{\pi}}\int_{-\infty}^{\infty}du~e^{-s_1 u^2/4}\frac{D_{-1-s_1/2}\left(-u\sqrt{s_1/2}\right)}{D_{-s_1/2}\left(-u\sqrt{s_1/2}\right)}~\frac{D_{-1-s_2/2}(-u\sqrt{s_1/2})}{D_{-s_2/2}(-u\sqrt{s_1/2})}\,. 
\end{equation} When $s_1$ and $s_2$ are both large, we use the approximation (see Appendix \ref{app:wronskian}) \begin{equation} \frac{D_{-(s+1)}(-\sqrt{s}u)}{D_{-s}(-\sqrt{s}u)}\approx\frac{1}{\sqrt{s}}\frac{u+\sqrt{u^2+4}}{2}\,, \label{relation_large_p} \end{equation} valid for large $s$ and fixed $u$. We obtain \begin{equation} \int_{0}^{\infty}dT_1~\int_{0}^{\infty}dT_2~e^{-s_1 T_1-s_2 T_2}F_{\rm OU}(T_1,T_2)\approx\frac{1}{8\sqrt{\pi s_2}}\int_{-\infty}^{\infty}du~e^{-s_1 u^2/4}(u+\sqrt{u^2+4})(\sqrt{s_1/s_2}u+\sqrt{(s_1/s_2) u^2+4})\,. \end{equation} To leading order, this integral becomes \begin{equation} \int_{0}^{\infty}dT_1~\int_{0}^{\infty}dT_2~e^{-s_1 T_1-s_2 T_2}F_{\rm OU}(T_1,T_2)\approx\frac{1}{2\sqrt{\pi s_2}}\int_{-\infty}^{\infty}du~e^{-s_1 u^2/4}=\frac{1}{\sqrt{s_1 s_2}}\,. \end{equation} This Laplace transform can now be inverted using Eq.~\eqref{inv_lapl_sqrt}, and we obtain, for $T_1,T_2\ll 1$, \begin{equation} F_{\rm OU}(T_1,T_2)\approx\frac{1}{\pi\sqrt{T_1 T_2}}\,, \end{equation} corresponding to \begin{equation} P(t_{\rm m}|T)\approx\frac{1}{\pi\sqrt{t_{\rm m}(T-t_{\rm m})}}\,. \end{equation} This is precisely the same result as for the case $p=1$ and corresponds to the arcsine law of L\'evy \cite{Levy}, which describes the PDF of $t_{\rm m}$ for a free BM of duration $T$. As before, this result is consistent with the fact that for very short times ($T\ll 1/\alpha$), the effect of the external potential is negligible and the process is simply diffusive. We next focus on the limit of large $T$. As for the case $p=1$, there are three different regimes to be investigated: the left edge regime ($t_{\rm m}$ small and $T\to \infty$), the central bulk regime ($1\ll t_{\rm m}\ll T$), and the right edge regime ($t_{\rm m}\sim T$ and $T\to \infty$). We now focus on the right-edge behavior ($t_{\rm m} \to T$).
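The arcsine law recovered in this short-time limit can also be checked by a direct Monte Carlo simulation of a discretized free Brownian motion (a sketch; the seed, sample size, and step number are arbitrary choices of ours):

```python
import numpy as np

# discretized free Brownian motion: t_m / T should follow the arcsine law
rng = np.random.default_rng(0)
n_samples, n_steps = 10000, 400
paths = np.cumsum(rng.standard_normal((n_samples, n_steps)), axis=1)
# include the starting point x(0) = 0 so the maximum may sit at t = 0
paths = np.hstack([np.zeros((n_samples, 1)), paths])
t_max = np.argmax(paths, axis=1) / n_steps  # rescaled time of the maximum

# arcsine CDF: P(t_m/T < x) = (2/pi) arcsin(sqrt(x))
for x in (0.25, 0.5, 0.75):
    assert abs(np.mean(t_max < x) - (2 / np.pi) * np.arcsin(np.sqrt(x))) < 0.03
```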
Formally inverting the Laplace transform in Eq.~(\ref{scaling_OU}) with respect to $T_1$ gives \begin{eqnarray} \int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 =\frac{1}{\sqrt{8\pi }}\int_{-\infty}^{\infty} dz\, e^{-z^2/2} \left[ \int_{\Gamma_1} \frac{ds_1}{2\pi i}\, e^{s_1\, T_1}\, \frac{D_{-1-s_1/2}(-z)}{D_{-s_1/2}(-z)}\right]\, \left[ \frac{D_{-1-s_2/2}(-z)}{D_{-s_2/2}(-z)}\right]\, , \label{F2s2.1} \end{eqnarray} where $\Gamma_1$ denotes a Bromwich contour in the complex $s_1$ plane. We first consider the Bromwich integral over $s_1$ in \eqref{F2s2.1}. We are interested in the large $T_1$ limit. To evaluate this integral for large $T_1$ we should look for the pole of the integrand at some small negative $s_1$. Since $s_1$ is small, we can replace the numerator, to leading order, by its value at $s_1=0$ \begin{equation} D_{-1-s_1/2}(-z) \approx D_{-1}(-z)= \sqrt{\frac{\pi}{2}}\, e^{z^2/4}\, {\rm erfc}\left( -\frac{z}{\sqrt{2}}\right)\, , \label{Dm1.1} \end{equation} where $\operatorname{erfc}(z)=(2/\sqrt{\pi})\int_{z}^{\infty}du~e^{-u^2}$. Furthermore, the denominator for small $s_1$ can be approximated as (simply by expanding in Taylor series and using $D_0(z)= e^{-z^2/4}$) \begin{eqnarray} D_{-s_1/2}(-z) &\approx & e^{-z^2/4}\left[ 1+ s_1\, \sqrt{\pi/2}\, \frac{1}{z}\, e^{z^2/2}\right] = \frac{\sqrt{2\pi}}{z}\, e^{z^2/4}\, \left[\frac{s_1}{2}+ \frac{z}{\sqrt{2\pi}}\, e^{-z^2/2}\right]\, . \label{Dpsmall.1} \end{eqnarray} Using \eqref{Dm1.1} and \eqref{Dpsmall.1} we then obtain the following approximation for the Bromwich integral for large $T_1$ \begin{equation} \int_{\Gamma} \frac{ds_1}{2\pi i}\, e^{s_1\, T_1}\, \frac{D_{-1-s_1/2}(-z)}{D_{-s_1/2}(-z)} \approx \int_{\Gamma} \frac{ds_1}{2\pi i}\, e^{s_1\, T_1}\, \frac{z\, {\rm erfc} (-z/\sqrt{2})}{ s_1+ \frac{\sqrt{2}z}{\sqrt{\pi}}\, e^{-z^2/2}}\, . \label{Brom.1} \end{equation} Clearly the pole occurs at $s_1= - \sqrt{2/\pi}z\, e^{-z^2/2}$. 
Since $s_1$ is small, the variable $z$ is necessarily large and positive. In particular, since for large $T_1$ we expect that $s_1$ scales as $1/T_1$, we can anticipate that $z\sim \sqrt{\ln T_1}$. Evaluating the residue and using ${\rm erfc}(-\infty)=2$, we get \begin{equation} \int_{\Gamma} \frac{ds_1}{2\pi i}\, e^{s_1\, T_1}\,\frac{D_{-1-s_1/2}(-z)}{D_{-s_1/2}(-z)} \approx 2 z\, \exp\left[- \frac{\sqrt{2} z}{\sqrt{\pi}}\, e^{-z^2/2}\, T_1\right]\, . \label{Brom.2} \end{equation} Substituting this result back in \eqref{F2s2.1} gives \begin{eqnarray} \int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 \approx \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dz\, z\, e^{-z^2/2} \exp\left[- \frac{\sqrt{2} z}{\sqrt{\pi}}\, e^{-z^2/2}\, T_1\right]\, \frac{D_{-1-s_2/2}(-z)}{D_{-s_2/2}(-z)}\, . \label{F2s2.2} \end{eqnarray} To make further progress, we perform the change of variable $z\to y=(z-a_{T_1})/b_{T_1}$, where $a_{T_1}$ and $b_{T_1}$ are positive constants that depend on $T_1$ and will be chosen appropriately, yielding \begin{eqnarray} \nonumber&&\int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 \approx \frac{b_{T_1}}{\sqrt{2\pi}}\int_{-\infty}^{\infty} dy\, (a_{T_1}+b_{T_1}y)\, e^{-(a_{T_1}+b_{T_1}y)^2/2} \\ &\times & \exp\left\{- \exp\left[\ln\left(\sqrt{\frac{2}{\pi}}\right)+\ln(a_{T_1}+b_{T_1}y)-(a_{T_1}+b_{T_1}y)^2/2+\ln T_1\right]\, \right\} \frac{D_{-1-s_2/2}(-(a_{T_1}+b_{T_1}y))}{D_{-s_2/2}(-(a_{T_1}+b_{T_1}y))}\, . \label{F2s2.2.1} \end{eqnarray} To get rid of the dependence on $T_1$ in the exponent, we choose $a_{T_1}$ such that \begin{equation} \ln\left(\sqrt{\frac{2}{\pi}}\right)+\ln(a_{T_1}+b_{T_1}y)-a_{T_1}^2/2+\ln T_1=0\,, \end{equation} meaning that to leading order \begin{equation} \label{Oumax.1} a_{T_1}\approx \sqrt{2\ln T_1}\,. \end{equation} Similarly, we choose $b_{T_1}=1/a_{T_1}\approx 1/\sqrt{2\ln(T_1)}$. 
Plugging these expressions into Eq.~\eqref{F2s2.2.1}, we find that to leading order in $T_1$ \begin{equation} \int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 \approx \frac{1}{2T_1 a_{T_1}}\int_{-\infty}^{\infty}dy ~e^{-y-e^{-y}}\frac{D_{-1-s_2/2}(-(a_{T_1}+b_{T_1}y))}{D_{-s_2/2}(-(a_{T_1}+b_{T_1}y))}\,. \end{equation} Moreover, since $a_{T_1}\gg b_{T_1}$, we obtain \begin{equation} \int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 \approx \frac{1}{2T_1 a_{T_1}}\frac{D_{-1-s_2/2}(-a_{T_1})}{D_{-s_2/2}(-a_{T_1})}\,, \label{F2s2.4} \end{equation} where $a_{T_1}\approx\sqrt{2\ln(T_1)}$ and we have used \begin{equation} \int_{-\infty}^{\infty}dy ~e^{-y-e^{-y}}=1\,. \label{integrand_gumbel_1} \end{equation} Note that the result in \eqref{F2s2.4} is valid for large $T_1$, but for arbitrary $s_2$. We then need to invert this Laplace transform with respect to $s_2$ in order to derive the dependence on $T_2$. To make further progress, let us see what we anticipate and then work backwards. We anticipate a scaling form for the edge behavior, $F(T_1,T_2)\sim (1/T_1) G\left( a\, (2 \ln T_1)\, T_2\right)$ for large $T_1$, where $a$ is some constant scale factor. This would mean that the width of the right edge would shrink very slowly with increasing $T_1$. Hence, if $T_1$ is large, we should scale $T_2$ as $T_2 \sim t/(2 \ln T_1)$ where $t\sim O(1)$. This would mean that in the conjugate Laplace space $s_2$ should scale as $s_2\sim (2 \ln T_1)$. Since $a_{T_1}\approx \sqrt{2 \ln T_1}$, this means that $s_2\sim (a_{T_1})^2$ for large $a_{T_1}$. Hence, driven by this observation, in order to derive a nontrivial scaling function at the right edge, we should analyse \eqref{F2s2.4} in the scaling limit by setting \begin{equation} a_{T_1}= \sqrt{ \frac{s_2}{2}}\, u \label{u_def.1} \end{equation} where $u\sim O(1)$, while both $s_2$ and $a_{T_1}$ are large. 
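The unit normalization of the Gumbel-like integrand used in Eq.~\eqref{integrand_gumbel_1} is immediate to confirm numerically (the substitution $t=e^{-y}$ maps it to $\int_0^\infty e^{-t}dt=1$); the finite integration range below is a truncation of ours, chosen where the integrand is negligible:

```python
import numpy as np
from scipy.integrate import quad

# \int dy exp(-y - e^{-y}) = 1; tails beyond |y| = 50 are below 1e-20
val, _ = quad(lambda y: np.exp(-y - np.exp(-y)), -50, 50)
assert abs(val - 1.0) < 1e-8
```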
With this setting, i.e., replacing $a_{T_1}$ by $\sqrt{s_2/2}\, u$ in \eqref{F2s2.4} we get \begin{equation} \int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 \approx \frac{1}{2T_1}\, \frac{\sqrt{2}}{\sqrt{s_2}\, u} \, \frac{D_{-1-s_2/2}(-\sqrt{s_2/2}\, u)}{D_{-s_2/2}(-\sqrt{s_2/2}\, u)} \,. \label{F2s2.5} \end{equation} This leads us to analyse the asymptotic behavior of $D_{-p}(-\sqrt{p}\, u)$ in the limit of large $p$ (somewhat akin to the Plancherel-Rotach type asymptotic limits for Hermite polynomials). To do this, we use the expansion in Eq.~\eqref{relation_large_p}, which gives the limiting large $s_2$ behavior \begin{equation} \int_0^{\infty}F(T_1, T_2)\, e^{-s_2\, T_2} dT_2 \approx \frac{1}{2 T_1}\, \frac{1}{s_2}\, \left[ 1+ \sqrt{1+ \frac{4}{u^2}} \right]= \frac{1}{2T_1}\, \frac{1}{s_2}\, \left[ 1+ \sqrt{1+ \frac{2 s_2}{a_{T_1}^2}} \right]\, , \label{F2s2.6} \end{equation} where we used $u=\sqrt{2} a_{T_1}/\sqrt{s_2}$ from \eqref{u_def.1}. Inverting formally the Laplace transform with respect to $s_2$ gives \begin{equation} F(T_1,T_2)\approx \int_{{\Gamma}_2} \frac{ds_2}{2\pi i}\, e^{s_2\, T_2}\, \frac{1}{2T_1}\, \frac{1}{\,s_2}\, \left[ 1+ \sqrt{1+ \frac{2 s_2}{a_{T_1}^2}}\right]\, , \label{FT1T2.1} \end{equation} where ${\Gamma}_2$ denotes a Bromwich contour in the complex $s_2$ plane. Rescaling $s_2= s\, a_{T_1}^2/2$ gives \begin{equation} F(T_1,T_2)\approx \int_{{\Gamma}_2} \frac{ds}{2\pi i}\, e^{s\, a_{T_1}^2 T_2/2} \, \frac{1}{T_1}\, \frac{1}{2s}\, \left[ 1+ \sqrt{1+ s}\right]\, . \label{FT1T2.2} \end{equation} Using the Laplace-inversion formula in Eq.~\eqref{G_LT}, we get our main scaling result at the right edge \begin{equation} F(T_1,T_2) \approx \frac{1}{T_1}\, G\left( \frac{a_{T_1}^2}{2}\, T_2\right)\,, \label{edge_scaling.1} \end{equation} where the scaling function $G(t)$ is given exactly in \eqref{G}. Note that $a_{T_1}= \sqrt{2\, \ln T_1}$ from \eqref{Oumax.1}.
The result in \eqref{edge_scaling.1} is valid in the scaling limit when $T_1$ is large and $T_2\sim 1/{\ln T_1}$ is small. In this limit, we can replace $T_1\approx T=T_1+T_2$ and hence \eqref{edge_scaling.1} reads \begin{equation} F(T_1,T_2) \approx \frac{1}{T}\, G\left((\ln T)\, T_2\right) \, . \label{edge_scaling.2} \end{equation} Using the scaling relation in Eq.~\eqref{scaling_relation_OU}, we obtain \begin{equation} P(t_{\rm m}|T)\approx \frac1T G\left(\alpha(\ln T) (T-t_{\rm m})\right)\,, \end{equation} valid for large $T$ and $(T-t_{\rm m})\sim 1/\ln(T)$. Using the $t_{\rm m}\to T-t_{\rm m}$ symmetry of the process, we obtain the left edge regime \begin{equation} P(t_{\rm m}|T)\approx \frac1T G\left(\alpha(\ln T) t_{\rm m}\right)\,, \end{equation} valid for large $T$ and $t_{\rm m} \sim 1/\ln(T)$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{correlations.pdf} \caption{ Semi-logarithmic plot of the correlation function $\langle x(t)x(0)\rangle$ as a function of $\sqrt{t}$ for Brownian motion with diffusion constant $D=1$ in the potential $V(x)=\sqrt{|x|}$. The initial position $x(0)$ is drawn from the equilibrium state of the system. The continuous red line shows a stretched-exponential decay of the type $\langle x(t)x(0)\rangle\sim e^{-\sqrt{t/\xi}}$. \label{fig:correlations}} \end{center} \end{figure} Finally, to compute the bulk regime ($t_{\rm m}, T\to\infty$ with $t_{\rm m}/T$ fixed), we first formally invert the double Laplace transform in Eq.~\eqref{scaling_OU} to obtain \begin{equation} F_{\rm OU}(T_1,T_2)=\frac{1}{\sqrt{8\pi}}\int_{-\infty}^{\infty}dz~e^{-z^2/2}\left[\int_{\Gamma_1}\frac{ds_1}{2\pi i}e^{s_1 T_1}\frac{D_{-1-s_1/2}\left(-z\right)}{D_{-s_1/2}\left(-z\right)}\right]\left[\int_{\Gamma_2}\frac{ds_2}{2\pi i}e^{s_2 T_2}\frac{D_{-1-s_2/2}(-z)}{D_{-s_2/2}(-z)}\right]\,, \end{equation} where $\Gamma_1$ and $\Gamma_2$ denote Bromwich contours in the complex $s_1$ and $s_2$ planes. 
The bulk regime corresponds to the limit $s_1\,,s_2\to 0$. Thus, using the small-$s$ expansion in Eq.~\eqref{Brom.2}, we find \begin{equation} F_{\rm OU}(T_1,T_2)\approx\sqrt{\frac{2}{\pi}}\int_{-\infty}^{\infty}dz~e^{-z^2/2}z^2\, \exp\left[- \frac{\sqrt{2}z}{\sqrt{\pi}}\, e^{-z^2/2} (T_1+T_2)\right]\,. \end{equation} We next perform the change of variable $z\to u=\frac{\sqrt{2}z}{\sqrt{\pi}}\, e^{-z^2/2} (T_1+T_2)$ with Jacobian \begin{equation} |du/dz|=\frac{\sqrt{2}}{\sqrt{\pi}}\, e^{-z^2/2} (T_1+T_2)|1-z^2|\,. \end{equation} Using the fact that when $(T_1+T_2)$ is large the integral is dominated by large values of $z$, we can approximate \begin{equation} |du/dz|\approx\frac{\sqrt{2}}{\sqrt{\pi}}\, e^{-z^2/2} (T_1+T_2)z^2\,. \end{equation} Thus, we finally obtain \begin{equation} F_{\rm OU}(T_1,T_2)\approx \frac{1}{T_1+T_2}\int_{0}^{\infty}du~e^{-u}=\frac{1}{T_1+T_2}\,. \end{equation} Therefore, in the bulk regime where $1\ll t_{\rm m}\ll T$, we obtain \begin{equation} P(t_{\rm m}|T)\approx\frac{1}{T}\,. \end{equation} Once again, we find that the distribution of $t_{\rm m}$ becomes flat in the bulk regime for late times. This is because the random variables describing the positions of the process at different times become approximately independent when $T\gg \xi$ (where $\xi$ is the correlation time).
To summarize, for $T\gg \xi=1/\alpha$, the distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ of the maximum behaves as \begin{equation} P(t_{\rm m}|T)\approx \begin{cases}\frac1T G\left(\alpha \ln(T)~t_{\rm m}\right)\quad &\text{ for }\quad t_{\rm m}\lesssim 1/(\alpha \ln(T))\,,\\ \\ \frac1T \quad &\text{ for }\quad 1/(\alpha \ln(T))\ll t_{\rm m} \ll T-1/(\alpha \ln(T))\,,\\ \\ \frac1T G\left(\alpha \ln(T)~(T-t_{\rm m})\right)\quad &\text{ for }\quad t_{\rm m}\gtrsim T-1/(\alpha \ln(T))\,.\\ \end{cases} \label{PT_asymp_p2} \end{equation} Remarkably, the late-time shape of $P(t_{\rm m}|T)$ obtained for the harmonic potential ($p=2$) is the same as the one obtained for the potential $V(x)=\alpha|x|$ ($p=1$, see Eq.~\eqref{PT_asymp_p1}). The only difference is the scale of the edge regime, which is $\sim 4D/\alpha^2$ for $p=1$ and $\sim 1/(\alpha \ln(T))$ for $p=2$. The shape of the edge regime is described by the function $G(z)= \left[1+\operatorname{erf}(\sqrt{z})+e^{-z}/\sqrt{\pi z}\right]/2$, which is the same for both $p=1$ and $p=2$. This universality is unexpected and leads us to two natural questions: (i) What is the origin of this function $G(z)$? and (ii) Is the universality of the edge behavior valid for any $p>0$? \subsection{Universality at late times} \label{sec:univ} \begin{figure*}[t]\includegraphics[scale=0.9]{OU_old.pdf} \caption{\label{fig:block_argument} Typical trajectory of a stationary process of duration $T$. The time interval $[0,T]$ is divided into $N$ subintervals of duration $\xi$, where $\xi$ is the correlation time of the process. The maximum of the process during the $i$-th time window is denoted by $M_i$. The variables $M_1\,,\ldots\,, M_N$ are approximately independent.} \end{figure*} In this section, we show that the late-time universality of the distribution of $t_{\rm m}$ that we have observed for $p=1$ and $p=2$ is actually valid for any $p>0$. This is based on a real-space ``blocking argument'', which we describe below.
For $p\geq 1$, i.e., if the potential $V(x)$ grows at least as fast as $|x|$ for large $|x|$, one can show that the correlation function decays exponentially in time as \cite{SM20} \begin{equation} \langle x(\tau)x(\tau')\rangle-\langle x(\tau)\rangle\langle x(\tau')\rangle\sim e^{-|\tau-\tau'|/\xi}\,, \end{equation} where $\xi$ is the correlation time. For $0<p<1$, we have verified numerically that the autocorrelation function has a stretched-exponential decay in time. For instance, for $p=1/2$ we find (see Fig.~\ref{fig:correlations}) \begin{equation} \langle x(t)x(t')\rangle\sim e^{-\sqrt{|t-t'|/\xi}}\,, \end{equation} for some timescale $\xi>0$. Thus, also for $0<p<1$ one has a typical timescale over which correlations decay and one can still apply the blocking argument. Note that in our recent Letter \cite{MMS21}, we used a heuristic argument to show that the distribution of $t_{\rm m}$ becomes universal at late times for $p\geq 1$. Here, we present a more precise version of this argument and we show that this universality is actually valid for any $p>0$. For large $T$ we can thus divide the time interval $[0,T]$ into $N$ blocks of duration $\xi=T/N$ (see Fig.~\ref{fig:block_argument}). Let $M_i$ be the maximal position reached in the $i$-th block. It is clear that the variables $M_1\,,\ldots\,,M_N$ are identically distributed, since the process is in the steady state. Moreover, since we assume that $\xi$ is of the order of the correlation time, they can also be considered independent. Thus the maximum will be reached in a given block with probability $1/N=\xi/T$ and the late-time probability distribution of $t_{\rm m}$ is approximately given by the uniform measure \begin{equation} P(t_{\rm m}|T)\approx \frac1N \frac{1}{\xi}=\frac1T\,. \label{uniform2} \end{equation} Note, however, that this argument is only valid when $\xi\ll t_{\rm m}\ll T-\xi$, i.e., in the bulk of the distribution $P(t_{\rm m}|T)$.
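The blocking argument can be illustrated with a toy computation: if the block maxima $M_1,\ldots,M_N$ are i.i.d., the index of the block containing the global maximum is uniform, which is the discrete analogue of Eq.~\eqref{uniform2} (a sketch of ours; standard normal variables serve as stand-ins for the block maxima):

```python
import numpy as np

# i.i.d. block maxima: the winning block index is uniformly distributed,
# mimicking P(t_m|T) ~ 1/T in the bulk
rng = np.random.default_rng(1)
n_blocks, n_trials = 10, 50000
block_maxima = rng.standard_normal((n_trials, n_blocks))  # stand-ins for M_1, ..., M_N
winner = np.argmax(block_maxima, axis=1)

counts = np.bincount(winner, minlength=n_blocks) / n_trials
assert np.all(np.abs(counts - 1.0 / n_blocks) < 0.01)
```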
As observed in the exactly solvable cases $p=1$ and $p=2$, in the edge regions $0<t_{\rm m}<\xi$ and $(T-\xi)<t_{\rm m}<T$, a more precise analysis is required. To proceed, let us analyze the behavior of the global maximum $M$ in the limit of large $T$. The global maximum $M$ of the process can be written as the maximum of the local i.i.d. variables $M_1\,,\ldots\,,M_N$ \begin{equation} M=\max_{1\leq i\leq N}\left(M_i\right)\,. \end{equation} Even though we do not know the PDF $P(M_i)$ of the local maximum $M_i$, we can guess that it will have the same large-$M_i$ tail as the equilibrium distribution in Eq.~\eqref{boltzmann}, i.e., that for large $M_i$ one has \begin{equation} P(M_i)\sim \exp\left(-\frac{\alpha }{D}~ M_i^p\right)\,. \label{tail_behavior_1} \end{equation} Thus, one can apply the standard extreme value theory for i.i.d. random variables (see, e.g., \cite{MP20}). In particular, to find the leading-order behavior of the global maximum $M$, we need to solve the equation \begin{equation} \int_{M}^{\infty}dM'~P(M')=\frac1N\,. \label{1Nfrac} \end{equation} Indeed, we expect the probability of observing a value larger than the global maximum to be $1/N$, since $M_1\,,\ldots\,,M_N$ are i.i.d. variables (this argument can be made more precise, see \cite{MP20}). Plugging the expression for $P(M)$, given in Eq.~\eqref{tail_behavior_1}, into Eq.~\eqref{1Nfrac} and integrating by parts, we find that to leading order \begin{equation} M \approx \left(\frac{D}{\alpha}\ln(N)\right)^{1/p}\,. \end{equation} Moreover, since $N=T/\xi$, we obtain \begin{equation} M \approx \left(\frac{D}{\alpha}\ln(T)\right)^{1/p}\,. \label{M_leading_order} \end{equation} Interestingly, one can also show that the global maximum concentrates around this deterministic value in Eq.~\eqref{M_leading_order} for large $T$ \cite{MP20}, meaning that the fluctuations around this value are subleading in $T$. 
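As a hedged illustration of this extreme-value estimate (added here, with purely illustrative parameters), one can test the leading-order prediction $M\approx (D\ln(N)/\alpha)^{1/p}$ for $p=1$, where the tail $\exp(-\alpha M_i/D)$ is that of an exponential distribution.

```python
import numpy as np

# Leading-order extreme-value estimate M ≈ (D ln(N)/α)^{1/p}, tested for
# p = 1: the tail exp(-α M_i/D) is that of an exponential distribution.
rng = np.random.default_rng(1)
D, alpha, N, trials = 1.0, 1.0, 10_000, 1_000

maxima = rng.exponential(scale=D / alpha, size=(trials, N)).max(axis=1)
prediction = (D / alpha) * np.log(N)

# O(1) fluctuations around the typical value are subleading in ln(N)
assert abs(maxima.mean() - prediction) / prediction < 0.15
```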
Indeed, one can show that the global maximum $M$ of $N$ i.i.d.~variables, each drawn from the PDF in Eq.~\eqref{tail_behavior_1}, grows as \begin{equation} M\approx \left(\frac{D}{\alpha}\ln N\right)^{1/p}+\frac{D}{\alpha p}\left(\frac{D}{\alpha}\ln N\right)^{(1-p)/p}z\,, \end{equation} where $z$ is Gumbel distributed. Note that the relative ratio of the fluctuations and the mean decays as $1/\ln N$. Therefore, we can consider the value of the global maximum to be fixed and given by Eq.~\eqref{M_leading_order}. \begin{figure*}[t]\includegraphics[scale=0.9]{last_block.pdf} \caption{\label{fig:last_block} Schematic representation of a stochastic process where the global maximum $M-\epsilon$ is reached within the right edge, i.e., at time $t_{\rm m}$ with $T-t_{\rm m}\ll T$. } \end{figure*} Then, we can apply the path-decomposition technique derived at the beginning of this section (see Eq.~\eqref{Ptm_integral}), with the only difference that $M$ is now fixed. Thus, Eq.~\eqref{Ptm_integral} gets modified as follows \begin{equation} P(t_{\rm m}|T)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon) \int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)G^M(M-\epsilon,t_{\rm m}|x_0)Q^M(M-\epsilon,T-t_{\rm m})\right]\,, \label{Ptm_integral_2} \end{equation} where now $M$ depends on $T$ and is given in Eq.~\eqref{M_leading_order}. We recall that $P_{\rm st}(x_0)$, $G^M(x,t|x_0)$, and $Q^M(x,t)$ respectively indicate the equilibrium distribution, the constrained propagator, and the survival probability of the process. We focus on the right-edge regime, corresponding to configurations in which the global maximum is reached at the end of the time interval $[0,T]$, i.e., for $T-t_{\rm m}\ll T$ (see Fig.~\ref{fig:last_block}). 
In this region we can approximate $t_{\rm m} \approx T$ and therefore Eq.~\eqref{Ptm_integral_2} becomes \begin{equation} P(t_{\rm m}|T)\approx\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon) \int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)G^M(M-\epsilon,T|x_0)Q^M(M-\epsilon,T-t_{\rm m})\right]\,. \end{equation} Absorbing the constant terms, i.e., the terms that are independent of $t_{\rm m}$, into $\mathcal{N}(\epsilon)$, we obtain \begin{equation} P(t_{\rm m}|T)\approx\lim_{\epsilon\to 0}\left[\mathcal{N}'(\epsilon) Q^M(M-\epsilon,T-t_{\rm m})\right]\,, \label{Ptm_integral_2_new} \end{equation} where the constant $\mathcal{N}'(\epsilon)$ can be determined by matching this edge expression in Eq.~\eqref{Ptm_integral_2_new} with the bulk result $P(t_{\rm m}|T)\approx 1/T$. \begin{figure*}[t] \includegraphics[scale=0.6]{comparison.pdf} \caption{\label{fig:comparison} The scaled probability density function $TP(t_{\rm m}|T)$ as a function of the scaled time of the maximum $t_{\rm m}/\lambda(T)$. The symbols correspond to numerical simulations of Brownian motion in a potential $V(x)=|x|^p$, for different values of $p$ and $T$ large ($T=6400$ for $p=1$ and $T=800$ for $p=2$ and $p=3$). The continuous blue curve corresponds to the exact scaling function $G(z)$, given in Eq.~\eqref{G_summary} and valid for large $T$. For further evidence of the convergence to this scaling function, see Fig.~\ref{fig:conv}.} \end{figure*} Thus, we need to compute the survival probability $Q^M(M-\epsilon,T-t_{\rm m})$, defined as the probability that the process remains below position $M$ for a time $T-t_{\rm m}$, having started from position $M-\epsilon$. As previously explained, this is, in general, hard (the only two solvable models are $p=1$ and $p=2$). Nevertheless, since the time interval $[T-t_{\rm m},T]$ is short, we expect the position of the particle within this time interval to remain close to the global maximum $M$. 
Thus, we linearize the potential $V(x)$ around $x=M$ and we obtain \begin{equation} V(x)\approx V(M)+(x-M)V'(M)\,. \end{equation} As a consequence, to leading order, the effective Langevin equation of the process (see Eq.~\eqref{langevin}) becomes \begin{equation} \frac{dx(\tau)}{d\tau}=-V'(M)+\eta(\tau)\,, \label{largevin2} \end{equation} meaning that the particle is subject, to a first approximation, to a constant negative drift $\mu=-V'(M)$. Using $V'(x)=\alpha p x^{p-1}$ for $x>0$ and the expression for $M$ in Eq.~\eqref{M_leading_order}, we find that the constant drift $\mu$ is given by \begin{equation} \mu=-V'(M)\approx-\alpha~ p\left(\frac{D}{\alpha}\ln(T)\right)^{(p-1)/p}\,. \label{mu} \end{equation} Here we use the convention that the drift $\mu$ is positive when it points towards increasing values of $x$ and negative otherwise (in our case the drift is negative). Crucially, the survival probability of a BM subject to a constant drift $\mu$ can be computed exactly (see Appendix \ref{app:surv_drift}) and reads \begin{equation} Q^M(x_0,t)=\frac{1}{2}\left[\operatorname{erfc}\left(-\frac{M-x_0-\mu t}{\sqrt{4Dt}}\right)-e^{\mu (M-x_0)/D}\operatorname{erfc}\left(\frac{M-x_0+\mu t}{\sqrt{4Dt}}\right)\right]\,, \end{equation} for $x_0<M$, where $\operatorname{erfc}(z)=(2/\sqrt{\pi})\int_{z}^{\infty}du~e^{-u^2}$. We recall that the survival probability $Q^M(x,t)$ is defined as the probability that the process remains below position $M$ for a total time $t$, having started from position $x$. Evaluating this expression for $x=M-\epsilon$ and expanding to leading order in $\epsilon$, we find \begin{equation} Q^M(M-\epsilon,t)\approx \frac{\epsilon |\mu|}{D}G\left(\frac{\mu^2 t}{4D}\right)\,, \label{QM_1_exp} \end{equation} where $G(z)= \left[1+\operatorname{erf}(\sqrt{z})+e^{-z}/\sqrt{\pi z}\right]/2$.
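The small-$\epsilon$ expansion in Eq.~\eqref{QM_1_exp} can be verified directly against the exact $\operatorname{erfc}$ formula; the following sketch (with arbitrary illustrative parameters, added here as a check) does so numerically.

```python
import numpy as np
from scipy.special import erf, erfc

# Check of the small-ε expansion Q^M(M-ε, t) ≈ (ε|μ|/D) G(μ² t/(4D))
# against the exact erfc formula for Brownian motion with drift μ < 0.
def Q_exact(eps, t, mu, D):
    a, s = eps, np.sqrt(4.0 * D * t)               # a = M - x0 = ε
    return 0.5 * (erfc(-(a - mu * t) / s)
                  - np.exp(mu * a / D) * erfc((a + mu * t) / s))

def G(z):
    return 0.5 * (1.0 + erf(np.sqrt(z)) + np.exp(-z) / np.sqrt(np.pi * z))

mu, D, t, eps = -1.5, 1.0, 0.8, 1e-5               # illustrative parameters
z = mu**2 * t / (4.0 * D)
assert abs(Q_exact(eps, t, mu, D) / (eps * abs(mu) / D * G(z)) - 1.0) < 1e-3
```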
Plugging the expression for $Q^M(M-\epsilon,t)$ in Eq.~\eqref{QM_1_exp} into Eq.~\eqref{Ptm_integral_2} and using the expression for $\mu$ in Eq.~\eqref{mu}, we obtain \begin{equation} P(t_{\rm m}|T)\approx\lim_{\epsilon\to 0}\left[\mathcal{N}'(\epsilon) \frac{\epsilon|\mu|}{D}G\left(\frac{T-t_{\rm m}}{\lambda(T)}\right)\right]\,, \label{Ptm_integral_3} \end{equation} where \begin{equation} \lambda(T)=\frac{4D}{\mu^2}= \frac{4D}{\alpha^2 p^2}\left(\frac{D}{\alpha}\ln(T)\right)^{-2(p-1)/p}\,, \label{lambda} \end{equation} where we have used the expression for $\mu$ in Eq.~\eqref{mu}. To determine the constant $\mathcal{N}'(\epsilon)$ we impose that this edge-regime expression matches the bulk expression in Eq.~\eqref{uniform2}, i.e., that \begin{equation} \lim_{\epsilon\to 0}\left[\mathcal{N}'(\epsilon) \frac{\epsilon |\mu|}{D}G\left(\frac{T-t_{\rm m}}{\lambda(T)}\right)\right]\approx \frac1T\,, \end{equation} for $T-t_{\rm m}\gg \lambda(T)$. Using the fact that $G(z)\approx 1 $ for large $z$, we obtain \begin{equation} \lim_{\epsilon\to 0}\left(\mathcal{N}'(\epsilon) \epsilon\right)=\frac{D}{|\mu| T}\,. \end{equation} Finally, we get \begin{equation} P(t_{\rm m}|T)\approx \frac{1}{T}G\left(\frac{T-t_{\rm m}}{\lambda(T)}\right)\,, \label{eq:edge2_drift_final} \end{equation} for $T-t_{\rm m} \lesssim \lambda(T)$. \begin{figure*}[t] \includegraphics[scale=0.4]{convergence.pdf} \caption{\label{fig:conv} The scaled probability density function $TP(t_{\rm m}|T)$ of the time $t_{\rm m}$ of the maximum for Brownian motion in a confining potential $V(x)=\alpha|x|^p$ as a function of the scaled time of the maximum $t_{\rm m}/\lambda(T)$ for $p=1$ (a), $p=2$ (b), and $p=3$ (c). The timescale $\lambda(T)$, given in Eq.~\eqref{lambda}, is a function of $\alpha$ and $p$. The continuous blue curve corresponds to the theoretical scaling function $G(z)$, given in Eq.~\eqref{G_summary} and valid for large $T$. This universal curve is valid for any value of $\alpha>0$ and $p\geq 1$. 
As $T$ increases, the numerical scaling functions for different values of $p$, shown by symbols, approach the same theoretical scaling function, shown by a solid line.} \end{figure*} Even though the width $\lambda(T)$ of the edge region depends on the details of the potential, the shape of $P(t_{\rm m}|T)$, encoded in the scaling function $G(z)$, becomes completely universal, i.e., independent of $V(x)$, for large $T$. Note that in the special case $p=1$, i.e., when $V(x)=\alpha~|x|$, one finds that $\lambda(T)=4D/\alpha^2$, coinciding with the results of Section \ref{sec:p1}. Thus, for $p=1$ the width of the edge region is independent of $T$. On the other hand, for $p>1$ we observe that $\lambda(T)$ decreases very slowly with $T$ and thus for $T\to \infty$ the edge region slowly disappears. In particular, for $p=2$, we find $\lambda(T)=1/(\alpha\ln(T))$, in agreement with Eq.~\eqref{PT_asymp_p2}. The universal scaling function $G(z)$ has asymptotic behaviors given in Eq.~\eqref{G_asym} and is plotted in Fig.~\ref{fig:Gz}. Since the process is at equilibrium, the PDF of $t_{\rm m}$ satisfies the symmetry $P(t_{\rm m}|T)=P(T-t_{\rm m} |T)$ (see Subsection \ref{sec:criterion}). Therefore, in the left-edge regime, i.e., for $t_{\rm m}\lesssim \lambda(T)$, we have \begin{equation} P(t_{\rm m}|T)\approx \frac{1}{T}G\left(\frac{t_{\rm m}}{\lambda(T)}\right)\,. \label{eq:edge1_drift_final} \end{equation} Thus, the late-time behavior of the distribution of $t_{\rm m}$ can be summarized as \begin{equation} P(t_{\rm m}|T)\approx \begin{cases} \frac{1}{T}G\left(\frac{t_{\rm m}}{\lambda(T)}\right) &~~\text{ for }~~ t_{\rm m}\lesssim \lambda(T)\\ \\ \frac1T &~~\text{ for } ~~\lambda(T)\ll t_{\rm m}\ll T- \lambda(T)\\ \\ \frac{1}{T}G\left(\frac{T-t_{\rm m}}{\lambda(T)}\right) &~~\text{ for } ~~ t_{\rm m}\gtrsim T- \lambda(T) \,.
\end{cases} \label{universal_G} \end{equation} Remarkably, the shape of the distribution $P(t_{\rm m}|T)$ is completely independent of the details of the potential and valid for any $V(x)$ such that $V(x)\approx\alpha |x|^p$ for large $|x|$, with $\alpha>0$ and $p>0$. The details of the potential, i.e., the parameters $\alpha$ and $p$, only appear in $P(t_{\rm m}|T)$ through the width $\lambda(T)$ of the edge region, which slowly shrinks as $\ln(T)^{-2(p-1)/p}$ for $p>1$, is of order one for $p=1$, and expands as $\ln(T)^{2(1-p)/p}$ for $0<p<1$. In Figs.~\ref{fig:comparison}, \ref{fig:conv}, and \ref{fig:convergence_p05}, we compare this universal result in Eq.~\eqref{universal_G} with numerical simulations performed for different values of $p$. We observe that for large $T$ the simulation results for different values of $p$, appropriately rescaled with the corresponding value of $\lambda(T)$, collapse onto the same universal function $G(z)$, given in Eq.~\eqref{G}. \begin{figure}[t] \begin{center} \includegraphics[scale=0.5]{tmax_p05.pdf} \caption{Left edge of the scaled distribution $TP(t_{\rm m}|T)$ as a function of the scaled time of the maximum $t_{\rm m}/\lambda(T)$ for $p=1/2$. Note that here the width $\lambda(T)=(16D/\alpha^2)(D\ln(T)/\alpha)^2$ is increasing with $T$. The continuous blue line corresponds to the universal result in Eq.~\eqref{eq:edge1_drift_final}. The symbols are the results of numerical simulations of Brownian motion in the potential $V(x)=4x^2$ for $|x|<1$ and $V(x)=4\sqrt{|x|}$ for $|x|>1$ with different total times $T$. We choose the quadratic part for small $x$ to avoid the divergence of the first derivative $V'(x)$. We observe that already at $T=5$ the numerical results are in excellent agreement with the analytical prediction. \label{fig:convergence_p05}} \end{center} \end{figure} \section{Out-of-equilibrium processes} \label{sec:neq} We next focus on the case of nonequilibrium steady states (NESS).
This class of stochastic processes is characterized by the violation of the detailed balance condition and by the presence of steady-state probability currents. In recent decades, there has been a surge of interest in characterizing the properties of NESS, especially in the context of living systems. As a consequence of the violation of the detailed balance condition, NESS do not satisfy time-reversal symmetry. To better understand the properties of $t_{\rm m}$ for nonequilibrium processes, we first investigate two canonical models for which $P(t_{\rm m}|T)$ can be computed exactly: resetting Brownian motion (RBM) and a confined run-and-tumble particle (RTP). \subsection{Resetting Brownian motion} \label{sec:res_BM} \begin{figure*}[t] \includegraphics[scale=0.7]{res.pdf} \caption{\label{fig:res} Typical trajectory $x(\tau)$ of a Brownian motion with stochastic resetting as a function of time $\tau$ in the interval $[0,T]$. The red segments indicate the resetting events. The particle starts from position $x_0$, drawn from the steady state \eqref{eq:stationary_resetting}, and reaches the global maximum $M$ at time $t_{\rm m}$.} \end{figure*} A nonequilibrium version of BM which has been widely investigated recently is BM with stochastic resetting \cite{EM_2011,EMS20}. Stochastic resetting describes dynamical processes that are restarted from some fixed state at random times. Processes with stochastic restarts appear in different disciplines, from computer science \cite{montanari2002optimizing} to chemistry \cite{reuveni2014role}. The restarting dynamics drives the system to a nonequilibrium steady state \cite{EM_2011,EMS20} and induces many interesting phenomena, including dynamical phase transitions \cite{MSS15,BBPM20,FBPC21,MVB20,MVB22}.
Besides Brownian motion, resetting has been investigated for several other random processes, including L\'evy flights \cite{kusmierz2014first,kusmierz2015optimal,campos2015phase}, active particles \cite{evans2018run,masoliver2019telegraphic}, fluctuating interfaces \cite{GMS14}, and the Ising model \cite{magoni2020ising}. Moreover, many theoretical predictions for stochastic resetting have been recently verified in experiments \cite{TFPS20,BBPM20,FBPC21,MVB20,MVB22}. In this section, we investigate the time $t_{\rm m}$ of the maximum for a Brownian particle $x(\tau)$, evolving with diffusion coefficient $D$ up to time $T$. In addition to the usual diffusive motion, we assume that the particle undergoes stochastic resetting to the origin with constant rate $r$. In a small time interval $dt$, the position of the particle evolves according to \begin{equation} x(t+dt)=\begin{cases} x(t)+\sqrt{2D}\eta(t)dt&\quad\text{ with probability }1-rdt\,,\\ \\ 0&\quad\text{ with probability }rdt\,,\\ \end{cases} \end{equation} where $\eta(t)$ is a Gaussian white noise. A typical trajectory is shown in Fig.~\ref{fig:res}. The resetting dynamics drives the system to the stationary state \cite{EM_2011} \begin{equation}\label{eq:stationary_resetting} P_{\rm st}(x_0)=\frac{1}{2}\sqrt{\frac{r}{D}}\exp\left(-\sqrt{\frac{r}{D}}|x_0|\right)\,. \end{equation} Interestingly, the resetting events produce a net probability current towards the resetting location $x=0$, driving the system out of equilibrium. Note that the distribution of the time $t_{\rm m}$ of the maximum for RBM has also been investigated in \cite{SP21}, where the authors considered the case where the initial position of the particle is fixed to $x_0=0$. Here, we assume instead that at the initial time the particle has already reached the steady state, meaning that $x_0=x(0)$ is drawn from $P_{\rm st}(x_0)$ given in Eq.~\eqref{eq:stationary_resetting}.
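As a quick consistency check of this dynamics (an illustrative sketch added here, not taken from the original analysis), one can simulate the resetting rule above and compare the stationary sample with the Laplace distribution of Eq.~\eqref{eq:stationary_resetting}, e.g., through the mean distance from the origin, $\langle |x|\rangle_{\rm st}=\sqrt{D/r}$.

```python
import numpy as np

# Euler simulation of resetting Brownian motion; in the steady state the
# position is Laplace distributed, so the mean distance from the origin
# is E|x| = sqrt(D/r).
rng = np.random.default_rng(2)
D, r, dt = 1.0, 10.0, 1e-3
n, steps = 20_000, 3_000              # total time 3 = 30 correlation times 1/r

x = np.zeros(n)                       # n independent particles
for _ in range(steps):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
    x[rng.random(n) < r * dt] = 0.0   # reset to the origin with rate r

assert abs(np.abs(x).mean() - np.sqrt(D / r)) < 0.03
```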
To compute the distribution of $t_{\rm m}$, we will use the path-decomposition technique described in Section \ref{sec:eq}. Indeed, it is easy to show that the result in Eq.~\eqref{Ptm_integral_LT} remains valid in the case of RBM. Note that also in this case one has to consider a cutoff $\epsilon$, as explained at the beginning of Section \ref{sec:eq}. In the case of RBM, the result in Eq.~\eqref{Ptm_integral_LT} becomes \begin{eqnarray} \nonumber &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)\\ &=&\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon) \int_{-\infty}^{\infty}dM~\int_{-\infty}^{M}dx_0~P_{\rm st}(x_0)\tilde{G}_r^M(M-\epsilon,s_1|x_0)\tilde{Q}_r^M(M-\epsilon,s_2)\right]\,, \label{Ptm_integral_LT_res} \end{eqnarray} where we now use the subscript $r$ in $\tilde{G}_r^M(x,s|x_0)$ and $\tilde{Q}_r^M(x,s)$ to stress the dependence on the resetting rate $r$. We recall that $\tilde{G}_r^M(x,s|x_0)$ is the Laplace transform with respect to $t$ of the constrained propagator $G_r^M(x,t|x_0)$, defined as the probability that the process arrives at position $x$ at time $t$, while always remaining below position $M$. Similarly, $\tilde{Q}_r^M(x,s)$ is the Laplace transform with respect to $t$ of the survival probability $Q_r^M(x,t)$, defined as the probability that the process remains below position $M$ for a total time $t$, having started from position $x$. Note that the constant $\mathcal{N}(\epsilon)$ has to be fixed using the normalization condition of $P(t_{\rm m}|T)$. To exploit this relation in Eq.~\eqref{Ptm_integral_LT_res}, we first have to derive the constrained propagator $\tilde{G}_r^M(M-\epsilon,s_1|x_0)$ and the survival probability $\tilde{Q}_r^M(M-\epsilon,s_2)$. We start by computing the survival probability $Q_r^M(x,t)$. It is useful to consider the cases $M<0$ and $M>0$ separately. If the global maximum is negative, then no resetting event has occurred. 
Indeed, a resetting event would bring the particle to the origin, implying $M\geq 0$. As a consequence, for $M<0$, the survival probability is simply given by \begin{equation} Q_r^M(x,t)=e^{-rt}Q_0^M(x,t)\,, \label{relation_Qr_M<0} \end{equation} where the term $e^{-rt}$ is the probability that no resetting event occurs up to time $t$ and the term $Q_0^M(x,t)$ is the survival probability of BM without resetting. The latter quantity can be easily computed by solving the diffusion equation \cite{Redner_book,M05,BMS13} and is given by \begin{equation} Q_0^M(x,t)=\operatorname{erf}\left(\frac{M-x}{\sqrt{4Dt}}\right)\theta(M-x)\,, \end{equation} where $\theta(z)$ is the Heaviside theta function, i.e., $\theta(z)=0$ for $z<0$ and $\theta(z)=1$ for $z>0$. Considering the Laplace transform of this quantity, we obtain \begin{equation} \tilde{Q}_0^M(x,s)= \frac{1}{s}\left[1-e^{-\sqrt{s/D}(M-x)}\right]\theta(M-x)\,. \label{Q0_LT} \end{equation} Setting $x=M-\epsilon$ and expanding for small $\epsilon>0$, we get \begin{equation} \tilde{Q}_0^M(M-\epsilon,s)\approx \frac{\epsilon}{\sqrt{ Ds}}\,. \label{Q0_LT_expanded} \end{equation} Taking a Laplace transform of the relation in Eq.~\eqref{relation_Qr_M<0} and using this expansion in Eq.~\eqref{Q0_LT_expanded}, we obtain \begin{equation} \tilde{Q}_r^M(M-\epsilon,s)=\tilde{Q}_0^M(M-\epsilon,s+r)\approx \frac{\epsilon}{\sqrt{ D(s+r)}}\,, \label{Q0_LT_M<0} \end{equation} valid for $M<0$. When $M>0$, resetting events are possible. Thus, the survival probability $Q_r^M(x,t)$ can be computed by using the following renewal equation \begin{equation} Q_r^M(x,t)=e^{-rt}Q_0^M(x,t)+r\int_{0}^{t}d\tau~e^{-r\tau}Q_0^M(x,\tau)Q_r^M(0,t-\tau)\,. \label{renew_1} \end{equation} The first term on the right-hand side of Eq.~\eqref{renew_1} corresponds to the survival of the process with no resetting up to time $t$. The second term describes the case where the first resetting occurs at time $0<\tau<t$. 
The factor $Q_0^M(x,\tau)$ is the survival probability in the interval $[0,\tau]$, while the factor $Q_r^M(0,t-\tau)$ is the survival probability in the remaining interval $[\tau,t]$. Taking a Laplace transform of Eq.~\eqref{renew_1} and using the convolution theorem, we get \begin{equation} \tilde{Q}_r^M(x,s)=\tilde{Q}_0^M(x,r+s)+r\tilde{Q}_0^M(x,r+s)\tilde{Q}_r^M(0,s)\,. \label{renew_1_LT_} \end{equation} Setting $x=0$, we find \begin{equation} \tilde{Q}_r^M(0,s)=\frac{\tilde{Q}_0^M(0,r+s)}{1-r~\tilde{Q}_0^M(0,r+s)}\,. \end{equation} Substituting this last expression back into Eq.~\eqref{renew_1_LT_}, we get \begin{equation} \tilde{Q}_r^M(x,s)=\frac{\tilde{Q}_0^M(x,s+r)}{1-r\tilde{Q}_0^M(0,s+r)}\,, \end{equation} which is valid for $M>0$. Using the expression for $\tilde{Q}_0^M(x,s+r)$, given in Eq.~\eqref{Q0_LT}, we have \begin{equation} \tilde{Q}_r^M(x,s)=\frac{1-e^{-\sqrt{(s+r)/D}(M-x)}}{s+r e^{-\sqrt{(s+r)/D}M}}\theta(M-x)\,. \label{LT_Qxs} \end{equation} Setting $x=M-\epsilon$, we obtain \begin{equation} \tilde{Q}_r^M(M-\epsilon,s)\approx\frac{\epsilon}{\sqrt{D}}\frac{\sqrt{s+r}}{s+r e^{-\sqrt{(s+r)/D}M}}\,. \end{equation} To summarize, so far we have shown that to leading order in $\epsilon$ \begin{equation} \tilde{Q}_r^M(M-\epsilon,s)\approx\begin{cases} \dfrac{\epsilon}{\sqrt{D}}\dfrac{\sqrt{s+r}}{s+r e^{-\sqrt{(s+r)/D}M}}\quad &\text{ for }M>0\,,\\ \\ \dfrac{\epsilon}{\sqrt{D(s+r)}}\quad &\text{ for }M<0\,. \end{cases} \label{Q_final_expanded} \end{equation} We next focus on the constrained propagator $G_r^M(x,t|x_0)$. As before, it is useful to consider the cases $M<0$ and $M>0$ separately. For $M<0$, no resetting can occur and the constrained propagator is simply given by \begin{equation} G_r^M(x,t|x_0)=e^{-rt} G_0^M(x,t|x_0)\,, \label{G_M<0} \end{equation} where $e^{-rt}$ is the probability that no resetting occurs up to time $t$ and $G_0^M(x,t|x_0)$ is the constrained propagator of BM without resetting. 
The latter quantity can be computed using the method of images \cite{Redner_book} and reads \begin{equation}\label{eq:propagtor_r0} G^M_0(x,t|x_0)=\frac{1}{\sqrt{4\pi Dt}}\left(e^{-(x-x_0)^2 /(4Dt)}-e^{-(2M-x+x_0)^2 /(4Dt)}\right)\theta(M-x)\,. \end{equation} Setting $x=M-\epsilon$ and expanding to leading order for small $\epsilon$, we find \begin{equation} G^M_0(M-\epsilon,t|x_0)\simeq\frac{(M-x_0)\epsilon}{D t\sqrt{4\pi D t}}e^{-(M-x_0)^2/(4Dt)}\,. \end{equation} Taking a Laplace transform with respect to $t$ gives, to leading order in $\epsilon>0$, \begin{equation}\label{eq:tilde_G_0_expanded} \tilde{G}^M_0(M-\epsilon,s|x_0)\simeq\frac{\epsilon}{D}e^{-\sqrt{s/D}(M-x_0)}\,. \end{equation} Considering the Laplace transform of Eq.~\eqref{G_M<0} and using the expansion in Eq.~\eqref{eq:tilde_G_0_expanded}, we obtain \begin{equation}\label{eq:tilde_G_r_expanded_M<0} \tilde{G}^M_r(M-\epsilon,s|x_0)\simeq\frac{\epsilon}{D}e^{-\sqrt{(s+r)/D}(M-x_0)}\,, \end{equation} valid for $M<0$. On the other hand, in the case $M>0$, resetting events can occur and the propagator satisfies the renewal equation \begin{equation} G^M_r(x,t|x_0)=e^{-rt}G^M_0(x,t|x_0)+r \int_{0}^{t}d\tau\,e^{-r\tau}Q^M_r(x_0,t-\tau)G^M_0(x,\tau|0)\,. \end{equation} The first term on the right-hand side corresponds to the case where no resetting occurs. The second term corresponds to the case where the last resetting event occurs at time $t-\tau$ and the factor $Q^M_r(x_0,t-\tau)$ is the probability that the particle remains below position $M$ up to time $t-\tau$. Taking a Laplace transform with respect to $t$ yields \begin{equation} \tilde{G}^M_r(x,s|x_0)=\tilde{G}^M_0(x,s+r|x_0)+r \tilde{Q}_r^M(x_0,s)\tilde{G}^M_0(x,s+r|0)\,.
\end{equation} Finally, setting $x=M-\epsilon$, using Eq.~\eqref{LT_Qxs}, and expanding to leading order in $\epsilon$, we find \begin{equation}\label{eq:G_r_lt_final} \tilde{G}_r^M(M-\epsilon,s|x_0)\simeq\frac{\epsilon}{D}~\frac{\left[r+s\, e^{\sqrt{(s+r)/D}x_0} \right]}{\left[r+s\, e^{\sqrt{(s+r)/D}M}\right]}\,, \end{equation} which is valid for $M>0$. To summarize, we have shown that to leading order in $\epsilon$ \begin{equation} \tilde{G}_r^M(M-\epsilon,s|x_0)\approx\begin{cases} \dfrac{\epsilon}{D}~\dfrac{\left[r+s\, e^{\sqrt{(s+r)/D}x_0} \right]}{\left[r+s\, e^{\sqrt{(s+r)/D}M}\right]}\quad &\text{ for }M>0\,,\\ \\ \dfrac{\epsilon}{D}e^{-\sqrt{(s+r)/D}(M-x_0)}\quad &\text{ for }M<0\,. \end{cases} \label{G_final_expanded} \end{equation} We now have all the ingredients to compute the PDF $P(t_{\rm m}|T)$. Substituting the expressions for $P_{\rm st}(x_0)$, $\tilde{Q}_r^M(M-\epsilon,s)$, and $\tilde{G}_r^M(M-\epsilon,s|x_0)$, respectively given in Eqs.~\eqref{eq:stationary_resetting}, \eqref{Q_final_expanded}, and \eqref{G_final_expanded}, into Eq.~\eqref{Ptm_integral_LT_res} we obtain \begin{eqnarray} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{1}{2}\frac{\sqrt{r}}{D^2} \left\{\int_{-\infty}^{0}dM~\int_{-\infty}^{M}dx_0~e^{\sqrt{r/D} x_0}\right.\nonumber \\ &\times & \left. \frac{e^{-\sqrt{(s_1+r)/D}(M-x_0)}}{\sqrt{s_2+r}}+\int_{0}^{\infty}dM~\int_{-\infty}^{M}dx_0~e^{-\sqrt{r/D}|x_0|} \frac{\left[r+s_1\, e^{\sqrt{(s_1+r)/D}x_0} \right]}{\left[r+s_1\, e^{\sqrt{(s_1+r)/D}M}\right]}~\frac{\sqrt{s_2+r}}{\left[s_2+r e^{-\sqrt{(s_2+r)/D}M}\right]}\right\}\,, \end{eqnarray} where we recall that we integrate the initial position $x_0$ over the interval $(-\infty,M)$ because by definition the variable $M$ is the global maximum and hence $M>x_0$.
This expression can be simplified by performing the change of variables $(x_0,M)\to(w=x_0\sqrt{r/D},z=M\sqrt{r/D})$, which gives \begin{eqnarray} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{1}{2}\frac{1}{D\sqrt{r}} \left\{\int_{-\infty}^{0}dz~\int_{-\infty}^{z}dw~e^{w}\right.\nonumber \\ &\times & \left. \frac{e^{-(z-w)\sqrt{1+s_1/r}}}{\sqrt{s_2+r}}+\int_{0}^{\infty}dz~\int_{-\infty}^{z}dw~e^{-|w|} \frac{\left[r+s_1\, e^{w\sqrt{1+s_1/r}} \right]}{\left[r+s_1\, e^{z\sqrt{1+s_1/r}}\right]}~\frac{\sqrt{s_2+r}}{\left[s_2+r e^{-z\sqrt{1+s_2/r}}\right]}\right\}\,. \end{eqnarray} Computing the integrals over $w$, we get \begin{eqnarray} &&\int_{0}^{\infty}dt_1~e^{-s_1 t_1}\int_{0}^{\infty}dt_2~e^{-s_2 t_2}~P(t_{\rm m}=t_1|T=t_1+t_2)=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{1}{2}\frac{1}{D r} \Bigg\{ \frac{1}{(1+\sqrt{1+s_1/r})\sqrt{1+s_2/r}}\nonumber \\ &+& \frac{\sqrt{1+s_2/r}}{\sqrt{1+s_1/r}-1}\int_{0}^{\infty}dz~e^{-(1+\sqrt{1+s_1/r})z}~\frac{e^{z\sqrt{1+s_1/r}} s_1/r-\sqrt{1+s_1/r}+1} {\left(s_1/r+ e^{-z\sqrt{1+s_1/r}}\right)\left(s_2/r+ e^{-z\sqrt{1+s_2/r}}\right)}\Bigg\}\,. \label{eq:int_res} \end{eqnarray} In order to fix the normalization constant $\mathcal{N}(\epsilon)$, we set $s_1=s_2=s$ on both sides of Eq.~\eqref{eq:int_res}. The left-hand side can be evaluated by using the fact that the PDF $P(t_{\rm m}|T)$ is normalized to unity over $t_{\rm m}$ (see Eq.~\eqref{lhs}) and is equal to $1/s$. Evaluating the integrals on the right-hand side with Mathematica, we find \begin{equation} \frac1s=\lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]\frac{1}{D s}\,, \end{equation} and hence \begin{equation} \lim_{\epsilon\to 0}\left[\mathcal{N}(\epsilon)\epsilon^2\right]=D\,. 
\end{equation} Using this expression, we can finally write the PDF $P(t_{\rm m}|T)$ in the scaling form \begin{equation} P(t_{\rm m}|T)=rF_R(rt_{\rm m},r(T-t_{\rm m}))\,, \label{scaling_res} \end{equation} where \begin{eqnarray} \int_{0}^{\infty}dT_1~e^{-s_1 T_1}\int_{0}^{\infty}dT_2~e^{-s_2 T_2}~F_R(T_1,T_2)&=&\frac{1}{2} \frac{1}{(1+\sqrt{1+s_1})\sqrt{1+s_2}}\\&+&\frac12 \frac{\sqrt{1+s_2}}{\sqrt{1+s_1}-1}\int_{0}^{\infty}dz~e^{-(1+\sqrt{1+s_1})z}\frac{e^{z\sqrt{1+s_1}} s_1-\sqrt{1+s_1}+1} {\left(s_1+ e^{-z\sqrt{1+s_1}}\right)\left(s_2+ e^{-z\sqrt{1+s_2}}\right)}\,.\nonumber \label{FR_LT} \end{eqnarray} Interestingly, this expression is not invariant under exchange of $s_1$ and $s_2$. As a consequence, $F_R(T_1,T_2)\neq F_R(T_2,T_1)$ and thus the PDF $P(t_{\rm m}|T)$ is not symmetric around the midpoint $t_{\rm m}=T/2$. This is confirmed by numerical simulations (see Fig.~\ref{fig:res_ptm}). \begin{figure*}[t]\includegraphics[scale=0.7]{tmax_res.pdf} \caption{\label{fig:res_ptm} Probability density function $P(t_{\rm m}|T)$ as a function of the time $t_{\rm m}$ of the maximum, obtained from numerical simulations of resetting Brownian motion with $D=T=1$ and $r=10$. The vertical dashed line indicates the midpoint $t_{\rm m}=T/2$. As a consequence of the nonequilibrium nature of the process, the distribution $P(t_{\rm m}|T)$ is not symmetric around $t_{\rm m}=T/2$.} \end{figure*} \subsubsection{Expected time of the maximum} As a consequence of the asymmetry of the distribution of the time $t_{\rm m}$ of the maximum, the average value $\langle t_{\rm m} \rangle$ is different from $T/2$. Therefore, it is interesting to investigate the behavior of $\langle t_{\rm m} \rangle$ as a function of the total time $T$. The deviations of this quantity $\langle t_{\rm m} \rangle$ from the equilibrium value $T/2$ quantify the degree of asymmetry of the distribution and consequently the nonequilibrium nature of the process.
To study this average value, we take minus the derivative of both sides of Eq.~\eqref{FR_LT} with respect to $s_1$ and then set $s_1=s_2=s$, yielding \begin{eqnarray}\nonumber &&\int_{0}^{\infty}dT_1~T_1 e^{-s T_1}\int_{0}^{\infty}dT_2~e^{-s T_2}~F_R(T_1,T_2)= \frac{1}{4(1+s)(1+\sqrt{1+s})^2}\\&+& \frac{1}{4\sqrt{1+s}(\sqrt{1+s}-1)^2}\int_{0}^{1}du~u^{1/\sqrt{1+s}}\frac{s\left[3+s/u-2\sqrt{1+s}+(\sqrt{1+s}-1)\ln(u)\right]-2(\sqrt{1+s}-1)}{(s+u)^3}\,, \label{FR_LT_der} \end{eqnarray} where we have performed the change of variable $z\to u=e^{-\sqrt{1+s}\,z}$, i.e., $z=-\ln(u)/\sqrt{1+s}$. Let us first consider the left-hand side of Eq.~\eqref{FR_LT_der}, which can be rewritten as \begin{equation} \int_{0}^{\infty}dT_1~T_1 e^{-s T_1}\int_{0}^{\infty}dT_2~e^{-s T_2}~F_R(T_1,T_2)=\int_{0}^{\infty}d\tilde{T}~e^{-s \tilde{T}}\int_{0}^{\tilde{T}}dT_1~T_1~F_R(T_1,\tilde{T}-T_1)=\int_{0}^{\infty}d\tilde{T}~e^{-s \tilde{T}}\langle T_1 (\tilde{T})\rangle\,, \end{equation} where we have made the change of variable $(T_1,T_2)\to (T_1,\tilde{T}=T_1+T_2)$ and we have defined \begin{equation} \langle T_1 (\tilde{T})\rangle=\int_{0}^{\tilde{T}}dT_1~T_1~F_R(T_1,\tilde{T}-T_1)\,. \end{equation} Note that $T_1$ and $\tilde{T}=T_1+T_2$ respectively correspond to the rescaled time of the maximum $T_1=rt_{\rm m}$ and the rescaled total time $\tilde{T}=rT$ (see Eq.~\eqref{scaling_res}). The Laplace transform in Eq.~\eqref{FR_LT_der} can be inverted (see Appendix \ref{app:LI_2}), yielding \begin{equation} \langle T_1(\tilde{T})\rangle=\tilde{T} f(\tilde{T})\,. \end{equation} Reintroducing dimensions, this corresponds to \begin{equation} \label{avg} \langle t_{\rm m}(T)\rangle=T f(rT)\,,
\end{equation} where the scaling function $f(t)$ is given by \begin{eqnarray} \label{foft} f(t)&=&\frac{1}{96}\left[-4(2t^2+3t-18)+\frac{2}{\sqrt{\pi}}\frac{1}{\sqrt{t}}(3+16t+4t^2)e^{-t}+(-3-30t+36t^2+8t^3)\frac1t \operatorname{erf}(\sqrt{t})\right]\nonumber\\ &+&\frac{1}{2t}\left[e^{-t}-\frac{2}{\sqrt{\pi}}\Gamma\left(\frac32,t\right)\right]+\sum_{k=1}^{\infty} \frac1t g_k(t)\,, \end{eqnarray} and $\Gamma(a,t)=\int_{t}^{\infty}x^{a-1}e^{-x}\,dx$ is the upper incomplete Gamma function. The function $g_k(t)$ reads \begin{equation} g_k(t)=(-1)^k\frac12 (k+1)(k+2)\int_{0}^{t}d\tau\,h_k(t-\tau)\tau^{k+1}\left(\frac{1}{(k+1)!}+\frac{\tau}{(k+2)!}\right)\,, \end{equation} where \begin{eqnarray} \label{hk_1_} h_k(t)&=&\frac{1}{k^2}\left\{-e^{-t+t/k^2}k(1-k)^2+e^{-t}\frac{k\left[k(1+k)^3-2k^3 t\right]}{\sqrt{\pi t}(1+k)^3}\right\}+\frac{1}{k^2}\left[\operatorname{erf}\left(\frac{\sqrt{t}}{k}\right)e^{-t+t/k^2}(1-k)^2\right]\nonumber \\ &\times &\frac{1}{(1+k)^4}e^{-t+t/(1+k)^2}\left[(1+k)^2 (k^2-2)+2kt\right]\left[1-\operatorname{erf}\left(\frac{\sqrt{t}}{(k+1)}\right)\right]\,. \end{eqnarray} \begin{figure}[t]\includegraphics[scale=0.7]{avg_tmax.pdf} \caption{\label{fig:avg_tmax} The scaled average $\langle t_{\rm m}\rangle/T$ as a function of $rT$ for Brownian motion with resetting rate $r$. The symbols depict the results of numerical simulations (performed with $r=1$), while the continuous line corresponds to the analytical results in Eqs.~(\ref{avg}-\ref{hk_1_}). In the case of an equilibrium process, one expects $\langle t_{\rm m}\rangle/T=1/2$ for any $T$.} \end{figure} The exact result in Eqs.~\eqref{avg} and \eqref{foft} is shown in Fig.~\ref{fig:avg_tmax} and is in perfect agreement with numerical simulations. We observe that the ratio $\langle t_{\rm m}\rangle/T$ is manifestly different from the constant value $1/2$, signaling that the process violates detailed balance. Note also that the function $f(t)$ has a maximum at $t^*\approx 2.218$ with $f(t^*)\approx 0.519$.
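The location of this maximum can be probed by a direct Monte Carlo estimate of $\langle t_{\rm m}\rangle/T$ at $rT=t^*$ (an illustrative Euler-discretization sketch added here; the tolerance accounts for time-discretization and sampling errors).

```python
import numpy as np

# Monte Carlo estimate of <t_m>/T for resetting Brownian motion started in
# the steady state, at rT = t* ≈ 2.218 where f(t*) ≈ 0.519.
rng = np.random.default_rng(3)
D, r, T, dt, n = 1.0, 1.0, 2.218, 1e-3, 20_000

x = rng.laplace(scale=np.sqrt(D / r), size=n)   # steady-state initial condition
best, t_best = x.copy(), np.zeros(n)
for k in range(1, int(T / dt) + 1):
    x += np.sqrt(2.0 * D * dt) * rng.standard_normal(n)
    x[rng.random(n) < r * dt] = 0.0             # resetting with rate r
    new = x > best                              # update running maximum
    best[new], t_best[new] = x[new], k * dt

# tolerance accounts for discretization and sampling errors
assert abs(t_best.mean() / T - 0.519) < 0.04
```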
Thus, keeping $T$ fixed, there exists a value of the resetting rate $r$ that maximizes the deviation from the equilibrium result. \subsubsection{Time asymptotics} Although it is quite challenging to invert exactly the double Laplace transform in Eq.~\eqref{FR_LT}, this expression can be used to extract the asymptotic behavior of the distribution $P(t_{\rm m}|T)$ in the limit of short times ($T\ll \xi$) and late times ($T\gg \xi$). Here, the correlation time $\xi$ of the process is $\xi=1/r$, where $r$ is the resetting rate. This quantity $\xi$ represents the typical time between two consecutive resetting events. When $T\ll 1/r$, hardly any resetting event has occurred and we expect to recover the results obtained for Brownian motion without resetting. On the other hand, for $T\gg 1/r$, the positions of the process at different times become uncorrelated and we expect the distribution $P(t_{\rm m}|T)$ to become uniform in the interval $[0,T]$, with corrections for $t_{\rm m}\to 0$ and $t_{\rm m} \to T$. To investigate the short time regime $T\ll 1/r$, we take the limit $s_1,s_2\to \infty$ on the right-hand side of Eq.~\eqref{FR_LT} \begin{eqnarray} \int_{0}^{\infty}dT_1~e^{-s_1 T_1}\int_{0}^{\infty}dT_2~e^{-s_2 T_2}~F_R(T_1,T_2)\approx\frac{1}{2} \frac{1}{\sqrt{s_1}\sqrt{s_2}}+\frac12 \frac{1}{\sqrt{s_1}\sqrt{s_2}}\int_{0}^{\infty}dz~e^{-z}\,, \label{FR_LT_short_time} \end{eqnarray} and hence \begin{equation} \int_{0}^{\infty}dT_1~e^{-s_1 T_1}\int_{0}^{\infty}dT_2~e^{-s_2 T_2}~F_R(T_1,T_2)\approx \frac{1}{\sqrt{s_1}\sqrt{s_2}}\,. \end{equation} Using the Laplace inversion in Eq.~\eqref{inv_lapl_sqrt}, we get \begin{equation} F_R(T_1,T_2)\approx\frac{1}{\pi\sqrt{T_1T_2}}\,. \end{equation} Hence, for $T\ll1/r$, we obtain \begin{equation} P(t_{\rm m}|T)\approx\frac{1}{\pi\sqrt{t_{\rm m}(T-t_{\rm m})}}\,, \end{equation} which is once again the L\'evy arcsine law \cite{Levy}, i.e., the distribution of the time of the maximum for a free BM, as expected.
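This short-time behavior is easy to check by direct simulation. The following sketch (an illustration added here, not part of the original analysis; the discretization step and sample size are arbitrary choices) generates resetting Brownian trajectories with $rT=0.1$ and verifies that the distribution of $t_{\rm m}/T$ is symmetric and U-shaped, as the arcsine law predicts:

```python
import math
import random

def time_of_max(r, T, dt, rng):
    """Simulate resetting Brownian motion started at x=0 and return the time of its maximum."""
    n = int(T / dt)
    x, xmax, tmax = 0.0, 0.0, 0.0
    for i in range(1, n + 1):
        if rng.random() < r * dt:              # resetting event: back to the origin
            x = 0.0
        else:                                   # diffusive step (diffusion constant 1/2 here)
            x += math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x > xmax:
            xmax, tmax = x, i * dt
    return tmax

rng = random.Random(0)
r, T, dt = 1.0, 0.1, 1e-3                       # rT = 0.1: short-time regime
fracs = [time_of_max(r, T, dt, rng) / T for _ in range(4000)]

mean = sum(fracs) / len(fracs)
edges = sum(1 for f in fracs if f < 0.2 or f > 0.8) / len(fracs)
center = sum(1 for f in fracs if 0.4 <= f <= 0.6) / len(fracs)
print(mean, edges, center)
```

The arcsine density $1/(\pi\sqrt{f(1-f)})$ puts far more weight near $f=0$ and $f=1$ than near $f=1/2$, which is what the edge and center counts probe.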
It turns out that the late time regime $T\gg 1/r$ is somewhat similar to the one described in Section \ref{sec:eq} for equilibrium processes. Indeed, there are three distinct regimes, depending on $t_{\rm m}$: the left edge regime, where $t_{\rm m}\sim 1/r$, the bulk regime, where $1/r\ll t_{\rm m}\ll (T-1/r)$, and the right edge, where $T-t_{\rm m}\sim 1/r$. We first focus on the left edge, corresponding to $T_1=r t_{\rm m}\sim O(1)$ and $T_2=r(T-t_{\rm m})\to \infty$. Performing the change of variable $z\to u=\exp(-\sqrt{1+s_2}z)$ in Eq.~(\ref{FR_LT}), we obtain \begin{eqnarray}\label{eq:LT_F_edge1_res} &&\tilde{F}_R(s_1,s_2)\equiv \int_{0}^{\infty}dT_1\,\int_{0}^{\infty}dT_2\,e^{-s_1 \,T_1-s_2 \,T_2} F_R(T_1,T_2)=\frac{1}{2} \frac{1}{\sqrt{1+s_2}\left(1+\sqrt{1+s_1}\right)}\\ &+&\frac12\frac{1}{\sqrt{s_1+1}-1}\int_{0}^{1}du\,\frac{u^{(1+\sqrt{1+s_1})/\sqrt{1+s_2}-1}\left(s_1 u^{-\sqrt{(1+s_1)/(1+s_2)}}-\sqrt{s_1+1}+1 \right)}{\left(s_1+u^{\sqrt{(1+s_1)/(1+s_2)}}\right)\left(s_2+u\right)}\,.\nonumber \end{eqnarray} To investigate the limit of large $T_2$, we expand the right-hand side of Eq. (\ref{eq:LT_F_edge1_res}) for small $s_2$ and we obtain \begin{eqnarray}\label{eq:LT_F_edge1_res2} \tilde{F}_R(s_1,s_2)\approx\frac12\frac{1}{\sqrt{s_1+1}-1}\int_{0}^{1}du\,\frac{s_1 +(1-\sqrt{s_1+1})u^{\sqrt{1+s_1}}}{\left(s_1+u^{\sqrt{1+s_1}}\right)\left(s_2+u\right)}\,. \end{eqnarray} We observe that the expression on the right-hand side of Eq. (\ref{eq:LT_F_edge1_res2}) has a pole at $s_2=-u$, thus the integral over $u$ will be dominated by small values of $u$. We therefore neglect the term $u^{\sqrt{1+s_1}}$ and obtain \begin{eqnarray}\label{eq:LT_F_edge1_res3} \tilde{F}_R(s_1,s_2)\approx\frac12\frac{1}{\sqrt{s_1+1}-1}\int_{0}^{1}du\,\frac{1}{s_2+u}\,.
\end{eqnarray} Using the Laplace-inversion formula in Eq.~\eqref{G_LT} to invert the double Laplace transform, we obtain \begin{equation} F_R(T_1,T_2)\simeq G(T_1)\int_{0}^{1}du~e^{-u T_2}\,, \end{equation} where \begin{equation}\label{eq:G} G(z)= \frac{1}{2}\left[ 1+ {\rm erf}\left(\sqrt{z}\right)+ \frac{1}{\sqrt{\pi z}}\, e^{-z}\right]\, . \end{equation} Finally, computing the integral over $u$, which gives $\int_{0}^{1}du~e^{-u T_2}=(1-e^{-T_2})/T_2\approx 1/T_2\approx 1/T$ for large $T_2$, we find that, for $T_1\sim \mathcal{O}(1)$ and $T_2\to \infty$, \begin{equation}\label{eq:edge1_res} F_R(T_1,T_2)\simeq \frac1T G(T_1)\,. \end{equation} Interestingly, we find that the same scaling function $G(z)$ that describes the left-edge behavior for equilibrium processes in Section \ref{sec:eq} (see Eq.~\eqref{universal_G}) also describes the left-edge behavior for this out-of-equilibrium Brownian resetting process. Let us now consider the right edge, i.e., the limit $T_1\to \infty$ with $T_2=T-T_1\sim O(1)$. Performing the change of variable $z\to u=\exp(-\sqrt{1+s_1}z)$ in Eq. (\ref{FR_LT}), we obtain \begin{eqnarray}\label{eq:LT_F_edge2_res} \tilde{F}_R(s_1,s_2)=\frac{1}{2} \frac{1}{\sqrt{1+s_2}\left(1+\sqrt{1+s_1}\right)}+\frac12\frac{1}{\sqrt{s_1+1}-1}\sqrt{\frac{1+s_2}{1+s_1}}\int_{0}^{1}du\,\frac{u^{1/\sqrt{1+s_1}}\left(\frac{s_1}{u}-\sqrt{1+s_1}+1 \right)}{\left(s_1+u\right)\left(s_2+u^{\sqrt{(1+s_2)/(1+s_1)}}\right)}\,. \end{eqnarray} For small values of $s_1$ the integral on the right-hand side of Eq. (\ref{eq:LT_F_edge2_res}) is dominated by small values of $u$. Thus, expanding for small $s_1$ and small $u$, we get \begin{eqnarray}\label{eq:LT_F_edge2_res2} \tilde{F}_R(s_1,s_2)\approx\frac{\sqrt{s_2+1}}{s_2}\int_{0}^{1}du\,\frac{1}{s_1+u}\,. \end{eqnarray} Inverting the double Laplace transform with Eq.~\eqref{G_LT}, we find that when $T_1\to\infty$ with $T_2\sim O(1)$ \begin{equation}\label{eq:edge2_res} F_R(T_1,T_2)\approx \frac{1}{T}\left(2 G(T_2)-1\right)\,, \end{equation} where $G(T_2)$ is given in Eq. (\ref{eq:G}).
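Both edge profiles are controlled by the function $G(z)$ of Eq.~\eqref{eq:G}, so its two limits are worth verifying explicitly. The following minimal Python check (an illustration added here, not part of the original analysis) confirms, directly from the explicit form in Eq.~\eqref{eq:G}, that $G(z)\to 1$ for large $z$ and that $2G(z)-1$ diverges as $1/\sqrt{\pi z}$ for small $z$:

```python
import math

def G(z):
    """Scaling function G(z) = (1/2)[1 + erf(sqrt(z)) + exp(-z)/sqrt(pi*z)]."""
    return 0.5 * (1.0 + math.erf(math.sqrt(z)) + math.exp(-z) / math.sqrt(math.pi * z))

# Large z: G(z) -> 1, so the left-edge profile G(T_1)/T smoothly matches the flat bulk value 1/T.
print(G(10.0))

# Small z: 2*G(z) - 1 behaves as 1/sqrt(pi*z), i.e., a square-root divergence.
z = 1e-8
print((2.0 * G(z) - 1.0) * math.sqrt(math.pi * z))
```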
The bulk regime corresponds instead to the limit $s_1,s_2\to 0$ in Eq.~\eqref{FR_LT}, yielding \begin{equation} \tilde{F}_R(s_1,s_2)\approx \int_{0}^{\infty}dz~e^{-z}\frac{1}{(s_1+e^{-z})(s_2+e^{-z})}\,. \end{equation} Inverting the double Laplace transform, we obtain \begin{equation} F_R(T_1,T_2)\approx \int_{0}^{\infty}dz~e^{-z}e^{-(T_1+T_2)e^{-z}}=\frac{1-e^{-(T_1+T_2)}}{T_1+T_2}\approx\frac{1}{T_1+T_2}\,, \end{equation} corresponding to the flat distribution \begin{equation} P(t_{\rm m}|T)\approx\frac{1}{T}\,. \end{equation} To summarize, we have shown that for $T\gg 1/r$ \begin{equation} P(t_{\rm m}|T)\approx\begin{cases} \frac1T G(rt_{\rm m})\quad &\text{ for }t_{\rm m}\ll1/r\,\\ \\ \frac1T\quad &\text{ for }1/r\ll t_{\rm m}\ll(T-1/r)\\ \\ \frac1T\left[2 G(r(T-t_{\rm m}))-1\right]\quad &\text{ for }T-t_{\rm m}\ll1/r\,,\\ \end{cases} \end{equation} where the function $G(z)$ is given in Eq.~\eqref{eq:G}. The late-time shape of the distribution $P(t_{\rm m}|T)$ is remarkably similar to that of confined Brownian motion (see Eq.~\eqref{universal_G}). However, due to the nonequilibrium nature of the process, this distribution is not symmetric around the midpoint $t_{\rm m}=T/2$. In particular, using the asymptotics of the function $G(z)$, given in Eq.~\eqref{G_asym}, we find that for $t_{\rm m}\to 0$ and $T\gg 1/r$ \begin{equation} P(t_{\rm m}|T)\approx\frac{1}{T\sqrt{2\pi rt_{\rm m}}}\,. \end{equation} On the other hand, for $t_{\rm m} \to T$ and $T\gg 1/r$ we find \begin{equation} P(t_{\rm m}|T)\approx\frac{\sqrt{2}}{T\sqrt{\pi r(T-t_{\rm m})}}\,. \end{equation} \subsection{Run-and-tumble particle in a confining potential} \label{sec:rtp} \begin{figure}[t] \includegraphics[scale=0.55]{RTP.pdf} \includegraphics[scale=0.55]{sigma.pdf} \caption{\label{fig:rtp_realiz} {\bf Left panel}: Typical trajectory of the position $x(\tau)$ of an RTP in a confining potential $V(x)=\mu |x|$ as a function of time $\tau$. The position of the particle reaches the maximal value $M$ at time $t_{\rm m}$.
{\bf Right panel}: Realization of the telegraphic noise $\sigma(\tau)$, switching sign with rate $\gamma$.} \end{figure} In this section, we investigate the time $t_{\rm m}$ of the maximum for the RTP process. This model first appeared in the stochastic-processes literature as the ``persistent random walk'' \cite{kac1974stochastic,stadje1987exact,orsingher1990probability,weiss2002some}. More recently, this process was exploited to describe the persistent motion of a class of bacteria, including \emph{E. coli} \cite{berg2004coli}, which move along a fixed direction (they ``run''), randomizing their orientation (they ``tumble'') at random times. Quite remarkably, such a simple model displays several nontrivial features, including clustering at the boundaries in a confining domain \cite{bechinger2016active}, non-Boltzmann steady-state distributions \cite{sevilla2019stationary,dhar2019run}, and jamming \cite{slowman2016jamming,metson2020jamming}. The distribution of the time of the maximum for a free RTP has been investigated in \cite{SK19,MLD20a,MLD20}. Here we focus instead on the case of an RTP in a one-dimensional potential $V(x)$, since we want to study a stationary version of this active process. In particular, we consider a single RTP moving on a line and subject to the confining potential $V(x)=\mu |x|$. The evolution of the position $x$ of the particle can be described by the following Langevin equation \begin{equation} \frac{dx}{dt}=f(x)+v_0~\sigma(t)\,, \label{eq:langevin_RTP} \end{equation} where $v_0>0$ is the speed of the particle and $f(x)=-V'(x)$ is the external force. The term $\sigma(t)=\pm 1$ is a telegraphic noise, describing the direction of the particle. We assume that $\sigma(t)$ flips its sign with constant rate $\gamma$. A typical realization of this process is shown in Fig.~\ref{fig:rtp_realiz}. The persistent nature of the motion of the particle drives the system out of equilibrium.
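Equation~\eqref{eq:langevin_RTP} is also easy to integrate numerically. The sketch below (illustrative only; the parameters $\gamma=\mu=1$, $v_0=2$, satisfying $v_0>\mu$, and the discretization are arbitrary choices) evolves a single RTP in $V(x)=\mu|x|$ with a simple Euler scheme and checks that $\langle |x|\rangle$ approaches $(v_0^2-\mu^2)/(2\gamma\mu)$, the mean of the exponential steady state quoted in Eq.~\eqref{eq:stationary_RTP_x} below:

```python
import math
import random

# Illustrative parameters: gamma = mu = 1, v0 = 2 satisfy v0 > mu, so a steady
# state exists, with stationary mean <|x|> = (v0^2 - mu^2) / (2*gamma*mu) = 1.5.
gamma, mu, v0 = 1.0, 1.0, 2.0
dt, n_steps, burn_in = 0.01, 600_000, 100_000

rng = random.Random(1)
x, sigma = 0.0, 1                 # position and telegraphic-noise state
total, count = 0.0, 0
for step in range(n_steps):
    if rng.random() < gamma * dt:  # tumbling event: the direction flips
        sigma = -sigma
    force = -mu * math.copysign(1.0, x) if x != 0.0 else 0.0
    x += (force + v0 * sigma) * dt  # Euler step of the Langevin equation
    if step >= burn_in:
        total += abs(x)
        count += 1

mean_abs_x = total / count
print(mean_abs_x)                  # close to 1.5
```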
Indeed, in a small time interval $dt$, the system can go from the state $(x,+1)$, i.e., position $x$ and positive direction, to the state $(x+dt(f(x)+v_0),+1)$. However, the inverse transition from $(x+dt(f(x)+v_0),+1)$ to $(x,+1)$ is not possible, inducing probability currents in phase space. Computing analytically the distribution of $t_{\rm m}$ for arbitrary $V(x)$ appears to be challenging. For this reason, we focus on the case $V(x)=\mu |x|$, which can be solved exactly. As we will show, even though it is possible to compute exactly the double Laplace transform of $P(t_{\rm m}|T)$ with respect to $t_{\rm m}$ and $T-t_{\rm m}$, the resulting expression is quite cumbersome, and even extracting its asymptotics is hard. Nevertheless, from this exact computation one can easily check whether or not the distribution of $t_{\rm m}$ satisfies the symmetry $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. This is precisely the goal of this section. Since this symmetry is always present for equilibrium processes, while it is not satisfied by RBM, it is natural to ask whether or not this property is present in the case of nonequilibrium active particles. We also assume that $v_0>\mu$, since in the opposite case $v_0\leq \mu$ no steady state exists \cite{SKM_19}. For $v_0>\mu$, the stationary distribution of the position is given by \cite{SKM_19} \begin{equation} P_{\rm st}(x_0)=\frac{\gamma~\mu}{v_0^2-\mu^2}\exp\left(-\frac{2\gamma\mu}{v_0^2-\mu^2}|x_0|\right)\,. \label{eq:stationary_RTP_x} \end{equation} It is useful to define also the joint stationary distribution $P_{\rm st}^{\sigma}(x_0)$ of the position $x_0$ and of the direction $\sigma=\pm $ of the particle, which is given by \cite{SKM_19} \begin{equation} P_{\rm st}^{\pm }(x_0)=\frac12 \left(1\pm \frac{\mu}{v_0}\operatorname{sign}(x_0)\right)\frac{\gamma~\mu}{v_0^2-\mu^2}\exp\left(-\frac{2\gamma\mu}{v_0^2-\mu^2}|x_0|\right)\,.
\label{eq:stationary_RTP_x_s} \end{equation} We assume that at the initial time the position $x_0$ and the direction $\sigma=\pm$ of the particle are drawn from the joint stationary distribution in Eq.~\eqref{eq:stationary_RTP_x_s}, and that the particle then evolves according to Eq.~\eqref{eq:langevin_RTP} up to time $T$. We are interested in the distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ at which the maximum of the position is reached. To compute this quantity, we will exploit a path-decomposition technique similar to the one described in the previous sections. Note that the events $t_{\rm m}=0$ and $t_{\rm m}=T$ happen with a finite probability and have to be considered separately. Let us first consider the case $0<t_{\rm m}<T$. As a consequence of the persistent motion of the particle, the time of the maximum coincides with a tumbling event if $0<t_{\rm m}<T$ (see Fig.~\ref{fig:rtp_realiz}). Thus, we can divide the time interval $[0,T]$ into the three subintervals $[0,t_{\rm m}]$ (I), $[t_{\rm m},t_{\rm m}+\delta]$ (II), where $\delta$ is assumed to be small, and $[t_{\rm m}+\delta,T]$ (III). In the interval (I), the particle starts from position $x_0$ with direction $\sigma$, stays below the maximal value $M$, and reaches $M$ for the first time at time $t_{\rm m}$. Note that, since we are constraining the particle to remain below position $M$, it can only arrive at position $M$ with positive velocity. Thus, the probability weight of the first interval can be written as $G^{+}_M(M,t_{\rm m}|x_0,\sigma)$, where the constrained propagator $G^{\pm}_M(x,t|x_0,\sigma)$ is defined as the probability that the particle reaches position $x$ with direction $\pm$ at time $t$ while always remaining below position $M$, having started from position $x_0$ with direction $\sigma$. In the short time interval (II), the particle has to tumble, i.e., to change its direction from positive to negative.
Since we assume that the tumbling events happen with a constant rate $\gamma$ and that $\delta$ is small, the probability weight of this interval is $\gamma \delta$. Finally, in the interval $[t_{\rm m}+\delta,T]$ the particle starts from position $M$ with negative velocity and remains below position $M$ up to time $T$. Thus, the weight of this last interval is $Q^{-}_M(M,T-t_{\rm m})$, where the survival probability $Q^{\sigma}_M(x,t)$ is defined as the probability that the particle remains below position $M$ up to time $t$, starting from position $x$ with direction $\sigma$. Since the joint process $(x,\sigma)$ is Markov, the distribution of $t_{\rm m}$ can be written as the product of the three probability weights corresponding to the three time intervals. Thus, integrating over the initial position $x_0$, the maximal value $M>x_0$ and summing over the initial direction $\sigma$, we obtain \begin{equation} P(t_{\rm m}|T)=A~\gamma~\sum_{\sigma=\pm }\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{\sigma}(x_0)~\int_{x_0}^{\infty}dM~G^{+}_M(M,t_{\rm m}|x_0,\sigma)~Q_M^{-}(M,T-t_{\rm m})\,, \label{eq:PDF_RTP_bulk} \end{equation} where $A$ is a normalization constant. Note that the small time interval $\delta$ is included in the normalization constant $A$. As anticipated, the events $t_{\rm m}=0$ and $t_{\rm m}=T$ happen with non-zero probability. In particular, the event $t_{\rm m}=0$ will happen when the particle starts from position $x_0$ with a negative velocity and remains below its starting position $x_0$ up to time $T$. Thus, since the starting position and direction are drawn from the stationary distribution $P_{\rm st}^{\sigma}(x_0)$, integrating over $x_0$ we obtain \begin{equation} {\rm Prob}.(t_{\rm m}=0|T)=\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{-}(x_0)Q_{x_0}^{-}(x_0,T)\,. \label{eq:edge0} \end{equation} On the other hand, the event ``$t_{\rm m}=T$'' happens when the particle reaches the maximum $M$ at the final time $T$. 
Since the particle is constrained to stay below position $M$, it can only reach the maximum coming from below, with a positive direction. Thus, integrating over the initial position $x_0$ and summing over the direction $\sigma$, we find \begin{equation} {\rm Prob}.(t_{\rm m}=T|T)=\sum_{\sigma=\pm}\int_{-\infty}^{\infty}dx_0~\int_{x_0}^{\infty}dM~P_{\rm st}^{\sigma}(x_0)~G^{+}_M(M,T|x_0,\sigma)\,. \label{eq:edge1} \end{equation} Combining Eqs.~\eqref{eq:PDF_RTP_bulk}, \eqref{eq:edge0}, and \eqref{eq:edge1}, we find that for $0\leq t_{\rm m}\leq T$ \begin{eqnarray} \nonumber && P(t_{\rm m}|T)=A~\gamma~\sum_{\sigma=\pm }\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{\sigma}(x_0)~\int_{x_0}^{\infty}dM~G^{+}_M(M,t_{\rm m}|x_0,\sigma)~Q_M^{-}(M,T-t_{\rm m})\\ &+ & \delta\left(t_{\rm m}\right)\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{-}(x_0)Q_{x_0}^{-}(x_0,T)+\delta\left(t_{\rm m}-T\right)\sum_{\sigma=\pm}\int_{-\infty}^{\infty}dx_0~\int_{x_0}^{\infty}dM~P_{\rm st}^{\sigma}(x_0)~G^{+}_M(M,T|x_0,\sigma)\,,\label{eq:PDF_RTP} \end{eqnarray} where the constant $A$ can be computed from the normalization condition \begin{equation} \int_{0}^{T}dt_{\rm m}~P(t_{\rm m}|T)=1\,. \end{equation} We now need to compute the constrained propagator $G_M^{\pm}(x,t|x_0,\sigma)$ and the survival probability $Q^{\pm}_M(x,t)$. These quantities can be obtained by solving the Fokker-Planck equations associated with the system. We first consider the constrained propagator $G_M^{\pm}(x,t|x_0,\sigma)$.
It is possible to show that $G_M^{+}(x,t|x_0,\sigma)$ and $G_M^{-}(x,t|x_0,\sigma)$ evolve according to the following coupled forward Fokker-Planck equations \cite{SKM_19} \begin{equation} \begin{cases} \partial_tG_M^{+}(x,t|x_0,\sigma)=- \partial_x\left[\left(-\mu \operatorname{sign}(x)+v_0\right)G_M^{+}(x,t|x_0,\sigma)\right]-\gamma ~G_M^{+}(x,t|x_0,\sigma)+\gamma ~G_M^{-}(x,t|x_0,\sigma)\\ \\ \partial_tG_M^{-}(x,t|x_0,\sigma)=- \partial_x\left[\left(-\mu \operatorname{sign}(x)-v_0\right)G_M^{-}(x,t|x_0,\sigma)\right]-\gamma~ G_M^{-}(x,t|x_0,\sigma)+\gamma~ G_M^{+}(x,t|x_0,\sigma) \end{cases} \label{FP_RTP_G} \end{equation} with initial condition \begin{equation} G_M^{\pm}(x,t=0|x_0,\sigma)=\delta\left(x-x_0\right)\delta_{\sigma,\pm}\,, \label{initial_condition_RTP} \end{equation} and boundary conditions \begin{equation} \begin{cases} G_M^{\pm}(-\infty,t|x_0,\sigma)=0\\ \\ G_M^{-}(M,t|x_0,\sigma)=0\,. \end{cases} \label{bc_RTP} \end{equation} The boundary condition on the second line of Eq.~\eqref{bc_RTP} can be understood as follows. If a particle arrives at position $M$ with a negative velocity, it must have already visited the region $x>M$. However, we are constraining the particle to remain below the position $M$. Thus, $G_M^{-}(M,t|x_0,\sigma)$ has to be zero. Note that $G_M^{+}(M,t|x_0,\sigma)$ remains instead unspecified. We limit our discussion to the case $\sigma=+$, i.e., we assume that the particle starts with a positive velocity. The complementary case $\sigma=-$ can be treated similarly. 
Taking a Laplace transform with respect to $t$ on both sides of Eqs.~\eqref{FP_RTP_G} and using the initial condition in Eq.~\eqref{initial_condition_RTP}, we obtain \begin{equation} \begin{cases} s\tilde{G}_M^{+}(x,s|x_0,+)-\delta(x-x_0)=- \partial_x\left[\left(-\mu \operatorname{sign}(x)+v_0\right)\tilde{G}_M^{+}(x,s|x_0,+)\right]-\gamma ~\tilde{G}_M^{+}(x,s|x_0,+)+\gamma ~\tilde{G}_M^{-}(x,s|x_0,+)\\ \\ s\tilde{G}_M^{-}(x,s|x_0,+)=- \partial_x\left[\left(-\mu \operatorname{sign}(x)-v_0\right)\tilde{G}_M^{-}(x,s|x_0,+)\right]-\gamma~ \tilde{G}_M^{-}(x,s|x_0,+)+\gamma~ \tilde{G}_M^{+}(x,s|x_0,+) \end{cases}\,, \label{LT_FP_RTP_G} \end{equation} where we have defined the Laplace transform \begin{equation} {\tilde G}_M^{\pm}(x,s|x_0,\sigma)=\int_{0}^{\infty}dt~e^{-st} G_M^{\pm}(x,t|x_0,\sigma)\,. \end{equation} The boundary conditions of the differential equations \eqref{LT_FP_RTP_G} can be obtained from Eq.~\eqref{bc_RTP} and are given by \begin{equation} \begin{cases} \tilde{G}_M^{\pm}(-\infty,s|x_0,\sigma)=0\,,\\ \\ \tilde{G}_M^{-}(M,s|x_0,\sigma)=0\,.
\label{boundary_condition_RTP} \end{cases} \end{equation} The solution of the coupled ordinary differential equations \eqref{LT_FP_RTP_G} is presented in Appendix \ref{app:prop}, where we show that \begin{equation} {\tilde G}_M^{+}(M,s|x_0,+)=\begin{cases} \displaystyle \dfrac{1}{v_0+\mu}e^{-(k-(s+\gamma)\mu)(M-x_0)/(v_0^2-\mu^2)}&\;{\rm for}\;x_0<0~,M<0\,,\\ \\ \displaystyle k \dfrac{e^{-(\mu(s+\gamma)+k)M/(v_0^2-\mu^2)}~e^{(-\mu(s+\gamma)+k)x_0/(v_0^2-\mu^2)}}{v_0(k-\mu(\gamma+s))+\mu(v_0(\gamma+s)-k)e^{-2kM/(v_0^2-\mu^2)}}&\;{\rm for}\;x_0<0~,M>0\,,\\ \\ \displaystyle \dfrac{1}{v_0-\mu} ~\dfrac{(k-v_0(s+\gamma))\mu+e^{2kx_0/(v_0^2-\mu^2)}v_0((s+\gamma)\mu-k)}{(k-v_0(s+\gamma))\mu+e^{2kM/(v_0^2-\mu^2)}v_0((s+\gamma)\mu-k)}\\ \times e^{(k-\mu(s+\gamma))(M-x_0)/(v_0^2-\mu^2)}&\;{\rm for}\;x_0>0~,M>0\,,\\ \end{cases} \label{G_RTP_A} \end{equation} where we have defined \begin{equation} k=\sqrt{s^2v_0^2+2sv_0^2\gamma+\gamma^2\mu^2}\,. \label{eq:k} \end{equation} Following the same steps in the case $\sigma=-$, we obtain \begin{equation} {\tilde G}_M^{+}(M,s|x_0,-)=\begin{cases} \displaystyle \dfrac{v_0(\gamma+s)-k}{\gamma(v_0^2-\mu^2)}e^{-(k-(s+\gamma)\mu)(M-x_0)/(v_0^2-\mu^2)}&\;{\rm for}\;x_0<0~,M<0\,,\\ \\ \displaystyle \dfrac{k(v_0(\gamma+s)-k)}{\gamma(v_0-\mu)} \dfrac{e^{-(\mu(s+\gamma)+k)M/(v_0^2-\mu^2)}~e^{(-\mu(s+\gamma)+k)x_0/(v_0^2-\mu^2)}}{v_0(k-\mu(\gamma+s))+\mu(v_0(\gamma+s)-k)e^{-2kM/(v_0^2-\mu^2)}}&\;{\rm for}\;x_0<0~,M>0\,,\\ \\ \displaystyle \dfrac{v_0(s+\gamma)-k}{\gamma(v_0^2-\mu^2)} ~\dfrac{(k-v_0(s+\gamma))\mu+e^{2kx_0/(v_0^2-\mu^2)}v_0((s+\gamma)\mu-k)}{(k-v_0(s+\gamma))\mu+e^{2kM/(v_0^2-\mu^2)}v_0((s+\gamma)\mu-k)}\\ \times e^{(k-\mu(s+\gamma))(M-x_0)/(v_0^2-\mu^2)}&\;{\rm for}\;x_0>0~,M>0\,,\\ \end{cases} \label{G_RTP_B} \end{equation} We now want to compute the survival probability $Q_M^{\pm}(x,t)$, defined as the probability to remain below position $M$ up to time $t$, starting from position $x$ with initial direction $\pm$. 
It is possible to show that $Q_M^{+}(x,t)$ and $Q_M^{-}(x,t)$ satisfy the following backward Fokker-Planck equations \cite{SKM_19} \begin{equation} \begin{cases} \partial_t~Q_M^{+}(x,t)= \left(-\mu \operatorname{sign}(x)+v_0\right)\partial_x Q_M^{+}(x,t)-\gamma ~Q_M^{+}(x,t)+\gamma ~Q_M^{-}(x,t)\,,\\ \\ \partial_t~Q_M^{-}(x,t)=\left(-\mu \operatorname{sign}(x)-v_0\right)\partial_x Q_M^{-}(x,t)-\gamma ~Q_M^{-}(x,t)+\gamma ~Q_M^{+}(x,t)\end{cases} \label{FP_RTP_Q} \end{equation} with initial condition \begin{equation} Q_M^{\pm}(x,t=0)=1\,, \label{initial_condition_Q_RTP} \end{equation} for any $x<M$. The boundary conditions in this case are given by \begin{equation} \begin{cases} Q_M^{\pm}(-\infty,t)=1\,,\\ Q_M^{+}(M,t)=0\,. \end{cases} \label{boundary_condition_RTP_2} \end{equation} The first boundary condition means that if the particle starts infinitely far from the absorbing barrier at $x=M$, it will never go above position $M$ in a finite time. The second boundary condition encodes that if the particle starts at $M$ with a positive velocity, it will immediately go above $M$. Note that in this case the boundary condition for $Q_M^{-}(M,t)$ remains unspecified. It is useful to perform a Laplace transform with respect to $t$ of the equations \eqref{FP_RTP_Q}. Using the initial condition in Eq.~\eqref{initial_condition_Q_RTP}, we obtain \begin{equation} \begin{cases} s\tilde{Q}_M^{+}(x,s)-1= \left(-\mu \operatorname{sign}(x)+v_0\right)\partial_x\tilde{Q}_M^{+}(x,s)+\gamma ~\tilde{Q}_M^{-}(x,s)-\gamma ~\tilde{Q}_M^{+}(x,s)\,,\\ \\ s~\tilde{Q}_M^{-}(x,s)-1= \left(-\mu \operatorname{sign}(x)-v_0\right)\partial_x\tilde{Q}_M^{-}(x,s)+\gamma ~\tilde{Q}_M^{+}(x,s)-\gamma ~\tilde{Q}_M^{-}(x,s)\,,\end{cases} \label{FP_RTP_Q_LT} \end{equation} where we have defined \begin{equation} \tilde{Q}^{\pm}_M(x,s)=\int_{0}^{\infty}dt ~e^{-st}Q^{\pm}_M(x,t)\,.
\end{equation} In Laplace space, the boundary conditions in Eq.~\eqref{boundary_condition_RTP_2} become \begin{equation} \begin{cases} \tilde{Q}_M^{\pm}(-\infty,s)=1/s\,,\\ \tilde{Q}_M^{+}(M,s)=0\,. \end{cases} \label{boundary_condition_RTP_LT} \end{equation} The solution of Eq.~\eqref{FP_RTP_Q_LT} is presented in Appendix \ref{app:surv}, where we show that \begin{equation} {\tilde Q}_M^{-}(M,s)=\begin{cases} \displaystyle \dfrac{1}{s}\dfrac{k+v_0s-\gamma\mu}{k+v_0(s+\gamma)}&\;{\rm for}\;M<0\,,\\ \\ \displaystyle \dfrac{1}{s}\dfrac{1}{k+v_0(s+\gamma)}\left[k+v_0s+\mu\gamma-\dfrac{2k\gamma\mu(v_0-\mu)}{(v_0(s+\gamma)-k)\mu+v_0(k-(s+\gamma)\mu)e^{2kM/(v_0^2-\mu^2)}}\right]&\;{\rm for}\;M>0\,.\\ \\ \end{cases} \label{Q_RTP} \end{equation} Note that we will not need the expression of ${\tilde Q}_M^{+}$ to compute $P(t_{\rm m}|T)$. We can now use the formula in Eq.~\eqref{eq:PDF_RTP} to compute $P(t_{\rm m}|T)$. To proceed, we need to rewrite Eq.~\eqref{eq:PDF_RTP} in Laplace space. Thus, we consider the double Laplace transform of Eq.~\eqref{eq:PDF_RTP} with respect to $T_1=t_{\rm m}$ and $T_2=T-t_{\rm m}$, yielding \begin{eqnarray} \nonumber &&\int_{0}^{\infty}dT_1~\int_{0}^{\infty}dT_2~e^{-s_1 T_1-s_2 T_2} P(t_{\rm m} =T_1|T=T_1+T_2)= \int_{-\infty}^{\infty}dx_0~P_{\rm st}^{-}(x_0)\tilde{Q}_{x_0}^{-}(x_0,s_2) \\\nonumber & &+\! A\gamma\sum_{\sigma=\pm }\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{\sigma}(x_0)\int_{x_0}^{\infty}dM\tilde{G}^{+}_M(M,s_1|x_0,\sigma)~\tilde{Q}_M^{-}(M,s_2)\\&+&\sum_{\sigma=\pm}\int_{-\infty}^{\infty}dx_0~\int_{x_0}^{\infty}dM~P_{\rm st}^{\sigma}(x_0)~\tilde{G}^{+}_M(M,s_1|x_0,\sigma) \,.\label{eq:PDF_RTP_LT} \end{eqnarray} In order to determine the constant $A$, we impose that the PDF $P(t_{\rm m}|T)$ is correctly normalized to unity.
To do this, we set $s_1=s_2=s$ on both sides of Eq.~\eqref{eq:PDF_RTP_LT}, yielding \begin{eqnarray} \nonumber \frac1s &=& \int_{-\infty}^{\infty}dx_0~P_{\rm st}^{-}(x_0)\tilde{Q}_{x_0}^{-}(x_0,s)+ A~\gamma~\sum_{\sigma=\pm }\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{\sigma}(x_0)~\int_{x_0}^{\infty}dM~\tilde{G}^{+}_M(M,s|x_0,\sigma)~\tilde{Q}_M^{-}(M,s)\\ &+& \sum_{\sigma=\pm}\int_{-\infty}^{\infty}dx_0~\int_{x_0}^{\infty}dM~P_{\rm st}^{\sigma}(x_0)~\tilde{G}^{+}_M(M,s|x_0,\sigma) \,, \end{eqnarray} where we have simplified the left-hand side using Eq.~\eqref{lhs}. Computing the integrals on the right-hand side turns out to be rather nontrivial even after setting $s_1=s_2=s$. Nevertheless, using the expressions obtained above for $P^{\sigma}_{\rm st}$, $\tilde{G}^+_M$, and $\tilde{Q}^{-}_M$ and evaluating these integrals numerically with Mathematica we have verified that the correct normalization constant is $A=1$. \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{tmax_RTP_V_bulk_5_new.pdf} \caption{Probability density function $P_{\rm bulk}(t_{\rm m}|T)$ as a function of the time $t_{\rm m}$ of the maximum for a run-and-tumble particle in a confining potential $V(x)=|x|$, for $0<t_{\rm m}<T$. Note that since the events ``$t_{\rm m}=0$'' and ``$t_{\rm m}=T$'' occur with finite probability, the distribution is not normalized to unity for $0<t_{\rm m}<T$. The curve is obtained by numerical simulations with $\gamma=1$, $T=5$, and $v_0=2$. The distribution $P_{\rm bulk}(t_{\rm m}|T)$ appears to be symmetric around the midpoint $t_{\rm m}=T/2$. We find numerically that $P_0(T)={\rm Prob}.(t_{\rm m}=0)\approx 0.087$ and $P_1(T)={\rm Prob}.(t_{\rm m}=T)\approx 0.165$. The inset shows the full distribution $P(t_{\rm m}|T)$, including the two asymmetric $\delta$-functions at $t_{\rm m}=0$ and $t_{\rm m}=T$.
\label{fig:RTP_V_tmax}} \end{center} \end{figure} We then rewrite $P(t_{\rm m}|T)$ as \begin{equation} P(t_{\rm m}|T)=P_0(T)\delta(t_{\rm m})+P_{\rm bulk}(t_{\rm m}|T)+P_1(T)\delta(t_{\rm m}-T)\,, \end{equation} where \begin{equation} \int_{0}^{\infty}dt_1~\int_{0}^{\infty}dt_2~e^{-s_1 t_1-s_2 t_2} P_{\rm bulk}(t_{\rm m} =t_1|T=t_1+t_2)=\gamma~\sum_{\sigma=\pm }\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{\sigma}(x_0)~\int_{x_0}^{\infty}dM~\tilde{G}^{+}_M(M,s_1|x_0,\sigma)~\tilde{Q}_M^{-}(M,s_2) \,, \label{eq:PDF_RTP_LT_bulk} \end{equation} \begin{equation} \int_{0}^{\infty}dT~e^{-sT}P_0(T)=\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{-}(x_0)\tilde{Q}_{x_0}^{-}(x_0,s)\,, \label{PO_LT} \end{equation} and \begin{equation} \int_{0}^{\infty}dT~e^{-sT}P_1(T)=\sum_{\sigma=\pm}\int_{-\infty}^{\infty}dx_0~P_{\rm st}^{\sigma}(x_0)~\int_{x_0}^{\infty}dM~\tilde{G}^{+}_M(M,s|x_0,\sigma) \,. \label{P1_LT} \end{equation} Exactly inverting these Laplace transforms turns out to be quite nontrivial. Nevertheless, it is possible to check, for instance using Mathematica, that the Laplace transform of $P_{\rm bulk}(t_{\rm m}|T)$ in Eq.~\eqref{eq:PDF_RTP_LT_bulk} is invariant under exchange of $s_1$ and $s_2$. This implies that $P_{\rm bulk}(t_{\rm m}|T)=P_{\rm bulk}(T-t_{\rm m}|T)$, i.e., that the central part of the distribution of $t_{\rm m}$ is symmetric around the midpoint $t_{\rm m}=T/2$. This is confirmed by numerical simulations (see Fig.~\ref{fig:RTP_V_tmax}). However, it is easy to show that the amplitudes $P_0(T)$ and $P_1(T)$ of the delta functions at $t_{\rm m}=0$ and $t_{\rm m}=T$ are in general different. Thus, the full distribution $P(t_{\rm m}|T)$ for $0\leq t_{\rm m}\leq T$ is not symmetric around $t_{\rm m}=T/2$. This is a consequence of the nonequilibrium nature of the process.
\subsection{Criterion to detect nonequilibrium dynamics} \label{sec:criterion} From the exact results of the previous sections, we have observed that for equilibrium systems corresponding to an overdamped Brownian particle in a confining potential $V(x)$ the probability distribution of the time $t_{\rm m}$ of the maximum is symmetric around the midpoint $t_{\rm m}=T/2$. Let us stress that this property is not related to the symmetry $V(x)=V(-x)$ of the potentials that we have investigated in Section \ref{sec:eq} (see Fig.~\ref{fig:asym_V_}, where we show that this property is valid even when $V(x)\neq V(-x)$). On the other hand, for the nonequilibrium processes we have considered, this symmetry is not present and $P(t_{\rm m}|T)\neq P(T-t_{\rm m} |T)$. In this Section, we show that this symmetry is quite general and can be used to develop a technique to detect nonequilibrium fluctuations in steady states. In particular, we show that one has the property $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$ for any equilibrium process. Note that the converse is not true, as there are nonequilibrium processes with this symmetry. We consider a time series of duration $T$. For simplicity, we focus on a discrete-time process $x_i$ with $1\leq i\leq T$ -- where $i$ and $T$ are positive integers. This derivation immediately generalizes to the continuous-time case. We assume that this time series is generated from an equilibrium Markov process. We denote by $P(\{x_i\})$ the probability of observing a given trajectory $\{x_i\}=\{x_1\,,x_2\,,\ldots\,,x_T\}$ and by $\{\bar{x}_i\}=\{x_T,\ldots ,x_1\}$ the time-reversed trajectory associated with $\{x_i\}$. Note that if the system is at equilibrium it is easy to show that $P(\{x_i\})=P(\{\bar{x}_i\})$ (time-reversal symmetry).
\begin{figure}[t] \includegraphics[scale=0.6]{tmax_asymm_V.pdf} \caption{\label{fig:asym_V_} Probability density function $P(t_{\rm m}|T)$ as a function of $t_{\rm m}$, obtained from numerical simulations of Brownian motion in an asymmetric potential $V(x)$, with $V(x)=x^2$ for $x>0$ and $V(x)=-x$ for $x<0$, $T=1$, and $D=1$. The distribution appears to be symmetric around the midpoint $t_{\rm m}=T/2$ (vertical dashed line). } \end{figure} The distribution of the time $t_{\rm m}$ of the maximum can be written as \begin{equation} P(t_{\rm m}|T)=\int_{-\infty}^{\infty}dx_1\ldots\int_{-\infty}^{\infty}dx_T~ \Theta_{t_{\rm m}}(\{x_i\}) P\left(\{x_i\}\right)\,, \label{discrete} \end{equation} where \begin{equation} \Theta_{k}\left(\{x_i\}\right)=\prod_{i\neq k}\theta\left(x_k-x_i\right)\,. \end{equation} Here $\theta(z)$ is the Heaviside step function, i.e., $\theta(z)=1$ for $z>0$ and $\theta(z)=0$ otherwise. The function $\Theta_{k}\left(\{x_i\}\right)$ is one if the maximum of the trajectory $\{x_i\}$ is attained at step $k$ and is zero otherwise. Performing the change of variables $x_i\to\bar{x}_i=x_{T-i}$ in Eq.~\eqref{discrete}, we get \begin{equation} P(t_{\rm m}|T)=\int_{-\infty}^{\infty}d\bar{x}_1\ldots\int_{-\infty}^{\infty}d\bar{x}_T ~\Theta_{t_{\rm m}}(\{\bar{x}_{T-i}\}) P\left(\{\bar{x}_i\}\right)\,, \label{discrete2} \end{equation} where we have used the relation $P(\{x_i\})=P(\{\bar{x}_i\})$. It is easy to show that $\Theta_{t_{\rm m}}(\{\bar{x}_{T-i}\})=\Theta_{T-t_{\rm m}}(\{\bar{x}_{i}\})$, meaning that if the maximum of the forward trajectory is reached at time $t_{\rm m}$ then the maximum of the backward trajectory is reached at time $T-t_{\rm m}$. Using this relation, we find \begin{equation} P(t_{\rm m}|T)=\int_{-\infty}^{\infty}d\bar{x}_1\ldots\int_{-\infty}^{\infty}d\bar{x}_T ~\Theta_{T-t_{\rm m}}(\{\bar{x}_{i}\}) P\left(\{\bar{x}_i\}\right)\,.
\label{discrete3} \end{equation} We now recognize the right-hand side as $P(T-t_{\rm m}|T)$ (see Eq.~\eqref{discrete}). Therefore, we obtain $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. To summarize, we have shown that if a process is at equilibrium, then $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$. As anticipated, this symmetry can be used to determine whether or not a stationary system is nonequilibrium. Imagine that one has access to a long time series $x(\tau)$, for instance, obtained from some experiments, and one does not know the specific details of the underlying system. For example, this time series could represent the position of a molecular motor along a microtubule or a Brownian particle in an optical trap. This setup has become increasingly relevant due to recent developments in single-particle tracking \cite{shen2017single}. Then, one of the most fundamental questions that one can ask about the system is whether or not it is at equilibrium. In particular, in the context of biological systems, these questions are relevant since nonequilibrium fluctuations typically signal the active consumption of energy. Throughout the last decades, several methods to determine the nonequilibrium nature of a system have been developed -- for a recent review, see \cite{GMG18}. Many of these techniques also quantify how much the system is out of equilibrium, usually as a bound on the entropy production \cite{LH19,MGK20,manikandan2021quantitative,roldan2021quantifying,otsubo2022estimating}. A popular technique is based on the verification of the fluctuation-dissipation relation, which relates correlation and response for equilibrium systems \cite{cugliandolo1997fluctuation,martin2001comparison,mizuno2007nonequilibrium,TFA16}. If a violation of this relation is observed, one can immediately conclude that the system is nonequilibrium. Note that the main drawback of this method is that it requires perturbing the system to measure the response function.
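The symmetry just derived also lends itself to a direct numerical check. The sketch below (illustrative parameters and a simple Euler discretization, not the paper's numerics) simulates an equilibrium Ornstein-Uhlenbeck particle, splits the stationary trajectory into blocks of duration $T$, records the time of the maximum in each block, and verifies that the empirical distribution of $t_{\rm m}/T$ is centered at $1/2$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, D, dt = 1.0, 1.0, 0.01     # dx = -alpha*x*dt + sqrt(2D)*dW  (equilibrium)
steps, nblocks = 100, 2000        # each block has duration T = steps*dt

x = rng.normal(0.0, np.sqrt(D / alpha))   # draw x(0) from the stationary state
tm = np.empty(nblocks)
for b in range(nblocks):
    block = np.empty(steps)
    for i in range(steps):
        x += -alpha * x * dt + np.sqrt(2 * D * dt) * rng.normal()
        block[i] = x
    tm[b] = (np.argmax(block) + 0.5) / steps   # t_m / T within this block

# P(t_m|T) = P(T - t_m|T) implies <t_m>/T = 1/2 for an equilibrium process
assert abs(tm.mean() - 0.5) < 0.05
```

For a nonequilibrium steady state (e.g., resetting dynamics), the same histogram would develop a visible asymmetry, which is the basis of the test described next.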
Many other techniques have been proposed, including the detection of violations of the detailed balance condition~\cite{ZS07,BBF16,mura2018nonequilibrium} or the analysis of waiting-time distributions \cite{tu2008nonequilibrium,skinner2021estimating}. Using the fact that $P(t_{\rm m}|T)$ is symmetric for equilibrium systems, we introduce a new method based on two steps. First, divide the time series $x(\tau)$ into $N$ blocks of duration $T$ (assuming that the time series is long enough such that $N\gg1$). Second, compute the time $t_{\rm m}^i$ at which the maximum is reached within each block (where the index $i$ refers to the $i$-th block). From these $N$ values, build the empirical distribution $P(t_{\rm m}|T)$. If this distribution is not symmetric around $t_{\rm m}=T/2$, the process is necessarily nonequilibrium. On the other hand, if $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$ our test is inconclusive. Note that for a multidimensional system, one can apply the criterion to any of its components. As anticipated, there exist nonequilibrium processes for which the distribution of $t_{\rm m}$ is symmetric. As an example, we can consider a single one-dimensional active Ornstein-Uhlenbeck process (AOUP) in a harmonic potential $V(x)=\alpha x^2$ \cite{fodor2016far}. The system is described by the position $x(\tau)$ and the self-propulsion velocity $v(\tau)$ of the particle. The system evolves according to \begin{equation} \frac{dx(t)}{dt}=-\alpha x(t)+v(t)+\sqrt{2D}\xi(t)\,, \end{equation} where $\xi(t)$ is a Gaussian white noise with zero mean and correlator $\langle \xi(t)\xi(t')\rangle=\delta(t-t')$ and $v(t)$ evolves as \begin{equation} \frac{dv(t)}{dt}=-\frac{v(t)}{\tau_a}+\frac{\sqrt{2D_a}}{\tau_a}\zeta(t)\,, \end{equation} where $D_a>0$, $\tau_a>0$, and $\zeta(t)$ is a Gaussian white noise. We also assume that $\xi(t)$ and $\zeta(t)$ are uncorrelated. Note that since the equations of motion are linear, the process is Gaussian.
Even though $x(t)$ depends on the evolution of $v(t)$, there is no feedback from $x(t)$ to $v(t)$. This creates probability currents in the phase space $(x,v)$ and hence the system is out of equilibrium. Nevertheless, it can be shown analytically that the one-dimensional process describing the position of the particle $x(t)$ satisfies time-reversal symmetry \cite{BO}. Thus, even though the process is nonequilibrium, the distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ at which the position $x(\tau)$ is maximal is symmetric around $t_{\rm m}=T/2$. Interestingly, this is a consequence of the fact that this is a Gaussian stationary process. Indeed, as shown below, for any one-dimensional Gaussian stationary process the distribution of $t_{\rm m}$ is always symmetric around $t_{\rm m}=T/2$. Let us consider a one-dimensional discrete-time Gaussian stationary process $x_k$ with $1\leq k \leq T$ (it is easy to generalize the following argument to continuous-time processes). Without loss of generality, we assume that the mean value of $x_k$ vanishes. The probability of a trajectory $\{x_k\}=\{x_1\,,\ldots\,,x_T\}$ is \begin{equation} P(\{x_k\})=\mathcal{N} ~\exp\left[-\frac12 \sum_{i,j} ~x_i \Sigma^{-1}_{i,j}x_j\right]\,, \label{Pxk_1} \end{equation} where $\Sigma_{i,j}=\langle x_i x_j\rangle$ is the covariance matrix and $\mathcal{N}$ is a normalization constant. Since the process is stationary, the covariance $\Sigma_{i,j}$ only depends on $|i-j|$, yielding \begin{equation} P(\{x_k\})=\mathcal{N} ~\exp\left[-\frac12 \sum_{i,j} ~x_i \Sigma^{-1}(|i-j|)x_j\right]\,. \label{Pxk_2} \end{equation} The probability of the time-reversed trajectory $\{\bar{x}_k\}=\{x_{T-k}\}$ is given by \begin{equation} P(\{\bar{x}_k\})=\mathcal{N} ~\exp\left[-\frac12 \sum_{i,j} ~x_{T-i} \Sigma^{-1}(|i-j|)x_{T-j}\right]\,.
\label{Pxk_3} \end{equation} Performing the change of variable $(i,j)\to(i'=T-i,j'=T-j)$, we obtain \begin{equation} P(\{\bar{x}_k\})=\mathcal{N} ~\exp\left[-\frac12 \sum_{i',j'} ~x_{i'} \Sigma^{-1}(|i'-j'|)x_{j'}\right]\,. \label{Pxk_4} \end{equation} Using Eq.~\eqref{Pxk_2}, we get \begin{equation} P(\{\bar{x}_k\})=P(\{x_k\})\,, \end{equation} meaning that the process is symmetric under time reversal. As shown above, this implies that the distribution of $t_{\rm m}$ is symmetric around $t_{\rm m}=T/2$. \section{Conclusions} \label{sec:conclusion} In summary, we have investigated the distribution of the time $t_{\rm m}$ at which a stationary stochastic process reaches its global maximum within a time window $[0,T]$. Using a path decomposition technique, we have computed analytically the distribution $P(t_{\rm m}|T)$ of the time $t_{\rm m}$ of the maximum for several processes, both at equilibrium and out of equilibrium. The class of equilibrium processes that we have considered corresponds to an overdamped Brownian particle moving in a one-dimensional potential $V(x)$ such that $V(x)\approx\alpha |x|^p$ for large $|x|$, with $\alpha>0$ and $p>0$. We have computed the distribution $P(t_{\rm m}|T)$ exactly in the cases $V(x)=\alpha |x|$ (corresponding to $p=1$) and $V(x)=\alpha x^2$ (corresponding to the Ornstein-Uhlenbeck process with $p=2$). From these exact computations, we have observed that the distribution of $t_{\rm m}$ is symmetric around $t_{\rm m}=T/2$, i.e., $P(t_{\rm m}|T)=P(T-t_{\rm m}|T)$, for any equilibrium process. This property is a consequence of the time-reversal symmetry of equilibrium systems. Moreover, we have shown that the distribution $P(t_{\rm m}|T)$, once appropriately scaled, becomes completely universal for any $\alpha>0$ and $p>0$ in the late-time limit $T\gg 1$.
We have also considered two models of nonequilibrium stationary processes for which we could compute exactly the distribution of $t_{\rm m}$: a Brownian particle with stochastic resetting and a single RTP in a confining potential $V(x)=\mu |x|$. In both cases, we have shown that the distribution of $t_{\rm m}$ is not symmetric around $t_{\rm m}=T/2$. From this observation, we have presented a sufficiency test, based on the measurement of $t_{\rm m}$, which allows detecting nonequilibrium fluctuations in stationary systems. For future studies, it would be interesting to investigate the joint distribution of the time $t_{\rm m}$ of the maximum and the time $t_{\min}$ of the minimum for a stationary process of duration $T$. For short times ($T\ll1$), we expect these two times to be strongly correlated, while we expect $t_{\rm m}$ and $t_{\min}$ to become independent at late times. From the joint distribution of $t_{\rm m}$ and $t_{\min}$ one can also obtain several relevant quantities, including the distribution of the time $\tau=t_{\min}-t_{\rm m}$ between the global maximum and the global minimum \cite{MMS19,MMS20}. Another relevant direction would be to investigate the distribution of the time $t_{\rm m}$ of the maximum for an overdamped Brownian particle in a potential that grows as $V(x)\approx\alpha \ln(|x|)$ for large $|x|$, with $\alpha>D$, where $D$ is the diffusion constant (for $\alpha<D$ the process does not reach a steady state). The distribution of the global maximum for this model was investigated in Refs.~\cite{OPR20,PRO20}, where it was shown that the average maximum grows at late times as $\langle M\rangle\approx T^{1/(1+\alpha/D)}$. Although it appears quite challenging to exactly compute the distribution of $t_{\rm m}$ for this model, it would be relevant to investigate whether the universality of the distribution $P(t_{\rm m}|T)$, presented in Subsection~\ref{sec:univ}, remains valid in this case.
\section{Introduction} \label{ss:intro} Perturbative quantum field theory has been describing particle physics phenomena very well, yet improving the perturbative series, i.e., calculating Feynman integrals, has always been challenging. It becomes increasingly important to include higher order corrections to theoretical predictions as particle physics experiments, especially at the Large Hadron Collider (LHC), become more and more precise over the years. This means that a deeper understanding of higher order corrections is required to obtain meaningful theoretical predictions. One of the most interesting problems in this field is higher order corrections to multi-scale processes. The prime examples are the Higgs+jet, Higgs pair, and Higgs+$Z$ production cross sections at the LHC. These $2\to 2$ processes involve two kinematical variables, such as the center of mass energy and the transverse momentum, and the masses of internal or external particles serve as additional scales. Here, the word ``multi-scale'' is used when there are more than three scales. The bottlenecks of the calculation of multi-scale processes are the integration-by-parts (IBP) reduction and the evaluation of the resulting master integrals. The subject of this paper is the second issue, namely, the evaluation of multi-scale Feynman integrals. Efforts to solve multi-scale Feynman integrals have persisted over the years. One of the milestones is the analytic computation of all the planar master integrals contributing to the Higgs~$\to$~3~partons process at two loops~\cite{Bonciani:2016qxi}. Another milestone is the numeric evaluation of the Higgs+jet~\cite{Jones:2018hbb} and Higgs pair production~\cite{Borowka:2016ehy,Borowka:2016ypz} cross sections at two-loop level using the program \texttt{SecDec}~\cite{Borowka:2015mxa,Borowka:2017idc}. An independent numerical evaluation of the Higgs pair production cross section was recently presented~\cite{Baglio:2018lrj}.
It is worth mentioning some recent analytic calculations of three-scale four-point two-loop diagrams: the non-planar master integrals for $\mu e$ scattering~\cite{Mastrolia:2017pfy,DiVita:2018nnh}, the planar double box integral relevant to top pair production~\cite{Adams:2018kez} and the planar master integrals relevant to di-photon and di-jet production~\cite{Becchetti:2017abb}. These works show that even three-scale problems are difficult to solve. Recently, some of the non-planar master integrals for these processes in the limit $m_H=0$ have been solved \cite{Xu:2018eos}, but there still remain unsolved master integrals. It is a promising idea to reduce the number of scales entering integrals by expanding them in some small parameters. For a summary of this topic, see Ref.~\cite{Smirnov:2002pj}. In this direction, the large-$m_t$ expansion of the Higgs+jet~\cite{Boughezal:2013uia,Chen:2014gva,Boughezal:2015dra,Boughezal:2015aha,Caola:2015wna,Chen:2016zka,Neumann:2016dny} and Higgs pair production~\cite{deFlorian:2016uhr,deFlorian:2013uza,Grigo:2013rya,deFlorian:2013jea,Maltoni:2014eza,Grigo:2014jma,Grigo:2015dia,Degrassi:2016vss} cross sections is very well investigated. However, it is not until recently that expansions in other parameters have been investigated. Concerning the Higgs+jet production cross section, the expansion in the small bottom quark mass $m_b\ll m_H$~\cite{Melnikov:2016qoc,Melnikov:2017pgf} and in the small top quark and Higgs masses $m_t>m_H$~\cite{Kudashkin:2017skd} have been performed. For the Higgs pair production cross section, the expansion in small $m_t$ for the planar master integrals~\cite{Davies:2018ood} and in small Higgs transverse momentum~\cite{Bonciani:2018omm} have been performed. The rest of the master integrals of Higgs pair production in the small-$m_t$ expansion are obtained in Ref.~\cite{Davies:2018qvx} together with the results of this paper.
Many of the non-planar master integrals are the same as, or related to those of Ref.~\cite{Kudashkin:2017skd} but we provide some new information needed for Higgs pair production. Furthermore, the method used in this paper---the method of regions---is completely different from the one in Ref.~\cite{Kudashkin:2017skd} at all steps of the calculation, so it provides a complementary understanding of the massive non-planar integrals. We would like to emphasize that the method of regions is a generic and systematic procedure to expand integrals, and thus the calculations shown in this paper can be applied to other integrals in a straightforward way. The concept of dividing the domain of integration variables into several regions and expanding the integrand according to hierarchies in each region was introduced by Beneke and Smirnov~\cite{Beneke:1997zp}. The method is now called ``expansion by regions'' or ``strategy of regions,'' and in this paper we call it the method of regions. A mathematical proof of the method of regions for a general integral is not yet known, although many successful applications have been reported. In fact, the author of the most up-to-date textbook on this topic states in his book~\cite{Smirnov:2012gma} \begin{quote} \textit{ The strategy of expansion by regions still has the status of experimental mathematics. } \end{quote} In the cases of off-shell large-momentum expansion and large-mass expansion, a mathematical proof based on a graph-theoretical language is known and it is called ``expansion by subgraphs''~\cite{Smirnov:1990rz,Smirnov:2002pj}. The procedure of expansion by subgraphs is implemented and can be performed in an automatic way~\cite{Harlander:1997zb,Seidensticker:1999bb}. The large-$m_t$ expansion mentioned above belongs to this category. A new proof of the method of regions was proposed by Jantzen~\cite{Jantzen:2011nz} but its application is limited.
The purpose of this paper is to show non-trivial examples where the method of regions works well, and our calculation shows the first application of the method to the high energy expansion of non-planar four-point integrals. The remainder of the paper is organized as follows: in Section~\ref{ss:gene}, we briefly summarize the method of regions. In Section~\ref{ss:setup} we introduce conventions, ideas, and techniques, which will be used in the following sections. In Sections~\ref{ss:one}, \ref{ss:twoPL}, and~\ref{ss:two}, we apply the method of regions to the one-loop box diagram, the two-loop planar massive diagrams, and the two-loop non-planar massive diagrams, respectively. \section{General Idea of the Method of Regions} \label{ss:gene} The procedure of the method of regions is the following~\cite{Beneke:1997zp,Smirnov:2002pj, Jantzen:2011nz,Smirnov:2012gma, Semenova:2018cwy}: \begin{itemize} \item Step 1: Assign a hierarchy to the dimensionful parameters. \item Step 2: Reveal the relevant scaling of the integration variable. \item Step 3: For each region, expand the integrand according to its scaling. \item Step 4: Integrate. Scaleless integrals such as $\int _0^\infty dx~x^a$ are set to zero. \item Step 5: Sum over the contributions from all the relevant regions. \end{itemize} Step~2 is the crucial part of the method of regions, and an algorithm to reveal such scalings for a general integral has been established based on the analysis of the convex hull~\cite{Pak:2010pt,Jantzen:2012mw}. One can use the algorithm, implemented in the \texttt{Mathematica} package \texttt{asy2.1.m}~\cite{Jantzen:2012mw}. Although it is not proved that the algorithm works correctly in all cases, no counterexample is known so far. Recently, a new idea to reveal relevant scalings was proposed, based on the technique of power geometry, which is implemented in the \texttt{Mathematica} package \texttt{ASPIRE}~\cite{Ananthanarayan:2018tog}. In this paper we use \texttt{asy2.1.m}.
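As an illustration of Steps 1--5, consider the textbook-style toy integral $I=\int_0^\infty dx/[(x+m)(x+M)]$ with $m\ll M$ and an analytic regulator $x^{-\delta}$ (this example is standard in the literature and not one of the Feynman integrals treated below). Using the formula $\int_0^\infty dx\, x^{a-1}/(x+A)=\pi A^{a-1}/\sin(\pi a)$, the soft region ($x\sim m$) and the hard region ($x\sim M$) are each singular as $\delta\to0$, while their sum reproduces the leading term of the exact result:

```python
import sympy as sp

m, M, d = sp.symbols('m M delta', positive=True)

# Soft region x ~ m: expand 1/(x+M) -> 1/M; with the regulator x^(-delta),
# int_0^oo x^(-d)/(x+m) dx = pi*m^(-d)/sin(pi*d)   (0 < d < 1)
soft = sp.pi / sp.sin(sp.pi * d) * m**(-d) / M
# Hard region x ~ M: expand 1/(x+m) -> 1/x;
# int_0^oo x^(-d-1)/(x+M) dx = -pi*M^(-d-1)/sin(pi*d) after continuation in d
hard = -sp.pi / sp.sin(sp.pi * d) * M**(-d) / M

# Step 5: each region diverges as d -> 0, but the sum is finite
total = sp.limit(soft + hard, d, 0)

# Exact result is log(M/m)/(M - m); its leading term for m << M is log(M/m)/M
expected = (sp.log(M) - sp.log(m)) / M
assert sp.simplify(sp.expand_log(total - expected)) == 0
```

The cancellation of the $1/\delta$ poles between regions is the one-dimensional analogue of the cancellation of the $\delta_j$-dependence discussed in the next section.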
The practical bottleneck is Step~4, since the integration tends to remain complicated even after the expansion if the original integral is very complicated. This is one of the reasons why testing the method of regions is difficult. The method of regions was first applied to the momentum representation of the Feynman integrals, so the ``regions'' mean some domains of the loop momenta. Later, it was found that parametric representations such as the Feynman representation and the alpha representation are more convenient for applying the method~\cite{Smirnov:1999bza}. Recently, it was proposed in Ref.~\cite{Semenova:2018cwy} to use yet another parametric representation, the Lee-Pomeransky representation~\cite{Lee:2013hzt}, to apply the method of regions. For all the representations mentioned above, one has to follow Steps 1 to 5 for practical calculation. \section{Notation and Technical Tools} \label{ss:setup} \subsection{Conventions} \label{ss:conv} We distinguish between the exact equal sign $``="$ and the equal sign under a certain analytic continuation. For this purpose, we introduce a sign~$``\overset{\mathrm{AC}}{=} "$ and use it as, e.g., \begin{align} \log (z+i0)\overset{\mathrm{AC}}{=} \log (-z-i0)+i\pi \,, \label{ac} \end{align} where $i0$ represents an infinitesimal positive imaginary number. We interpret $\log(z)$ as the principal value of the complex logarithm whose imaginary part lies in the interval $(-\pi,\pi ]$. Both the left-hand side and the right-hand side of Eq.~\eqref{ac} are well-defined in the entire domain of $z$, but the equality is valid only in the upper half plane of $z$. This is how analytic continuation is performed, and that is why we add ``AC'' to the normal equal sign in Eq.~\eqref{ac}. The equality of a series expansion like \begin{align} \frac{1}{1-m/M}=\sum_{n=0}^\infty \left( \frac{m}{M} \right) ^n \label{series} \end{align} is in principle also regarded as an analytic continuation.
However, when a hierarchy like $m\ll M$ is explicitly stated in the text, we use the normal equal sign. We use a simplified expression of the Landau $\mathcal{O}$ notation for more than one variable as \begin{align} X+ \mathcal{O}\left( (m_H^2)^{n_H},(m_t^2)^{n_t}, \epsilon ^n\right) \equiv X+ \mathcal{O}\left( (m_H^2)^{n_H}\right) +\mathcal{O}\left((m_t^2)^{n_t}\right) +\mathcal{O}\left( \epsilon ^n\right) \,. \end{align} The Euler--Mascheroni constant is denoted as $\gamma_E$. We use the alpha representation to calculate Feynman integrals. The integration measure and the analytic regularization parameters are defined as \begin{align} \int\! \mathfrak{D}^n \alpha^\delta \equiv \prod_{j=1}^n \left( \int_0^\infty \frac{\mathrm{d}\alpha _j ~\alpha_j^{\delta_j}}{\Gamma (1+\delta_j)} \right) \,. \label{one-da} \end{align} The analytic regularization parameters $\delta_j$ play one essential role and three secondary roles: \renewcommand{\theenumi}{(\roman{enumi})} \begin{enumerate} \item The essential role is to regularize the contribution of individual regions which are divergent if we naively expand in $\alpha_i$. This means that individual contributions are regulator dependent, and the dependence on $\delta_j$ cancels after we sum all the contributions and take the limits $\delta_j\to0$. In taking the limit, it is necessary to specify the order because some of them do not commute. We express the sequence of limits as \begin{align} \lim_{\epsilon,\delta_n,...,\delta_2,\delta_1\to0} X \equiv \lim_{\epsilon\to0} \lim_{\delta_n\to0} \cdots \lim_{\delta_2\to0} \lim_{\delta_1\to0} X\,. \end{align} \item We use $\delta_j$ to regularize the Mellin-Barnes integral. [See the text below Eq.~\eqref{mb2}.] \item By shifting $\delta_j\to\delta_j+1$, we can express polynomials of $\alpha_i$ in the integrand. For example, when $n=2$, \begin{align} \int\! \mathfrak{D}^2 \alpha^\delta \left( \alpha_1^2+\alpha_1\alpha_2\right) = \left. \int\!
\mathfrak{D}^2 \alpha^\delta \right|_{\delta_1\to \delta_1+2} + \left. \int\! \mathfrak{D}^2 \alpha^\delta \right|_{\delta_1\to \delta_1+1,\delta_2\to\delta_2+1} \label{del-shift} \end{align} This property is usually used to express the integrals with higher powers of propagators. \item We use the property of Eq.~\eqref{del-shift} to express the higher order terms. [See the text below Eq.~\eqref{one-r1-3}.] \end{enumerate} The sum of the variables will be expressed as \begin{align} \alpha_{i_1...i_n} \equiv \alpha_{i_1}+\cdots +\alpha_{i_n} ,\qquad \delta_{i_1...i_n \overline{i_{n+1}}...i_{n'}} \equiv \delta_{i_1}+\cdots +\delta_{i_n} -\delta_{i_{n+1}}+\cdots+\delta_{i_{n'}} \,. \end{align} The bar on an index indicates that the variable corresponding to the index is subtracted instead of added. Sometimes $\epsilon$ and $\delta_j$ are treated in the same way, and in those cases we express $\epsilon$ as $\delta_0$. For example, $\delta_{001\bar2}=2\epsilon+\delta_1-\delta_2$. Also, we introduce the following compact notation for the product of $\Gamma$-functions \begin{align} \Gamma\left[ x_1,\dots,x_n \right] \equiv \prod_{i=1}^n \Gamma (x_i) \,. \label{gamma} \end{align} In Step~3 of Section~\ref{ss:gene}, we expand the integrand of Feynman integrals in terms of soft parameters. In order to control the expansion in a systematic way, we introduce an auxiliary soft-scaling parameter~$\chi$. For example, assume that we have four variables $\alpha_1,...,\alpha_4$ whose scalings are \begin{align} \alpha_1\sim m,\quad \alpha_2\sim M,\quad \alpha_3\sim m,\quad \alpha_4\sim M, \label{chi-scale} \end{align} where $m\sim \chi$ is a soft parameter and $M\sim 1$ is a hard parameter. In this case, we apply a substitution \begin{align} m\to \chi m,\quad M\to M,\quad \alpha_1\to \chi\alpha_1,\quad \alpha_2\to \alpha_2,\quad \alpha_3\to \chi\alpha_3,\quad \alpha_4\to \alpha_4, \end{align} to the integrand and expand in $\chi$. After that, we can set $\chi =1$. 
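The bookkeeping with $\chi$ can be mimicked symbolically. The sketch below uses a toy polynomial (not one of the Symanzik polynomials of the following sections) to show how the substitution assigns each monomial a power of $\chi$ counting its soft factors:

```python
import sympy as sp

chi, m, M = sp.symbols('chi m M', positive=True)
a1, a2, a3, a4 = sp.symbols('alpha1:5', positive=True)

# Toy polynomial mixing soft (m, alpha1, alpha3) and hard (M, alpha2, alpha4) scales
F = m * a1 * a3 + M * a2 * a4 + m * M * a1 * a4

# Scaling (alpha1,...,alpha4) ~ (1,0,1,0): each soft quantity picks up one chi
scaled = F.subs({m: chi * m, a1: chi * a1, a3: chi * a3})

# Each monomial now carries chi^(number of soft factors); the chi^0 term is the
# leading (all-hard) contribution, and chi is set to 1 at the end
assert sp.expand(scaled - (chi**3 * m * a1 * a3 + M * a2 * a4
                           + chi**2 * m * M * a1 * a4)) == 0
assert sp.expand(scaled.subs(chi, 1) - F) == 0
```

Expanding in $\chi$ then truncates the series at a chosen softness, exactly as in Step~3 of the previous section.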
In this paper we denote the scalings~\eqref{chi-scale} as \begin{align} (\alpha_1,\alpha_2,\alpha_3,\alpha_4) \overset{\chi}{\sim } (1,0,1,0) \end{align} or simply $(1,0,1,0)$. \subsection{Kinematics and High Energy Expansion} \label{ss:kinematics} The assignment of the external momenta $q_1,...,q_4$ is illustrated in Fig.~\ref{fig:mom}. We consider a $2\to 2$ process but define all the external momenta as incoming. In addition to the usual Mandelstam variables $s, t, u$ (which we call physical Mandelstam variables), we introduce $S, T, U$ as \begin{align} S=-s=-(q_1+q_2)^2,\qquad T=-t=-(q_1+q_3)^2,\qquad U=-u=-(q_2+q_3)^2 \,. \label{stu} \end{align} In our calculation we assume that $S,T,$ and $U$ are positive and thus call them positive Mandelstam variables. Sometimes two of the three Mandelstam variables are sufficient to express four-point functions, and indeed for planar integrals we do not use $U$ [See Section~\ref{ss:twoPL}]. However, for non-planar integrals, all of $S,T,U$ are required to make the second Symanzik polynomial positive [See Subsection~\ref{ss:two-scale}]. In this paper, we consider the master integrals of Higgs pair production at next-to-leading order, where the loops are induced by the top quark. Therefore there are two mass scales, the Higgs mass $m_H$ and the top quark mass $m_t$. We consider the high energy limit where the following hierarchy is satisfied \begin{align} m_H^2<m_t^2\ll S,T,U \,. \label{hie0} \end{align} The high energy expansion in this case is two-fold. First, we treat $m_H^2$ as the soft parameter and $m_t^2,S,T,U$ as the hard parameters. Afterwards, we treat $m_t^2$ as the soft parameter and $S,T,U$ as the hard parameters. In the first expansion, i.e., the $m_H$-expansion, $m_H$ enters the integrals through the on-shell condition of the external momenta \begin{align} q_1^2=0,\quad q_2^2=0,\quad q_3^2=m_H^2,\quad q_4^2=m_H^2 \,.
\label{os1} \end{align} In terms of $\chi$, the scaling we impose here is \begin{align} m_H^2\sim \chi,\quad m_t^2\sim 1,\quad S\sim 1,\quad T\sim 1,\quad U\sim 1\,, \label{sca1} \end{align} and the resulting series expression of an integral $I$ is expressed symbolically as\footnote{ In general, the coefficients $c_{n_H}$ depend on $\log (m_H^2)$, but not in our case. } \begin{align} I(S,T,U,m_t^2,m_H^2) =\sum_{n_H} (m_H^2)^{n_H} c_{n_H} (S,T,U,m_t^2) \,. \label{symbol1} \end{align} A note of caution is in order regarding the dependence on $S,T,U$. Since the physical Mandelstam variables satisfy the relation $s+t+u=2m_H^2$, a similar relation should also hold for the positive Mandelstam variables. As a result, functions expressed in terms of $S,T,U$ are not unique. However, this is not a problem, because at the end of the calculation, we express the result in a unique way. [See the text below Eq.~\eqref{anaconS}.] We use the linear dependence of $S,T,U$ to make the second Symanzik polynomial positive definite. [See Subsection~\ref{ss:two-scale}.] After the $m_H$-expansion, the external legs become massless. In the second expansion, i.e., the $m_t$-expansion, the on-shell condition of the external momenta becomes \begin{align} q_1^2=0,\quad q_2^2=0,\quad q_3^2=0,\quad q_4^2=0 \,. \label{os2} \end{align} In terms of $\chi$, the scaling we impose here is \begin{align} m_t^2\sim \chi,\quad S\sim 1,\quad T\sim 1,\quad U\sim 1\,, \label{sca2} \end{align} and the resulting series for an integral $I$ is expressed symbolically as \begin{align} I(S,T,U,m_t^2,m_H^2) =\sum_{n_H} (m_H^2)^{n_H} \sum_{n_t} (m_t^2)^{n_t} c_{n_H,n_t} (S,T,U,\log (m_t^2)) \,. \label{symbol2} \end{align} \begin{figure} \centering \includegraphics[width=0.3\textwidth]{fig1.pdf} \caption{The convention of the external momenta.} \label{fig:mom} \end{figure} The result should be expressed in a way suitable for the evaluation with the physical kinematics.
In order to achieve that, the analytic continuation \begin{align} S\overset{\mathrm{AC}}{=} e^{-i\pi +i0} (s+i0),\quad T\overset{\mathrm{AC}}{=} sv,\quad U\overset{\mathrm{AC}}{=} s(1-v), \label{anaconS} \end{align} is applied, where $i0$ represents an infinitesimal positive imaginary number (note that the massless on-shell condition~\eqref{os2} is adopted), and $0\leq v\leq 1$ in the physical kinematics. After expressing the result in terms of $s$ and $v$, the expression is unique. The analytic continuation of $T,U$ is trivial since their signs are consistent with those of the physical kinematics. The results are expressed in terms of harmonic polylogarithms~(HPL)~\cite{Remiddi:1999ew} and we introduce an abbreviation for HPL as \begin{align} h_0=H(0;v),\quad h_1=H(1;v),\quad h_2=H(2;v),\quad h_{2,1}=H(2,1;v), \end{align} and so on. The argument of the HPL is always chosen to be $v$. For example, \begin{align} H\left(3;-\frac{u}{s}\right) = H\left(3;1-v\right) \overset{\mathrm{AC}}{=} -h_{2,1}+h_2h_1-\frac{1}{2} h_1^2h_0-\frac{\pi^2}{6}h_1+\zeta_3 \,. \label{hpl3} \end{align} We use the \texttt{Mathematica} package \texttt{HPL.m}~\cite{Maitre:2005uu,Maitre:2007kp} when dealing with HPLs. When diagrams of the same topology but with a different assignment of external momenta are considered, it is convenient to relate them by applying replacements of the external momenta, which means replacements of the Mandelstam variables. We call these replacements ``crossing relations''. This subject is already well-established in the case of HPL~\cite{Anastasiou:2000mf}, but we propose a simpler way to obtain the crossing relations. In Fig.~\ref{fig:com}, the commutative diagram of the crossing and analytic continuation is given. Usually an integral is given as the bottom-left expression, where the result is expressed in terms of the physical kinematic variables.
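Identities such as Eq.~\eqref{hpl3} can be verified numerically from truncated series representations of the HPLs; a standalone sketch (using $h_0=\log v$, $h_1=-\log(1-v)$, $h_2=\mathrm{Li}_2(v)$, $h_{2,1}=\sum_{n\geq1} v^n H_{n-1}/n^2$, and the illustrative value $v=0.3$, independently of \texttt{HPL.m}) reads:

```python
import math

def li(s, z, nterms=400):
    """Polylogarithm Li_s(z) = sum_{n>=1} z^n / n^s, valid for |z| < 1."""
    return sum(z**n / n**s for n in range(1, nterms))

def h21(v, nterms=400):
    """h_{2,1} = H(0,1,1;v) = sum_{n>=1} v^n H_{n-1} / n^2."""
    harm, total = 0.0, 0.0
    for n in range(1, nterms):
        total += v**n / n**2 * harm
        harm += 1.0 / n
    return total

zeta3 = 1.2020569031595943          # zeta(3)
v = 0.3
h0, h1, h2 = math.log(v), -math.log(1 - v), li(2, v)

lhs = li(3, 1 - v)                  # H(3; 1-v) = Li_3(1-v)
rhs = -h21(v) + h2 * h1 - 0.5 * h1**2 * h0 - math.pi**2 / 6 * h1 + zeta3
assert abs(lhs - rhs) < 1e-12
```

Both sides agree to machine precision, as expected for $0<v<1$ where no branch cut is crossed.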
On the other hand, we perform the crossing in the upper expression, where the result is expressed in terms of the positive Mandelstam variables. The analytic continuation in the upper expression is the simple replacement of the positive Mandelstam variables, whereas the analytic continuation of the bottom expression requires the precise knowledge of the branch cuts. We take approach (a) of Fig.~\ref{fig:com} because it is easy to implement in programs, and crosscheck the result using approach (b). One can interpret the simplification by introducing the positive Mandelstam variables as the resolution of singularities by increasing the dimension, or in other words, we lose some information when we map the three-variable function $F$ into the two-variable function $f$. We would like to emphasize that the method to obtain the crossing relations explained above is a by-product of introducing the positive Mandelstam variables. The most important point in introducing the positive Mandelstam variables is that it makes the Symanzik polynomials positive and thus allows us to apply the method of regions safely. [See Subsection~\ref{ss:two-scale}.] \begin{figure} \centering \includegraphics[width=0.7\textwidth]{fig2.pdf} \caption{The commutative diagram of the crossing and the analytic continuation. Note that the crossing at the upper level is a literal replacement of $S$ and $U$ whereas the crossing at the bottom level changes the function in a nontrivial way. (a) Our approach. (b) Conventional approach.} \label{fig:com} \end{figure} \section{A First Example: One-Loop Box Diagram} \label{ss:one} We consider the massive one-loop Feynman integral family \begin{align} J_{a_1,a_2,a_3,a_4}= \int\!\!
\frac{d^d\ell}{i \pi^{d/2}} \frac{1} {\left(m_t^2-\ell^2\right)^{a_1} \left( m_t^2-(\ell+q_1)^2 \right)^{a_2} \left( m_t^2-(\ell+q_1+q_2)^2 \right)^{a_3} \left( m_t^2-(\ell-q_3)^2 \right)^{a_4}} \,, \label{one1} \end{align} where the infinitesimal negative imaginary part of each denominator is implicit. The external momenta $q_i$ satisfy the on-shell conditions~\eqref{os1}. We consider the box diagram shown in Fig.~\ref{fig:one}, $J_{1,1,1,1}$, and its alpha representation is \begin{align} J_{1,1,1,1} \overset{\mathrm{AC}}{=} I&=\int\! \mathfrak{D}^4\alpha^\delta~ \mathcal{U}^{-d/2} e^{ -\mathcal{F}/\mathcal{U} }\,, \label{one-alpha} \end{align} where the first Symanzik polynomial $\mathcal{U}$ and the second Symanzik polynomial $\mathcal{F}$ are given by \begin{align} \mathcal{U}=\alpha_{1234},\qquad \mathcal{F}=m_t^2\alpha_{1234}~\mathcal{U}+S\alpha_1\alpha_3+T\alpha_2\alpha_4-m_H^2\alpha_{13}\alpha_4 \,. \label{one-uf} \end{align} We make clear the analytic continuation in Eq.~\eqref{one-alpha} because the right-hand side is regularized by $\delta_j$ whereas Eq.~\eqref{one1} is explicitly $\delta_j$-independent. Also, we assume $m_H^2<0$ in order to ensure the convergence of the integral and perform the analytic continuation of the result to $m_H^2>0$ at the end, which turns out to be trivial. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{fig3.pdf} \caption{The one-loop box diagram considered in Section~\ref{ss:one}} \label{fig:one} \end{figure} \subsection{Expansion in the Higgs Mass} \label{ss:one-higgs} We first expand in $m_H$. By using \texttt{asy2.1.m}, we find that there is only one relevant scaling \begin{align} (\alpha_1,\alpha_2,\alpha_3,\alpha_4) \overset{\chi}{\sim } (0,0,0,0) \,. \end{align} The expansion corresponding to this scaling is actually just the Taylor expansion of the integrand in $m_H^2$, and the original integral can be written as \begin{align} I&=\int\!
\mathfrak{D}^4\alpha^\delta~ \mathcal{U}^{-d/2} \left[ e^{ -\mathcal{F}/\mathcal{U} } \Big|_{(m_H^2)=0} +(m_H^2) \frac{\partial (e^{ -\mathcal{F}/\mathcal{U} })}{\partial (m_H^2)} \Big|_{(m_H^2)=0} +\cdots \right]\,. \label{one-mh-exp} \end{align} In particular, the leading term is identical to the box diagram with completely massless external legs. The fact that the expansion in $m_H^2$ and the integration commute is reasonable because $m_H$ in the denominator of the integrand (appearing through $q_3^2=m_H^2$) is always accompanied by $m_t$, which regulates the integral, so the limit $m_H\to 0$ always exists. In collider physics, it is common to use $s$ and the transverse momentum $p_T$ as the kinematic variables, where $t$ and $p_T$ are related as \begin{align} p_T^2=\frac{ut-m_H^4}{s} \,, \label{mh-3b} \end{align} or equivalently\footnote{ When we solve Eq.~\eqref{mh-3b} for $t$ using the relation $s+t+u=2m_H^2$, there are two solutions. The other solution has ``+" in front of the square root in Eq.~\eqref{mh-3}, and it corresponds to $u$ in this case. Note that the amplitude is symmetric under $t\leftrightarrow u$, so we could choose the sign the other way around. We choose the sign such that $t= -p_T^2+\mathcal{O}(p_T^2/s,m_H^2/s)$. } \begin{align} t=-\frac{1}{2} \left[ s-2m_H^2 - \sqrt{s(s-4m_H^2-4p_T^2)} \right] \,. \label{mh-3} \end{align} Since these relations are $m_H$-dependent, one should be careful when expanding in $m_H$. To clarify this point, let us consider two expressions \begin{align} f_1(t,m_H^2)=f_2(p_T^2,m_H^2) \label{mh-4} \end{align} which are related by Eq.~\eqref{mh-3}. We would like to analyze the cross section for fixed $p_T$ rather than fixed $t$, so let us consider the $m_H$-expansion of $f_2(p_T^2,m_H^2)$: \begin{align} f_2(p_T^2,m_H^2) =f_2(p_T^2,0) +(m_H^2) \left.\frac{\partial f_2}{\partial (m_H^2)}\right|_{m_H=0} +\frac{(m_H^2)^2}{2} \left.\frac{\partial ^2f_2}{\partial (m_H^2)^2}\right|_{m_H=0} +\mathcal{O}((m_H^2)^3) \,.
\label{mh-5} \end{align} On the other hand, the kinematic variable appearing in the Feynman integral is $t$ and the natural representation is $f_1(t,m_H^2)$. Thus, we express the ingredients of Eq.~\eqref{mh-5} in terms of $f_1(t,m_H^2)$ as \begin{align} f_2(p_T^2,0) =&f_1(t_0,0) \label{mh-6} \\ \left.\frac{\partial f_2}{\partial (m_H^2)}\right|_{m_H=0} =& \left.\frac{\partial t}{\partial (m_H^2)}\right|_{m_H=0} \left.\frac{\partial f_1}{\partial t}\right|_{t=t_0,m_H=0} +\left.\frac{\partial f_1}{\partial (m_H^2)}\right|_{t=t_0,m_H=0} \label{mh-7} \\ \left.\frac{\partial ^2f_2}{\partial (m_H^2)^2}\right|_{m_H=0} =& \left.\frac{\partial ^2t}{\partial (m_H^2)^2}\right|_{m_H=0} \left.\frac{\partial f_1}{\partial t}\right|_{t=t_0,m_H=0} +\left[ \left.\frac{\partial t}{\partial (m_H^2)}\right|_{m_H=0} \right]^2 \left.\frac{\partial ^2f_1}{\partial t^2}\right|_{t=t_0,m_H=0} \nonumber\\ & +2\left.\frac{\partial t}{\partial (m_H^2)}\right|_{m_H=0} \left.\frac{\partial ^2f_1}{\partial t\partial (m_H^2)}\right|_{t=t_0,m_H=0} +\left.\frac{\partial ^2f_1}{\partial (m_H^2)^2}\right|_{t=t_0,m_H=0} \label{mh-8} \end{align} where $t_0=t|_{m_H=0}$. At first sight, Eq.~\eqref{mh-5} becomes complicated when Eqs.~\eqref{mh-6},~\eqref{mh-7} and~\eqref{mh-8} are substituted. However, taking into account the $m_H$-expansion of $f_1(t,0)$, \begin{align} f_1(t,0)=f_1(t_0,0)+ (m_H^2) \left.\frac{\partial t}{\partial (m_H^2)}\right|_{m_H=0} \left.\frac{\partial f_1}{\partial t}\right|_{t=t_0,m_H=0} +\cdots \,, \end{align} the expression becomes simpler: \begin{align} f_2(p_T^2,m_H^2) =f_1(t,0) +(m_H^2) \left.\frac{\partial f_1}{\partial (m_H^2)}\right|_{m_H=0} +\frac{(m_H^2)^2}{2} \left.\frac{\partial ^2f_1}{\partial (m_H^2)^2}\right|_{m_H=0} +\mathcal{O}((m_H^2)^3) \,,
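The kinematic relations above are easy to cross-check symbolically. The following sketch (using \texttt{sympy}; the variable names are ours) verifies that the branch of Eq.~\eqref{mh-3} with the minus sign satisfies Eq.~\eqref{mh-3b}, that the other branch corresponds to $u$ as stated in the footnote, and that $t\to -p_T^2$ at leading order in the high-energy limit:

```python
import sympy as sp

s, pT2, mH2 = sp.symbols('s p_T2 m_H2', positive=True)

# Eq. (mh-3): the branch with "-" in front of the square root
D = sp.sqrt(s*(s - 4*mH2 - 4*pT2))
t_minus = -(s - 2*mH2 - D)/2
# the other branch ("+"), which the footnote identifies with u
t_plus = -(s - 2*mH2 + D)/2

# t_minus satisfies Eq. (mh-3b), p_T^2 = (u t - m_H^4)/s, with u = 2 m_H^2 - s - t
u = 2*mH2 - s - t_minus
assert sp.expand((u*t_minus - mH2**2)/s - pT2) == 0
# and the second solution of the quadratic equation is indeed u
assert sp.expand(u - t_plus) == 0

# sign convention: t = -p_T^2 + ... in the massless, high-energy limit
lead = sp.series(t_minus.subs(mH2, 0), pT2, 0, 2).removeO()
assert sp.expand(lead + pT2) == 0
```

The last assertion confirms the sign convention chosen in the footnote.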
In general, for a given order of $m_H^2$, the difference between the strict $m_H$-expansion of $f_2(p_T^2,m_H^2)$ for fixed $p_T$ and the $m_H$-expansion of $f_1(t,m_H^2)$ for the $m_H$-dependent $t$ is higher order in $m_H^2$. The conclusions of this subsection are the following: \begin{enumerate} \item As the result of the $m_H$-expansion, the integrals reduce to integrals with massless external legs. \item The $m_H$-expansion for fixed $p_T$ is obtained by the $m_H$-expansion with fixed~$t$, keeping the $m_H$-dependence of $t$. \end{enumerate} \subsection{Expansion in the Top Quark Mass} \label{ss:one-top} After the expansion in $m_H$, we have integrals which depend on $m_t$, $S,$ and $T$. We now consider the expansion in $m_t$ assuming the hierarchy~\eqref{sca2}. The integral of interest is \begin{align} \int\! \mathfrak{D}^4\alpha^\delta~ \mathcal{U}^{-d/2} e^{ -\mathcal{F}/\mathcal{U} }, \label{one-alpha2} \end{align} with \begin{align} \mathcal{U}=\alpha_{1234},\qquad \mathcal{F}=m_t^2\alpha_{1234}\mathcal{U}+S\alpha_1\alpha_3+T\alpha_2\alpha_4 \,. \label{one-uf2} \end{align} Here we use the positive Mandelstam variables, $S,T$, to make all the terms in $\mathcal{F}$ positive. Otherwise, hard terms could cancel and result in a soft term, which breaks the method of regions. The use of positive Mandelstam variables in Eq.~\eqref{one-uf2} is conceptually not new, since it corresponds to the integral in the $u$-channel where $s<0,t<0$ and $u=-s-t>0$. The absence of a negative term in the $u$-channel is reasonable because there is no physical cut in those kinematics. By using the package \texttt{asy2.1.m}~\cite{Jantzen:2012mw}, we reveal five relevant scalings:\footnote{ More precisely, \texttt{asy2.1.m} reveals the scalings which lead to homogeneous and non-scaleless integrals. } \begin{align} \underbrace{(0,0,0,0)}_{1},~ \underbrace{(0,0,1,1)}_{2},~ \underbrace{(0,1,1,0)}_{3},~ \underbrace{(1,0,0,1)}_{4},~ \underbrace{(1,1,0,0)}_{5}\,. 
\label{one-scale} \end{align} The scalings of regions 2 to 5 reflect the symmetries of the integral, $\alpha_1\leftrightarrow\alpha_3$ and $\alpha_2\leftrightarrow\alpha_4$. Eq.~\eqref{one-alpha2} is thus expressed as the sum of the contributions from these five regions: $\sum_{i=1}^5 I^{(i)}$. \subsubsection*{Region 1 (all-hard region)} The region where all the alpha variables scale as $\chi^0$, i.e., $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)\overset{\chi}{\sim } (0,0,0,0)$, is special, and we call it the ``all-hard region". We can make several general statements about this region within our high-energy expansion: \begin{enumerate} \renewcommand{\theenumi}{(\alph{enumi})} \item Every integral has one all-hard region. \item There is only one soft parameter in the all-hard region, which is $m_t$. \item The contribution from the all-hard region can be expressed in terms of massless integrals of the original topology. In particular, the leading order term is obtained by substituting $m_t=0$ into the original integral. \item The leading order contribution of the all-hard region is $\mathcal{O}(\chi ^0)$. \item The contribution from the all-hard region has no singularities in $\delta_j$. \end{enumerate} Because of these properties, the contribution from the all-hard region can be calculated in two ways. The first is the procedure that is universal for any region; the second uses the momentum representation. We describe them in turn. First, the universal procedure. By expanding Eq.~\eqref{one-alpha2} in $\chi m_t^2$, we obtain the contribution of this region as \begin{align} I^{(1)}&=\int\! \mathfrak{D}^4\alpha^\delta~ {\mathcal{U}^{(1)}}^{-d/2} e^{ -\mathcal{F}^{(1)}/\mathcal{U}^{(1)} } \left[ 1-\chi m_t^2\alpha_{1234} +\frac{\chi^2}{2} \left( m_t^2\alpha_{1234} \right) ^2 \right] +\mathcal{O}(\chi^3) \label{one-r1-1} \,, \end{align} where $\mathcal{U}^{(1)}=\alpha_{1234}$ and $\mathcal{F}^{(1)}=S\alpha_1\alpha_3+T\alpha_2\alpha_4$.
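Since $\mathcal{F}/\mathcal{U}=\chi m_t^2\alpha_{1234}+\mathcal{F}^{(1)}/\mathcal{U}^{(1)}$ in this region, the bracket in Eq.~\eqref{one-r1-1} is a plain Taylor expansion of the exponential. A minimal \texttt{sympy} check (variable names are ours):

```python
import sympy as sp

chi, mt2, S, T = sp.symbols('chi m_t2 S T', positive=True)
a1, a2, a3, a4 = sp.symbols('alpha1:5', positive=True)

U = a1 + a2 + a3 + a4            # U^(1) = alpha_1234
F1 = S*a1*a3 + T*a2*a4           # F^(1)
F = chi*mt2*U*U + F1             # Eq. (one-uf2) with m_t^2 -> chi m_t^2

# Taylor expansion of the integrand in chi, cf. the bracket of Eq. (one-r1-1)
expansion = sp.series(sp.exp(-F/U), chi, 0, 3).removeO()
bracket = 1 - chi*mt2*U + chi**2/2*(mt2*U)**2
assert sp.simplify(expansion - sp.exp(-F1/U)*bracket) == 0
```

Only the hard exponential factor $e^{-\mathcal{F}^{(1)}/\mathcal{U}^{(1)}}$ remains unexpanded, which is what makes the leading term a massless box.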
As stated above, the leading order term is the massless box integral and we name it $\mathcal{T}^{(1)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon}$ for later use: \begin{align} &\mathcal{T}^{(1)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} =\int\! \mathfrak{D}^4\alpha^\delta~ {\mathcal{U}^{(1)}}^{-d/2} e^{ -\mathcal{F}^{(1)}/\mathcal{U}^{(1)} } \label{one-r1-2} \\ &=\int\!\! \mathrm{d}z\frac{T^z~ \Gamma \left[ -z,1+z+\delta_2,1+z+\delta_4, -1-z-\delta_{0124}, -1-z-\delta_{0234}, 2+z+\delta_{01234} \right] }{S^{2+z+\delta_{01234}} \Gamma [-\delta_{001234},1+\delta_1,1+\delta_2,1+\delta_3,1+\delta_4] } \,. \label{one-r1-3} \end{align} The integrand of the higher order corrections has additional factors of $\alpha_i$ which can be expressed by some shifts of $\delta_j\to\delta_j+1$. Indeed, Eq.~\eqref{one-r1-1} can be expressed as \begin{align} I^{(1)}=\mathcal{T}^{(1)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} -\chi m_t^2 &\left( \mathcal{P}_{1+\delta_1}^1\mathcal{T}^{(1)}_{1+\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} +\mathcal{P}_{1+\delta_2}^1\mathcal{T}^{(1)}_{\delta_1,1+\delta_2,\delta_3,\delta_4,\epsilon} \right. \nonumber\\ &\left. +\mathcal{P}_{1+\delta_3}^1\mathcal{T}^{(1)}_{\delta_1,\delta_2,1+\delta_3,\delta_4,\epsilon} +\mathcal{P}_{1+\delta_4}^1\mathcal{T}^{(1)}_{\delta_1,\delta_2,\delta_3,1+\delta_4,\epsilon} \right) +\mathcal{O}(\chi ^2) \,, \label{one-r1-4} \end{align} where $\mathcal{P}_x^n=\Gamma(x+n)/\Gamma(x)$ is the Pochhammer symbol. We call $\mathcal{T}^{(1)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon}$ a ``template integral" since all the higher order terms can be expressed in terms of $\mathcal{T}^{(1)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon}$ by shifting $\delta_j$. Since there is no singularity in $\delta_j$, we can safely set $\delta_j=0$ in Eq.~\eqref{one-r1-4}. Then, the leading order term is expressed as \begin{align} \mathcal{T}^{(1)}_{0,0,0,0,\epsilon} &= \left. 
\int _{-1/2-i\infty}^{-1/2+i\infty} \frac{\mathrm{d}z~T^z}{S^{2+\epsilon +z}} \frac{\Gamma [ -z,1+z,1+z, -1-\epsilon-z, -1-\epsilon-z, 2+\epsilon+z]} {\Gamma (-2\epsilon )} \right|_{\epsilon \simeq -1} \,, \label{one-r1-5} \end{align} where the technique explained in Appendix~\ref{ss:introMB} is used to set the integration contour to a straight line. The expression at $\epsilon \to 0$ is obtained by using the package \texttt{MB.m}~\cite{Czakon:2005rk}, and the result is \begin{align} \mathcal{T}^{(1)}_{0,0,0,0,\epsilon} &= e^{i\pi\epsilon} \frac{e^{-\epsilon \gamma_E}}{s^{2+\epsilon}v} \left[ -\frac{4}{\epsilon^2} +\frac{2h_0+2i\pi}{\epsilon} +\frac{4\pi^2}{3} +\mathcal{O}(\epsilon) \right] \,. \label{one-r1-7} \end{align} The higher order terms are given by $\mathcal{T}^{(1)}_{1,0,0,0,\epsilon}$, $\mathcal{T}^{(1)}_{0,1,0,0,\epsilon}$ etc, and can be calculated in a similar way. The other procedure to calculate the contribution of the all-hard region is the following. We return to the momentum representation~\eqref{one1} and expand each propagator in $m_t$ as \begin{align} \frac{1}{m_t^2-\ell ^2}\to \sum_{n=0}^\infty \frac{(-m_t^2)^n}{(-\ell^2)^{n+1}} \,. \label{one-r1-8} \end{align} Then, the contribution can be expressed in terms of the integral family \begin{align} J_{a_1,a_2,a_3,a_4}^\mathrm{massless}= \int\!\! \frac{d^d\ell}{i \pi^{d/2}} \frac{1} {(-\ell^2)^{a_1} \left( -(\ell+q_1)^2 \right)^{a_2} \left( -(\ell+q_1+q_2)^2 \right)^{a_3} \left( -(\ell-q_3)^2 \right)^{a_4}} \, \label{one-r1-9} \end{align} as \begin{align} I^{(1)}= J_{1,1,1,1}^\mathrm{massless} -m_t^2(J_{2,1,1,1}^\mathrm{massless}+J_{1,2,1,1}^\mathrm{massless} +J_{1,1,2,1}^\mathrm{massless}+J_{1,1,1,2}^\mathrm{massless}) +\mathcal{O}(m_t^4) \,. \label{one-r1-10} \end{align} This corresponds to Eq.~\eqref{one-r1-4}. 
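The expansion~\eqref{one-r1-8} is a geometric series; its sign conventions can be verified quickly with \texttt{sympy} (here the symbol \texttt{ell2} plays the role of $\ell^2$):

```python
import sympy as sp

mt2, ell2 = sp.symbols('m_t2 ell2')

# Eq. (one-r1-8): Taylor expansion of the massive propagator in m_t^2
lhs = sp.series(1/(mt2 - ell2), mt2, 0, 4).removeO()
rhs = sum((-mt2)**n / (-ell2)**(n + 1) for n in range(4))
assert sp.simplify(lhs - rhs) == 0
```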
Applying the IBP reduction, all the integrals appearing in Eq.~\eqref{one-r1-10}, including the higher order terms, can be expressed in terms of three master integrals \begin{align} J_{1,0,1,0}^\mathrm{massless},~ J_{0,1,0,1}^\mathrm{massless},~ J_{1,1,1,1}^\mathrm{massless} \,. \end{align} The IBP reduction of massless integrals is computationally easy even at the two-loop level, and thus very useful. In the calculation of two-loop integrals, we adopt this approach to calculate the contribution from the all-hard region. \subsubsection*{Regions 2, 3, 4, 5} The contribution of Region~2 is obtained by applying the second scaling of Eq.~\eqref{one-scale} and expanding in $\chi m_t^2$, $\chi\alpha_3$, and $\chi\alpha_4$: \begin{align} I^{(2)}&=\int\! \mathfrak{D}^4\alpha~ {\mathcal{U}^{(2)}}^{-d/2} e^{ -\mathcal{F}^{(2)}/\mathcal{U}^{(2)} } \left[ 1-\chi \left(m_t^2\alpha_{34} +\frac{d}{2}\frac{\alpha_{34}}{\mathcal{U}^{(2)}} +S\frac{\alpha_1\alpha_3\alpha_{34}}{(\mathcal{U}^{(2)})^2} +T\frac{\alpha_2\alpha_4\alpha_{34}}{(\mathcal{U}^{(2)})^2} \right) \right] +\mathcal{O}(\chi^2) \label{one-r2-1} \,, \end{align} where $\mathcal{U}^{(2)}=\alpha_{12}$ and $\mathcal{F}^{(2)}=m_t^2\alpha_{12}\mathcal{U}^{(2)}+S\alpha_1\alpha_3+T\alpha_2\alpha_4$. The integration over $\alpha_1,\dots,\alpha_4$ can be performed using the relation~\eqref{form1} and a variant of it, and the template integral of this region is \begin{align} \mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} = \frac{(m_t^2)^{-\epsilon-\delta_1-\delta_2}} {S^{1+\delta_3}T^{1+\delta_4}} \frac{\Gamma [\delta_1-\delta_3,\delta_2-\delta_4,\delta_1+\delta_2+\epsilon]} {\Gamma [\delta_{12}-\delta_{34},1+\delta_1,1+\delta_2]} \,. \label{one-r2-2} \end{align} The higher order terms in Eq.~\eqref{one-r2-1} which contain the inverse of $\mathcal{U}^{(2)}$ can be expressed by the shift $\epsilon\to\epsilon -1$ in the template integral.
Thus Eq.~\eqref{one-r2-1} is written as \begin{align} I^{(2)}&= \mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} +\chi m_t^2 (\mathcal{P}_{1+\delta_3}^1\mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3+1,\delta_4,\epsilon} +\mathcal{P}_{1+\delta_4}^1\mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4+1,\epsilon}) \nonumber\\ &+\chi \frac{d}{2} (\mathcal{P}_{1+\delta_3}^1\mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3+1,\delta_4,\epsilon-1} +\mathcal{P}_{1+\delta_4}^1\mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4+1,\epsilon-1}) \nonumber\\ &+\chi S (\mathcal{P}_{1+\delta_1}^1\mathcal{P}_{1+\delta_3}^2 \mathcal{T}^{(2)}_{\delta_1+1,\delta_2,\delta_3+2,\delta_4,\epsilon-2} +\mathcal{P}_{1+\delta_1}^1\mathcal{P}_{1+\delta_3}^1\mathcal{P}_{1+\delta_4}^1 \mathcal{T}^{(2)}_{\delta_1+1,\delta_2,\delta_3+1,\delta_4+1,\epsilon-2}) \nonumber\\ &+\chi T (\mathcal{P}_{1+\delta_2}^1\mathcal{P}_{1+\delta_3}^1\mathcal{P}_{1+\delta_4}^1 \mathcal{T}^{(2)}_{\delta_1,\delta_2+1,\delta_3+1,\delta_4+1,\epsilon-2} +\mathcal{P}_{1+\delta_2}^1\mathcal{P}_{1+\delta_4}^2 \mathcal{T}^{(2)}_{\delta_1,\delta_2+1,\delta_3,\delta_4+2,\epsilon-2}) +\mathcal{O}(\chi^2) \label{one-r2-3} \,. \end{align} Recall that $\mathcal{P}_x^n=\Gamma(x+n)/\Gamma(x)$ is the Pochhammer symbol. As mentioned in Subsection~\ref{ss:conv}, the result of the limits $\delta_j\to0$ depends on the order in which we take them. 
For example, when we take the sequence of limits with ascending values of $j$, we obtain \begin{align} \lim_{\epsilon,\delta_4,\delta_3,\delta_2,\delta_1\to0} \mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} = \frac{e^{-\epsilon \gamma_E}}{s^{2}v(m_t^2)^{\epsilon}} \frac{1}{\epsilon} \left( \frac{1}{\delta_3}+\frac{1}{\delta_4} -\log s-h_0+i\pi \right) +\mathcal{O}(\epsilon) \label{one-r2-4} \,, \end{align} whereas with descending values of $j$, we instead obtain \begin{align} \lim_{\epsilon,\delta_1,\delta_2,\delta_3,\delta_4\to0} \mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} = \frac{e^{-\epsilon \gamma_E}}{s^{2}v(m_t^2)^{\epsilon}} \left[ \frac{2}{\epsilon^2} -\frac{1}{\epsilon}\left( \frac{1}{\delta_1}+\frac{1}{\delta_2} -2\log (m_t^2) \right) -\frac{\pi^2}{6} \right] +\mathcal{O}(\epsilon) \label{one-r2-5} \,. \end{align} The order dependence is not a problem, provided we use the same order throughout the calculation. The artifacts caused by the $\delta_j$ will cancel after we sum the contributions from all the relevant regions.
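The order dependence can also be seen numerically. In the sketch below (using \texttt{mpmath}, which ships with \texttt{sympy}), we keep only the $\Gamma$-function part of Eq.~\eqref{one-r2-2} and mimic ``taking $\delta_j\to0$ first'' by choosing that regulator much smaller than the remaining ones; the ascending ordering develops poles in $\delta_3,\delta_4$ [cf. Eq.~\eqref{one-r2-4}], the descending one in $\delta_1,\delta_2$ [cf. Eq.~\eqref{one-r2-5}]:

```python
from mpmath import mp, gamma

mp.dps = 40
eps = mp.mpf('1e-3')

def gamma_part(d1, d2, d3, d4):
    # Gamma functions of the template integral, Eq. (one-r2-2)
    return (gamma(d1 - d3) * gamma(d2 - d4) * gamma(d1 + d2 + eps)
            / gamma(d1 + d2 - d3 - d4))

# "delta_1 -> 0 first" is mimicked by delta_1 << delta_2 << delta_3 << delta_4
d = [mp.mpf('1e-24'), mp.mpf('1e-18'), mp.mpf('1e-12'), mp.mpf('1e-6')]
asc = gamma_part(d[0], d[1], d[2], d[3])    # ascending order of limits
desc = gamma_part(d[3], d[2], d[1], d[0])   # descending order of limits

# ascending: leading poles -(1/delta_3 + 1/delta_4)/epsilon
assert abs(asc / (-(1/d[2] + 1/d[3])/eps) - 1) < 1e-2
# descending: leading poles +(1/delta_1 + 1/delta_2)/epsilon instead
assert abs(desc / ((1/d[3] + 1/d[2])/eps) - 1) < 1e-2
```

Only the pole pattern of the $\Gamma$-function factor is probed here; the prefactor of Eq.~\eqref{one-r2-2} supplies the remaining $s$-, $v$- and phase-dependence of Eqs.~\eqref{one-r2-4} and~\eqref{one-r2-5}.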
Due to the symmetries of the diagrams, the template integrals of the other regions can be expressed in terms of $\mathcal{T}^{(2)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon}$ with exchanged $\delta_j$ as \begin{align} \mathcal{T}^{(3)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} &= \mathcal{T}^{(2)}_{\delta_1,\delta_4,\delta_3,\delta_2,\epsilon} \label{one-r3-1} \\ \mathcal{T}^{(4)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} &= \mathcal{T}^{(2)}_{\delta_3,\delta_2,\delta_1,\delta_4,\epsilon} \label{one-r4-1} \\ \mathcal{T}^{(5)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} &= \mathcal{T}^{(2)}_{\delta_3,\delta_4,\delta_1,\delta_2,\epsilon} \,, \label{one-r5-1} \end{align} and when we take the ascending order of limits, we obtain \begin{align} \lim_{\epsilon,\delta_4,\delta_3,\delta_2,\delta_1\to0} \mathcal{T}^{(3)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} &= \frac{e^{-\epsilon \gamma_E}}{s^{2}v(m_t^2)^{\epsilon}} \left[ \frac{1}{\epsilon^2} +\frac{1}{\epsilon}\left( \log(m_t^2)-\log s+i\pi +\frac{1}{\delta_3}-\frac{1}{\delta_4} \right) -\frac{\pi^2}{12} \right] +\mathcal{O}(\epsilon) \label{one-r3-2} \\ \lim_{\epsilon,\delta_4,\delta_3,\delta_2,\delta_1\to0} \mathcal{T}^{(4)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} &= \frac{e^{-\epsilon \gamma_E}}{s^{2}v(m_t^2)^{\epsilon}} \left[ \frac{1}{\epsilon^2} +\frac{1}{\epsilon}\left( \log(m_t^2)-h_0 -\frac{1}{\delta_3}+\frac{1}{\delta_4} \right) -\frac{\pi^2}{12} \right] +\mathcal{O}(\epsilon) \label{one-r4-2} \\ \lim_{\epsilon,\delta_4,\delta_3,\delta_2,\delta_1\to0} \mathcal{T}^{(5)}_{\delta_1,\delta_2,\delta_3,\delta_4,\epsilon} &= \frac{e^{-\epsilon \gamma_E}}{s^{2}v(m_t^2)^{\epsilon}} \left[ \frac{2}{\epsilon^2} +\frac{1}{\epsilon}\left( 2\log(m_t^2)-\frac{1}{\delta_3}-\frac{1}{\delta_4} \right) -\frac{\pi^2}{6} \right] +\mathcal{O}(\epsilon) \label{one-r5-2} \,. 
\end{align} \subsubsection*{Sum of all Regions} Summing Eqs.~\eqref{one-r1-7}, \eqref{one-r2-4}, \eqref{one-r3-2}, \eqref{one-r4-2}, \eqref{one-r5-2}, we obtain the leading term of Eq.~\eqref{one1}: \begin{align} \mathrm{Eq.~\eqref{one1}}= e^{i\pi\epsilon} \frac{e^{-\epsilon \gamma_E}}{s^{2+\epsilon}v} \left\{ \pi^2-2 \left[ \log \left( \frac{s}{m_t^2} \right) -i\pi \right] \left[ \log \left( \frac{s}{m_t^2} \right) +h_0 \right] \right\} +\mathcal{O}(m_H^2,m_t^2,\epsilon) \,. \end{align} As mentioned, the result is $\delta_j$-independent. There are 24 possible ways to order $\delta_1,\delta_2,\delta_3,\delta_4$ in taking the limit, and we have confirmed that the result is the same for all of the orderings. Since the original integral is finite in the limit $\epsilon\to0$, the poles in $\epsilon$ of the individual contributions from each region cancel. \subsection{Higher Order Terms in $m_t$} \label{ss:one-higher} The method of regions can be used to obtain a series expansion up to arbitrary order in the soft parameters, $m_H$ or $m_t$. For example, Eqs.~\eqref{one-r1-4} and~\eqref{one-r2-3} include the contributions up to $\mathcal{O}(\chi )$. In general, the higher order terms can be expressed in terms of the template integrals. Therefore, in principle, it is straightforward to calculate higher order terms up to arbitrary order, once the template integrals have been obtained. However, the number of integrals to calculate increases rapidly with the order of $\chi$, which makes the calculation of higher order terms laborious. Especially in the two-loop case, some of the template integrals contain multi-dimensional Mellin-Barnes integrals, which are not easy to solve, and the number of integrals grows more rapidly than in the one-loop case. Therefore it is better to use another method to calculate the higher-order corrections.
The use of differential equations solves this issue~\cite{Melnikov:2016qoc,Kudashkin:2017skd,Davies:2018ood,Davies:2018qvx}. We use the differential equation with respect to $m_t^2$ to obtain higher order corrections in $m_t^2$. Since we know that the integral has the form \begin{align} \mathrm{Eq.~\eqref{one-alpha2}}= \sum_{n_1} \sum_{n_2} c_{n_1,n_2}(S,T) (m_t^2)^{n_1} \left( \log m_t^2 \right)^{n_2} \,, \label{diffeq1} \end{align} the set of differential equations reduces to a set of linear relations among the $c_{n_1,n_2}(S,T)$, which considerably simplifies the problem. In this sense, the leading order terms calculated in the previous subsections serve as the boundary conditions of the differential equations. \subsection{Integrals with Fewer Lines} \label{ss:fewer} Once we have calculated the box integral, there are several shortcuts to calculate integrals with fewer lines, such as the triangle integral and the self-energy integral. Let us consider the $s$-channel triangle diagram, $J_{1,1,1,0}$, as an example. The alpha representation of $J_{1,1,1,0}$ is obtained by setting $\alpha_4\to 0$ in Eq.~\eqref{one-uf}, since the fourth propagator is absent in $J_{1,1,1,0}$. If we use \texttt{asy2.1.m} to reveal the relevant scalings for $J_{1,1,1,0}$, we obtain \begin{align} (0,0,0),~(0,0,1),~(1,0,0), \end{align} but this step is actually unnecessary: the template integrals for $J_{1,1,1,0}$ can be derived from the template integrals of $J_{1,1,1,1}$. Using the fact that the $\delta_j$-dependence of the alpha representation is expressed by the replacement $a_j\to 1+\delta_j$, the triangle integral $J_{1,1,1,0}$ corresponds to the limit $\delta_4\to-1$. Therefore, by taking the limit $\delta_4\to-1$ of the template integrals of $J_{1,1,1,1}$, one can obtain the template integrals for $J_{1,1,1,0}$.
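This limiting procedure is easy to check on the one-loop template integrals; a \texttt{sympy} sketch (variable names are ours), using the explicit form of Eq.~\eqref{one-r2-2} and its crossed version $\mathcal{T}^{(3)}$:

```python
import sympy as sp

d1, d2, d3, d4, eps = sp.symbols('delta1 delta2 delta3 delta4 epsilon')
S, T, mt2 = sp.symbols('S T m_t2', positive=True)

def T2(b1, b2, b3, b4):
    # region-2 template integral of J_{1,1,1,1}, Eq. (one-r2-2)
    return (mt2**(-eps - b1 - b2) / (S**(1 + b3) * T**(1 + b4))
            * sp.gamma(b1 - b3) * sp.gamma(b2 - b4) * sp.gamma(b1 + b2 + eps)
            / (sp.gamma(b1 + b2 - b3 - b4) * sp.gamma(1 + b1) * sp.gamma(1 + b2)))

# delta_4 -> -1 removes the fourth propagator: a template integral of J_{1,1,1,0}
triangle = T2(d1, d2, d3, sp.Integer(-1))
few1 = (mt2**(-eps - d1 - d2) / S**(1 + d3)
        * sp.gamma(d1 - d3) * sp.gamma(1 + d2) * sp.gamma(d1 + d2 + eps)
        / (sp.gamma(1 + d1 + d2 - d3) * sp.gamma(1 + d1) * sp.gamma(1 + d2)))
assert sp.simplify(triangle - few1) == 0

# T^(3) = T^(2) with delta_2 <-> delta_4: the 1/Gamma(1+delta_4) factor
# suppresses it, so the limit delta_4 -> -1 vanishes
t3 = T2(d1, d4, d3, d2).subs({d1: sp.Rational(1, 7), d2: sp.Rational(1, 5),
                              d3: sp.Rational(1, 3), eps: sp.Rational(1, 11),
                              S: 2, T: 3, mt2: 1})
near = t3.subs(d4, sp.Rational(-1) + sp.Rational(1, 10**8))
assert abs(sp.N(near)) < 1e-5
```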
For example, \begin{align} \lim_{\delta_4\to-1} \mathrm{Eq.~\eqref{one-r2-2}} = \frac{(m_t^2)^{-\epsilon-\delta_1-\delta_2}} {S^{1+\delta_3}} \frac{\Gamma [\delta_1-\delta_3,1+\delta_2,\delta_1+\delta_2+\epsilon]} {\Gamma [1+\delta_{12}-\delta_{3},1+\delta_1,1+\delta_2]} \,, \label{one-few1} \end{align} and this is the template integral for the region $(0,0,1)$. Sometimes the limit vanishes due to a suppression factor $1/\Gamma(1+\delta_4)$. For example, \begin{align} \lim_{\delta_4\to-1} \mathrm{Eq.~\eqref{one-r3-1}} = 0 \,. \label{one-few2} \end{align} This fact is reasonable because the number of relevant regions for $J_{1,1,1,0}$ is smaller than for $J_{1,1,1,1}$. Two of the four soft template integrals are non-vanishing after taking the limit, and they are the two soft template integrals of $J_{1,1,1,0}$. \section{Two-Loop Planar Diagrams} \label{ss:twoPL} We consider the following Feynman integral families \begin{align} J^\mathrm{PL1}_{a_1,a_2,a_3,a_4,a_5,a_6,a_7}&= \int\!\! \frac{\mathrm{d}^d\ell_1}{i \pi^{d/2}} \frac{\mathrm{d}^d\ell_2}{i \pi^{d/2}} \frac{1}{(-p_7^2)^{a_7}} \prod _{n=1}^6 \frac{1}{(m_t^2-p_n^2)^{a_n}} \label{pl-1} \\ J^\mathrm{PL2}_{a_1,a_2,a_3,a_4,a_5,a_6,a_7}&= \int\!\! \frac{\mathrm{d}^d\ell_1}{i \pi^{d/2}} \frac{\mathrm{d}^d\ell_2}{i \pi^{d/2}} \prod _{n=1,2,3,7} \frac{1}{(m_t^2-p_n^2)^{a_n}} \prod _{n=4}^6 \frac{1}{(-p_n^2)^{a_n}} \label{pl-2} \end{align} where the momenta of the lines are \begin{align} \{p_1,p_2,p_3,p_4,p_5,p_6,p_7\} =\{\ell_1+q_{1}, \ell_1, \ell_1-q_2, \ell_2-q_2, \ell_{2}+q_{13}, \ell_2+q_1, \ell_1-\ell_2\} \,. \label{pl-3} \end{align} Recall that $q_{13}=q_1+q_3$. We consider the integrals $I^\mathrm{PL1}=J^\mathrm{PL1}_{1,1,1,1,1,1,1}$ and $I^\mathrm{PL2}=J^\mathrm{PL2}_{1,1,1,1,1,1,1}$ whose diagrammatic representations are shown in Fig.~\ref{fig:two-pl}. 
The $m_H$-expansion can be performed in the same way as in Subsection~\ref{ss:one-higgs}, and the alpha representations of the integrals after the $m_H$-expansion are given by \begin{align} &\mathcal{U}^\mathrm{PL1}= \mathcal{U}^\mathrm{PL2}= \alpha_{123}\alpha_{456}+\alpha_{123456}\alpha_7\,, \\ &\mathcal{F}^\mathrm{PL1}=m_t^2\alpha_{123456}\mathcal{U}^\mathrm{PL1} +S\left[ \alpha_1\left( \alpha_4\alpha_{67}+\alpha_3\alpha_{4567} \right) +\alpha_6\left( \alpha_{23}\alpha_4+\alpha_{34}\alpha_7\right) \right] +T\alpha_2\alpha_5\alpha_7\,, \\ &\mathcal{F}^\mathrm{PL2}=m_t^2\alpha_{1237}\mathcal{U}^\mathrm{PL2} +S\left[ \alpha_1\left( \alpha_4\alpha_{67}+\alpha_3\alpha_{4567} \right) +\alpha_6\left( \alpha_{23}\alpha_4+\alpha_{34}\alpha_7\right) \right] +T\alpha_2\alpha_5\alpha_7 \,. \end{align} Conceptually there is no difference between the procedure of applying the method of regions to these integrals and the example discussed in Section~\ref{ss:one}. In particular, the property of the $\mathcal{F}$-function that it is positive definite in the $u$-channel is the same [cf. the text below Eq.~\eqref{one-uf2}]. The only new ingredient is that now the template integrals are expressed in terms of at most two-dimensional Mellin-Barnes integrals, which are not trivial to solve. However, their calculation is a subset of the calculation of the non-planar integrals, so we do not describe it here [cf. Subsection~\ref{ss:mb}]; instead, we briefly summarize the important ingredients of the two-loop planar integrals in this section.
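The first Symanzik polynomial quoted above can be cross-checked from the line momenta~\eqref{pl-3}: $\mathcal{U}$ is the determinant of the matrix of loop-momentum coefficients in $\sum_n\alpha_n p_n^2$. A sketch with \texttt{sympy} (scalar stand-ins for the momenta; only the $\ell_1,\ell_2$ coefficients matter for $\mathcal{U}$):

```python
import sympy as sp

l1, l2 = sp.symbols('l1 l2')
q1, q2, q3 = sp.symbols('q1 q2 q3')    # external momenta; they drop out of U
a = sp.symbols('alpha1:8', positive=True)

# line momenta of the planar families, Eq. (pl-3)
p = [l1 + q1, l1, l1 - q2, l2 - q2, l2 + q1 + q3, l2 + q1, l1 - l2]

# quadratic form in the loop momenta
Q = sp.expand(sum(ai * pi**2 for ai, pi in zip(a, p)))
cross = Q.coeff(l1).coeff(l2) / 2
A = sp.Matrix([[Q.coeff(l1**2), cross], [cross, Q.coeff(l2**2)]])

# U = det(A) = alpha_123 alpha_456 + alpha_123456 alpha_7
a123 = a[0] + a[1] + a[2]
a456 = a[3] + a[4] + a[5]
assert sp.expand(A.det() - (a123*a456 + (a123 + a456)*a[6])) == 0
```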
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig4.pdf} \caption{ The two-loop massive planar diagrams defined in Eq.~\eqref{pl-1} (left) and Eq.~\eqref{pl-2} (right).} \label{fig:two-pl} \end{figure} \subsection*{Double Massive Box Diagram} By using the package \texttt{asy2.1.m}, we reveal thirteen relevant scalings: \begin{align} &(0,0,0,0,0,0,0) , (0,0,1,0,0,1,1) , (0,0,1,1,0,0,1) , (0,0,1,1,1,0,0) , (0,1,1,1,0,0,0) ,\nonumber\\ &(1,0,0,0,0,1,1) , (1,0,0,0,1,1,0) , (1,0,0,1,0,0,1) , (1,1,0,0,0,1,0) , (0,0,1,1,1,1,1) ,\nonumber\\ &(1,0,0,1,1,1,1) , (1,1,1,0,0,1,1) , (1,1,1,1,0,0,1) \,. \end{align} The template integrals of these regions are summarized in Appendix~\ref{app:temp}, and can be found in the ancillary file to this paper~\cite{anci}. The result of this integral at the leading order in $m_t$, up to $\mathcal{O}(\epsilon )$, is given by \begin{align} I^\mathrm{PL1}=\sum_{n_1=0}^1\sum_{n_2=0}^{n_1+4} \frac{d_{n_1,n_2}}{v} \epsilon ^{n_1} \log ^{n_2} (m_t) +\mathcal{O}(m_t,\epsilon ^2) \,, \end{align} where the coefficients $d_{n_1,n_2}$ are given by \begin{align} d_{0,4}&=16\,,\qquad d_{0,3}=-\frac{64 h_0}{3}+\frac{32 i \pi }{3}\,,\qquad d_{0,2}=8 h_0^2-16 i \pi h_0-\frac{8 \pi ^2}{3}\,,\nonumber\\ d_{0,1}&=8 i \pi h_0^2+\frac{16 \pi ^2 h_0}{3}-8 \zeta_{3}+\frac{8 i \pi ^3}{3}\,,\nonumber\\ d_{0,0}&=16 h_0 h_3-\frac{h_0^4}{3}-\frac{4}{3} i \pi h_0^3-4 h_0^2 h_2-2 \pi ^2 h_0^2-8 i \pi h_0 h_2+4 h_0 \zeta_{3}-\frac{4}{3} i \pi ^3 h_0+16 i \pi h_3-24 h_4-\frac{7 \pi ^4}{15}\,,\nonumber\\ d_{1,5}&=-\frac{128}{3}\,,\qquad d_{1,4}=\frac{152 h_0}{3}-56 i \pi\,,\qquad d_{1,3}=-16 h_0^2+\frac{208 i \pi h_0}{3}+\frac{160 \pi ^2}{9}\,,\nonumber\\ d_{1,2}&=\frac{4 h_0^3}{3}+4 h_0^2 h_1-20 i \pi h_0^2+8 i \pi h_0 h_1-8 h_0 h_2-28 \pi ^2 h_0-8 i \pi h_2+8 h_3-16 \zeta_{3}-\frac{4 i \pi ^3}{3}\,,\nonumber\\ d_{1,1}&=-\frac{4 h_0^4}{3}-\frac{8 h_0^3 h_1}{3}-4 i \pi h_0^3-4 i \pi h_0^2 h_1+\frac{28 \pi ^2 h_0^2}{3}-8 i \pi h_0 h_2+16 h_0 h_3+24 i \pi 
h_3-32 h_4\nonumber\\ &-\frac{8}{3} \pi ^2 h_0 h_1+16 h_0 \zeta_{3}-\frac{28}{3} i \pi ^3 h_0+\frac{8 \pi ^2 h_2}{3}+\frac{218 \pi ^4}{45}\,,\nonumber\\ d_{1,0}&=\frac{h_0^5}{2}+\frac{h_0^4 h_1}{2}+6 h_0^3 h_2+6 h_0^2 h_1 h_2+4 h_0^2 h_{21}-20 h_0^2 h_3-4 h_0 h_2^2-12 h_0 h_{22}\nonumber\\ &+\frac{11}{6} i \pi h_0^4+\frac{2}{3} i \pi h_0^3 h_1+18 i \pi h_0^2 h_2+12 i \pi h_0 h_1 h_2-24 h_0 h_1 h_3+36 h_1 h_4-4 i \pi h_2^2+4 h_2 h_3\nonumber\\ &+\frac{\pi ^2 h_0^3}{9}+\frac{5}{3} \pi ^2 h_0^2 h_1-6 \pi ^2 h_0 h_2+8 i \pi h_0 h_{21}-32 i \pi h_0 h_3-24 i \pi h_1 h_3-12 i \pi h_{22}-16 i \pi h_4\nonumber\\ &-4 h_0^2 \zeta_{3}+5 i \pi ^3 h_0^2+2 i \pi ^3 h_0 h_1-\frac{157 \pi ^4 h_0}{90}-\frac{2 \pi ^4 h_1}{5}+\frac{10}{3} i \pi ^3 h_2+\frac{26 \pi ^2 h_3}{3}+\frac{61 i \pi ^5}{90}\nonumber\\ &+24 h_0 h_1 \zeta_{3}+24 i \pi h_1 \zeta_{3}-56 h_2 \zeta_{3}+4 (4 h_{23}+7 h_{32}+21 h_5-13 \zeta_{5})+6 \pi ^2 \zeta_{3}\,. \end{align} \subsection*{Single Massive Double Box Diagram} By using the package \texttt{asy2.1.m}, we reveal ten relevant scalings: \begin{align} &(0,0,0,0,0,0,0) , (0,0,1,1,1,0,0) , (0,1,1,1,0,0,0) , (1,0,0,0,1,1,0) , \nonumber\\ &(1,1,0,0,0,1,0) , (0,0,1,1,1,1,1) , (1,0,0,1,1,1,1) , (1,1,1,1,1,1,0) \,. \end{align} The template integrals of these regions are summarized in Appendix~\ref{app:temp}, and can be found in the ancillary file to this paper~\cite{anci}. 
The result of this integral at the leading order in $m_t$, up to $\mathcal{O}(\epsilon )$, is \begin{align} I^\mathrm{PL2}=\sum_{n_1=-2}^1\sum_{n_2=0}^{n_1+4} \frac{d_{n_1,n_2}}{v} \epsilon ^{n_1} \log ^{n_2} (m_t) +\mathcal{O}(m_t,\epsilon ^2) \,, \end{align} where the coefficients $d_{n_1,n_2}$ are now given by \begin{align} d_{-2,2}&=8\,,\qquad d_{-2,1}=-4 h_0+4 i \pi\,,\qquad d_{-2,0}=-\pi ^2-2 i \pi h_0\,,\nonumber\\ d_{-1,3}&=-\frac{32}{3}\,,\qquad d_{-1,2}=-4 h_0-20 i \pi\,,\qquad d_{-1,1}=4 h_0^2+4 i \pi h_0+\frac{20 \pi ^2}{3}\,,\nonumber\\ d_{-1,0}&=\frac{h_0^3}{3}+h_0^2 h_1+3 i \pi h_0^2+2 i \pi h_0 h_1-2 h_0 h_2-\frac{4 \pi ^2 h_0}{3}-2 i \pi h_2+2 h_3-14 \zeta_{3}+2 i \pi ^3\,,\nonumber\\ d_{0,4}&=8\,,\qquad d_{0,3}=\frac{32 h_0}{3}+\frac{80 i \pi }{3}\,,\qquad d_{0,2}=-4 h_0^2+8 i \pi h_0-\frac{70 \pi ^2}{3}\,,\nonumber\\ d_{0,1}&=-4 i \pi h_0^2+4 \pi ^2 h_0+20 \zeta_{3}-\frac{10 i \pi ^3}{3}\,,\nonumber\\ d_{0,0}&=-\frac{5 h_0^4}{6}-\frac{4 h_0^3 h_1}{3}-\frac{h_0^2 h_1^2}{2}-3 h_0^2 h_2+2 h_0 h_1 h_2-2 h_0 h_{21}+20 h_0 h_3+\frac{h_2^2}{2}-h_{22}\nonumber\\ &-\frac{10}{3} i \pi h_0^3-4 i \pi h_0^2 h_1-i \pi h_0 h_1^2-6 i \pi h_0 h_2+2 i \pi h_1 h_2-2 h_1 h_3-2 i \pi h_{21}+20 i \pi h_3-34 h_4\nonumber\\ &+\frac{11 \pi ^2 h_0^2}{6}+\frac{7}{3} \pi ^2 h_0 h_1+14 h_0 \zeta_{3}-\frac{7}{3} i \pi ^3 h_0+2 h_1 \zeta_{3}-\frac{1}{3} i \pi ^3 h_1-\frac{7 \pi ^2 h_2}{3}+24 i \pi \zeta_{3}+\frac{259 \pi ^4}{180}\,,\nonumber\\ d_{1,5}&=-\frac{64}{15}\,,\qquad d_{1,4}=-12 h_0-\frac{68 i \pi }{3}\,,\qquad d_{1,3}=\frac{8 h_0^2}{3}-\frac{56 i \pi h_0}{3}+\frac{272 \pi ^2}{9}\,,\nonumber\\ d_{1,2}&=\frac{2 h_0^3}{3}+2 h_0^2 h_1+6 i \pi h_0^2+4 i \pi h_0 h_1-4 h_0 h_2+\frac{16 \pi ^2 h_0}{3}-4 i \pi h_2+4 h_3-\frac{220 \zeta_{3}}{3}+\frac{40 i \pi ^3}{3}\,,\nonumber\\ d_{1,1}&=-\frac{2 h_0^4}{3}-\frac{4 h_0^3 h_1}{3}-2 i \pi h_0^3-2 i \pi h_0^2 h_1-\frac{8 \pi ^2 h_0^2}{3}-4 i \pi h_0 h_2+8 h_0 h_3+12 i \pi h_3-16 h_4\nonumber\\ &-\frac{4}{3} \pi ^2 h_0 
h_1+\frac{44 h_0 \zeta_{3}}{3}-\frac{16}{3} i \pi ^3 h_0+\frac{4 \pi ^2 h_2}{3}-\frac{176 i \pi \zeta_{3}}{3}+\frac{5 \pi ^4}{18}\,,\nonumber\\ d_{1,0}&=\frac{43 h_0^5}{60}+\frac{5 h_0^4 h_1}{4}+\frac{2 h_0^3 h_1^2}{3}+\frac{17 h_0^3 h_2}{3}+\frac{h_0^2 h_1^3}{6}+5 h_0^2 h_1 h_2-h_0 h_1^2 h_2\nonumber\\ &+5 h_0^2 h_{21}+2 h_0 h_1 h_{21}-3 h_0 h_2^2-\frac{h_1 h_2^2}{2}+h_1 h_{22}+\frac{h_2 h_{21}}{3}-\frac{h_{212}}{3}\nonumber\\ &-22 h_0^2 h_3-28 h_0 h_1 h_3-2 h_0 (h_{211}+7 h_{22})+h_1^2 h_3+\frac{7 h_2 h_3}{3}-h_{221}+\frac{53 h_{23}}{3}\nonumber\\ &+\frac{13}{4} i \pi h_0^4+\frac{13}{3} i \pi h_0^3 h_1+2 i \pi h_0^2 h_1^2+\frac{1}{3} i \pi h_0 h_1^3+46 h_1 h_4+33 h_{32}+98 h_5\nonumber\\ &+17 i \pi h_0^2 h_2+10 i \pi h_0 h_1 h_2+10 i \pi h_0 h_{21}-i \pi h_1^2 h_2+2 i \pi h_1 h_{21}-3 i \pi h_2^2-2 i \pi (h_{211}+7 h_{22})\nonumber\\ &-\frac{22}{9} \pi ^2 h_0^3-3 \pi ^2 h_0^2 h_1-\frac{7}{6} \pi ^2 h_0 h_1^2-5 \pi ^2 h_0 h_2-40 i \pi h_0 h_3-28 i \pi h_1 h_3-8 i \pi h_4\nonumber\\ &+\frac{10}{3} i \pi ^3 h_0^2+\frac{4}{3} i \pi ^3 h_0 h_1+\frac{1}{6} i \pi ^3 h_1^2+\frac{7}{3} \pi ^2 h_1 h_2+5 i \pi ^3 h_2-\frac{7 \pi ^2 h_{21}}{3}+16 \pi ^2 h_3\nonumber\\ &-9 h_0^2 \zeta_{3}+26 h_0 h_1 \zeta_{3}-\frac{599 \pi ^4 h_0}{180}-h_1^2 \zeta_{3}-\frac{161 \pi ^4 h_1}{180}-64 h_2 \zeta_{3}-\frac{47 i \pi ^5}{90}\nonumber\\ &-\frac{32}{3} i \pi h_0 \zeta_{3}+26 i \pi h_1 \zeta_{3}+29 \pi ^2 \zeta_{3}-121 \zeta_{5}\,. \end{align} \section{Two-Loop Non-Planar Diagram} \label{ss:two} For the two-loop massive non-planar diagrams, we consider the following two Feynman integral families \begin{align} J^\mathrm{NPL1}_{a_1,a_2,a_3,a_4,a_5,a_6,a_7}&= \int\!\! 
\frac{\mathrm{d}^d\ell_1}{i \pi^{d/2}} \frac{\mathrm{d}^d\ell_2}{i \pi^{d/2}} \prod _{n=1}^2 \frac{1}{(-p_n^2)^{a_n}} \prod _{n=3}^7 \frac{1}{(m_t^2-p_n^2)^{a_n}} \label{two-1} \end{align} where the momenta of the lines are given by \begin{align} \{p_1,p_2,p_3,p_4,p_5,p_6,p_7\} =\{\ell_1+q_{12}, \ell_1-q_3, \ell_{12}+q_{2\bar3}, \ell_{12}+q_2, \ell_2-q_1, \ell_2, \ell_2+q_2\} \,, \label{two-2} \end{align} and \begin{align} J^\mathrm{NPL2}_{a_1,a_2,a_3,a_4,a_5,a_6,a_7}&= \int\!\! \frac{\mathrm{d}^d\ell_1}{i \pi^{d/2}} \frac{\mathrm{d}^d\ell_2}{i \pi^{d/2}} \prod _{n=1}^4 \frac{1}{(m_t^2-p_n^2)^{a_n}} \prod _{n=5}^7 \frac{1}{(-p_n^2)^{a_n}} \label{two-npl2-1} \end{align} where the momenta of the lines are \begin{align} \{p_1,p_2,p_3,p_4,p_5,p_6,p_7\} =\{\ell_1, \ell_1+q_3, \ell_{12}+q_{23}, \ell_{12}-q_1, \ell_2-q_1, \ell_2, \ell_2+q_2\} \,. \label{two-npl2-2} \end{align} Recall that $q_{2\bar 3}=q_2-q_3$ and $q_{23}=q_2+q_3$. We consider $I^\mathrm{NPL1}=J^\mathrm{NPL1}_{1,1,1,1,1,1,1}$ as an example in this section. The template integrals of $I^\mathrm{NPL2}=J^\mathrm{NPL2}_{1,1,1,1,1,1,1}$ are provided in Appendix~\ref{app:temp}. The Feynman diagrams corresponding to $I^\mathrm{NPL1}$ and $I^\mathrm{NPL2}$ are illustrated in Fig.~\ref{fig:two}. In Subsection~\ref{ss:one-higgs} we showed that the expansion in $m_H$ can be obtained by the naive Taylor expansion of the integrand. This also holds in the present case, and the $m_H$-expansion is straightforward. Therefore we again consider only integrals with massless external legs. The alpha representation of our non-planar integral is \begin{align} I^\mathrm{NPL1}&\overset{\mathrm{AC}}{=} \int\!
\mathfrak{D}^7\alpha^\delta~ \mathcal{U}^{-d/2} e^{ -\mathcal{F}/\mathcal{U} } \,, \label{two-4} \end{align} where \begin{align} \mathcal{U}&=\alpha_{12}\alpha_{34567}+\alpha_{34}\alpha_{567} \label{two-5} \\ \mathcal{F}&= m_t^2\alpha_{34567}\mathcal{U} +S\left( \alpha_1\alpha_7\alpha_{45}+\alpha_2\alpha_5\alpha_{37}+\alpha_5\alpha_7\alpha_{34} \right) +T\alpha_1\alpha_3\alpha_6 +U\alpha_2\alpha_4\alpha_6 \label{two-6} \,. \end{align} \subsection{Relevant Scaling} \label{ss:two-scale} A new feature appears in Eq.~\eqref{two-6}: if we impose the relation $S+T+U=0$, we obtain \begin{align} \mathcal{F} \overset{\mathrm{AC}}{=} m_t^2\alpha_{34567}\mathcal{U} +S\left( \alpha_1\alpha_7\alpha_{45}+\alpha_2\alpha_5\alpha_{37}+\alpha_5\alpha_7\alpha_{34} -\alpha_2\alpha_4\alpha_6 \right) +T(\alpha_1\alpha_3\alpha_6-\alpha_2\alpha_4\alpha_6) \,, \label{two-f2} \end{align} and the sign of $\mathcal{F}$ becomes indefinite. In such a case, it is not guaranteed that the method to reveal the relevant scalings works properly~\cite{Jantzen:2012mw}. One idea to solve this problem is to perform a suitable change of variables and decompose the integration domain such that $\mathcal{F}$ becomes positive definite~\cite{Jantzen:2012mw}. However, in our case this approach does not resolve the indefinite sign of $\mathcal{F}$, since there is no simple change of variables that makes $\mathcal{F}$ positive definite. The solution to this problem is to keep $S, T, U$ as independent variables. Eq.~\eqref{two-6} is then manifestly positive definite, and we can apply the method of regions, expand the integrand, and express the result as Mellin-Barnes integrals in the positive Mandelstam variables. The procedure to obtain expression~\eqref{two-6} is the following: we first compute $\mathcal{F}$ respecting the original definition of the Mandelstam variables~\eqref{stu}. At this point, there are some redundant terms in $\mathcal{F}$, such as $(S+T+U)\alpha_2\alpha_3\alpha_6$.
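The relation between Eqs.~\eqref{two-6} and~\eqref{two-f2} can be verified mechanically. The following \texttt{sympy} sketch (our own illustration, with ad-hoc variable names, not the paper's code) checks that imposing $U=-S-T$ in Eq.~\eqref{two-6} reproduces the indefinite-sign form Eq.~\eqref{two-f2}:

```python
# Sketch (ours): check that U -> -S-T turns Eq. (two-6) into Eq. (two-f2).
import sympy as sp

a1, a2, a3, a4, a5, a6, a7 = sp.symbols('a1:8', positive=True)
mt2, S, T, U = sp.symbols('mt2 S T U')

a34567 = a3 + a4 + a5 + a6 + a7
Usym = (a1 + a2)*a34567 + (a3 + a4)*(a5 + a6 + a7)               # Eq. (two-5)
F = (mt2*a34567*Usym                                              # Eq. (two-6)
     + S*(a1*a7*(a4 + a5) + a2*a5*(a3 + a7) + a5*a7*(a3 + a4))
     + T*a1*a3*a6 + U*a2*a4*a6)
F2 = (mt2*a34567*Usym                                             # Eq. (two-f2)
      + S*(a1*a7*(a4 + a5) + a2*a5*(a3 + a7) + a5*a7*(a3 + a4) - a2*a4*a6)
      + T*(a1*a3*a6 - a2*a4*a6))

assert sp.expand(F.subs(U, -S - T) - F2) == 0
```

Sampling the two bracketed combinations in Eq.~\eqref{two-f2} at specific positive values of the $\alpha_i$ then makes the sign indefiniteness explicit.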
We minimize the number of terms, under the condition that $\mathcal{F}$ remains positive definite. The resulting $\mathcal{F}$ is unique. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig5.pdf} \caption{ The two-loop massive non-planar diagrams defined in Eq.~\eqref{two-1} (left) and Eq.~\eqref{two-npl2-1} (right).} \label{fig:two} \end{figure} There are two commands in the package \texttt{asy2.1.m} to reveal the relevant regions. The first is \texttt{AlphaRepExpand}, which accepts a set of propagators and replacement rules as input. The other is \texttt{WilsonExpand}, which accepts the Symanzik polynomials as input. Here we must use \texttt{WilsonExpand} since the conventional routine used in \texttt{AlphaRepExpand} either eliminates $U$ completely or keeps $U$ completely, whereas we want to eliminate $U$ partially, as explained above. There is an option \texttt{Preresolve} in \texttt{AlphaRepExpand} which makes it attempt some changes of variables to make $\mathcal{F}$ positive definite, but in our cases this option did not solve the problem. With this setup and the hierarchy~\eqref{sca2}, we obtain the following fourteen relevant scalings\footnote{ In fact, it turns out that the correct scalings~\eqref{two-scalings} can be obtained by using Eq.~\eqref{two-f2} or by using \texttt{AlphaRepExpand}, provided suitable values for $S$ and $T$ are chosen (e.g. $S=1, T=1$), such that $U\not =0$. This may be because there is no cancellation between two hard-scaling terms resulting in a soft-scaling term in Eq.~\eqref{two-f2}. However, this observation is made in hindsight, since in principle it is not guaranteed that the regions are found correctly. One could also use \texttt{ASPIRE}~\cite{Ananthanarayan:2018tog} instead of \texttt{asy2.1.m} and obtain the correct scalings, if one similarly chooses suitable values for $S$ and $T$. 
} \begin{align} & (0,0,0,0,0,0,0), (0,\frac{1}{2},\frac{1}{2},0,0,\frac{1}{2},1), (\frac{1}{2},0,0,\frac{1}{2},1,\frac{1}{2},0), (0,0,0,1,1,1,0), \nonumber\\ & (0,0,1,0,0,1,1), (0,1,0,0,0,1,1), (0,1,1,0,0,0,1), (1,0,0,0,1,1,0), (1,0,0,1,1,0,0), \nonumber\\ & (1,1,0,0,0,0,1), (1,1,0,0,1,0,0), (1,1,0,0,1,1,1), (1,1,1,1,0,0,1), (1,1,1,1,1,0,0) \,. \label{two-scalings} \end{align} In the case of the planar integrals~\cite{Davies:2018ood}, the scalings consist of only 0 and 1. Here, we additionally have scalings containing half-integer entries, such as $(0, \frac{1}{2}, \frac{1}{2}, 0, 0, \frac{1}{2}, 1)$, which are particular to these non-planar integrals. The contribution of the all-hard region can be calculated by using the massless integral family and IBP-reduction [cf. the text below Eq.~\eqref{one-r1-8}]. Therefore we do not need a template integral for Region~1, and we show the calculation of Region~1 separately. The template integrals for Regions~2 to 14 are calculated in the next subsection, and can be found in the ancillary file to this paper~\cite{anci}. \subsubsection*{Region 1 $(0,0,0,0,0,0,0)$ (all-hard region)} As shown in Subsection~\ref{ss:one-top}, the contribution from the all-hard region can be expressed in terms of massless integrals of the same topology. The massless integral that is relevant here has been calculated in Ref.~\cite{Tausk:1999vh}. \subsection{Template Integrals for Regions 2 to 14} \label{ss:two-temp} The template integral of each region is expressed as \begin{align} \mathcal{T}^{(j)}_{\delta_1,\delta_2,\delta_3,\delta_4,\delta_5,\delta_6,\delta_7,\epsilon}&= \int\! \mathfrak{D}^7\alpha^\delta~ \left( \mathcal{U}^{(j)} \right)^{-d/2} e^{ -\mathcal{F}^{(j)}/\mathcal{U}^{(j)} } \label{two-temp} \end{align} where $2\leq j\leq 14$. For simplicity, we omit the subscripts of $\mathcal{T}^{(j)}_{\delta_1,\delta_2,\delta_3,\delta_4,\delta_5,\delta_6,\delta_7,\epsilon}$ and write $\mathcal{T}^{(j)}$ when the subscripts are in the ordinary order.
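To make the connection between a scaling vector and the per-region polynomials concrete, the following \texttt{sympy} sketch (our own illustration, with ad-hoc names) rescales $\alpha_i\to r^{2k_i}\alpha_i$ and $m_t^2\to r^{2}m_t^2$ for the Region-2 scaling $(0,\frac{1}{2},\frac{1}{2},0,0,\frac{1}{2},1)$ (the exponents are doubled so that they become integers) and keeps the lowest power of $r$, reproducing the Region-2 polynomials of Eqs.~\eqref{two-r2-1} and~\eqref{two-r2-2}:

```python
# Sketch (ours): leading-order Symanzik polynomials from the Region-2 scaling.
import sympy as sp

alphas = sp.symbols('a1:8', positive=True)
a1, a2, a3, a4, a5, a6, a7 = alphas
mt2, S, T, U, r = sp.symbols('mt2 S T U r', positive=True)

a34567 = a3 + a4 + a5 + a6 + a7
Usym = (a1 + a2)*a34567 + (a3 + a4)*(a5 + a6 + a7)               # Eq. (two-5)
F = (mt2*a34567*Usym                                              # Eq. (two-6)
     + S*(a1*a7*(a4 + a5) + a2*a5*(a3 + a7) + a5*a7*(a3 + a4))
     + T*a1*a3*a6 + U*a2*a4*a6)

twok = [0, 1, 1, 0, 0, 1, 2]     # doubled Region-2 exponents (0,1/2,1/2,0,0,1/2,1)
sub = {ai: r**k*ai for ai, k in zip(alphas, twok)}
sub[mt2] = r**2*mt2              # m_t^2 is the small scale in the hierarchy

def leading(expr):
    """Keep only the terms with the lowest power of r, then set r = 1."""
    terms = sp.expand(expr.subs(sub)).as_ordered_terms()
    dmin = min(sp.degree(t, r) for t in terms)
    return sp.expand(sum(t for t in terms if sp.degree(t, r) == dmin).subs(r, 1))

U2, F2 = leading(Usym), leading(F)
assert sp.expand(U2 - (a1*(a4 + a5) + a4*a5)) == 0                # Eq. (two-r2-1)
assert sp.expand(F2 - (mt2*(a4 + a5)*U2                           # Eq. (two-r2-2)
                       + S*(a2*a3*a5 + a1*(a4 + a5)*a7 + a4*a5*a7)
                       + T*a1*a3*a6 + U*a2*a4*a6)) == 0
```

The same extraction, looped over the fourteen vectors of Eq.~\eqref{two-scalings}, yields the polynomials $\mathcal{U}^{(j)}$ and $\mathcal{F}^{(j)}$ used below.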
\subsubsection*{Region 2 $(0,\frac{1}{2},\frac{1}{2},0,0,\frac{1}{2},1)$, Region 3 $(\frac{1}{2},0,0,\frac{1}{2},1,\frac{1}{2},0)$} The Symanzik polynomials of Region~2 are given by \begin{align} &\mathcal{U}^{(2)}=\alpha_1\alpha_{45}+\alpha_4\alpha_5 \label{two-r2-1} \\ &\mathcal{F}^{(2)}=m_t^2\alpha_{45}~\mathcal{U}^{(2)} +S(\alpha_2\alpha_3\alpha_5+\alpha_1\alpha_{45}\alpha_7+\alpha_4\alpha_5\alpha_7) +T\alpha_1\alpha_3\alpha_6 +U\alpha_2\alpha_4\alpha_6 \,. \label{two-r2-2} \end{align} The integration in $\alpha_7$ can be performed using the relation~\eqref{form1}. Then, we perform the following change of variables \begin{align} \alpha_2\to \beta_1\beta_3/\beta_2,\quad \alpha_3\to \beta_1\beta_2/\beta_3,\quad \alpha_6\to \beta_2\beta_3/\beta_1 \label{two-r2-3} \,, \end{align} and the template integral becomes \begin{align} \mathcal{T}^{(2)} &= 4S^{-1-\delta_7} \int_{0}^{\infty} \! \mathrm{d}\alpha_1\mathrm{d}\alpha_4\mathrm{d}\alpha_5 \mathrm{d}\beta_1\mathrm{d}\beta_2\mathrm{d}\beta_3~ \alpha_1^{\delta_1} \alpha_4^{\delta_4} \alpha_5^{\delta_5} \beta_1^{\delta_{23\bar6}} \beta_2^{\delta_{\bar236}} \beta_3^{\delta_{2\bar36}} \nonumber\\ & \times \frac{ \left( \alpha_1\alpha_4+\alpha_1\alpha_5+\alpha_4\alpha_5 \right)^{-d/2} e^{-m_t^2\alpha_{45}-\left( S b_1^2\alpha_5+Tb_2^2\alpha_1+Ub_3^2\alpha_4\right)/\mathcal{U}^{(2)}} } {\Gamma [ \delta_1+1, \delta_2+1, \delta_3+1, \delta_4+1, \delta_5+1, \delta_6+1 ]} \,. \label{two-r2-4} \end{align} The integration in $\beta_1,\beta_2,\beta_3$ is now straightforward. To integrate over the remaining variables, it is easiest to integrate in $\alpha_1$ first using the relation~\eqref{form2} and then integrate in $\alpha_4,\alpha_5$. 
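The change of variables~\eqref{two-r2-3} turns $\alpha_2\alpha_3\alpha_6$ into the monomial $\beta_1\beta_2\beta_3$ and thereby factorizes the integrand; its constant Jacobian is the overall prefactor $4$ in Eq.~\eqref{two-r2-4}. A quick \texttt{sympy} check (our own, minimal):

```python
# Sketch (ours): the change of variables Eq. (two-r2-3) has constant Jacobian 4.
import sympy as sp

b1, b2, b3 = sp.symbols('b1:4', positive=True)
a2 = b1*b3/b2                        # Eq. (two-r2-3)
a3 = b1*b2/b3
a6 = b2*b3/b1

J = sp.Matrix([a2, a3, a6]).jacobian([b1, b2, b3])
assert sp.simplify(J.det()) == 4     # the overall factor 4 in Eq. (two-r2-4)
assert sp.simplify(a2*a3*a6 - b1*b2*b3) == 0

# exponent bookkeeping behind the beta exponents in Eq. (two-r2-4):
# a2^d2 a3^d3 a6^d6 = b1^(d2+d3-d6) b2^(-d2+d3+d6) b3^(d2-d3+d6)
for d2, d3, d6 in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, 3, 5)]:
    lhs = a2**d2 * a3**d3 * a6**d6
    rhs = b1**(d2 + d3 - d6) * b2**(-d2 + d3 + d6) * b3**(d2 - d3 + d6)
    assert sp.simplify(lhs/rhs) == 1
```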
Finally we obtain the template integral as \begin{align} &\mathcal{T}^{(2)} = \frac{1}{2} (m_t^2)^{-(1+4\epsilon+\delta_{112344556})/2} S^{-(3+\delta_{23\bar 677})/2} T^{-(1+\delta_{\bar 2 36})/2} U^{-(1+\delta_{2\bar 36})/2} \nonumber\\ &\qquad \times \frac{\Gamma [ \delta_{\bar0\bar1\bar2}, \frac{1+\delta_{23\bar 6}}{2}, \frac{1+\delta_{2\bar 36}}{2}, \frac{1+\delta_{\bar 236}}{2}, \frac{1+\delta_{112\bar 3\bar 6}}{2}, \frac{1+\delta_{00112344\bar 6}}{2}, \frac{1+\delta_{00112\bar 3556}}{2}, \frac{1+\delta_{0000112344556}}{2} ]} {\Gamma [ \delta_1+1, \delta_2+1, \delta_3+1, \delta_4+1, \delta_5+1, \delta_6+1, \delta_{0011245}+1, \frac{1-\delta_{00236}}{2} ]} \,. \label{two-r2-6} \end{align} The template integral of Region~3 is obtained from $\mathcal{T}^{(2)}$ by the replacements $\alpha_1\leftrightarrow\alpha_2$, $\alpha_3\leftrightarrow\alpha_4$, $\alpha_5\leftrightarrow\alpha_7$, $T\leftrightarrow U$: \begin{align} \mathcal{T}^{(3)}_{\delta_1,\delta_2,\delta_3,\delta_4,\delta_5,\delta_6,\delta_7,\epsilon}= \mathcal{T}^{(2)}_{\delta_2,\delta_1,\delta_4,\delta_3,\delta_7,\delta_6,\delta_5,\epsilon} \Big|_{T\leftrightarrow U} \label{two-r2-7} \,. \end{align} \subsubsection*{Region 4 $(0,0,0,1,1,1,0)$, Region 5 $(0,0,1,0,0,1,1)$} The Symanzik polynomials of Region~4 are given by \begin{align} &\mathcal{U}^{(4)}=\alpha_{12}\alpha_{37}+\alpha_{3}\alpha_{7} \label{two-r4-1} \\ &\mathcal{F}^{(4)}=m_t^2\alpha_{37}~\mathcal{U}^{(4)} +S(\alpha_3\alpha_5\alpha_7+\alpha_1\alpha_{45}\alpha_7+\alpha_2\alpha_5\alpha_{37}) +T\alpha_1\alpha_3\alpha_6 \,. \label{two-r4-2} \end{align} The integrations in $\alpha_4,\alpha_5,\alpha_6$ can be done using the relation~\eqref{form1}. Then the template integral is \begin{align} \mathcal{T}^{(4)} &= \frac{1} {S^{2+\delta_{45}}T^{1+\delta_6}} \int_{0}^{\infty} \!
\mathrm{d}\alpha_1\mathrm{d}\alpha_2\mathrm{d}\alpha_3\mathrm{d}\alpha_7~ \alpha_1^{-2+\delta_{1\bar4\bar6}} \alpha_2^{\delta_2} \alpha_3^{-1+\delta_{3\bar6}} \alpha_7^{-1+\delta_{\bar47}} \nonumber\\ & \times \frac{ \left(\alpha_{12}\alpha_{37}+\alpha_{3}\alpha_{7} \right)^{3-d/2+\delta_{456}} \left(\alpha_{13}\alpha_{7}+\alpha_{2}\alpha_{37} \right)^{-1-\delta_{5}} e^{-m_t^2\alpha_{37}} } {\Gamma [ \delta_1+1, \delta_2+1, \delta_3+1, \delta_7+1 ]} \,. \label{two-r4-3} \end{align} We introduce a Mellin-Barnes integral to separate $\alpha_{12}\alpha_{37}+\alpha_{3}\alpha_{7}$ into two factors, $\alpha_{13}\alpha_{7}+\alpha_{2}\alpha_{37}$ and $\alpha_1\alpha_3$, then the integration in $\alpha_1$ and $\alpha_2$ can be done using the relation~\eqref{form2}. The remaining integration is also straightforward and we obtain \begin{align} \mathcal{T}^{(4)}=& \int\mathrm{d}z_1 \frac{\Gamma[\delta_{\bar{0}\bar{1}\bar{2}},\delta_{001237},z_{\bar{1}}+\delta_{0267}+1,z_{\bar{1}},z_{1}+\delta_{0123\bar{6}},z_{1}+\delta_{1\bar{4}\bar{6}}-1,z_{1}+\delta_{\bar{0}\bar{4}\bar{5}\bar{6}}-1]} {(m_t^2)^{\delta_{001237}}S^{2+\delta_{45}}T^{1+\delta_{6}} \Gamma[\delta_{1}+1,\delta_{3}+1,\delta_{\bar{0}\bar{4}\bar{5}\bar{6}}-1,\delta_{7}+1,\delta_{0012237}+1,z_{1}+\delta_{\bar{0}\bar{4}\bar{6}}]}\,. \label{template-npl4} \end{align} The template integral of Region~5 can be obtained in a similar manner and the result is \begin{align} \mathcal{T}^{(5)}=& \int\mathrm{d}z_1 \frac{\Gamma[\delta_{\bar{0}\bar{1}\bar{2}},\delta_{001245},z_{\bar{1}}+\delta_{0156}+1,z_{\bar{1}},z_{1}+\delta_{2\bar{3}\bar{6}}-1,z_{1}+\delta_{0124\bar{6}},z_{1}+\delta_{\bar{0}\bar{3}\bar{6}\bar{7}}-1]} {(m_t^2)^{\delta_{001245}}S^{2+\delta_{37}}U^{1+\delta_{6}} \Gamma[\delta_{2}+1,\delta_{4}+1,\delta_{5}+1,\delta_{0011245}+1,\delta_{\bar{0}\bar{3}\bar{6}\bar{7}}-1,z_{1}+\delta_{\bar{0}\bar{3}\bar{6}}]}\,. 
\label{template-npl5} \end{align} \subsubsection*{Region 6 $(0,1,0,0,0,1,1)$, Region 8 $(1,0,0,0,1,1,0)$} The Symanzik polynomials of Region~6 are given by \begin{align} &\mathcal{U}^{(6)}=\alpha_{1}\alpha_{345}+\alpha_{34}\alpha_5 \label{two-r6-1} \\ &\mathcal{F}^{(6)}=m_t^2\alpha_{345}~\mathcal{U}^{(6)} +S(\alpha_2\alpha_3\alpha_5+\alpha_{34}\alpha_{5}\alpha_7+\alpha_1\alpha_{45}\alpha_{7}) +T\alpha_1\alpha_3\alpha_6 \,. \label{two-r6-2} \end{align} The integrations in $\alpha_2,\alpha_6,\alpha_7$ are straightforward using the relation~\eqref{form1} and the template integral is given by \begin{align} \mathcal{T}^{(6)} &= \frac{1} {S^{2+\delta_{27}}T^{1+\delta_6}} \int_{0}^{\infty} \! \mathrm{d}\alpha_1\mathrm{d}\alpha_3\mathrm{d}\alpha_4\mathrm{d}\alpha_5~ \alpha_1^{-1+\delta_{1\bar6}} \alpha_3^{-2+\delta_{\bar23\bar6}} \alpha_4^{\delta_4} \alpha_5^{-1+\delta_{\bar25}} \nonumber\\ & \times \frac{ \left(\alpha_{1}\alpha_{345}+\alpha_{34}\alpha_5 \right)^{3-d/2+\delta_{267}} \left(\alpha_{34}\alpha_{5}+\alpha_{1}\alpha_{45} \right)^{-1-\delta_{7}} e^{-m_t^2\alpha_{345}} } {\Gamma [ \delta_1+1, \delta_3+1, \delta_4+1, \delta_5+1 ]} \,. \label{two-r6-3} \end{align} This looks similar to Eq.~\eqref{two-r4-3} but has one extra massive integral, and thus it is necessary to introduce two Mellin-Barnes integrals, giving \begin{align} \mathcal{T}^{(6)}=& \int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{015},z_{\bar{1}},z_{1}+\delta_{1\bar{6}},z_{1}+\delta_{\bar{0}\bar{2}\bar{6}\bar{7}}-1,z_{\bar{2}}+\delta_{0124}+1]} {(m_t^2)^{\delta_{001345}}S^{2+\delta_{27}}T^{1+\delta_{6}} \Gamma[\delta_{1}+1,\delta_{3}+1,\delta_{4}+1,\delta_{5}+1]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{\bar{1}\bar{2}}+\delta_{0012456}+1,z_{\bar{2}},z_{2}+\delta_{\bar{0}\bar{1}\bar{2}},z_{12}+\delta_{\bar{2}3\bar{6}}-1]} {\Gamma[\delta_{\bar{0}\bar{2}\bar{6}\bar{7}}-1,z_{1}+\delta_{\bar{0}\bar{2}\bar{6}},z_{\bar{2}}+\delta_{0011245}+1]}\,. 
\label{template-npl6} \end{align} The template integral of Region~8 can be obtained in a similar manner and the result is \begin{align} \mathcal{T}^{(8)}=& \int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{\bar{0}\bar{1}\bar{2}},z_{\bar{1}},z_{1}+\delta_{\bar{1}4\bar{6}}-1,z_{1}+\delta_{\bar{0}\bar{1}\bar{5}\bar{6}}-1,z_{\bar{1}\bar{2}}+\delta_{067}]} {(m_t^2)^{\delta_{002347}}S^{2+\delta_{15}}U^{1+\delta_{6}} \Gamma[\delta_{2}+1,\delta_{3}+1,\delta_{4}+1,\delta_{\bar{0}\bar{1}\bar{5}\bar{6}}-1]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{\bar{2}},z_{2}+\delta_{3}+1,z_{12}+\delta_{2\bar{6}},z_{12}+\delta_{0234\bar{6}}]} {\Gamma[\delta_{7}+1,z_{1}+\delta_{\bar{0}\bar{1}\bar{6}},z_{12}+\delta_{\bar{1}34\bar{6}}]}\,. \label{template-npl8} \end{align} \subsubsection*{Region 7 $(0,1,1,0,0,0,1)$, Region 9 $(1,0,0,1,1,0,0)$} The Symanzik polynomials of Region~7 are given by \begin{align} &\mathcal{U}^{(7)}=\alpha_{1}\alpha_{456}+\alpha_{4}\alpha_{56} \label{two-r7-1} \\ &\mathcal{F}^{(7)}=m_t^2\alpha_{456}~\mathcal{U}^{(7)} +S(\alpha_{4}\alpha_{5}\alpha_7+\alpha_1\alpha_{45}\alpha_{7}) +T\alpha_1\alpha_3\alpha_6 +U\alpha_2\alpha_4\alpha_6 \,. \label{two-r7-2} \end{align} The template integral is obtained in a similar way as that of Region~6 and the resulting two-dimensional integral is given by \begin{align} \mathcal{T}^{(7)}=& \int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{014},z_{\bar{1}}+\delta_{0012345}+1,z_{1}+\delta_{\bar{2}\bar{3}6}-1,z_{1}+\delta_{\bar{0}\bar{2}\bar{3}\bar{7}}-1,z_{\bar{1}\bar{2}}+\delta_{\bar{1}3}]} {(m_t^2)^{\delta_{001456}}S^{1+\delta_{7}}T^{1+\delta_{3}}U^{1+\delta_{2}} \Gamma[\delta_{1}+1,\delta_{4}+1,\delta_{5}+1,\delta_{6}+1]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{\bar{1}\bar{2}}+\delta_{0235}+1,z_{\bar{2}},z_{2}+\delta_{1\bar{3}},z_{12}+\delta_{\bar{0}\bar{2}\bar{3}}]} {\Gamma[\delta_{\bar{0}\bar{2}\bar{3}\bar{7}}-1,z_{1}+\delta_{\bar{0}\bar{2}\bar{3}},z_{\bar{1}\bar{2}}+\delta_{0012345}+1]}\,. 
\label{template-npl7} \end{align} The template integral of Region~9 can also be obtained in a similar manner and the result is given by \begin{align} \mathcal{T}^{(9)}=& \int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{023},z_{\bar{1}}+\delta_{0012347}+1,z_{1}+\delta_{\bar{0}\bar{1}\bar{4}\bar{5}}-1,z_{1}+\delta_{\bar{1}\bar{4}6}-1,z_{\bar{1}\bar{2}}+\delta_{\bar{2}4}]} {(m_t^2)^{\delta_{002367}}S^{1+\delta_{5}}T^{1+\delta_{1}}U^{1+\delta_{4}} \Gamma[\delta_{2}+1,\delta_{3}+1,\delta_{\bar{0}\bar{1}\bar{4}\bar{5}}-1,\delta_{6}+1]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{\bar{1}\bar{2}}+\delta_{0147}+1,z_{\bar{2}},z_{2}+\delta_{2\bar{4}},z_{12}+\delta_{\bar{0}\bar{1}\bar{4}}]} {\Gamma[\delta_{7}+1,z_{1}+\delta_{\bar{0}\bar{1}\bar{4}},z_{\bar{1}\bar{2}}+\delta_{0012347}+1]}\,. \label{template-npl9} \end{align} \subsubsection*{Region 10 $(1,1,0,0,0,0,1)$, Region 11 $(1,1,0,0,1,0,0)$} The Symanzik polynomials of Region~10 are given by \begin{align} &\mathcal{U}^{(10)}=\alpha_{34}\alpha_{56} \label{two-r10-1} \\ &\mathcal{F}^{(10)}=m_t^2\alpha_{3456}~\mathcal{U}^{(10)} +S(\alpha_{2}\alpha_{3}\alpha_5+\alpha_{34}\alpha_{5}\alpha_{7}) +T\alpha_1\alpha_3\alpha_6 +U\alpha_2\alpha_4\alpha_6 \,. \label{two-r10-2} \end{align} The template integral is obtained in a similar way as that of Region~5 and the resulting one-dimensional integral is given by \begin{align} \mathcal{T}^{(10)}=& \int\mathrm{d}z_1 \frac{\Gamma[\delta_{034},\delta_{056},z_{\bar{1}}+\delta_{\bar{1}\bar{2}3}-1,z_{\bar{1}}+\delta_{\bar{2}5\bar{7}}-1,z_{\bar{1}},z_{1}+\delta_{2}+1,z_{1}+\delta_{4}+1,z_{1}+\delta_{\bar{1}6}]} {(m_t^2)^{\delta_{003456}}S^{2+z_{1}+\delta_{27}}T^{1+\delta_{1}}U^{z_{\bar{1}}} \Gamma[\delta_{2}+1,\delta_{3}+1,\delta_{4}+1,\delta_{\bar{1}\bar{2}34},\delta_{5}+1,\delta_{6}+1,\delta_{\bar{1}\bar{2}56\bar{7}}-1]}\,. 
\label{template-npl10} \end{align} The template integral of Region~11 can be obtained in a similar manner and the result is given by \begin{align} \mathcal{T}^{(11)}=& \int\mathrm{d}z_1 \frac{\Gamma[\delta_{034},\delta_{067},z_{\bar{1}}+\delta_{\bar{1}\bar{2}4}-1,z_{\bar{1}}+\delta_{\bar{1}\bar{5}7}-1,z_{\bar{1}},z_{1}+\delta_{1}+1,z_{1}+\delta_{3}+1,z_{1}+\delta_{\bar{2}6}]} {(m_t^2)^{\delta_{003467}}S^{2+z_{1}+\delta_{15}}T^{z_{\bar{1}}}U^{1+\delta_{2}} \Gamma[\delta_{1}+1,\delta_{3}+1,\delta_{4}+1,\delta_{\bar{1}\bar{2}34},\delta_{6}+1,\delta_{7}+1,\delta_{\bar{1}\bar{2}\bar{5}67}-1]}\,. \label{template-npl11} \end{align} \subsubsection*{Region 12 $(1,1,0,0,1,1,1)$} The Symanzik polynomials of Region~12 are given by \begin{align} &\mathcal{U}^{(12)}=\alpha_{34}\alpha_{12567} \label{two-r12-1} \\ &\mathcal{F}^{(12)}=m_t^2\alpha_{34}~\mathcal{U}^{(12)} +S(\alpha_{2}\alpha_{3}\alpha_5+\alpha_{1}\alpha_{4}\alpha_7+\alpha_{34}\alpha_{5}\alpha_{7}) +T\alpha_1\alpha_3\alpha_6 +U\alpha_2\alpha_4\alpha_6 \,. 
\label{two-r12-2} \end{align} We introduce Mellin-Barnes integrals four times, and the template integral is given by \begin{align} \mathcal{T}^{(12)}=& \int\mathrm{d}z_1\mathrm{d}z_2\mathrm{d}z_3\mathrm{d}z_4 \frac{\Gamma[\delta_{034},z_{\bar{1}},z_{\bar{2}},z_{12}+\delta_{6}+1,z_{\bar{1}\bar{2}\bar{3}}+\delta_{\bar{0}\bar{1}\bar{2}\bar{5}\bar{6}}-2,z_{\bar{3}},z_{13}+\delta_{3}+1,z_{23}+\delta_{2}+1]} {(m_t^2)^{\delta_{034}}S^{3+z_{12}+\delta_{012567}}T^{z_{\bar{1}}}U^{z_{\bar{2}}} \Gamma[\delta_{1}+1,\delta_{2}+1,\delta_{3}+1,\delta_{4}+1]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{\bar{1}\bar{3}\bar{4}}+\delta_{\bar{0}\bar{1}\bar{2}4\bar{5}\bar{6}\bar{7}}-2,z_{\bar{2}\bar{3}\bar{4}}+\delta_{\bar{0}\bar{2}\bar{5}\bar{6}\bar{7}}-2,z_{\bar{4}},z_{34}+\delta_{5}+1,z_{1234}+\delta_{012567}+3]} {\Gamma[\delta_{5}+1,\delta_{6}+1,\delta_{\bar{0}\bar{0}\bar{1}\bar{2}\bar{5}\bar{6}\bar{7}}-1,\delta_{7}+1,z_{\bar{4}}+\delta_{\bar{0}\bar{1}\bar{2}34\bar{5}\bar{6}\bar{7}}-1]}\,. \label{template-npl12} \end{align} \subsubsection*{Region 13 $(1,1,1,1,0,0,1)$, Region 14 $(1,1,1,1,1,0,0)$} The Symanzik polynomials of Region~13 are given by \begin{align} &\mathcal{U}^{(13)}=\alpha_{1234}\alpha_{56} \label{two-r13-1} \\ &\mathcal{F}^{(13)}=m_t^2\alpha_{56}\mathcal{U}^{(13)} +S(\alpha_{134}\alpha_{5}\alpha_7+\alpha_{2}\alpha_{37}\alpha_5) +T\alpha_1\alpha_3\alpha_6 +U\alpha_2\alpha_4\alpha_6 \label{two-r13-2} \,. 
\end{align} It is necessary to introduce the Mellin-Barnes integral twice in order to separate the terms proportional to $S,T$, or $U$, respectively, and we obtain \begin{align} \mathcal{T}^{(13)}=& \int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{056},z_{\bar{1}}+\delta_{\bar{0}\bar{1}\bar{2}\bar{3}\bar{4}6}-1,z_{\bar{1}},z_{1}+\delta_{5\bar{7}},z_{\bar{2}}+\delta_{\bar{0}\bar{1}\bar{3}\bar{4}}-1,z_{\bar{1}\bar{2}}+\delta_{\bar{0}\bar{1}\bar{2}\bar{3}}-1,z_{\bar{2}}]} {(m_t^2)^{\delta_{056}}S^{1+z_{\bar{1}}+\delta_{7}}T^{z_{\bar{2}}}U^{2+z_{12}+\delta_{01234}} \Gamma[\delta_{1}+1,\delta_{2}+1,\delta_{3}+1,\delta_{\bar{0}\bar{0}\bar{1}\bar{2}\bar{3}\bar{4}}]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{2}+\delta_{1}+1,z_{12}+\delta_{3}+1,z_{12}+\delta_{01234}+2]} {\Gamma[\delta_{4}+1,\delta_{5}+1,\delta_{6}+1,\delta_{\bar{0}\bar{1}\bar{2}\bar{3}\bar{4}56\bar{7}}-1]}\,. \label{template-npl13} \end{align} The template integral of Region~14 can be obtained in a similar manner and the result is \begin{align} \mathcal{T}^{(14)}=& \int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{067},z_{\bar{1}}+\delta_{\bar{0}\bar{1}\bar{2}\bar{3}\bar{4}6}-1,z_{\bar{1}},z_{1}+\delta_{\bar{5}7},z_{\bar{2}}+\delta_{\bar{0}\bar{2}\bar{3}\bar{4}}-1,z_{\bar{1}\bar{2}}+\delta_{\bar{0}\bar{1}\bar{2}\bar{4}}-1,z_{\bar{2}}]} {(m_t^2)^{\delta_{067}}S^{1+z_{\bar{1}}+\delta_{5}}T^{2+z_{12}+\delta_{01234}}U^{z_{\bar{2}}} \Gamma[\delta_{1}+1,\delta_{2}+1,\delta_{3}+1,\delta_{\bar{0}\bar{0}\bar{1}\bar{2}\bar{3}\bar{4}}]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{2}+\delta_{2}+1,z_{12}+\delta_{4}+1,z_{12}+\delta_{01234}+2]} {\Gamma[\delta_{4}+1,\delta_{6}+1,\delta_{7}+1,\delta_{\bar{0}\bar{1}\bar{2}\bar{3}\bar{4}\bar{5}67}-1]}\,. 
\label{template-npl14} \end{align} In the limit where all the regularization parameters go to zero, Eq.~\eqref{template-npl13} becomes \begin{align} \mathcal{T}^{(13)}= &-2\epsilon \int \mathrm{d}z_1\mathrm{d}z_2 \frac{S^{-1+z_1}T^{z_2}}{U^{2+z_1+z_2}} \Gamma [ z_{\bar1}-1,z_{\bar1},z_1,z_{\bar2}-1,z_{\bar2},z_2+1,z_{\bar1\bar2}-1, z_{12}+1,z_{12}+2 ] \nonumber\\ &+\mbox{(one-dimensional Mellin-Barnes integrals)} +\mathcal{O}(\epsilon^2) \label{two-r13-4} \,, \end{align} which means that the two-dimensional integral does not contribute at $\epsilon^0$-order. Thus, the representation~\eqref{template-npl13} is most useful when the required order is $\epsilon^0$. There is another representation of $\mathcal{T}^{(13)}$ which is more suitable if we require calculation beyond order $\epsilon^0$: \begin{align} \mathcal{T}^{(13)}=& \int\mathrm{d}z_1 \frac{\Gamma[\delta_{\bar{0}},\delta_{\bar{0}\bar{1}\bar{2}},\delta_{012}+1,\delta_{056},\delta_{5\bar{7}},z_{\bar{1}}+\delta_{\bar{0}\bar{1}}-1,z_{\bar{1}},z_{1}+1,z_{1}+\delta_{\bar{0}\bar{1}\bar{2}6}]} {(m_t^2)^{\delta_{056}}S^{2+z_{1}+\delta_{7}}T^{z_{\bar{1}}}U^{1+\delta_{012}} \Gamma[\delta_{\bar{0}\bar{0}\bar{1}\bar{2}},\delta_{2}+1,\delta_{5}+1,\delta_{6}+1,z_{\bar{1}}+\delta_{\bar{0}},z_{1}+\delta_{\bar{0}\bar{1}\bar{2}56\bar{7}}]}\nonumber\\ & -\int\mathrm{d}z_1\mathrm{d}z_2 \frac{\Gamma[\delta_{\bar{0}},\delta_{\bar{0}\bar{1}\bar{2}},\delta_{056},z_{\bar{1}}+\delta_{\bar{0}\bar{1}\bar{2}5\bar{7}}-1,z_{\bar{1}},z_{1}+\delta_{1}+1,z_{1}+\delta_{012}+1]} {(m_t^2)^{\delta_{056}}S^{2+z_{1\bar{2}}+\delta_{0127}}T^{1+z_{\bar{1}2}} \Gamma[\delta_{1}+1,\delta_{\bar{0}\bar{0}\bar{1}\bar{2}},\delta_{2}+1,\delta_{5}+1]}\nonumber\\ &\qquad\times \frac{\Gamma[z_{1\bar{2}}+\delta_{6},z_{\bar{2}},z_{2}+1,z_{\bar{1}2}+\delta_{\bar{0}\bar{1}}]} {\Gamma[\delta_{6}+1,z_{\bar{2}}+\delta_{\bar{0}\bar{1}\bar{2}56\bar{7}}-1,z_{2}+\delta_{\bar{0}}+1]}\,. 
\label{template-npl13-b} \end{align} This representation consists of two integrals with at most two kinematic parameters, and their calculation is simpler. It has some disadvantages, however: it holds only when $\delta_3$ and $\delta_4$ are non-negative integers, and Eq.~\eqref{template-npl13-b} is shown for the special case $\delta_3=\delta_4=0$, since this is the typical situation (we may set these values because $\mathcal{T}^{(13)}$ is regular in $\delta_3$ and $\delta_4$). Another disadvantage is that each integral produces singularities in $\epsilon$ which cancel in their sum; as a by-product, however, higher-order derivatives of $\Gamma$-functions contribute to the $\epsilon^0$ order. \subsection{Analytic Continuation} \label{ss:anacon} As mentioned in the text below Eq.~\eqref{mb1}, the Mellin-Barnes integrals in the template integrals are assumed to be regularized by choosing suitable values of $\delta_j$. However, the quantity we need is the one where $\delta_j\to0$ for all $j$. Therefore we need to analytically continue the Mellin-Barnes integrals in the $\delta_j$. We describe the procedure of the analytic continuation using Region~7, one of the most involved cases, as a concrete example.\footnote{ In terms of the dimension of the Mellin-Barnes integrals, the most involved is Region~12. However, the analytic continuation turns out to be rather simple in this region. } We take the limits in the ascending order of $\delta_j$, \begin{align} \lim_{\epsilon,\delta_7,\delta_6,\delta_5,\delta_4,\delta_3,\delta_2,\delta_1\to0} \mathcal{T}^{(7)} \,. \label{ana1} \end{align} As mentioned below Eq.~\eqref{mb1}, in general the integration contours of $z_1,...,z_4$ are assumed to be straight lines parallel to the imaginary axis [cf. Fig.~\ref{fig:contour1} (b)]. For Region~7, we have only $z_1, z_2$ as integration variables. We choose $\mathrm{Re}(z_1)=-1/5$, $\mathrm{Re}(z_2)=-1/3$.
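The contour-crossing mechanism behind this continuation can be previewed on the textbook one-dimensional example $\Gamma(\delta)(1+X)^{-\delta}=\frac{1}{2\pi i}\int\mathrm{d}z\,\Gamma(-z)\Gamma(\delta+z)X^{z}$, where the contour must separate the poles of $\Gamma(\delta+z)$ from those of $\Gamma(-z)$. The following \texttt{mpmath} sketch (our own illustration, not part of the paper's setup) fixes the contour at $\mathrm{Re}(z)=-1/2$: for $\delta>1/2$ the representation holds as it stands, while for $\delta<1/2$ the pole at $z=-\delta$ has crossed the contour and its residue $\Gamma(\delta)X^{-\delta}$ must be added back, which is the kind of bookkeeping the continuation automates:

```python
# Illustration (ours): pole crossing in a one-dimensional Mellin-Barnes integral.
from mpmath import mp, mpf, gamma, quad, pi, inf

mp.dps = 25
X = mpf('0.3')
c = mpf('-0.5')                       # fixed contour Re(z) = c

def mb(delta):
    # (1/(2*pi*i)) * integral of Gamma(-z) Gamma(delta+z) X^z along Re(z) = c
    f = lambda t: gamma(-(c + 1j*t))*gamma(c + 1j*t + delta)*X**(c + 1j*t)
    return (quad(f, [-inf, inf])/(2*pi)).real

exact = lambda delta: gamma(delta)*(1 + X)**(-delta)

# delta = 0.7: the pole of Gamma(delta+z) at z = -0.7 lies left of the contour
assert abs(mb(mpf('0.7')) - exact(mpf('0.7'))) < mpf('1e-10')

# delta = 0.2: the pole at z = -0.2 has crossed to the right of the contour,
# so its residue Gamma(delta)*X^(-delta) must be added back by hand
d = mpf('0.2')
assert abs(mb(d) + gamma(d)*X**(-d) - exact(d)) < mpf('1e-10')
```

In the Region-7 case the same accounting is applied pole by pole in each of the variables $z_1$ and $z_2$.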
Then, we may choose \begin{align} \delta_1=0,~ \delta_2=0,~ \delta_3=-\frac{13}{30},~ \delta_4=\frac{1}{300},~ \delta_5=\frac{1}{500},~ \delta_6=\frac{17881}{15540},~ \delta_7=-\frac{64003}{62160},~ \epsilon=-\frac{2507}{10360} \label{ana2} \end{align} to regularize the template integral $\mathcal{T}^{(7)}$.\footnote{ These values can be changed, provided they do not cross the integration contours. } We try to set as many parameters as possible to zero in ascending $j$ order in Eq.~\eqref{ana2}. The limit of $\delta_{1,2}\to 0$ in Eq.~\eqref{ana1} is now trivial. The analytic continuation of $\delta_3$ from $-13/30$ to 0 turns the rightmost left poles of $\Gamma (z_1+\delta_{\bar 36}-1)$, $\Gamma (z_2-\delta_{3})$, and $\Gamma (z_{12}-\delta_{03})$ into right poles, which must be compensated by adding their residues. This analytic continuation procedure is automated in the \texttt{Mathematica} package \texttt{MB.m}~\cite{Czakon:2005rk}. After the analytic continuation in $\delta_3$, the integral depends on $\delta_4,...,\delta_7$ and $\epsilon$, and we repeat the same procedure for $\delta_4$, then $\delta_5$, and so on. In this way, we obtain a combination of integrals for which the arguments of the $\Gamma$-functions in the integrand contain only $z_1$ and $z_2$, such as \begin{align} \int \mathrm{d}z_1\mathrm{d}z_2 ~ \frac{ \Gamma[ 1-z_1,-1+z_1,-1+z_1,-z_{12},-z_2,z_2,z_{12}] } {\Gamma ( z_1)} \label{ana4} \,. \end{align} The methods to solve these integrals are explained in the next subsection. \subsection{Solving the Mellin-Barnes Integrals} \label{ss:mb} The usual strategy for solving Mellin-Barnes integrals is to apply the first and the second Barnes lemmas and their variants. The \texttt{Mathematica} package \texttt{barnesroutines.m}~\cite{hepforge} performs this procedure in an automatic way, and solves some of the Mellin-Barnes integrals we encounter.
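For reference, the first Barnes lemma that these routines apply, $\frac{1}{2\pi i}\int\mathrm{d}z\,\Gamma(a+z)\Gamma(b+z)\Gamma(c-z)\Gamma(d-z)=\frac{\Gamma(a+c)\Gamma(a+d)\Gamma(b+c)\Gamma(b+d)}{\Gamma(a+b+c+d)}$, is easy to cross-check numerically; a minimal \texttt{mpmath} sketch (ours, with arbitrary positive parameters):

```python
# Numerical check (ours) of the first Barnes lemma for generic parameters.
from mpmath import mp, mpf, gamma, quad, pi, inf

mp.dps = 25
a, b, c, d = mpf('0.3'), mpf('0.4'), mpf('0.5'), mpf('0.6')

# the contour Re(z) = 0 separates the poles of Gamma(a+z), Gamma(b+z) (left)
# from those of Gamma(c-z), Gamma(d-z) (right)
f = lambda t: gamma(a + 1j*t)*gamma(b + 1j*t)*gamma(c - 1j*t)*gamma(d - 1j*t)
lhs = (quad(f, [-inf, inf])/(2*pi)).real
rhs = gamma(a + c)*gamma(a + d)*gamma(b + c)*gamma(b + d)/gamma(a + b + c + d)
assert abs(lhs - rhs) < mpf('1e-10')
```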
Unfortunately, not all of them are solved by this package, and we describe here how to treat such cases. The essential points are mentioned in Ref.~\cite{Davies:2018ood}, and we adapt and extend them to our integrals. \subsubsection*{Three- and Four-Dimensional Mellin-Barnes Integrals} The template integral of Region~12, Eq.~\eqref{template-npl12}, is expressed as a four-dimensional Mellin-Barnes integral. Thus, the contribution from Region~12 contains a four-dimensional integral of the form \begin{align} \int \mathrm{d}z_1\mathrm{d}z_2\mathrm{d}z_3\mathrm{d}z_4 &\left(\frac{T}{S}\right)^{z_1} \left(\frac{U}{S}\right)^{z_2} \frac{ \Gamma [-z_4, 1+z_{34}, -2-z_{134}, -2-z_{234}, 3+z_{1234} ] } {\Gamma (-1-z_4)} \nonumber\\ &\times \Gamma[ -z_1,-z_2,-z_3, 1+z_{12}, 1+z_{13}, 1+z_{23}, -2-z_{123} ] \,. \label{sol4-1} \end{align} We use the relation \begin{align} & \frac{ \Gamma [-z_4, 1+z_{34}, -2-z_{134}, -2-z_{234}, 3+z_{1234} ] } {\Gamma (-1-z_4)} \nonumber\\ &= \Gamma[ -2-z_{134}, -1-z_{234}, 1+z_{34}, 3+z_{1234} ] \nonumber\\ &\quad + \frac{ \Gamma[2+z_{23}, -2-z_{134}, -2-z_{234}, 1+z_{34}, 3+z_{1234} ] } {\Gamma(1+z_{23})} \label{sol4-2} \end{align} to reduce the number of $\Gamma$-functions whose argument contains $z_4$ from 6 to 4. Now one can apply the first Barnes lemma to solve the $z_4$-integral. The resulting three-dimensional integral can easily be reduced to a sum of two-dimensional integrals, since the $z_3$-integral can be solved by the variants of the first and second Barnes lemmas. Thus, we are left with two-dimensional integrals, and the way to solve them is explained below. Note that the reduction in Eq.~\eqref{sol4-2} can be done only after the limits $\delta_j\to 0$ and $\epsilon \to0$, since the $\Gamma$-functions in the denominator and in the numerator have a different dependence on $\epsilon$; thus the cancellation does not occur before the limits have been taken.
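The rearrangement~\eqref{sol4-2} is nothing but the recurrence $\Gamma(w+1)=w\,\Gamma(w)$, applied such that the prefactors satisfy $(-1-z_4)=(-2-z_{234})+(1+z_{23})$; a short \texttt{sympy} check (ours):

```python
# Check (ours) of the Gamma-function rearrangement Eq. (sol4-2).
import sympy as sp

z1, z2, z3, z4 = sp.symbols('z1:5')
G = sp.gamma
z23, z34 = z2 + z3, z3 + z4
z134, z234, z1234 = z1 + z3 + z4, z2 + z3 + z4, z1 + z2 + z3 + z4

common = G(-2 - z134)*G(-2 - z234)*G(1 + z34)*G(3 + z1234)
lhs = G(-z4)/G(-1 - z4)*common
rhs = (G(-2 - z134)*G(-1 - z234)*G(1 + z34)*G(3 + z1234)
       + G(2 + z23)/G(1 + z23)*common)

# divide out the common Gamma factors and use Gamma(w+1) = w*Gamma(w)
diff = sp.gammasimp(sp.expand(lhs/common)) - sp.gammasimp(sp.expand(rhs/common))
assert sp.simplify(diff) == 0
```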
The content of this subsection is not formulated in an algorithmic way and has been carried out manually. \subsubsection*{Two-Dimensional Mellin-Barnes Integrals} In the case of integrals with no argument, such as \begin{align} \int \mathrm{d}z_1\mathrm{d}z_2 ~\Gamma[ -z_1,-1+z_1,-z_2,z_2,1-z_{12},-1+z_{12}] \psi(z_1)\psi(1-z_{12}) \label{solmb2-1} \,, \end{align} or integrals with a single argument of the form \begin{align} \int \mathrm{d}z_1\mathrm{d}z_2 ~X^{z_2} \Gamma[ -z_1,-1+z_1,-z_2,z_2,1-z_{12},-1+z_{12}] \psi(z_1)\psi(1-z_{12}) \label{solmb2-2} \,, \end{align} we first reduce them to a one-dimensional integral using the generalized Barnes lemma~\cite{Davies:2018ood} \begin{align} & \int_{C}^{} \frac{dz}{2\pi i} \frac{\Gamma [a_1-z,a_2-z,a_3+z,a_4+z,a_5+z]}{\Gamma (-a_6+z)} \nonumber\\ &\hspace{2cm}= \frac{\Gamma [a_{13},a_{23},a_{14},a_{24},a_{15},a_{25}]} {\Gamma [a_{1235},a_{1245},-a_{56}]} \, _3F_2 \left( \begin{array}[]{c} a_{15},a_{25},a_{123456}\\ a_{1235},a_{1245} \end{array};1 \right) \,, \label{solmb2-10} \end{align} where $_3F_2$ is the generalized hypergeometric function~\cite{hypgeo1,hypgeo2}. A useful corollary of Eq.~\eqref{solmb2-10} is presented in Appendix~\ref{app:mb}. The resulting one-dimensional integrals can be solved by the methods described below. Integrals with two different arguments are difficult to solve. However, in our case such integrals appear only at higher orders in $\epsilon$, so we do not need to consider them. \subsubsection*{One-Dimensional Mellin-Barnes Integrals} For integrals with no argument such as\footnote{ After the analytic continuation described in Subsection~\ref{ss:anacon}, we may have an expression where some of the poles merge. The following procedure can also be applied in these cases.
} \begin{align} \int \mathrm{d}z_1 ~\Gamma[ -z_1,-1+z_1,1-z_1,-1+z_1] \psi (z_1)\psi (-2-z_1) \,, \label{solmb1} \end{align} we evaluate them numerically using \texttt{Mathematica} and apply the PSLQ algorithm~\cite{PSLQ1,PSLQ2} to fit them to a basis of constants which consists of all possible products of \begin{align} \{ 1,\gamma_E, \pi^2,\zeta_3 ,\zeta_5 \} \label{solmb2} \end{align} up to a transcendental weight of five. The results turn out to only require constants to weight four. Typically, 50--70 digits of the numerical result are sufficient to obtain the correct answer, which we verify with 100 more digits. The integrals with one argument typically have the form \begin{align} \int \mathrm{d}z_1 X^{z_1} ~\Gamma[ -z_1,-1+z_1,1-z_1,-1+z_1], \qquad X=\frac{X_1}{X_2}, \qquad X_1,X_2\in \{S,T,U\} \,. \label{solmb3} \end{align} Various combinations of $X_1, X_2$ appear since the template integrals contain them. We obtain the series expansion of Eq.~\eqref{solmb3} by taking the residue of the left poles or the right poles. By adjusting which poles we consider, we can choose the series in terms of either $T/S, U/S$, or $T/U$: \footnote{ Below Eq.~\eqref{series}, it was stated that we use the normal equal sign for series representations when the hierarchy is obvious. However, here we have to introduce additional assumptions for $X= T/S, U/S, T/U,$ since hierarchies between the positive Mandelstam variables have not been fixed up to this point. Thus we use the sign ``$\overset{\mathrm{AC}}{=}$" in Eq.~\eqref{solmb4}, indicating that a certain analytic continuation should be performed in order to ensure $X\ll 1$. 
} \begin{align} \int \mathrm{d}z_1 X^{z_1} ~\Gamma[ -z_1,-1+z_1,1-z_1,-1+z_1] \overset{\mathrm{AC}}{=} \sum_{n_1=0}^4 \sum_{n_2=0}^\infty c_{n_1 n_2} (\log X)^{n_1} X^{n_2} \,, \label{solmb4} \end{align} with constant coefficients $c_{n_1 n_2}$. Now we apply analytic continuation and obtain \begin{align} X=T/S:&\quad \log X \overset{\mathrm{AC}}{=} h_0-\log s+i\pi,\qquad X \overset{\mathrm{AC}}{=} -v \label{solmb5} \\ X=U/S:&\quad \log X \overset{\mathrm{AC}}{=} -h_1-\log s+i\pi,\qquad X\overset{\mathrm{AC}}{=} -(1-v) \label{solmb6} \\ X=T/U:&\quad \log X \overset{\mathrm{AC}}{=} h_0+h_1,\qquad X\overset{\mathrm{AC}}{=} \frac{v}{1-v} \,. \label{solmb7} \end{align} Recall that $h_1=-\log (1-v)$. We fit the series to harmonic polylogarithms (HPLs) and express the result in terms of $h_N$. In the case of~\eqref{solmb5}, the series in $v$ is directly fit to $h_N$. In the case of~\eqref{solmb6}, we first fit the series to HPLs with argument $(1-v)$ and then express them in terms of $h_N$. In the case of~\eqref{solmb7}, we first fit the series to HPLs with argument $v/(1-v)$ and then express them in terms of $h_N$. Taking into account that $0\leq v\leq 1$ and the branch cut of $h_{N>0}$ lies on the real axis at $v>1$, we never cross the branch cut in the above procedure. The branch-cut information is encoded in the analytic continuation of $\log X$. We have already covered all of the combinations of $X=X_1/X_2$, so the calculation of the crossed diagrams can be done with the same procedure. For our sample integral~\eqref{two-1}, there are about 50 one-dimensional Mellin-Barnes integrals which are treated in this way. 
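The PSLQ fitting step described above can be sketched with \texttt{mpmath}'s implementation of the algorithm. The input value and the truncated basis below are purely illustrative (hand-made, not one of the actual Mellin-Barnes integrals of this paper):

```python
# Minimal sketch of the PSLQ fitting step: recover the rational coefficients of
# a numerical value in a (truncated, illustrative) basis of constants.
from mpmath import mp, mpf, pi, euler, zeta, pslq, fdot

mp.dps = 60  # comparable to the 50-70 digits mentioned in the text

basis = [mpf(1), euler, pi**2, zeta(3)]   # truncated basis for brevity
value = 2 - pi**2 / 6 + 3 * zeta(3)       # hand-made stand-in for an integral

# PSLQ returns integers c with c[0]*value + sum_i c[i+1]*basis[i] ~= 0
relation = pslq([value] + basis, tol=mpf(10) ** -50)
assert relation is not None
assert abs(fdot(relation, [value] + basis)) < mpf(10) ** -40

# normalize so the coefficient of `value` is -1: value = sum coeffs[i]*basis[i]
coeffs = [mpf(-c) / relation[0] for c in relation[1:]]
print(coeffs)  # recovers 2, 0, -1/6, 3
```

In practice one runs the fit at 50--70 digits and then re-evaluates the candidate relation at higher precision, exactly as described in the text.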
\subsection{Combining the Results} \label{ss:total} Summing the contributions from all the relevant regions, we obtain for our sample integral \begin{align} I=& \frac{i\pi^3 e^{-2\epsilon\gamma_E}}{m_ts^{\frac{5}{2}}\sqrt{v}\sqrt{1-v}} \left( \frac{1}{\epsilon}-2\log (m_t^2)-10\log 2\right) \nonumber\\ &+ \frac{e^{2i\pi\epsilon}e^{-2\epsilon\gamma_E}}{s^{3+2\epsilon}} \sum_{i_1=-1}^0\sum_{i_2=0}^{4+i_1} \frac{d_{i_1,i_2}}{v(1-v)} \epsilon ^{i_1} \log ^{i_2} (m_t) +\mathcal{O}(m_t,\epsilon) \end{align} where \begin{align} d_{-1,3}&=-\frac{4}{3}\,,\qquad d_{-1,2}=h_0 (4 v+2)+h_1 (4 v-6)+6 i \pi\,,\nonumber\\ d_{-1,1}&=-h_0^2+2 h_0 h_1+8 h_0 v+2 i \pi h_0 (2 v-1)-h_1^2+8 h_1 (v-1)+2 i \pi h_1 (2 v-1)-\frac{10 \pi ^2}{3}+8 i \pi\,,\nonumber\\ d_{-1,0}&=-\frac{1}{2} i \pi h_0^2+4 h_0 h_1+i \pi h_0 h_1-\frac{1}{2} i \pi h_1^2-8 i \pi+4 i \pi h_1 v\nonumber\\ &+\frac{1}{6} h_0^3 (1-2 v)+\frac{1}{3} \pi ^2 h_0 (5-8 v)+h_1 (8-8 v)+\pi ^2 h_1 \left(1-\frac{8 v}{3}\right)-\frac{4 i \pi ^3}{3}\nonumber\\ &+h_0^2 h_1 \left(\frac{1}{2}-v\right)+h_0 h_1^2 \left(\frac{1}{2}-v\right)+4 i \pi h_0 (v-1)-8 h_0 v+\frac{1}{6} h_1^3 (1-2 v)\,,\nonumber\\ d_{0,4}&=-\frac{10}{3}\,,\qquad d_{0,3}=h_0 (4-8 v)+h_1 (4-8 v)-\frac{20 i \pi }{3}\,,\nonumber\\ d_{0,2}&=-h_0^2+6 h_0 h_1-2 i \pi h_0 (6 v+1)-h_1^2-2 i \pi h_1 (6 v-7)+\frac{47 \pi ^2}{3}\,,\nonumber\\ d_{0,1}&=-i \pi h_0^2+16 h_0 h_1-16 i \pi h_0+16 i \pi h_1-32 i \pi+\pi ^2 h_0 \left(2 v-\frac{7}{3}\right)\nonumber\\ &+\frac{1}{3} h_0^3 (-2 v-1)-2 i \pi h_0 h_1-i \pi h_1^2-i \pi ^3+16 \pi ^2+\pi ^2 h_1 \left(2 v+\frac{1}{3}\right)\nonumber\\ &+h_0^2 h_1 (-2 v-1)+h_0 h_1^2 (3-2 v)-32 h_0 v+h_1^3 \left(1-\frac{2 v}{3}\right)-32 h_1 (v-1)+6 \zeta_{3}\,,\nonumber\\ d_{0,0}&=8 h_0 h_1^2-32 h_0 h_1-12 i \pi h_1 h_2\ +h_1 h_3 (8-16 v)+h_4 (60-48 v)-16 \pi ^2+\frac{22}{3} \pi ^2 h_2 (1-2 v)\nonumber\\ &+h_0^2 h_1 (8-8 v)+h_0^2 h_2 (10-8 v)+\frac{11}{12} \pi ^2 h_1^2 (1-4 v) +h_0 h_1^3 \left(\frac{1}{2}-2 v\right)+\frac{1}{8} h_1^4 (5-4 
v)\nonumber\\ &+h_0^2 h_1^2 \left(-v-\frac{1}{4}\right)+16 h_0 h_2 (v-2)-\frac{2}{3} \pi ^2 h_1 (v-13) -16 i \pi h_0 (v-2)-\frac{8}{3} h_1^3 (v-1)\nonumber\\ &-\frac{8 h_0^3 v}{3}-8 i \pi h_0^2 (v-1)+8 i \pi h_1^2 v +8 i \pi h_0 h_2 (v+1)-\frac{2}{3} \pi ^2 h_0 (v+12)-16 i \pi h_1 (v+1)\nonumber\\ &-\frac{1}{6} i \pi h_0^3 (2 v-11)+\frac{1}{6} i \pi ^3 h_1 (2 v-7)+\pi ^4 \left(\frac{151 v}{90}-\frac{19}{9}\right) +h_0^3 h_1 \left(2 v-\frac{3}{2}\right)-8 i \pi h_0 h_1 (2 v-1)\nonumber\\ &-\frac{1}{2} i \pi h_0^2 h_1 (2 v+1)-\frac{1}{6} i \pi h_1^3 (2 v+9)-4 i \pi h_3 (2 v+5) +\frac{11}{12} \pi ^2 h_0^2 (4 v-3)+8 h_0 h_3 (4 v-5)\nonumber\\ &+\frac{1}{8} h_0^4 (4 v+1)+\frac{1}{2} i \pi h_0 h_1^2 (6 v-1)-\frac{2}{3} i \pi ^3 (4 v+1)+\frac{1}{6} i \pi ^3 h_0 (2 v+17)-16 h_3 (v-2)\nonumber\\ &+h_0 h_1 h_2 (16 v-8)+\frac{1}{6} \pi ^2 h_0 h_1 (44 v-31)+h_2^2 (14 v-7)+16 i \pi h_2 (2 v-1)\nonumber\\ &+h_0 (h_{21} (20-40 v)+64 v)+4 h_1 (4 h_{21} v+h_{21}+16 (v-1))-4 i \pi (h_{21} (2 v-7)-16)\nonumber\\ &+h_0 (-30 v-3) \zeta_{3}+h_1 (21-30 v) \zeta_{3}-2 (8 h_{21} (v+1)+6 h_{211} (4 v+1)+7 h_{22} (2 v-1))\nonumber\\ &+16 v \zeta_{3}+i \pi (8 v-61) \zeta_{3}\,. \end{align} This result is consistent with Ref.~\cite{Kudashkin:2017skd} after a proper analytic continuation. One remarkable feature of the result is that it contains terms proportional to $1/m_t$. Higher-order corrections in $m_t$ also contain such odd-power terms. 
These odd-power terms come from Regions~2 and~3: \begin{align} &\lim_{\epsilon,\delta_7,\delta_6,\delta_5,\delta_4,\delta_3,\delta_2,\delta_1\to0} \left( \mathcal{T}^{(2)} +\mathcal{T}^{(3)} \right) \times e^{2\epsilon\gamma_E} \nonumber\\ &\qquad = \frac{i\pi^3}{m_ts^{\frac{5}{2}}\sqrt{v}\sqrt{1-v}} \left( \frac{1}{\epsilon}-2\log (m_t^2)-10\log 2\right) +m_t^0~\Delta (1/\delta_j) +\mathcal{O}(\epsilon,m_t) \,, \label{two-tot1} \end{align} where $\Delta (1/\delta_j)$ has poles in $\delta_j$, and these poles are cancelled by the contributions from the other regions. \subsection{Other Master Integrals} \label{ss:dots} \subsubsection*{Seven-Line Integrals} After the IBP reduction described in Ref.~\cite{Davies:2018qvx}, we find that there are four more diagrams which have seven internal lines. We consider $J^\mathrm{NPL1}_{2,1,1,1,1,1,1}$, $J^\mathrm{NPL1}_{1,1,2,1,1,1,1}$, $J^\mathrm{NPL1}_{1,1,1,2,1,1,1}$ and $J^\mathrm{NPL1}_{1,1,1,1,1,2,1}$, which are used to calculate the Higgs pair production cross section. (For details, see Ref.~\cite{Davies:2018qvx}.) These integrals can be obtained by proper shifts of $\delta_j$: $\delta_1\to\delta_1+1$ for $J^\mathrm{NPL1}_{2,1,1,1,1,1,1}$, for example. \subsubsection*{Six-Line Integrals} We can use the method described in Subsection~\ref{ss:fewer} to compute the integrals with fewer lines. For example, $J^\mathrm{NPL1}_{1,1,1,1,1,1,0}$ is obtained by shifting $\delta_7\to\delta_7-1$ and repeating the same procedure described above. Note that the analytic continuation of $\delta_j$ may change due to the shift. For example, the template integral of Region 12~\eqref{template-npl12} can be regularized with $\delta_{i>0}=0$, whereas $J^\mathrm{NPL1}_{1,1,1,1,1,1,0}$ requires a non-zero $\delta_7$ to regularize that template integral. \section{Summary} \label{ss:sum} Asymptotic expansion is a useful tool to extract information from multi-scale Feynman integrals, which are difficult to solve exactly. 
The method of regions plays an essential role in this extraction. The crucial part of the method of regions is to reveal the relevant regions correctly, and a naive application of the conventional method fails in the case of non-planar integrals, for which the second Symanzik polynomial does not have a definite sign. We solve this problem by performing an analytic continuation of the Mandelstam variables such that the second Symanzik polynomial is positive definite. We show the applicability of the method of regions by the explicit calculation of the master integrals of the Higgs pair production cross section at two-loop order, in the high energy limit. It is straightforward to extend our calculation to other four-point two-loop integrals which satisfy $q_i^2\ll m^2\ll S,T,U$, where $q_i$ are the external momenta and $m$ is the mass of the internal lines. We anticipate that our idea to make the second Symanzik polynomial positive definite works in more general cases. In addition to solving the issue of the sign of the second Symanzik polynomial, we formulate the procedure of the calculation in a systematic way. The contribution from each region is expressed in terms of Mellin-Barnes integrals, and a way to solve them is presented. The procedure presented here to solve the Mellin-Barnes integrals beyond the Barnes lemmas is not applicable to the general case, although it is sufficient to solve our master integrals completely. The automation of this part is a future project. As a by-product of introducing the positive Mandelstam variables, it becomes easier to obtain the crossed integrals, since the crossing of the positive Mandelstam variables does not cross any branch cut. We compute the first few terms of the series in $m_t$, and the higher order terms can be obtained by the use of the $m_t$-differential equations. In this sense, our results can be considered as boundary conditions of the differential equations with respect to $m_t$. 
\section*{Acknowledgements} The author is grateful to Joshua Davies, Matthias Steinhauser, and David Wellmann for valuable collaborative work, and to Hjalte Frellesvig and Christopher Wever for useful discussions. The author sincerely thanks Matthias Steinhauser, Joshua Davies, Vladimir Smirnov, and Yukinari Sumino for carefully reading the manuscript and for many useful comments.
\section{Introduction.} At the end of the XIXth century, in his studies of wealth and income \footnote{Wealth and income are proxies of each other to first approximation for our purposes.} distribution in different countries, Vilfredo Pareto (\cite{P1}, \cite{P2}) discovered the universal power law that governs the upper tail of wealth distribution. It is well known that this is not a good model for the lower part of the curve, which is more dependent on specific sociological factors and is of log-normal type (see the discussion in \cite{Ma}). The exponent in the power decay is country dependent and is an indicator of equitable wealth (re)distribution. A larger exponent indicates a more equitable wealth distribution. Pareto's universal asymptotic behaviour appears in distributions from various other contexts, and, as we show, is typical of competitive systems where the reward is proportional to the accumulated wealth. The purpose of this article is to provide a simple explanation of Pareto's empirical observation. We propose a natural dynamical model of evolution of wealth where Pareto distributions emerge as invariant dynamically stable \footnote{``Dynamically stable distribution'' in the Dynamical System sense, not in the probabilistic sense.} distributions of this Dynamical System. The stability of the wealth distribution, which is different from the universality property, was conjectured by Pareto, whose intuition apparently comes from his empirical observations. We can read in \cite{P2}, chap. 
VII, point 31, p.393: \medskip \textit{Si, par exemple, on enlevait tout leur revenu aux citoyens les plus riches, en supprimant la queue de la figure des revenus, celle-ci ne conserverait pas cette forme, mais t\^ot ou tard elle se r\'etablirait suivant une forme semblable \`a la premi\`ere.}\footnote{\textit{``If, for instance, we confiscated all the income of the richest citizens, thus erasing the tail of the income distribution, this shape would not persist; sooner or later it would return to a shape similar to the original.''}} \medskip There are other classical models and studies of Pareto's empirical observation and power laws (like Zipf's law). For the record we cite a few classical ones: H. Simon \cite{Si}, D.G. Champernowne \cite{Ch}, B. Mandelbrot \cite{Ma}, etc. Simon's model \cite{Si} for Zipf's law is a ``genesis model'' of the distribution, i.e. it is a model for its creation. Champernowne \cite{Ch} proposed a general multiplicative stochastic model, and B. Mandelbrot \cite{Ma} explained Pareto's law by the universal limit character of Pareto-L\'evy probabilistically stable distributions. \section{The dynamical model.} In this section, we propose and study a dynamical model of wealth evolution which is a simple first approximation. \subsection{Setup.} Let $f(x)$ be the wealth distribution, i.e. $df= f(x) \ dx$ is the number of individuals with wealth in the infinitesimal interval $[x, x+dx[$. The distribution function $f : \RR_+ \to \RR_+$ is continuous, positive, decreasing, with $\lim_{x\to +\infty} f(x) =0$. A distribution is of Pareto type if it presents a power law decay $x^{-\alpha}$ at $+\infty$, that is $$ \lim_{x\to +\infty} -\frac{\log f(x)}{\log x} =\alpha >0 \ . $$ The exponent $\alpha >0$ is the \textit{Pareto exponent}. A distribution of the form $f(x) = C. x^{-\alpha}$ is called a Pareto distribution. Smaller values of $\alpha$ indicate larger inequalities in wealth distribution. 
Notice that $\alpha >1$ is necessary for the distribution to be summable at $+\infty$, i.e. finite wealth at infinity (finiteness near $0$ is not significant since the model aims to explain the tail behaviour at $+\infty$). \subsection{Wealth dynamics.} We focus on the evolution of individual wealth. We assume that the evolution is based on two main factors: financial decisions, which we model as a betting game, and public redistribution of wealth, which absorbs part of the individual wealth into public wealth. For the first factor we model the financial decisions of each individual by a sequence of bets. Each financial decision turns out to be a bet, wagering a proportion of his wealth. As a first approximation, we assume that the probability of success is the same for all agents and all bets, $0<p<1$ (this is the average probability). At each round, each agent risks the same percentage of his wealth, a fraction $\gamma >0$ (that is also an average). If he wins, his wealth is multiplied by the factor $1+\gamma$, and if he loses his wealth is divided by $1+\gamma$. Considering only this first factor, after one round of evolution the distribution transforms into the new distribution $$ \cW (f) (x) = \frac{p}{1+\gamma} \ f( x/(1+\gamma)) + (1-p)(1+\gamma ) \ f( (1+\gamma) x) \ . $$ The operator $\cW$ is ``wealth preserving''. In terms of the $L^1$-norm we have $$ ||\cW (f)||_{L^1} = ||f||_{L^1} \ . $$ The agents will only risk their capital if there is a positive expectation of gain, thus we should assume that $p>1/2$. There are other mechanisms that affect wealth evolution that we should consider, such as inheritances that divide wealth, taxes, etc. Note that public wealth drains individual wealth by the fiscal mechanism. 
Thus it is natural to consider a broader class of operators $\cW$ with a dissipative parameter $\kappa \geq 1$, the \textit{dissipative coefficient}, $$ \cW_\kappa (f) (x) = \frac{1}{\kappa} \cW (f) (x)=\frac{p}{\kappa (1+\gamma)}f \left ( x/(1+\gamma) \right ) + \frac{(1-p)(1+\gamma)}{\kappa} f( (1+\gamma)x) \ , $$ so that for $\kappa=1$ the operator is wealth preserving. We name the model for $\kappa =1$ the ``wealth preserving model''. \subsection{Invariant distributions.} Distributions invariant under the evolution operator $\cW_\kappa$ must satisfy the fixed-point functional equation $\cW_\kappa (f)=f$, that is, \begin{equation} \label{functional_eq0} f(x)= \frac{p}{\kappa (1+\gamma)}f \left ( x/(1+\gamma) \right ) + \frac{(1-p)(1+\gamma)}{\kappa} f( (1+\gamma)x) \ . \end{equation} We solve this equation in the next section. \subsection{Solution of the functional equation.} Considering the change of variables $F(x)=f(e^x)$, equation (\ref{functional_eq0}) becomes a functional equation for $F:\RR \to \RR$ \begin{equation} \label{functional_eq} a \ F(x+\lambda) - F(x) +b \ F(x-\lambda) =0 \ , \end{equation} where $\lambda = \log (1+\gamma ) >0$, $a=(1-p)(1+\gamma )/\kappa > 0$ and $b=p/(\kappa (1+\gamma)) > 0$. There is a general theory for this type of functional equation. L. Schwartz (see \cite{S}, and also \cite{M}, \cite{K}) studied more general ``mean periodic'' smooth functions $F$ that satisfy a functional equation of the form $$ \omega \star F = 0 \ , $$ where $\omega$ is a compactly supported distribution. In our case, $\omega = a \ \delta_\lambda -\delta_0 + b \ \delta_{-\lambda}$\footnote{The way to study these equations is by Fourier transforming them (\`a la Carleman \cite{C} using hyperfunctions in order to work in sufficient generality). One of the general results by L. 
Schwartz (see \cite{S} Theorem 10 p.894) is the ``spectral synthesis'' of solutions: Smooth solutions are uniform limits on compact sets of $\RR$ of linear combinations of exponential solutions $(e^{\rho x})_\rho$. Also these exponential solutions are not limits of linear combinations of the others, thus the expansion is unique.}. \medskip In our simplified model we do not need the general theory and the equation can be solved by elementary means. First, the exponential solutions are easy to calculate. A function $F(x)=e^{\rho x}$ is a solution if $e^{\rho \lambda}$ satisfies the following second degree equation: \begin{equation}\label{2nd_degree} a \left (e^{\rho \lambda}\right )^2 - \left (e^{\rho \lambda}\right ) +b =0 \ . \end{equation} Observe that the discriminant $\Delta = 1-4ab$ is positive since we have $$ ab=\frac{p(1-p)}{\kappa^2}< \frac{1}{4\kappa^2} \ , $$ thus $\Delta > 1-\frac{1}{\kappa^2} \geq 0$ because $\kappa \geq 1$; moreover $\Delta>0$ also for $\kappa=1$ since the inequality $p(1-p)<1/4$ is strict ($p>1/2$). Thus we have two distinct solutions: $$ e^{\rho \lambda} = \frac{1}{2a} \pm \frac{1}{2a} \sqrt{1-4ab} \ . $$ Since $a>0$ and the polynomial $P(x)=ax^2-x+b$ satisfies $P(0) >0$ and $P(1)<0$ (the latter holds provided the bets are not too risky; for $\kappa=1$ it amounts to $1+\gamma < p/(1-p)$), we have two real roots $x_1$ and $x_2$ with $0<x_1 < 1 < x_2$. Therefore, we have two families of solutions for $\rho$ in two vertical lines in the complex plane, for $k\in \ZZ$, $j=0,1$, $$ \rho_{j,k} = \lambda^{-1}\log x_j + 2 \pi i k \lambda^{-1} \ . $$ Note that $\Re \rho_{0,k} < 0 < \Re \rho_{1,k}$. Let $\rho_0= \rho_{0,0} <0$ and $\rho_1=\rho_{1,0}>0$. Observe that the particular solution $F(x)= C.e^{\rho_{0} x}$ leads to the solution $f(x)=F(\log x)=C. x^{\rho_0}$, which is exactly a Pareto distribution with Pareto exponent $\alpha = -\rho_0$. 
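As a quick numerical sanity check (the parameter values below are purely illustrative), one can verify the ordering of the roots, the invariance of the particular Pareto solution under $\cW_\kappa$, and, for $\kappa=1$, that the Pareto exponent equals $1$:

```python
# Numerical sanity check (illustrative parameters): roots of a X^2 - X + b = 0
# satisfy 0 < x_1 < 1 < x_2, f(x) = x^rho_0 is a fixed point of W_kappa, and
# for kappa = 1 the Pareto exponent is exactly 1.
import math

p, gamma, kappa = 0.6, 0.1, 1.05     # illustrative: p > 1/2, kappa >= 1
lam = math.log(1 + gamma)            # lambda = log(1 + gamma)
a = (1 - p) * (1 + gamma) / kappa
b = p / (kappa * (1 + gamma))

disc = 1 - 4 * a * b                 # discriminant Delta = 1 - 4ab > 0
x1 = (1 - math.sqrt(disc)) / (2 * a)
x2 = (1 + math.sqrt(disc)) / (2 * a)
assert 0 < x1 < 1 < x2

rho0 = math.log(x1) / lam            # rho_0 < 0
alpha = -rho0                        # Pareto exponent

def W(h, x):
    """Wealth operator W_kappa applied to h, evaluated at x."""
    return (p / (kappa * (1 + gamma)) * h(x / (1 + gamma))
            + (1 - p) * (1 + gamma) / kappa * h((1 + gamma) * x))

f = lambda x: x ** rho0              # candidate invariant Pareto distribution
for x in (0.5, 1.0, 3.0, 10.0):
    assert abs(W(f, x) - f(x)) < 1e-12 * f(x)   # W_kappa(f) = f pointwise

# wealth preserving case kappa = 1: the Pareto exponent is exactly 1
a1, b1 = (1 - p) * (1 + gamma), p / (1 + gamma)
x1_wp = (1 - math.sqrt(1 - 4 * a1 * b1)) / (2 * a1)
assert abs(-math.log(x1_wp) / lam - 1) < 1e-9
```

The pointwise check of $\cW_\kappa(f)=f$ is exactly the statement that $e^{\rho_0\lambda}=x_1$ is a root of the second degree equation above.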
We can now solve the functional equation completely\footnote{We have a strong form of the Schwartz spectral theorem.}: \begin{theorem} The general solution of the functional equation (\ref{functional_eq}), \begin{equation} a \ F(x+\lambda) - F(x) +b \ F(x-\lambda) =0 \ , \end{equation} (with $a, b, \lambda$ as above) is $$ F(x) = e^{\rho_0 x} L_0(x/\lambda) +e^{\rho_1 x} L_1(x/\lambda) $$ where $L_0$ and $L_1$ are $\ZZ$-periodic functions. \end{theorem} In order to solve the functional equation (\ref{functional_eq}), we consider $H(x) =F(x+\lambda)-e^{\rho_0 \lambda} F(x)$. Subtracting (\ref{functional_eq}) from (\ref{2nd_degree}) multiplied by $e^{-\rho_0 \lambda} F(x)$ we get $$ aH(x)-be^{-\rho_0 \lambda} H(x-\lambda) =0 \ , $$ or $$ H(x)=\left (\frac{b}{a} \ e^{-\rho_0 \lambda} \right ) H(x-\lambda) \ . $$ Considering $$ \hat H(x) =\left (\frac{b}{a} \ e^{-\rho_0 \lambda} \right )^{-x/\lambda} H(x) \ , $$ we have that $\hat H(x)=\hat H(x-\lambda)$, i.e. there is a $\ZZ$-periodic function $L$ such that $$ H(x)= \left (\frac{b}{a} \ e^{-\rho_0 \lambda} \right )^{x/\lambda} L(x/\lambda) \ . $$ Therefore we have $$ F(x+\lambda)-e^{\rho_0 \lambda} F(x)= \left (\frac{b}{a} \ e^{-\rho_0 \lambda} \right )^{x/\lambda} L(x/\lambda) \ . $$ Now, put $$ \hat F(x) = e^{-\rho_0 x} F(x) \ . $$ Then we need to solve $$ \hat F(x+\lambda) - \hat F(x) = e^{-\rho_0 \lambda} \ \left (\frac{b}{a} \right )^{x/\lambda} \ e^{-2\rho_0 x} L(x/\lambda) \ , $$ and if we write $G(x)=e^{\rho_0 \lambda} \hat F(x)$ and $c=-2\rho_0+\lambda^{-1} \log(b/a)$, $$ G(x+\lambda) - G(x) =e^{cx} L(x/\lambda) \ . 
$$ We use the following lemma: \begin{lemma} For $c\in \RR$, $\lambda >0$, and $L$ a $\ZZ$-periodic function, the solutions of the functional equation \begin{equation} G(x+\lambda)-G(x) = e^{cx} L(x/\lambda)\ , \end{equation} are of the form $$ G(x)= G_0(x)+M(x/\lambda) \ , $$ where $M$ is a $\ZZ$-periodic function, and for $c\not= 0$, $$ G_0(x)= \frac{e^{cx}}{e^{c\lambda}-1} \ L(x/\lambda) \ , $$ and for $c=0$ $$ G_0(x)= \lambda^{-1} x \ L(x/\lambda) \ . $$ \end{lemma} \begin{proof} Obviously, in both cases $G_0$ is a particular solution. Then the functional equation is equivalent to $M(x+1)-M(x)=0$, where $M(x)=G(\lambda x)-G_0(\lambda x)$, i.e. $M$ is $\ZZ$-periodic. \end{proof} So, in the non-degenerate case ($c\not= 0$), absorbing the multiplicative constants into $L$ and $M$, the general solutions of (\ref{functional_eq}) are of the form $$ F(x) = e^{(-\rho_0 +\lambda^{-1} \log (b/a) ) x} L(x/\lambda) +e^{\rho_0 x} M(x/\lambda) \ . $$ And coming back to the second degree equation (\ref{2nd_degree}) we have $$ e^{-\rho_0 \lambda} \ \frac{b}{a}=e^{\rho_1 \lambda} \ , $$ so $$ F(x) = e^{\rho_1 x} L(x/\lambda) +e^{\rho_0 x} M(x/\lambda) \ . $$ Indeed the degenerate case never happens: \begin{lemma} We have $c\not=0$. \end{lemma} \begin{proof} If $c=0$ then $e^{2\rho_0 \lambda}=b/a=e^{\rho_0 \lambda}.e^{\rho_1 \lambda}$ and $e^{\rho_0 \lambda}=e^{\rho_1 \lambda}$, so the root of the second degree equation would be double and the discriminant would be $\Delta =0$, but we have seen that $\Delta >0$. \end{proof} If we require that $F>0$ and $F(x)\to 0$ for $x\to +\infty$ (the only sound solutions), then $L=0$ and $M>0$, $$ F(x) = e^{\rho_0 x} M(x/\lambda) \ . $$ Finally we have $$ f(x)=x^{\rho_0} M(\lambda^{-1}\log x ) \ . $$ If we look for continuous solutions, then $M$ must be continuous and bounded since it is $\ZZ$-periodic, thus $f$ satisfies Pareto asymptotics $$ \lim_{x\to +\infty} -\frac{\log f(x)}{\log x} = - \rho_0 = \alpha >0 \ . 
$$ \subsection{Pareto exponent.} It is interesting that we can compute an explicit expression for the Pareto exponent in terms of the parameters $\kappa$, $\gamma$ and $p$. \begin{corollary} The Pareto exponent is given by $$ \alpha = -\rho_0 = -\lambda^{-1} \log \left (\frac{1-\sqrt{1-4ab}}{2 a}\right ) $$ or $$ \alpha = 1- \frac{\log \left (\frac{\kappa-\sqrt{\kappa^2-4p(1-p)}}{2 (1-p)}\right )}{\log(1+\gamma) } \ . $$ \end{corollary} It is interesting to note that the Pareto exponent $\alpha$ decreases when $\gamma$ increases. This means that a riskier financial behaviour, or a more active economy, favours an unequal distribution. Fortunes are created and lost more often. Ruin is more common. Indeed we know by the Kelly criterion \cite{Ke} that ruin is almost sure in the long run if $\gamma$ is larger than a certain threshold. With a slightly modified model we can explain Pareto's theory of ``Circulation of Elites''. Indeed this circulation occurs at all levels of social status when the agents are not conservative enough to satisfy the Kelly criterion. We will discuss these questions in a companion article \cite{PM}. The Pareto exponent also increases with $\kappa$ since, after cancelling the common factor $\kappa-\sqrt{\kappa^2-4p(1-p)}$, $$ \frac{d\alpha}{d\kappa} =\frac{1}{\log(1+\gamma)} \ \frac{1}{\sqrt{\kappa^2-4p(1-p)}} \ , $$ which is positive. This is natural since a larger $\kappa$ means a larger demographic and fiscal pressure, and thus we expect a better redistribution of wealth and a larger Pareto exponent. \subsection{A remarkable solution in the wealth preserving model.} A Pareto exponent $\alpha >1$ is necessary for summability of the tail of the distribution and is always observed in experimental studies. It is remarkable that in the wealth preserving model, with the critical value of the dissipative coefficient $\kappa =1$, the Pareto exponent is exactly $\alpha =1$. 
\begin{theorem} In the wealth preserving model, $\kappa = 1$, the Pareto exponent is exactly equal to $\alpha =1$. \end{theorem} \begin{proof} For $\kappa=1$ we have $$ \kappa^2-4p(1-p)=(2p-1)^2 \ . $$ Therefore $$ \kappa-\sqrt{\kappa^2-4p(1-p)} =2(1-p) \ . $$ And the formula in the previous section gives $\alpha =1$. \end{proof} This result is natural and to be expected: For $\kappa < 1$ the wealth is increasing without limit and the invariant distributions could not be summable at $+\infty$, and for $\kappa > 1$ we have finite wealth at $+\infty$. From the form of the invariant solutions, we have: \begin{theorem} For an invariant solution, the following conditions are equivalent: \begin{enumerate} \item The tail wealth is summable, $W(f, x_0) < +\infty$. \item The Pareto exponent $\alpha$ is larger than $1$, $\alpha >1$. \item The model is wealth dissipative, that is $\kappa >\kappa_0 = 1$. \end{enumerate} \end{theorem} It has been observed that the wealthiest fraction of the population has a Pareto exponent which is much closer to $1$ than expected (or than that of the middle class, whatever this means). So for this class of the population the dissipative coefficient is closer to the critical one $\kappa_0$; this means that the wealthiest part of the population is able to avoid the mechanisms of fiscal redistribution of wealth. \subsection{Stability of invariant solutions.} We now study Pareto's problem of the stability of the Pareto distribution. Since $\kappa >1$, we can observe that for the $L^1$-norm the operator $\cW_\kappa$ is contracting: \begin{lemma} Let $f, g : \RR_+^* \to \RR_+$ be measurable functions, with $f-g \in L^1(\RR_+^* )$, then $$ || \cW_\kappa (f) -\cW_\kappa (g)||_{L^1} \leq \kappa^{-1} ||f-g||_{L^1} \ . 
$$ \end{lemma} \begin{proof} We have \begin{align*} \left | \cW_\kappa (f) (x) -\cW_\kappa (g) (x) \right | \leq & \frac{p}{\kappa (1+\gamma)} \left |f(x/(1+\gamma)) -g(x/(1+\gamma))\right | \\ &+\frac{(1-p)(1+\gamma)}{\kappa}\left |f(x(1+\gamma)) -g(x(1+\gamma))\right | \end{align*} and the result follows by integrating over $\RR_+^*$. \end{proof} Obviously this lemma is only interesting when $||f-g||_{L^1}$ is finite. For each invariant solution $f_0$ it is natural to consider the space of measurable bounded perturbations of $f_0$ in the $L^1$-norm, where $\cM( \RR_+^*, \RR )$ denotes the space of Borel measurable functions: $$ \cS_{f_0} = \{ g \in \cM( \RR_+^*, \RR ) ; ||g-f_0||_{L^1}<+\infty \} \ . $$ Then the fixed point $f_0$ is a global attractor in $\cS_{f_0}$ and we have: \begin{theorem} For any $g \in \cS_{f_0}$, we have that $\cW_\kappa^n (g) \to f_0$ for the $L^1$-norm at a geometric rate. \end{theorem} This proves the Pareto stability conjecture, exactly as stated by Pareto (see the citation in the introduction): If we remove all wealth larger than some value $x$ from the invariant solution, then the perturbation thus obtained is $L^1$ bounded because of summability of the tail, hence the stability. \section{Other more refined models.} With the same ideas, we can build more sophisticated models that will be studied in the future. The main difference from the model presented here is that the invariant solutions cannot be computed explicitly in general, nor can we give closed formulas for the Pareto exponents. But this does not prevent numerical studies of the invariant solutions. We may more realistically assume that there are different sorts of individuals with different skills for financial investment (different $p$'s), and different risk profiles (different $\gamma$'s). 
If we assume that each class of individuals is equally represented across wealth classes (which is not true: the more skilled ones should be more numerous in the upper classes), then we end up with a general wealth operator of the form $$ \cW_\kappa (f) =\sum_i \frac{p_i}{\kappa (1+\gamma_i)} f(x/(1+\gamma_i)) + \frac{q_i (1+\gamma_i)}{\kappa} f(x(1+\gamma_i)) \ , $$ with $$ \sum_i p_i +\sum_i q_i =1 \ . $$ The exponentials of the Pareto exponents then appear as roots of a Dirichlet polynomial. One can prove, using results from \cite{S}, that the invariant solutions obey a Pareto law. A more realistic model consists in allowing the dissipative coefficient $\kappa$ to be non-constant, making it depend on $x$. In principle, $x\mapsto \kappa (x)$ should be increasing. Then the search for invariant solutions leads to a functional equation with non-constant coefficients whose possible explicit resolution depends on the form of the function $x\mapsto \kappa (x)$. \noindent \textbf{Acknowledgements.} I thank my colleague Philippe Marchal for pointing out an error in the formula of the first version of this article.
\section{Introduction} The behavior of a system of spins interacting with static and time-varying magnetic fields is a very broad topic and has been the subject of intense study for decades. A very important application is to the study of spins interacting with the randomly fluctuating fields associated with a thermal reservoir. Bloembergen, Purcell and Pound \cite{Bloembergen1948} treated this problem using physical arguments based on Fermi's golden rule and showed that the relaxation induced by the fields associated with a thermal reservoir is proportional to the power spectrum of the fluctuating fields evaluated at the Larmor frequency, $\omega_0=\gamma B_0$ (where $\gamma$ is the gyromagnetic ratio and $B_0$ is an applied constant and uniform field), which is given by the Fourier transform of the auto-correlation function of these fluctuating fields evaluated at $\omega_0$. Wangsness and Bloch \cite{Wangsness1953} and then Bloch \cite{Bloch1956} approached the problem using second order perturbation theory applied to the equation of motion of the density matrix, and Redfield \cite{Redfield1957,Redfield1965} (see also \cite{Slichter1964}) carried this calculation forward to show that the relaxation indeed depends on the spectrum of the auto-correlation of the fluctuating fields. Another source of randomly fluctuating fields is the stochastic motion of spins (e.g. diffusion) through a region with an inhomogeneous magnetic field. To study this problem, Torrey \cite{Torrey1956} introduced a diffusion term into the Bloch equation applied to the bulk magnetization of a sample containing many spins (the Torrey equation). Cates, Schaeffer and Happer \cite{Cates1988} then rewrote the Torrey equation to apply to the density matrix and solved this equation to second order in the varying fields using an expansion in the eigenfunctions of the diffusion equation. 
McGregor \cite{McGregor1990} applied the Redfield theory to this problem using diffusion theory to calculate the auto-correlation function of the fluctuating fields experienced by spins diffusing through a (uniform gradient) inhomogeneous field. Recently Golub \textit{et al.} \cite{Golub2010c} showed that these two approaches \cite{Cates1988,McGregor1990} are identical. A useful review of the field is \cite{Nicholas2010}. Another problem that can be treated by these methods is the case of a gas of spins contained in a cell subject to inhomogeneous magnetic fields and a strong electric field, as in experiments to search for a non-zero electric dipole moment (EDM) of neutral particles such as the neutron \cite{Lamoreaux2009} or various atoms or molecules \cite{Eckel2013}. This was shown by Pendlebury \textit{et al.} \cite{Pendlebury2004}, using a second order perturbation approach to the classical Bloch equation, to lead to an unwanted frequency shift, linear in the electric field (often called a `false EDM' effect), which can be the largest systematic error in such experiments. Lamoreaux and Golub \cite{Lamoreaux2005b} showed, using a standard density matrix calculation (Redfield theory), that the `false EDM' frequency shift is given, to second order, by certain correlation functions of the fields seen by the moving particles. Barabanov \textit{et al.} \cite{Barabanov2006} gave analytic expressions for the relevant correlation functions for a gas of particles moving in a cylindrical vessel exposed to a magnetic field with a linear gradient along with an electric field. Petukhov \textit{et al.} \cite{Petukhov2010} and Clayton \cite{Clayton2011} showed how to determine the correlation functions for arbitrary geometries and spatial field dependence for cases where the diffusion theory applies, while Swank \textit{et al.} \cite{Swank2012} showed how to calculate the spectra of the relevant correlation functions for gases in rectangular cells in magnetic fields of arbitrary position dependence, even in those cases where the diffusion theory does not apply. Recently Afach \textit{et al.} \cite{Afach2015} measured a frequency shift that is linearly proportional to an applied electric field (false electric dipole moment) for a system consisting of Hg atoms moving in a confined gas exposed to parallel electric and magnetic fields. Pignol and Roccia \cite{Pignol2012a} have initiated a program to search for universal expressions giving general results valid for all geometries and scattering conditions in the gas, and gave such a result for the false EDM effect valid in the nonadiabatic (low frequency) limit. Further steps in this direction were taken by Guigue \textit{et al.} \cite{Guigue2014}, who provided a universal result for frequency shifts induced by inhomogeneous fields in the adiabatic (high frequency) limit and for the relaxation rate ($\Gamma_{1}$) in the case where diffusion theory applies. In this work we extend the search for universal expressions for frequency shifts and relaxation in both the adiabatic and nonadiabatic (high and low Larmor frequency) limits. \section{Frequency shifts and relaxation rates from Redfield theory} We consider the case of a gas of spin-1/2 particles with gyromagnetic ratio $\gamma$ inside a trap, evolving in a slightly inhomogeneous magnetic field $\vec{B}(\vec{r}) = \vec{B}_0 + \vec{b}(\vec{r})$. One can define the holding magnetic field $\vec{B}_0 = B_0\vec{e}_{z}$ and the Larmor precession frequency $\omega_0 = \gamma B_0$. The inhomogeneities $\vec{b}$ can be taken to have $\langle \vec{b} \rangle = \vec0$, where $\langle \cdots \rangle$ represents the ensemble average over all particles in the trap. In addition to this inhomogeneity, the particles can move with a velocity $\vec{v}$ in an electric field $\vec{E}$. 
For simplicity, one can consider that the direction of this electric field is aligned with the holding magnetic field: $\vec{E} = E\vec{e}_z$. These particles will experience an effective motional magnetic field $\vec{E} \times \vec{v}/c^2$. The transverse components of the total magnetic inhomogeneity will then depend on the position and the velocity of the particles in the trap \begin{align} B_x & = b_x - \frac{E}{c^2} v_y, \\ B_y & = b_y + \frac{E}{c^2} v_x. \end{align} These transverse inhomogeneities induce a shift $\delta\omega$ of the precession frequency and a longitudinal relaxation rate $\Gamma_1$. Correct to second order in the perturbation $b$, the frequency shift $\delta\omega$, the longitudinal relaxation rate $\Gamma_{1}$ and the transverse relaxation rate $\Gamma_2$, involving the Fourier spectra of the inhomogeneity correlation functions, are given by the Redfield theory \cite{Slichter1964,Redfield1965,McGregor1990,Guigue2014}: \begin{align} \delta\omega & = \frac{\gamma^2}{2} \left\{ \mathrm{Re}\left[ S_{xy}(\omega_0) - S_{yx}(\omega_0) \right] + \mathrm{Im}\left[ S_{xx}(\omega_0) + S_{yy}(\omega_0)\right] \right\}, \label{eq:dw-Redfield}\\ \Gamma_1 & = \gamma^2\left\{ \mathrm{Re} \left[ S_{xx}(\omega_0) + S_{yy}(\omega_0)\right] + \mathrm{Im}\left[ S_{yx}(\omega_0) - S_{xy}(\omega_0) \right] \right\}, \\ \Gamma_2 & = \frac{\Gamma_1}{2} + \gamma^2 S_{zz}(\omega=0), \end{align} with \begin{equation} S_{ij}(\omega)=\int_0^\infty e^{i\omega\tau}\langle B_i(0)B_j(\tau)\rangle\mathrm{d}\tau. \label{eq:1} \end{equation} This result is valid in cases where the field fluctuations are stationary in the statistical sense and where the measurements are made over a time scale $T\gg\tau_{\mathrm{corr}}$, where the correlation time $\tau_{\mathrm{corr}}$ is the time scale over which the correlation functions go to zero.
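As a numerical illustration of Eq. \eqref{eq:1} (a sketch, not part of the analysis: the exponential correlation model and all parameter values are assumptions), one can evaluate the one-sided spectrum by quadrature for isotropic transverse fields with $\langle B_i(0)B_i(\tau)\rangle = b^2 e^{-\tau/\tau_{\mathrm{corr}}}$ and vanishing cross-correlations; $\Gamma_1$ and $\delta\omega$ then reduce to the familiar Lorentzian forms.

```python
import math

def spectrum(corr, omega, t_max, n):
    """One-sided transform S(omega) = int_0^inf e^{i*omega*tau} C(tau) dtau (trapezoid rule)."""
    dt = t_max / n
    re = im = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0   # trapezoid end-point weights
        re += w * math.cos(omega * t) * corr(t) * dt
        im += w * math.sin(omega * t) * corr(t) * dt
    return re, im

# assumed toy parameters (arbitrary units)
gamma, b2, tau_c, w0 = 2.0, 0.3, 0.5, 4.0
corr = lambda t: b2 * math.exp(-t / tau_c)      # <B_x(0)B_x(tau)> = <B_y(0)B_y(tau)>

re, im = spectrum(corr, w0, t_max=40 * tau_c, n=200_000)
Gamma1 = gamma**2 * 2 * re   # gamma^2 Re[S_xx + S_yy]; Im[S_yx - S_xy] = 0 here
dw     = gamma**2 * im       # (gamma^2/2) Im[S_xx + S_yy]

# analytic Lorentzian results for the exponential model
Gamma1_exact = 2 * gamma**2 * b2 * tau_c / (1 + (w0 * tau_c)**2)
dw_exact     = gamma**2 * b2 * w0 * tau_c**2 / (1 + (w0 * tau_c)**2)
```

Both the rate and the shift agree with the closed forms to quadrature accuracy, which is a convenient way to validate more complicated correlation models before inserting them into expressions of this type.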
In the case of particles evolving in both an inhomogeneous magnetic field and an electric field, the frequency shift $\delta\omega$ and relaxation rate $\Gamma_{1}$ can be decomposed as \begin{align} \delta\omega & = \delta\omega_{B^2} + \delta\omega_{E^2} + \delta\omega_{BE}, \label{eq:dw-decomposition} \\ \Gamma_1 & = \Gamma_{1 (B^2)} + \Gamma_{1 (E^2)} + \Gamma_{1 (BE)}, \end{align} with \begin{eqnarray} \delta \omega_{B^2} & = \frac{\gamma^2}{2} & \mathrm{Im} \int_0^\infty e^{i\omega_0\tau} \langle b_x(0)b_x(\tau)+b_y(0)b_y(\tau)\rangle \mathrm{d}\tau, \label{eq:dw-B2} \\ \delta\omega_{E^2} & = \frac{\gamma^2 E^2}{2c^{4}} & \mathrm{Im} \int_0^\infty e^{i\omega_0\tau} \langle v_x(0)v_x(\tau) + v_y(0)v_y(\tau) \rangle \mathrm{d}\tau, \label{eq:dw-E2} \\ \delta\omega_{BE} & = \frac{\gamma^2E}{c^2} & \mathrm{Re} \int_0^{\infty} e^{i\omega_0\tau} \langle b_x(0)v_x(\tau) + b_y(0)v_y(\tau) \rangle \mathrm{d}\tau, \label{eq:dw-BE} \\ \Gamma_{1 (B^2)} & = \gamma^2 & \mathrm{Re} \int_0^\infty e^{i\omega_0\tau} \langle b_x(0)b_x(\tau)+b_y(0)b_y(\tau) \rangle \mathrm{d}\tau, \label{eq:rel-B2} \\ \Gamma_{1 (E^2)} & = \frac{\gamma^2 E^2}{c^{4}} & \mathrm{Re} \int_0^\infty e^{i\omega_0\tau} \langle v_x(0)v_x(\tau) + v_y(0)v_y(\tau) \rangle \mathrm{d}\tau, \label{eq:rel-E2} \\ \Gamma_{1 (BE)} & = \frac{2 \gamma^2E}{c^2} & \mathrm{Im} \int_0^\infty e^{i\omega_0\tau} \langle b_x(0)v_x(\tau) + b_y(0)v_y(\tau) \rangle \mathrm{d}\tau. \label{eq:rel-BE} \end{eqnarray} These Larmor frequency shifts and relaxation rates cannot be further simplified to a form valid for all values of the holding magnetic field and independent of the particle motion in the trap. However, due to the properties of the Fourier transform, there are universal relations that hold for all types of particle motion and all shapes of trap geometry for values of the Larmor frequency (magnetic field) large and small relative to the inverse transit time of particles across the cell, $v/\lambda$ (see below).
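To make ``large and small relative to the inverse transit time'' concrete, the following sketch classifies a system by comparing $\omega_0$ with $1/\tau_{\mathrm{corr}}\sim v/\lambda$. All numbers (cell scale $\lambda \approx 0.16~\mathrm{m}$, particle speeds, $\omega_0 \approx 48~\mathrm{rad/s}$ at $1~\mathrm{\mu T}$ for $^{199}$Hg) are rough illustrative assumptions, and the helper function is hypothetical rather than anything defined in this paper.

```python
def regime(omega0, tau_corr, kn):
    """Crude classification: adiabatic vs nonadiabatic spin motion,
    ballistic vs diffusive particle motion (kn = Knudsen number l_c / lambda)."""
    motion = "ballistic" if kn > 1 else "diffusive"
    spin = "adiabatic" if omega0 * tau_corr > 1 else "nonadiabatic"
    return motion + " " + spin

lam = 0.16                                 # ~ 4V/S for an nEDM-type cell [m], assumed
print(regime(48.0, lam / 180, kn=10))      # Hg comagnetometer: ballistic nonadiabatic
print(regime(48.0, lam / 3, kn=10))        # UCN: ballistic adiabatic
```

The two outputs reproduce the regime assignments discussed in the next section for a mercury comagnetometer and for ultracold neutrons stored in the same cell.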
\section{Spin dynamics and particle motion regimes} In general two length scales describe the motion of a gas of particles in a cell: (i) the mean free path between particle collisions, denoted $l_c$, and (ii) the mean distance between two points on the wall, which can be evaluated by the Clausius expression $\lambda = 4 V/S$ where $V$ and $S$ are the volume and the surface of the cell. We define the Knudsen number as $Kn=\frac{l_c}{\lambda}$. At high pressure, $Kn \ll 1$: this is the \textit{diffusive regime}, where the propagation of the particles is described by the diffusion equation, characterized by the diffusion coefficient $D$. At low pressure, $Kn \gg 1$: this is the \textit{ballistic regime}, where the particles travel in straight lines across the cell in free molecular flow. The correlation time $\tau_{\mathrm{corr}}$ corresponds to the typical time necessary for a particle to probe the magnetic inhomogeneity. Since one usually has to deal with large scale inhomogeneities, $\tau_{\mathrm{corr}}$ is of the order of the average time between successive collisions with the trap walls. Therefore, it depends on the geometry of the trap and on the properties of the particle motion inside this trap. In the case of a gas at atmospheric pressure, the correlation time is about $1~\mathrm{s}$ for a cubic trap with $10~\mathrm{cm}$ sides. For a rarefied gas confined in the trap, $\tau_{\mathrm{corr}}$ is approximately equal to $1~\mathrm{ms}$. The inverse of this time scale can be compared with the Larmor precession frequency $\omega_0$. The limit where $\omega_0$ is much bigger than $1/\tau_{\mathrm{corr}}$ is called the \textit{adiabatic regime}. This regime can be interpreted as the particle spins following the local magnetic field. It also applies when the particles are moving slowly in the trap or if they encounter a great number of collisions with other particles between two collisions with the walls. In contrast, the regime is called \textit{nonadiabatic} if $\omega_0 \tau_{\mathrm{corr}} \ll 1$.
This limit physically appears when the particles are able to probe the whole magnetic inhomogeneity within times shorter than a Larmor period. It is also sometimes referred to as the regime of \textit{motional narrowing}. This phenomenon can be observed in systems immersed in very weak magnetic fields or if the thermal particles are in a ballistic regime in a small container. For a given trap geometry, these regimes depend on the pressure of the spin gas and on the holding magnetic field. Fig. \ref{fig:classification-regime} shows this classification as a function of pressure and holding field for a $^3$He gas contained in a spherical cell with $5~\mathrm{cm}$ radius. The \textit{super-adiabatic} regime corresponds to the situation where the gas is in a diffusive regime and the spin motion is adiabatic between two interparticle collisions. In this case, we have the condition $\omega _0\tau _{\rm{coll}}\gg 1$, with $\tau_{\rm{coll}}$ the time between two interparticle collisions. The correlation functions calculated in \cite{Swank2012} are valid in this region, as is Eq. (12) in \cite{McGregor1990}. \begin{figure} \centering\includegraphics[width=0.50\textwidth]{classification-regime.pdf} \caption{Classification of the different regimes for a $R=5$~cm radius spherical cell filled with polarized $^3$He gas as a function of pressure and holding magnetic field.} \label{fig:classification-regime}% \end{figure} To illustrate this classification, let us consider some realistic systems of particular relevance for our study. The cylindrical RAL/Sussex/ILL trap \cite{Pendlebury2004}, used to measure the neutron EDM, contains ultracold neutrons (UCN) and a mercury comagnetometer, immersed in a $1~\mathrm{\mu T}$ holding magnetic field. The small number of particles of each species and the size of the trap ($47~\mathrm{cm}$ diameter and $12~\mathrm{cm}$ height) lead to a large Knudsen number and so to the ballistic regime.
However, the speeds of the two particle species are very different: while the mercury atoms are at thermal equilibrium and have an average speed of several hundred meters per second, the UCN are moving at a few meters per second. Therefore the mercury comagnetometer is in the ballistic nonadiabatic regime whereas the UCN are in the ballistic adiabatic limit. In the case of a gas at atmospheric pressure, such as a polarized $^3$He gas \cite{Petukhov2010}, in a holding field of several $\mu\mathrm{T}$, the particle motion follows the diffusion equation and the number of Larmor precessions performed by the spins between two collisions with the walls is very high. Such a system is thus in the diffusive adiabatic regime. As shown in \cite{Pignol2012a,Guigue2014}, the leading order of the frequency shifts (\ref{eq:dw-B2}), (\ref{eq:dw-E2}) and (\ref{eq:dw-BE}) in the adiabatic and nonadiabatic regimes can be expressed in powers of $\omega_0 \tau_{\mathrm{corr}}$ or $1/\omega_0 \tau_{\mathrm{corr}}$. To do so, we apply a succession of integrations by parts to the integrals defining the frequency shifts. This is the purpose of the next two sections: the first presents the simplified expression of the frequency shift in the adiabatic regime, and the nonadiabatic regime is considered in the section that follows. The well-known case of uniform magnetic gradients will be discussed in Section VI. \section{Adiabatic regime: high magnetic field or slow particles, arbitrary fields} The adiabatic regime corresponds to systems which satisfy $\omega_0\gg 1/ \tau_{\mathrm{corr}}$. We want to expand the frequency shift (\ref{eq:dw-decomposition}) in a power series in $1/\omega_0$.
One way to obtain such an expression consists in applying several integrations by parts \begin{equation} \int_0^\infty \sin(\omega_0 \tau) f(\tau) \mathrm{d}\tau = \left[ \frac{-\cos(\omega_0\tau)}{\omega_0}f(\tau) \right]_0^\infty + \frac{1}{\omega_0}\int_0^\infty \cos(\omega_0\tau) \der{f}{\tau}(\tau)\mathrm{d}\tau, \label{eq:IPPcos} \end{equation} \begin{equation} \int_0^{\infty}\cos(\omega_0\tau)f(\tau) \mathrm{d}\tau = \left[ \frac {\sin(\omega_0\tau)}{\omega_0}f(\tau)\right] _0^\infty - \frac{1}{\omega_0}\int_0^{\infty}\sin(\omega_0\tau) \der{f}{\tau}(\tau)\mathrm{d}\tau. \label{eq:IPPsin} \end{equation} Equations \eqref{eq:IPPcos} and \eqref{eq:IPPsin} assume that the function $f$ and its derivative are integrable. We denote $\dot{f}=\der{f}{\tau}$. Using the fact that the correlation functions go to zero at infinite time, we can write \begin{eqnarray} \delta \omega_{B^2} & = & \frac{\gamma^2}{2\omega_0} \langle b_x^2+b_y^2 \rangle - \frac{\gamma^2}{2\omega_0^3} \langle b_x(0)\ddot{b}_x(0)+b_y(0)\ddot{b}_y(0) \rangle \nonumber \\ & - & \frac{\gamma^2}{2\omega_0^{3}} \int_0^\infty \cos(\omega_0\tau)\langle b_x(0)\dddot{b}_x(\tau)+b_y(0)\dddot{b}_y(\tau)\rangle \mathrm{d}\tau, \label{eq:dw-B2-int} \end{eqnarray} \begin{eqnarray} \delta\omega_{E^2} & = & \frac{\gamma^2 E^2}{2c^{4}\omega_0}\langle v_x^2 + v_y^2 \rangle \nonumber \\ & + & \frac{\gamma^2 E^2}{2c^{4}\omega_0} \int_0^\infty \cos(\omega_0\tau) \langle v_x(0)\dot{v}_x(\tau)+v_y(0)\dot{v}_y(\tau) \rangle \mathrm{d}\tau, \label{eq:dw-E2-int} \end{eqnarray} \begin{eqnarray} \delta \omega_{BE} & = & -\frac{\gamma^2E}{c^2\omega_0^2}\langle b_x(0)\dot{v}_x(0) + b_y(0)\dot{v}_y(0) \rangle \nonumber \\ & - & \frac{\gamma^2E}{c^2\omega_0^2}\int_0^\infty\cos(\omega_0\tau) \langle b_x(0)\ddot{v}_x(\tau)+b_y(0)\ddot{v}_y(\tau)\rangle\mathrm{d}\tau.
\label{eq:dw-BE-int} \end{eqnarray} Making the reasonable assumption that the correlation functions and their derivatives are continuously decaying to 0 for $\tau\rightarrow\infty$, we apply the Riemann-Lebesgue lemma \cite{Appel} and arrive at the conclusion that the last terms in Eq. \eqref{eq:dw-B2-int}, \eqref{eq:dw-E2-int}, \eqref{eq:dw-BE-int} go to zero faster than the other terms in each equation. The frequency shift expressions can be written as (see also Eq. \eqref{eq:Sij-dev}, \eqref{bg1}, \eqref{bg2}): \begin{equation} \delta \omega_{B^2} = \frac{\gamma^2}{2\omega_0} \langle b_x^2 + b_y^2 \rangle - \frac{\gamma^2}{2\omega_0^{3}}\langle b_x(0)\ddot{b}_x(0) + b_y(0) \ddot{b}_y(0) \rangle + O \left( 1/(\omega_0 \tau_{\mathrm{corr}})^5\right), \label{eq:dw-B2-int2a} \end{equation} \begin{equation} \delta\omega_{E^2} = \frac{\gamma^2 E^2}{2c^{4}\omega_0} \left\langle v_x^2+v_y^2\right\rangle +O\left(1/(\omega_0 \tau_{\mathrm{corr}})^3\right), \label{eq:dw-E2-int2a} \end{equation} \begin{equation} \delta\omega_{BE} = -\frac{\gamma^2E}{c^2\omega_0^2}\langle b_x(0)\dot{v}_x(0) + b_y(0)\dot{v}_y(0) \rangle + O\left( 1/(\omega_0\tau_{\mathrm{corr}})^4 \right).
\label{eq:dw-BE-int2a} \end{equation} Using the expressions for the derivatives of the correlation functions presented in the Appendix and assuming that velocities in different directions are uncorrelated and $\langle v_x^2\rangle=\langle v_y^2\rangle=\langle v_{z}^2\rangle=\frac{1}{3}\langle v^2\rangle$, we obtain \begin{eqnarray} \delta\omega_{B^2}&=&\frac{\gamma^2}{2\omega_0}\langle b_x^2+b_y^2\rangle + \frac{\gamma^2}{6 \omega_0^{3}}\langle v^2 \rangle \langle \vert\vec{\nabla}b_x\vert ^2 + \vert\vec{\nabla}b_y\vert ^2\rangle +O\left( 1/(\omega_0\tau_{\mathrm{corr}})^5 \right), \label{eq:dw-B2-int2} \end{eqnarray} \begin{equation} \delta\omega_{E^2}=\frac{\gamma^2E^2}{3c^{4}\omega_0}\langle v^2\rangle+O\left( 1/(\omega_0\tau_{\mathrm{corr}})^3\right), \label{eq:dw-E2-int2} \end{equation} \begin{equation} \delta\omega_{BE}=\frac{\gamma^2E}{c^2\omega_0^2}\left(\langle\frac{\partial b_x}{\partial x}\,v_x^2 \rangle + \langle\frac{\partial b_y}{\partial y}\,v_y^2 \rangle\right) +O\left( 1/(\omega_0\tau_{\mathrm{corr}})^4 \right). \label{eq:dw-BE-int2} \end{equation} These results are presented in Table \ref{expressions}. The first term in Eq. \eqref{eq:dw-B2-int2} corresponds to the leading order of the frequency shift in the adiabatic regime \cite{Guigue2014}. It is instructive to note that these results can be obtained in another way. One can rewrite Eq.
\eqref{eq:1} \begin{equation} S_{ij}(\omega)=\int_0^\infty e^{i\omega\tau}\langle B_{i}(0)B_{j}(\tau) \rangle \mathrm{d}\tau = \int_0^\infty e^{i\omega\tau} f(\tau) \mathrm{d}\tau \label{eq:def-Sij} \end{equation} and expand $f(\tau)$ in a Taylor series \begin{eqnarray} \nonumber S_{ij}(\omega) & = & \int_0^\infty\left( f(0) + \dot{f}(0) \tau + \cdots + f^{(n)}(0) \frac{\tau^{n}}{n!} \right) e^{i\omega\tau}\mathrm{d}\tau \\ \nonumber & = & \left( f(0) + \dot{f}(0) \frac{\partial}{\partial (i\omega) } + \cdots + \frac{f^{(n)}(0)}{n!} \frac{\partial^{n}}{\partial (i\omega)^{n}}\right) \int_0^{\infty}e^{i\omega \tau}\mathrm{d}\tau \\ & = & f(0) \frac{i}{\omega} - \dot{f}(0) \frac{1}{\omega^2} + \ddot{f}(0) \frac{1}{i\omega^{3}} + \cdots + f^{(n)}(0) \frac{(-1)^n}{\omega^{n+1}i^{n-1}}. \label{eq:Sij-dev} \end{eqnarray} Taking the real part (that is, the relaxation rate for the $B^2,E^2$ terms and the frequency shift for the $EB$ term), we obtain \begin{equation} \mathrm{Re} \left[ S_{ij}(\omega) \right] = -\dot{f}(0) \frac{1}{\omega^2} + \frac{\dddot{f}(0)}{\omega^{4}}+\cdots, \label{bg1} \end{equation} which is equivalent to Eq. (14) of \cite{Guigue2014} and to Eq. (\ref{eq:dw-BE-int}) above, but now we have another form of the correction term and we can use this to calculate the frequency range where the first term is a good approximation. For the imaginary part ($B^2,E^2$ frequency shifts and $EB$ relaxation) we find \begin{equation} \mathrm{Im} \left[ S_{ij}(\omega) \right] = \frac{f(0)}{\omega} - \frac{\ddot{f}(0)}{\omega^{3}} + \cdots , \label{bg2} \end{equation} which is equivalent to Eq. (17) of the same paper and to Eq. \eqref{eq:dw-B2-int2a} above.
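The truncations \eqref{bg1} and \eqref{bg2} are easy to probe numerically. The toy check below (an assumed Gaussian correlation $f(\tau)=e^{-\tau^{2}/2}$, for which $f(0)=1$ and $\ddot{f}(0)=-1$; this model is an illustration, not one used in the paper) verifies that including the $-\ddot{f}(0)/\omega^{3}$ term improves on the leading $f(0)/\omega$ estimate of $\mathrm{Im}\,S$:

```python
import math

def im_S(f, omega, t_max=12.0, n=120_000):
    """Im S(omega) = int_0^inf sin(omega*tau) f(tau) dtau (trapezoid rule)."""
    dt = t_max / n
    s = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.sin(omega * t) * f(t) * dt
    return s

f = lambda t: math.exp(-t * t / 2)        # f(0) = 1, f''(0) = -1
for w0 in (6.0, 12.0):
    exact    = im_S(f, w0)
    one_term = 1 / w0                     # f(0)/omega
    two_term = 1 / w0 + 1 / w0**3         # f(0)/omega - f''(0)/omega^3
    assert abs(exact - two_term) < abs(exact - one_term)
```

The residual error of the two-term form scales as $1/\omega^{5}$, consistent with the next term of the expansion.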
In addition we find, \begin{align} \Gamma_{1 (B^2)} & = \gamma^2\left[ -\frac{1}{\omega_0^2}\langle b_x(0)\dot{b}_x(0)+b_y(0)\dot{b}_y(0)\rangle+\frac{1}{\omega_0^{4}}\langle b_x(0)\dddot{b}_x(0)+b_y(0)\dddot{b}_y(0)\rangle\right] , \label{bg5}\\ \Gamma_{1 (E^2)} & = \frac{\gamma^2E^2}{c^{4}}\left[-\frac{1}{\omega_0^2}\langle v_x(0)\dot{v}_x(0)+v_y(0)\dot{v}_y(0)\rangle+\frac{1}{\omega_0^{4}} \langle v_x(0)\dddot{v}_x(0)+v_y(0)\dddot{v}_y(0)\rangle\right], \\ \Gamma_{1 (BE) } & = \frac{2 \gamma^2 E}{c^2} \left[ \frac{1}{\omega_0}\langle b_x(0)v_x(0) + b_y(0)v_y(0)\rangle - \frac{1}{\omega_0^{3}}\langle b_x(0)\ddot{v}_x(0)+b_y(0)\ddot{v}_y(0)\rangle\right]. \end{align} Using the results presented in the Appendix, we can write \begin{align} \Gamma_{1 (B^2)} & = -\frac{\gamma^2} {2\omega_0^2} \langle \vec{v}\cdot \vec{\nabla} \left( b_x^2 + b_y^2\right)\rangle +O\left( 1/\left( \omega_0\tau_{\mathrm{corr}}\right) ^{4}\right) , \label{eq:Gamma1_B2int} \\ \Gamma_{1 (E^2)} & = -\frac{\gamma^2 E^2}{\omega_0^2c^4}\langle v_x\dot{v}_x+v_y \dot{v}_y\rangle +O\left( 1/(\omega_0\tau_{\mathrm{corr}})^4 \right), \\ \Gamma_{1 (BE) } & = \frac{2\gamma^2E}{\omega_0c^2}\langle b_xv_x+b_yv_y\rangle +O\left( 1/(\omega_0\tau_{\mathrm{corr}})^3 \right) \label{eq:Gamma1_BEint} \end{align} We expressed the correlation function derivatives in (\ref{eq:dw-B2-int2}), (\ref{eq:dw-E2-int2}), (\ref{eq:dw-BE-int2}), \eqref{eq:Gamma1_B2int} and \eqref{eq:Gamma1_BEint} as volume averages of the velocity and magnetic field, in the adiabatic limit. These expressions are therefore independent of the particle motion in the cell. The high frequency limits for $\delta \omega_{B^2}$, $\delta\omega_{E^2}$, $\Gamma_{1 (B^2)}$ and $\Gamma_{1 (BE)}$ are universal. The first term in \eqref{eq:Gamma1_B2int} behaves as $1/\omega_0^2$ and has been calculated in \cite{Guigue2014} in the diffusive adiabatic regime.
However the diffusion theory breaks down at times shorter than the collision time $\tau_{\rm{coll}}$, so that at high frequencies ($\omega_0\tau_{\rm{coll}} \gg 1$) the spectrum deviates from that expected on the basis of diffusion theory. The high frequency (super-adiabatic) limit is correctly given by \cite{Swank2012}, which uses a correlation function that is valid for all times: the first term in \eqref{eq:Gamma1_B2int} goes to zero because the velocity is initially uncorrelated with position, and the very high frequency behavior goes as $1/\omega_0^{4}$. The result of \cite{Swank2012} shows how the behavior goes from the $\sim1/\omega_0^2$ predicted by diffusion theory at high frequencies to $\sim1/\omega_0^4$ as $\omega_0\tau_{\rm{coll}}$ becomes of the order of or greater than 1. \section{Nonadiabatic regime: weak magnetic field or fast particles, arbitrary fields} We now consider the nonadiabatic limit $\omega_0 \tau_{\mathrm{corr}} \ll 1$. To expand the frequency shift expressions in powers of $\omega_0$, we simply apply the same procedure of recursive integrations by parts, changing the part that is integrated: \begin{equation} \int_0^\infty\sin(\omega_0\tau)f(\tau) \mathrm{d}\tau = \left[ \sin(\omega_0\tau)\int_0^\tau f(t)\mathrm{d}t\right]_0^\infty - \omega_0 \int_0^\infty \cos(\omega_0\tau) \int_0^\tau f(t)\mathrm{d}t \mathrm{d}\tau, \label{eq:c} \end{equation} \begin{equation} \int_0^\infty\cos(\omega_0\tau)f(\tau)\mathrm{d}\tau=\left[ \cos (\omega_0\tau)\int_0^\tau f(t)\mathrm{d}t\right]_0^\infty +\omega_0\int_0^\infty\sin(\omega_0\tau)\int_0^\tau f(t)\mathrm{d}t\mathrm{d}\tau. \end{equation} When applying these relations to Eqs.
(\ref{eq:dw-B2}), (\ref{eq:dw-E2}) and (\ref{eq:dw-BE}), we obtain: \begin{align} \delta\omega_{E^2}= & -\frac{\gamma^2E^2}{2c^{4}}\omega_0\langle x^2+y^2\rangle\nonumber\\ & +\frac{\gamma^2E^2}{2c^{4}}\omega_0^2\int_0^{\infty}\sin (\omega_0\tau)\langle x(0)x(\tau)+y(0)y(\tau)\rangle d\tau \label{eq:dw-E2-int4}% \end{align}% \begin{align} \delta\omega_{BE}= & -\frac{\gamma^2E}{c^2}\langle b_x x + b_y y\rangle\nonumber\\ & +\frac{\gamma^2E}{c^2}\omega_0\int_0^{\infty}\sin(\omega_0 \tau) \langle b_x(0) x(\tau)+b_y(0) y(\tau)\rangle d\tau. \label{eq:dw-BE-int4}% \end{align} One can see that the last terms on the right-hand sides of Eqs. (\ref{eq:dw-E2-int4}) and (\ref{eq:dw-BE-int4}), as well as Eq. (\ref{eq:dw-B2}), involve Fourier transforms of correlation functions which depend exclusively on position. Since we are in the limit $\omega_0\tau_{\mathrm{corr}} \ll 1$ we need to consider only the first order expansion of the involved trigonometric functions: $\sin(\omega_0\tau)\approx\omega_0\tau$. (For times $\tau \gtrsim \tau_{\mathrm{corr}}$ the correlation function goes to zero.) Applying this method to Eqs. (\ref{eq:dw-B2}), (\ref{eq:dw-E2-int4}) and (\ref{eq:dw-BE-int4}), we obtain \begin{equation} \delta\omega_{B^2}\approx\frac{\gamma^2}{2}\omega_0\int_0^{\infty} \tau\langle b_x(0)b_x(\tau)+b_y(0)b_y(\tau)\rangle\mathrm{d}\tau \label{eq:dw-B2-int5} \end{equation}% \begin{equation} \delta\omega_{E^2}\approx-\frac{\gamma^2E^2}{2c^{4}}\omega_0\langle x^2+y^2\rangle+\frac{\gamma^2E^2}{2c^{4}}\omega_0^{3}\int _0^{\infty}\tau\langle x(0)x(\tau)+y(0)y(\tau)\rangle\mathrm{d}\tau \label{eq:dw-E2-int5} \end{equation} \begin{align} \delta\omega_{BE} \approx & -\frac{\gamma^2E}{c^2}\langle b_x x+b_y y\rangle \label{eq:dw-BE-int5a} \\ & + \frac{\gamma^2E}{c^2}\omega_0^2\int_0^{\infty}\tau\langle b_x(0)x(\tau)+b_y(0)y(\tau)\rangle\mathrm{d}\tau. \nonumber \end{align} The last terms in Eqs.
(\ref{eq:dw-E2-int5}) and (\ref{eq:dw-BE-int5a}) cannot be calculated for an arbitrary trap geometry. But one can see that they behave as $\omega_0^2 \tau_{\mathrm{corr}}^2$ when $\omega_0 \tau_{\mathrm{corr}}$ goes to zero. This means that the expressions of the frequency shifts are dominated by the first term on the right hand side. These results are presented in Table \ref{expressions}. Similarly, since $\cos \omega _0 \tau \approx 1$, Eqs. \eqref{eq:rel-B2}, \eqref{eq:rel-E2} and \eqref{eq:rel-BE} become \begin{align} \Gamma_{1 (B^2)} & = \gamma^2\int_0^{\infty}\langle b_x(0)b_x(\tau)+b_y(0)b_y(\tau)\rangle \mathrm{d}\tau, \label{bg1a} \\ \Gamma_{1 (E^2)} & = \frac{\gamma^2E^2}{c^4} \omega_0^2 \int_0^\infty \langle x(0)x(\tau) + y(0)y(\tau) \rangle\mathrm{d}\tau, \label{bg1b} \\ \Gamma_{1 (BE) } & = -\frac{2\gamma^2 E}{c^2} \omega_0 \int_0^\infty \langle b_x(0)x(\tau)+b_y(0)y(\tau) \rangle \mathrm{d}\tau, \label{bg1c} \end{align} from which the low frequency limits follow immediately.
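As a cross-check of the leading nonadiabatic expressions (for instance Eqs. \eqref{eq:dw-B2-int5} and \eqref{bg1a}), the sketch below compares the exact one-sided integrals with their $\omega_0\tau_{\mathrm{corr}}\ll1$ limits for an assumed exponential correlation function (a toy model with $\gamma=1$; not a case treated in the paper):

```python
import math

def shift_and_rate(corr, omega0, t_max, n):
    """delta_omega_B2 = (1/2) int sin(w0 t) C(t) dt and Gamma_1 = int cos(w0 t) C(t) dt
    for gamma = 1, where C(t) = <b_x(0)b_x(t) + b_y(0)b_y(t)> (trapezoid rule)."""
    dt = t_max / n
    s = c = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.sin(omega0 * t) * corr(t) * dt
        c += w * math.cos(omega0 * t) * corr(t) * dt
    return 0.5 * s, c

b2, tau_c, w0 = 1.0, 1.0, 0.05              # w0 * tau_corr = 0.05 << 1
corr = lambda t: 2 * b2 * math.exp(-t / tau_c)

dw, g1 = shift_and_rate(corr, w0, t_max=40.0, n=200_000)
dw_lim = 0.5 * w0 * 2 * b2 * tau_c**2       # (1/2) w0 int tau C(tau) dtau
g1_lim = 2 * b2 * tau_c                     # int C(tau) dtau
```

Both limits agree with the exact integrals to $O\left((\omega_0\tau_{\mathrm{corr}})^2\right)$, here a fraction of a percent.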
\begin{table} \center \begin{tabular}[c]{l|l|l} Frequency & Adiabatic & Nonadiabatic\\ shift & (UCNs) & (Hg)\\\hline\hline & & \\[1pt] $\delta\omega_{B^2}$ & $\frac{\gamma^2}{2\omega_0}\langle b_x^2+b_y^2\rangle + \frac{\gamma^2}{6\omega_0^{3}}\langle v^2\rangle \langle \vert\vec{\nabla}b_x\vert^2 + \vert\vec{\nabla}b_y\vert^2 \rangle $ & $\frac{\gamma^2}{2}\omega_0\int_0^{\infty}\tau\langle b_x(0) b_x(\tau)+b_y(0)b_y(\tau)\rangle\mathrm{d}\tau$\\ & & \\[1pt] $\delta\omega_{E^2}$ & $\frac{\gamma^2E^2}{3c^{4}\omega_0}\langle v^2\rangle$ & $-\frac{\gamma^2 E^2}{2 c^{4}} \omega_0 \langle x^2+y^2\rangle$\\ & & \\[1pt] $\delta\omega_{BE}$ & $\frac{\gamma^2E}{c^2\omega_0^2}\left( \langle\frac{\partial b_x}{\partial x} \, v_x^2\rangle+ \langle \frac{\partial b_y}{\partial y} \, v_y^2\rangle\right) $ & $-\frac{\gamma^2E}{c^2}\langle b_x x+b_y y\rangle$\\ \end{tabular} \caption{Expressions of the leading terms of the frequency shifts induced by the transverse magnetic and motional fields, in the adiabatic and nonadiabatic limits.} \label{expressions} \end{table} \section{\label{Sec6}Magnetic field linearly dependent on position (uniform gradients)} This case is what is usually treated theoretically, and it applies to most experimental situations; here the derivatives of the correlation functions can be expressed in terms of trajectory correlation functions without any evolution equation, and a variety of relationships can be derived. Let us consider a magnetic inhomogeneity $\vec{b}$ depending linearly on the position of a spin: \begin{align} b_x & =G_x x,\\ b_y & =G_y y, \label{eq:uniformGradients} \end{align} where the relation $G_x+G_y=-\frac{\partial b_z}{\partial z} = -G_z$ holds because the magnetic field is divergence free.
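A toy model makes the two $\delta\omega_{BE}$ limits of Table \ref{expressions} concrete and exhibits the well-known sign reversal of the linear-in-$E$ shift between the two regimes \cite{Pendlebury2004}. Purely for illustration, assume a single transverse gradient $b_x = G x$ with $G=\gamma=E/c^2=1$ and a bouncing-particle position correlation $\langle x(0)x(\tau)\rangle = \cos(\omega_r\tau)\,e^{-\epsilon\tau}$ (so $\langle v_x^2\rangle\approx\omega_r^2$); none of these choices come from the paper.

```python
import math

w_r, eps = 1.0, 0.05      # assumed wall-bounce frequency and weak damping

def xv_corr(t):
    """<x(0)v_x(t)> = d/dt <x(0)x(t)> for the assumed correlation."""
    return (-w_r * math.sin(w_r * t) - eps * math.cos(w_r * t)) * math.exp(-eps * t)

def dw_BE(omega0, t_max=400.0, n=400_000):
    """delta_omega_BE = Re int_0^inf e^{i w0 t} <b_x(0)v_x(t)> dt  (G = 1 units)."""
    dt = t_max / n
    s = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.cos(omega0 * t) * xv_corr(t) * dt
    return s

low, high = dw_BE(0.1), dw_BE(10.0)   # nonadiabatic vs adiabatic
```

One finds $\mathrm{low}\approx-\langle b_x x\rangle=-1$ and $\mathrm{high}\approx+\langle \partial_x b_x\, v_x^2\rangle/\omega_0^2=+10^{-2}$, with opposite signs, as expected from the crossing of the resonance at $\omega_0\simeq\omega_r$.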
\subsection{General relations for fields with linear gradients and cylindrical symmetry} \subsubsection{Expressions relating frequency shifts with frequency shifts} In the common case of cylindrical field symmetry (with arbitrary cell shape), the correlation functions of interest are $\langle v_x(0)v_x(\tau)+v_y(0)v_y(\tau)\rangle,$ $\langle x(0)v_x(\tau)+y(0)v_y(\tau)\rangle,$ $\langle x(0)x(\tau)+y(0)y(\tau)\rangle$, which satisfy the relations \begin{align} \langle x(0)v_x(\tau)+y(0)v_y(\tau) \rangle & = \frac{d}{d\tau} \langle x(0)x(\tau)+y(0)y(\tau)\rangle, \\ \langle v_x(0)v_x(\tau)+v_y(0)v_y(\tau) \rangle & = -\frac{d}{d\tau} \langle x(0)v_x(\tau)+y(0) v_y(\tau)\rangle = -\frac{d^2}{d\tau^2}\langle x(0)x(\tau)+y(0)y(\tau)\rangle. \end{align} Then, using $\langle x v_x + y v_y\rangle=0$ in the stationary state, \begin{align} S_{xv}\left( \omega \right) & = -i\omega S_{xx}\left( \omega \right) - \left\langle x^2+y^2\right\rangle, \\ S_{vv}\left( \omega \right) & = i\omega S_{xv}\left( \omega \right). \end{align} According to Eqs. (\ref{eq:dw-B2}, \ref{eq:dw-E2}, \ref{eq:dw-BE}), and using $G_x=G_y=-G_z/2$, we find \begin{equation} \delta\omega_{BE} = -K_{BE} \mathrm{Re} \left( S_{xv} (\omega ) \right) = -K_{BE} \mathrm{Im} \left( S_{vv}(\omega)/\omega\right) = -\frac{K_{BE}}{K_{E^2}}\frac{\delta\omega_{E^2}}{\omega}, \end{equation} with $K_{BE}=\frac{\gamma^2G_z E}{2c^2}$, $K_{B^2}=\frac{\gamma^2 G_z^2}{8}$, $K_{E^2}=\frac{\gamma^2E^2}{2c^4}$. The last expression was obtained in \cite{Pendlebury2004} for the case of particles moving in a cylinder with specular reflecting walls and no gas collisions, but our result holds for any cell shape and type of particle motion.
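The relation $\delta\omega_{BE}=-(K_{BE}/K_{E^2})\,\delta\omega_{E^2}/\omega$ can be checked numerically for any assumed position correlation. In the sketch below (an illustration only: a damped oscillatory correlation $\langle x(0)x(\tau)\rangle=\langle y(0)y(\tau)\rangle=\cos(\omega_r\tau)e^{-\epsilon\tau}$, with $\gamma=E=c=G_z=1$ so that $G_x=G_y=-1/2$ and $K_{BE}=K_{E^2}=1/2$), the two shifts are computed independently from $\langle x(0)v_x(\tau)\rangle$ and $\langle v_x(0)v_x(\tau)\rangle$:

```python
import math

w_r, eps = 1.0, 0.05   # assumed model parameters

def c_dot(t):
    """d/dt <x(0)x(t)> = <x(0)v_x(t)>."""
    return (-w_r * math.sin(w_r * t) - eps * math.cos(w_r * t)) * math.exp(-eps * t)

def c_ddot(t):
    """d^2/dt^2 <x(0)x(t)> = -<v_x(0)v_x(t)>."""
    return ((eps**2 - w_r**2) * math.cos(w_r * t)
            + 2 * eps * w_r * math.sin(w_r * t)) * math.exp(-eps * t)

def osc_int(f, trig, omega, t_max=400.0, n=400_000):
    """int_0^inf trig(omega*t) f(t) dt by the trapezoid rule."""
    dt = t_max / n
    s = 0.0
    for k in range(n + 1):
        t = k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * trig(omega * t) * f(t) * dt
    return s

omega = 0.7
dw_BE = -osc_int(c_dot, math.cos, omega)                 # (G_x + G_y) Re S_xv = -Re S_xv
dw_E2 = osc_int(lambda t: -c_ddot(t), math.sin, omega)   # Im S_vv
```

Within quadrature accuracy $\delta\omega_{BE}+\delta\omega_{E^2}/\omega$ vanishes, independently of the values of $\omega_r$ and $\epsilon$.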
Equation (\ref{eq:dw-BE-int4}) can be written as \begin{align} \delta\omega_{BE} & = \frac{\gamma^2E G_z}{2 c^2} \langle x^2+y^2 \rangle - \frac{\gamma^2E G_z}{2 c^2}\omega_0 \int_0^{\infty}\sin(\omega_0\tau)\langle x(0)x(\tau)+y(0)y(\tau)\rangle d\tau \\ & = \frac{\gamma^2E G_z}{2c^2}\langle x^2+y^2\rangle-\frac{4 E \omega_0}{c^2G_z}\delta\omega_{B^2}, \label{bg6} \end{align} so that Eq. (\ref{bg6}) represents a method of measuring the shift linear in $E$ without applying an electric field. To do this one would apply a known constant gradient and look for a frequency shift dependent on the square of the gradient. The possibility of the volume averaged field being changed by the application of the gradient can be accounted for by taking only the part of the shift proportional to the square of the gradient. Another method would be to measure the relaxation rate due to the application of the gradient, as discussed in the next paragraph. \subsubsection{Expressions relating frequency shifts and relaxation rates} As the correlation functions defined in Eqs.
(\ref{eq:dw-B2} - \ref{eq:rel-BE}) are all causal, that is, they vanish for $\tau<0$, their real and imaginary parts are related by a dispersion relation \cite{Papoulis} and we can write \begin{align} \delta\omega_{BE} & = K_{BE}\left[ \langle x^2+y^2\rangle-\omega \mathrm{Im}\left[ S_{xx}\left( \omega\right) \right] \right] \\ & = K_{BE}\left[ \langle x^2+y^2\rangle -\frac{\omega}{\pi}\int_{-\infty}^{\infty}\frac{\mathrm{Re}\left[ S_{xx}\left( \omega^{\prime}\right) \right] }{\omega-\omega^{\prime}}d\omega^\prime \right] \\ & = K_{BE}\left[ \langle x^2+y^2\rangle -\frac{\omega }{\pi}\frac{1}{2 K_{B^2}}\int_{-\infty }^{\infty }\frac{\Gamma _{1\left( B^{2}\right) }\left( \omega^{\prime}\right) }{\omega -\omega ^\prime} d\omega^\prime \right] \\ & = K_{BE}\left[ \langle x^2+y^2\rangle -\frac{1}{K_{B^2}}\frac{\omega^2}{\pi}\int_{0}^{\infty}\frac{\Gamma_{1\left( B^2\right)}\left( \omega^{\prime}\right) }{\omega^2-\omega^{\prime2}}d\omega^\prime \right] \label{bg3} \\ & =K_{BE}\left[ \langle x^2+y^2\rangle -\frac{1}{K_{E^2}}\frac{\omega^2}{\pi}\int_{0}^{\infty}\frac{\Gamma_{1\left( E^2\right)}\left( \omega^{\prime}\right) }{\left( \omega^2-\omega^{\prime2}\right) \omega^{\prime2}}d\omega^\prime \right]. \label{bg4} \end{align} Equations (\ref{bg3}) and (\ref{bg4}) are particularly interesting because they allow measurement, without application of an electric field, of the frequency dependence of $\delta\omega_{BE}$, the shift linear in $E$ that produces a serious systematic error in searches for particle electric dipole moments. By applying a gradient, $\frac{\partial b_{z}}{\partial z},$ larger than any existing gradients and measuring $\Gamma_{1\left( B^2\right) }\left(\omega\right)$ one can reconstruct the frequency dependence of $\delta \omega_{BE}$. For the case of a non-cylindrically symmetric cell we can apply relatively large gradients $\partial b_{x,y}/\partial x,y$ and thus measure, separately, the spectra of the correlation functions in the two directions.
While according to (\ref{bg3}) we need to know the relaxation for all frequencies, the necessary range of measurement is limited because the known high and low frequency limits are reached rather quickly (\ref{bg5},\ref{bg1a}). Substituting \eqref{eq:rel-E2} into \eqref{bg4} we obtain a form of the relation that has been obtained by another method in \cite{Lamoreaux2005b}. \subsubsection{Expressions relating relaxation rates with relaxation rates.} For completeness we give relations between the relaxation rates, which are obtained in a similar way: \begin{equation} \Gamma_{1 (E^2) }=\frac{K_{E^2}}{K_{B^2}}\omega^2 \Gamma_{1 (B^2) }=\frac{K_{E^2}}{K_{BE}}\omega\Gamma_{1 (BE)}. \end{equation} The relaxation caused by the electric field alone, $\Gamma_{1 (E^2)},$ has been discussed in \cite{Schmid2008}. \subsection{Adiabatic regime: high magnetic field or slow particles for fields with uniform gradients} We specify Eqs. \eqref{eq:dw-B2-int2}, \eqref{eq:dw-E2-int2}, \eqref{eq:dw-BE-int2} to the case of a uniform gradient: \begin{equation} \delta\omega_{B^2}=\frac{\gamma^2}{2\omega_0}\langle b_x^2+b_y^2\rangle+\frac{\gamma^2}{2\omega_0^{3}}\left\{ G_x^2\langle v_x^2\rangle+G_y^2\langle v_y^2\rangle\right\} +O(1/\omega_0^5\tau_{\mathrm{corr}}^5) \label{eq:dw-B2-int3} \end{equation} \begin{equation} \delta\omega_{E^2}=\frac{\gamma^2E^2}{2c^{4}\omega_0}\langle v_x^2+v_y^2\rangle+O(1/\omega_0^3\tau_{\mathrm{corr}}^3) \label{eq:dw-E2-int3} \end{equation} \begin{equation} \delta\omega_{BE}=\frac{\gamma^2E}{c^2\omega_0^2}\left\{ G_x\langle v_x^2\rangle+G_y\langle v_y^2\rangle\right\} +O(1/\omega_0^4\tau_{\mathrm{corr}}^4).
\label{eq:dw-BE-int3} \end{equation} Similarly, the relaxation rates can be expressed as: \begin{align} \Gamma_{1\left( B^2\right) } & =-\gamma^2\frac{1}{\omega_0^2 }\left[ G_x^2\langle xv_x\rangle+G_y^2\langle yv_y\rangle\right] +O(1/\omega_0^4\tau_{\mathrm{corr}}^4) ,\\ \Gamma_{1\left( BE\right) } & =\frac{2\gamma^2E}{c^2}\left[ \frac{1}{\omega_0}\langle G_x x v_x + G_y y v_y \rangle +O(1/\omega_0^3\tau_{\mathrm{corr}}^3)\right]. \end{align} The first term in Eq. \eqref{eq:dw-B2-int3} corresponds to Eq. (18) in \cite{Guigue2014}. It is remarkable that it is possible to derive a simple and universal expression for the third order term $\propto \omega_0^{-3}$ in Eq. \eqref{eq:dw-B2-int3}. To our knowledge this third order term has never been calculated before. \subsection{Low field, high velocity limit for fields with uniform gradients} For uniform gradients \eqref{eq:uniformGradients}, the expression of $\delta \omega_{BE}$ \eqref{eq:dw-BE-int5a} can be simplified to \begin{align} \delta\omega_{BE} & =-\frac{\gamma^2E}{c^2}\langle G_x x^2+G_y y^2\rangle+\frac{\gamma^2E}{c^2}\omega_0^2\int_0^{\infty}\tau\langle G_xx(0)x(\tau)+G_yy(0)y(\tau)\rangle\mathrm{d}\tau\label{eq:dw-BE-int-unif}\\ & =-\frac{\gamma^2E}{c^2}\langle G_xx^2+G_yy^2\rangle+O(\omega_0^2\tau_{\mathrm{corr}}^2) . \end{align} Also \eqref{eq:dw-B2-int5} becomes \begin{equation} \delta\omega_{B^2}\approx\frac{\gamma^2}{2}\omega_0\int_0^{\infty}\tau\langle G_x^2x(0)x(\tau)+G_y^2y(0)y(\tau)\rangle\mathrm{d}\tau \end{equation} and \eqref{eq:dw-E2-int5} \begin{equation} \delta\omega_{E^2}=-\frac{\gamma^2E^2}{2c^{4}}\omega_0\langle x^2+y^2\rangle+\frac{\gamma^2E^2}{2c^{4}}\omega_0^{3}\int_0^{\infty}\tau\langle x(0)x(\tau)+y(0)y(\tau)\rangle\mathrm{d}\tau . 
\end{equation} Similarly \begin{align} \Gamma_{1 (B^2)} & = \gamma^2 \int_0^{\infty}\langle G_x^2x(0)x(\tau)+G_y^2y(0)y(\tau)\rangle\mathrm{d}\tau ,\\ \Gamma_{1 (E^2)} & = \frac{\gamma^2E^2}{c^4}\omega_0^2\int_0^{\infty}\langle x(0)x(\tau)+y(0)y(\tau)\rangle\mathrm{d}\tau ,\\ \Gamma_{1 (BE) } & = \frac{2 \gamma^2E}{c^2}\omega_0\int_0^{\infty}\langle G_xx(0)x(\tau)+G_yy(0)y(\tau)\rangle\mathrm{d}\tau, \end{align} so that in the low frequency limit there are no universal or quasi-universal expressions. \section{Conclusion} In this paper we have investigated the asymptotic behavior of spin relaxation and the related frequency shifts due to the restricted motion of particles in non-uniform magnetic and electric fields. Simple universal expressions (valid for any form of gas container and any spatial form of the field) were obtained for the observables $\delta\omega$ and $\Gamma _1$ for adiabatic and nonadiabatic regimes of spin motion. The remarkable feature of all our results is that they were obtained without any specific assumptions about the explicit form of the correlation functions. Hence, we expect that our results are valid for both diffusive and ballistic regimes of motion. These results can then be applied to a wide variety of realistic systems. They are especially important in the context of experiments searching for the electric dipole moment using trapped particles, where the frequency shifts proportional to the electric field are of utmost importance. In particular we have given general relations between various frequency shifts and relaxation rates. \section{Acknowledgment} The authors would like to thank Albert Steyerl for pointing out a numerical error in the first version of the manuscript.
\section{Introduction} In recent years, several large-scale household energy consumption datasets have been made publicly available, e.g. UK-DALE \cite{UK-DALE} (UK Domestic Appliance-Level Electricity) and REFIT \cite{Murray2017refit} (Personalised Retrofit Decision Support Tools For UK Homes Using Smart Home Technology). These datasets have boosted studies on energy disaggregation, also known as Non-Intrusive Load Monitoring (NILM) \cite{Hart1992NILM}. Energy disaggregation is a challenging blind source separation problem that aims to separate the energy consumption of individual appliances from the readings of the aggregate meter measuring the total consumption of multiple appliances, for example in a house. Figure \ref{fig:energy-dis} gives an example of how the energy consumption of a whole house changes along with that of the individual appliances. This problem is difficult due to a number of uncertainties such as the existence of background noise, the lack of knowledge on the numbers of different appliances and their true energy consumption patterns in a given household, replacements of old appliances, and overlapping operations of multiple appliances with similar energy consumption patterns. Energy disaggregation finds its usefulness in many applications. For example, disaggregated data could be used by feedback systems to provide pertinent information about energy usage and educate consumers at opportune times \cite{Froehlich2009}, which in turn helps the consumers better control their consumption and ultimately save energy \cite{Fischer2008}. Disaggregated data may also help identify malfunctioning equipment or inefficient settings \cite{Froehlich2010}. For policy makers, knowing the amount of energy each category of appliances consumes is critical to the development and evaluation of energy efficiency policies \cite{Sidler1999, Sidler:2003}.
Disaggregated data may also provide valuable information to facilitate power system planning, load forecasting, new types of billing procedures, and the ability to pinpoint the origins of certain customer complaints \cite{Froehlich2010}. Another application is to help researchers understand the occurrences of home activities which nowadays are heavily related with the usage of different types of appliances \cite{DSP-2018}. \begin{figure}[!h] \centering \centerline{\includegraphics[width=\columnwidth]{figs/energy_dis.png}} \caption{An example of energy consumption of individual appliances and a whole house} \label{fig:energy-dis} \end{figure} In the literature, a lot of research has been done on applying machine learning methods to the problem of energy disaggregation. Among the popular approaches, different variants of HMMs (Hidden Markov Models) such as FHMMs (Factorial HMMs) have attracted a lot of attention \cite{Kolter2012FHMM,Zhong2014NIPS,Shaloudegi2016FHMM}. With the availability of large-scale open datasets such as UK-DALE and REFIT \cite{UK-DALE,Murray2017refit}, there has been a flourish of studies applying deep neural networks (DNNs) to the problem of energy disaggregation. For example, in \cite{Kelly:2015} and \cite{zhang2018sequence}, the authors investigated the application of convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Autoencoders. However, there are several problems with the conventional DNN models. The computational complexity of conventional CNNs becomes substantially high when the input sequences are long. In the case of RNNs, the values of the hidden units have to be calculated in sequential order, and thus RNNs do not scale well. Recently, a neural network called WaveNet \cite{van2016wavenet} was proposed for long sequence audio processing. WaveNet is a variant of the CNN architecture with dilated convolutional layers which make it easier to train on long sequences compared to conventional CNNs.
With skip connections over all the convolutional layers, it can learn multi-scale hierarchical representations. WaveNet has been proven to work well for tasks such as speech synthesis \cite{van2016wavenet} and speech denoising \cite{rethage2017wavenet}. It is efficient because it has fewer parameters than a conventional CNN does. WaveNet is also easier to parallelise compared to RNNs. For the task of energy disaggregation, some appliances may have long-term dependencies in their energy consumption patterns and these patterns may exist at different scales. Therefore, the task of energy disaggregation may benefit from WaveNet's capability of modeling long sequences and learning multi-scale hierarchical representations. To evaluate the performance of WaveNet models for the task of energy disaggregation, we carried out a set of experiments using the public dataset REFIT \cite{Murray2017refit}, and compared the disaggregation results of WaveNet models against the five-layer CNN model proposed in \cite{zhang2018sequence} and a three-layer RNN model. We showed that WaveNet models outperform the other two methods in terms of both error measures and computation cost. We also investigated the influence of the length of input sequences on the disaggregation performance as well as on the computation cost. While the problem of energy disaggregation focuses on estimating the exact amount of energy being consumed by an appliance, there is another interesting task called on/off detection which tries to predict whether individual appliances are in operation or not. Compared to energy disaggregation, on/off detection provides a coarser-grained view of the usage status of individual appliances and finds its usefulness in understanding the occurrences of home activities that heavily depend upon the assistance of household appliances such as kettle \cite{Alcala:2015}, washing machine and microwave \cite{DSP-2018}.
Such dependencies have been proven, for example, to help with activity monitoring and health management \cite{Alcala:2015}. In this paper, we investigate two learning frameworks for the task of on/off detection. The first one, called the regression based learning framework, first trains a model for energy disaggregation using the aggregate energy readings as inputs and the appliance readings as the target values, and then derives the on/off state sequence of the appliance by binarising the predictions of the disaggregation model according to the on-power threshold of the appliance. The second one, called the classification based learning framework, directly trains a binary classifier with the appliance on/off states. To evaluate the two learning frameworks for the task of on/off detection, we trained two groups of WaveNet models, one following each learning framework, with the REFIT dataset. We showed that for the task of on/off detection the classification based learning framework outperforms the regression based learning framework in terms of F1 score. The contributions of this paper are as follows: \begin{itemize} \item We propose to tackle the problem of energy disaggregation with WaveNet models which are capable of modeling long sequences more efficiently compared to the conventional CNNs and RNNs, and we show that WaveNet models achieve the state-of-the-art performance based on a set of experiments with a public dataset. \item We carry out an analysis on how the receptive field size and the target field size would affect the disaggregation performance of the three deep neural networks, i.e., WaveNets, CNNs and RNNs. \item We compare a regression based learning framework with a classification based learning framework for the task of on/off detection and show empirically that the latter outperforms the former, which utilises the outputs from energy disaggregation. \item The evaluation is performed using the public dataset REFIT collected from 20 households.
We give a detailed description of how the raw data was preprocessed and used for model training and release the source code\footnote{https://github.com/jiejiang-jojo/fast-seq2point} to facilitate the reproducibility of our work. \end{itemize} The rest of the paper is organized as follows: Section \ref{relatedwork} discusses the related work. Section \ref{problem} gives a formal description of the energy disaggregation problem and the on/off detection problem. Section \ref{methods} presents three learning paradigms for model training, introduces three neural network models, and describes how a model is trained respectively for the task of energy disaggregation and on/off detection. Section \ref{experiments} illustrates the experiment preparation and analyses the experiment results. Finally, in Section \ref{conclusion}, we conclude the paper with possibilities of future work. \section{Related Work}\label{relatedwork} In the literature, a lot of research has been done on applying machine learning methods to the problem of energy disaggregation. Among the popular approaches, different variants of hidden Markov models (HMMs) have attracted much attention (e.g. \cite{Kim2011, Kolter2012FHMM, Parson2012Prior, Zhong2014NIPS, Shaloudegi2016FHMM}). Recently, with the availability of large open datasets such as UK-DALE and REFIT \cite{UK-DALE,Murray2017refit} and the superior performance of deep neural networks (DNNs) in many research areas such as computer vision \cite{krizhevsky2012imagenet} and audio processing \cite{rethage2017wavenet}, there has been a flourish of studies applying DNNs to the problem of energy disaggregation. For example, Kelly and Knottenbelt \cite{Kelly:2015} compared the disaggregation performance of the traditional machine learning methods (e.g. FHMMs) with deep learning methods such as Autoencoders and Long Short-term Memory (LSTM) networks and the results show that the deep learning methods outperform the traditional methods.
Mauch and Yang \cite{Mauch15} also advocated the application of LSTM for the problem of energy disaggregation. Chen et al. \cite{Chen2018} proposed a convolutional sequence to sequence model in which gated linear unit convolutional layers were used to extract information from the sequences of aggregate electricity consumption and residual blocks were used to refine the output of the neural network. Later, Zhang et al. \cite{zhang2018sequence} proposed to use a sequence-to-point paradigm to train a CNN for energy disaggregation which outperforms the sequence-to-sequence learning approach used in \cite{Kelly:2015}. There are also works using a combination of DNNs. For example, by combining CNNs with variational autoencoders, Sirojan et al. \cite{Sirojan18} showed that their approach outperforms the one presented in \cite{zhang2018sequence}. Shin et al. \cite{Shin2019} proposed a subtask gated network that combines the main regression network with an on/off classification subtask network. Targeting real-time applications, Harell et al. \cite{Harell2019} proposed a causal 1-D CNN based on the WaveNet model proposed in \cite{van2016wavenet}. This work is similar to ours as it also adapts WaveNet for the problem of energy disaggregation, but our work differs in that we use a non-causal version of WaveNet proposed in \cite{rethage2017wavenet}, i.e., the same number of samples in the past as well as in the future are used to train the model and inform the prediction. Another difference is that we employed the concept of target field as proposed in \cite{rethage2017wavenet} such that the computation of the neighbouring samples can be shared, which speeds up the model training and inference. In addition, we carried out an extensive study on how the receptive field size and the target field size influence the model performance. Moreover, the baseline used in \cite{Harell2019} is a variant of HMM, i.e.
sparse super-state HMM, while we compared our work with the state-of-the-art DNN based approaches. \section{Problem Statement}\label{problem} \subsection{Energy Disaggregation}\label{energy_dis} Energy disaggregation aims to estimate the energy usage of individual appliances based on the readings of the mains power meter that measures the total energy consumption of, for example, a whole house. Formally, suppose we have a sequence of readings from a house-level meter denoted as $X=(x_1, x_2, \ldots, x_T)$ where $ T $ is the length of the sequence. The problem of energy disaggregation is to disaggregate $ X $ into the energy consumption sequence of individual appliances denoted as $ Y^i=(y_{1}^{i}, y_{2}^{i}, \dots, y_{T}^{i})$, $y^i_t \in \mathbb{R}_{\geq 0} $ where $ \mathbb{R}_{\geq 0} = [0, +\infty) $, $ I $ is the number of known appliances, $i \in \{1, \ldots, I\}$ is the appliance index and $ t \in \{1, \ldots, T\}$ is the index of samples in time domain. In addition, we denote the readings from unknown appliances and background noise as $ U = (u_{1}, ..., u_{T}) $. At any time $t$, $x_t$ is assumed to be the summation of the readings from all the known appliances and unknown appliances with background noise: \begin{equation} \label{eq:mae} x_{t} = \sum_{i=1}^{I} y^{i}_{t}+u_{t} \end{equation} \noindent where the residual term $ u_{t} $ indicates the energy consumption of unknown appliances and background noise at time $t$. The aim of energy disaggregation is to design a model to separate the energy consumption of the individual appliances $ Y^{i}, i \in \{1, \ldots, I\} $ from the aggregate readings $ X $. That is, we are looking for a set of disaggregation mappings: \begin{equation} \label{eq:mapping_disaggregation} f^{i}: X \mapsto Y^{i}. \end{equation} where each mapping $ f^{i} $ maps from an aggregate reading sequence $ X $ to the energy consumption sequence of an appliance $ Y^{i} $. 
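As a toy numerical illustration of the additive model in Eq. (\ref{eq:mae}), the sketch below (all wattage values and the noise level are made up for illustration) builds an aggregate signal from two hypothetical appliance channels plus a residual term for unknown loads and background noise:

```python
import numpy as np

# Toy illustration of x_t = sum_i y_t^i + u_t (all values are hypothetical):
# two known appliance channels plus a residual u_t for unknown appliances
# and background noise.
rng = np.random.default_rng(0)
T = 8
y_kettle = np.array([0, 0, 2000, 2000, 2000, 0, 0, 0], dtype=float)  # watts
y_washer = np.array([0, 20, 20, 300, 300, 300, 20, 0], dtype=float)
u = rng.uniform(0.0, 5.0, size=T)        # unknown loads + background noise

x = y_kettle + y_washer + u              # what the house-level meter records
# A disaggregation mapping f^i must invert this sum, recovering each y^i
# from x alone; the residual u is what makes the inverse problem ill-posed.
print(x)
```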
\subsection{On/off Detection}\label{status_detection} On a coarser granularity, most appliances have an on-power threshold which defines the least amount of energy an appliance needs to operate. When the amount of energy an appliance is consuming is below the on-power threshold, it is considered to be in an off state; otherwise, it is considered to be in an on state. For example, a kettle usually needs 2000 watts to be in an on state while a washing machine needs 20 watts. The problem of on/off detection aims to estimate whether an appliance is in an on or off state based on the readings of the total energy consumption. Formally, given a sequence of aggregate readings from a house-level meter denoted as $X=(x_1, x_2, \ldots, x_T)$ where $ T $ is the length of the sequence, the aim of on/off detection is to design a model to recognise the on/off state sequence of individual appliances denoted as $ Z^i=(z_{1}^{i}, z_{2}^{i}, \dots, z_{T}^{i})$, $z^i_t \in \{0,1\}$, $i \in \{1, \ldots, I\}, t \in \{1, \ldots, T\}$ from the aggregate readings $ X $ where $ I $ is the number of known appliances, $ i $ is the appliance index, $ t $ is the sample index in time domain, $0$ indicates the off state and $1$ indicates the on state. That is, we are looking for a set of mappings: \begin{equation} \label{eq:mapping_onoff} f^{i}: X \mapsto Z^{i}. \end{equation} where each mapping $ f^{i} $ maps from an aggregate reading sequence $X$ to the on/off state sequence of an appliance $Z^{i}$. \section{Methods}\label{methods} \subsection{Learning Paradigms}\label{learning_paradigms} Usually the aggregate readings $ X $ form a long sequence spanning days, months, or sometimes years. To efficiently train a model, a commonly used approach in the literature of energy disaggregation is the sliding window approach which splits the long sequence $ X $ into shorter sequences $ \mathbf{x} = (x_{t}, ..., x_{t+L-1}) $ where $ L $ indicates the receptive field size.
Instead of learning the mappings in Equation \ref{eq:mapping_disaggregation} and Equation \ref{eq:mapping_onoff} directly, we use sequences $ \mathbf{x} $ as input. The target of an input sequence can be a sequence of the same length, which is called sequence-to-sequence learning \cite{Kelly:2015}, or the midpoint of the target sequence, which is called sequence-to-point learning \cite{zhang2018sequence}. In this section, we introduce three variants of the sliding window approach. \subsubsection{Sequence-to-sequence Learning} Sequence-to-sequence learning \cite{Kelly:2015}, as shown in Figure \ref{fig:fast_seq_to_point} (a), was proposed to learn a mapping from an input sequence $ \mathbf{x} $ to an output/target sequence $ \mathbf{y} $, where $ \mathbf{x}=(x_{t}, ..., x_{t+L-1}) $ and $ \mathbf{y}=(y_{t}, ..., y_{t+L-1}) $ have the same length. In sequence-to-sequence learning, each element of the output signal is predicted many times and an average of these predictions is used as the final output, which consequently smooths the edges. However, as pointed out in \cite{zhang2018sequence}, it is expected that some of the input sequences will provide a better prediction of a single point than others, particularly those sequences where the point is near the midpoint of the input sequence. \subsubsection{Sequence-to-point Learning} Sequence-to-point learning \cite{zhang2018sequence}, as shown in Figure \ref{fig:fast_seq_to_point} (b), aims to solve the problem of sequence-to-sequence learning by finding a mapping from an input sequence $ \mathbf{x}=(x_{t}, ..., x_{t+L-1}) $ to a single target point $ y_{t+ \left \lfloor L/2 \right \rfloor} $ the index of which corresponds to the midpoint of the input sequence, where $ \left \lfloor \cdot \right \rfloor $ floors a value to an integer. One problem of the sequence-to-point learning paradigm is that learning a single point at a time is usually inefficient.
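The window/target pair extraction for sequence-to-point learning described above can be sketched as follows (the receptive field size and the stand-in series below are made-up toy values, not those used in the experiments):

```python
import numpy as np

# Sketch of sequence-to-point pair extraction: each input window of length L
# is paired with the single appliance reading at its midpoint (L here is a
# hypothetical receptive field size chosen for illustration).
def seq2point_pairs(aggregate, appliance, L):
    inputs, targets = [], []
    for t in range(len(aggregate) - L + 1):
        inputs.append(aggregate[t:t + L])
        targets.append(appliance[t + L // 2])   # midpoint of the window
    return np.array(inputs), np.array(targets)

X = np.arange(10, dtype=float)      # stand-in aggregate readings
Y = 10 * X                          # stand-in appliance readings
inp, tgt = seq2point_pairs(X, Y, L=5)
print(inp.shape, tgt.shape)         # (6, 5) (6,)
```

Each point of the appliance signal appears as a target exactly once, which is why predicting one point per window is wasteful: the network recomputes almost the same window for every single target.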
\subsubsection{Fast Sequence-to-point Learning} In this paper, we propose to use a fast sequence-to-point learning paradigm, as shown in Figure \ref{fig:fast_seq_to_point} (c), to speed up the sequence-to-point learning. By introducing a target field \cite{rethage2017wavenet} to replace a single point as output, the computation of a sequence-to-point model can be shared. The input sequence and target sequence are denoted as $ \mathbf{x}=(x_{t}, ..., x_{t+L+r-2}) $ and $ \mathbf{y}=(y_{t+\left \lfloor L/2 \right \rfloor}, ..., y_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ respectively. The length of the input sequence in this case is $(L+r-1)$ where $L$ indicates the size of the receptive field, and $r$ indicates the size of the target field, i.e. the length of the target sequence. When $r=1$, fast sequence-to-point learning degenerates to sequence-to-point learning. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.7\columnwidth]{figs/fast_seq_to_point.png}} \caption{(a) Sequence-to-sequence learning; (b) Sequence-to-point learning; (c) Fast sequence-to-point learning. } \label{fig:fast_seq_to_point} \end{figure} \subsection{Deep Neural Networks} \label{sec:NueralNetworks} In this section, we introduce three classes of neural network. The first two, CNNs and RNNs, serve as baselines to benchmark the performance of WaveNets. \subsubsection{Convolutional neural networks}\label{cnn} Convolutional neural networks (CNNs) have achieved the state-of-the-art performance in many applications such as computer vision \cite{krizhevsky2012imagenet}, speech and audio processing \cite{rethage2017wavenet} and natural language processing \cite{dauphin2016language}. With shared filters to capture local patterns of various signals, the number of parameters of a CNN is smaller than that of a fully connected neural network. Time domain CNNs have been applied to energy disaggregation, for example, in \cite{zhang2018sequence}.
Similar to the two dimensional CNN for computer vision \cite{krizhevsky2012imagenet}, a time domain CNN consists of several convolutional layers, each of which contains several filters that are used to convolve with the output of the previous convolutional layer. The filters are designed to capture local patterns of a signal. For example, in computer vision, the lower level filters of a CNN may learn edge detectors, while the higher level filters may learn to capture high-level profiles of an image. Similarly, in the case of time domain CNNs for energy disaggregation, lower level filters may capture the short-term energy usage patterns of different appliances such as a single activation, while higher level filters may capture the long-term patterns such as a complete operating cycle. The time domain convolutional operation can be described as follows: \begin{equation} \label{eq:cnn} v[k^{out}, t] = \sum_{k^{in}} \sum_{\tau=1}^{m} u[k^{in}, t - \tau] \cdot h[k^{out}, k^{in}, \tau] \end{equation} \noindent where $ u $ and $ v $ denote the input and output feature maps of a convolutional layer, and $ k^{in} $ and $ k^{out} $ indicate the indices of the input and output feature maps. The filters are represented as a three dimensional tensor $ h $ and $ m $ is the filter length in time domain. The first convolutional layer takes a sequence $ \mathbf{x} $ as input. The predicted output is obtained from the last convolutional layer of a CNN. With larger receptive fields, long-term dependencies in the energy consumption data can be taken into account. However, the computational complexity of CNNs increases quadratically with the size of the receptive field. \subsubsection{Recurrent neural networks}\label{rnn} Recurrent neural networks (RNNs) have many successful applications in modeling temporal signals, e.g., audio and speech signal processing \cite{graves2013speech} and natural language processing \cite{chung2014empirical}.
Similar to fully connected neural networks, each input sample $ x_{t} $ is mapped to a hidden unit $ h_{t} $ by a transformation matrix. In addition, there are connections between adjacent hidden units to carry on the information from previous samples. In a non-causal system, an RNN can be bidirectional so as to use information from both history and future. A recurrent layer of an RNN can be described as: \begin{equation} \label{eq:rnn_layer} h_{t} = \phi(Wx_{t} + Vh_{t-1} + b) \end{equation} \noindent where $ W $, $ V $ and $ b $ respectively represent the transformation matrix between input samples and hidden units, the transformation matrix between adjacent hidden units, and a bias term; $ \phi $ represents a non-linear function. An RNN may consist of several recurrent layers. The backpropagation through time algorithm \cite{werbos1990backpropagation} is used for training an RNN. One problem of the conventional RNNs is gradient vanishing/explosion \cite{pascanu2013difficulty}. This is because the depth of an RNN is proportional to the length of the input sequence. When training an RNN, the gradients can grow or shrink exponentially, which makes the training unstable. To solve the gradient explosion/vanishing problem, LSTM was proposed, which introduces a memory cell with update, forget and output gates to control the information flow \cite{hochreiter1997long}. Later the Gated Recurrent Unit (GRU) \cite{chung2014empirical} was proposed to simplify LSTM by reducing the number of parameters. A GRU is described as follows: \begin{equation} \label{eq:gru} \begin{matrix} r_{t} = \sigma(W_{r}x_{t} + U_{r}h_{t-1} + b_{r}) \\ z_{t} = \sigma(W_{z}x_{t} + U_{z}h_{t-1} + b_{z}) \\ \widetilde{h}_{t} = \phi(Wx_{t} + U(r_{t} \odot h_{t-1}) + b) \\ h_{t} = z_{t} \odot h_{t-1} + (1 - z_{t}) \odot \widetilde{h}_{t}.
\end{matrix} \end{equation} where $ r_{t} $ indicates the reset gate at time step $t$, $ z_{t} $ indicates the update gate at time step $t$, $\widetilde{h}_{t}$ indicates the candidate new value for the memory cell at time step $t$, $h_{t}$ indicates the final value for the memory cell at time step $t$, $ \sigma $ represents a sigmoid (non-linear) function and $ \phi $ represents a $\tanh$ (non-linear) function. The units that learn to capture short-term dependencies will tend to have reset gates frequently active while the units that learn to capture long-term dependencies will tend to have update gates frequently active. \subsubsection{WaveNets}\label{wavenet} Conventional CNNs cannot scale when the input sequences get long, as the computational complexity increases quadratically with the receptive field size. Compared to CNNs, the computational complexity of RNNs increases linearly with the receptive field size. However, the hidden units of RNNs can only be calculated sequentially because the calculation of each hidden unit depends upon the value of the previous hidden unit. So RNNs cannot handle parallel computations efficiently. Therefore, long sequence modeling has been a computational challenge for both CNNs and RNNs. \begin{figure*}[!h] \centering \centerline{\includegraphics[width=\columnwidth]{figs/wavenet.png}} \caption{An example of the WaveNet input-output structure for energy disaggregation. } \label{fig:wavenet} \end{figure*} To solve this problem, WaveNet \cite{van2016wavenet} was proposed for modeling raw audio signals and has been used for modeling time sequences in tasks such as speech denoising \cite{rethage2017wavenet}. WaveNet is an improvement over conventional CNNs, in which ``dilated convolutions'' are applied to enlarge the receptive field without increasing the filter size. A dilated convolution is a convolution with holes. That is, the filters are applied over an area larger than their length by skipping input values with a certain step size.
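A minimal sketch of a dilated convolution (the filter values and the dilation below are made up for illustration), together with the receptive-field growth obtained by stacking such layers with doubling dilations:

```python
import numpy as np

# Minimal sketch (made-up filter values) of a 1-D dilated convolution:
# a length-m filter with dilation d spans (m - 1)*d + 1 input samples
# while still using only m parameters.
def dilated_conv1d(x, h, d):
    m = len(h)
    span = (m - 1) * d + 1
    return np.array([sum(h[j] * x[t + j * d] for j in range(m))
                     for t in range(len(x) - span + 1)])

x = np.arange(16, dtype=float)
h = np.array([1.0, 0.0, -1.0])           # filter length m = 3, as in this paper
out = dilated_conv1d(x, h, d=4)          # each output sees a span of 9 samples
print(out.shape)

# Stacking s such layers with dilations 1, 2, 4, ..., 2^(s-1) yields a
# receptive field of L = (2**s - 1)*(m - 1) + 1 samples.
receptive_field = lambda s, m=3: (2**s - 1) * (m - 1) + 1
print([receptive_field(s) for s in (1, 2, 3, 4)])  # [3, 7, 15, 31]
```

The doubling dilation schedule is what makes the receptive field grow exponentially with depth while the parameter count grows only linearly.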
Stacked dilated convolutions enable a network to have very large receptive fields with just a few layers. In \cite{van2016wavenet} a filter length of 2 is applied for causal modeling of audio signals. In this paper, following \cite{rethage2017wavenet}, we applied a filter length of 3 to utilise the non-causal information of input sequences. Figure \ref{fig:wavenet} shows an example of the WaveNet input-output structure used in this paper for both tasks of energy disaggregation and on/off detection. Following \cite{van2016wavenet}, the dilated layers are embedded into residual blocks, as shown in Figure \ref{fig:residualblock}. The residual output of each block is fed to the input of the next one. The skip outputs of all the residual blocks are summed and then followed by a $ 3 \times 1 $ convolutional layer, which gives the final output as predictions. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.5\columnwidth]{figs/residual_block.pdf}} \caption{Residual block of WaveNets. } \label{fig:residualblock} \end{figure} The following equation characterises the relation between the number of dilated convolutional layers of a WaveNet and its receptive field size: \begin{equation} \label{eq:seql_layer} L = (2^s - 1) * (m - 1) + 1 \end{equation} where $L$ denotes the receptive field size, $s$ denotes the number of dilated convolutional layers, and $m$ denotes the filter length applied to each dilated convolutional layer; in this paper we set $m=3$ following \cite{rethage2017wavenet}. Since WaveNets do not have recurrent connections \cite{van2016wavenet}, they are typically faster than RNNs, especially when applied to long sequences. \subsection{Training a Model For Energy Disaggregation}\label{learning_energy_dis} For the task of energy disaggregation, the training of a fast sequence-to-point model based on CNN/RNN/WaveNet can be implemented with back-propagation.
The inputs are sequences of aggregate energy readings $\mathbf{x}$ while the target values are sequences of appliance energy readings $\mathbf{y}$. Assuming an output and the corresponding target value are denoted as $ \mathbf{\hat{y}}=(\hat{y}_{t+\left \lfloor L/2 \right \rfloor}, ..., \hat{y}_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ and $ \mathbf{y}=(y_{t+\left \lfloor L/2 \right \rfloor}, ..., y_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ respectively, the loss can then be calculated using the Mean Absolute Error (MAE) which is used as one of the evaluation criteria (see Section \ref{metric} for details): \begin{equation} \label{eq:loss} loss(\mathbf{\hat{y}}, \mathbf{y}) = \frac{1}{r} \sum_{\tau=\left \lfloor L/2 \right \rfloor}^{\left \lfloor L/2 \right \rfloor+r-1} \left | \hat{y}_{t+\tau} - y_{t+\tau} \right |. \end{equation} The loss function is calculated on mini-batch data. When the target field size $r$ equals 1, Equation \ref{eq:loss} degenerates to that of the conventional sequence-to-point model. After obtaining the loss, the gradient can be calculated and used to update the parameters of the model. \subsection{Training a Model for On/off Detection}\label{sec:learning_onoff} \subsubsection{Regression Based Learning Framework} The regression based learning framework tackles the problem of on/off detection by utilising the outputs from energy disaggregation. Concretely, for a given appliance, it first trains a fast sequence-to-point model based on CNN/RNN/WaveNet for energy disaggregation. Thereafter, given a new sequence of aggregate energy readings, the energy readings of the appliance are predicted using the disaggregation model. Finally, it derives the on/off state sequence of the appliance by binarising the predictions according to the on-power threshold of the appliance. \subsubsection{Classification Based Learning Framework} The classification based learning framework tackles the problem of on/off detection by directly training a binary classifier.
Concretely, it first binarises the energy readings of a given appliance according to the on-power threshold of the appliance. Thereafter, it trains a binary classifier using the aggregate energy readings as inputs and the binarised appliance readings as the target values. Finally, given a new sequence of aggregate energy readings, the on/off state sequence of the appliance is predicted using the trained classifier. For training a fast sequence-to-point binary classifier for a given appliance, the last layer of a CNN/RNN/WaveNet is a fully connected layer followed by a sigmoid nonlinearity to represent the probability that the appliance is in the on state. Assuming an output and the corresponding target value are denoted as $ \mathbf{\hat{z}}=(\hat{z}_{t+\left \lfloor L/2 \right \rfloor}, ..., \hat{z}_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ and $ \mathbf{z}=(z_{t+\left \lfloor L/2 \right \rfloor}, ..., z_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ respectively, the loss can then be calculated using the binary cross-entropy: \begin{equation} \label{binary_crossentropy} \text{loss}(\mathbf{\hat{z}}, \mathbf{z}) = - \frac{1}{r} \sum_{\tau=\left \lfloor L/2 \right \rfloor}^{\left \lfloor L/2 \right \rfloor+r-1} (z_{t+\tau} \ln \hat{z}_{t+\tau} + (1 - z_{t+\tau}) \ln(1 - \hat{z}_{t+\tau})). \end{equation} Similarly, the loss function is calculated on mini-batch data. When the target field size $r$ equals 1, Equation \ref{binary_crossentropy} reduces to the loss of the conventional sequence-to-point model. After obtaining the loss, the gradient can be calculated and used to update the parameters of the model. \section{Experiments}\label{experiments} \subsection{Dataset} The dataset used in this paper is REFIT \cite{Murray2017refit}, which is a collection of energy consumption data from 20 households in the UK. The readings were recorded approximately every 8 seconds and cover a period of over 2 years.
The dataset contains both house-level energy usage (aggregate readings) and appliance-level energy usage (appliance readings) of more than 10 types of appliances. In this paper we focus on the disaggregation of four types of appliances: kettle, microwave, dishwasher and washing machine, which are used by most of the households. \subsection{Data Preprocessing} Firstly, we resampled the data with an interval of 10 seconds to mitigate the fluctuations of time intervals between the original readings, which resulted in 93,976,578 data points. Secondly, following \cite{Kelly:2015}, we filled the gaps in the data shorter than 3 minutes by forward-filling, assuming that the gaps are caused by RF issues, and filled the gaps longer than 3 minutes with zeros, assuming that the gaps are caused by the appliance being switched off. Thirdly, for each type of appliance and the aggregate, we normalised the data by subtracting the mean values and dividing by the corresponding standard deviations. Thereafter, for each household, we extracted all the possible segments of length $(L+r-1)$ from the aggregate readings by a sliding window of step-size $r$, where $L$ indicates the size of the receptive field and $r$ indicates the size of the target field. These segments of aggregate readings are used as inputs for training and testing. For each of the aggregate segments, we obtained the corresponding target sequence by extracting a segment of consecutive appliance readings of length $r$ such that the centers of the two segments are aligned. Moreover, we removed any input sequence and its corresponding target sequence where the target sequence contains an appliance reading that is larger than the corresponding aggregate reading in the input sequence. Since not every household has all the four appliances, we used the data from the last four households for testing and the data from the rest of the households for training, as shown in Table \ref{tab:household}.
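The segment extraction described above can be sketched as follows (a minimal illustration with our own variable names; the actual preprocessing additionally performs the resampling, gap filling and normalisation steps):

```python
def extract_segments(aggregate, appliance, L, r):
    """Slide a window of length L + r - 1 over the aggregate readings
    with step size r.  The target is the centred appliance segment of
    length r; pairs in which any appliance reading exceeds the
    corresponding aggregate reading are dropped."""
    inputs, targets = [], []
    half = L // 2
    window = L + r - 1
    for start in range(0, len(aggregate) - window + 1, r):
        x = aggregate[start:start + window]
        y = appliance[start + half:start + half + r]
        if any(app > agg for app, agg in zip(y, x[half:half + r])):
            continue  # appliance reading larger than the aggregate
        inputs.append(x)
        targets.append(y)
    return inputs, targets
```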
\begin{table*}[!h] \centering \caption{Households used for training and testing per appliance.} \label{tab:household} \begin{tabular}{c|cc} \toprule Appliance & Training household ID & Test household ID \\ \midrule Kettle & [2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13] & [17, 19, 20, 21] \\ Microwave & [2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 15] & [17, 18, 19, 20] \\ Dishwasher & [1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 15] & [16, 18, 20, 21] \\ Washing M. & [1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 13, 15, 16, 17] & [18, 19, 20, 21] \\ \bottomrule \end{tabular} \end{table*} As for on/off detection, instead of normalising the appliance readings, we obtain the output data by binarising the appliance readings using the on-power thresholds shown in Table \ref{tab:data}, in accordance with the previous studies \cite{zhang2018sequence} and \cite{Kelly:2015}. \begin{table*}[!h] \centering \caption{On power threshold for each appliance in watts.} \label{tab:data} \begin{tabular}{c|cccc} \toprule & Kettle & Microwave & Dishwasher & Washing M.\\ \midrule On power threshold & 2000 & 200 & 10 & 20\\ \bottomrule \end{tabular} \end{table*} \subsection{Evaluation Metrics}\label{metric} For the task of energy disaggregation, we used two metrics for evaluation in this paper, i.e. Mean Absolute Error (MAE) and normalised Signal Aggregate Error (SAE). MAE averages the absolute differences between the predictions and the real consumption, which makes it less sensitive to outliers. SAE measures the normalised difference between the total predicted consumption and the total real consumption over a period of time, e.g. a day, a week, a month etc. In our case, the evaluation is over the whole time period of the testing households' data collection.
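The binarisation with the on-power thresholds of Table \ref{tab:data} can be sketched as follows (illustrative only; the dictionary and function names are our own):

```python
# On-power thresholds in watts, taken from the table above.
ON_POWER_THRESHOLD = {
    "kettle": 2000,
    "microwave": 200,
    "dishwasher": 10,
    "washing machine": 20,
}

def binarise(readings, appliance):
    """Map raw appliance readings (watts) to on/off states (1/0):
    a reading at or above the on-power threshold counts as on."""
    threshold = ON_POWER_THRESHOLD[appliance]
    return [1 if watts >= threshold else 0 for watts in readings]
```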
The formal definitions of MAE and SAE are as follows: \begin{equation} \label{mae} MAE = \frac{1}{T}\sum_{t=1}^{T} |\hat{y}_t-y_t| \end{equation} where $\hat{y}_t$ indicates the prediction of an appliance's energy usage at time $t$ and $y_t$ indicates the corresponding ground truth. \begin{equation} \label{sae} SAE = \frac{|\hat{r}-r|}{r} \end{equation} where $\hat{r}=\sum_t{\hat{y}_t}$ and $r=\sum_t{y_t}$ respectively indicate the predicted energy consumption of an appliance over a certain time period and the corresponding ground truth. For the task of on/off detection, we used the F1 score to evaluate the performance of different models as the dataset is extremely imbalanced. For example, kettle is on only for about 1\% of the time. The F1 score \cite{Jeni:2013} can be interpreted as the harmonic mean of precision and recall: \begin{equation}\label{f1} F1 = 2 \times \frac{ precision \times recall }{ precision + recall } \end{equation} where precision is the fraction of true positive instances among the predicted positive instances, while recall is the fraction of true positive instances over the total number of positive instances. In the rest of this paper, when evaluating a model using the metrics above, we remove from the test set all the pairs of aggregate reading and appliance reading in which the aggregate reading is less than the individual appliance reading or the aggregate reading is zero. \subsection{Experimental Results For Energy Disaggregation} \subsubsection{Experiment Setup} For energy disaggregation, we trained three groups of neural network models. The first group is based on our implementation of the 5-layer CNN proposed in \cite{zhang2018sequence}. The second group is based on a 3-layer bidirectional RNN with GRUs. The third group is based on the WaveNet as shown in Section \ref{wavenet}. We use the Adam optimizer \cite{Kingma:2014} with a learning rate of 0.001 to minimise the loss as shown in Equation \ref{eq:loss}.
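The evaluation metrics in Equations \ref{mae}, \ref{sae} and \ref{f1} can be sketched in plain Python as follows (an illustration, not the evaluation code used in the experiments):

```python
def mae(y_hat, y):
    """Mean absolute error between predictions and ground truth."""
    return sum(abs(p - t) for p, t in zip(y_hat, y)) / len(y)

def sae(y_hat, y):
    """Normalised signal aggregate error over a period of time."""
    return abs(sum(y_hat) - sum(y)) / sum(y)

def f1_score(z_hat, z):
    """F1 score for 0/1 on/off state sequences."""
    tp = sum(p == 1 and t == 1 for p, t in zip(z_hat, z))
    fp = sum(p == 1 and t == 0 for p, t in zip(z_hat, z))
    fn = sum(p == 0 and t == 1 for p, t in zip(z_hat, z))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```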
These hyper-parameters were chosen experimentally. For each group of models, we explored the influence of two parameters. The first parameter is the receptive field size $L$. In this paper, we used input sequences with a range of receptive field sizes \textit{15}, \textit{31}, \textit{63}, \textit{127}, \textit{255}, \textit{511}, \textit{1023}, \textit{2047}, corresponding to \textit{3}, \textit{4}, \textit{5}, \textit{6}, \textit{7}, \textit{8}, \textit{9}, \textit{10} dilated layers in the WaveNet models. Note that the receptive field sizes of \textit{1023} and \textit{2047} were not applied to the CNN and RNN models for the sake of computation cost in training. The second parameter is the target field size $r$, for which we experimented with four different values \textit{1}, \textit{10}, \textit{100} and \textit{1000}. We used a mini-batch size of \textit{128} for training all the models. For most appliances, the time during which the appliance is in use is much shorter than the time it is not, i.e., the readings are extremely imbalanced between those indicating that the appliance is in use and those indicating that it is not. For example, around 99\% of the kettle readings are less than 10 watts. In such cases, a model that always predicts a very small value, e.g. zero, may perform well in terms of MAE. Therefore, we employ a naive baseline model, i.e., always predicting zero (always-zero). The metric SAE focuses on the total energy consumption over a period of time, which makes the mean value of an appliance's energy consumption a promising prediction. To this end, we employ another naive baseline model, i.e., always predicting the mean value (always-mean). \subsubsection{Result Analysis} Table \ref{tab:result} shows the best MAE together with the corresponding SAE achieved by the models within each group with a fixed target field size of 100. We can see that the WaveNet model achieves the best MAE over all the four appliances.
In particular, for \textit{dishwasher} and \textit{washing machine}, the WaveNet model reduces the MAE by 51\% and 38\% compared to the CNN model, and by 32\% and 14\% compared to the RNN model. As for \textit{kettle} and \textit{microwave}, the WaveNet model and the RNN model obtain similar MAEs. In the case of SAE, the WaveNet model and the RNN model achieve similar results except for the case of \textit{dishwasher} where the WaveNet model has an improvement of 49\%. Overall, the WaveNet model outperforms the other two neural network models and the two naive baselines. \begin{table*} \caption{ The appliance-level mean absolute error (MAE) in unit of watt and signal aggregate error (SAE). Best results are shown in bold.} \vspace{6pt} \label{tab:result} \centering \begin{tabular}{c|c|ccccc} \toprule Metrics & Methods & Kettle & Microwave & Dishwasher & Washing M. & Overall \\ \midrule MAE & Always-zero &10.157 &4.386 &20.784 & 6.189 & 10.378$\pm$6.359\\ & CNN \cite{zhang2018sequence} &5.454 &4.002 &21.014 &4.970 & 8.860$\pm$7.036 \\ & RNN &4.839 &3.696 &15.261 &3.602 & 6.849$\pm$4.880\\ & WaveNet &\textbf{4.726} &\textbf{3.686} &\textbf{10.296} &\textbf{3.080} & \textbf{5.446$\pm$2.860} \\ \midrule SAE & Always-mean &1.347 &0.713 &1.121 & 2.121 & 1.325$\pm$0.512\\ & CNN \cite{zhang2018sequence} &0.258 &0.797 &0.976 &0.440 & 0.617$\pm$0.283\\ & RNN &0.249 &\textbf{0.644} &0.377 &\textbf{0.208} & 0.369$\pm$0.170\\ & WaveNet &\textbf{0.224} &0.666 &\textbf{0.192} & 0.267 & \textbf{0.337$\pm$0.191}\\ \bottomrule \end{tabular} \end{table*} Among the four appliances, \textit{microwave} is the only one for which all three neural network models achieve results comparable to those of the always-zero and always-mean models. A closer inspection of the REFIT dataset shows that microwaves were mostly operated in either the off mode or the standby mode (0 to 5 watts), and the latter accounts for more than 99.6\% of the readings, the highest proportion among the four appliances.
To have a visual understanding of the disaggregation results, Figure \ref{fig:visualisation} shows for each type of appliance an excerpt of the predictions together with the target values with respect to the CNN, RNN and WaveNet models that achieve the best MAE (as shown in Table \ref{tab:result}). It can be seen that the disaggregation results for \textit{kettle} are similar among the three models. As for \textit{microwave}, the WaveNet model has some predictions that are larger than the target values while the predictions of the RNN model are mostly smaller than the target values. The CNN model however does not recognise the operation of the microwave, which is in line with the fact that the corresponding MAE of the CNN model is similar to that of the always-zero model. As for \textit{dishwasher} and \textit{washing machine}, the predictions of the WaveNet model are finer and closer to the target values compared to those of the RNN model, and the predictions of the CNN model are much noisier than those of the other two models. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/inference_example.png}} \caption{Excerpts of disaggregation results with respect to the CNN, RNN and WaveNet models that achieved the best MAE.} \label{fig:visualisation} \end{figure} To investigate how the length of input sequences influences the model performance, we compare the MAEs achieved by the CNN, RNN and WaveNet models as shown in Figure \ref{fig:results-seqlen}. Note that the receptive field sizes of \textit{1023} and \textit{2047} were only applied to the WaveNet models for the sake of computation cost. The size of the receptive field in general does not have much influence on the performance of the CNN models compared to the other two groups of models. The RNN models in general achieve a better MAE as the receptive field gets longer, but when the receptive field size exceeds 255 the performance degrades.
As for the WaveNet models, there is a clear tendency that their performance improves with longer receptive fields in the cases of \textit{dishwasher} and \textit{washing machine}. An explanation is that dishwashers and washing machines have relatively longer periods of operation and the models need more information to capture the energy consumption patterns. In the case of \textit{kettle}, the WaveNet models achieve a better MAE as the receptive field grows up to 255, after which the performance starts to degrade. This may be explained by the fact that kettles usually have a short operation time and any longer receptive field will introduce too much noise. \begin{figure} \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/MAE_diff_seq_len.png}} \caption{Mean Absolute Error (MAE) of the CNN, RNN and WaveNet models with different receptive field sizes for the four appliances. } \label{fig:results-seqlen} \end{figure} Training efficiency is also an important factor when comparing models. Figure \ref{fig:results-computation} shows the training time of the CNN, RNN and WaveNet models. We can see that when the receptive field size is above 511 the computation time of the CNN models increases quadratically. The WaveNet models have the lowest computation cost among the three groups of models when the receptive field size becomes substantially large ($\geq$ 511). Furthermore, the WaveNet models converge much quicker than the other two groups of models. For example, for \textit{washing machine}, the number of iterations that the CNN models and the RNN models needed for training until convergence is more than 4 times that needed by the WaveNet models. \begin{figure} \centering \centerline{\includegraphics[width=0.5\columnwidth]{figs/computation_time.png}} \caption{Computation time per iteration for the CNN, RNN and WaveNet models with different receptive field sizes.
} \label{fig:results-computation} \end{figure} Target field size is a parameter that is worth exploring as well. In Figure \ref{fig:results-width} we show the relation between the target field size and the performance of the WaveNet models in terms of MAE with a fixed receptive field size of 127. We can see that for all four appliances longer target fields tend to achieve a better MAE. This is because longer target fields provide more training samples per mini-batch, which is similar to the effect of applying a larger batch size and makes convergence to a good optimum more likely. Compared to using a larger batch size, using longer target fields is computationally much more efficient due to shared computations. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/MAE_diff_target_field.png}} \caption{Mean absolute error (MAE) of the WaveNet models with different target field sizes. } \label{fig:results-width} \end{figure} \subsection{Experimental Results For On/off Detection} \subsubsection{Experiment Setup} As for the task of on/off detection, our aim is to compare the performance of the regression based learning framework and the classification based learning framework proposed in Section \ref{sec:learning_onoff}. To this end, we trained two groups of WaveNet models following the two learning frameworks, as it has been shown in the previous subsection that WaveNets achieve the best performance for the task of energy disaggregation. We experimented with a range of receptive field sizes \textit{15}, \textit{31}, \textit{63}, \textit{127}, \textit{255}, \textit{511}, \textit{1023}, \textit{2047}. The Adam optimizer is used with a learning rate of 0.001 to minimise the loss function shown in Equation \ref{eq:loss} for the regression based learning framework and in Equation \ref{binary_crossentropy} for the classification based learning framework.
\subsubsection{Result Analysis} Figure \ref{fig:classifi-results} shows the F1 scores obtained by the WaveNet models trained respectively under the two learning frameworks with an increasing receptive field size. For the binary classifier under the classification based learning framework, we use a cut-off probability of 0.3, i.e., when the classifier outputs a value larger than 0.3 we consider the appliance to be in the on state, and otherwise in the off state. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/F1_diff_seq_len.png}} \caption{F1 score of the WaveNet models trained following the two learning frameworks.} \label{fig:classifi-results} \end{figure} We can see that in the cases of \textit{kettle} and \textit{dishwasher} the classification based learning framework achieves a better F1 score than the regression based learning framework when the receptive field size (or the number of dilated convolutional layers) is small. With larger receptive fields the two frameworks perform similarly. As for \textit{microwave} and \textit{washing machine}, the classification based learning framework achieves a better F1 score than the regression based learning framework over all the receptive field sizes. \section{Conclusions}\label{conclusion} In this paper, we investigated the problem of energy disaggregation together with the problem of appliance on/off detection. Firstly, we formalised both problems and illustrated the learning/training paradigms used in the literature, which motivated us to introduce the fast sequence-to-point learning paradigm. By comparing with CNN models and RNN models, we studied the application of the recently proposed WaveNet models to the problem of energy disaggregation. With an evaluation on a real-world dataset, we showed that our disaggregation models based on WaveNets outperform the previous works based on CNNs and RNNs. The empirical evidence demonstrates WaveNets' superiority in handling long sequences.
Through extensive experiments with input sequences of varying receptive field sizes, we have shown how the receptive field size affects the disaggregation performance for different appliances. Furthermore, we studied the problem of appliance on/off detection as a natural continuation of the disaggregation problem and investigated the performance of two learning frameworks: (1) a regression based learning framework utilising the results from energy disaggregation and (2) a classification based learning framework that directly trains a binary classifier. We showed empirically that the classification based learning framework outperforms the regression based learning framework in terms of F1 score. This indicates that for applications targeting appliance on/off states, directly training a binary classifier would be a better choice. For future work, we intend to explore the use of prior knowledge to enhance the learning of WaveNet models. Another interesting direction for future work is to make use of the on/off states of the appliances to improve the results of energy disaggregation. For example, we could use the on/off states of an appliance to condition the predictions of the amount of energy the appliance consumes. \section{Acknowledgement} This work was carried out as part of the ``HomeSense: digital sensors for social research'' project funded by the Economic and Social Research Council (grant ES/N011589/1) through the National Centre for Research Methods. Qiuqiang Kong was supported by EPSRC grant EP/N014111/1 ``Making Sense of Sounds'' and a Research Scholarship from the China Scholarship Council (CSC) No. 201406150082. \bibliographystyle{ACM-Reference-Format} \section{Introduction} In recent years, several large-scale household energy consumption datasets have been made publicly available, e.g.
UK-DALE \cite{UK-DALE} (UK Domestic Appliance-Level Electricity) and REFIT \cite{Murray2017refit} (Personalised Retrofit Decision Support Tools For UK Homes Using Smart Home Technology). These datasets boost the studies on energy disaggregation, also known as Non-Intrusive Load Monitoring (NILM) \cite{Hart1992NILM}. Energy disaggregation is a challenging blind source separation problem that aims to separate the energy consumption of individual appliances from the readings of the aggregate meter measuring the total consumption of multiple appliances, for example in a house. Figure \ref{fig:energy-dis} gives an example of how the energy consumption of a whole house changes along with that of the individual appliances. This problem is difficult due to a number of uncertainties such as the existence of background noise, the lack of knowledge on the numbers of different appliances and their true energy consumption patterns in a given household, replacements of old appliances, and overlapping operations of multiple appliances with similar energy consumption patterns. Energy disaggregation is useful in many applications. For example, disaggregated data could be used by feedback systems to provide pertinent information about energy usage and educate consumers at opportune times \cite{Froehlich2009}, which in turn helps the consumers better control their consumption and ultimately save energy \cite{Fischer2008}. Disaggregated data may also help identify malfunctioning equipment or inefficient settings \cite{Froehlich2010}. For policy makers, knowing the amount of energy each category of appliances consumes is critical to the development and evaluation of energy efficiency policies \cite{Sidler1999, Sidler:2003}. Disaggregated data may also provide valuable information to facilitate power system planning, load forecasting, new types of billing procedures, and the ability to pinpoint the origins of certain customer complaints \cite{Froehlich2010}.
Another application is to help researchers understand the occurrences of home activities, which nowadays are heavily related to the usage of different types of appliances \cite{DSP-2018}. \begin{figure}[!h] \centering \centerline{\includegraphics[width=\columnwidth]{figs/energy_dis.png}} \caption{An example of energy consumption of individual appliances and a whole house} \label{fig:energy-dis} \end{figure} In the literature, a lot of research has been done on applying machine learning methods to the problem of energy disaggregation. Among the popular approaches, different variants of HMMs (Hidden Markov Models) such as FHMMs (Factorial HMMs) have attracted a lot of attention \cite{Kolter2012FHMM,Zhong2014NIPS,Shaloudegi2016FHMM}. With the availability of large-scale open datasets such as UK-DALE and REFIT \cite{UK-DALE,Murray2017refit}, there has been a surge of work applying deep neural networks (DNNs) to the problem of energy disaggregation. For example, in \cite{Kelly:2015} and \cite{zhang2018sequence}, the authors investigated the application of convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Autoencoders. However, there are several problems with the conventional DNN models. The computational complexity of conventional CNNs becomes substantially high when the input sequences are long. In the case of RNNs, the values of the hidden units have to be calculated in a sequential order, and thus RNNs do not scale well. Recently, a neural network called WaveNet \cite{van2016wavenet} was proposed for long sequence audio processing. WaveNet is a variant of the CNN architecture with dilated convolutional layers, which makes it easier to train on long sequences compared to conventional CNNs. With skip connections over all the convolutional layers, it can learn multi-scale hierarchical representations. WaveNet has been proven to work well for tasks such as speech synthesis \cite{van2016wavenet} and speech denoising \cite{rethage2017wavenet}.
It is efficient because it has fewer parameters than a conventional CNN. WaveNet is also easy to parallelise compared to RNNs. For the task of energy disaggregation, some appliances may have long-term dependencies in their energy consumption patterns and these patterns may exist at different scales. Therefore, the task of energy disaggregation may benefit from WaveNet's capability of modelling long sequences and learning multi-scale hierarchical representations. To evaluate the performance of WaveNet models for the task of energy disaggregation, we carried out a set of experiments using the public dataset REFIT \cite{Murray2017refit}, and compared the disaggregation results of WaveNet models against the five-layer CNN model proposed in \cite{zhang2018sequence} and a three-layer RNN model. We showed that WaveNet models outperform the other two methods in terms of both error measures and computation cost. We also investigated the influence of the length of input sequences on the disaggregation performance as well as on the computation cost. While the problem of energy disaggregation focuses on estimating the exact amount of energy being consumed by an appliance, there is another interesting task called on/off detection which tries to predict whether individual appliances are in operation or not. Compared to energy disaggregation, on/off detection provides a perspective of a coarser granularity on the usage status of individual appliances and is useful for understanding the occurrences of home activities that heavily depend upon the assistance of household appliances such as kettles \cite{Alcala:2015}, washing machines and microwaves \cite{DSP-2018}. Such dependencies have been proven, for example, to help with activity monitoring and health management \cite{Alcala:2015}. In this paper, we investigate two learning frameworks for the task of on/off detection.
The first one, called the regression based learning framework, first trains a model for energy disaggregation using the aggregate energy readings as inputs and the appliance readings as the target values, and then derives the on/off state sequence of the appliance by binarising the predictions of the disaggregation model according to the on-power threshold of the appliance. The second one, called the classification based learning framework, directly trains a binary classifier with the appliance on/off states. To evaluate the two learning frameworks for the task of on/off detection, we respectively trained a group of WaveNet models following the two learning frameworks with the REFIT dataset. We showed that for the task of on/off detection the classification based learning framework outperforms the regression based learning framework in terms of F1 score. The contributions of this paper are as follows: \begin{itemize} \item We propose to tackle the problem of energy disaggregation with WaveNet models which are capable of modelling long sequences more efficiently compared to the conventional CNNs and RNNs, and we show that WaveNet models achieve the state-of-the-art performance based on a set of experiments with a public dataset. \item We carry out an analysis on how the receptive field size and the target field size affect the disaggregation performance of the three deep neural networks, i.e., WaveNets, CNNs and RNNs. \item We compare a regression based learning framework with a classification based learning framework for the task of on/off detection and show empirically that the latter outperforms the former, which utilises the outputs from energy disaggregation. \item The evaluation is performed using the public dataset REFIT collected from 20 households.
We give a detailed description of how the raw data was preprocessed and used for model training and release the source code\footnote{https://github.com/jiejiang-jojo/fast-seq2point} to facilitate the reproducibility of our work. \end{itemize} The rest of the paper is organized as follows: Section \ref{relatedwork} discusses the related work. Section \ref{problem} gives a formal description of the energy disaggregation problem and the on/off detection problem. Section \ref{methods} presents three learning paradigms for model training, introduces three neural network models, and describes how a model is trained for the task of energy disaggregation and for the task of on/off detection, respectively. Section \ref{experiments} illustrates the experiment preparation and analyses the experiment results. Finally, in Section \ref{conclusion}, we conclude the paper with possibilities of future work. \section{Related Work}\label{relatedwork} In the literature, a lot of research has been done on applying machine learning methods to the problem of energy disaggregation. Among the popular approaches, different variants of hidden Markov models (HMMs) have attracted much attention (e.g. \cite{Kim2011, Kolter2012FHMM, Parson2012Prior, Zhong2014NIPS, Shaloudegi2016FHMM}). Recently, with the availability of large open datasets such as UK-DALE and REFIT \cite{UK-DALE,Murray2017refit} and the superior performance of deep neural networks (DNNs) in many research areas such as computer vision \cite{krizhevsky2012imagenet} and audio processing \cite{rethage2017wavenet}, there has been a surge of work applying DNNs to the problem of energy disaggregation. For example, Kelly and Knottenbelt \cite{Kelly:2015} compared the disaggregation performance of the traditional machine learning methods (e.g. FHMMs) with deep learning methods such as Autoencoders and Long Short-term Memory (LSTM) networks, and the results show that the deep learning methods outperform the traditional methods.
Mauch and Yang \cite{Mauch15} also advocated the application of LSTM for the problem of energy disaggregation. Chen et al. \cite{Chen2018} proposed a convolutional sequence to sequence model in which gated linear unit convolutional layers were used to extract information from the sequences of aggregate electricity consumption and residual blocks were used to refine the output of the neural network. Later, Zhang et al. \cite{zhang2018sequence} proposed to use a sequence-to-point paradigm to train a CNN for energy disaggregation, which outperforms the sequence-to-sequence learning approach used in \cite{Kelly:2015}. There are also works using a combination of DNNs. For example, by combining CNNs with variational autoencoders, Sirojan et al. \cite{Sirojan18} showed that their approach outperforms the one presented in \cite{zhang2018sequence}. Shin et al. \cite{Shin2019} proposed a subtask gated network that combines the main regression network with an on/off classification subtask network. Targeting real-time applications, Harell et al. \cite{Harell2019} proposed a causal 1-D CNN based on the WaveNet model proposed in \cite{van2016wavenet}. This work is similar to ours as it also adapts WaveNet for the problem of energy disaggregation, but our work differs in that we use the non-causal version of WaveNet proposed in \cite{rethage2017wavenet}, i.e., the same number of samples from the past as well as the future is used to train the model and inform the prediction. Another difference is that we employed the concept of target field as proposed in \cite{rethage2017wavenet} such that the computations for neighbouring samples can be shared, which speeds up model training and inference. In addition, we carried out an extensive study on how the receptive field size and the target field size influence the model performance. Moreover, the baseline used in \cite{Harell2019} is a variant of HMM, i.e.
sparse super-state HMM, whereas we compare our approach with the state-of-the-art DNN based approaches. \section{Problem Statement}\label{problem} \subsection{Energy Disaggregation}\label{energy_dis} Energy disaggregation aims to estimate the energy usage of individual appliances based on the readings of the mains power meter that measures the total energy consumption of, for example, a whole house. Formally, suppose we have a sequence of readings from a house-level meter denoted as $X=(x_1, x_2, \ldots, x_T)$ where $ T $ is the length of the sequence. The problem of energy disaggregation is to disaggregate $ X $ into the energy consumption sequences of individual appliances denoted as $ Y^i=(y_{1}^{i}, y_{2}^{i}, \dots, y_{T}^{i})$, $y^i_t \in \mathbb{R}_{\geq 0} $ where $ \mathbb{R}_{\geq 0} = [0, +\infty) $, $ I $ is the number of known appliances, $i \in \{1, \ldots, I\}$ is the appliance index and $ t \in \{1, \ldots, T\}$ is the index of samples in the time domain. In addition, we denote the readings from unknown appliances and background noise as $ U = (u_{1}, ..., u_{T}) $. At any time $t$, $x_t$ is assumed to be the sum of the readings from all the known appliances plus the unknown appliances and background noise: \begin{equation} \label{eq:mae} x_{t} = \sum_{i=1}^{I} y^{i}_{t}+u_{t} \end{equation} \noindent where the residual term $ u_{t} $ indicates the energy consumption of unknown appliances and background noise at time $t$. The aim of energy disaggregation is to design a model that separates the energy consumption of the individual appliances $ Y^{i}, i \in \{1, \ldots, I\} $ from the aggregate readings $ X $. That is, we are looking for a set of disaggregation mappings: \begin{equation} \label{eq:mapping_disaggregation} f^{i}: X \mapsto Y^{i}. \end{equation} where each mapping $ f^{i} $ maps from an aggregate reading sequence $ X $ to the energy consumption sequence of an appliance $ Y^{i} $.
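As a concrete illustration of the additive model above, the following sketch builds a toy aggregate signal from two hypothetical appliance traces plus a residual term; all numbers and names are synthetic and for illustration only.

```python
import numpy as np

# Toy instance of the additive model x_t = sum_i y^i_t + u_t, with
# I = 2 hypothetical appliances; all values here are synthetic.
rng = np.random.default_rng(0)
T = 8                                  # length of the metering sequence
Y = rng.uniform(0, 100, size=(2, T))   # y^i_t: per-appliance consumption
U = rng.uniform(0, 5, size=T)          # u_t: unknown appliances + noise

X = Y.sum(axis=0) + U                  # aggregate house-level readings

# Disaggregation seeks mappings f^i recovering each row of Y from X alone.
assert np.allclose(X, Y[0] + Y[1] + U)
```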
\subsection{On/off Detection}\label{status_detection} On a coarser granularity, most appliances have an on-power threshold which defines the least amount of energy an appliance needs to operate. When the amount of energy an appliance is consuming is below the on-power threshold, it is considered to be in an off state; otherwise, it is considered to be in an on state. For example, a kettle usually needs 2000 watts to be in an on state while a washing machine needs 20 watts. The problem of on/off detection aims to estimate whether an appliance is in an on or off state based on the readings of the total energy consumption. Formally, given a sequence of aggregate readings from a house-level meter denoted as $X=(x_1, x_2, \ldots, x_T)$ where $ T $ is the length of the sequence, the aim of on/off detection is to design a model that recognises the on/off state sequences of individual appliances denoted as $ Z^i=(z_{1}^{i}, z_{2}^{i}, \dots, z_{T}^{i})$, $z^i_t \in \{0,1\}$, $i \in \{1, \ldots, I\}, t \in \{1, \ldots, T\}$ from the aggregate readings $ X $ where $ I $ is the number of known appliances, $ i $ is the appliance index, $ t $ is the sample index in the time domain, $0$ indicates the off state and $1$ indicates the on state. That is, we are looking for a set of mappings: \begin{equation} \label{eq:mapping_onoff} f^{i}: X \mapsto Z^{i}. \end{equation} where each mapping $ f^{i} $ maps from an aggregate reading sequence $X$ to the on/off state sequence of an appliance $Z^{i}$. \section{Methods}\label{methods} \subsection{Learning Paradigms}\label{learning_paradigms} Usually the aggregate reading sequence $ X $ is long, spanning days, months, or sometimes years. To train a model efficiently, a commonly used approach in the energy disaggregation literature is the sliding window approach, which splits the long sequence $ X $ into shorter sequences $ \mathbf{x} = (x_{t}, ..., x_{t+L-1}) $ where $ L $ indicates the receptive field size.
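The sliding-window split can be sketched in a few lines; the helper name and toy data below are illustrative and not part of our released code.

```python
import numpy as np

# Illustrative sliding-window split: cut a long sequence into overlapping
# length-L windows (x_t, ..., x_{t+L-1}).
def sliding_windows(x, L, step=1):
    """Return all length-L windows of x, taken every `step` samples."""
    starts = range(0, len(x) - L + 1, step)
    return np.stack([x[s:s + L] for s in starts])

X = np.arange(10.0)            # toy aggregate readings
windows = sliding_windows(X, L=4)
print(windows.shape)           # (7, 4): seven windows of length 4
```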
Instead of learning the mappings in Equation \ref{eq:mapping_disaggregation} and Equation \ref{eq:mapping_onoff} directly, we use the sequences $ \mathbf{x} $ as input. The target of an input sequence can be a sequence of the same length, which is called sequence-to-sequence learning \cite{Kelly:2015}, or the midpoint of the target sequence, which is called sequence-to-point learning \cite{zhang2018sequence}. In this section, we introduce three variants of the sliding window approach. \subsubsection{Sequence-to-sequence Learning} Sequence-to-sequence learning \cite{Kelly:2015}, as shown in Figure \ref{fig:fast_seq_to_point} (a), was proposed to learn a mapping from an input sequence $ \mathbf{x} $ to an output/target sequence $ \mathbf{y} $, where $ \mathbf{x}=(x_{t}, ..., x_{t+L-1}) $ and $ \mathbf{y}=(y_{t}, ..., y_{t+L-1}) $ have the same length. In sequence-to-sequence learning, each element of the output signal is predicted many times and an average of these predictions is used as the final output, which consequently smooths the edges. However, as pointed out in \cite{zhang2018sequence}, it is expected that some of the input sequences will provide a better prediction of a single point than others, particularly those sequences where the point is near the midpoint of the input sequence. \subsubsection{Sequence-to-point Learning} Sequence-to-point learning \cite{zhang2018sequence}, as shown in Figure \ref{fig:fast_seq_to_point} (b), aims to solve this problem of sequence-to-sequence learning by finding a mapping from an input sequence $ \mathbf{x}=(x_{t}, ..., x_{t+L-1}) $ to a single target point $ y_{t+ \left \lfloor L/2 \right \rfloor} $ whose index corresponds to the midpoint of the input sequence, where $ \left \lfloor \cdot \right \rfloor $ denotes rounding down to an integer. One problem of the sequence-to-point learning paradigm is that learning a single point at a time is usually inefficient.
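A sequence-to-point training pair can be sketched as follows; the toy arrays are synthetic and only illustrate how the target index is aligned with the midpoint of the input window.

```python
import numpy as np

# A toy sequence-to-point training pair: the input is a length-L window of
# aggregate readings, the target is the single appliance reading aligned
# with the window's midpoint index t + floor(L/2).
L = 5                                 # receptive field size
t = 3                                 # window start index
x_agg = np.arange(20.0)               # toy aggregate readings
y_app = 0.1 * np.arange(20.0)         # toy appliance readings

window = x_agg[t:t + L]               # input (x_t, ..., x_{t+L-1})
target = y_app[t + L // 2]            # target y_{t + floor(L/2)}

print(window, target)                 # window covers [3..7], target index 5
```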
\subsubsection{Fast Sequence-to-point Learning} In this paper, we propose to use a fast sequence-to-point learning paradigm, as shown in Figure \ref{fig:fast_seq_to_point} (c), to speed up sequence-to-point learning. By introducing a target field \cite{rethage2017wavenet} to replace a single point as output, the computation of a sequence-to-point model can be shared. The input sequence and target sequence are denoted as $ \mathbf{x}=(x_{t}, ..., x_{t+L+r-2}) $ and $ \mathbf{y}=(y_{t+\left \lfloor L/2 \right \rfloor}, ..., y_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ respectively. The length of the input sequence in this case is $(L+r-1)$ where $L$ indicates the size of the receptive field, and $r$ indicates the size of the target field, i.e. the length of the target sequence. When $r=1$, fast sequence-to-point learning degenerates to sequence-to-point learning. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.7\columnwidth]{figs/fast_seq_to_point.png}} \caption{(a) Sequence-to-sequence learning; (b) Sequence-to-point learning; (c) Fast sequence-to-point learning. } \label{fig:fast_seq_to_point} \end{figure} \subsection{Deep Neural Networks} \label{sec:NueralNetworks} In this section, we introduce three classes of neural networks. The first two, CNNs and RNNs, serve as baselines against which we benchmark the performance of WaveNets. \subsubsection{Convolutional neural networks}\label{cnn} Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many applications such as computer vision \cite{krizhevsky2012imagenet}, speech and audio processing \cite{rethage2017wavenet} and natural language processing \cite{dauphin2016language}. With shared filters to capture local patterns of various signals, the number of parameters of a CNN is smaller than that of a fully connected neural network. Time domain CNNs have been applied to energy disaggregation, for example, in \cite{zhang2018sequence}.
Similar to the two-dimensional CNNs used in computer vision \cite{krizhevsky2012imagenet}, a time domain CNN consists of several convolutional layers, each of which contains several filters that are convolved with the output of the previous convolutional layer. The filters are designed to capture local patterns of a signal. For example, in computer vision, the lower level filters of a CNN may learn edge detectors, while the higher level filters may learn to capture high-level profiles of an image. Similarly, in the case of time domain CNNs for energy disaggregation, lower level filters may capture the short-term energy usage patterns of different appliances such as a single activation, while higher level filters may capture long-term patterns such as a complete operating cycle. The time domain convolutional operation can be described as follows: \begin{equation} \label{eq:cnn} v[k^{out}, t] = \sum_{k^{in}} \sum_{\tau=1}^{m} u[k^{in}, t - \tau] \cdot h[k^{out}, k^{in}, \tau] \end{equation} \noindent where $ u $ and $ v $ denote the input and output feature maps of a convolutional layer, and $ k^{in} $ and $ k^{out} $ indicate the indices of the input and output feature maps. The filters are represented as a three dimensional tensor $ h $ and $ m $ is the filter length in the time domain. The first convolutional layer takes a sequence $ \mathbf{x} $ as input. The predicted output is obtained from the last convolutional layer of the CNN. With larger receptive fields, long-term dependencies in the energy consumption data can be taken into account. However, the computation complexity of CNNs increases quadratically with the size of the receptive field. \subsubsection{Recurrent neural networks}\label{rnn} Recurrent neural networks (RNNs) have many successful applications in modeling temporal signals, e.g., audio and speech signal processing \cite{graves2013speech} and natural language processing \cite{chung2014empirical}.
Similar to fully connected neural networks, each input sample $ x_{t} $ is mapped to a hidden unit $ h_{t} $ by a transformation matrix. In addition, there are connections between adjacent hidden units to carry information forward from previous samples. In a non-causal system, an RNN can be bidirectional so as to use information from both the past and the future. A recurrent layer of an RNN can be described as: \begin{equation} \label{eq:rnn_layer} h_{t} = \phi(Wx_{t} + Vh_{t-1} + b) \end{equation} \noindent where $ W $, $ V $ and $ b $ respectively represent the transformation matrix between input samples and hidden units, the transformation matrix between adjacent hidden units, and a bias term; $ \phi $ represents a non-linear function. An RNN may consist of several recurrent layers. The backpropagation through time algorithm \cite{werbos1990backpropagation} is used for training an RNN. One problem of conventional RNNs is gradient vanishing/explosion \cite{pascanu2013difficulty}. This is because the effective depth of an RNN is proportional to the length of the input sequence: when training an RNN, the gradients are multiplied through many time steps and can grow or decay exponentially, which makes the training unstable. To solve the gradient explosion/vanishing problem, LSTM was proposed, which introduces a memory cell with update, forget and output gates to control the information flow \cite{hochreiter1997long}. Later, the Gated Recurrent Unit (GRU) \cite{chung2014empirical} was proposed to simplify the LSTM by reducing the number of parameters. A GRU is described as follows: \begin{equation} \label{eq:gru} \begin{matrix} r_{t} = \sigma(W_{r}x_{t} + U_{r}h_{t-1} + b_{r}) \\ z_{t} = \sigma(W_{z}x_{t} + U_{z}h_{t-1} + b_{z}) \\ \widetilde{h}_{t} = \phi(Wx_{t} + U(r_{t} \odot h_{t-1}) + b) \\ h_{t} = z_{t} \odot h_{t-1} + (1 - z_{t}) \odot \widetilde{h}_{t}.
\end{matrix} \end{equation} where $ r_{t} $ indicates the reset gate at time step $t$, $ z_{t} $ indicates the update gate at time step $t$, $\widetilde{h}_{t}$ indicates the candidate new value for the memory cell at time step $t$, $h_{t}$ indicates the final value for the memory cell at time step $t$, $ \sigma $ represents the sigmoid (non-linear) function and $ \phi $ represents the $\tanh$ (non-linear) function. Units that learn to capture short-term dependencies tend to have frequently active reset gates, while units that learn to capture long-term dependencies tend to have frequently active update gates. \subsubsection{WaveNets}\label{wavenet} Conventional CNNs do not scale well as the input sequences get long, since the computation complexity increases quadratically with the receptive field size. Compared to CNNs, the computation complexity of RNNs increases only linearly with the receptive field size. However, the hidden units of RNNs can only be computed sequentially because the calculation of each hidden unit depends upon the value of the previous one, so RNNs cannot exploit parallel computation efficiently. Therefore, long sequence modeling has been a computational challenge for both CNNs and RNNs. \begin{figure*}[!h] \centering \centerline{\includegraphics[width=\columnwidth]{figs/wavenet.png}} \caption{An example of the WaveNet input-output structure for energy disaggregation. } \label{fig:wavenet} \end{figure*} To solve this problem, WaveNet \cite{van2016wavenet} was proposed for modeling raw audio signals and has been used for modeling time sequences in tasks such as speech denoising \cite{rethage2017wavenet}. WaveNet is an improvement over conventional CNNs in which ``dilated convolutions'' allow a large receptive field to be covered with small filters. A dilated convolution is a convolution with holes: the filters are applied over an area larger than their length by skipping input values with a certain step size.
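The following plain-numpy sketch (illustrative only, not our training code) shows how a dilated filter of length $m$ covers $(m-1)\cdot d + 1$ input samples while using only $m$ weights:

```python
import numpy as np

# "Valid" dilated 1-D convolution: a length-m filter with dilation d covers
# (m - 1) * d + 1 input samples while using only m weights.
def dilated_conv1d(x, w, d):
    m = len(w)
    span = (m - 1) * d + 1          # input samples covered by the filter
    return np.array([
        sum(w[k] * x[i + k * d] for k in range(m))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(8.0)
w = np.array([1.0, 1.0, 1.0])       # filter length m = 3, as in this paper
print(dilated_conv1d(x, w, d=2))    # [ 6.  9. 12. 15.]
```

With $d=1$ this reduces to an ordinary (valid-mode) convolution; stacking layers with dilations $1, 2, 4, \ldots$ yields the exponentially growing receptive field described next.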
Stacked dilated convolutions enable a network to have very large receptive fields with just a few layers. In \cite{van2016wavenet}, a filter length of 2 is applied for modeling causal audio signals. In this paper, following \cite{rethage2017wavenet}, we apply a filter length of 3 to utilise the non-causal information of the input sequences. Figure \ref{fig:wavenet} shows an example of the WaveNet input-output structure used in this paper for both the energy disaggregation and the on/off detection tasks. Following \cite{van2016wavenet}, the dilated layers are embedded into residual blocks, as shown in Figure \ref{fig:residualblock}. The residual output of each block is fed into the input of the next one. The skip outputs of all the residual blocks are summed and then followed by a $ 3 \times 1 $ convolutional layer, which gives the final output as predictions. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.5\columnwidth]{figs/residual_block.pdf}} \caption{Residual block of WaveNets. } \label{fig:residualblock} \end{figure} The following equation characterises the relation between the number of dilated convolutional layers of a WaveNet and its receptive field size: \begin{equation} \label{eq:seql_layer} L = (2^s - 1) * (m - 1) + 1 \end{equation} where $L$ denotes the receptive field size, $s$ denotes the number of dilated convolutional layers, and $m$ denotes the filter length applied to each dilated convolutional layer; in this paper we set $m=3$ following \cite{rethage2017wavenet}. Since WaveNets do not have recurrent connections \cite{van2016wavenet}, they are typically faster than RNNs, especially when applied to long sequences. \subsection{Training a Model For Energy Disaggregation}\label{learning_energy_dis} For the task of energy disaggregation, the training of a fast sequence-to-point model based on CNN/RNN/WaveNet can be implemented with back-propagation.
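As a quick numerical check of the receptive-field relation in Equation (\ref{eq:seql_layer}), the helper below (illustrative only) reproduces the window sizes used later in the experiments:

```python
# Check of the receptive-field relation L = (2^s - 1) * (m - 1) + 1 for a
# WaveNet with s dilated layers and filter length m (m = 3 in this paper).
def receptive_field(s, m=3):
    return (2 ** s - 1) * (m - 1) + 1

# s = 3..10 reproduces the receptive field sizes used in our experiments.
print([receptive_field(s) for s in range(3, 11)])
# [15, 31, 63, 127, 255, 511, 1023, 2047]
```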
The inputs are sequences of aggregate energy readings $\mathbf{x}$ while the target values are sequences of appliance energy readings $\mathbf{y}$. Assuming an output and the corresponding target value are denoted as $ \mathbf{\hat{y}}=(\hat{y}_{t+\left \lfloor L/2 \right \rfloor}, ..., \hat{y}_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ and $ \mathbf{y}=(y_{t+\left \lfloor L/2 \right \rfloor}, ..., y_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ respectively, the loss can then be calculated using the Mean Absolute Error (MAE), which is also used as one of the evaluation criteria (see Section \ref{metric} for details): \begin{equation} \label{eq:loss} loss(\mathbf{\hat{y}}, \mathbf{y}) = \frac{1}{r} \sum_{\tau=\left \lfloor L/2 \right \rfloor}^{\left \lfloor L/2 \right \rfloor+r-1} \left | \hat{y}_{t+\tau} - y_{t+\tau} \right |. \end{equation} The loss function is calculated on mini-batch data. When the target field size $r$ equals 1, Equation \ref{eq:loss} degenerates to the conventional sequence-to-point model. After obtaining the loss, the gradient can be calculated and used to update the parameters of the model. \subsection{Training a Model for On/off Detection}\label{sec:learning_onoff} \subsubsection{Regression Based Learning Framework} The regression based learning framework tackles the problem of on/off detection by utilising the outputs from energy disaggregation. Concretely, for a given appliance, it first trains a fast sequence-to-point model based on CNN/RNN/WaveNet for energy disaggregation. Thereafter, given a new sequence of aggregate energy readings, the energy readings of the appliance are predicted using the disaggregation model. Finally, it derives the on/off state sequence of the appliance by binarising the predictions according to the on-power threshold of the appliance. \subsubsection{Classification Based Learning Framework} The classification based learning framework tackles the problem of on/off detection by directly training a binary classifier.
Concretely, it first binarises the energy readings of a given appliance according to the on-power threshold of the appliance. Thereafter, it trains a binary classifier using the aggregate energy readings as inputs and the binarised appliance readings as the target values. Finally, given a new sequence of aggregate energy readings, the on/off state sequence of the appliance is predicted using the trained classifier. For training a fast sequence-to-point binary classifier for a given appliance, the last layer of a CNN/RNN/WaveNet is a fully connected layer followed by a sigmoid nonlinearity to represent the probability that the appliance is in the on state. Assuming an output and the corresponding target value are denoted as $ \mathbf{\hat{z}}=(\hat{z}_{t+\left \lfloor L/2 \right \rfloor}, ..., \hat{z}_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ and $ \mathbf{z}=(z_{t+\left \lfloor L/2 \right \rfloor}, ..., z_{t+\left \lfloor L/2 \right \rfloor+r-1}) $ respectively, the loss can then be calculated using the binary cross-entropy: \begin{equation} \label{binary_crossentropy} \text{loss}(\mathbf{\hat{z}}, \mathbf{z}) = - \frac{1}{r} \sum_{\tau=\left \lfloor L/2 \right \rfloor}^{\left \lfloor L/2 \right \rfloor+r-1} (z_{t+\tau} \ \text{ln} \ \hat{z}_{t+\tau} + (1 - z_{t+\tau}) \text{ln}(1 - \hat{z}_{t+\tau})). \end{equation} Similarly, the loss function is calculated on mini-batch data. When the target field size $r$ equals 1, Equation \ref{binary_crossentropy} degenerates to the conventional sequence-to-point model. After obtaining the loss, the gradient can be calculated and used to update the parameters of the model. \section{Experiments}\label{experiments} \subsection{Dataset} The dataset used in this paper is REFIT \cite{Murray2017refit}, which is a collection of energy consumption data from 20 households in the UK. The readings were recorded around every 8 seconds and cover a period of over 2 years.
The dataset contains both house-level energy usage (aggregate readings) and appliance-level energy usage (appliance readings) of more than 10 types of appliances. In this paper we focus on the disaggregation of four types of appliances: kettle, microwave, dishwasher and washing machine, which are used by most of the households. \subsection{Data Preprocessing} Firstly, we resampled the data with an interval of 10 seconds to mitigate the fluctuations of time intervals between the original readings, which resulted in 93,976,578 data points. Secondly, following \cite{Kelly:2015}, we filled the gaps in the data shorter than 3 minutes by forward-filling, assuming that these gaps are caused by RF issues, and filled the gaps longer than 3 minutes with zeros, assuming that those gaps are caused by the appliance being switched off. Thirdly, for each type of appliance and for the aggregate, we normalised the data by subtracting the mean values and dividing by the corresponding standard deviations. Thereafter, for each household, we extracted all the possible segments of length $(L+r-1)$ from the aggregate readings by a sliding window of step-size $r$, where $L$ indicates the size of the receptive field and $r$ indicates the size of the target field. These segments of aggregate readings are used as inputs for training and testing. For each of the aggregate segments, we obtained the corresponding target sequence by extracting a segment of consecutive appliance readings of length $r$ such that the centres of the two segments are aligned. Moreover, we removed any input sequence and its corresponding target sequence where the target sequence contains an appliance reading that is larger than the corresponding aggregate reading in the input sequence. Since not every household has all the four appliances, we used the data from the last four households for testing and the data from the rest of the households for training, as shown in Table \ref{tab:household}.
\begin{table*}[!h] \centering \caption{Households used for training and testing per appliance.} \label{tab:household} \begin{tabular}{c|cc} \toprule Appliance & Training household ID & Test household ID \\ \midrule Kettle & [2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13] & [17, 19, 20, 21] \\ Microwave & [2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 15] & [17, 18, 19, 20] \\ Dishwasher & [1, 2, 3, 5, 6, 7, 9, 10, 11, 13, 15] & [16, 18, 20, 21] \\ Washing M. & [1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 13, 15, 16, 17] & [18, 19, 20, 21] \\ \bottomrule \end{tabular} \end{table*} As for on/off detection, instead of normalising the appliance readings, we obtained the output data by binarising the appliance readings using the on-power thresholds shown in Table \ref{tab:data}, in accordance with the previous studies \cite{zhang2018sequence} and \cite{Kelly:2015}. \begin{table*}[!h] \centering \caption{On power threshold for each appliance in watts.} \label{tab:data} \begin{tabular}{c|cccc} \toprule & Kettle & Microwave & Dishwasher & Washing M.\\ \midrule On power threshold & 2000 & 200 & 10 & 20\\ \bottomrule \end{tabular} \end{table*} \subsection{Evaluation Metrics}\label{metric} For the task of energy disaggregation, we used two metrics for evaluation in this paper, i.e. Mean Absolute Error (MAE) and normalised Signal Aggregate Error (SAE). MAE averages the absolute differences between all the predictions and the real consumptions, and is therefore less sensitive to outliers. SAE measures the difference between the total predicted consumption and the total real consumption over a period of time, e.g. a day, a week, a month etc. In our case, the evaluation is over the whole time period of the testing households' data collection.
The formal definitions of MAE and SAE are as follows: \begin{equation} \label{mae} MAE = \frac{1}{T}\sum_{t=1}^{T} |\hat{y}_t-y_t| \end{equation} where $\hat{y}_t$ indicates the prediction of an appliance's energy usage at time $t$ and $y_t$ indicates the corresponding ground truth. \begin{equation} \label{sae} SAE = \frac{|\hat{e}-e|}{e} \end{equation} where $\hat{e}=\sum_t{\hat{y}_t}$ and $e=\sum_t{y_t}$ respectively indicate the predicted energy consumption of an appliance over a certain time period and the corresponding ground truth. For the task of on/off detection, we used the F1 score to evaluate the performance of different models, as the dataset is extremely imbalanced. For example, the kettle is on only for about 1\% of the time. The F1 score \cite{Jeni:2013} can be interpreted as the harmonic mean of precision and recall: \begin{equation}\label{f1} F1 = 2 \times \frac{ precision \times recall }{ precision + recall }. \end{equation} where precision is the fraction of true positive instances among the predicted positive instances, while recall is the fraction of true positive instances over the total number of positive instances. In the rest of this paper, when evaluating a model using the metrics above, we remove from the test set all the pairs of aggregate reading and appliance reading in which the aggregate reading is less than the individual appliance reading or the aggregate reading is zero. \subsection{Experimental Results For Energy Disaggregation} \subsubsection{Experiment Setup} For energy disaggregation, we trained three groups of neural network models. The first group is based on our implementation of the 5-layer CNN proposed in \cite{zhang2018sequence}. The second group is based on a 3-layer bidirectional RNN with GRUs. The third group is based on the WaveNet described in Section \ref{wavenet}. We use the Adam optimizer \cite{Kingma:2014} with a learning rate of 0.001 to minimise the loss shown in Equation \ref{eq:loss}.
These hyper-parameters were chosen experimentally. For each group of models, we explored the influence of two parameters. The first parameter is the receptive field size $L$. In this paper, we used input sequences with a range of receptive field sizes \textit{15}, \textit{31}, \textit{63}, \textit{127}, \textit{255}, \textit{511}, \textit{1023}, \textit{2047}, which correspond to \textit{3}, \textit{4}, \textit{5}, \textit{6}, \textit{7}, \textit{8}, \textit{9}, \textit{10} layers in the WaveNet models. Note that the receptive field sizes of \textit{1023} and \textit{2047} were not applied to the CNN and RNN models for the sake of computation cost in training. The second parameter is the target field size $r$, for which we experimented with four different values: \textit{1}, \textit{10}, \textit{100} and \textit{1000}. We used a mini-batch size of \textit{128} for training all the models. For most appliances, the time during which an appliance is in use is much shorter than the time during which it is not, i.e., the readings are extremely imbalanced between those representing the appliance being in use and those representing it being idle. For example, around 99\% of the kettle readings are less than 10 watts. In such cases, a model that always predicts a very small value, e.g. zero, may perform well in terms of MAE. Therefore, we employ a naive baseline model that always predicts zero (always-zero). The metric SAE focuses on the total energy consumption over a period of time, which makes the mean value of an appliance's energy consumption a promising prediction. To this end, we employ another naive baseline model that always predicts the mean value (always-mean). \subsubsection{Result Analysis} Table \ref{tab:result} shows the best MAE together with the corresponding SAE achieved by the models within each group with a fixed target field size of 100. We can see that the WaveNet model achieves the best MAE on all four appliances.
In particular, for \textit{dishwasher} and \textit{washing machine}, the WaveNet model reduces the MAE by 51\% and 38\% compared to the CNN model, and by 32\% and 14\% compared to the RNN model. As for \textit{kettle} and \textit{microwave}, the WaveNet model and the RNN model obtain similar MAEs. In the case of SAE, the WaveNet model and the RNN model achieve similar results except for \textit{dishwasher}, where the WaveNet model yields an improvement of 49\%. Overall, the WaveNet model outperforms the other two neural network models and the two naive baselines. \begin{table*} \caption{ The appliance-level mean absolute error (MAE) in unit of watt and signal aggregate error (SAE). Best results are shown in bold.} \vspace{6pt} \label{tab:result} \centering \begin{tabular}{c|c|cccccc} \toprule Metrics & Methods & Kettle & Microwave & Dishwasher & Washing M. & Overall \\ \midrule MAE & Always-zero &10.157 &4.386 &20.784 & 6.189 & 10.378$\pm$6.359\\ & CNN \cite{zhang2018sequence} &5.454 &4.002 &21.014 &4.970 & 8.860$\pm$7.036 \\ & RNN &4.839 &3.696 &15.261 &3.602 & 6.849$\pm$4.880\\ & WaveNet &\textbf{4.726} &\textbf{3.686} &\textbf{10.296} &\textbf{3.080} & \textbf{5.446$\pm$2.860} \\ \midrule SAE & Always-mean &1.347 &0.713 &1.121 & 2.121 & 1.325$\pm$0.512\\ & CNN \cite{zhang2018sequence} &0.258 &0.797 &0.976 &0.440 & 0.617$\pm$0.283\\ & RNN &0.249 &\textbf{0.644} &0.377 &\textbf{0.208} & 0.369$\pm$0.170\\ & WaveNet &\textbf{0.224} &0.666 &\textbf{0.192} & 0.267 & \textbf{0.337$\pm$0.191}\\ \bottomrule \end{tabular} \end{table*} Among the four appliances, \textit{microwave} is the only one for which all three neural network models achieve results comparable to those of the always-zero and always-mean models. A closer inspection of the REFIT dataset shows that the microwaves were mostly operated in either the off mode or the standby mode (0 to 5 watts), and the latter accounts for more than 99.6\% of the readings, which is the highest among the four appliances.
To have a visual understanding of the disaggregation results, Figure \ref{fig:visualisation} shows, for each type of appliance, an excerpt of the predictions together with the target values for the CNN, RNN and WaveNet models that achieve the best MAE (as shown in Table \ref{tab:result}). It can be seen that the disaggregation results for \textit{kettle} are similar among the three models. As for \textit{microwave}, the WaveNet model has some predictions that are larger than the target values while the predictions of the RNN model are mostly smaller than the target values. The CNN model, however, does not recognise the operation of the microwave, which is in line with the fact that the corresponding MAE of the CNN model is similar to that of the always-zero model. As for \textit{dishwasher} and \textit{washing machine}, the predictions of the WaveNet model are finer and closer to the target values compared to those of the RNN model, and the predictions of the CNN model are much noisier than those of the other two models. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/inference_example.png}} \caption{Excerpts of disaggregation results for the CNN, RNN and WaveNet models that achieved the best MAE.} \label{fig:visualisation} \end{figure} To investigate how the length of input sequences influences the model performance, we compare the MAEs achieved by the CNN, RNN and WaveNet models as shown in Figure \ref{fig:results-seqlen}. Note that the receptive field sizes of \textit{1023} and \textit{2047} were only applied to the WaveNet models for the sake of computation cost. The size of the receptive field in general does not have much influence on the performance of the CNN models compared to the other two groups of models. The RNN models in general achieve better MAE as the receptive field gets larger, but when the receptive field size exceeds 255 the performance gets worse.
As for the WaveNet models, there is a clear tendency that their performance improves with larger receptive fields in the cases of \textit{dishwasher} and \textit{washing machine}. An explanation is that dishwashers and washing machines have relatively long periods of operation and the models need more information to capture the energy consumption patterns. In the case of \textit{kettle}, the WaveNet models achieve better MAE as the receptive field grows up to 255, and thereafter the performance starts getting worse. This may be explained by the fact that kettles usually have a short operation time and any longer receptive field will introduce too much noise. \begin{figure} \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/MAE_diff_seq_len.png}} \caption{Mean Absolute Error (MAE) of the CNN, RNN and WaveNet models with different receptive field sizes for the four appliances. } \label{fig:results-seqlen} \end{figure} Training efficiency is also an important factor when comparing models. Figure \ref{fig:results-computation} shows the training time of the CNN, RNN and WaveNet models. We can see that when the receptive field size is above 511 the computation time of the CNN models increases quadratically. Among the three groups of models, the WaveNet models have the lowest computation cost when the receptive field size becomes substantially large ($\geq$ 511). Furthermore, the WaveNet models converge much quicker than the other two groups of models. For example, for \textit{washing machine}, the number of iterations that the CNN models and the RNN models needed for training until convergence is more than four times that needed by the WaveNet models. \begin{figure} \centering \centerline{\includegraphics[width=0.5\columnwidth]{figs/computation_time.png}} \caption{Computation time per iteration for the CNN, RNN and WaveNet models with different receptive field sizes.
} \label{fig:results-computation} \end{figure} Target field size is a parameter that is worth exploring as well. In Figure \ref{fig:results-width} we show the relation between the target field size and the performance of the WaveNet models in terms of MAE with a fixed receptive field size of 127. We can see that for all four appliances longer target fields tend to achieve better MAE. This is because longer target fields provide more training samples per mini-batch, which is similar to the effect of applying a larger batch size and makes convergence to a good optimum more likely. Compared to using a larger batch size, the computational efficiency of using longer target fields is much higher due to shared computations. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/MAE_diff_target_field.png}} \caption{Mean absolute error (MAE) of the WaveNet models with different target field sizes. } \label{fig:results-width} \end{figure} \subsection{Experimental Results For On/off Detection} \subsubsection{Experiment Setup} As for the task of on/off detection, our aim is to compare the performance of the regression based learning framework and the classification based learning framework as proposed in Section \ref{sec:learning_onoff}. To this end, we trained two groups of WaveNet models following the two learning frameworks, as it has been shown in the previous subsection that WaveNets achieve the best performance for the task of energy disaggregation. We experimented with a range of receptive field sizes: \textit{15}, \textit{31}, \textit{63}, \textit{127}, \textit{255}, \textit{511}, \textit{1023}, \textit{2047}. The Adam optimizer is used with a learning rate of 0.001 to minimise the loss function shown in Equation \ref{eq:loss} for the regression based learning framework and in Equation \ref{binary_crossentropy} for the classification based learning framework.
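To make the evaluation concrete, the sketch below (our own illustration in Python with NumPy; the threshold value, the example traces and the helper names are hypothetical, not the paper's code) shows how a predicted power trace can be binarised into on/off states and scored with the F1 measure used throughout this subsection:

```python
import numpy as np

def to_onoff(power, threshold):
    """Binarise a predicted power trace into on/off states (1 = on)."""
    return (np.asarray(power) > threshold).astype(int)

def f1_score(pred, true):
    """F1 = 2 * precision * recall / (precision + recall)."""
    pred, true = np.asarray(pred), np.asarray(true)
    tp = np.sum((pred == 1) & (true == 1))   # true positives
    fp = np.sum((pred == 1) & (true == 0))   # false positives
    fn = np.sum((pred == 0) & (true == 1))   # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical ground-truth states and predicted power (in watts).
true = np.array([0, 0, 1, 1, 1, 0])
pred_power = np.array([3.0, 5.0, 1800.0, 2000.0, 40.0, 2.0])
print(f1_score(to_onoff(pred_power, 50.0), true))  # close to 0.8
```

The same scoring applies to both frameworks: the regression based framework thresholds the disaggregated power, while the classification based framework thresholds the classifier's output probability directly.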
\subsubsection{Result Analysis} Figure \ref{fig:classifi-results} shows the F1 scores obtained by the WaveNet models trained respectively under the two learning frameworks with an increasing receptive field size. For the binary classifier under the classification based learning framework, we use a cut-off probability of 0.3, i.e., when the classifier outputs a value larger than 0.3 we consider the appliance to be in the on state; otherwise, it is in the off state. \begin{figure}[!h] \centering \centerline{\includegraphics[width=0.99\columnwidth]{figs/F1_diff_seq_len.png}} \caption{F1 score of the WaveNet models trained following the two learning frameworks.} \label{fig:classifi-results} \end{figure} We can see that in the case of \textit{kettle} and \textit{dishwasher} the classification based learning framework achieves a better F1 score than the regression based learning framework when the receptive field size (or the number of dilated convolutional layers) is small. With larger receptive fields the two frameworks perform similarly. As for \textit{microwave} and \textit{washing machine}, the classification based learning framework achieves a better F1 score than the regression based learning framework over all the receptive field sizes. \section{Conclusions}\label{conclusion} In this paper, we investigated the problem of energy disaggregation together with the problem of appliance on/off detection. Firstly, we formalised both problems and illustrated the learning/training paradigms used in the literature, which motivated us to introduce the fast-sequence-to-point learning paradigm. By comparing with CNN models and RNN models, we studied the application of the recently proposed WaveNet models to the problem of energy disaggregation. With an evaluation on a real-world dataset, we showed that our disaggregation models based on WaveNets outperform the previous works based on CNNs and RNNs. The empirical evidence demonstrates WaveNets' superiority in handling long sequences.
Through extensive experiments with input sequences of varying receptive field sizes, we have shown how the receptive field size affects the disaggregation performance for different appliances. Furthermore, we studied the problem of appliance on/off detection as a natural continuation of the disaggregation problem and investigated the performance of two learning frameworks: (1) a regression based learning framework utilising the results from energy disaggregation and (2) a classification based learning framework that directly trains a binary classifier. We showed empirically that the classification based learning framework outperforms the regression based learning framework in terms of F1 score. This indicates that for applications targeting appliance on/off states, directly training a binary classifier would be a better choice. For future work, we intend to explore the use of prior knowledge to enhance the learning of WaveNet models. Another interesting direction for future work is to make use of the on/off states of the appliances to improve the results of energy disaggregation. For example, we could use the on/off states of an appliance to condition the predictions of the amount of energy the appliance consumes. \section{Acknowledgement} This work was carried out as part of the ``HomeSense: digital sensors for social research'' project funded by the Economic and Social Research Council (grant ES/N011589/1) through the National Centre for Research Methods. Qiuqiang Kong was supported by EPSRC grant EP/N014111/1 ``Making Sense of Sounds'' and a Research Scholarship from the China Scholarship Council (CSC) No. 201406150082. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro} Ammonia and volatile organic compounds (VOCs) are associated with numerous health problems. Although both are naturally occurring, they can nonetheless cause serious health issues in high concentrations. For example, exposure to ammonia in high concentrations harms the skin, lungs and eyes. Leaks of methane and other VOCs contribute to global warming, and VOCs such as benzene and toluene are carcinogenic \cite{jones1999indoor,brown1994concentrations,maltoni1989benzene,kang2017indoor}. In this paper, we consider both an infrared (IR) and a chemical sensor system for early detection and, thus, prevention of dangerous gas leaks\footnote{This work was presented in part at the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, May 2019 \cite{badawi2019detecting}.}. Mobile infrared and chemical sensors can be part of an open air cyber-physical system (CPS) \cite{badawi2019detecting}. We use the time-series data obtained by the sensors in order to detect accidental and/or deliberate gas vapor leaks. The main contribution of this paper centers on the exploitation of the time-series data that sensors produce, rather than the conventional reliance on a single or a couple of sensor readings for leak detection. Some VOC gas vapors such as ethane and ammonia absorb infrared light in the Long Wave Infrared (LWIR) band, while others such as methane absorb in the Medium Wave Infrared (MWIR) band. The absorbance of infrared light by ammonia at different wavelengths is shown in Fig.~\ref{fig:ammonia_absorbance}. We can easily observe the existence of VOC gas vapor using infrared (IR) cameras in open air as shown in Fig.~\ref{fig:snapshot}. In this figure, a dark smoke-like region denotes the image of VOC gas vapor.
However, the distance between the sensor and the source, and infrared reflections from the background, significantly affect the recorded level \cite{erden2010voc,cetin2013method}. Conventional optical devices, such as gas chromatographs and MWIR cameras, are generally expensive. A cheaper alternative would be the use of IR sensors and chemical gas sensors. Yet chemical gas sensors incur degradation in their sensitivity over time. Consequently, identically manufactured sensors are likely to yield significantly different responses upon exposure to gas analytes under identical conditions \cite{gopel1991definitions,davide1996frequency,zuppa2004drift,vergara2012chemical,artursson2000drift}. This problem is known in the literature as the \emph{sensor drift problem}. Causes of sensor drift can be attributed to two phenomena, namely, physical changes in the structure of the sensor and changes in the operating environment. The former case is known as \emph{first-order sensor drift}. It is caused by sensor aging or by sensor ``poisoning"\footnote{A process by which the sensor surface absorbs some compounds irreversibly, thus reducing its resistance sensitivity \cite{williams1995detection}.}. Unfortunately, neither poisoning nor aging is reversible, as the physical structure of the sensor will have been permanently damaged or at least altered. The latter case is known as \emph{second-order sensor drift} and is caused by external uncontrollable environmental changes, such as temperature and humidity variations. In this regard, the sensor response will differ from that expected under the original settings. Consequently, any decision thresholds that are optimal prior to sensor drift are likely to exhibit sub-optimal sensitivity and/or specificity once the aforementioned changes take place.
Similarly, while it is not possible to measure the concentration of the gas using MWIR and LWIR sensors in open air, it is possible to record a time-varying signal and detect the existence of gas leakage, as shown in Fig. \ref{fig:voc_example}, using a machine learning algorithm such as a neural network. The sensor signal exhibits sudden jumps and fluctuations due to a gas vapor leak. Uncalibrated IR sensor intensity measurements suddenly drop from 95 to 70 and fluctuate because of wind, as shown in Fig. \ref{fig:voc_example}. \begin{figure} \centering \includegraphics[width=\linewidth]{amonia_infrared_cropped.png} \caption{Infrared spectrum of ammonia. The figure is taken from NIST \cite{nist}.} \label{fig:ammonia_absorbance} \end{figure} \begin{figure}[b] \centering \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth, height=0.62\linewidth]{frame_vid2g} \end{minipage} \begin{minipage}{.4\linewidth} \includegraphics[width=\linewidth]{snapshot2} \end{minipage} \caption{Two infrared images of VOC gas leaks. Red rectangles contain gas-leak regions. Green rectangles contain leak-free regions. Images are downloaded from FLIR Systems and Infrared Cameras Inc. \cite{ferret.com.au_2012, cam_inc}, respectively.} \label{fig:snapshot} \end{figure} In this paper, we analyze the temporal sensor signals using convolutional neural networks, additive neural networks and the discriminator of a generative adversarial network (GAN) to detect and classify VOC gas leaks and other dangerous gas emissions. The proposed analysis is applicable to Chemically-sensitive Field Effect Transistors (ChemFETs), Electrochemical Impedance Spectroscopy (EIS) and infrared sensors, as they all produce time-varying signals. The rest of the paper is organized as follows. Section \ref{sec:algo} describes the machine learning algorithms used in this paper. Section \ref{sec:data_and_meth} presents experimental results.
We use an infrared data set and two publicly available chemical sensor drift data sets obtained at the University of California at San Diego (UCSD) \cite{vergara2012chemical} and \cite{fonollosa2015reservoir}. The paper finishes by offering a brief set of conclusions in Section \ref{sec:conclusion}. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{time_series_vid2.eps} \caption{Example infrared sensor time-series data near a gas leak. As one can see, it is not possible to find a threshold that can isolate leak-free intensity signals from those corresponding to a gas leak. Furthermore, the intensity of leak-free point signals is time-varying. This demonstrates the effect of noise, resolution and lighting factors that in turn lend further complexity to the task of distinguishing between the two classes of signals. The two sets of examples are distinguished by dotting. } \label{fig:voc_example} \end{figure} \section{Deep Learning Algorithms for IR and Chemical Sensor Data Processing} \label{sec:algo} In this paper, we consider three tasks. Task 1 is infrared sensor-based gas-leakage detection. In tasks 2 and 3, we identify different types of gas analytes. Our first network is an energy-efficient network, namely, an additive neural network, which is a neural network that performs no vector multiplication except in its last layer. Our second neural network is the discriminator of a generative adversarial neural network, which we refer to as DiscGAN for short. \subsection{Convolutional Neural Networks} \label{subsec:ConvNet} Convolutional neural networks (or ConvNets) have been extensively used in computer vision \cite{krizhevsky2012imagenet,lecun1995convolutional} and time-series data analysis \cite{langkvist2014review}. In ConvNets, convolutions (or local correlations) between the inputs and the filter weights are used to extract local features at different scales in subsequent layers.
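As a minimal illustration of this local-correlation primitive (our own sketch in Python with NumPy; the hand-picked difference filter is hypothetical, whereas in a ConvNet the filter weights are learned by backpropagation), a short filter slid over a 1-D intensity signal responds exactly at the sudden jumps that characterize a leak:

```python
import numpy as np

def conv1d_valid(x, w):
    """1-D 'valid' cross-correlation: the local-correlation
    primitive of a convolutional layer."""
    n, k = len(x), len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(n - k + 1)])

# An intensity trace dropping from 95 to 70, as in the leak example;
# a discrete-derivative filter responds only at the transition.
x = np.array([95.0, 95.0, 95.0, 70.0, 70.0, 70.0])
w = np.array([1.0, 0.0, -1.0])
print(conv1d_valid(x, w))  # nonzero only around the sudden drop
```

A trained network stacks many such learned filters, with pooling and nonlinearities in between, rather than relying on a single fixed filter.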
\subsection{Additive Neural Networks (AddNet)} \label{subsec:AddNet} Despite their ability to learn and recognize images and signals, deep learning algorithms are computationally expensive. This is attributed to the large number of add-multiply operations needed to realize convolutions. This poses a problem when it comes to using such methods on platforms where energy is limited. As a result, simpler and, thus, more efficient algorithms are generally required to implement computationally expensive deep learning methods in such cases. There have been attempts to leverage convolutional neural networks across energy-limited devices by means of methods that aim either to implement fewer dot-product operations, or to replace dot-product operations with computationally simpler operations. Binarizing the weights and/or the activations results in replacing real-number multiplication operations with binary logical operations when realizing convolution, as in the case of BinaryConnect \cite{courbariaux2015binaryconnect}, XNOR-Net \cite{rastegari2016xnor} and Binarized Neural Networks \cite{courbariaux2016binarized}. An additive neural network (AddNet) falls under the second category, i.e., replacing real-valued multiplication operations in vector-vector and matrix-vector product operations by special addition operations. The new ``product'' operation comprises binary sign calculation, unsigned addition and regular addition. In what follows, we define the scalar version of our binary operation and extend it straightforwardly to its vector counterpart. In this regard, let $x$ and $y \in \mathbf{R}$; the multiplication-devoid (abbreviated md) operation, denoted by $\oplus$, is defined as follows: \begin{equation}\label{eqn_mf_def1} x \oplus y := \text{sgn}(x.y) (|x| + |y|) \end{equation} where sgn denotes the {\em signum} function.
Alternatively, we can express the $\oplus$ operation as follows: \begin{equation} x \oplus y := \text{sgn}(x)y + \text{sgn}(y)x \end{equation} This is because $x=\text{sgn}(x)|x|$. One key property of the md operation is that it preserves the sign of regular multiplication operations \cite{badawi2017multiplication, afrasiyabi2018non}. We define the vector version of the md operation as follows. Let $\bf{x}$ and $\bf{w}$ be two vectors in $\bf{R}^{N}$. The md dot ``product'' is defined as: \begin{equation} \label{eqn_mf_vec_def} \mathbf{w}^T\oplus \mathbf{x} := \sum_{i=1}^N \text{sgn}(x_i.w_i ) (|x_i|+ |w_i|) \end{equation} It can be seen that the md operation expressed in Eq. \ref{eqn_mf_vec_def} requires no real-valued multiplication whatsoever. As such, instead of using add-multiply operations as in an ordinary dot product, we use ordinary addition and addition with sign multiplication in the md vector operation. Furthermore, we can restrict the operands $x_i$ and $w_i$ to be 8-bit numbers in order to speed up the vector addition operations. Another property of the md operation is that it induces the $\ell_1$ norm. This is shown as follows: \begin{equation} \mathbf{x}^T \oplus \mathbf{x} = \sum_{i=1}^N \text{sgn}(x_i.x_i) (|x_i| + |x_i|) = 2{||\mathbf{x}||}_1 \end{equation} In the context of neural networks, we use convolution and matrix-vector multiplication operations in convolutional and dense layers, respectively. In AddNet, we replace the aforementioned dot-product operations with the md vector product. 
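The md operation of Eqs.~\ref{eqn_mf_def1} and \ref{eqn_mf_vec_def} can be sketched in a few lines of Python with NumPy (an illustrative sketch only; \texttt{np.sign(w * x)} is a notational shortcut, since in an actual energy-efficient implementation the sign of a product would be obtained by comparing sign bits rather than by multiplying):

```python
import numpy as np

def md(w, x):
    """Multiplication-devoid (md) 'dot product':
    sum_i sgn(w_i * x_i) * (|w_i| + |x_i|).
    The sign of a product is just an XOR of the operands' sign bits,
    so no real-valued multiplication is needed in hardware."""
    w = np.atleast_1d(np.asarray(w, dtype=float))
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return float(np.sum(np.sign(w * x) * (np.abs(w) + np.abs(x))))

# Scalar example from the text: 3 (md) 0.1 = 3.1, larger than 3 * 0.1 = 0.3.
print(md(3.0, 0.1))  # 3.1
# The md operation induces the l1 norm: x^T (md) x = 2 * ||x||_1.
x = np.array([1.0, -2.0, 0.5])
print(md(x, x))      # 7.0 == 2 * (1 + 2 + 0.5)
```

The second print illustrates the $\ell_1$-inducing property stated above, and the first illustrates why md responses tend to be larger in magnitude than ordinary products, motivating the scaling factor introduced below.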
The feed-forwarding pass in dense layers in a neural network can be expressed as follows: \begin{equation} o^l_i = \phi\big({\mathbf{w}_i^l}^T \mathbf{o}^{l-1} + b^l_i \big) \end{equation} where the superscript denotes the layer index, the subscript the neuron index, $\mathbf{w}_i^l$ the weights connecting the output of the previous layer (the $(l-1)$st layer) to the $ith$ neuron, $o_i^l$ the output of the $ith$ neuron, and bold $\mathbf{o}^{l-1}$ the vector output of the previous layer. $\phi$ is the non-linearity function applied element-wise and, finally, $b^l_i$ denotes the bias term added to the pre-activated response ${\mathbf{w}_i^l}^T \mathbf{o}^{l-1}$. Similarly, we can define AddNet layers by replacing the dot-product ${\mathbf{w}_i^l}^T \mathbf{o}^{l-1}$ by our md operator as follows: \begin{equation}\label{eqn:mf_nn_1} o^l_i = \phi\big({\mathbf{w}_i^l}^T \oplus \mathbf{o}^{l-1} + b_i^l \big) \end{equation} Since the md operator is additive, it will result in a larger output than ordinary multiplication does when either of the operands is of small magnitude, e.g. $3\oplus0.1=3.1> 3\times 0.1=0.3$. In the context of neural networks, the layer outputs and the weights are usually small values. As a result, the responses of the md layers will be of larger variance than those of the regular layer. This poses a problem in deep layers, where the dimension of the dot-product is quite large. In other words, if the depth of a convolutional layer is 64 and the kernel size is $3 \times 3$, the convolution operations will carry out dot-products between two vectors, each of which $\in \mathbb{R}^{3\times3\times64}$. In the case of the md layer, this will cause the output to exhibit inordinately high magnitudes. In order to overcome this, we introduce a scaling factor $\alpha$. As such, the feedforwarding pass in Eq. 
\ref{eqn:mf_nn_1} becomes \begin{equation}\label{eqn:mf_nn_2} o^l_i = \phi\big(\alpha_i^l({\mathbf{w}_i^l}^T \oplus \mathbf{o}^{l-1}) + b_i^l \big) \end{equation} The scaling factor $\alpha^l_i$ enables us to control the range of the output prior to applying the activation function $\phi$ and, thus, leads to a controlled range of responses in subsequent layers. Note that the scaling by $\alpha^l_i$ in Eq. \ref{eqn:mf_nn_2} implies real-valued multiplication. Nevertheless, it requires only one real-valued multiplication per neuron. Therefore, carrying out scaling is not computationally expensive. Numerous options exist for selecting the scaling factor $\alpha^l_i$. One possibility is to set $\alpha^l_i$ to $\frac{1}{{||w_i^l||}_1}$, i.e., the reciprocal of the $\ell_1$ norm of the associated weights. Another option is to make $\alpha^l_i$ trainable by backpropagation. The latter delivers more flexibility for the model. Nevertheless, batch normalization is a common practice in neural networks and has been shown to be quite effective in accelerating the training of deep networks \cite{ioffe2015batch}. Therefore, one can simply apply batch normalization to the pre-activation responses in AddNet. Such normalization eliminates the need to carry out scaling by $\alpha^l_i$, as it will be subsumed by the scaling induced by batch normalization. The proof that AddNet with linear and/or ReLU activation functions satisfies the {\em universal approximation property} over the space of Lebesgue integrable functions can be found in \cite{cybenko1989approximation}. As for training the md layers by backpropagation, it is worth mentioning that the derivative of the {\em signum} function used in the definition has to be computed. This is problematic because $\frac{d ~ \text{sgn}(w)}{dw} = 2 \delta(w)$, where $\delta$ is the Dirac-delta function. In practice, this means that the derivative of the {\em signum} function is zero almost everywhere except when $w=0$.
The partial derivative of the md operator w.r.t. $w$ is: \begin{equation} \frac{\partial (w \oplus x)}{\partial w} = \text{sgn}(x)+ 2x\delta(w) \end{equation} We approximate the derivative of the {\em signum} operator using the hyperbolic tangent as follows: \begin{equation}\label{eqn:mf_operator_derv2} \frac{d ~ \text{sgn}(w)}{d w} \approx \frac{d ~ \tanh(aw)}{d w} = a \sech^2(aw) \end{equation} where $\sech(x) = \frac{2}{e^x+e^{-x}}$ is the hyperbolic secant function, and $a$ is a hyperparameter indicating how sharp the hyperbolic tangent is. The larger the hyperparameter $a$ is, the closer $\tanh$ is to the {\em signum} function. Figure \ref{fig:sech_sq} shows the approximate derivative of the {\em signum} function for $a=10$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sech2} \caption{The derivative of $\tanh(a w) = a \sech^2(a w)$ as a function of parameter $w$, with $a$ set to 10.} \label{fig:sech_sq} \end{figure} As we can see from Fig. \ref{fig:sech_sq}, the derivative has high magnitude for $w$ values close to zero, whereas it is effectively zero for large values. This can be seen as allowing small weights to have finer updates than larger weights and thus allowing them to change their sign more often during training. We found empirically that this approximate derivative computation provides satisfactory convergence rates in Google's Tensorflow software. \subsection{DiscGan (Discriminator of GAN as Classifier)} Generative Adversarial Networks (GAN) have become the benchmark in image synthesis \cite{goodfellow2014generative, radford2015unsupervised}. A typical GAN has a generative network, which attempts to generate images (or data) resembling real images from noise input, and a discriminator network, which attempts to discriminate between the real images and those synthesized by the generator. 
The generator and the discriminator are optimized in an adversarial scheme, i.e., the generator tries to fool the discriminator with the synthetic data it produces, and, in turn, the discriminator tries to counteract the generator by discriminating between the real data samples and the fake ones. In this paper, our aim is not to synthesize realistic data but rather to make use of the adversarial nature of GAN training in order to obtain a discriminator network capable of classifying the input with an unbalanced set of training data. As gas leak recordings may be far fewer than clean air recordings, we use the generator of the GAN to compensate for the class with fewer data instances by producing ``artificial" gas leak data during training. In this regard, we perform a two-phase training of the GAN. First, we carry out adversarial training of both the discriminator and the generator using the data of one of the classes. In the second phase, we use data from both classes and carry out supervised binary-classification training of the discriminator, which now acts as a classifier. In this setting, let $x^i$ represent the $ith$ data instance of one of the classes. In this case, $x^i$s denote the gas leak recordings (or the anomalous class). Let $z$ be a random noise vector, e.g. Gaussian noise or uniform noise. Let $D$ be the discriminator and $G$ be the generator, with each having a set of parameters $\theta_D$ and $\theta_G$, respectively. In the adversarial-training phase, we seek to optimize the following loss function: \begin{equation}\label{eqn:GAN_1} \max_{\theta_D} \min_{\theta_G} \sum_i \log(D(x^i)) + \sum_i \log(1-D(G(z^i))) \end{equation} where $D(x^i)$ is the soft prediction result of the discriminator corresponding to data point $x^i$. From the discriminator's perspective, the prediction output $D(x^i)$ should be close to 1 because $x^i$ is ``real".
The generator $G$ produces ``fake" data signals from the noise vector $z^i$, that is $G(z^i)$, and the prediction $D(G(z^i))$ should be close to zero because $G(z^i)$ is an artificial data instance. The generator, on the other hand, will try to produce $G(z^i)$ that will be assigned a prediction $D(G(z^i))$ close to 1. Once the first stage of training is accomplished, we move on to the second stage of supervised training on the entire training data, in which the cost function we seek to minimize is the regular binary cross entropy function $CE$ expressed as follows: \begin{equation}\label{eqn:cross_entropy} CE := -\frac{1}{N} \sum_i \Big( (1-t^i)\log(1-D(x^i)) + t^i \log(D(x^i)) \Big) \end{equation} where $t^i ~ \in\{0,1\}$ denotes the true class of $x^i$. When there are multiple classes, we can still use the discriminator of a GAN with a slight modification of the loss functions. In this regard, let us assume that there are $N$ classes. In this case, the one-hot encoded label for each input is an $N$-dimensional vector, with all entries equal to zero except for the $kth$ entry, where $k$ is the true class. During training, the discriminator (or classifier) will minimize the cross entropy of the softmax layer applied at the output layer ($N$ logits). The generator $G$ will attack the output of the $kth$ node. Here we consider the output of the $kth$ node to be the logit of a binary class, i.e., the adversarial loss criterion becomes: \begin{equation}\label{eqn:ce_g} \max_{\theta_D} \min_{\theta_G} \log\Big(D_{k}\big(G(z)\big)\Big) \end{equation} where $D_{k}\big(G(z)\big)$ is the discriminator's \emph{sigmoidal} response at the $kth$ node, i.e., we apply the sigmoid function to the logits before taking the logarithm in determining the loss. Note that the loss here is different from the multi-class case, in which we consider multi-class logits, i.e., we use sigmoid normalization instead of the softmax normalization.
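As a concrete illustration, the supervised-phase loss of Eq.~\ref{eqn:cross_entropy}, averaged over a mini-batch, can be sketched as follows (our own sketch; the clipping constant is an added numerical guard against $\log(0)$, not part of the formula):

```python
import numpy as np

def discriminator_ce(d_out, t, eps=1e-7):
    """Binary cross entropy over N samples:
    -(1/N) * sum_i [ (1 - t_i) log(1 - D(x_i)) + t_i log(D(x_i)) ].
    Predictions are clipped away from {0, 1} to avoid log(0)."""
    d = np.clip(np.asarray(d_out, dtype=float), eps, 1.0 - eps)
    t = np.asarray(t, dtype=float)
    return float(-np.mean((1.0 - t) * np.log(1.0 - d) + t * np.log(d)))

# Confident, correct predictions give a loss near 0;
# uninformative predictions of 0.5 give log(2).
print(discriminator_ce([0.99, 0.01], [1, 0]))  # about 0.01
print(discriminator_ce([0.5, 0.5], [1, 0]))    # about 0.693
```

The same function, evaluated on a mini-batch of real and generated samples with targets 1 and 0, also serves as the discriminator's objective during the adversarial phase.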
In practice, since we do a mini-batch update, we take the average of the loss functions and minimize them based on the mini-batch gradients. \section{Datasets and Experimental Results} \label{sec:data_and_meth} \subsection{Infra-red VOC Dataset} \label{ssec:voc_dataset} Our first data set consists of infrared imaging signals of VOC gas leaks in open air and clean air recordings. Specifically, we have two classes of discrete-time signals corresponding to VOC gas leaks and clean air, respectively. Each signal is a time series containing 50 samples, corresponding to two seconds of recording at a sampling rate of 25 samples per second. The recorded value varies in open air because of background temperature variations and low-resolution error, as can be observed in Fig. \ref{fig:voc_example}. Furthermore, the sensors may not be calibrated in practice, so their sensitivity may differ across time. We gathered about 30,000 VOC gas leak and 30,000 clean air data instances. The images were obtained using an MWIR camera produced by FLIR Systems and Infrared Cameras Inc. \cite{ferret.com.au_2012,cam_inc}. VOC gas absorbs the infrared light, appearing as a white cloud in the black-hot mode infrared image as shown in Fig. \ref{fig:snapshot}. In these videos, a gas leak erupts from the source with the gas spreading out as time progresses. We manually selected regions of interest and assigned normal event designations to temporal measurements where no gas is present throughout these series, while designating the rest as anomalous events. We used min-max normalization in order to scale signal data points between 0 and 1. The normalized signal $\hat{x}$ is obtained as follows: \begin{equation} \label{eqn:minmax} \hat{x}[n]=\frac{x[n]-\min(x)}{\max(x)-\min(x)},\quad n=0,1,... \ ,49 \end{equation} where $\max(x)$ and $\min(x)$ represent the maximum and minimum values of a given infrared signal $x$, respectively.
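The normalization of Eq.~\ref{eqn:minmax} can be sketched directly (an illustrative sketch; the sample trace below is hypothetical, echoing the uncalibrated drop from 95 to 70 mentioned earlier):

```python
import numpy as np

def minmax_normalize(x):
    """Scale a signal to [0, 1]: (x - min) / (max - min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# An uncalibrated intensity trace maps onto [0, 1] regardless of the
# sensor's absolute offset, removing inter-sensor calibration differences.
sig = np.array([95.0, 93.0, 70.0, 72.0, 80.0])
norm = minmax_normalize(sig)
print(norm.min(), norm.max())  # 0.0 1.0
```

Note that the scaling assumes a nonconstant window (otherwise the denominator is zero); a 50-sample window containing a leak transition satisfies this in practice.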
We used convolutional neural networks with the architecture specified in Table \ref{tab:cnn_data1_arch}. In order to obtain more temporal data points, and to ensure the network is translation-invariant to the gas eruption location, we chose to randomly crop the input data into temporal signals of size $32$ each. \begin{table}[b] \centering \caption{Architecture of the convolutional neural network used in classifying the data set of Sec. \ref{ssec:voc_dataset}} \begin{tabular}{c|c} \toprule Layer & Specification\\ \midrule Input Layer& input size: $32 \times 1$\\ Conv Layer & 16 $3\times 1$ filters, no strides\\ Max-pooling Layer & down-sampling by 2\\ Batch-normalization Layer& -\\ \midrule Conv Layer & 32 $3 \times 16$ filters, no strides\\ Max-pooling Layer & down-sampling by 2\\ Batch-normalization Layer& -\\ \midrule Conv Layer & 64 $3 \times 32$ filters, no strides\\ Max-pooling Layer & down-sampling by 2\\ Batch-normalization Layer& -\\ \midrule Global Average-pooling Layer&output size: 64\\ \midrule Dense Layer& output size: 64\\ Batch-normalization Layer&-\\ Output Linear Layer& output size: 1\\ \bottomrule \end{tabular} \label{tab:cnn_data1_arch} \end{table} We divided our data set into three disjoint sets. The training data consist of 8,000 recordings of each class. Another set of 8,000 recordings of each class is used as the validation data set. The rest of the data was reserved for testing. We trained our networks using the RMSProp optimizer \cite{tieleman2012lecture}. We tested the hypothesis of whether dropout helps achieve better results \cite{srivastava2014dropout} by using a dropout rate of $50\%$. As for the GAN approach, we used a generator which is a multi-layer perceptron (MLP) with one hidden layer of size 256. The regular convolutional neural network and AddNet exhibit comparable results. We obtained an accuracy of $99.8\%$ for no-gas data and $99.7\%$ for gas-leak data with a regular ConvNet.
AddNet attained a recognition rate of $98.9\%$ for no-gas data and $99.3\%$ for gas-leak data. In the second set of experiments, we assumed an unbalanced data set. In practice, we may not have as many VOC or ammonia gas leak recordings as clean air recordings. We trained the models with only 50 recordings of gas leak signals against 8,000 clean air recordings. The test data set contains 14,000 recording instances each of VOC gas leaks and clean air. Classification results are also summarized in Table \ref{Tab:VOC_results}. AddNet produces the best results, but the discriminator of the GAN is also quite close to AddNet. The confusion matrix of the results of the best model is given in Table \ref{tab:conf_mat_data1}. \begin{table}[t] \centering \caption{Accuracy results for infra-red VOC data. Classifiers are trained with only 50 VOC gas leak recordings vs 8000 clean air recordings.} \begin{tabular}{c|c|c|c} \toprule Model & \begin{tabular}{@{}c@{}}No-gas \\Accuracy\\(specificity)\end{tabular} & \begin{tabular}{@{}c@{}} Gas-leak\\ Accuracy\\ (sensitivity) \end{tabular} &\begin{tabular}{@{}c@{}} Total \\ Accuracy \end{tabular}\\ \midrule \begin{tabular}{@{}c@{}} ConvNet\\(dropout $50\%$) \end{tabular}& $98.3\%$ & $95.8\%$ & $97.1\%$\\ \begin{tabular}{@{}c@{}} ConvNet\\no dropout \end{tabular} & $98.0\%$ & $94.2\%$ & $96.1\%$\\ \begin{tabular}{@{}c@{}} AddNet\\(dropout $50\%$) \end{tabular} & $98.2\%$ & $96.0\%$ & $97.1\%$\\ \begin{tabular}{@{}c@{}} AddNet\\(no dropout) \end{tabular} & $99.1\%$ & $97.3\%$ & $98.2\%$\\ \begin{tabular}{@{}c@{}} DiscGAN\\ \end{tabular} & $99.0\%$ & $97.1\%$ & $98.1\%$\\ \bottomrule \end{tabular} \label{Tab:VOC_results} \end{table} \begin{table}[b] \centering \caption{Confusion matrix for the best achieving neural network (AddNet with no dropout) over the testing data. The true positive rate (sensitivity) is $97.3\%$. The true negative rate (specificity) is $99.1\%$.
} \begin{tabular}{ccccc} \toprule \multirow{3}{*}{\begin{tabular}{@{}c@{}}Actual \\ Class\end{tabular}}&\multicolumn{2}{c}{\begin{tabular}{@{}c@{}} Predicted Class\\\bottomrule \end{tabular}}&\multirow{3}{*}{\begin{tabular}{@{}c@{}} Total \\Count \end{tabular}}\\ \multicolumn{2}{c}{}Leak&No Leak&\\ \multicolumn{2}{c}{}(positive)&(negative)&\\ \midrule Leak (positive) &13,622 &378&14,000\\ No Leak (negative)&126&13,874 &14,000\\ \bottomrule \end{tabular} \label{tab:conf_mat_data1} \end{table} We also investigated pruning the weights in both AddNet and ConvNet during inference. To do so, we discard the magnitudes of the smallest-magnitude weights while retaining their sign information. We keep the bias coefficients and the coefficients of the last layer intact. Results of various pruning rates are shown in Table \ref{tab:compression}. Notably, in AddNet we can discard the magnitude information of the weights up to a high rate ($67.4\%$) without severely degrading performance. On the other hand, the magnitude information is quite critical in the case of a regular ConvNet. These results clearly show the advantages of AddNet, which requires less memory on a mobile device and consumes less energy, as it performs far fewer arithmetic operations during inference. \begin{table}[!htbp] \centering \caption{Effect of compressing weights of AddNet and ConvNet by discarding the smallest $K\%$ magnitude while keeping the sign information. ConvNet fails to produce reasonable results when the compression rate exceeds $16.1\%$.
The compression rate is estimated by allocating 32 bits to intact weight values and 1 bit for every binarized weight factor.} \begin{tabular}{ccccccc} \toprule \begin{tabular}{@{}c@{}} Model\\ Accuracy \end{tabular} & \multicolumn{6}{c}{\begin{tabular}{@{}c@{}} Weight Compression\\ Rate (smallest K\%) \\ \bottomrule \end{tabular}}\\ & $0$ &$16.1$ & $19.7$ & $67.4$ & $76.8$& $86.6$ \\\toprule AddNet & 98.9 &97.2&97.9&98.0&97.1& 61.3\\\midrule ConvNet & 99.8 &67.4& $-$&$-$ &$-$&$-$\\ \bottomrule \end{tabular} \label{tab:compression} \end{table} \subsection{Gas Sensor Array Recordings under Dynamic Gas Mixtures}\label{sec_data2} We consider a gas type identification problem with three types of gases to identify, namely CO, ethylene and methane. We used the data set obtained by Fonollosa et al. \cite{fonollosa2015reservoir}. The data set consists of time-series measurements of a sensor array of 16 chemical metal-oxide sensors under exposure to two different kinds of gas mixtures, ethylene and methane in air, and ethylene and CO in air. Sensors were exposed to volatile organic compounds at different concentration levels under tightly-controlled operating conditions during the experiment. The data are sampled at a frequency of 100~Hz. The 16 chemical sensors are of four different types, with four identical sensors of each type.\footnote{For more details, the reader may refer to the original paper \cite{fonollosa2015reservoir}.} Furthermore, switching between different mixtures of VOCs may occur too fast, making it challenging, if not impossible, for the sensors to reach steady state. This makes identifying the gas analytes with a machine learning method difficult. The recorded sensor data are deposited in the UC Irvine Machine Learning Repository in the form of two long time series. We extracted portions of the time series such that the sensor array is exposed to one type of analyte at a given time.
Each recording corresponds to 100 seconds of data. We observed that it is enough to sample the sensor response every 2 seconds. Example sensor response signals to CO, ethylene and methane gas vapor exposures are shown in Fig. \ref{Fig:three_gases}. Each sub-figure contains four different sensor responses. We gathered a total of 215 instances from the raw recordings, comprising 49 CO, 116 ethylene and 50 methane time-series signals. Each instance has $50$ time measurements for each sensor. Thus, a total of $50 \times 16$ measurements per instance are used. Since the number of instances in the data set is small, we employed holdout validation, with a validation set of 35 examples and the experiment repeated 4 times. Thus, we validated our results over 140 examples. Furthermore, since the number of instances is small compared to the input dimension $50 \times 16$, we opted to randomly crop data points during training into smaller time series of size $40\times16$. This allows the classifier to be invariant to the exact time at which the exposure takes place. Furthermore, it increases the number of data points during training. Since the sensors are of different types, and since even sensors of the same type produce different temporal responses, we process the temporal sensor data using 1-D convolutional networks. The input to each neural network is a matrix of size $40 \times 16$, for $40$ time instances and 16 sensors. \begin{table}[t] \centering \caption{Architecture of the convolutional neural network used in classifying the data set of Sec.
\ref{sec_data2}.} \begin{tabular}{c|c} \toprule Layer & Specification\\ \midrule Input Layer& input size: $40 \times 16$\\ Conv Layer & 64 $5\times 16$ filters, no strides\\ Max-pooling Layer & pooling size: 4, stride size: 3\\ Batch-normalization Layer& -\\ \midrule Conv Layer & 128 $5 \times 64$ filters, no strides\\ Max-pooling Layer & pooling size: 4, stride size: 3\\ Batch-normalization Layer& -\\ \midrule Conv Layer & 256 $5 \times 128$ filters, no strides\\ Max-pooling Layer & pooling size: 4, stride size: 3\\ Batch-normalization Layer& -\\ \midrule Global Average-pooling Layer&output size: 256\\ \midrule Dense Layer& output size: 256\\ Batch-normalization Layer&-\\ Output Linear Layer& output size: 3\\ \bottomrule \end{tabular} \label{tab:cnn_data2_arch} \end{table} We used ReLU non-linearity between layers. Our loss function is the cross-entropy with the softmax operator. We used the RMSProp optimizer to carry out the parameter updates during training. We trained a regular ConvNet and an AddNet of the same architecture as in Table \ref{tab:cnn_data2_arch}. Our classification accuracy results over the testing data are shown in Table \ref{tab:data2_res}. The confusion matrix of the results over the validation data set obtained by AddNet is given in Table \ref{tab:conf_matrix_data2}. 
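The training loss mentioned above, cross-entropy with the softmax operator, reduces to a few lines; the sketch below is our own minimal single-instance version with the usual log-sum-exp stabilization:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Cross-entropy loss with the softmax operator for one instance.

    The max-shift (log-sum-exp trick) keeps the exponentials stable.
    """
    z = logits - np.max(logits)
    log_probs = z - np.log(np.sum(np.exp(z)))
    return -log_probs[label]

# three-class example (the CO/ethylene/methane ordering is hypothetical)
loss = softmax_cross_entropy(np.array([2.0, 0.5, -1.0]), label=0)
```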
\begin{table}[b] \centering \caption{Recognition rates for the two neural networks over the test data set} \begin{tabular}{ccccc} \toprule \multirow{2}{*}{} &\multicolumn{3}{c}{\begin{tabular}{@{}c@{}} Gas Type-based Accuracy\\\bottomrule \addlinespace[0.1em] \end{tabular}}&\multirow{2}{*}{\begin{tabular}{@{}c@{}} Average \\ Accuracy \end{tabular}}\\&CO &Ethylene &Methane&\\\toprule ConvNet &$91.1\%$ &$98.6\%$& $100\%$ & $96.6\%$\\ AddNet &$91.1\%$&$97.2\%$ & $100\%$ &$96.1\%$ \\ \bottomrule \end{tabular} \label{tab:data2_res} \end{table}{} \begin{table}[b] \centering \caption{Confusion matrix for AddNet over the validation data sets for repeated trials } \begin{tabular}{ccccc} \toprule True&\multicolumn{3}{c}{\begin{tabular}{@{}c@{}} Predicted Class \end{tabular}}&Total\\ Class&CO&Ethylene&Methane& Count\\\toprule CO&31&3&0&34\\ Ethylene&2&70&0&72\\ Methane&0&0&34&34\\ \bottomrule \end{tabular} \label{tab:conf_matrix_data2} \end{table} It can be observed in Table \ref{tab:data2_res} that the recognition capabilities of both AddNet and ConvNet are on par with one another. It is worth emphasizing the computational frugality of the scheme as use of the regular dot-product is confined solely to the last layer in AddNet. 
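For concreteness, the multiplication-free operation that AddNet-style layers substitute for the elementwise product is usually written in the AddNet literature as $\mathrm{sign}(x_i w_i)\,(|x_i|+|w_i|)$; the sketch below is our reading of that operator, not necessarily the exact variant used here:

```python
import numpy as np

def mf_dot(x, w):
    """Multiplication-free analogue of the dot product: each product
    x_i * w_i is replaced by sign(x_i * w_i) * (|x_i| + |w_i|), which
    requires only sign logic and additions."""
    return np.sum(np.sign(x) * np.sign(w) * (np.abs(x) + np.abs(w)))

val = mf_dot(np.array([1.0, -2.0]), np.array([3.0, 4.0]))  # 4 - 6 = -2
```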
\begin{figure*}[!htbp] \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.95\linewidth,height=0.7\linewidth]{co_15} (a) CO \end{minipage}\hfill \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.95\linewidth,height=0.7\linewidth]{eth_221} (b) Ethylene \end{minipage}\hfill \begin{minipage}{0.33\textwidth} \centering \includegraphics[width=0.95\linewidth,height=0.7\linewidth]{methane_221} (c) Methane \end{minipage}\hfill \caption{Time-series data generated by four different sensors under exposure to different types of gases (50 time samples for each sensor).} \label{Fig:three_gases} \end{figure*} \subsection{Chemical Gas Sensor Array Drift Dataset} The third data set is the publicly available chemical VOC gas sensor drift data set compiled by Vergara \textit{et al.} at UCSD \cite{vergara2012chemical}. The data set was obtained by exposing an array of 16 distinct chemical sensors to 6 types of gas mixtures (ammonia, acetone, ethylene, ethanol, toluene and acetaldehyde) at a variety of concentration levels. Each data record is a vector time series, whose vectors contain 8 feature parameters extracted from the sensor time-series signals during a gas release experiment; the experiments were conducted over a period of three years at UCSD. The feature parameters include the steady state resistance value and the normalized resistance change. The remaining 6 features are the maxima and minima of the exponential moving average (ema$_\alpha$) transform, governed by the following input-output relation: \begin{equation} y[k] = (1-\alpha)y[k-1] + \alpha(r[k]-r[k-1]) \end{equation} where $r[k]$ is the resistance value at time step $k$, and $y[k]$ is the transformed value after applying the ema filter. The maxima and minima features are reported for $\alpha$ values equal to $0.1$, $0.01$ and $0.001$ over an entire experiment. Because the filter has a distinct time constant for each $\alpha$ value, these ema features capture temporal information at several time scales.
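A minimal sketch of the ema$_\alpha$ transform above and the max/min features extracted from it; the initial condition $y[0]=0$ and the function names are our assumptions:

```python
import numpy as np

def ema_features(r, alphas=(0.1, 0.01, 0.001)):
    """Max/min of the exponential-moving-average transform
    y[k] = (1 - alpha) * y[k-1] + alpha * (r[k] - r[k-1])
    for each alpha, assuming the initial condition y[0] = 0."""
    feats = []
    for a in alphas:
        y = np.zeros(len(r))
        for k in range(1, len(r)):
            y[k] = (1 - a) * y[k - 1] + a * (r[k] - r[k - 1])
        feats += [y.max(), y.min()]
    return feats

# a monotone toy resistance trace yields a non-negative transform
feats = ema_features(np.arange(10.0))
```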
Unfortunately, the raw time-domain sensor signals are not available in this data set. Since there are 16 sensors, a total of $16 \times 8 = 128$ feature values are recorded per experiment. The data set is divided into 10 batches ordered chronologically. Full details about the experiment and the data set can be found in \cite{vergara2012chemical}. We carried out our classification tasks by training neural networks on the first $N=2$ batches and testing on the successive batches. This is identical to the sensor drift estimation approach given in \cite{vergara2012chemical}. Because feature values have huge variances, we opted to apply the signed square root function element-wise to control the ranges of the reported values. The modification delivered improved results in our experiments, especially for later batches. We trained an MLP model with two hidden layers, each with 512 output units, and an output layer. Furthermore, we trained the network for 100 epochs using the RMSProp optimizer \cite{tieleman2012lecture}. We applied a dropout rate of $20\%$ in order to prevent complex co-adaptation, and used a batch size of 128. To augment the data, we added zero-mean Gaussian noise with a standard deviation of 0.1. We also tried combining AddNet with the GAN approach, in which case the discriminator is an AddNet and the generator is a regular MLP; the architecture is otherwise the same as that of the GAN described above. Furthermore, we tried a domain adaptation scheme that utilizes the unlabeled batches by passing them through the classifier and backpropagating with the predicted labels; the hope is that the mostly correct guesses help improve the classification accuracy on the remaining, misclassified points. A numerical comparison of the proposed methods to the SVM-classifier ensemble used in \cite{vergara2012chemical} is given in Table \ref{tab:uci_numerical_table}.
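The two preprocessing steps just described, the element-wise signed square root and the zero-mean Gaussian noise augmentation, can be sketched as follows (names are ours):

```python
import numpy as np

def preprocess(x, noise_std=0.1, rng=None):
    """Signed element-wise square root to tame the large feature
    variances, optionally followed by zero-mean Gaussian noise
    augmentation with the stated standard deviation of 0.1."""
    y = np.sign(x) * np.sqrt(np.abs(x))
    if noise_std > 0:
        rng = rng or np.random.default_rng()
        y = y + rng.normal(0.0, noise_std, size=y.shape)
    return y

clean = preprocess(np.array([4.0, -9.0]), noise_std=0.0)  # [2.0, -3.0]
```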
In general, the AddNet-MLP, the MLP and the multi-class GAN discriminator produce better sensor-drift compensated results than does the SVM-based method. \begin{table*}[!htbp] \centering \caption{Comparative accuracy (in $\%$) results of the various models when training on batches 1 and 2 and testing on batches 3-10. Bold-text numbers correspond to the best accuracy results obtained amongst the different algorithms for each batch.} \begin{tabular}{ccccccc} \toprule Batch ID& \begin{tabular}{@{}c@{}} SVM Classifier Ensemble\cite{vergara2012chemical}\end{tabular}&MLP & \begin{tabular}{@{}c@{}} AddNet-MLP\end{tabular} & \begin{tabular}{@{}c@{}} DiscGAN\end{tabular} & \begin{tabular}{@{}c@{}} AddNet-DiscGAN \end{tabular} & \begin{tabular}{@{}c@{}} Domain \\adaptation \end{tabular}\\ \midrule Batch 3&87.8& \textbf{98.6} & \textbf{98.6}& 98.3&97.8&98.7\\ Batch 4&\textbf{90.6}& 83.8 & 75.1&71.4&69.6&73.3\\ Batch 5&72.1& \textbf{99.5} & 99.4&98.4&98.9&99.5\\ Batch 6&44.5& 74.9 & \textbf{75.9}&72.3&73.9&76.4\\ Batch 7&42.5& 59.8 & 57.4&61.5&\textbf{66.3}&59.2\\ Batch 8&29.9& 34.0 & 34.0&\textbf{62.3}&58.8&39.1\\ Batch 9&59.8& 31.6 & 38.9&63.2&\textbf{63.8}&52.3\\ Batch 10&39.7& 47.3 & \textbf{54.3}&43.8&44.5&46.1\\ \bottomrule \end{tabular} \label{tab:uci_numerical_table} \end{table*} As we can see from Table \ref{tab:uci_numerical_table}, using DiscGAN (with a regular discriminator or AddNet), we were able to obtain better recognition rates for later batches (batches 7, 8 and 9). This could be attributed to the generator exposing the discriminator to novel, unseen points in the data space during training, allowing it to learn additional meaningful features. As for AddNet, it performs as well as the regular MLP, both in conventional classification and within DiscGAN. It is also worth noting that the domain adaptation scheme we employed did not yield any significant improvements.
We believe that improved classification results would have been attained if the entire temporal sensor signal set were at our disposal as input to our algorithms. \section{Conclusions} \label{sec:conclusion} In this paper, we have introduced a variety of deep-learning-based algorithms and applied them to VOC gas and ammonia vapor leak detection and gas type identification problems. The first algorithm is based on AddNet. In AddNet, we replace the computationally expensive dot-product operations in deep neural networks with a modified addition operation that retains the sign of multiplication. Its computational efficiency enables AddNet to be used in embedded and mobile systems, in which we envision a smart gas leakage monitoring and detection cyber-physical system (CPS) being reliably deployed. The second algorithm, DiscGAN, uses the discriminator of a generative adversarial network as a classifier in order to enhance the recognition capabilities of the system. The generator helps by exposing the discriminator to realistic synthetic data points that can be useful in classification tasks. We considered three detection and classification tasks. The first task is to detect VOC gas leakage from temporal IR data. Our proposed algorithms achieved accuracy rates of $97-98\%$. The second task we considered is to identify gas types using temporal data of sensor arrays. We were able to attain recognition rates of $96.1-96.6\%$. Our third task was to identify gas types using non-temporal data, where the readings are obtained for the same sensor array over a period of 36 months. The sensor measurements suffer from degradation due to sensor drift. Although our gas identification accuracy results for the early batches in the last data set were quite high, the degradation incurred in later batches resulted in a significant drop in identification accuracy. We believe that the non-temporal global features reported for the experiments are highly affected by sensor drift.
As a result, the features are not sufficiently expressive of the sensor responses for the different gas analyte types. Based on our high recognition rates for the two temporal data sets considered in this work, we conclude that feeding sensor measurements in their raw temporal form into deep neural networks achieves better performance: these networks learn discriminative features by themselves, with no need to hand-craft features that, as in the sensor drift problem, can be sensitive to error. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} In the last two decades several semiphenomenological potentials were proposed \cite{S73,d75,P80,A84,B87} which describe NN scattering data below the pion production threshold rather well. All these models have a common feature, namely that the interaction at large distances is ascribed to the one-pion exchange potential (OPEP). On the other hand, they differ significantly at intermediate and short distances, a fact that becomes evident when one inspects the profile functions produced for the various components of the force. In each case, the reproduction of experimental data is achieved by means of a different balance between effects due to long and short distances. The fact that these potentials are successful means that somehow they are able to incorporate the relevant average dynamics. At present, none of the existing semiphenomenological potentials include explicitly the dynamics associated with chiral symmetry, which constitutes the main conceptual framework for the study of strong interactions at energies which are small compared to the QCD scale. In this regime, non-perturbative effects are dominant and one is not able to do calculations using QCD directly. The usual strategy for overcoming this problem consists of working with an effective theory, constructed in such a way as to include as much as possible the main features of the basic theory. The masses of the quarks $u$ and $d$ are very small and hence their interactions with gluons are approximately invariant under the group $ SU(2) \times SU(2) $. Therefore, one requires that the effective theory at the hadron level possess the same basic symmetry, broken only by the pion mass. In the last five years several authors have tackled the problem of NN interactions in the light of chiral symmetry and, so far, only processes associated with the two-pion exchange potential (TPEP) have been systematically studied \cite{OK92,CPS92,FC94,RR94,B94,ORK94,RR95,R95,ORK96,RR97}.
Chiral symmetry is very relevant to this component of the force because it controls the behaviour of the intermediate $\pi$N amplitude, which is the main building block of the interaction. The first works of this series were restricted to systems containing just pions and nucleons and considered basically the first five processes given in Fig.1 (A) \cite{OK92,CPS92,FC94,RR94,B94,RR95}. These processes constitute an autonomous chiral family and incorporate correctly the well-known cancellations of the intermediate $\pi$N amplitude \cite{BR96,BRR97}, but correspond to an intermediate amplitude which is too simple for reproducing $\pi$N experimental data \cite{H83}. The extension of this minimal model so as to include other degrees of freedom was considered by Ord\'o\~nez, Ray and van Kolck \cite{ORK94,ORK96} and by ourselves \cite{R95,RR97}. In the former case, a very general effective Lagrangian was used, which included explicitly the interactions of pions, nucleons and deltas and contained some free parameters representing other interactions. In principle, these parameters could be obtained from other physical processes, but these authors chose to adjust them to NN scattering data. Using a non-relativistic cut-off of the order of the rho mass, they could achieve a qualitative description of all NN observables. In our approach, the intermediate $\pi$N amplitude involved the interactions of pions and nucleons, supplemented by empirical information in the form of the H\"ohler, Jacob and Strauss (HJS) coefficients \cite{HJS72}. In general, the physical amplitude for the process $\pi^\alpha(k) N(p) \rightarrow \pi^\beta(k') N(p')$ may be described by two independent variables, $\nu = \frac{1}{4m} (p+p')\cdot(k+k')$ and $t = (k-k')^2$. In order to obtain the HJS coefficients, one subtracts the nucleon pole from the empirical $\pi$N amplitude and uses dispersion relations to continue analytically the remainder (R) to an unphysical region around the point $\nu=0$, $t=0$.
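In code, the two kinematic variables follow directly from their definitions; the sketch below uses the metric convention $(+,-,-,-)$ and illustrative GeV-unit numbers of our own choosing:

```python
import numpy as np

def minkowski_dot(a, b):
    """Lorentz product of two four-vectors with metric (+,-,-,-)."""
    return a[0] * b[0] - np.dot(a[1:], b[1:])

def nu_and_t(p, p_prime, k, k_prime, m):
    """The subthreshold-expansion variables:
    nu = (p + p').(k + k') / (4 m)  and  t = (k - k')^2."""
    nu = minkowski_dot(p + p_prime, k + k_prime) / (4.0 * m)
    t = minkowski_dot(k - k_prime, k - k_prime)
    return nu, t

# forward elastic scattering off a nucleon at rest (illustrative values)
m_N, q = 0.938, 0.1
omega = np.sqrt(0.140 ** 2 + q ** 2)
p = np.array([m_N, 0.0, 0.0, 0.0])
k = np.array([omega, 0.0, 0.0, q])
nu, t = nu_and_t(p, p, k, k, m_N)  # here nu = omega and t = 0
```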
The HJS coefficients are then obtained by expanding this remainder in a power series in $\nu$ and $t$. The use of the subthreshold coefficients is particularly suited to the calculation of the TPEP at large distances, since this part of the potential is determined by the intermediate $\pi$N amplitude in the very neighbourhood of the point $\nu = 0$, $t =0$. Thus, in this respect, our approach is very different from that of Ord\'o\~nez, Ray, and van Kolck. In our calculation, the TPEP was derived from the ten diagrams depicted in Fig.1, representing both interactions involving only nucleons (A) and other degrees of freedom (B). The main features of the asymptotic TPEP were extensively discussed in \cite{RR97} and here we just recall some of the conclusions of that paper. One of them is that the scalar-isoscalar component of the TPEP at large distances is attractive and therefore qualitatively consistent with a well-known feature of the nuclear force. As far as dynamics is concerned, we have shown that chiral symmetry is responsible for large cancellations in the pure nucleon sector (Fig.1, A) \cite{BR96,BRR97} and eventually the contributions from this sector turn out to be much smaller than those arising from other degrees of freedom. The main contribution to the intermediate attraction is due to processes containing nucleons in one leg and the remaining degrees of freedom in the other. The fact that our calculation of the TPEP did not contain free parameters means that it yields predictions for NN observables, whose study is the main goal of the present work. We assume that, for distances larger than 2.5 fm, the NN interaction is given by just the OPEP and the TPEP calculated in Ref. \cite{RR97} and try to determine the values of the angular momentum and the energy regions for which observables can be ascribed to these components only. Our presentation is divided as follows: in Sec.II we discuss our method of work and in Sec.III we give our results and conclusions.
\section{DYNAMICS} In general, it is not easy to isolate unambiguously the observables associated with a particular region of a given potential. Nevertheless, in many cases, the centrifugal barrier can suppress a significant part of the short-range interaction and one is left only with contributions from the tail of the force. For instance, in a study of the influence of the OPEP on NN observables, we have obtained, for most waves with $\ell > 2$, purely pionic phase shifts and mixing parameters, which did not depend on the short-range features of the interaction \cite{BR94}. In the case of the OPEP, this kind of result is possible because the potential is not too strong. In the present problem, even if one is willing to consider the influence of the TPEP only for separations larger than 2.5 fm, where the results of Ref. \cite{RR97} are mathematically reliable, one needs to use expressions which are valid for all distances. As it is important to have close control of the regions of the potential that contribute to the observables, we employ the so-called variable phase method. It is fully equivalent to the Schr\"odinger equation and provides a clear spatial picture of the way phase shifts and mixing parameters are structured. For the sake of completeness, we summarize here the main equations used in our calculation. In the case of uncoupled channels, the wave function $u_J(r)$ with angular momentum $J$ is written as \cite{C63} \begin{equation} u_J(r) = c_J(k,r) \; \hat{j}_J(kr) - s_J(k,r) \; \hat{n}_J(kr) \;, \end{equation} \label{1} \noindent where $\hat{j}_J$ and $\hat{n}_J$ are the usual Bessel and Neumann functions multiplied by $kr$. The functions $c_J$ and $s_J$, for a potential $V_J(r)$, are given by \begin{equation} c_J(k,r) = 1 -\frac{m}{k} \int^r_0 \;d\rho \;V_J(\rho) \;\hat{j}_J(k\rho)\;u_J(\rho) \;, \end{equation} \label{2} \begin{equation} s_J(k,r) = -\frac{m}{k} \int^r_0 \;d\rho \;V_J(\rho) \;\hat{n}_J(k\rho)\;u_J(\rho) \;.
\end{equation} \label{3} \noindent The variable phase $D_J(k,r)$ is defined as \begin{equation} D_J = \tan^{-1} (\frac{s_J}{c_J}) \; \end{equation} \label{4} \noindent and, by construction, it vanishes at the origin and yields the observable phase shift $\delta_J$ when $r$ tends to infinity. Differentiating (2) and (3), using (1), and manipulating the result, one obtains the differential equation \begin{equation} D'_J = - \frac{m}{k}\; V_J\; P^2_J(D_J) \;, \end{equation} \label{5} \noindent where $D'_J = \frac{dD_J}{dr}$ and the structure function $P_J$ is given by \begin{equation} P_J = \hat{j}_J\;\cos(D_J) - \hat{n}_J\;\sin(D_J) \;. \end{equation} \label{6} In the case of coupled channels, one has two phases, $D_{Jm}$ and $D_{Jp}$, where $m$ and $p$ label the orbital momenta $L=J-1$ and $L=J+1$, and a mixing parameter $E_J$, which depend on $r$ and become the observables $\delta_{Jm}$, $\delta_{Jp}$ and $\epsilon_J$ when $r$ tends to infinity. Denoting the diagonal and tensor components of the potential by $W_{JL}$ and $T_J$, one has the following coupled differential equations \cite{B67} \begin{eqnarray} D'_{Jm} = &-& \frac{m}{k\;\cos(2E_J)} \left\{ W_{Jm} \left[ \cos^4(E_J)\;P_m^2 - \sin^4(E_J)\;Q_m^2\right] \right. \nonumber\\ &-& \left. W_{Jp}\;\sin^2(E_J)\;\cos^2(E_J)\;(P_p^2 - Q_p^2) \right. \nonumber\\ &-& \left. 2\;T_J\;\sin(E_J)\;\cos(E_J)\left[ \cos^2(E_J)\;P_m\;Q_p -\sin^2(E_J)\;P_p\;Q_m\right] \right\} \;, \end{eqnarray} \label{7} \begin{eqnarray} E'_J = &-& \frac{m}{k} \left\{ T_J \left[ \cos^2(E_J)\;P_m\;P_p + \sin^2(E_J)\;Q_m\;Q_p\right] \right. \nonumber\\ &-& \left. W_{Jm}\;\sin(E_J)\;\cos(E_J)\;P_m\;Q_m - W_{Jp}\;\sin(E_J)\;\cos(E_J)\;P_p\;Q_p \right\} \;. \end{eqnarray} \label{8} In these expressions the structure functions $P_L$ and $Q_L$ are defined as \begin{eqnarray} P_L &=& \hat{j}_L\;\cos(D_{JL}) - \hat{n}_L\;\sin(D_{JL}) \;, \label{9}\\ Q_L &=& \hat{j}_L\;\sin(D_{JL}) + \hat{n}_L\;\cos(D_{JL}) \;.
\end{eqnarray} \label{10} \noindent The equation for $D_{Jp}$ is obtained by exchanging the labels $m$ and $p$ in (7). As far as the interaction is concerned, we consider just the OPEP $(V_\pi)$ and the TPEP, which are assumed to represent the full potential for distances greater than 2.5 fm. As pointed out in the introduction, the TPEP is determined by two kinds of contributions, one generated in the pure pion-nucleon sector $(V_N)$ and the other associated with the remaining degrees of freedom $(V_R)$. Direct inspection of these potentials indicates that the former is comparable to the OPEP, whereas the latter is rather strong for $r > 2.0$ fm. In order to calculate the observables, one needs to regularize the various potentials at short distances. In the case of the OPEP, the regularization is achieved by cutting it at a radius $r_\pi$ and replacing the inner part by the constant value $V_\pi(r_\pi)$. As $V_N$ is comparable to the OPEP, we adopt the same regularization procedure for it, with a radius $r_N$. The regularization of $V_R$ is more problematic. For distances around 1.0 fm, the value of the central component of the potential is about $-25$ GeV. On the other hand, in Ref.\cite{RR97} we have argued that the asymptotic TPEP is mathematically reliable only for distances larger than 2.5 fm, indicating that the odd behaviour of the TPEP at short distances is unphysical and associated with the use of equations outside their domain of validity. Inspecting the equations used in that work, it is easy to relate this behaviour to the HJS coefficients involving high powers of $\nu$ and $t$ in the intermediate $\pi$N amplitude. However, restricting ourselves to the first two leading contributions, due to the coefficients of the terms $\nu^0 t^0$ and $\nu^0 t^1$, we keep most of the asymptotic potential and get values around $-150$ MeV in the neighbourhood of 1.0 fm, which are still large, but much more reasonable.
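For an uncoupled wave, the variable phase equation is straightforward to integrate numerically. The sketch below specializes Eq. (5) to the S wave, where $\hat{j}_0(x)=\sin x$ and $\hat{n}_0(x)=-\cos x$ give $P_0=\sin(kr+D)$; the square-well potential and the units ($\hbar=c=1$) are our own toy choices for illustration:

```python
import numpy as np

def phase_shift_swave(V, m, k, r_max=30.0, n=3000):
    """RK4 integration of D' = -(m/k) V(r) sin^2(kr + D), the S-wave
    variable phase equation, from D(0) = 0; D(r_max) approximates the
    phase shift once V has died out."""
    f = lambda r, D: -(m / k) * V(r) * np.sin(k * r + D) ** 2
    r, D = 0.0, 0.0
    dr = r_max / n
    for _ in range(n):
        k1 = f(r, D)
        k2 = f(r + dr / 2, D + dr * k1 / 2)
        k3 = f(r + dr / 2, D + dr * k2 / 2)
        k4 = f(r + dr, D + dr * k3)
        D += dr * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        r += dr
    return D

# toy attractive square well: the S-wave phase shift must come out positive
delta_attr = phase_shift_swave(lambda r: -0.5 * (r < 2.0), m=1.0, k=0.5)
```

Attraction ($V<0$) makes the right-hand side non-negative, so the phase grows monotonically, a property the variable phase method makes manifest.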
Therefore we base the present study on this leading potential, which is regularized by means of a step function $\theta(r-r_R)$. In this calculation one can rely only on those results which are independent of the radii used in the regularization procedure. In order to control this independence, we adopt $r_N = r_\pi$, vary $r_\pi$ in the interval $0.8-1.0$ fm and discard the cases where the contribution of $V_\pi + V_N$ to the observables varies by more than $1.0\%$. Concerning $V_R$, the preceding discussion suggests that one should be interested in effects due to the region $r>2.5$ fm and hence we consider values of $r_R$ between 1.5 fm and 2.5 fm and study the effect of this variation on the observables. This produces an indication of both the stability of the results and the importance of the inner part of the potential. When the variation of the results is less than $5\%$, we take them as predictions of the potential. For larger deviations, we consider them as estimates. \section{RESULTS AND CONCLUSIONS} We have calculated the predictions for NN observables produced by a chiral potential \cite{RR97} involving only the exchanges of one and two pions, assumed to represent the full interaction for distances larger than 2.5 fm. Since we are interested only in the cases where the centrifugal barrier naturally cuts the inner parts of the force, we used the variable phase method to control this aspect of the problem. One may acquire a feeling for this method by looking at Fig. 2, where we display the variable phases for the wave $^1G_4$, divided by their asymptotic values, for three different energies. It is possible to see that, as expected, higher energies probe the interior of the potential more deeply. It is also interesting to note that the variable phase method allows one to make quantitative statements such as, for instance, the radius at which the phase attains a given percentage of its final value.
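The two regularization recipes can be sketched as follows; the Yukawa-like tail is purely illustrative, and we read the step-function regularization as multiplying $V_R$ by $\theta(r-r_R)$:

```python
import numpy as np

def cut_potential(V, r_cut):
    """Freeze the potential below r_cut at the constant value V(r_cut),
    the procedure used here for V_pi and V_N."""
    return lambda r: V(max(r, r_cut))

def step_potential(V, r_R):
    """Multiply the potential by a step function theta(r - r_R),
    the procedure used here for V_R."""
    return lambda r: V(r) if r >= r_R else 0.0

yukawa = lambda r: -np.exp(-r) / r      # illustrative tail only
V_frozen = cut_potential(yukawa, 1.0)   # constant inner part
V_step = step_potential(yukawa, 2.5)    # vanishing inner part
```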
In the case of uncoupled waves, the interplay between the centrifugal barrier and the regularization of the background is rather intuitive, but this is not the case for coupled systems. In order to clarify this point, we show in Fig. 3 the variable phases for the $^3D_3-^3G_3$ system for $E=200$ MeV, due just to the background, regularized either at 0.7 fm or at 1.0 fm. One sees that the $^3D_3$ curves depend strongly on the cutoff, whereas those describing $\epsilon_3$ and $^3G_3$ are very stable and cannot be distinguished with the naked eye. The wave $^3G_3$ is negligible for distances smaller than 2.0 fm due to the centrifugal barrier, meaning that the system is effectively uncoupled up to that distance. The construction of $^3D_3$ extends up to 3.5 fm, where $\epsilon_3$ is maximum, but the two other observables become asymptotic much later, around 5.0 fm. This indicates that, even for coupled waves, the various components get their contributions from different regions in space. We found that the observables associated with the following waves depend by more than $1\%$ on the cutoff used for the $V_\pi+V_N$ background and hence are not suited for our study: $^1S_0$, $^3P_0$, $^3P_2$, $^3S_1, \epsilon_1, ^3D_1$ and $^3D_3$. In Figs. 4-7, we display our results for the phase shifts as a function of the laboratory energy. They include predictions from the OPEP cut at 1.0 fm, the sum of the OPEP and $V_N$ cut at 1.0 fm, and the total results produced by cutting the contribution of $V_R$ at 1.5 fm ($\chi(1.5)$) and 2.5 fm ($\chi(2.5)$); the experimental values are taken from the SAID VZ40 solution \cite{SAID}. For comparative purposes, we also include the predictions of the Argonne \cite{A84} phenomenological potential. The dominant part of the chiral TPEP is associated with the exchange of a scalar-isoscalar system and hence its significance depends strongly on the NN channel considered.
Therefore, in what follows we comment on the main features of our results for the various spin and isospin subspaces. {\bf{(T=1,S=0)}}; Fig. 4: There are no results for the wave $^1S_0$, since it cannot be understood as a TPEP superimposed on a pionic background. For the waves $^1D_2$ and $^1G_4$ one has predictions for energies up to 50 MeV and 300 MeV, respectively. As expected, the TPEP increases the attraction due to the OPEP and our results are very close to the experimental values for the potential cut at 2.5 fm. {\bf{(T=1,S=1)}}; Figs. 5a,b,c,d: The waves $^3P_0$ and $^3P_2$ depend on the pion cutoff and were discarded. For the uncoupled waves $^3P_1$, $^3F_3$, and $^3H_5$ (Fig. 5a) we obtain predictions which extend up to 300 MeV for the last two of them. Results for the waves $^3P_1$ and $^3F_3$ are compatible with experiment, but this does not happen for the $^3H_5$ wave. In the case of the coupled waves, the one with the lowest orbital angular momentum tends, as expected, to be much more influenced by the cutoff used for the OPEP than the others. We obtain predictions in all cases, but the mixing parameters are heavily dominated by the OPEP and yield very little information about the TPEP. For the waves $^3F_2$ (Fig. 5b) and $^3F_4 $ (Fig. 5c), the differences with experimental values are small, whereas for the waves $^3H_4$ (Fig. 5c) and $^3H_6$ (Fig. 5d) they are significant. {\bf{(T=0,S=0)}}; Fig. 6: Our calculation yields predictions for all the waves in this sector, namely $^1P_1$, $^1F_3$, and $^1H_5$, generally quite close to the pure OPEP ones, reflecting the fact that our central potential in this channel is small. Results are also close to experiment. {\bf{(T=0,S=1)}}; Figs. 7a,b,c: The observables $^3S_1,\,\epsilon_1,\,^3D_1$ and $^3D_3$ depend on the pion cutoff and are not considered. In all cases, coupled and uncoupled, our results are dominated by the OPEP and close to experiment.
In order to assess the general trends of the various observables presented in Figs. 4--7, it is useful to recall that the relative strengths of the central and tensor OPEP in the channels $(T,S)=(1,0),(1,1),(0,0),\text{ and }(0,1)$ are respectively $1:\frac{1}{3}:3:1$ and $0:\frac{1}{3}:0:1$. This means that one-pion exchange is more important in the channels with $T=0$, and hence the good agreement between predictions and experimental results noted in Figs. 6 and 7 may be ascribed to OPEP physics. As far as the channels with $T=1$ are concerned (Figs. 4 and 5), the tensor interaction makes the OPEP more important for triplet waves, in spite of its weaker central component. Therefore the role of the TPEP is more evident in the waves $^1D_2$ and $^1G_4$ (Fig. 4), where chiral predictions agree well with experiment. In the case of triplet waves (Figs. 5a,b,c,d), one also finds that the chiral potential is able to reproduce experimental data when $\ell < 5$, but this does not happen for $H$ waves. The behavior of these waves is peculiar, since they have a high orbital angular momentum and hence should be close to being OPEP dominated. Indeed, our results show that predictions from the chiral potential for $H$ waves are not far from the OPEP and also depend little on the cutting radius. Part of the discrepancies observed may be associated with the fact that we have used $g^2/4\pi=14.28$ for the $\pi N$ coupling constant \cite{HJS72}, whereas the SAID analysis is based on the value $g^2/4\pi=13.7$. It is also worth pointing out that there is a 10\% difference between the experimental $pn$ and $pp$ solutions, and the discrepancies would be reduced if the latter were used. However, even if these factors were considered, the experimental data would still seem to suggest that the TPEP is repulsive for the waves $^3H_4$ and $^3H_6$, something which is rather difficult to explain theoretically.
A general conclusion that can be drawn from the present study concerns the details of the TPEP. As discussed in the introduction and represented in Fig. 1, it consists of a sum of terms, arising both from the pure pion-nucleon sector and from interactions involving other degrees of freedom. Our results show that the former contributions are very small, indicating that the numerical significance of the TPEP is essentially due to the interplay between nucleon and other degrees of freedom. In this work we tested the chiral TPEP derived in Ref.~\cite{RR97}, which is based on subthreshold $\pi N$ data and contains no free parameters. Our results have shown that it is rather consistent with experiment\footnote{Note added in revision: In a recent work, the predictions from a similar chiral potential were presented \protect\cite{KBW97}, which agree qualitatively with those produced here.}. \section{Acknowledgments} M.R.R. would like to thank the Division de Physique Theorique de l'Institut de Physique Nucleaire, Orsay, France, where this work was performed, for its kind hospitality, and FAPESP (Brazilian agency) for financial support. The work of C.A. da Rocha was supported by Grant No. 200154/95-8 from the CNPq Brazilian agency. This work was partially supported by the U.S. Department of Energy.
\section{Introduction} Annual losses from severe thunderstorms in the US have exceeded \$10 billion in recent years.\footnote{\url{http://www.willisre.com/Media_Room/Press_Releases_(Browse_All)/2017/WillisRe_Impact_of_ENSO_on_US_Tornado_and_Hail_frequencies_Final.pdf}} In addition to economic losses, 2011 was marked by 552 deaths caused by tornadoes. These economic and human impacts are a strong motivation to study how and why US thunderstorm activity varies from year to year and region to region. Two important aspects are trends potentially related to climate change or multi-decadal variability, and modulation by the El Ni\~no-Southern Oscillation (ENSO). However, inadequacies in the length and quality of the thunderstorm data record present substantial challenges to addressing these questions directly \citep{Verbout2006,Allen:Hail:2015,Edwards2018:wind}. \textcolor{black}{ In the US, a severe thunderstorm is defined to be one that produces a tornado, hail greater than one inch in diameter, or wind gusts in excess of 50 kts. Supercell storms are responsible for a large fraction of severe thunderstorm reports (e.g., 79\% of tornadoes according to \cite{trapp2005tornadoes}), even though only about 10\% of thunderstorms are supercells \citep{Doswell2015Meso}, and a key element in forecasting severe thunderstorms is the prediction of where and when supercells will occur \citep{corfidi2017severe}. A supercell is a thunderstorm with a deep, long-lived rotating updraft (mesocyclone). The presence of buoyancy, i.e., convective available potential energy (CAPE), and deep-layer vertical wind shear are important determinants for supercell development. In addition to the magnitude of the vertical shear, the angle between surface and upper-level winds is important for mesocyclone development and persistence. 
A key quantity is atmospheric helicity, which is computed relative to storm motion and is proportional to vertical wind shear and the amount of wind direction turning from the surface to upper levels (often 0--3 km). } Several recent studies of US tornado reports have concluded that annual numbers of reliably observed tornadoes, i.e., those rated E/EF1 and greater, show slight but statistically insignificant trends downward over time \citep{BrooksScience2014,Elsner:Tornado:efficiency:2014}, whereas measures of tornado outbreaks or clusters show upward trends \citep{BrooksScience2014,Elsner:Tornado:efficiency:2014,TippettCohen:ExtremeTornado}. Changes in regional tornado activity have also been reported \citep{Agee2016tornado,Gensini2018}, but there is less evidence for changes in hail and damaging straight-line wind, perhaps due to the poorer quality of the relevant databases. In view of the limitations of the historical storm record, a valuable alternative is the analysis of meteorological environments associated with severe thunderstorms. \textcolor{black}{As mentioned above, severe thunderstorms, especially supercell storms, are more likely in the presence of high values of CAPE and of certain measures of} vertical wind shear \citep[see, e.g.,][]{Brooks2003, brooks2013severe} such as storm relative helicity (SRH). Weather forecasters have routinely used such quantities for two decades to interpret observations and the output of numerical weather prediction models \citep{Johns1993,Rasmussen1998,Doswell1996}, and they are also useful in climatological studies, especially in areas outside the US without extensive historical reports \citep{Brooks2003}. The environmental approach can also provide an indication of expected severe thunderstorm activity in a warmer climate based on climate projections that do not resolve thunderstorms explicitly \citep{Trapp2009,Diffenbaugh2013}. 
On time-scales between weather forecasts and climate projections, this approach has provided a clearer picture of how ENSO modulates US hail and tornado activity \citep{Allen:ENSO2014,Lepore:ENSO:2017}. However, there are notable gaps in previous statistical studies of environments associated with severe thunderstorms. For instance, relationships with ENSO were diagnosed based on \textit{monthly} averages, which are at best indirect proxies for behaviour on the time-scale of weather. Similarly \citet{Gensini2018} computed monthly accumulations of daily maxima of a significant tornado parameter. \citet{TippettCohen:ExtremeTornado} used submonthly environmental data but aggregated the results on an annual and US-wide basis. These gaps motivate the present work, which focuses on extremes of the environmental values rather than on monthly averages, and presents results that are spatially and temporally resolved. The framework that we use is statistical extreme-value theory. \textcolor{black}{\cite{gilleland2013spatial} apply the conditional extreme-value framework of \cite{heffernan2004conditional} to the product WS$\times W_{\rm max}$, where WS is a measure of wind shear and $W_{\rm max}=\sqrt{2 \times \mathrm{CAPE}}$, by conditioning on the $75$th percentile of that variable computed across the spatial domain. This approach has the advantage of allowing the study of real spatial patterns under severe conditions, as opposed to approaches looking at pointwise maxima. They show some temporal variations in the mean simulated values from their model.} \textcolor{black}{\cite{mannshardt2013extremes} perform an unconditional univariate analysis in which they fit the generalized extreme-value (GEV) distribution to} the annual maxima of WS$\times W_{\rm max}$ and establish the existence of a time trend in the GEV location parameter. 
\textcolor{black}{\cite{heaton2011spatio} consider three Bayesian hierarchical extreme-value models based on exceedances over a high threshold for WS$\times W_{\rm max}$, their third model being based on a Poisson point process with a yearly time trend. Neither paper} clarifies whether this trend is attributable to both CAPE and WS or only to one of them. Moreover, both articles \textcolor{black}{consider trends in annual quantities} and thus cannot detect month-specific features, and they do not account for multiple testing, though this \textcolor{black}{issue} is briefly addressed in \cite{gilleland2008large}. Finally, they consider only time as a covariate. We propose to \textcolor{black}{address} some of the gaps left by the \textcolor{black}{papers mentioned in the previous paragraph}. Our study covers a large part of the contiguous US for individual months from 1979 to 2015. We separately consider CAPE, SRH (0--3 km) and the combined variable PROD=$\sqrt{\mathrm{CAPE}} \times$SRH. \textcolor{black}{To motivate our use of PROD, we consider the discriminant line defined in \citet[][Equation (1)]{Brooks2003}, which is one of the first thresholds used to distinguish low and high likelihoods of severe thunderstorm occurrence using a function of CAPE and vertical shear. This equation can be rewritten as $\mathrm{S6} \times \mathrm{CAPE}^{0.62}= 18.60$, where S6 is the 0--6 km shear. Replacing S6 with 0--3 km SRH and approximating the power $0.62$ by $0.5$ leads to a discriminant line of the form $\mathrm{SRH} \times \sqrt{\mathrm{CAPE}}=c$, i.e., $\mathrm{PROD}=c$, where $c$ is a real constant, and shows that values of PROD can be expected to be indicative of high risk of severe thunderstorms. PROD has already been used as a proxy for severe thunderstorms in several studies \citep[e.g.,][]{TippettCohen:ExtremeTornado} and the plot of Figure~1 in \cite{Brooks2003} is little changed by replacing S6 with 0--3 km SRH (not shown). 
More generally, the product of CAPE and two shear-related variables (different or not), or equivalently its square root, is commonly used as an indicator of the likelihood of severe thunderstorm occurrence. For instance, the significant tornado parameter (STP) and the supercell composite parameter (SCP) involve the product of CAPE, S6 and 0--1 km SRH, and the product of CAPE, S6 and 0--3 km SRH, respectively \citep[e.g.,][]{thompson2003close}.} \textcolor{black}{To ensure the soundness of our results} we carefully check the \textcolor{black}{suitability} of the GEV and the use of time and ENSO as explanatory variables in its location parameter, and we \textcolor{black}{account} for multiple testing by implementing the false discovery rate procedure of \cite{benjamini1995controlling}. \textcolor{black}{As stated in \citet[][Section 1]{gilleland2013spatial}, in addition to studying PROD, it is insightful to consider its components separately. Furthermore, accounting for multiple testing is essential when performing many simultaneous tests, as highlighted by \citet[Section 4]{gilleland2013spatial}, though they do not apply an adjustment for it.} We find a significant time trend in the location parameter of the GEV for PROD maxima in April, May and August (and to a lesser extent in June and December), in CAPE maxima in April, May and June (and to a lesser extent in August, November and January), and in SRH maxima in May (and to a lesser extent in April). \textcolor{black}{The trends in CAPE maxima are striking: CAPE is expected to increase in a warming climate \citep{del2007will, van2009surface} and is relevant to rainfall extremes \citep{lepore2015temperature}, yet such trends have not previously been observed over the US.} April and May are important months for PROD, as severe thunderstorms are frequent in this period.
The corresponding time slope is positive in regions of the US \textcolor{black}{where severe thunderstorms are already common}, which may have implications for risk assessment and management. Our study also reveals that ENSO can explain variation in the location parameter of the GEV for PROD and SRH maxima in February. The corresponding slope is negative over most of the region we consider, possibly suggesting an increased \textcolor{black}{risk of high storm impacts} in February during La Ni\~na years. \textcolor{black}{Our results differ from those of \citet{heaton2011spatio}, \cite{mannshardt2013extremes} and \cite{gilleland2013spatial}, but are fairly consistent with those obtained by \cite{Gensini2018}, who inter alia consider the numbers of tornado reports.} The remainder of the paper is organized as follows. Section~\ref{Sec_Data} presents the data and a brief exploratory analysis. We describe our statistical approach and demonstrate its relevance in Section~\ref{Sec_Methodology}. Section~\ref{Sec_Results} details our main results, and Section~\ref{Sec_Conclusion} summarises our findings and discusses them. \section{Data and exploratory analysis} \label{Sec_Data} The data we investigate consist of 3-hourly time-series of 0--180 hPa convective available potential energy (CAPE, Jkg$^{-1}$) and 0--3 km storm relative helicity (SRH, m$^2$s$^{-2}$) from 1 January 1979 at 00:00 to 31 December 2015 at 21:00. The region covered is a rectangle over the contiguous US from~$-110^\circ$~to~$-80^\circ$~longitude and~$30^\circ$~to~$50^\circ$~latitude and the resolution is 1$^\circ$ longitude and 1$^\circ$ latitude. These data constitute a coarse version of reanalysis data from the North American Regional Reanalysis (NARR); the original resolution is 32 km longitude and 32 km latitude \citep[see, e.g.,][]{Mesinger2006:NARR}. The region contains 651 grid points, with no data available for 32 grid points over the sea or lakes.
Using these time series, we build 3-hourly time series of PROD=$\sqrt{\mathrm{CAPE}} \times$SRH, measured in m$^3$s$^{-3}$. As a physical covariate we use monthly values of the NINO 3.4 index (${}^{\circ}\mathrm{C}$) from 1979 to 2015, taken from the ERSSTv5 data set available on the NOAA Climate Prediction Center website. Figure~\ref{figspatial} shows the empirical pointwise probabilities that CAPE and SRH exceed thresholds corresponding to roughly the $90^{\text{th}}$ percentile of each variable across the entire region. There is a clear North-South gradient for CAPE probabilities, while the regional spatial pattern for SRH suggests that the high values cluster towards the centre of the region. Figure~\ref{figtempprod} shows an increase in the exceedance probabilities for PROD at many grid points over the decades; a similar result is visible for SRH, but less so for CAPE. This increase is of interest for risk assessment, especially in regions with a high risk of severe thunderstorms. Figure~\ref{figtempprod} strongly suggests the presence of a temporal trend in the maxima, but \textcolor{black}{there seems to be no geographical shift, notwithstanding the results of \cite{gilleland2013spatial}}. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{CAPE_SRH_spatial} \caption{Empirical pointwise probabilities of 3-hourly CAPE exceeding $1400$ Jkg$^{-1}$ (left) and SRH exceeding $170$ m$^2$s$^{-2}$ (right) for the entire period 1979--2015. Dark grey corresponds to grid points where no observations are available.} \label{figspatial} \end{figure} The top left panel of Figure~\ref{prodseasonts} shows a positive correlation between PROD April maxima and time for many grid points, and the middle panels show a positive linear time trend for April maxima of PROD, CAPE and SRH in the subregion indicated. 
The top right panel shows strong negative correlation between PROD February maxima and ENSO at many grid points, while the scatter-plots in the bottom panels show a roughly linear negative trend for all variables. These analyses underscore the need to incorporate ENSO into our statistical modelling of maxima. \begin{figure} \centering \includegraphics[width=.99\textwidth]{WS_1_4} \caption{Empirical pointwise probabilities of 3-hourly PROD exceeding $3300$ m$^3$s$^{-3}$ during the periods 1979--1987 (top left), 1988--1996 (top right), 1997--2005 (bottom left) and 2006--2015 (bottom right).} \label{figtempprod} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_PROD_corr_1.pdf} \end{subfigure} \\ \vspace{4mm} \centering \begin{subfigure}[b]{.33\linewidth} \includegraphics[width=.99\textwidth]{explore_PROD_season_1_ts.pdf} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_CAPE_season_1_ts.pdf} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_SRH_season_1_ts.pdf} \end{subfigure} \\ \vspace{4mm} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_PROD_season_1_ENSO.pdf} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_CAPE_season_1_ENSO.pdf} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_SRH_season_1_ENSO.pdf} \end{subfigure} \\ \caption{Exploratory analysis for monthly maxima: The top panels show the correlation map with time (in years from 1 to 37) for PROD April maxima (left) and the correlation map with ENSO for PROD February maxima (right). The middle and bottom panels display PROD (left), CAPE (centre) and SRH (right) analyses on a subregion indicated by the black rectangle drawn on the correlation maps.
The middle panels show the region-averaged monthly maxima time series across all 444 months in light grey, the region-averaged April maxima time series in black and its $95\%$ confidence interval bounds indicated by the red shaded region. Every point in the time series is the average of the maxima across all grid points in the subregion indicated above, for a particular month and a particular year. The bottom panels show scatter-plots of the region-averaged February maxima with ENSO, along with the $95\%$ confidence interval bounds at each point indicated by the whiskers. The black line represents the best-fitting local regression trend estimate, with its $95\%$ confidence interval bounds indicated by the shaded blue region.} \label{prodseasonts} \end{figure} \section{Methodology} \label{Sec_Methodology} \subsection{Modelling of maxima} \label{Subsec_Model_Maxima} Risk assessment entails the estimation of return levels associated with very high return periods and of the probabilities of observing events so extreme that they have never occurred before. Extreme-value theory provides a solid framework for the extrapolation needed to perform these tasks for the maxima of PROD, CAPE and SRH. Here we present the statistical background to the results in Section~\ref{Sec_Results}; for further explanation and references see \citet{Coles:2001} or \citet{Davison.Huser:2015}. Let $M_n$ denote the maximum of the independent and identically distributed random variables $X_1, \dots, X_n$.
The extremal types theorem states that if there exist sequences $\{ a_n \}>0$ and $\{ b_n \} \in \mathbb{R}$ such that $(M_n-b_n)/a_n$ has a non-degenerate limiting distribution as $n\to\infty$, then this must be a generalized extreme-value (GEV) distribution, $$ \mathrm{GEV}_{\eta, \tau, \xi}(x)=\left \{ \begin{array}{ll} \exp \left[ - \left \{ 1+\xi (x-\eta)/\tau \right \}^{-1/\xi}_{+} \right] ,& \quad \xi \neq 0, \\ \exp \left[ -\exp \left \{ -(x-\eta)/\tau \right \}_{+} \right ], & \quad \xi =0, \end{array} \quad x \in \mathbb{R}, \right. $$ where $\xi$ and $\eta$ are real-valued, $\tau>0$ and, for any real $a$, $a_+=\max \{a,0 \}$. This implies that if $n$ is large enough, we may approximate the distribution of $M_n$ by \begin{equation} \label{Eq_Distr_Maxima} \mathbb{P}(M_n \leq x) \approx \mathrm{GEV}_{\eta, \tau, \xi}(x), \quad x \in \mathbb{R}, \end{equation} for suitably chosen $\eta$, $\tau$ and $\xi$, which are location, scale and shape parameters. The latter defines the type of the distribution: $\xi>0$, $\xi <0$ and $\xi=0$ correspond to the Fr\'echet, Weibull and Gumbel types and allow quite different statistical behaviours, with the first giving a heavy upper tail with polynomial decay, the second modelling bounded variables, and the third an intermediate case, unbounded with an exponentially-decaying upper tail. The GEV approximation for maxima remains valid if the variables are dependent, provided that distant extremes are ``nearly independent'' (more formally, Leadbetter's $D(u_n)$ condition is satisfied). We shall see below that this appears to be the case for our time series, so it is plausible that ~\eqref{Eq_Distr_Maxima} applies. The results above provide a natural model for maxima of stationary sequences. To apply this model we split the data into blocks of equal lengths and compute the maximum of each block. Assume that we have $T$ blocks of length $n$ and let $M_n^{(1)}, \dots, M_n^{(T)}$ denote the corresponding maxima. 
If $n$ is large enough, the distribution of the $M_n^{(t)}$ is approximately~\eqref{Eq_Distr_Maxima}, upon which inference can be based; this is the so-called block maximum method. As noted in Section~\ref{Sec_Data}, PROD, CAPE and SRH maxima exhibit a time trend and/or a relation with ENSO for some months, and we can allow the GEV parameters to depend upon these variables. Figure~\ref{figseason} and results in Section~\ref{Sec_Results} show that the temporal or ENSO effects only appear for certain months. For instance, time trends for PROD, CAPE and SRH are mainly present in April and May, April to June and April and May, respectively. We therefore choose our blocks to be the months and study each month separately, fitting the models \begin{equation} \label{Eq_Parameters_GEV_Function_Time} M_n^{(t)} \sim \mathrm{GEV}_{\eta_{\mathrm{ti}}(t), \tau_{\mathrm{ti}}, \xi_{\mathrm{ti}}}, \quad \eta_{\mathrm{ti}}(t)=\eta_{0, \mathrm{ti}} + \eta_{1, \mathrm{ti}} t, \quad t=1, \dots, T, \end{equation} and \begin{equation} \label{Eq_Parameters_GEV_Function_ENSO} M_n^{(t)} \sim \mathrm{GEV}_{\eta_{\mathrm{en}}(t), \tau_{\mathrm{en}}, \xi_{\mathrm{en}}}, \quad \eta_{\mathrm{en}}(t)=\eta_{0, \mathrm{en}} + \eta_{1, \mathrm{en}} \mathrm{ENSO}_t, \quad t=1, \dots, T, \end{equation} where $\eta_{0, \mathrm{ti}}$, $\eta_{1, \mathrm{ti}}$, $\eta_{0, \mathrm{en}}$, $\eta_{1, \mathrm{en}}$, $\xi_{\mathrm{ti}}$ and $\xi_{\mathrm{en}}$ are real-valued, $\tau_{\mathrm{ti}}$ and $\tau_{\mathrm{en}}$ are positive, ${\rm ENSO}_t$ is the value of ENSO in that month for year $t$, and $n$ equals 224, 232, 240 or 248, depending on the number of days in the month, as we have eight observations per day. 
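To make the fitting procedure concrete, the following minimal Python sketch (ours, not the authors' code; the starting values and optimizer settings are our own choices) fits a GEV with the linear location $\eta(t)=\eta_0 + \eta_1 t$ of model~\eqref{Eq_Parameters_GEV_Function_Time} by numerical maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def gev_nll(params, x, t):
    """Negative log-likelihood of a GEV with location eta(t) = eta0 + eta1*t,
    constant scale tau and constant shape xi."""
    eta0, eta1, tau, xi = params
    if tau <= 0:
        return np.inf
    s = (x - (eta0 + eta1 * t)) / tau
    if abs(xi) < 1e-8:            # Gumbel limit xi -> 0
        return np.sum(np.log(tau) + s + np.exp(-s))
    z = 1.0 + xi * s
    if np.any(z <= 0):            # outside the support of the GEV
        return np.inf
    return np.sum(np.log(tau) + (1.0 + 1.0 / xi) * np.log(z) + z ** (-1.0 / xi))

def fit_gev_trend(x, t):
    """Fit (eta0, eta1, tau, xi) by numerical maximum likelihood."""
    b1, b0 = np.polyfit(t, x, 1)  # crude starting values for the location
    start = np.array([b0, b1, np.std(x), 0.1])
    res = minimize(gev_nll, start, args=(x, t), method="Nelder-Mead",
                   options={"maxiter": 10000, "xatol": 1e-10, "fatol": 1e-10})
    return res.x, -res.fun        # estimates and maximised log-likelihood
```

The ENSO model~\eqref{Eq_Parameters_GEV_Function_ENSO} is obtained by simply replacing the covariate $t$ with $\mathrm{ENSO}_t$.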
Figure~\ref{prodseasonts} suggests that effects of time and ENSO on maxima are roughly linear and impact the location parameter $\eta$ only, so we consider constant scale and shape parameters; it is generally inappropriate to allow the \textcolor{black}{shape parameter} to depend on a covariate owing to the large uncertainty of its estimate. The time trend induces non-stationarity between the blocks (i.e., across years) but does not violate the within-block stationarity assumption; see below. Figure~\ref{figseason} suggests that the time trend does not stem from a shift of seasonality. \begin{figure} \centering \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_PROD_season_13} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_CAPE_season_13} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{explore_SRH_season_13} \end{subfigure}\\ \caption{Whole region-averaged monthly maxima of PROD (left), CAPE (centre) and SRH (right). The four lines coloured from light blue to dark blue correspond to the time periods 1979--1987, 1988--1996, 1997--2005 and 2006--2015, respectively.} \label{figseason} \end{figure} We compute the monthly maximum for each month and a given grid point and thereby obtain the maxima $M_{248}^{(1)}, \dots, M_{248}^{(37)}$ for January, say. We then fit the models~\eqref{Eq_Parameters_GEV_Function_Time} and~\eqref{Eq_Parameters_GEV_Function_ENSO} by numerical maximum likelihood estimation for each month and grid point. Recall that, \textcolor{black}{provided the block size $n$ is large enough}, within-block stationarity and the $D(u_n)$ condition ensure the validity of~\eqref{Eq_Distr_Maxima} and hence allow us to consider the models~\eqref{Eq_Parameters_GEV_Function_Time} and~\eqref{Eq_Parameters_GEV_Function_ENSO}.
To check the plausibility of these two properties, we considered the 3-hourly time series of PROD, CAPE and SRH at $50$ representative grid points. For each block (associated with a triplet grid point-month-year), we fitted several autoregressive-moving average (ARMA) processes to the corresponding time series, chose the fit that minimized the Akaike information criterion (AIC), and used a Box--Pierce procedure to assess the independence of the corresponding residuals; we found no systematic departure from independence or stationarity. Often the residual distribution appeared to lie in the Fr\'echet or Gumbel maximum-domains of attraction, and \citet[Section 5.5]{Embrechts} show that in such cases convergence of the maxima to the GEV occurs even for ARMA processes. Hence the time series of data within the months seem to satisfy both stationarity and the $D(u_n)$ condition. Choosing the months as blocks thus appears reasonable, as is confirmed by our analysis in \textcolor{black}{the following section}. \textcolor{black}{On the other hand, choosing the seasons or years as blocks would mask many interesting features, and the} sample size associated with day- or week-long blocks is too low for the GEV approximation~\eqref{Eq_Distr_Maxima} to be reasonable. \subsection{Assessment of GEV fit} \label{Subsec_GEV_Test_Fit} At each grid point $i$ and month $j$, we fit the GEV to the monthly maxima, as described in \textcolor{black}{Section~\ref{Subsec_Model_Maxima}}, resulting in location, scale and shape parameter estimates $\hat{\eta}_{i,j}$, $\hat{\tau}_{i,j}$ and $\hat{\xi}_{i,j}$. We use the Kolmogorov--Smirnov test to assess the distributional proximity between this GEV and the empirical distribution of the $37$ observed monthly maxima. For PROD, CAPE and SRH, in most months, the fit appears acceptable at the $5\%$ level at all grid points. 
These good in-sample fits of the GEV for all variables are confirmed by the quantile-quantile (QQ) plots, which are displayed for one grid point in Figure~\ref{qqplot}. \begin{figure} \centering \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{GEVfit_PROD} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{GEVfit_CAPE} \end{subfigure} \begin{subfigure}[b]{.33\linewidth} \centering \includegraphics[width=.99\textwidth]{GEVfit_SRH} \end{subfigure}\\ \caption{Assessment of the in-sample fit of the GEV: QQ plots for PROD (left), CAPE (centre) and SRH (right) May maxima at the grid point whose South-West corner has coordinates $32^\circ$ latitude and $-99^\circ$ longitude. The shaded regions indicate the $95\%$ confidence bounds.} \label{qqplot} \end{figure} However, these results do not take into account the fitting of the GEV to the data, which systematically decreases the values of the Kolmogorov--Smirnov statistic. In order to make an informal allowance for this, we perform the following procedure for each grid point $i$ and month $j$: \begin{enumerate} \item fit the GEV using the pooled observations from the eight grid points nearest to $i$ to obtain $\hat{\eta}_{\mathrm{po}_i, j}$, $\hat{\tau}_{\mathrm{po}_i, j}$ and $\hat{\xi}_{\mathrm{po}_i, j}$; \item use a Kolmogorov--Smirnov test to check the agreement between the ``out-sample'' GEV with parameters $\hat{\eta}_{\mathrm{po}_i, j}$, $\hat{\tau}_{\mathrm{po}_i, j}$ and $\hat{\xi}_{\mathrm{po}_i, j}$, and the empirical distribution of the $37$ observed monthly maxima at grid point $i$. \end{enumerate} Then, we implement the same procedure $100$ times with data simulated from independent GEVs fitted at each grid point and compute the $5\%$ and $95\%$ quantiles of the empirical distribution of the number of rejections.
Table~\ref{table:ks} shows that, for all variables, the observed numbers of rejections are low compared to the number of grid points (619), especially as we did not account for multiple testing. Moreover, they are not tremendously different from those obtained in the simulation study, \textcolor{black}{although often slightly above the $95\%$ quantile in the case of CAPE and slightly below the $5\%$ quantile for SRH and PROD}. These discrepancies may be explained by the substantial spatial dependence in our data, not accounted for in the simulation study. This procedure supports the use of the GEV at grid points at which no data are available and thus goes beyond the initial goal of assessment of the GEV fit. \begin{table} \centering \renewcommand{\arraystretch}{1.5} \resizebox{\textwidth}{!}{ \begin{tabular}{c||cccccccccccc} \textbf{Variable} & \textbf{Jan} & \textbf{Feb} & \textbf{Mar} & \textbf{Apr} & \textbf{May} & \textbf{Jun} & \textbf{Jul} & \textbf{Aug} & \textbf{Sep} & \textbf{Oct} & \textbf{Nov} & \textbf{Dec} \\ \hline \hline \textbf{PROD} & 42 & 27 & 50 & 29 & 52 & 58 & 67 & 71 & 55 & 27 & 39 & 34 \\ \textbf{Sim PROD 5\%} & 37 & 34 & 33 & 33 & 35 & 35 & 37 & 38 & 38 & 36 & 36 & 37 \\ \textbf{Sim PROD 95\%} & 57 & 57 & 55 & 55 & 57 & 58 & 57 & 58 & 57 & 58 & 58 & 57 \\ \hline \hline \textbf{CAPE} & 59 & 33 & 48 & 34 & 60 & 64 & 71 & 90 & 67 & 43 & 58 & 74 \\ \textbf{Sim CAPE 5\%} & 41 & 36 & 37 & 36 & 36 & 36 & 38 & 42 & 39 & 36 & 36 & 37 \\ \textbf{Sim CAPE 95\%} & 59 & 58 & 60 & 56 & 57 & 58 & 61 & 63 & 61 & 60 & 58 & 59 \\ \hline \hline \textbf{SRH} & 36 & 23 & 24 & 21 & 22 & 42 & 42 & 34 & 35 & 26 & 24 & 36 \\ \textbf{Sim SRH 5\%} & 36 & 36 & 34 & 35 & 34 & 36 & 36 & 36 & 34 & 36 & 36 & 34 \\ \textbf{Sim SRH 95\%} & 60 & 59 & 59 & 53 & 57 & 57 & 57 & 58 & 58 & 58 & 61 & 57 \\ \end{tabular} } \caption{Assessment of the out-sample fit of the GEV: Number of rejections from our out-sample Kolmogorov-Smirnov test (at the $5\%$ level and without accounting 
for multiple testing) for each variable and each month. For each part (corresponding to one variable), the first row gives the observed number of rejections whereas the second and third ones provide the $5\%$ and $95\%$ quantiles of the empirical distributions of the number of rejections obtained from the simulation study.} \label{table:ks} \end{table} We conclude that the GEV provides a suitable model for the monthly maxima of our three variables. \subsection{Testing procedure} \subsubsection{General} In Section~\ref{Sec_Results}, we assess whether time and ENSO affect the location parameter of the fitted GEV for the three variables PROD, CAPE and SRH. However, as this is assessed at 619 grid points, we must make some allowance for multiple hypothesis testing. We first discuss the statistic used to test the significance of time and ENSO, respectively, in~\eqref{Eq_Parameters_GEV_Function_Time} and~\eqref{Eq_Parameters_GEV_Function_ENSO}. In the first case, we have to test the null and alternative hypotheses $$ H_0: \eta_{1, \mathrm{ti}}=0 \quad \mbox{versus} \quad H_A: \eta_{1, \mathrm{ti}} \neq 0, $$ by comparing the fits of the models $$ \mathcal{M}_0: \eta_{\mathrm{ti}}(t)=\eta_{0, \mathrm{ti}} , \quad \mathcal{M}_1: \eta_{\mathrm{ti}}(t)=\eta_{0, \mathrm{ti}} + \eta_{1, \mathrm{ti}} t, \quad t=1, \dots, 37, $$ and similarly for ENSO. 
We let $\ell_0(\mathcal{M}_0)$ and $\ell_1(\mathcal{M}_1)$ denote the maximized log-likelihoods for the models $\mathcal{M}_0$ and $\mathcal{M}_1$ and compute the signed likelihood ratio statistic $\tilde{T}=\mathrm{sgn}(\hat\eta_{1, \mathrm{ti}})[2 \{ \ell_1(\mathcal{M}_1)-\ell_0(\mathcal{M}_0) \}]^{1/2}$, where $\mathrm{sgn}(\hat\eta_{1, \mathrm{ti}})$ is the sign of the estimated trend under model $\mathcal{M}_1$; $\tilde T$ has an approximate standard Gaussian distribution under $H_0$, and the corresponding $p$-value is $p=2\Phi(-|\tilde t|)$, where $\tilde t$ is the observed value of $\tilde T$ and $\Phi$ denotes the standard Gaussian distribution function. Computing $p$ for the $m$ grid points yields $m$ ordered $p$-values $p_{(1)} \leq p_{(2)} \leq \dots \leq p_{(m)}$. The underlying $p$-values are likely to be positively correlated, since dependence on time or ENSO will have a spatial component, and we now discuss how to adjust for this. \subsubsection{Multiple testing} \label{Subsubsec_Multiple_Testing} A popular approach for multiple testing in climatology is the field significance test of~\cite{livezey1983statistical}, but unfortunately this gives little insight into where the results are significant, which is of high interest to us, and the regression approach of~\cite{delsole2011field} has the same drawback. Among methods to identify grid points where the results are significant are those, such as the Bonferroni method, that strongly control the so-called family-wise error rate, i.e., the probability that at least one null hypothesis is falsely rejected. However, when the number of hypotheses to test is large, such methods are so stringent that the power of the test is very low.
\cite{benjamini1995controlling} introduce the false discovery rate (FDR), namely the expected proportion of false rejections of the null hypothesis $H_0$ out of all rejections of it, and propose a procedure to ensure that the FDR is below a given level $q$ when performing multiple testing. Their approach, which we call the BH procedure, would reject $H_0$ at all grid points $i$ such that $p_i \leq p_{(k)}$, where $$ k=\max \left \{ i: p_{(i)} \leq q \frac{i}{m} \right \}. $$ In fact this ensures that the FDR is less than $q m_0/m$, where $m_0$ denotes the unknown number of grid points at which $H_0$ is true. We then say that the procedure controls the false discovery rate at level $qm_0/m$. For a chosen $q$, let $S_q$ be the number of grid points at which a particular covariate is declared significant by the BH procedure. Then we expect the true number of grid points where the relation is significant, $m_A$, to satisfy \begin{equation} \label{Eq_First_Lower_Bound_Number_Significance} m_A \geq (1-q)S_q. \end{equation} As the BH procedure ensures that the false discovery rate is not more than $q m_0/m$, we may argue a posteriori that we have controlled the FDR at level $$ q^{(1)} = \frac{q\{m-(1-q)S_q\}}{m}\leq q{m_0\over m}, $$ which entails that $m_A \geq (1-q^{(1)})S_q$. Iterating this argument by defining $$ q^{(n+1)} = \frac{q \left\{ m- \left(1-q^{(n)} \right) S_q \right\}}{m}, \quad n =1,2,\ldots, $$ the effective level at which we have controlled the FDR is therefore $q_{\lim}=\lim_{n \to \infty} q^{(n)}$. This limit is generally obtained after a few iterations. Finally, we may write that \begin{equation} \label{Eq_Lower_Bound_Number_Significance} m_A \geq (1-q_{\lim}) S_q. \end{equation} The BH procedure was originally shown to be valid for independent test statistics, but \citet[Theorem~2.1]{benjamini2001control} prove that it controls the FDR at level $q m_0/m$ if the statistics have a certain form of positive dependence. 
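As a concrete sketch, the BH cutoff and the iteration defining $q_{\lim}$ can be written in a few lines of plain Python (the function names are ours):

```python
def bh_rejections(pvals, q):
    """Benjamini-Hochberg procedure: return the indices of the hypotheses
    rejected at nominal FDR level q (reject all p-values <= p_(k))."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank i with p_(i) <= q * i / m
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= q * rank / m:
            k = rank
    return order[:k]

def iterated_lower_bound(S, m, q, tol=1e-12):
    """Iterate q^(n+1) = q * (m - (1 - q^(n)) * S) / m to its limit q_lim
    (the map is a contraction since q * S / m < 1) and return
    (q_lim, lower bound (1 - q_lim) * S on the number of true signals)."""
    q_n = q
    while True:
        q_next = q * (m - (1.0 - q_n) * S) / m
        if abs(q_next - q_n) < tol:
            break
        q_n = q_next
    return q_next, (1.0 - q_next) * S
```

With $m=619$, $S_q=313$ and $q=0.2$ (the April PROD case of Section~\ref{Sec_Results}), the iteration converges to $q_{\lim}\approx 0.110$, so~\eqref{Eq_Lower_Bound_Number_Significance} gives $m_A \geq 278$, against $m_A \geq 250$ from~\eqref{Eq_First_Lower_Bound_Number_Significance}.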
\cite{ventura2004controlling} apply the BH procedure to simulations representative of climatological data and covering the range of correlation scales likely to be encountered in practice, and find that it controls the FDR at level $qm_0/m$. \cite{yekutieli1999resampling} and \cite{benjamini2001control} propose modifications to account for more general dependence between the test statistics. The first is complicated and its gain over the BH procedure is limited, while the second is applicable whatever the dependence structure but has greatly reduced power, so \cite{ventura2004controlling} recommend the use of the BH procedure. The independence assumption underlying the BH procedure is clearly false for our data, \textcolor{black}{but they resemble} those considered in \cite{ventura2004controlling}, so applying the BH procedure at level $q$ should control the FDR at level $q m_0/m$. A more rigorous argument would use the asymptotic normality of our test statistic $\tilde{T}$ and the multivariate central limit theorem to show that $\tilde{T}_1, \ldots, \tilde T_m$ is asymptotically jointly Gaussian and that the results of \citet{benjamini2001control} can be applied, but this is outside the scope of the present paper. \section{Results} \label{Sec_Results} In this section we quantify the effects of time and ENSO in the location parameter of the GEV and study their significance, using $q=0.05$ and $q=0.2$, corresponding to control of the false discovery rate at the nominal levels 5\% and 20\%. In each case we first discuss PROD, which is the main variable of interest for severe thunderstorm risk, and then consider CAPE and SRH. We begin with the effect of time. Table~\ref{Tab_Results_BH_All} shows that many of the 619 grid points exhibit a significant time trend for PROD in April, May and August (and to a lesser extent in June and December). 
In April, this number equals $313$ at the $20 \%$ level, so~\eqref{Eq_First_Lower_Bound_Number_Significance} implies that at least $250$ of these grid points indeed have a trend; with~\eqref{Eq_Lower_Bound_Number_Significance}, this number rises to $278$. Figure~\ref{Slope_Significance_Time_April} indicates that, in April, the North-East, a very wide South-East corner and the South-West show significant time trends. In the first two regions, $\hat{\eta}_{1, \mathrm{ti}}$ is positive, corresponding to an increasing risk of \textcolor{black}{severe thunderstorm impacts}, particularly in already risky regions. Similar conclusions may be drawn from Figure~\ref{Slope_Significance_Time_May} in the case of May, though the South-East is less prominent. \textcolor{black}{The largest slope corresponds to an annual increase of PROD maxima of about $3\%$ of the corresponding PROD maximum recorded in $1979$.} \textcolor{black}{\cite{mannshardt2013extremes} and \cite{heaton2011spatio} do not find such a significantly positive time trend over the whole region that is most at risk, sometimes called tornado alley, and they do not obtain significantly positive trends in the North-East of our region, whereas they find a significant positive trend towards the West. These differences are likely explained by the following facts: they consider a less recent period (1958--1999), their product variable is slightly different from ours, and they study annual instead of monthly maxima. The discrepancies with \cite{heaton2011spatio} may also be explained by methodological differences; as already mentioned, they use a Bayesian hierarchical approach. The evolution obtained by \cite{gilleland2013spatial} between the second (1979--1992) and the third (1993--1999) period is quite consistent with our trends in Spring; for the other seasons, however, the results differ markedly.
There are also many dissimilarities when considering the evolution between the first (1958--1978) and the second (1979--1992) periods; this comparison is less relevant since the first period does not belong to the time range we consider. \cite{gilleland2013spatial} consider the mean simulated values conditional on the total amount of energy being large, and then not all grid point values need be extreme; on the other hand, we look at maxima at each grid point. Moreover, the trends we find account for the year-to-year variation, whereas in \cite{gilleland2013spatial}, changes can only be assessed by comparing values for three successive periods of about 15 to 20 years. The positive time trends we obtain in Spring appear quite consistent with the results of \cite{Gensini2018}, who use much more recent data than the previously described papers. The remaining differences, especially for Texas, may arise for the following reasons. First, as PROD is only an indicator of severe weather, there are necessarily discrepancies with results based on effective tornado reports. Second, PROD slightly differs from STP, so the corresponding results may differ somewhat. Furthermore, the findings of \cite{Gensini2018} about reports concern the total number of tornadoes per year, and those about STP are not based on the maxima of that variable.} Regarding CAPE, April, May and June (and to a lesser extent, August, November and January) show many grid points with a significant time trend. For April and May, Figures~\ref{Slope_Significance_Time_April} and~\ref{Slope_Significance_Time_May} show significantly negative $\hat{\eta}_{1, \mathrm{ti}}$ in the West, contrasting with a significantly positive trend in the center and the East. As pointed out by \cite{Trapp2009} and \cite{Diffenbaugh2013}, a positive time trend for CAPE is expected in a context of climate change. However, to the best of our knowledge, an \textit{observed} trend has not been previously reported in the literature. 
For SRH, May and to a lesser extent April have many significantly positive grid points spread approximately uniformly except in a large South-West corner in April and a large South-East corner in May. The significance for PROD in April and May comes from both CAPE and SRH. Figures~\ref{Slope_Significance_Time_April} and~\ref{Slope_Significance_Time_May} suggest that the significant positive time trend in the \textcolor{black}{riskiest} part of the US stems mainly from CAPE in April and from SRH in May. \textcolor{black}{Overall, no seasonal pattern appears: CAPE seems to drive PROD in January, April, August, November and December, whereas SRH seems to drive it in February, May, June and September. For March, July and October, there is no clear driver. Nevertheless, relating the behaviour of PROD maxima to that of CAPE and SRH maxima has its limitations: the maximum of PROD need not equal the product of the square root of the CAPE maximum and the SRH maximum, as the maxima of the individual variables may not coincide.} We now comment on the effect of ENSO. For PROD, Table~\ref{Tab_Results_BH_All} reveals that many grid points exhibit a significant relation in February. Figure~\ref{Slope_Significance_ENSO_February} indicates that $\hat{\eta}_{1, \mathrm{en}}$ is negative at those grid points and that the main regions concerned are the North-East, the South-Center and the North-West; we expect higher PROD maxima during La Ni\~na years in these regions. \textcolor{black}{The largest slope in absolute value corresponds to a decrease of PROD maxima, per unit of ENSO, of about $10\%$ of the corresponding baseline PROD maximum.} There is no strikingly significant result for CAPE, although \cite{Allen:ENSO2014} found ENSO signals in CAPE seasonal averages for winter and spring, without accounting for multiple testing. For SRH, a very large number of grid points exhibit significance in February.
Figure~\ref{Slope_Significance_ENSO_February} shows that almost all grid points are concerned except for a strip in the North and a tiny diagonal strip in the South-East corner of the region. The estimate $\hat{\eta}_{1, \mathrm{en}}$ is highly negative in most of the region but very positive in the extreme South-East, with a very rapid change in sign, presumably due to proximity with the Gulf of Mexico. There is a significant negative relation in regions at risk of thunderstorms or large-scale storms, for which SRH plays an essential role. \textcolor{black}{The risk of large impacts} may increase during La Ni\~na years. A relationship between seasonal averages of SRH and ENSO in winter was noticed by \cite{Allen:ENSO2014}. Finally, Figure~\ref{Slope_Significance_ENSO_February} suggests that CAPE contributes more than SRH to PROD in terms of significance, though the relation with ENSO is more pronounced for SRH than for CAPE. \begin{table} \center \renewcommand{\arraystretch}{1.5} \resizebox{\textwidth}{!}{ \begin{tabular}{ccc||cccccccccccc} \textbf{Variable} & \textbf{Covariate} & \textbf{q} &\textbf{Jan} &\textbf{Feb} &\textbf{Mar} &\textbf{Apr} &\textbf{May} &\textbf{Jun} &\textbf{Jul} &\textbf{Aug} &\textbf{Sep} &\textbf{Oct} &\textbf{Nov} &\textbf{Dec} \\ \hline \hline \textbf{PROD} & \textbf{Time} & 0.05 & 7 & 0 & 1 & 41 & 36 & 0 & 0 & 36 & 2 & 0 & 0 & 22 \\ & \textbf{Time} & 0.2 & 40 & 0 & 4 &313 &203 & 81 & 13 &148 & 23 & 0 & 0 & 98 \\ & \textbf{ENSO} & 0.05 & 0 & 58 & 10 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ & \textbf{ENSO} & 0.2 & 1 &172 & 26 & 0 & 3 & 3 & 0 & 0 & 0 & 0 & 0 & 1 \\ \hline \hline \textbf{CAPE} & \textbf{Time} & 0.05 & 37 & 13 & 28 &109 & 60 & 89 & 18 & 55 & 4 & 0 & 30 & 1 \\ & \textbf{Time} & 0.2 & 92 & 37 & 73& 268 &273 &206 & 75 &133 & 35 & 40 &134 & 16 \\ & \textbf{ENSO} & 0.05 & 15 & 0 & 0 & 0 & 0 & 2 & 2 & 0 & 0 & 0 & 1& 1 \\ & \textbf{ENSO} & 0.2 & 27 & 11 & 21 & 0 & 0 & 3 & 16 & 14 & 0 & 1 & 6 & 13 \\ \hline \hline \textbf{SRH} & 
\textbf{Time} & 0.05 & 0 & 1 & 0 & 7 & 43 & 2 & 1 & 7 & 0 & 0 & 0 & 0 \\ & \textbf{Time} & 0.2 & 15 & 44 & 4 &138 &230 & 14 & 50 & 45 & 6 & 0 & 0 & 27 \\ & \textbf{ENSO} & 0.05 & 0& 255 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & \textbf{ENSO} & 0.2 & 3 &384 & 59 & 18 & 4 & 0 & 8 & 7 & 4 & 1 & 0 & 82 \\ \hline \hline & & & & & & & & & & & & & & \\ \hline \hline \textbf{PROD res.} & \textbf{Time} & 0.05 & 7 & 0 & 2 & 30 & 88 & 0 & 0 & 41 & 2 & 0 & 0 & 38 \\ & \textbf{Time} & 0.2 & 50 & 16 & 6 &274 &221 & 86 & 21 &137 &18 & 0 & 2 &100 \\ \hline \hline \textbf{CAPE res.} & \textbf{Time} & 0.05 & 35 & 20 & 15 & 87 & 96 & 89 & 25 & 59 & 9 & 0 & 19 & 2 \\ & \textbf{Time} & 0.2 & 88 & 46 & 51& 219& 267 &223 & 91 &139 & 54 & 41 &120 & 29 \\ \hline \hline \textbf{SRH res.} & \textbf{Time} & 0.05 & 0 & 0 & 0 & 7 & 38 & 2 & 1 & 7 & 0 & 0 & 0 & 0 \\ & \textbf{Time} & 0.2 & 20 & 1 & 6& 126& 241 & 7 & 46 & 41 & 1 & 0 & 0 & 60 \\ \hline \hline & & & & & & & & & & & & & & \\ \hline \hline \textbf{PROD res.} & \textbf{ENSO} & 0.05 & 1 & 66 & 8 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 7 \\ & \textbf{ENSO} & 0.2 & 1 & 178 & 26 & 0 & 49 & 3 & 0 & 0 & 0 & 0 & 0 & 33 \\ \hline \hline \textbf{CAPE res.} & \textbf{ENSO} & 0.05 & 1 & 0 & 0 & 0 & 0 & 3 & 5 & 1 & 0 & 0 & 0 & 2 \\ & \textbf{ENSO} & 0.2 & 21 & 38 & 0 & 1 & 0 & 4 & 17 & 16 & 0 & 0 & 1 & 21 \\ \hline \hline \textbf{SRH res.} & \textbf{ENSO} & 0.05 & 0 & 209 & 0 & 0 & 4 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ & \textbf{ENSO} & 0.2 & 1 & 359 & 20 & 38 & 14 & 0 & 3 & 7 & 2 & 1 & 0 & 63 \\ \end{tabular} } \caption{Number of grid points where $\hat{\eta}_{1, \mathrm{ti}}$ and $\hat{\eta}_{1, \mathrm{en}}$ are significant for PROD, CAPE and SRH maxima for each month (top); number of grid points where $\hat{\eta}_{1, \mathrm{ti}}$ is significant for PROD, CAPE and SRH maxima residuals after accounting for the relation with ENSO (middle); number of grid points where $\hat{\eta}_{1, \mathrm{en}}$ is significant for PROD, CAPE and SRH maxima residuals 
after accounting for the relation with time (bottom). We have accounted for multiple testing using the BH procedure with the values of $q$ displayed.} \label{Tab_Results_BH_All} \end{table} \begin{figure} \center \includegraphics[scale=0.67]{Slope_Significance_Time_April} \caption{Values and significance of the slope $\hat{\eta}_{1, \mathrm{ti}}$ for PROD (top), CAPE (middle) and SRH (bottom) maxima in April. Large and small circles indicate significance (after accounting for multiple testing using the BH procedure) at any level not lower than $5\%$ and $20\%$, respectively. The units of $\hat{\eta}_{1, \mathrm{ti}}$ are m$^3$s$^{-3}$yr$^{-1}$, Jkg$^{-1}$yr$^{-1}$ and m$^2$s$^{-2}$yr$^{-1}$ for PROD, CAPE and SRH, respectively. Dark grey corresponds to grid points where no observations are available.} \label{Slope_Significance_Time_April} \end{figure} \begin{figure} \center \includegraphics[scale=0.67]{Slope_Significance_Time_May} \caption{Same content as in Figure~\ref{Slope_Significance_Time_April} in the case of May.} \label{Slope_Significance_Time_May} \end{figure} \begin{figure} \center \includegraphics[scale=0.67]{Slope_Significance_ENSO_February} \caption{Values and significance of \textcolor{black}{the ENSO coefficient} $\hat{\eta}_{1, \mathrm{en}}$ for PROD (top), CAPE (middle) and SRH (bottom) maxima in February. Large and small circles indicate significance (after accounting for multiple testing using the BH procedure) at any level not lower than $5\%$ and $20\%$, respectively. The units of $\hat{\eta}_{1, \mathrm{en}}$ are m$^3$s$^{-3}{}^{\circ}\mathrm{C}^{-1}$, Jkg$^{-1}{}^{\circ}\mathrm{C}^{-1}$ and m$^2$s$^{-2}{}^{\circ}\mathrm{C}^{-1}$ for PROD, CAPE and SRH, respectively.} \label{Slope_Significance_ENSO_February} \end{figure} We also considered the residuals of PROD, CAPE and SRH maxima after accounting for ENSO or temporal effects. 
For instance, if we observe a time trend, considering the residuals after accounting for ENSO lets us determine whether that trend is explained by ENSO. More generally, this allows us to assess whether the time and ENSO effects are ``independent''. In the case of PROD, Table~\ref{Tab_Results_BH_All} shows that removing ENSO does not much decrease the number of grid points exhibiting a significant time trend; there is a slight decrease for April but a small increase for some other months. Accounting for the time trend, on the other hand, can slightly increase the number of grid points showing a significant relation with ENSO. For CAPE, removing ENSO decreases the number of grid points exhibiting a significant time trend for March, but there is a slight increase for other months, whereas accounting for time slightly decreases the number of grid points showing a significant relation with ENSO in January and March only, with a slight increase in other months. Regarding SRH, removing ENSO decreases the number of grid points exhibiting a significant time trend in February but there is little impact for other months. The conclusions are similar when accounting for the time trend and studying the ENSO effect. The maps of the residuals (not shown) indicate that when removing a covariate has little impact on the number of grid points at which the relation with the other covariate is significant, it has almost no impact on their positions either. To summarize, the effects of time and ENSO appear ``independent'', except for CAPE in January and March and SRH in February. \section{Conclusion} \label{Sec_Conclusion} This article quantifies the effects of time and ENSO on the distribution of monthly maxima of PROD, CAPE and SRH, which are highly relevant to the risk of severe thunderstorms. The use of the GEV appears justified in our setting.
After allowance for multiple testing, we detect a significant time trend in the location parameter of the GEV for PROD maxima in April, May and August, CAPE maxima in April, May and June and SRH maxima in April and May. The observed upward time trend for CAPE, although expected in a warming climate, has not been reported before. April and May are prominent for PROD, as severe thunderstorms are common in this period, and the corresponding trend is positive in parts of the US where the risk is already high, which may have important consequences. We also find ENSO to be a useful covariate in the location parameter of the GEV for PROD and SRH maxima in February. The corresponding relationship is negative over most of the region we consider, suggesting that \textcolor{black}{the risk of storm impacts} in February increases during La Ni\~na years. \textcolor{black}{Our results differ from those of \citet{heaton2011spatio}, \cite{mannshardt2013extremes} and \cite{gilleland2013spatial}, but are quite consistent with those obtained by \cite{Gensini2018}, perhaps in part because these authors consider a period similar to ours, more recent than in the earlier studies.} We investigate the effects of time and ENSO on the marginal (at each grid point) extremal behaviour of PROD, CAPE and SRH. Quantifying the potential impacts of these covariates on the local spatial extremal dependence of these variables would also be useful for risk assessment. \textcolor{black}{Modelling} the extremal dependence between CAPE and SRH might also be informative. \textcolor{black}{An interesting question concerns the implications of an increase in PROD (or SRH) maxima. As PROD can be seen as a proxy for the probability of severe thunderstorm occurrence, it is natural to think that PROD maxima may be good indicators for the maxima of the variable ``number of severe thunderstorms per day''.
This would imply that the days on which PROD maxima occur generally correspond to the days with the largest severe thunderstorm impacts. Providing clear insight into whether this is indeed the case would be valuable.} \section*{Acknowledgements} The work was supported by the Swiss National Science Foundation (project 200021\_178824). NARR data were downloaded from the Research Data Archive (RDA) at the National Center for Atmospheric Research (NCAR), Computational and Information Systems Laboratory at \url{http://rda.ucar.edu/datasets/ds608.0/}. The ERSSTv5 from the NOAA Climate Prediction Center is available at \url{https://www.cpc.ncep.noaa.gov/data/indices/ersst5.nino.mth.81-10.ascii}. \newpage \bibliographystyle{apalike}
\section{Introduction} Transition metal oxides are one of the most interesting material classes providing a huge variety of structural, magnetic, and electronic properties ranging from metallic to insulating, from ferro- to antiferromagnetic, as well as ferroelectric states \cite{transmetox}. In particular, thin magnetite films (Fe$_3$O$_4$) \cite{moussy} have attracted intensive research interest over the last decade in the field of spintronics \cite{spintronic} and spin caloritronics \cite{spincal}. Due to their anticipated half-metallic behavior with complete spin polarization at the Fermi level \cite{halfmet} and their high (bulk) Curie temperature of 858\,K \cite{BookofIronOxide}, thin magnetite films are promising candidates for room temperature spintronic devices such as highly spin\,-\,polarized electrodes for magnetic tunneling junctions \cite{mtj, highTMR} or spin\,-\,injectors \cite{spininjector}. Furthermore, multilayers of magnetite and platinum show huge thermoelectric effects \cite{ramos16} based on the recently observed spin Seebeck effect in magnetite \cite{ramos13}, pushing the development of more efficient thermoelectric nanodevices \cite{SSE}. Magnetite crystallizes in the inverse spinel structure with a lattice constant of 8.3963\,\AA~\cite{BookofIronOxide} at 300\,K. At $\sim$\,120\,K it undergoes a metal-insulator transition (Verwey transition) \cite{verwey} accompanied by a change from cubic to monoclinic crystal symmetry \cite{MagMonoclin}. The reduction of the crystal symmetry leads to a spontaneous ferroelectric polarization and, thus, to multiferroicity \cite{ferroelctric, multiferroic}. In order to control the relative magnetization alignment in magnetic tunnel junctions, exchange bias effects induced by additional antiferromagnetic layers are commonly used \cite{FirstExchangeCoupling}.
In the case of Fe$_3$O$_4$ tunnel junctions, antiferromagnetic NiO is a good candidate due to its small lattice mismatch of only 0.5\,\% and a high N\'{e}el temperature of 523\,K \cite{NiONeel}. Nickel oxide is an insulating material with high thermal stability. It crystallizes in a rock salt structure with a lattice constant of 4.177\,\AA~\cite{NiOPoisson} at 300\,K. In addition, NiO could be a key material for thermoelectric devices, since it was recently shown that NiO and other antiferromagnetic materials enhance the thermally induced spin currents in spin Seebeck effect experiments \cite{SpinCurrentAFM, SSEAFM}. Previous works \cite{Berti1,MagMomMag,StrainMagMgO,bilayerExBias,bilayersMgO,bilayersMgO2,SchemmeBilayer} have focused on the characterization of magnetite and nickel oxide films grown on MgO substrates because of the small lattice mismatch of 0.3\,\% and 0.8\,\%, respectively. However, it has recently been demonstrated that the electronic and magnetic properties of magnetite films can be modified using SrTiO$_3$ substrates \cite{MagSTO, MagAxisMagSTO} in spite of the large lattice mismatch of -7.5\,\%. For instance, one advantage of using SrTiO$_3$ substrates is the possibility of doping and, thus, a tunable conductivity providing either an insulating or metallic substrate that could be used as a bottom electrode in capacitor-like structures \cite{multiferroic}. Fe$_3$O$_4$/NiO bilayer systems are attractive for magnetic tunnel junctions as well as for thermoelectric devices. Furthermore, bilayers grown on SrTiO$_3$ can be used to synthesize Ni$_x$Fe$_{3-x}$O$_4$ thin films by thermally induced interdiffusion with tunable magnetic and electric properties \cite{Kuschel1}.
Regarding Fe$_3$O$_4$/NiO bilayers, previous studies have focused on electronic structure, interfacial coupling and magnetic characterization \cite{MagPropBilayer,Kupper,Fe3O4NiOinterface,ElStructBilayer} whereas to the best of our knowledge, there are no detailed structural studies for bilayers on SrTiO$_3$. However, the magnetic and transport characteristics of such films are sensitive to structural variations, number of defects or stoichiometric deviations and could be affected by the strain between the film and substrate. Therefore, in this work a comparative study on the growth, structural and magnetic properties of Fe$_3$O$_4$/NiO bilayers grown on MgO(001) and Nb-doped SrTiO$_3$(001) is presented. Directly after deposition, the stoichiometry in the surface near region and the surface structure of each layer was determined \textit{in situ} using x\,-\,ray photoelectron spectroscopy (XPS) and low energy electron diffraction (LEED), respectively. The bulk structure was investigated \textit{ex situ} by x\,-\,ray reflectivity (XRR) and synchrotron radiation x\,-\,ray diffraction (SR-XRD) measurements and analyzed within the full kinematic diffraction theory. Further, magnetic properties, e.g., magnetocrystalline anisotropy (MCA), have been investigated via vibrating sample magnetometry (VSM). \section{Experimental Details} Preparation and characterization of the thin oxide films were carried out in an interconnected ultrahigh\,-\,vacuum (UHV) system at a base pressure of 10$^{-8}$\,mbar in the deposition chamber and 10$^{-10}$\,mbar in the analysis chamber. Epitaxial Fe$_3$O$_4$/NiO ultra thin bilayer systems with thicknesses between 5\,nm and 20\,nm were grown via reactive molecular beam epitaxy (RMBE) on 0.05\,\% Nb-doped SrTiO$_3$(001) or on MgO(001) single crystalline substrates. 
Prior to deposition, the substrates were annealed at 400\,$^\circ$C in 1$\times$10$^{-4}$\,mbar O$_2$ atmosphere for 1\,h in order to remove carbon contamination and obtain well-defined surfaces. Subsequently, nickel oxide and magnetite films were deposited by thermal evaporation from pure metal rods in 1$\times$10$^{-5}$\,mbar and 5$\times$10$^{-6}$\,mbar oxygen atmosphere, respectively. Deposition was performed at 250\,$^\circ$C substrate temperature using deposition rates of 0.01\,nm/s for nickel oxide films and 0.25\,nm/s for magnetite films as controlled by a quartz microbalance adjacent to the evaporation source. The resulting film thicknesses were determined later on \textit{ex situ} by XRR. Crystal surface quality and stoichiometry were monitored \textit{in situ} after each preparation step by LEED and XPS using an Al K$_\alpha$ (h$\nu\,=\,1486.6$\,eV) radiation source and a Phoibos HSA 150 hemispherical analyzer. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{LEED3.pdf} \caption{LEED pattern recorded at 140\,eV for (a) pure MgO(001) surface, (b) 11.9\,nm NiO film on MgO(001) and (c) 21.5\,nm Fe$_3$O$_4$ on NiO/MgO(001). The LEED pattern taken at 100\,eV of a pure SrTiO$_3$ surface, a 10.4\,nm NiO film on SrTiO$_3$(001) and 20.7\,nm Fe$_3$O$_4$ on NiO/SrTiO$_3$(001) are depicted in (d), (e) and (f), respectively. The larger white square indicates the (1$\times$1) structure of the reciprocal unit cell of the respective surface while the smaller white square in (c) and (f) indicates the ($\sqrt{2}\times\sqrt{2})R45^{\circ}$ superstructure unit cell of magnetite.} \label{LEED} \end{figure} After transport under ambient conditions, XRR and XRD experiments were carried out \textit{ex situ} for structural characterization of the films. XRR measurements were performed in $\theta$\,-\,$2\theta$ geometry using a lab-based diffractometer (Philips X'Pert Pro) equipped with a Cu K$_\alpha$ anode.
An in-house developed fitting tool based on the Parratt algorithm \cite{ParratXRR} using N\'{e}vot-Croce \cite{nevot} roughness profiles was applied for the analysis of the XRR curves. For XRD, synchrotron radiation sources at the MaXLab beamline I811 (MaXLab, Lund, Sweden) and at the Swiss Light Source beamline X04SA (Paul Scherrer Institute, Villigen, Switzerland) were used. Both beamlines are equipped with (2S\,+\,3D) type diffractometers and Pilatus pixel area detectors for data collection. The XRD data were recorded in $\theta$\,-\,$2\theta$ geometry at an energy of 12.4\,keV and analyzed within the full kinematic diffraction theory \cite{KinDifTheo} using an in-house developed fitting tool. In addition, the magnetization curves were measured at room temperature for several in-plane directions of the samples by varying the magnetic field $\mu_0 H$ between -300\,mT and +300\,mT, using a vibrating sample magnetometer (VSM, Lakeshore, Model 7407). The magnetization loops were corrected by subtracting the paramagnetic contribution from the substrates. \section{Results} \subsection{LEED\,/\,XPS} Figure \ref{LEED} presents the LEED patterns of the cleaned MgO(001) and SrTiO$_3$(001) surfaces and the as prepared oxide films on the respective substrate. As an example, only patterns of $\sim$\,20\,nm Fe$_3$O$_4$ and $\sim$\,10\,nm NiO films are shown, since the results are similar. The intensity variations in all recorded patterns are due to dynamical scattering in electron diffraction. Clear (1$\times$1) structures corresponding to the square unit cells of MgO(001) and SrTiO$_3$(001) surfaces can be seen (cf.~Fig.~\ref{LEED}(a) and (d)). Due to the rocksalt structure of MgO the reciprocal unit vectors of the MgO(001) surface point in [110] and [$\bar{1}10$] directions forming a square reciprocal unit cell. The reciprocal unit vectors of the (001) surface of the perovskite SrTiO$_3$, however, point in [100] and [010] directions forming a square unit cell, as well.
Consequently, the reciprocal surface unit vectors of MgO(001) are $\sim\sqrt{2}$ times larger than those of SrTiO$_3$(001). The diffraction spots of the SrTiO$_3$ pattern are sharp and intense while the spots of the MgO substrate are broadened due to charging effects. The diffuse background is quite low in both patterns, pointing to clean surfaces and negligible point defects. Additionally, XPS measurements of both substrates show no carbon contamination, indicating chemically clean substrates. After deposition of NiO the LEED patterns exhibit a (1$\times$1) structure related to the square symmetry of the NiO(001) surface for both substrates (cf. Fig. \ref{LEED}(b) and (e)). As mentioned above, due to the rocksalt structure, the reciprocal unit vectors of the NiO(001) surface point in [110] and [$\bar{1}10$] directions and are consequently $\sim\sqrt{2}$ times larger than the surface unit cell of SrTiO$_3$ in reciprocal space. Due to the very similar lattice constants of NiO(001) and MgO(001) the diffraction spots are located at almost identical positions compared to the MgO(001) pattern. The bright diffraction spots and negligible background intensity of the NiO(001) surface also indicate a good crystalline quality with long range order for the growth on both substrates. However, compared to the diffraction pattern of the NiO/MgO, the diffraction spots of the NiO/SrTiO$_3$ surface are slightly broadened. This effect can be related to the formation of defects induced by the high lattice misfit of -\,6.9\,\% for NiO(001) on SrTiO$_3$(001). \begin{figure} \centering \includegraphics[width=0.43\textwidth]{XPSneu2.pdf} \caption{X\,-\,ray photoelectron spectra of (a) Ni\,2p region for the as prepared NiO films on MgO(001) and SrTiO$_3$.
(b) Fe\,2p region for the as prepared Fe$_3$O$_4$ films on NiO/MgO(001) and NiO/SrTiO$_3$.} \label{XPS} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{XRRneu.pdf} \caption{XRR measurements and the calculated intensities of the bilayers on (a) MgO and (b) SrTiO$_3$ substrates. (c) Surface and interface roughnesses obtained from the XRR measurements.} \label{XRR} \end{figure*} The LEED images of Fe$_3$O$_4$ obtained after deposition on NiO/MgO(001) and NiO/SrTiO$_3$(001) show similar diffraction patterns with a square symmetry (cf. Fig. \ref{LEED}(c) and (f)). Clear diffraction spots with half the peak distance compared to the NiO(001) surface indicate an approximately doubled lattice constant in real space, consistent with the almost doubled cubic lattice constant of Fe$_3$O$_4$ compared to the other oxides used here. Furthermore, an additional ($\sqrt{2}\times\sqrt{2})R45^{\circ}$ superstructure appears which is characteristic for a well-ordered magnetite surface \cite{pentcheva, korecki, and97}. This superstructure is not observed for maghemite ($\gamma$-Fe$_2$O$_3$), which has a very similar surface lattice constant. Therefore, we assume the formation of well-ordered stoichiometric magnetite films. The bright diffraction spots and the low background intensity verify a low defect density and large crystalline areas of the magnetite films for both substrates. The broadening of the diffraction spots of the magnetite surface grown on NiO/SrTiO$_3$(001) can be attributed to the already defective NiO surface caused by the high lattice mismatch. In summary, the LEED patterns of the Fe$_3$O$_4$/NiO bilayer systems confirm a crystalline cube-on-cube growth of both NiO and Fe$_3$O$_4$ films on MgO(001) as well as on SrTiO$_3$(001). The films grown on MgO substrates exhibit a higher crystalline quality and fewer surface defects compared to the bilayers grown on SrTiO$_3$.
XPS measurements were performed directly after deposition of the films to determine the stoichiometry and the valence state of the cation species. Figure \ref{XPS}(a) shows the XP spectra of the Ni\,2p region after deposition of nickel oxide and before deposition of iron oxide. All spectra of the Ni\,2p core level reveal Ni\,2p$_{3/2}$ and Ni\,2p$_{1/2}$ peaks at binding energies of 854.6\,eV and 872.5\,eV, respectively, and two intense satellite structures at about 7\,eV higher binding energies. Since these values agree well with the binding energies reported in the literature for a Ni$^{2+}$ valence state in NiO stoichiometry \cite{NiOSatellites, NiOenergies}, we assume that the oxide films are stoichiometric and have negligible amounts of point defects such as oxygen vacancies. Additionally, there is a shoulder $\sim$1.5\,eV above the Ni\,2p$_{3/2}$ peak, which is reported to be typical for NiO \cite{uhlen,NiOshoulder}. Thus, the shape of all spectra is comparable to that of a NiO bulk crystal \cite{XPSNiO,NiOenergies,XPSNiO2}. In Figure \ref{XPS}(b), the Fe\,2p photoelectron spectra of the iron oxide films as prepared on top of the NiO films are presented. From the position and shape of the Fe\,2p peaks one can obtain information about the iron oxidation state and the stoichiometry. All recorded spectra exhibit the same shape with main peaks located at binding energies of 710.6\,eV and 723.6\,eV for Fe\,2p$_{3/2}$ and Fe\,2p$_{1/2}$, respectively. These binding energies of the core levels correspond to the values of Fe$_3$O$_4$ well known from the literature \cite{Yamashita}. Additionally, in contrast to w\"ustite (FeO) and maghemite ($\gamma$-Fe$_2$O$_3$), no apparent charge transfer satellites can be observed between the two main peaks due to their overlap \cite{Yamashita,Fuji}. Consequently, all prepared iron oxide films exhibit the desired Fe$_3$O$_4$ stoichiometry.
Thus, both XPS and LEED measurements demonstrate that the bilayer structures on both kinds of substrates consist of crystalline stoichiometric NiO and Fe$_3$O$_4$ films. \subsection{XRR\,/\,XRD} \begin{figure*}[tb] \centering \includegraphics[width=1\textwidth]{XRDneu.pdf} \caption{XRD measurement along the (00$L$) CTR (a) of the Fe$_3$O$_4$/NiO/MgO samples and (b) of the Fe$_3$O$_4$/NiO bilayers on SrTiO$_3$. In red the calculated intensity distribution using the kinematic approximation is shown. (c) Vertical layer distance of nickel oxide and magnetite grown on MgO(001) and SrTiO$_3$(001) dependent on the film thickness. The dotted lines denote the fully relaxed bulk values of magnetite and nickel oxide.} \label{XRD} \end{figure*} XRR and XRD experiments were performed \textit{ex situ} to determine the structural parameters of the bilayers, e.g., film thicknesses and vertical lattice distances. Figures \ref{XRR}(a) and \ref{XRR}(b) show the measured reflectivity curves and the corresponding calculated reflectivity curves after optimizing the structural parameters. In addition, the obtained thicknesses of all studied bilayers are presented. For all samples clear intensity oscillations with beating effects are visible, indicating double layer structures and flat homogeneous films with small interface and surface roughnesses. The applied calculation model consists of a layer of iron oxide on top of a nickel oxide layer on the MgO or SrTiO$_3$ substrate. All fitted curves agree excellently with the experimental data using literature values for the dispersions $\delta_{\textrm{Fe$_3$O$_4$}}$\,=\,$1.53\times 10^{-6}$ and $\delta_{\textrm{NiO}}$\,=\,$1.89\times 10^{-6}$ \cite{henke}. This indicates a small density of defects, e.g., oxygen vacancies, which is in accordance with the XPS results. Additionally, the roughnesses of the films were determined and are presented in Fig. \ref{XRR}(c).
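The reflectivity analysis described above (Parratt recursion with N\'evot-Croce roughness factors) can be sketched as follows. This is a minimal stand-in for the in-house fitting tool, not the tool itself; the dispersion values, wavelength (Cu K$\alpha$), thicknesses and roughnesses below are generic assumed numbers for illustration, and absorption is neglected.

```python
import numpy as np

def parratt_reflectivity(theta_deg, wavelength, deltas, betas, thicknesses, sigmas):
    """Specular XRR from the Parratt recursion with Nevot-Croce roughness damping.

    deltas/betas: dispersion/absorption of each medium (vacuum first, substrate last);
    thicknesses:  one value per internal layer, in Angstrom;
    sigmas:       rms roughness of each interface, topmost first.
    """
    k0 = 2.0 * np.pi / wavelength
    sin_t = np.sin(np.radians(theta_deg))
    # vertical wave-vector component in every medium (complex below the critical angle)
    kz = [k0 * np.sqrt(sin_t**2 - 2.0 * d + 2.0j * b + 0j) for d, b in zip(deltas, betas)]
    r = 0.0j
    for j in range(len(deltas) - 2, -1, -1):
        # Fresnel coefficient of interface j, damped by the Nevot-Croce factor
        rf = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1]) \
             * np.exp(-2.0 * kz[j] * kz[j + 1] * sigmas[j]**2)
        if j == len(deltas) - 2:          # substrate interface starts the recursion
            r = rf
        else:                             # propagate through layer j+1 of finite thickness
            phase = np.exp(2.0j * kz[j + 1] * thicknesses[j])
            r = (rf + r * phase) / (1.0 + rf * r * phase)
    return float(np.abs(r)**2)

# Illustrative Fe3O4/NiO bilayer on MgO (assumed Cu K-alpha values, absorption neglected)
deltas = [0.0, 1.5e-5, 1.9e-5, 1.4e-5]    # vacuum, Fe3O4, NiO, MgO
betas = [0.0, 0.0, 0.0, 0.0]
thicknesses = [215.0, 100.0]              # 21.5 nm Fe3O4 on 10 nm NiO, in Angstrom
sigmas = [3.0, 3.0, 2.5]                  # interface roughnesses, in Angstrom

R_total = parratt_reflectivity(0.15, 1.54, deltas, betas, thicknesses, sigmas)  # total reflection
R_high = parratt_reflectivity(1.50, 1.54, deltas, betas, thicknesses, sigmas)   # Kiessig regime
```

Sweeping the incidence angle with this function produces the characteristic total-reflection plateau followed by Kiessig fringes whose beating encodes the two film thicknesses.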
Here, all films feature an increase of the surface and interface roughness with increasing film thickness. This effect can be attributed to kinetic roughening of the films during growth and to the progressing relaxation process \cite{KinRoughning}. The nickel oxide films exhibit similar roughnesses of $\sigma_{\textrm{NiO}}$\,=\,2.5\,-\,3.5\,\AA\ on both substrates with a small increase for thicker films. The roughness of the Fe$_3$O$_4$ films on NiO/MgO increases more drastically, while the magnetite films deposited on NiO/SrTiO$_3$ show a nearly constant large roughness with initially almost doubled values compared to the magnetite films on NiO/MgO. This behavior is likely caused by the high lattice misfit and the resulting relaxation process. It is in accordance with the broadened diffraction spots of the Fe$_3$O$_4$ films on NiO/SrTiO$_3$ observed in the LEED pattern (cf.~Fig.~\ref{LEED}). Figures \ref{XRD}(a) and \ref{XRD}(b) present the SR-XRD measurements of the (00$L$) crystal truncation rod (CTR) compared to intensities calculated by kinematic diffraction theory for the Fe$_3$O$_4$/NiO bilayers on MgO(001) and SrTiO$_3$(001), respectively. Here, the bulk nomenclature of the reciprocal space is used, where $L = c\,K_{\perp}/(2\pi)$ in reciprocal lattice units (r.l.u.) denotes the vertical scattering vector $K_{\perp}$ scaled to the Bragg condition $2\pi/c$ ($c_{\textrm{MgO}}$\,=\,4.2117\,\AA, $c_{\textrm{SrTiO}_3}$\,=\,3.905\,\AA). Due to the almost doubled lattice constant of magnetite compared to both MgO and NiO and the resulting lateral tensile strain, the (004)$_S$ spinel bulk reflection is located at higher $L$ values compared to MgO and close to the (002)$_R$ bulk reflection of a rock salt structure. On SrTiO$_3$, both nickel oxide and magnetite exhibit a large lattice misfit and are laterally compressively strained.
Thus, the (004)$_S$ reflection of magnetite and the (002)$_R$ reflection of NiO are at lower $L$ values compared to SrTiO$_3$ and well separated from the (002)$_P$ perovskite reflection of SrTiO$_3$. Here, the indices $R$, $S$ and $P$ indicate bulk indexing for the rock salt, spinel and perovskite type, respectively. For all bilayers grown on MgO, the measurements show a sharp peak at $L$\,=\,2 originating from the diffraction at the MgO substrate lattice (cf.~Fig.~\ref{XRD}(a)). Additionally, broad, rather intense features located at $L$\,$\sim$\,2.02 accompanied by strong Laue oscillations are visible due to the finite thickness of the iron and nickel oxide films. The well-pronounced intensity oscillations with two superposed partial oscillations clearly show a periodicity of two layers of different thickness, indicating a high crystalline ordering and homogeneous thicknesses of both films, magnetite and nickel oxide. This is in accordance with the results seen in the XRR measurements. In the case of the bilayers grown on SrTiO$_3$, the (00$L$) rod also shows a sharp substrate peak at $L$\,=\,2 and Laue oscillations due to crystalline magnetite and nickel oxide films (cf.~Fig.~\ref{XRD}(b)). Here, the Bragg peaks originating from the iron and nickel oxide are located at $L$\,$\sim$\,1.86 and broadened due to the finite film thicknesses. On closer inspection, the Laue oscillations also show a periodicity of two layers, whereby the damping of the oscillation originating from the magnetite increases with increasing magnetite thickness due to the increasing roughness of the Fe$_3$O$_4$ films (cf.~Fig.~\ref{XRR}(c)). Due to the small lattice mismatch between Fe$_3$O$_4$ and NiO a separation of the Bragg peaks originating from the respective films is not visible by eye. Complete data analysis using the full kinematic diffraction theory was performed to obtain the vertical layer distance of the respective oxide film.
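The qualitative features of the (00$L$) rod described above, a film Bragg peak at $L = c_{\textrm{sub}}/d$ flanked by thickness (Laue) oscillations, already follow from the simplest kinematic picture. The sketch below (constant form factor, single film, no substrate term; the layer number and spacing are assumed round values, not fitted ones) reproduces a film peak near $L\sim2.02$ for MgO:

```python
import numpy as np

# Kinematic (00L) intensity of an N-layer film, L in substrate r.l.u. (MgO: c = 4.2117 A).
c_sub = 4.2117        # MgO lattice constant, Angstrom
d_film = 2.08         # assumed vertical layer distance of the film, Angstrom
n_layers = 30         # assumed number of layers (roughly a 6 nm film)

L = np.linspace(1.90, 2.15, 2501)
q = 2.0 * np.pi * L / c_sub                      # vertical scattering vector K_perp
# coherent sum over the layer positions n*d_film -> Laue function with side oscillations
amplitude = np.exp(1j * np.outer(q, d_film * np.arange(n_layers))).sum(axis=1)
intensity = np.abs(amplitude)**2

L_peak = L[np.argmax(intensity)]                 # film Bragg peak sits at L = c_sub/d_film
```

The spacing of the side minima scales as $1/N$, which is how the Laue oscillations encode the film thickness; superposing two such sums with different $N$ and $d$ gives the two-period beating seen in the data.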
Within the calculation the atomic form factors of oxygen, nickel and iron arranged in a bulk structure are kept constant while the vertical size of the unit cell is varied. The applied models consist of a homogeneous Fe$_3$O$_4$/NiO bilayer on top of the respective substrate. This structural model involving the number of layers coincides with the layer model and the film thicknesses obtained from the XRR calculations. The obtained vertical layer distances ($c_{\textrm{NiO}}$/2 for NiO and $c_{\textrm{Fe}_3\textrm{O}_4}$/4 for Fe$_3$O$_4$) are shown in Fig.~\ref{XRD}(c). The dotted lines mark the bulk values of magnetite and nickel oxide. Due to the larger unit cell of MgO(001), pseudomorphic growth of NiO on MgO results in an expansion of the NiO unit cell in the lateral direction and, thus, a vertical compression and consequently a smaller vertical lattice distance. In the case of NiO grown on SrTiO$_3$(001) exactly the opposite is expected due to the smaller bulk unit cell of SrTiO$_3$ compared to NiO, resulting in an expansion of the vertical unit cell of NiO. For the NiO layers on MgO the vertical layer distance exhibits a compressive strain (2.078\,\AA) and shows no dependence on the NiO thickness in the investigated range (cf. Fig.~\ref{XRD}(c)). In the case of the bilayers grown on SrTiO$_3$ the vertical lattice distance of NiO (2.095\,\AA) points to tensile strain as a result of the lateral compression. Again, there is no dependence on the NiO thickness. However, the situation is different for the relaxation of the magnetite films. Due to the pseudomorphic growth of NiO on MgO the vertical layer distance of Fe$_3$O$_4$ grown on top of NiO/MgO is also slightly compressively strained but relaxes to higher values with increasing magnetite thickness. Its value relaxes from 2.0795\,\AA~for the 6.1\,nm thick magnetite film to 2.0885\,\AA~for the thickest magnetite film.
A strong relaxation with increasing film thickness of the magnetite can also be seen for magnetite films grown on NiO/SrTiO$_3$. The vertical lattice distance of Fe$_3$O$_4$ on NiO/SrTiO$_3$ is heavily tensile strained and decreases rapidly from 2.117\,\AA~for the thinnest film to 2.106\,\AA~for the 20.7\,nm thick magnetite film. \begin{figure}[t] \centering \includegraphics[width=0.43\textwidth]{loops2.pdf} \caption{VSM magnetization curves of the magnetic easy and hard directions for (a) the 21.5\,nm thick Fe$_3$O$_4$ film on NiO/MgO and (b) the 20.7\,nm thick Fe$_3$O$_4$ film on NiO/SrTiO$_3$.} \label{loops} \end{figure} \subsection{VSM} As an example, the magnetic properties of the two thickest magnetite films on NiO/MgO and NiO/SrTiO$_3$ were studied by means of VSM. The magnetization curves were measured for different azimuthal sample directions $\alpha$ between the substrate [100] direction and the applied magnetic field. Figures~\ref{loops}(a) and \ref{loops}(b) show the magnetic moment per f.u. as a function of the magnetic field for the bilayers on MgO and SrTiO$_3$, respectively, for two different directions of the external magnetic field. For both samples a typical ferro(i)magnetic behavior can be observed. Here, the blue curves recorded with the magnetic field applied in the [110] direction of the substrates represent magnetic easy axes with a high magnetic remanence and coercive field. The red curves recorded with the magnetic field applied in the [010] direction exhibit the magnetic behavior of a magnetic hard axis due to a lower coercive field and a smaller magnetic remanence. The Fe$_3$O$_4$ film on NiO/SrTiO$_3$ shows an enhanced coercive field compared to the magnetite film grown on NiO/MgO. One possible reason could be a higher density of grain boundaries due to the relaxation process, which supports pinned multidomain states that need larger magnetic fields to be switched.
This is consistent with the weaker structural quality, e.g., the high roughness and broad diffraction peaks seen in the LEED, XRR and XRD measurements. Further, the saturation magnetization of the Fe$_3$O$_4$ film grown on NiO/MgO amounts to (3.7$\pm$0.3)\,$\mu_B$/f.u. and coincides with the expected value of 4.0\,$\mu_B$/f.u. within the error tolerance \cite{MagMom, MagMom2}. In contrast, magnetite on NiO/SrTiO$_3$ shows a lower magnetic moment of (3.3$\pm$0.3)\,$\mu_B$/f.u., which may result from the antiferromagnetic coupling in the vicinity of anti-phase domain boundaries (APBs) \cite{APB}. The remanent magnetization as a function of the azimuthal sample angle $\alpha$ is shown in Fig.~\ref{remanence} for both investigated samples. The maxima of the magnetic remanence point into the $\left\langle100\right\rangle$ directions for both the Fe$_3$O$_4$ films on NiO/MgO and NiO/SrTiO$_3$, indicating the magnetic easy directions. Consequently, the magnetic hard axes are located in the $\left\langle110\right\rangle$ directions. \begin{figure}[t] \centering \includegraphics[width=0.43\textwidth]{remanence2.pdf} \caption{Polar plots of the magnetic remanence dependent on the azimuthal sample angle $\alpha$ of the 21.5\,nm thick Fe$_3$O$_4$ film on NiO/MgO (red) and the 20.7\,nm thick Fe$_3$O$_4$ film on NiO/SrTiO$_3$ (blue).} \label{remanence} \end{figure} \section{Discussion} XPS measurements taken directly after deposition reveal stoichiometric Fe$_3$O$_4$ and NiO on both substrates independent of the film thicknesses. Due to the limited mean free path of the electrons only the near-surface region ($\sim$5\,nm) of the layers could be characterized. In this region no evidence for the formation of non-stoichiometric magnetite was observed. Pilard \textit{et al.} found a 1.5\,nm thick NiFe$_2$O$_4$ interfacial layer after depositing NiO above 610\,$^\circ$C on Fe$_3$O$_4$ \cite{MagPropBilayer}.
Within the XPS measurements presented here the interfacial region could be probed only for the thinnest magnetite films, showing the spectral shape and binding energies typical for Ni$^{2+}$ in NiO stoichiometry. Thus, there is no evidence for the formation of NiFe$_2$O$_4$. Hard x-ray photoelectron spectroscopy (HAXPES) and x-ray magnetic circular dichroism (XMCD) measurements \cite{Kupper} of the same samples recorded after transport under ambient conditions show small traces of Fe$^{3+}$ excess on the surface of the bilayers grown on SrTiO$_3$. However, in deeper layers and at the interface the presence of stoichiometric NiO and Fe$_3$O$_4$ was confirmed, excluding the formation of NiFe$_2$O$_4$ clusters or any interfacial layer also for thicker Fe$_3$O$_4$ films \cite{Kupper}. Consequently, very thin magnetite films tend to form maghemite at the surface after exposure to ambient air, whereas thicker films seem to be more stable, as reported before by Fleischer \textit{et al.} \cite{Fleischer}. Since XPS and LEED measurements taken after preparation and under UHV conditions show no evidence for maghemite, a capping layer deposited directly after growth could prevent the possible oxidation process in the upper layers. \textit{In situ} LEED measurements also verified the Fe$_3$O$_4$ stoichiometry of the iron oxide films, showing the typical ($\sqrt{2}\times\sqrt{2})R45^{\circ}$ superstructure of the magnetite surface for all investigated films. Further, the NiO films on both substrates exhibit the expected ($1\times1$) pattern due to the rock salt crystal structure. The diffraction spots of the magnetite and NiO films grown on SrTiO$_3$ are slightly broadened compared to the films grown on MgO, indicating the formation of surface defects due to the high lattice misfit. Surface roughnesses obtained from the XRR analysis exhibit higher values for all films grown on SrTiO$_3$.
But while the roughness of the nickel oxide films deposited on SrTiO$_3$ is only about 0.5\,\AA\ higher than after deposition on MgO, the magnetite films on NiO/SrTiO$_3$ show initially almost doubled values compared to the magnetite films on NiO/MgO. This result is consistent with the broadened diffraction spots in the LEED pattern. Nevertheless, the XRR measurements provide distinct intensity oscillations indicating double layer structures and homogeneous film thicknesses. Thus, no intermixing of the two layers takes place during the deposition process. The entire structure of the samples was investigated by XRD measurements of the specular CTR. For all samples the thickness determined by XRR agrees well with the Laue oscillations and, hence, with the number of layers used for the XRD calculation. The strong intensity oscillations reveal crystalline and well-ordered nickel oxide and magnetite films with homogeneous thicknesses on both substrates. The vertical layer distances of all NiO films show no dependence on the thickness in the investigated range. However, NiO and Fe$_3$O$_4$ films grown on MgO exhibit a vertical compressive strain while NiO and Fe$_3$O$_4$ films on SrTiO$_3$ show vertical tensile strain due to lattice matching at the interface. Based on continuum elasticity theory the vertical lattice constant $c$ of homogeneous, tetragonally (in-plane) distorted films can be calculated by using the formula \cite{HashPoisson} \begin{eqnarray} \frac{\Delta c}{c} = \frac{2\nu}{\nu-1}\frac{\Delta a}{a}\,\,. \label{eq:1} \end{eqnarray} For the calculation of the vertical layer distance of a completely strained film, the lateral strain $\Delta a/a$ resulting from pseudomorphic growth was used. Assuming a Poisson number of $\nu$\,=\,0.21 for NiO \cite{NiOPoisson}, the vertical layer distance of pseudomorphic NiO on MgO was calculated to be 2.079\,\AA; hence, the NiO films grown on MgO are fully strained.
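For reference, Eq.~(\ref{eq:1}) can be evaluated directly. Assuming a bulk NiO lattice constant of 4.177\,\AA\ (a literature value, not stated in the text), the sketch below reproduces the fully strained layer distance of NiO on MgO (2.079\,\AA) as well as the corresponding fully strained value on SrTiO$_3$:

```python
# Eq. (1): Delta c / c = 2*nu/(nu - 1) * Delta a / a, for a pseudomorphic film.
# a_NiO = 4.177 A is an assumed bulk literature value; substrate constants from the text.
nu_NiO = 0.21
a_NiO, a_MgO, a_STO = 4.177, 4.2117, 3.905

def strained_layer_distance(a_film, a_sub, nu, layers_per_cell=2):
    """Vertical layer distance of a fully strained (laterally lattice-matched) film."""
    da_over_a = (a_sub - a_film) / a_film          # lateral strain from lattice matching
    dc_over_c = 2.0 * nu / (nu - 1.0) * da_over_a  # elastic tetragonal response, Eq. (1)
    return a_film * (1.0 + dc_over_c) / layers_per_cell

d_NiO_MgO = strained_layer_distance(a_NiO, a_MgO, nu_NiO)  # ~2.079 A, fully strained on MgO
d_NiO_STO = strained_layer_distance(a_NiO, a_STO, nu_NiO)  # ~2.161 A, fully strained on SrTiO3
```

The negative prefactor $2\nu/(\nu-1)$ makes the sign flip explicit: lateral expansion (on MgO) compresses the film vertically, lateral compression (on SrTiO$_3$) expands it.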
Above a critical thickness $d_c$ this strain should reduce rapidly due to the stable formation of dislocations. The critical thickness $d_c$ at which the generation of misfit dislocations will begin can be calculated by the formula \cite{critthick} \begin{eqnarray} \frac{d_c}{b} = \frac{\left(1-\nu\,\cos^2\alpha\right)\,\left(\ln\left(\frac{d_c}{b}\right)+1\right)}{2\,\pi\,f\,(1+\nu)\,\cos(\lambda)}\,\,. \label{eq:2} \end{eqnarray} Here, $b=\frac{a_{\textrm{film}}}{\sqrt{2}}$ is the magnitude of the Burgers vector, $f$ the lattice mismatch ($f$\,=\,0.8\,\% for NiO on MgO), $\nu$ the Poisson ratio, $\alpha$\,=\,90$^\circ$ the angle between the Burgers vector and the dislocation line, and $\lambda$\,=\,45$^\circ$ the angle between the Burgers vector and the direction both normal to the dislocation line and within the plane of the interface. For NiO films on MgO(001) the critical thickness is determined to be 39\,nm. Since the studied films are below the critical thickness, the absence of strain relaxation is in good agreement with this model. Similar results were also observed by Schemme \textit{et al.} \cite{SchemmeBilayer} for NiO films of different thicknesses up to 34\,nm grown on MgO(001). The experimental data of James \textit{et al.} \cite{NiOPoisson} show a strain relaxation above $\sim$40\,nm, which is consistent with our observations and confirms Eq.~(\ref{eq:2}). Despite the large misfit of -6.9\,\% between NiO and SrTiO$_3$, the XRD curves of all studied films also feature distinct Laue oscillations pointing to a good crystalline ordering. Assuming complete lattice matching at the interface, we calculate a vertical lattice distance of 2.161\,\AA\ for fully strained NiO films on SrTiO$_3$ (Eq.~(\ref{eq:1})), while we observe a film thickness independent value of 2.095\,\AA. Thus, the remaining lateral strain calculated by Eq.~(\ref{eq:1}) only amounts to -0.6\,\%. For the critical thickness Eq.~(\ref{eq:2}) yields a value of 3.5\,\AA.
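Equation~(\ref{eq:2}) is transcendental in $d_c$ but converges quickly under fixed-point iteration. A sketch for NiO on MgO(001), again with an assumed bulk NiO lattice constant of 4.177\,\AA\ (so that the misfit evaluates to the quoted $\sim$0.8\,\%), reproduces the quoted critical thickness of about 39\,nm:

```python
import math

# Eq. (2): d_c/b = (1 - nu*cos^2(alpha)) * (ln(d_c/b) + 1) / (2*pi*f*(1+nu)*cos(lambda)),
# solved by fixed-point iteration. a_NiO = 4.177 A is an assumed bulk literature value.
nu = 0.21
a_NiO, a_MgO = 4.177, 4.2117
b = a_NiO / math.sqrt(2.0)                 # Burgers vector magnitude, Angstrom
f = (a_MgO - a_NiO) / a_NiO                # lattice misfit of NiO on MgO (~0.8 %)
alpha = math.radians(90.0)                 # angle Burgers vector / dislocation line
lam = math.radians(45.0)                   # angle Burgers vector / in-plane normal direction

k = 2.0 * math.pi * f * (1.0 + nu) * math.cos(lam)
x = 10.0                                   # x = d_c / b, arbitrary positive start value
for _ in range(200):
    # iterate x <- (1 - nu*cos^2(alpha)) * (ln(x) + 1) / k until self-consistent
    x = (1.0 - nu * math.cos(alpha)**2) * (math.log(x) + 1.0) / k

d_c_nm = x * b / 10.0                      # critical thickness in nm (1 nm = 10 A)
```

The iteration is a contraction near the solution (the slope $1/(kx)$ is well below one there), so a couple of hundred iterations is far more than needed.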
All prepared NiO films are well above this critical thickness, so the observed strong strain relaxation seems reasonable, although the films are not completely relaxed. We assume that the residual strain cannot be removed from the film due to kinetic limitations. In the case of Fe$_3$O$_4$ on NiO/MgO, we obtain a vertical layer distance for a fully strained film of 2.092\,\AA~and a critical thickness of 105\,nm ($\nu$\,=\,0.356 \cite{PoissonMagn}, $f$\,=\,0.3\,\%) applying Eq.~(\ref{eq:1}) and Eq.~(\ref{eq:2}), respectively. Here, the misfit $f$ coincides with the misfit of magnetite on MgO since the growth of NiO on MgO is pseudomorphic, adopting the lateral lattice constant of the substrate. All our investigated magnetite films on NiO/MgO are strongly strained, having a lower vertical layer distance than calculated by Eq.~(\ref{eq:1}) for pseudomorphic films. Furthermore, the films relax with increasing magnetite thickness, although the predicted critical thickness is far from being reached. This behavior also coincides with the results reported by Schemme \textit{et al.} \cite{SchemmeBilayer}, but cannot be explained so far. Despite the low remaining strain of only -0.6\,\% between the Fe$_3$O$_4$ and NiO/SrTiO$_3$, these magnetite films are less structurally ordered than the magnetite films grown on NiO/MgO. While the crystalline quality of the NiO films on SrTiO$_3$ is constantly high independent of the film thickness, the strength of the Laue oscillations of the Fe$_3$O$_4$ films grown on top of NiO/SrTiO$_3$ decreases with increasing magnetite thickness. This result is supported by the high surface roughness of the magnetite films obtained from the XRR measurements as well as by the broadened diffraction spots seen in the LEED pattern. One reason for this behavior could be the fast relaxation process and consequently the formation of grain boundaries and structural defects, e.g., APBs, during the initial stage of film growth.
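As a consistency check, Eq.~(\ref{eq:1}) with the parameters just quoted ($\nu$\,=\,0.356, $f$\,=\,0.3\,\%) and an assumed bulk magnetite lattice constant of 8.396\,\AA\ (vertical layer distance $a/4\approx2.099$\,\AA) indeed gives the fully strained value of 2.092\,\AA:

```python
# Eq. (1) applied to Fe3O4 on NiO/MgO: the film is laterally expanded by the
# misfit f = +0.3 %, so the vertical layer distance contracts below its bulk value.
# a_Fe3O4 = 8.396 A is an assumed bulk literature value.
nu_mag = 0.356
d_bulk = 8.396 / 4.0                       # bulk vertical layer distance of magnetite, A
f = 0.003                                  # lateral misfit of Fe3O4 on pseudomorphic NiO/MgO

d_strained = d_bulk * (1.0 + 2.0 * nu_mag / (nu_mag - 1.0) * f)   # ~2.092 A
```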
The vertical layer distance of a fully strained film and the critical thickness for the formation of misfit dislocations, assuming a remaining lattice mismatch of -1\,\% at the interface to the partially relaxed NiO, are calculated to be 2.123\,\AA\ and 27\,nm using Eq.~(\ref{eq:1}) and Eq.~(\ref{eq:2}), respectively. The measured value of 2.117\,\AA~for the 5\,nm thick magnetite film is already lower than the expected value for pseudomorphic growth. With increasing magnetite thickness the measured vertical layer distance strongly relaxes to 2.104\,\AA\ for the 20.7\,nm thick film although the critical thickness is not reached. This coincides with the observations made for the Fe$_3$O$_4$ films grown on NiO/MgO. Therefore, it seems that elasticity theory overestimates the formation energy of dislocations in the magnetite film. VSM measurements of the two thickest magnetite films on NiO/MgO and NiO/SrTiO$_3$ reveal ferro(i)magnetic behavior for both samples. However, the Fe$_3$O$_4$ film grown on NiO/SrTiO$_3$ shows an enhanced coercive field compared to the film on NiO/MgO, possibly caused by a higher density of grain boundaries and, thus, the formation of more pinning centers. This behavior coincides with the weaker structural ordering and the higher surface roughness of the magnetite films on NiO/SrTiO$_3$ seen in the LEED, XRD and XRR measurements. An increased coercive field for magnetite films grown on SrTiO$_3$ caused by a higher surface roughness or strain has also been reported in Refs. \cite{MagAnisotrSTO,MagAnisotr2}. The obtained saturation magnetization values of Fe$_3$O$_4$ grown on NiO/MgO and NiO/SrTiO$_3$ coincide within the error tolerances with the values determined by XMCD \cite{Kupper}. Additionally, the value of the Fe$_3$O$_4$ film on NiO/MgO is also rather close to the ideal theoretical value as well as to the experimental bulk moment of magnetite of 4.07\,$\mu_B$/f.u.
\cite{MagMom,MagMom2,magneticmomentmagnetite}, whereas Fe$_3$O$_4$ on NiO/SrTiO$_3$ exhibits a lower value. A reduced magnetic moment has also been reported for Fe$_3$O$_4$/SrTiO$_3$ systems, possibly caused by a large density of APBs induced by the high lattice mismatch \cite{MagSTO,Mag111STO}. This result is supported by the weaker structural ordering as well as the higher coercive fields and, thus, a higher density of grain boundaries observed for Fe$_3$O$_4$ on NiO/SrTiO$_3$. Further, both investigated samples show a fourfold magnetic in-plane anisotropy with magnetic easy axes aligned along the $\left\langle100\right\rangle$ directions. However, for thin magnetite films on MgO(001) the magnetic easy axes are reported to point into the $\left\langle110\right\rangle$ directions \cite{MagAnisotrMgO,Schemme3}, while for Fe$_3$O$_4$ films on SrTiO$_3$(001) the magnetic easy axes point into the $\left\langle100\right\rangle$ directions as well \cite{MagAnisotrSTO,MagAxisMagSTO}. In this case, a tetragonal distortion of the films can influence the spin-orbit coupling, which may lead to modified MCA constants and, thus, altered directions of the magnetic easy and hard axes. Moreover, magnetite films grown on an iron buffer layer exhibit a magnetic in-plane anisotropy with magnetic easy axes parallel to $\left\langle100\right\rangle$ \cite{SchemmeFeBuffer}. Thus, a modified interface of the Fe$_3$O$_4$ films can lead to a rotation of the magnetic in-plane anisotropy and influence the magnetic properties of magnetite. \section{Summary} We present a comparative study on the structural and magnetic properties of Fe$_3$O$_4$/NiO bilayers grown on MgO(001) and Nb-doped SrTiO$_3$(001). Stoichiometric magnetite and NiO films with homogeneous thicknesses were found on both substrates in the investigated thickness range (5\,-\,20\,nm). Detailed analysis of the XRD measurements reveals a high crystallinity of the NiO films independent of the underlying substrate or film thickness.
However, magnetite films grown on NiO/SrTiO$_3$ exhibit a weaker structural ordering and a higher surface roughness compared to the films grown on NiO/MgO, induced by the large lattice mismatch and the resulting relaxation process. Further, the bilayers exhibit a vertical compressive strain on MgO but a tensile strain in the vertical direction on SrTiO$_3$ as a result of the lateral compression. The weaker crystalline structure of Fe$_3$O$_4$ on NiO/SrTiO$_3$ affects the magnetic properties, leading to an enhanced coercive field and a reduced magnetic moment compared to magnetite on NiO/MgO. Nevertheless, these Fe$_3$O$_4$/NiO bilayers on MgO and SrTiO$_3$ substrates are expected to show large thermoelectric effects based on the thermal generation of spin currents (spin Seebeck effect) \cite{ramos13,ramos16,SSE} supported by the antiferromagnetic NiO layer \cite{SpinCurrentAFM,SSEAFM}. Additionally, both systems show a fourfold magnetic in-plane anisotropy with magnetic easy axes pointing in the $\left\langle100\right\rangle$ directions, which is rotated by 45$^\circ$ with respect to the well-known magnetic easy axis directions of thin magnetite films. One potential reason may be a modified spin-orbit coupling as a result of the tetragonal distortion of the films, leading to an altered magnetocrystalline anisotropy. A detailed understanding of these bilayers is of utmost importance since they are excellent candidates for potential spintronic and spincaloritronic applications. Therefore, this behavior deserves further future studies, including high resolution TEM investigations, to shed more light on this interesting change of the magnetic anisotropy of Fe$_3$O$_4$ thin films grown on NiO/MgO(001) and NiO/SrTiO$_3$(001). \section*{Acknowledgments} The authors gratefully acknowledge the financial support by the Deutsche Forschungsgemeinschaft (DFG) via grants No. KU2321/2-1 and KU3271/1-1. Portions of this research were carried out at beamline I811, MaXLab synchrotron radiation source, Lund University, Sweden.
Funding for the beamline I811 project was kindly provided by The Swedish Research Council and The Knut och Alice Wallenbergs Stiftelse. Additional experiments were performed at the X04SA beamline at the Swiss Light Source synchrotron radiation source at the Paul Scherrer Institute, Villigen, Switzerland. We would like to thank the I811 and X04SA beamline staff for experimental support.
\section{Introduction} In \cite{LP}, Lazić and Peternell propose the following \begin{conjecture}(Generalized Abundance) Let $(X,B)$ be a klt pair with $K_X+B$ pseudoeffective. If $L$ is a nef divisor on $X$ such that $K_X+B+L $ is also nef, then $K_X+B+L \equiv M$ for some semiample $\mathbb{Q}$-divisor $M$.\end{conjecture} They show that this conjecture holds in case $\dim X =2$ and for $\dim X=3$ if $\kappa(K_X+B) >0$ (see \cite[Corollary C,D]{LP}). The main purpose of this article is to prove the following \begin{theorem} \label{1} Let $(X,B)$ be an $n$-dimensional klt pair with $K_X+B \geq 0$ and let $L \in \Pic X$ be nef such that $K_X+B+L$ is nef with nef dimension $n(K_X+B+L) = d$. \newline Assume termination of klt flips in dimension $d$, abundance in dimension $\leq d$, the semiampleness conjecture in dimension $\leq d$ and the generalized non-vanishing conjecture in dimension $ \leq d-1$. \newline Then $K_X+B+L \equiv M$ for some semiample divisor $M$. \end{theorem} In the case $L=0$, such a theorem was proved in \cite{Am}. The main ingredients of the proof are the canonical bundle formula and the Nakayama-Zariski decomposition. We have the following corollary \begin{corollary} \label{10} Generalized abundance holds (in any dimension) in the following two cases: \begin{enumerate} \item $n(K_X+B+L)=2$ and $K_X+B \geq 0$, or \item $n(K_X+B+L)=3$ and $\kappa(K_X+B) >0$. \end{enumerate} \end{corollary} \section{Preliminaries} \begin{definition} [Singularities of pairs] Let $(X,B)$ be a sub-pair consisting of a normal variety $X$ and a $\mathbb{Q}$-divisor $B$ on $X$ such that $K_X+B$ is $\mathbb{Q}$-Cartier. It is called \emph{sub-klt} if there exists a log resolution $Y \xrightarrow {\mu} X$ of $(X,B)$ such that, letting $B_Y$ be defined by $K_Y+B_Y= \mu^*(K_X+B)$, all coefficients of $B_Y$ are $<1$, and \emph{sub-lc} if all coefficients of $B_Y$ are $\leq 1$. If $B \geq 0$, we drop the prefix sub.
\end{definition} \begin{definition} [Nef reduction and nef dimension] Let $X$ be a normal projective variety and let $L \in \Pic X$ be nef. Then there exists a dominant rational map $ \phi: X \dashrightarrow Y$ with connected fibers which is regular over an open subset of $Y$, where $Y$ is also normal projective, such that \begin{enumerate} \item If $F \subset X$ is a general compact fiber of $\phi $ with $\dim F= \dim X - \dim Y$, then $L|_{F} \equiv 0 $. \item If $x \in X$ is a very general point and $C \subset X$ is a curve passing through $x$ such that $\dim (\phi(C)) >0 $, then $(L \cdot C) > 0$. \end{enumerate} $\phi$ is called the \textbf{nef reduction map} of $L$ and $\dim Y$ the \textbf{nef dimension} $n(L)$ of $L$. \end{definition} \section{Inductive approach to generalized abundance} \begin{proof}[Proof of Theorem \ref{1}] We will follow some ideas of \cite{Am}. Let $\Phi$ be the nef reduction map of $K_X+B+L$. Let $\hat{X}$ be the normalization of the closure of the graph of $\Phi $. We have an induced commutative diagram: \begin{center} \begin{tikzcd} \hat{X} \arrow[d,"\mu"] \arrow[dr, "f"] \\ X \arrow[r, dotted, "\Phi"] & Z \end{tikzcd} \end{center} Note that since $\Phi$ is regular over an open subset of $Z$, $Ex(\mu)$ is $f$-vertical. We will make base changes that preserve this property. Let $F \subset \hat{X}$ be a general fiber of $f$. Then $(K_{\hat{X}}+B_{\hat{X}}+L_{\hat{X}})|_{F} \equiv 0$. Here $L_{\hat{X}}:= \mu^*L$ and $B_{\hat{X}}$ is defined by $K_{\hat{X}}+B_{\hat{X}} = \mu^*(K_X+B)$. Now $K_F+B_F \geq 0$ implies $K_F+B_F \sim_{\mathbb{Q}} 0$. Thus $L_{\hat{X}}|_F \equiv 0$.\\ We now make some base changes as in the proof of \cite[Theorem 14]{Cha}.
By \cite[Lemma 3.1]{LP}, there exists $\sigma: Z^{'} \rightarrow Z$ birational such that, letting $X^{'}$ denote the normalization of the main component of $\hat{X} \times _Z Z^{'} \rightarrow Z^{'}$ and $f^{'}: X^{'} \rightarrow Z^{'}$ the induced morphism, there exist $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisors $L_{Z^{'}}$ and $N_{Z^{'}}$ on $Z^{'}$ such that \begin{center} $L_{X^{'}} \equiv f^{'*}(L_{Z^{'}})$ and $K_{X^{'}}+B_{X^{'}}+L_{X^{'}} \equiv f^{'*}(N_{Z^{'}})$. \end{center} Thus $K_{X^{'}}+B_{X^{'}}$ is numerically trivial on all fibers and linearly $\mathbb{Q}$-trivial on a general fiber of $f^{'}$. Let $\tau: Z^{''} \rightarrow Z^{'}$ be a birational flattening of $f^{'}$ with $Z^{''}$ smooth, i.e., letting $X^{''}$ denote the main component of $X^{'} \times _{Z^{'}}Z^{''} \rightarrow Z^{''}$, the induced morphism $f^{''}: X^{''} \rightarrow Z^{''}$ is flat. Then upper semicontinuity implies that $\nu^{*}(K_{X^{'}}+B_{X^{'}})$ is $\mathbb{Q}$-trivial on all fibers: there exists $d \in \mathbb{N}$ such that \begin{center} $\mathcal{O}_{X^{''}}(d\,\nu^{*}(K_{X^{'}}+B_{X^{'}}))|_{F^{''}} \cong \mathcal{O}_{F^{''}}$ \end{center} for all fibers $F^{''}$ of $f^{''}$, where $\nu: X^{''} \rightarrow X^{'}$ is the induced map. Let $\rho: \overline{Z}\rightarrow Z^{''}$ be a finite morphism (with $\overline{Z}$ smooth) such that, letting $\tilde{X}$ denote the desingularization of the main component of $\overline{Z} \times_ {Z^{''}} \widetilde{X^{''}}$, we have $d(K_{\tilde{X}}+B_{\tilde{X}})= \tilde{f}^{*}(N_{\overline{Z}})$ for some $\mathbb{Q}$-Cartier divisor $N_{\overline{Z}}$ on $\overline{Z}$ (the same descent arguments as in \cite[Chapter 3, Exercise 12.4]{Hart} apply). Here $\widetilde{X^{''}}$ stands for the desingularization of $X^{''}$. 
Now letting $\overline{X}$ denote the normalization of the main component of $\overline{Z} \times_{Z^{''}}X^{''}$, we have morphisms $\tilde{f}: \tilde{X} \xrightarrow{\pi} \overline{X} \xrightarrow{\overline{f}} \overline{Z}$ and \begin{center} $\pi^{*}\mathcal{O}_{\overline{X}}(d(K_{\overline{X}}+B_{\overline{X}})) \cong \pi^{*}\overline{f}^{*}\mathcal{O}_{\overline{Z}}(N_{\overline{Z}})$. \end{center} Since $\pi_*\mathcal{O}_{\tilde{X}} = \mathcal{O}_{\overline{X}}$, we conclude that $\mathcal{O}_{\overline{X}}(d(K_{\overline{X}}+B_{\overline{X}})) \cong \overline{f}^{*}\mathcal{O}_{\overline{Z}}(N_{\overline{Z}})$. Thus $\overline{f}: (\overline{X}, B_{\overline{X}}) \rightarrow \overline{Z}$ is a klt-trivial fibration (\cite[Definition 2.4]{Cha}) with $(B_{\overline{X}})|_{\overline{F}} \geq 0$ for a general fiber $\overline{F}$ of $\overline{f}$. Let $B_{\overline{X}}=B_{\overline{X}}^+-B_{\overline{X}}^-$ be the decomposition into positive and negative parts. Then there exists a $\mathbb{Q}$-divisor $Q$ on $\overline{Z}$ such that $A :=B_{\overline{X}}^-+\overline{f}^*Q$ supports no fibers over codimension $1$ points of $\overline{Z}$. Note that $Supp(A) \subset Supp(B_{\overline{X}}^-)$ and thus $A$ is $\mu$-exceptional. \\ Consider the induced klt-trivial fibration $\overline{f} : (\overline{X}, B_{\overline{X}}^+-A) \rightarrow \overline{Z}$. It has the same moduli b-divisor $M_{\overline{Z}}$ as $\overline{f}:(\overline{X}, B_{\overline{X}})\rightarrow \overline{Z}$. Let $\Delta_{\overline{Z}}$ be its discriminant b-divisor. Then, since $A$ does not support fibers over codimension $1$ points of $\overline{Z}$, we have $\Delta_{\overline{Z}} \geq 0$ (this follows from the definition of the discriminant b-divisor) and $(\overline{Z}, \Delta_{\overline{Z}})$ is klt. Moreover, $M_{\overline{Z}}$ is b-nef and b-abundant by \cite[Theorem 3.3]{Am2}. 
It follows that there exists $\Delta_{\overline{Z}}^{'} \geq 0$ such that $(\overline{Z}, \Delta_{\overline{Z}}^{'})$ is a klt pair and \begin{center} $K_{\overline{X}}+B_{\overline{X}}^+-A \sim_{\mathbb{Q}}\overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}+M_{\overline{Z}}) \sim_{\mathbb{Q}}\overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'})$. \end{center} Letting $\mu: \overline{X} \rightarrow X$ denote the induced morphism, we have \begin{equation} \mu^*(K_X+B)+B_{\overline{X}}^-+A^- \sim_{\mathbb{Q}} \overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'})+A^+ \label{2} \end{equation} where $B_{\overline{X}}^-+A^-$ is effective and $\mu$-exceptional and $A^+$ is effective and does not support fibers over codimension $1$ points of $\overline{Z}$. Note that $\overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'})+A^+ \geq 0$ implies that $K_{\overline{Z}}+\Delta_{\overline{Z}}^{'} \geq 0$. For $D$ a pseudoeffective divisor on a smooth projective variety, let $P_{\sigma}(D)$ denote the positive part of its Nakayama-Zariski decomposition (\cite[Chapter 3]{Nak}). Replacing $\overline{X}$ by a smooth model (which we still call $\overline{X}$), we have $P_{\sigma}(\mu^*(K_X+B)+B_{\overline{X}}^-+A^-)=P_{\sigma}(\mu^*(K_X+B))$ by \cite[Lemma 2.16]{GL}. Now we also have \begin{center} $\mu^*(K_X+B+L)+B_{\overline{X}}^-+A^- \equiv \overline {f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}})+A^+$. \end{center} Thus, again by loc. cit., this gives \begin{equation} P_{\sigma}(\mu^*(K_X+B+L))=P_{\sigma}(\overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}})) \equiv \mu^*(K_X+B+L) \label{5} \end{equation} since $K_X+B+L$ is nef. Now by \cite[Theorem 5.3]{LP}, after replacing $\overline{Z}$ by a smooth model, $P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}})$ is numerically equivalent to a semiample divisor. 
But then \begin{center} $P_{\sigma}(\overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}}))=\overline{f}^*P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}})$ (see \cite[Lemma 2.5]{LP}). \end{center} Thus \begin{equation} \mu^*(K_X+B+L) \equiv \overline{f}^*P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}}) \label{6} \end{equation} is numerically equivalent to a semiample divisor, and this finishes the proof. \end{proof} \begin{remark} The above result can also be proved by using the MMP instead of the canonical bundle formula. See for example the proof of \cite[Theorem 3.5]{LP2}. \end{remark} \begin{proof}[Proof of Corollary \ref{10}] If $n(K_X+B+L)=2$, the statement follows directly from Theorem \ref{1}, since termination of klt flips, abundance, the semiampleness conjecture and the generalized non-vanishing conjecture are all known in dimension $\leq 2$. If $n(K_X+B+L)=3$, then in the above notation, $\dim \overline{Z} =3$. By (\ref{2}), $K_{\overline{Z}}+\Delta_{\overline{Z}}^{'} \geq 0$ and \begin{center} $\kappa(K_X+B)= \kappa(P_{\sigma}(\overline{f}^*(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'})))$ (replacing $\overline{X}$ and $\overline{Z}$ with higher models) \end{center} \begin{center} $=\kappa(\overline{f}^*P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}))$ (since $P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'})$ is semiample) \end{center} \begin{center} $=\kappa(P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'})) = \kappa (K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}) >0$ (see \cite[Lemma 2.9]{GL}). \end{center} Then by \cite[Remark 5.4]{LP}, $P_{\sigma}(K_{\overline{Z}}+\Delta_{\overline{Z}}^{'}+L_{\overline{Z}})$ is numerically equivalent to a semiample divisor (possibly after replacing $\overline{Z}$ by a smooth model). Then so is $K_X+B+L$ by (\ref{6}). \end{proof} \section*{Acknowledgements} I thank my advisor Patrick Brosnan for several helpful conversations on the contents of this note and Vladimir Lazić for his comments on a draft version. An email conversation with Florin Ambro provided the impetus for this work.
\section{Introduction} \label{sec:intro} Among the zoo of chemically peculiar (CP) stars, the mercury-manganese (HgMn/CP3) stars form a rather homogeneous group. They are traditionally identified by the presence of strong \ion{Hg}{ii} and \ion{Mn}{ii} lines in optical spectra and occupy the rather restricted spectral-type range between B6 and A0 \citep{preston74,smith96,chojnowski20,paunzen20}. In addition to strong atmospheric overabundances of Hg and Mn (up to 6 and 3 dex over Solar, respectively; e.g. \citealt{white76}, \citealt{smith96}, \citealt{ghazaryan16}), CP3 stars exhibit numerous other peculiarities, in particular a general overabundance of heavy elements. Generally, the strength of the overabundances increases with atomic number \citep{castelli04,ghazaryan16}. Detailed information on the chemical composition of CP3 stars has for example been provided by \citet{castelli04} and \citet{ghazaryan16}. CP3 stars are slow rotators \citep{mathys04} and have a high rate of multiplicity. Multiplicity frequencies have been estimated at more than 50\,\% \citep{smith96}, with values up to 67\,\% \citep{hubrig95} and 91\,\% \citep{schoeller10}. CP3 stars do not show the strong magnetic fields that are observed in the Ap/CP2 and the He-peculiar stars (which are lumped together under the label 'magnetic CP stars') and are generally listed with the non-magnetic CP stars. However, several recent studies announced the presence of weak or tangled fields \citep{hubrig10,hubrig12} and this has remained a controversial issue \citep{kochukhov13,hubrig20}. CP3 stars show an inhomogeneous surface element distribution ('chemical spots') with obvious signs of secular evolution (e.g. \citealt{hubrig95}, \citealt{adelman02}, \citealt{hubrig06}, \citealt{kochukhov07}, \citealt{briquet10}, \citealt{korhonen13}). CP3 stars are relatively rare objects. However, recent progress has led to a substantial extension of the number of known CP3 stars \citep{chojnowski20,paunzen20}. 
At the time of this writing, more than 550 Galactic CP3 stars have been registered. Furthermore, with the advent of space photometry, an increasing number of CP3 stars have been found to be photometric variables \citep{alecian09,balona11,morel14,paunzen13,strassmeier17,white17,huemmerich18}. Generally, current studies have favoured rotational over pulsational modulation as the cause of the observed variability in the investigated stars (\citealt{huemmerich18}, and the discussion therein). Given their high multiplicity rate, the presence of a CP3 star component in a number of eclipsing binary systems would be expected. These objects, however, are exceedingly rare: the present authors are only aware of four eclipsing binaries containing a CP3 star component, viz. HD 34364 = AR Aur \citep{hubrig06,folsom10}, HD 161701 = HR 6620 \citep{gonzalez14}, TYC 455-791-1 = HSS 348 \citep{strassmeier17}, and HD 10260 = V772 Cas \citep{kochukhov20}. As binaries -- in particular eclipsing ones -- allow the derivation of fundamental stellar parameters like mass and radius, and only very few CP stars have direct determinations of these parameters \citep{north04}, the discovery of binary systems with CP star components is important. Furthermore, such systems may help to understand, and put constraints on, the processes responsible for the formation of the observed chemical peculiarities and the time scales involved. Here we report on the discovery of another eclipsing binary system containing a CP3 star component, viz. BD+09 1467 = V680 Mon, which is well suited to follow-up studies dealing with the solution of the system and the determination of exact stellar parameters for both components. 
We show that V680 Mon is a 'heartbeat' star, a rare class of eccentric binary stars with short-period orbits (1\,d\,$\la$\,$P_{orb}$\,$\la$\,1\,yr) that exhibit a characteristic signature near the time of periastron in their light curves whose shape is reminiscent of an electrocardiogram diagram (hence the name). Section \ref{sec:analysis} provides information on our target star, its astrophysical parameters and location in the sky, and the observations. We present our results in Section \ref{sec:results} and discuss them in Section \ref{sec:discussion}. \begin{figure} \includegraphics[trim = 0mm 0mm 18mm 105mm, clip, width=\columnwidth]{skyview.pdf} \caption{Sky region of V680 Mon on DSS2 blue plates, accessed via the ALADIN sky atlas \citep{ALADIN1,ALADIN2}. The field of view is 4.289\arcmin\,x\,4.108\arcmin. North is at the top, east is at the left. The image is centered on V680 Mon. The bright star to the north-west that appears partially blended with V680 Mon is 2MASS J06593015+0919070.} \label{skyview} \end{figure} \begin{table*} \caption{Basic stellar parameters of V680 Mon (2MASS J06593071+0918596) and its close companion 2MASS J06593015+0919070.} \label{table_companion} \begin{adjustbox}{max width=1.0\textwidth} \begin{tabular}{llllllllllll} \hline \hline ID & Gaia DR2 & $\alpha$ & $\delta$ & $\pi$ & e\_$\pi$ & $G$\,mag & e\_$G$\,mag & $(BP-RP)_0$ & e\_$(BP-RP)_0$ & MV$_0$ & e\_MV$_0$ \\ \hline J06593071+0918596 & 3157882748862195072 & 104.8779802 & 9.3165762 & 1.594 & 0.146 & 9.9803 & 0.0011 & 0.000 & 0.004 & 1.09 & 0.16 \\ J06593015+0919070 & 3157882744564424704 & 104.8756241 & 9.3186280 & 0.370 & 0.018 & 12.5163 & 0.0002 & 1.146 & 0.002 & 0.35 & 0.09 \\ \hline \hline \end{tabular} \end{adjustbox} \end{table*} \section{Target star, observations and data analysis} \label{sec:analysis} \subsection{Target star} \label{subsec:target_star} V680 Mon = BD+09 1467 = HD 267564 (spectral type B8, \citealt{cannon93}; $V$\,=\,10.13\,mag, \citealt{HIPPARCOS}; 
$G$\,=\,9.98\,mag, \citealt{gaia2}) was identified as a variable star by \citet{parenago46}, who listed it under the preliminary designation of SVS 1025 Monocerotis and suggested it to be an eclipsing binary with a range of 9.5\,$-$\,10.1 mag ($pg$). No period could be derived from the available observations. The star was included as NSV 3233 into the New Catalogue of Suspected Variable Stars \citep{NSV} and later entered the General Catalogue of Variable Stars as V680 Mon \citep{kholopov87,GCVS}. V680 Mon has been little studied, and discrepant information is found in the literature. From an analysis of 85 photographic plates, \citet{berthold83}\footnote{\url{https://www.sternwarte-hartha.de/wp-content/uploads/2018/11/Heft-18.pdf}} proposed V680 Mon to be an RR Lyrae star and derived first (but, in hindsight, incorrect) elements. Presumably based on this information, V680 Mon was included in the RR Lyrae star catalogues of \citet{mennessier02} and \citet{maintz05}. However, on the basis of ASAS-3 and NSVS observations, \citet{otero06} identified V680 Mon as an eclipsing binary star, in accordance with the initial proposition of \citet{parenago46}. The authors derived a period of $P$\,=\,8.5381\,d and a variability range of 9.93 $-$ 10.31\,mag ($V$). The system was found to be eccentric, with the secondary minimum occurring at phase $\varphi$\,=\,0.865. In consequence, V680 Mon entered the catalogue of eclipsing binary stars with eccentric orbits by \citet{bulut07} and new observations of minima were procured by \citet{brat09} and \citet{huebscher11}. 
Despite these results, and the correct identification of the star as an eclipsing binary in the International Variable Star Index (VSX; \citealt{VSX}) of the American Association of Variable Star Observers (AAVSO), the star has been listed as an RR Lyrae variable in the SIMBAD database \citep{SIMBAD} until recently, which is probably the reason why it was included in the samples of the RR-Lyrae-star-based studies of \citet{gavrilchenko14} and \citet{gaia_parallaxes_17}. On the initiative of the present authors, V680 Mon is now correctly identified in the SIMBAD database as an eclipsing binary star. \subsection{Sky region} \label{subsec:sky_region} V680 Mon is situated in an area relatively devoid of bright stars, roughly in the midst of an imaginary triangle with Alhena ($\gamma$ Gem), Procyon ($\alpha$ CMi) and the Rosette Nebula (NGC 2244) as its vertices. However, situated at a distance of 12\arcsec\ from our target star, there is the relatively bright star 2MASS J06593015+0919070 = GAIA DR2 3157882744564424704 ($G$\,=\,12.52\,mag, \citealt{gaia2}). Both stars appear as a close double in DSS2 images (Fig. \ref{skyview}). They will also be blended in the spectroscopic and photometric data that form the backbone of this investigation; hence, a closer examination of this matter is necessary. Parameters of both stars are given in Table \ref{table_companion}. \citet{2020arXiv201205220B} list distances of, respectively, 619\,pc (581$-$672\,pc) and 2556\,pc (2438$-$2649\,pc) for V680 Mon and 2MASS J06593015+0919070, based on the Gaia Early Data Release 3 \citep[EDR3,][]{2020arXiv201201533G}. The two stars are not physically connected to each other. Considering its colour and luminosity, 2MASS J06593015+0919070 is obviously a late G- or early K-type giant. 
It is about 10 times ($\sim$2.5\,mag) fainter than V680 Mon, and we expect no significant contamination of the spectra employed here, which do not show any traces of the signature of a late G- or early K-type giant. Furthermore, we can rule out that the eclipses observed in the combined light curve of both stars originate in 2MASS J06593015+0919070. Even its complete disappearance would not result in a 25\,\% reduction in brightness, as is observed during the primary eclipse of the system. We are therefore confident that no blending issues affect our main results. \subsection{Observations} \label{subsec:observations} The spectra employed in this study were extracted from the archive of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST)\footnote{\url{http://www.lamost.org}}, which is located at Xinglong Observatory in Beijing (China), and procured at Star\'{a} Lesn\'{a} Observatory (SLO) and Skalnat\'{e} Pleso Observatory (SPO) in the High Tatras (Slovak Republic). The LAMOST low-resolution spectrum has a resolution of R\,$\sim$\,1800 and covers the wavelength range from 3700 to 9000\,\AA. More information on the LAMOST survey is provided in \citet{lamost1} and \citet{lamost2}. The spectra taken at SLO have a resolution between R = 11\,000 and R = 12\,000 and cover the spectral range from 4150 to 7600\,\AA, while the spectra procured at SPO have a resolution of R = 38\,000 and cover the interval from 4250 to 7375\,\AA. More information on the instrumentation of the SLO and SPO observatories and the reduction process can be found in \citet{2015AN....336..682P}. The photometric observations used in this study were procured by NASA's Transiting Exoplanet Survey Satellite (TESS), which provides ultra-precise photometry in a single passband (600-1000\,nm) taken at a cadence of 2\,min. More information on the TESS spacecraft and data products can be found in \citet{TESS3,TESS1,TESS2}. 
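The contamination argument can be made quantitative with a few lines of arithmetic. The following sketch uses the $G$-band magnitudes from Table \ref{table_companion} as a proxy for the TESS band (the exact TESS-band magnitudes differ slightly):

```python
# Maximum possible eclipse depth attributable to the neighbouring star
# 2MASS J06593015+0919070, assuming its light is fully blended into the
# photometric aperture. G magnitudes are used as a proxy for the TESS band.
dm = 12.52 - 9.98                    # magnitude difference of the two stars
flux_ratio = 10 ** (dm / 2.5)        # V680 Mon is ~10x brighter
f_companion = 1 / (1 + flux_ratio)   # companion's share of the blended light

print(f"flux ratio: {flux_ratio:.1f}")        # ~10
print(f"companion share: {f_companion:.1%}")  # ~9 %
```

Even the complete disappearance of the companion would therefore dim the blend by only $\sim$9\,\%, far short of the observed $\sim$25\,\% depth of the primary eclipse.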
\begin{figure*} \includegraphics[width=\textwidth]{showcaseV680Mon.pdf} \caption{Comparison of the blue-violet region of (a) a synthetic spectrum corresponding to spectral type B8 V (${T}_{\rm eff}$\,=\,12500\,K; $\log g$\,=\,4.0; [M/H]\,=\,0.0; $\xi$\,=\,2\,km\,s$^{-1}$; smoothed to match the LAMOST resolution), (b) the LAMOST DR4 spectrum of V680 Mon (kB9 hB8 HeB9 V HgMn), and (c) the LAMOST DR4 spectrum of the CP3 star HD 249170 = LAMOST J055457.96+092830.5 (B8 IV HgMn; \citealt{paunzen20}). Some prominent lines of interest are identified.} \label{showcase1} \end{figure*} \section{Results} \label{sec:results} \subsection{Spectral classification} \label{subsec:SpT} The spectral peculiarities of V680 Mon were discovered during our semi-automated search for new CP3 stars in the spectra from LAMOST DR4 \citep{paunzen20}. Because it was recognised as an eclipsing binary and further studies were deemed necessary, V680 Mon was not included into the sample of \citet{paunzen20}. More information on our search for CP stars with Richard O. Gray's MKCLASS code, a program that classifies spectra by emulating the workflow of a human classifier \citep{gray14}, can be found in \citet{huemmerich20} and \citet{paunzen20}. Only one LAMOST spectrum is available for V680 Mon = LAMOST J065930.88+091859.6, which was accessed via the DR4 VizieR online catalogue\footnote{\url{http://cdsarc.u-strasbg.fr/viz-bin/cat/V/153}} \citep{DR4}. The spectrum was obtained on 28 December 2015 (MJD 57384; observation median UTC 17:38:00; $g$-band S/N: 294), that is, at an orbital phase of $\varphi$\,=\,0.705. It was therefore obtained during maximum light and is dominated by light from the primary component of the V680 Mon system. The spectrum is illustrated in Fig. \ref{showcase1}, together with the synthetic spectrum of a B8 V star and the LAMOST DR4 spectrum of an HgMn star from the list of \citet{paunzen20}. 
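As a consistency check, the quoted orbital phase can be recovered from the ephemeris of Eq. \ref{eq1} (Section \ref{subsec:photvar}). A minimal sketch, neglecting the heliocentric correction of at most $\sim$8\,min (which shifts the phase by less than 0.001):

```python
# Orbital phase of the LAMOST epoch from the ephemeris
# Min I = HJD 2458472.088 + 8.53797 x E (Eq. 1).
P = 8.53797                      # orbital period [d]
T0 = 2458472.088                 # epoch of primary minimum [HJD]

# LAMOST observation: MJD 57384, observation median UTC 17:38:00
jd_obs = 57384 + 2400000.5 + (17 + 38 / 60) / 24

phase = ((jd_obs - T0) / P) % 1
print(f"phase = {phase:.3f}")    # ~0.70, i.e. well outside the eclipses
```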
Following the workflow outlined in \citet{paunzen20}, V680 Mon is given a final classification of B9 III$-$IV HgMn by the employed specialised version of the MKCLASS code. The \ion{Ca}{ii} K line strength and the relatively weak \ion{He}{i} lines indeed suggest a spectral type of B9; the hydrogen line profile, however, is best matched by that of a B8 V standard. In general, the hydrogen-line profile is the most reliable indicator of the effective temperature in a CP star \citep{gray09}. Furthermore, CP3 stars have been shown to exhibit a large spread of He abundances, with most (usually the hotter) CP3 stars being He deficient \citep{smith96,ghazaryan16}. We therefore prefer a temperature type of B8 V. While CP3 stars also show a large dispersion of Ca abundances (up to 3 dex; \citealt{ghazaryan16}), we suspect that there is an interstellar contribution to the strong \ion{Ca}{ii} K line in the available spectrum. The CP3 star characteristics are clearly present in the spectrum of V680 Mon. The \ion{Hg}{ii} 3984\,\AA\ line appears merely as a 'bump' in the red wing of H$\epsilon$, as is commonly the case at this low resolution (cf. e.g. the CP3 star spectra shown in \citealt{paunzen20}). The \ion{Mn}{ii} features at 4136\,\AA\ and, in particular, 4206\,\AA\ and 4152/9\,\AA\ are well developed (Fig. \ref{showcase1}). In summary, following the refined classification system of \citet{garrison94}, we arrive at a final classification of kB9 hB8 HeB9 V HgMn. We analysed the high-resolution SLO and SPO spectra obtained at five different orbital phases and found no traces of the secondary component. V680 Mon, therefore, is a single-lined spectroscopic binary (SB1) system. In addition to the classical classification criteria discussed above, we measured the equivalent widths of the \ion{Mn}{i} lines at 4462.031, 4762.367, 4765.846, 4766.418, 4783.427, 4823.524, and 6021.790\,\AA, all of which exhibit values between 15 and 20\,m\AA. 
This is well in line with the results obtained for the HgMn star HD 175640 ([Mn]\,=\,+2.45\,dex and [Hg]\,=\,+4.72\,dex as compared to the Sun; \citealt{castelli04}), which has the same effective temperature as our target star. We also investigated the weak \ion{Hg}{i} 5460.731\,\AA\ line, which yields an equivalent width of 3\,m\AA. While this is at the detection limit of our set of spectra, its presence clearly indicates a significant overabundance of Hg; for solar metallicity, the equivalent width of this line is well below 1\,m\AA. In summary, our analysis of the SLO and SPO spectra corroborates the results from the blue-violet spectral region and confirms that the primary component of V680 Mon is a CP3 star. \begin{figure*} \includegraphics[trim = 20mm 91mm 20mm 105mm, clip, width=18cm]{V680Mon_LCTESS1.pdf} \caption{TESS light curve of V680 Mon, accessed via 'eleanor' and based on PSF flux.} \label{LCTESS1} \end{figure*} \begin{figure*} \includegraphics[trim = 20mm 107mm 26mm 120mm, clip, width=17cm]{V680Mon_LCTESS2.pdf} \caption{TESS light curve of V680 Mon, accessed via 'eleanor' and based on PSF flux, presenting a detailed view of the 'heartbeats'.} \label{LCTESS2} \end{figure*} \subsection{Photometric variability} \label{subsec:photvar} On the basis of ASAS-3 and NSVS observations, \citet{otero06} identified V680 Mon as an eclipsing binary star (cf. Section \ref{subsec:target_star}). They derived a period of $P$\,=\,8.5381\,d (epoch of primary minimum: HJD 2452990.717), a total range of light variability of 9.93\,$-$\,10.31\,mag ($V$) and found the system to be eccentric, with the secondary minimum occurring at phase $\varphi$\,=\,0.865. With these data, the star was included in the VSX. V680 Mon was observed by TESS during orbits 19 and 20 (TESS Observation Sector 6). 
The corresponding data were accessed via 'eleanor', which is an open-source Python framework for downloading, analysing, and visualising data from the TESS Full Frame Images \citep{feinstein19}.\footnote{\url{https://adina.feinste.in/eleanor/}} In the direct vicinity of our target star, there is the relatively bright star 2MASS J06593015+0919070 ($G$\,=\,12.52\,mag), which is separated from V680 Mon by a distance of approximately 12\arcsec\ (cf. Section \ref{subsec:sky_region}). Since the TESS pixel size is relatively large (21\arcsec), the TESS light curve is a blend of the light from both stars. However, as discussed in Section \ref{subsec:sky_region}, the light contribution from 2MASS J06593015+0919070 is negligible and cannot account for the observed eclipses. Apart from its close neighbour, V680 Mon is rather isolated from other bright stars. We tried both the PCA and PSF reductions and found that the PSF-modelled light curve is superior. For the final analysis, PSF modelling using a field of 7$\times$7 pixels was employed. The TESS light curve of V680 Mon is illustrated in Fig. \ref{LCTESS1}. Although the covered time-span is short, the ultra-precise TESS data reveal for the first time that V680 Mon is a heartbeat system. A detailed view of the heartbeats near periastron, whose shape is due to the combined effects of tidal distortion, reflection and Doppler beaming \citep{hambleton13,fuller17,hambleton18}, is presented in Figure \ref{LCTESS2}. The presence of chemical peculiarities in a component of a binary in 'heartbeat' configuration is an interesting find that is further discussed in Section \ref{sec:discussion}. To derive an updated ephemeris, we combined the available photometric time-series data from TESS, the All Sky Automated Survey (ASAS-3; \citealt{ASAS1}) and the Kamogata/Kiso/Kyoto wide-field survey (KWS; \citealt{KWS}) and derived the elements presented in Eqs. \ref{eq1} and \ref{eq2}. 
\begin{equation} Min\,I = HJD\,2458472.088(1) + 8.53797(2)\,E \\ \label{eq1} \end{equation} \begin{equation} Min\,II = HJD\,2458470.929(2) + 8.53797(2)\,E \label{eq2} \end{equation} \begin{table} \caption{Parameters of the V680 Mon system as obtained from the Least-Squares Trust Region Reflective Algorithm ('Least Squares') and subsequent error estimation using the Markov Chain Monte Carlo (MCMC) sampler. The components' radii are given in semi-major axis units (SMA).} \label{table:EB_params} \centering \begin{tabular}{l c c c} \hline\hline Parameter & \multicolumn{2}{c}{Value} & Status \\ \hline System & & & \\ \hline $q$ & \multicolumn{2}{c}{$0.57_{-0.02}^{+0.01}$} & Variable \\ $i[^{\circ}]$ & \multicolumn{2}{c}{$85.71_{-0.08}^{+0.08}$} & Variable \\ $e$ & \multicolumn{2}{c}{$0.6131_{-1\times10^{-4}}^{+1\times10^{-4}}$} & Variable \\ $\omega[^{\circ}]$ & \multicolumn{2}{c}{$356.36_{-0.08}^{+0.09}$} & Variable \\ $P[d]$ & \multicolumn{2}{c}{$8.53797$} & Fixed \\ $T_0[d]$ & \multicolumn{2}{c}{$2458472.088$} & Fixed \\ \hline \hline Component & primary & secondary & \\ \hline $\Omega$ & $12.62_{-0.04}^{+0.04}$ & $13.2_{-0.2}^{+0.2}$ & Variable \\ $F$ & $5.1_{-0.4}^{+0.5}$ & $5.1_{-0.2}^{+0.3}$ & Variable \\ $r_{eq}$ & $0.0836_{-0.0002}^{+0.0002}$ & $0.0474_{-0.004}^{+0.005}$ & Derived \\ \hline \multicolumn{4}{l}{Atmospheric parameters} \\ \hline $T^{eff}/[K]$ & $12000_{-300}^{+400}$ & $8300_{-200}^{+200}$ & Variable \\ $\beta$ & $0.86_{-0.05}^{+0.04}$ & $0.89_{-0.04}^{+0.03}$ & Variable \\ $A$ & $0.73_{-0.07}^{+0.07}$ & $0.64_{-0.04}^{+0.04}$ & Variable \\ $M/H$ & 0.0 & 0.0 & Fixed \\ \hline \multicolumn{4}{l}{Radii at periastron} \\ \hline $r_{pole}$ & $0.0827_{-0.0003}^{+0.0003}$ & $0.0472_{-0.0004}^{+0.0005}$ & Derived \\ $r_{back}$ & $0.0844_{-0.0002}^{+0.0002}$ & $0.0476_{-0.0004}^{+0.0005}$ & Derived \\ $r_{side}$ & $0.0837_{-0.0002}^{+0.0002}$ & $0.0474_{-0.0004}^{+0.0005}$ & Derived \\ $r_{forward}$ & $0.0846_{-0.0002}^{+0.0002}$ & 
$0.0476_{-0.0004}^{+0.0005}$ & Derived \\ \hline\hline \end{tabular} \end{table} \begin{figure*} \includegraphics[width=1.0\linewidth]{V680Mon_corner.pdf} \caption{Posterior distribution of the samples generated by the Markov Chain Monte Carlo (MCMC) algorithm. \label{fig:corner_plot}} \end{figure*} \subsection{Binary system modelling} \label{subsec:modelling} Due to the significantly lower quality of the ASAS-3 and KWS photometry, binary system parameters were inferred using only detrended TESS photometry. The light curve was analysed with the Python package ELISa\footnote{\url{https://github.com/mikecokina/elisa}}, which contains tools for modelling light curves of close eclipsing binaries utilising Roche geometry and methods for solving an inverse problem. As a first step, the Least-Squares Trust Region Reflective Algorithm ('Least Squares' hereafter) was used to search for a local minimum around a manually-selected starting point based on the general shape of the light curve. Initial runs were performed with seven free parameters: photometric mass ratio $q$; inclination $i$; eccentricity $e$; argument of periastron $\omega$; surface potentials $\Omega_1$, $\Omega_2$; and effective temperature of the secondary component $T^{eff}_2$. The effective temperature of the primary component $T^{eff}_1$ was fixed to 12\,500\,K, as inferred from the spectral classification (cf. Section \ref{subsec:SpT}). Albedos $A_1$, $A_2$ and gravity darkening factors $\beta_1$, $\beta_2$ were set to 1.0 since both components were expected to have radiative envelopes due to their effective temperatures being well above 7000\,K \citep{zeipel24}. We adopted a square-root law for limb darkening. The corresponding coefficients were interpolated from the pre-calculated tables of \citet{Claret17}, and atmospheric models from \citet{castelli04b} were used for the calculation of the integrated passband flux. 
Finally, the synchronicity factors $F_1$, $F_2$ were set to assume synchronous rotation of the components at periastron \citep{Hut81}. After the initial solution was found, the effective temperature of the primary component $T^{eff}_1$, synchronicity parameters $F$, gravity darkening factors $\beta$, and albedos $A$ were set variable to allow the model to relax into the local minimum. $T^{eff}_1$ was allowed to vary by $\pm$1000\,K. Using the approach described above, the Least Squares algorithm arrived at a solution with a coefficient of determination of $R^2 = 0.9990$. \begin{figure*} \includegraphics[trim = 0mm 0mm 8mm 140mm, clip, width=\textwidth]{V680Mon_lc_fit.pdf} \caption{The upper panel illustrates the fit between observed and synthetic flux in the TESS passband based on the best-fit model presented in Table \ref{table:EB_params}. The fitting residuals are shown in the lower panel.} \label{fig:lc_fit} \end{figure*} The vicinity of the obtained solution was sampled using the Markov Chain Monte Carlo (MCMC) algorithm with 200 walkers, 200 steps, and uniform prior sampling. After discarding the thermalisation phase of the chain, the confidence intervals of the parameters were inferred from the posterior distribution displayed in Figure \ref{fig:corner_plot} in the form of a corner plot. The resulting system parameters are listed in Table \ref{table:EB_params} and the corresponding fit is illustrated in Figure \ref{fig:lc_fit}. The achieved solution points to an eccentric orbit with eccentricity $e=0.61$ and a relatively short orbital period. The tidal forces during the periastron passage result in a deformation of both components with an amplitude of $\sim$1\,\% (ratio between forward and equivalent radius). The synchronicity parameters of both components remained, within the errors, around the predicted value of $5.27$, which suggests that the rotation of the components is synchronised with the orbital motion at periastron. 
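The sampling strategy described above (explore the vicinity of the Least Squares solution, discard the thermalisation phase, and read confidence intervals off the posterior) can be illustrated with a toy Metropolis sampler. The Gaussian log-probability below is merely a stand-in for ELISa's actual light-curve likelihood, and all numbers are illustrative only:

```python
import math
import random

# Toy Metropolis sampler: sample around a best-fit value, drop the
# thermalisation (burn-in) steps, and infer a confidence interval from
# the remaining posterior samples.
def log_prob(x, mu=0.61, sigma=0.01):    # stand-in posterior, e.g. eccentricity
    return -0.5 * ((x - mu) / sigma) ** 2

def metropolis(start, n_steps, step=0.005, burn_in=200):
    x, lp, chain = start, log_prob(start), []
    for _ in range(n_steps):
        cand = x + random.gauss(0.0, step)
        lp_cand = log_prob(cand)
        # accept with probability min(1, exp(lp_cand - lp))
        if lp_cand - lp > math.log(random.random()):
            x, lp = cand, lp_cand
        chain.append(x)
    return chain[burn_in:]               # discard the thermalisation phase

random.seed(42)
samples = sorted(metropolis(start=0.6, n_steps=5000))
median = samples[len(samples) // 2]
lo = samples[int(0.16 * len(samples))]   # crude 68% confidence interval
hi = samples[int(0.84 * len(samples))]
print(f"e = {median:.3f} (-{median - lo:.3f}/+{hi - median:.3f})")
```

ELISa itself uses an ensemble sampler with many walkers rather than a single chain, but the logic of burn-in removal and percentile-based confidence intervals is the same.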
Figure \ref{fig:corner_plot} shows a double-Gaussian distribution for both effective temperatures, which indicates two solutions with similar qualities of fit. However, since their spacing is similar to the standard deviations of each peak, we decided to regard them as a single solution. No additional double-Gaussian distributions were detected for any other variable parameters. We note that the fitting process based on information obtained in a single passband has a limited capacity to recover information about the effective temperature of the components. \section{Discussion} \label{sec:discussion} The evolutionary stages of the four known eclipsing binary systems containing a CP3 star component are widely different \citep{gonzalez14,kochukhov20}, ranging from close to the zero-age main sequence (HD 34364 and TYC 455-791-1) to the middle of the main sequence (HD 161701) and close to the terminal-age main sequence (HD 10260). To evaluate the evolutionary status of V680 Mon, we investigated its position in the Hertzsprung-Russell diagram. As a first step, the absolute magnitude listed in Table \ref{table_companion} was corrected for the contribution of the secondary component as shown in Fig. 9 of \citet{paunzen20}. For the given $q$ value (Table \ref{table:EB_params}), this correction amounts to 0.15\,mag only. The bolometric correction for CP stars \citep{2008A&A...491..545N} yields a value of $-$0.559\,mag and thus a luminosity of $\log L/L_\odot$\,=\,1.628(64). For the calibration of the age, we used the Stellar Isochrone Fitting Tool\footnote{\url{https://github.com/Johaney-s/StIFT}} and the isochrone grid by \citet{2012MNRAS.427..127B} for solar metallicity. Our results indicate that V680 Mon is located on the zero-age main sequence with an age between 5 and 6\,Myr. This result is not dependent on the choice of the isochrone metallicity because the use of overabundant isochrones would result in an even larger luminosity for the same effective temperature. 
In such grids, our target star would be situated significantly below the zero-age main sequence. Several CP3 stars belong to open clusters \citep[e.g.][]{hubrig12}, which puts further constraints on the age determination. We searched for a possible host cluster within 3$\sigma$ of the position, diameter, proper motion, distance and their errors of the star cluster lists from \citet{Dias2002} and \citet{Cantat2020}. Because V680 Mon is quite close (about 620\,pc from the Sun), we expect these lists to be complete. The closest aggregate is NGC\,2264, which is located about 4.6 degrees or 50\,pc away. In this young open cluster, star formation is still ongoing \citep{2021A&A...645A..94N} and many young stellar objects are present \citep{2020A&A...636A..80B}. If we accept V680 Mon as a member of NGC\,2264, the derived ages are in excellent agreement. Incidentally, another HgMn star, HD 47553, was reported as a member of NGC\,2264 \citep{1993BSAO...35...76P}; unfortunately, besides this reference, no other analysis of this star was found in the literature. V680 Mon is only the fifth known eclipsing CP3 star, and it is the first one recognised as a member of a heartbeat binary. Our results indicate that the V680 Mon system is composed of a CP3 star primary component ($T_{eff}$\,=\,$12000_{-300}^{+400}$\,K; spectral type kB9 hB8 HeB9 V HgMn) and a secondary component of spectral type A4 ($T_{eff}$\,=\,$8300_{-200}^{+200}$\,K; cf. Section \ref{subsec:modelling}). The unique combination of a very young and relatively bright chemically peculiar star in such a system opens up intriguing possibilities. In particular, theory needs to explain the development of CP3 star features in such a young object and under the conditions (tidally-induced effects) of a heartbeat binary. Our modelling attempts indicate a significant deformation of both components by about one per cent (ratio between forward and equivalent radius) during periastron passage.
In this respect, it is interesting to note that there is a high rate of occurrence of very eccentric short-period binary systems among the CP1 stars \citep{debernardi00}, which some studies have proposed as lower-temperature counterparts of the CP3 stars \citep{adelman03}. Obviously, the tidally-induced effects do not interfere with the development of CP1 star abundance patterns in these systems. CP2, CP3 and CP4 stars, on the other hand, show a rather different eccentricity versus orbital period distribution with an apparent upper envelope \citep{carrier02}. Interestingly, V680 Mon is located well above this proposed upper envelope (cf. Fig. 8 of \citealt{carrier02}). In summary, V680 Mon lends itself perfectly to detailed follow-up studies and may prove to be a keystone in the understanding of the development of CP3 star peculiarities. \section*{Acknowledgements} We thank the referee, Gautier Mathys, for his comments that helped to improve the paper. We furthermore thank Theodor Pribulla and Johana Sup{\'i}kov{\'a} for their help in preparing this manuscript and express our gratitude to Iosif I. Romanyuk for providing the original Peremennye Zvezdy paper of P. P. Parenago and Hans Michael Maitzen for his translation of the Russian original. EP acknowledges support by the Erasmus+ programme of the European Union under grant number 2020-1-CZ01-KA203-078200. This work has been supported by the VEGA grants of the Slovak Academy of Sciences No. 2/0031/18 and 2/0004/20. This paper makes use of data from the Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST), which is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. Furthermore, this paper includes data collected by the TESS mission.
Funding for the TESS mission is provided by NASA's Science Mission Directorate. This work presents results from the European Space Agency (ESA) space mission Gaia. Gaia data are being processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the Gaia MultiLateral Agreement (MLA). The Gaia mission website is https://www.cosmos.esa.int/gaia. The Gaia archive website is https://archives.esac.esa.int/gaia. This research has made use of the WEBDA database, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University, and the SIMBAD and VizieR databases, operated at CDS, Strasbourg, France. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction}\label{chp_intro} Criteria for various types of ergodicity of Markov Chains in terms of drift conditions have been studied extensively over the past decades, see \cite{hou1988,twd1981,mao2003,mao2004,wang2003}. According to these criteria, a solution to some inequality implies, for example, strong ergodicity of a Markov Chain. However, it may not be easy to verify that a process is, for instance, not strongly ergodic. In fact, the celebrated strong ergodicity criterion with drift condition reads as follows. \begin{thm}[\cite{hou1988,twd1981}] Let $Q$ be an irreducible regular $Q$-matrix and $H$ a non-empty finite subset of a countable state space $E$. Then the $Q$-process is strongly ergodic if and only if there exists a bounded solution $(y_i)_{i\in E}$ to inequality \begin{displaymath} \sum_{j\in E}q_{ij}y_j\leqslant -1,\qquad i\notin H. \end{displaymath} \end{thm} \par To prove that a $Q$-process is not strongly ergodic using this criterion, we have to show that there is no bounded solution to this inequality. Nevertheless, this is not so practical. In this paper, we intend to complement the existing ergodicity criteria. For instance, can we assert non-strong ergodicity of a $Q$-process from some inequality and some of its solutions? \par Since we are dealing with ergodic properties, we assume without loss of generality that all processes considered are recurrent. Moreover, we will deal with not only continuous-time but also discrete-time Markov Chains, using exactly the same method. \par Consider an irreducible regular $Q$-matrix $Q=(q_{ij}\st i,j\in E)$ on a countable state space $E$ with transition probability matrix $P(t)=\bigl(p_{ij}(t)\bigr)_{t\geqslant0}$. Meanwhile, denote \begin{displaymath} q_i\coloneqq -q_{ii}=\sum_{j\neq i}q_{ij}<\infty,\qquad i\in E. \end{displaymath} We have the following ergodic notions.
\begin{enumerate}[(1)] \item The $Q$-process is ergodic, if for each $i$, ${\norm[\big]{p_{i\cdot}(t)- \pi}_{\mathrm{Var}} \coloneqq \sum_{j\in E} \abs[\big]{p_{ij}(t)- \pi_j} \to 0}$ as $t \to \infty$. \item (algebraic ergodicity) The $Q$-process is $\ell$-ergodic for some integer $\ell\geqslant 1$, if for each $i,j\in E$, $\abs[\big]{p_{ij}(t)- \pi_j} = O\bigl(t^{-(\ell-1)}\bigr)$ as $t \to \infty$. \item The $Q$-process is exponentially ergodic, if for each $i,j\in E$, ${\abs[\big]{p_{ij}(t)- \pi_j}=O(\mathrm{e}^{-\beta t})}$ as $t \to \infty$ for some $\beta>0$. \item The $Q$-process is strongly ergodic, if $\lim_{t\to\infty}\sup_{i\in E} \norm[\big]{ p_{i\cdot}(t)- \pi}_{\mathrm{Var}} = 0$. \end{enumerate} For ease of terminology, we occasionally say a $Q$-process is $0$-ergodic when it is recurrent. Also, we may say a $Q$-process is $1$-ergodic if it is ergodic. \par Set \begin{displaymath} \sigma_H \coloneqq \inf\bigl\{t\geqslant \eta_1\st X_t\in H\bigr\},\qquad H\subseteq E, \end{displaymath} where $(X_t)_{t\geqslant 0}$ is the $Q$-process and $\eta_1$ is the first jump time. There are probabilistic descriptions of the above ergodic notions. \begin{enumerate}[(1)] \item The $Q$-process is ergodic if and only if (abbr.\@ iff) $\max_{i\in H}\E_i\mkern-1.5mu{\sigma_H}$ is finite for some (equivalently, for any) non-empty finite subset $H$ of $E$. \item (algebraic ergodicity) The $Q$-process is $\ell$-ergodic for some integer $\ell\geqslant 1$ iff $\max_{i\in H}\E_i\mkern-1.5mu\sigma_H^\ell$ is finite for some (equivalently, for any) non-empty finite subset $H$ of $E$. \item The $Q$-process is exponentially ergodic iff $\max_{i\in H}\E_i\mkern-1.5mu\mathrm{e}^{\lambda\sigma_H}$ is finite for some positive $\lambda$ (with $\lambda<q_i, \forall i\in E$) and some (equivalently, for any) non-empty finite subset $H$ of $E$.
\item The $Q$-process is strongly ergodic iff $\bigl(\E_i\mkern-1.5mu\sigma_H\bigr)_{i\notin H}$\vadjust{\kern2pt} is bounded for some (equivalently, for any) non-empty finite subset $H$ of $E$. \end{enumerate} \par Now we state our main results. Let $\Pi=\bigl(\Pi_{ij}\st i,j\in E\bigr)$ be the embedding chain of the $Q$-process, where we have \begin{numcases} {\Pi_{ij}=} q_{ij}/q_i,\qquad \nonumber &$j\neq i$,\\ 0, &$j=i$\nonumber. \end{numcases} \begin{theorem}\label{in_erg_con} Let $Q$ be an irreducible regular $Q$-matrix and $H$ a non-empty finite subset of $E$. Then the $Q$-process is non-ergodic iff there is a sequence $\{y^{(n)}\}^{\infty}_{n=1}$, where $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ for each $n\geqslant 1$, and $\{y^{(n)}\}^{\infty}_{n=1}$ satisfies the following conditions: \begin{enumerate}[\upshape (1)] \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\in E}$ satisfies $\sup_{i\in E} y^{(n)}_i<\infty$ and solves inequality \begin{equation}\label{in_erg_con_eq} y_i\leqslant\sum_{j\notin H}\Pi_{ij}y_j+\frac{1}{q_i},\qquad i\in E; \end{equation} \item $\sup_{n\geqslant 1} \max_{i\in H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \max_{i\in H} y^{(n)}_i =\infty$). \end{enumerate} \end{theorem} \begin{theorem}\label{in_serg_con} Let $Q$ be an irreducible regular $Q$-matrix and $H$ a non-empty finite subset of $E$.
Then the $Q$-process is non-strongly ergodic iff there is a sequence $\{y^{(n)}\}^{\infty}_{n=1}$, where $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\notin H}$ for each $n\geqslant 1$, and $\{y^{(n)}\}^{\infty}_{n=1}$ satisfies the following conditions: \begin{enumerate}[\upshape (1)] \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\notin H}$ satisfies $\sup_{i\notin H} y^{(n)}_i<\infty$ and solves inequality \begin{equation}\label{in_serg_con_eq} y_i\leqslant\sum_{j\notin H}\Pi_{ij}y_j+\frac{1}{q_i},\qquad i\notin H; \end{equation} \item $\sup_{n\geqslant 1}\sup_{i\notin H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \sup_{i\notin H} y^{(n)}_i =\infty$). \end{enumerate} \end{theorem} \begin{rmk} The testing sequences in \Cref{in_erg_con,in_serg_con} need not be non-negative. Take \Cref{in_erg_con} for instance. Let $\{y^{(n)}\}^{\infty}_{n=1}$ be a sequence satisfying the conditions in \Cref{in_erg_con}. Then for each $n\geqslant 1$, $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ is a function on $E$. Here, $y^{(n)}$ is not required to be non-negative. We may even allow $\inf_{i\in E}y^{(n)}_i=-\infty$. However, $y^{(n)}$ should be a finite-valued function. In other words, for each $n\geqslant 1$ and $i\in E$, $y^{(n)}_i$ is a finite real number. \end{rmk} The following inverse problem criterion for algebraic ergodicity generalizes \Cref{in_erg_con}. \begin{theorem}\label{in_aerg_con} Let $Q$ be an irreducible regular $Q$-matrix and $H$ a non-empty finite subset of $E$.
Suppose that the $Q$-process is $\ell$-ergodic for some non-negative integer $\ell$. Then the $Q$-process is not $(\ell+1)$-ergodic iff there is a sequence $\{y^{(n)}\}^{\infty}_{n=1}$, where $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ for each $n\geqslant 1$, and $\{y^{(n)}\}^{\infty}_{n=1}$ satisfies the following conditions: \begin{enumerate}[\upshape (1)] \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\in E}$ satisfies $\sup_{i\in E} y^{(n)}_i<\infty$ and solves inequality \begin{equation}\label{in_aerg_con_eq} y_i\leqslant\sum_{j\notin H}\Pi_{ij}y_j+\frac{(\ell+1)}{q_i}\E_i\mkern-1.5mu\sigma_H^{\ell},\qquad i\in E; \end{equation} \item $\sup_{n\geqslant 1} \max_{i\in H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \max_{i\in H} y^{(n)}_i =\infty$). \end{enumerate} \end{theorem} \Cref{in_eerg_con} is a non-exponential ergodicity criterion for $Q$-processes. \begin{theorem}\label{in_eerg_con} Let $Q$ be an irreducible regular $Q$-matrix with $\inf_{i\in E} q_i>0$ and $H$ a non-empty finite subset of $E$. Then the $Q$-process is non-exponentially ergodic iff there is a sequence of positive numbers $\{\lambda_n\}_{n=1}^{\infty}$ and a sequence of functions $\{y^{(n)}\}^{\infty}_{n=1}$ on $E$ satisfying the following conditions: \begin{enumerate}[\upshape (1)] \item $\lim_{n\to \infty}\lambda_n=0$; \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\in E}$ is finitely supported and solves inequality \begin{equation}\label{in_eerg_con_eq} y_i^{(n)}\leqslant \frac{q_i}{q_i-\lambda_n}\sum_{j\notin H}\Pi_{ij}y^{(n)}_j+\frac{1}{q_i-\lambda_n},\qquad i\in E; \end{equation} \item $\sup_{n\geqslant 1} \max_{i\in H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \max_{i\in H} y^{(n)}_i =\infty$). \end{enumerate} \end{theorem} Although applications of the above results require a sequence of testing functions, such functions can actually be manufactured in batches.
For example, one may consult the following interesting example and its proof in \Cref{sec_catastr}. \begin{exmp} Let $Q=(q_{ij})$ be a conservative $Q$-matrix on $E=\Z_+=\{0,1,2,\ldots\}$ with \begin{numcases} {q_{ij}=} i+1,\quad \nonumber &if\/ $i\geqslant0,\enspace j=i+1$,\\ \alpha_i\geqslant0, &if\/ $i\geqslant1,\enspace j=0$,\nonumber\\ 0, & other $i\neq j$\nonumber. \end{numcases} Assume there are infinitely many non-zero $\alpha_i$, so $Q$ is irreducible. Then, the $Q$-process is non-exponentially ergodic if $ \lim_{i\to\infty}\alpha_i=0$. \end{exmp} \par Brussel's model (see \cite{yanchen1986}) is a typical reaction-diffusion process with several species. The finite-dimensional Brussel's model is exponentially ergodic (cf.\@ \cite{chenjw1995}). In \Cref{chp_app}, we will demonstrate that it is non-strongly ergodic using \Cref{in_serg_con}, which was actually proved for the first time in \cite{wu2007} by a comparison method. The comparison method works for Brussel's model, but it is no longer available for more involved models like the following one. However, we can still deal with such models using the drift criteria developed in this paper; see \Cref{chp_app} for further details. \begin{exmp} Let $S$ be a finite set, $E = (\Z_+)^S$ and $p(u, v)$ a transition probability matrix on $S$. We denote by $\theta \in E$ the element whose components are identically 0 and by $e_u \in E$ the unit vector whose component at site $u \in S$ is equal to 1 and whose other components at $v \neq u$ all equal 0. Define an irreducible $Q$-matrix $Q=\bigl(q(x,y)\st x,y\in E\bigr)$ as follows: \begin{numcases} {q(x, y)=} x(u)^{\gamma}, \nonumber &if\/ $y=x+e_u$,\enspace $x\neq\theta$, \\ 1, \nonumber &if\/ $x=\theta$,\enspace$y=e_u$, \\ x(u)^{\gamma}, \nonumber &if\/ $y=x-e_u$, \\ x(u)p(u,v), \quad\nonumber &if\/ $y=x-e_u+e_v$,\enspace $v\neq u$,\\ 0, & other $y\neq x$\nonumber, \end{numcases} and $q(x) = -q(x,x)=\sum_{y\neq x}q(x,y)$, where $x = \bigl(x(u)\st u\in S\bigr) \in E$.
In \Cref{chp_app}, we will prove the following results: \begin{enumerate}[(1)] \item when $\gamma\leqslant 2$, the $Q$-process is non-strongly ergodic; \item when $\gamma\leqslant 1$, the $Q$-process is non-ergodic. \end{enumerate} \end{exmp} As for discrete-time chains, we also have the following parallel criteria. \renewcommand{\thetheorem}{\ref{in_erg_con}$^\prime$} \addtocounter{theorem}{-1} \begin{theorem}\label{in_erg_dis} Let $P=(P_{ij})$ be an irreducible aperiodic transition matrix and $H$ a non-empty finite subset of $E$. Then the chain is non-ergodic iff there is a sequence $\{y^{(n)}\}^{\infty}_{n=1}$, where $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ for each $n\geqslant 1$, and $\{y^{(n)}\}^{\infty}_{n=1}$ satisfies the following conditions: \begin{enumerate}[\upshape (1)] \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\in E}$ satisfies $\sup_{i\in E} y^{(n)}_i<\infty$ and solves inequality \renewcommand{\theequation}{\ref{in_erg_con_eq}$^\prime$} \addtocounter{equation}{-1} \begin{equation}\label{in_erg_dis_eq} y_i \leqslant \sum_{j\notin H}P_{ij}y_j+1,\qquad i\in E; \end{equation} \renewcommand{\theequation}{\arabic{equation}} \item $\sup_{n\geqslant 1} \max_{i\in H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \max_{i\in H} y^{(n)}_i =\infty$). \end{enumerate} \end{theorem} \renewcommand{\thetheorem}{\arabic{theorem}} \renewcommand{\thetheorem}{\ref{in_serg_con}$^\prime$} \addtocounter{theorem}{-1} \begin{theorem}\label{in_serg_dis} Let $P=(P_{ij})$ be an irreducible aperiodic transition matrix and $H$ a non-empty finite subset of $E$.
Then the chain is non-strongly ergodic iff there is a sequence $\{y^{(n)}\}^{\infty}_{n=1}$, where $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\notin H}$ for each $n\geqslant 1$, and $\{y^{(n)}\}^{\infty}_{n=1}$ satisfies the following conditions: \begin{enumerate}[\upshape (1)] \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\notin H}$ satisfies $\sup_{i\notin H} y^{(n)}_i<\infty$ and solves inequality \renewcommand{\theequation}{\ref{in_serg_con_eq}$^\prime$} \addtocounter{equation}{-1} \begin{equation}\label{in_serg_dis_eq} y_i \leqslant \sum_{j\notin H}P_{ij}y_j+1,\qquad i\notin H; \end{equation} \renewcommand{\theequation}{\arabic{equation}} \item $\sup_{n\geqslant 1}\sup_{i\notin H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \sup_{i\notin H} y^{(n)}_i =\infty$). \end{enumerate} \end{theorem} \renewcommand{\thetheorem}{\arabic{theorem}} \renewcommand{\thetheorem}{\ref{in_aerg_con}$^\prime$} \addtocounter{theorem}{-1} \begin{theorem}\label{in_aerg_dis} Let $P=(P_{ij})$ be an irreducible aperiodic transition matrix and $H$ a non-empty finite subset of $E$. Suppose that the chain is $\ell$-ergodic for some non-negative integer $\ell$. Then the chain is not $(\ell+1)$-ergodic iff there is a sequence $\{y^{(n)}\}^{\infty}_{n=1}$, where $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ for each $n\geqslant 1$, and $\{y^{(n)}\}^{\infty}_{n=1}$ satisfies the following conditions: \begin{enumerate}[\upshape (1)] \item for each $n\geqslant 1$, $\bigl(y^{(n)}_i\bigr)_{i\in E}$ satisfies $\sup_{i\in E} y^{(n)}_i<\infty$ and solves inequality \renewcommand{\theequation}{\ref{in_aerg_con_eq}$^\prime$} \addtocounter{equation}{-1} \begin{equation}\label{in_aerg_dis_eq} y_i\leqslant\sum_{j\notin H}P_{ij}y_j+\E_i\mkern-1.5mu\sigma_H^{\ell},\qquad i\in E; \end{equation} \renewcommand{\theequation}{\arabic{equation}} \item $\sup_{n\geqslant 1} \max_{i\in H} y^{(n)}_i =\infty$ (or equivalently, $\varlimsup_{n\to \infty} \max_{i\in H} y^{(n)}_i =\infty$).
\end{enumerate} \end{theorem} \renewcommand{\thetheorem}{\arabic{theorem}} The remainder of this paper is organized as follows. In \Cref{chp_prf}, we present proofs for our criteria. In \Cref{chp_sbp}, our criteria are applied to single birth processes. Some multi-dimensional models are treated in \Cref{chp_app}. \section{Proofs of Criteria for Inverse Problems}\label{chp_prf} \subsection{Minimal Solution Theory Preparations}\label{sec_mini_soln} Our proofs are based on minimal solution theory. To begin, let us briefly recall some useful results of minimal solution theory from \cite{hou1988,chen2004}. \par Let $E$ be an arbitrary non-empty set. Denote by $\mathscr{H}$ a set of mappings from $E$ to $\overline{\R}_+\coloneqq [0,+\infty]$: $\mathscr{H}$ contains the constant function 1 and is closed under non-negative linear combination and monotone increasing limit, where the order relation ``$\geqslant$'' in $\mathscr{H}$ is defined pointwise. Then, $\mathscr{H}$ is a convex cone. We say that $A\colon\mathscr{H}\to\mathscr{H}$ is a cone mapping if $A0=0$ and \begin{displaymath} A(c_1f_1+c_2f_2)=c_1Af_1+c_2Af_2,\qquad \text{for all }c_1, c_2\geqslant 0 \text{ and }f_1, f_2\in \mathscr{H}. \end{displaymath} Denote by $\mathscr{A}$ the set of all such mappings which also satisfy the following hypothesis: \begin{displaymath} \mathscr{H}\ni f_n\uparrow f \quad \text{implies}\quad Af_n\uparrow Af. \end{displaymath} \begin{definition} Given $A\in\mathscr{A}$ and $g\in\mathscr{H}$. We say $f^{*}$ is a minimal non-negative solution (abbr.\@ minimal solution) to equation \begin{equation}\label{mini_eq} f=Af+g,\qquad x\in E, \end{equation} if $f^*$ satisfies \Cref{mini_eq} and for any solution $\widetilde{f}\in\mathscr{H}$ to \Cref{mini_eq}, we have \begin{displaymath} \widetilde{f}\geqslant f^*,\qquad x\in E. \end{displaymath} \end{definition} \begin{theorem}[\mbox{\cite[Theorem 2.2]{chen2004}}]\label{mini_uniq} The minimal solution to \Cref{mini_eq} always exists uniquely.
\end{theorem} \begin{definition} Let $A, \widetilde{A}\in \mathscr{A}$ and $g, \widetilde{g}\in \mathscr{H}$ satisfy \begin{displaymath} \widetilde{A}\geqslant A,\qquad \widetilde{g}\geqslant g. \end{displaymath} Then we call \begin{equation}\label{mini_ctrl} \widetilde{f} \geqslant \widetilde{A}\widetilde{f}+\widetilde{g},\qquad x\in E \end{equation} a controlling equation of \Cref{mini_eq}. \end{definition} \begin{theorem}[\mbox{\cite[Theorem 2.6]{chen2004}, Comparison Principle}]\label{mini_cmprs} Let $f^*$ be the minimal solution to \Cref{mini_eq}. Then for any solution $\widetilde{f}$ to \Cref{mini_ctrl}, we have $\widetilde{f}\geqslant f^*$. \end{theorem} By \Cref{mini_uniq}, we may define a map \begin{displaymath}\begin{split} m_A\colon \mathscr{H} &\to \mathscr{H},\\ g &\mapsto m_A g, \end{split}\end{displaymath} where $m_A g$ denotes the minimal solution to \Cref{mini_eq}. \begin{theorem}[\mbox{\cite[Theorem 2.7]{chen2004}}]\label{mini_apprx_org} $m_A$ is a cone mapping. For $\{A_n\}\subseteq \mathscr{A}$, $A_n \uparrow A$ and $\{g_n\}\subseteq \mathscr{H}$, $g_n\uparrow g$, we have $A\in \mathscr{A}, g\in\mathscr{H}$ and $m_{A_n}g_n\uparrow m_{A}g$. \end{theorem} The following minimal solution characterizations of moments of hitting times are essential for us to exploit minimal solution theory. \begin{theorem}[\mbox{\cite[Theorem 3.1]{mao2004}}] \label{con_mini_alge} For any $\ell\geqslant 1$, the moments of return times $\E_i\mkern-1.5mu\sigma_H,\E_i\mkern-1.5mu\sigma_H^2,\ldots,\E_i\mkern-1.5mu\sigma_H^\ell$ are inductively the minimal solution to the following $\ell$-family of systems for $0\leqslant n\leqslant \ell-1$, \begin{displaymath} x_i^{(n+1)}=\sum_{j\notin H}\Pi_{ij}x_j^{(n+1)}+\frac{(n+1)}{q_i}x_i^{(n)},\qquad i\in E, \end{displaymath} where $x_i^{(0)}=1\,(i\in E)$. 
\end{theorem} When $\ell=1$, \Cref{con_mini_alge} gives \begin{corollary}\label{con_min1} $(\E_i\mkern-1.5mu\sigma_H)_{i\in E}$ is the minimal solution to \begin{displaymath} x_i=\sum_{j\notin H}\Pi_{ij}x_j+\frac{1}{q_i},\qquad i\in E. \end{displaymath} \end{corollary} \begin{theorem}[\mbox{\cite[Theorem 4.48]{chen2004}}]\label{con_mini_exp} For a non-empty finite subset $H$ of $E$ and positive $\lambda$ with $\lambda<q_i$ for all $i\in E$, set \begin{displaymath} e_{iH}(\lambda)\coloneqq \frac{1}{\lambda}\bigl(\E_i\mkern-1.5mu\mathrm{e}^{\lambda\sigma_H}-1\bigr) = \int_0^\infty \mathrm{e}^{\lambda t}\Psub{i}[\sigma_H>t]\df{t} \end{displaymath} for each $i\in E$ (cf.\@ \cite[Page 148, Equivalence of Theorems 4.45 and 4.44]{chen2004}). Then $\bigl(e_{iH}(\lambda)\bigr)_{i\in E}$ is the minimal solution to \begin{displaymath} x_i=\frac{q_i}{q_i-\lambda}\sum_{j\notin H}\Pi_{ij}x_j+\frac{1}{q_i-\lambda},\qquad i\in E. \end{displaymath} \end{theorem} To prove the criteria, we may assume $E=\{0, 1, 2, \ldots\}$ and $H=\{0\}$ without loss of generality. Since the proofs for discrete-time Markov Chains are similar to those for continuous-time Chains, we only give the proofs in the continuous-time setup. One may easily prove the discrete-time results using a similar technique. \par Before proceeding further, let us briefly describe the main points of our proofs. Take non-ergodicity for instance. In order to prove that the expectation of the return time to state $0$ is infinite, we first derive a lower bound for it; a sequence of increasing lower bounds then implies the desired result. On the other hand, the finite approximation method guarantees the existence of an increasing sequence of lower bounds and therefore the necessity of our conditions. \par \subsection{Lower Bound for Polynomial Moments and Sufficiency}\label{sec_suffi} \begin{theorem}\label{dmc} Let $P=(P_{ij})$ be an irreducible conservative transition matrix on $E$.
Then the chain is transient iff the inequality \begin{displaymath} \sum_{j\geqslant 0}P_{ij} z_j\leqslant z_i,\qquad i\geqslant 1 \end{displaymath} has a solution $z=(z_i)_{i\geqslant 0}$ satisfying \begin{displaymath} -\infty<\inf_{i\geqslant 0} z_i <z_0. \end{displaymath} \end{theorem} As \Cref{dmc} is a slight modification of \cite[Theorem 4.25]{chen2004}, its proof is not included here. One may also find a proof in \cite[Proposition 1.3]{martin2016}. \begin{lemma}\label{cmprs_a} Let $\ell$ be a non-negative integer and $Q$ an irreducible regular $Q$-matrix on $E$. Assume further that the inequality \begin{displaymath} y_i\leqslant\sum_{ \begin{subarray}{c} j\geqslant 1\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}y_j+\frac{(\ell+1)}{q_i}\E_i\mkern-1.5mu\sigma_0^\ell,\qquad i\geqslant1 \end{displaymath} has a finite solution $y=(y_i)_{i\geqslant 1}$ with $\sup_{i\geqslant1} y_i<\infty$. If the $Q$-process is $(\ell+1)$-ergodic, then we have \begin{displaymath} y_i\leqslant \E_i\mkern-1.5mu\sigma_0^{\ell+1}, \qquad i\geqslant 1. \end{displaymath} \end{lemma} \begin{proof} Since the $Q$-process is $(\ell+1)$-ergodic, $(\E_i\mkern-1.5mu\sigma_0^{\ell+1})_{i\geqslant 1}$ is finite and is the minimal non-negative solution to \begin{displaymath} x_i=\sum_{ \begin{subarray}{c} j\geqslant 1\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j+\frac{(\ell+1)}{q_i}\E_i\mkern-1.5mu\sigma_0^\ell,\qquad i\geqslant 1. \end{displaymath} Set \begin{numcases} {z_i=} 0, \nonumber &$i=0$, \\ \E_i\mkern-1.5mu\sigma_0^{\ell+1}-y_i,\qquad \nonumber&$i\geqslant 1$. \end{numcases} Then $(z_i)_{i\geqslant0}$ satisfies \begin{displaymath} \left\{ \begin{aligned} \sum_{j\geqslant 0}\Pi_{ij} z_j &\leqslant z_i,\qquad i\geqslant 1, \\ \inf_{i\geqslant 0} z_i&>-\infty. \end{aligned} \right. \end{displaymath} The $Q$-process is recurrent by our assumption, and so is its embedding chain.
Applying \Cref{dmc} to the embedding chain $\Pi=(\Pi_{ij})$, we arrive at the conclusion that \begin{displaymath} z_i\geqslant z_0,\qquad i\geqslant 1. \end{displaymath} In other words, \begin{displaymath} y_i\leqslant \E_i\mkern-1.5mu\sigma_0^{\ell+1},\qquad i\geqslant 1.\qedhere \end{displaymath} \end{proof} \begin{remark}\label{upbound} The hypothesis ``$\sup_{i\geqslant1} y_i<\infty$'' cannot be removed from \Cref{cmprs_a}. In fact, consider an ergodic $Q$-process; then $(\E_i\mkern-1.5mu\sigma_0)_{i\geqslant 1}$ is the minimal non-negative solution to \begin{equation}\label{aux} x_i=\sum_{ \begin{subarray}{c} j\geqslant 1\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j+\frac{1}{q_i},\qquad i\geqslant 1. \end{equation} On the other hand, fix an arbitrary $\varepsilon>0$; if we take \begin{displaymath}\begin{split} x_1'&= \E_1\mkern-1.5mu\sigma_0 + \varepsilon,\\ x_i'&= x_1' \sum_{k=0}^{i-1}F_k^{(0)}-\sum_{k=0}^{i-1}d_k^{(0)},\qquad i\geqslant 2, \end{split}\end{displaymath} then $(x_i')_{i\geqslant 1}$ also solves \Cref{aux} (cf.\@ \cite{chenzhang2014}). Because the $Q$-process is assumed to be ergodic and thus recurrent, we have $\sum_{k=0}^\infty F_k^{(0)}=\infty$ (cf.\@ \cite{chen2004, chenzhang2014}). Noting that \begin{displaymath} x_i'-\E_i\mkern-1.5mu\sigma_0= \varepsilon \sum_{k=0}^{i-1} F_k^{(0)}, \end{displaymath} we may conclude that $(x_i')_{i\geqslant 1}$ is unbounded. Meanwhile, we have \begin{displaymath} \E_i\mkern-1.5mu\sigma_0 < x_i', \qquad i\geqslant 1. \end{displaymath} This implies that the condition ``$\sup_{i\geqslant1} y_i<\infty$'' cannot be removed. \end{remark} \par It is straightforward to write the discrete-time analogue of \Cref{cmprs_a}, and we shall omit its proof. \renewcommand{\thetheorem}{\ref{cmprs_a}$^\prime$} \addtocounter{theorem}{-1} \begin{lemma}\label{cmprs_a_dis} Let $\ell$ be a non-negative integer and $P$ an irreducible aperiodic transition matrix on $E$.
Assume further that the inequality \begin{displaymath} y_i\leqslant\sum_{j\geqslant 1}P_{ij}y_j+\E_i\mkern-1.5mu\sigma_0^\ell,\qquad i\geqslant1 \end{displaymath} has a finite solution $y=(y_i)_{i\geqslant 1}$ with $\sup_{i\geqslant1} y_i<\infty$. If the chain is $(\ell+1)$-ergodic, then we have \begin{displaymath} y_i\leqslant \E_i\mkern-1.5mu\sigma_0^{\ell+1}, \qquad i\geqslant 1. \end{displaymath} \par \vspace{-1.2\baselineskip} \qed \end{lemma} \renewcommand{\thetheorem}{\arabic{theorem}} \begin{proof}[Proof of sufficiency of $\Cref{in_aerg_con}$] If the $Q$-process is $(\ell+1)$-ergodic, by \Cref{con_mini_alge} and \Cref{cmprs_a}, for each $n\geqslant 1$, \begin{displaymath} y^{(n)}_0\leqslant \sum_{ \begin{subarray}{c} j\geqslant 1\\ \end{subarray}}\frac{q_{0j}}{q_0}y^{(n)}_j+\frac{(\ell+1)}{q_0}\E_0\mkern-1.5mu\sigma_0^\ell \leqslant \sum_{ \begin{subarray}{c} j\geqslant 1\\ \end{subarray}}\frac{q_{0j}}{q_0}\E_j\mkern-1.5mu\sigma_0^{\ell+1}+\frac{(\ell+1)}{q_0}\E_0\mkern-1.5mu\sigma_0^\ell=\E_0\mkern-1.5mu\sigma_0^{\ell+1}. \end{displaymath} It follows that \begin{displaymath} \infty=\sup_{n\geqslant1}y^{(n)}_0 \leqslant \E_0\mkern-1.5mu\sigma_0^{\ell+1} <\infty, \end{displaymath} a contradiction. \end{proof} \begin{proof}[Proof of sufficiency of $\Cref{in_serg_con}$] It suffices to prove \Cref{in_serg_con} when the $Q$-process is ergodic. By \Cref{cmprs_a} with $\ell=0$, we have \begin{displaymath} y^{(n)}_i\leqslant \E_i\mkern-1.5mu\sigma_0, \qquad i\geqslant 1,\enspace n\geqslant 1. \end{displaymath} Consequently, \begin{displaymath} \infty=\sup_{n\geqslant1}\sup_{i\geqslant1}y^{(n)}_i\leqslant \sup_{i\geqslant1}\E_i\mkern-1.5mu\sigma_0. \end{displaymath} Thus the $Q$-process is non-strongly ergodic. Our proof is now complete. \end{proof} \subsection{Approximation for Polynomial Moments and Necessity}\label{sec_neces} Let $\ell$ be a fixed non-negative integer.
To prove necessity of \Cref{in_aerg_con,in_serg_con}, we consider truncated equations for each $n\geqslant 1$: \addtocounter{equation}{1} \begin{equation}\label{apprxa_eq} x_i=\sum_{ \begin{subarray}{c} 1\leqslant j\leqslant n\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j+\frac{(\ell+1)}{q_i}\E_i\mkern-1.5mu\sigma_0^\ell,\qquad 1\leqslant i\leqslant n.\tag{\theequation.n} \end{equation} Denote the minimal non-negative solution to \Cref{apprxa_eq} as \begin{displaymath} x^{(n)}=\bigl(x^{(n)}_i,\ 1\leqslant i\leqslant n\bigr). \end{displaymath} Also, we set $ M_n=\max_{1\leqslant i\leqslant n}x^{(n)}_i$. \begin{lemma}\label{apprxa} If the $Q$-process is $\ell$-ergodic, then we have the following assertions: \begin{enumerate}[\upshape (1)] \item $M_n$ is finite for each $n\geqslant 1$;\label{apprxa:1} \item $\E_i\mkern-1.5mu\sigma_0^{\ell+1}= \lim_{n\to\infty} {\hskip -7pt} \uparrow x^{(n)}_i$ for each $i\geqslant 1$, and $(M_n)_{n\geqslant1}$ is increasing;\label{apprxa:2} \item $(M_n)_{n\geqslant1}$ is bounded iff\/ $(\E_i\mkern-1.5mu\sigma_0^{\ell+1})_{i\geqslant 1}$ is bounded;\label{apprxa:3} \item pick $\ell=0$; then it follows from \eqref{apprxa:3} that the $Q$-process is non-strongly ergodic iff\/ $\sup_{n\geqslant1} M_n=\infty$.\label{apprxa:4} \end{enumerate} \end{lemma} \begin{proof} a) Since the $Q$-process is $\ell$-ergodic, we may pick a positive constant \begin{displaymath} C_n = (\ell+1) \max_{1\leqslant i\leqslant n}\E_i\mkern-1.5mu\sigma_0^\ell+1. \end{displaymath} Now consider the inequality \begin{displaymath} x_i \geqslant \sum_{ \begin{subarray}{c} 1\leqslant j\leqslant n\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j+\frac{C_n}{q_i},\qquad 1\leqslant i\leqslant n.
\end{displaymath} Introducing a change of variable $\widetilde{x}_i=\frac{x_i}{C_n}$, we have the following equivalent form of the above inequality: \addtocounter{equation}{1} \begin{equation}\label{apprxa_eq_aux}\tag{\theequation.n} \widetilde{x}_i \geqslant \sum_{ \begin{subarray}{c} 1\leqslant j\leqslant n\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}\widetilde{x}_j+\frac{1}{q_i},\qquad 1\leqslant i\leqslant n. \end{equation} By \Cref{con_min1}, the minimal solution to \Cref{apprxa_eq_aux} is the expectation of the return time to state 0 of the $Q^{(n)}$-process and is therefore finite, where $Q^{(n)}$ has the following form: \begin{displaymath} Q^{(n)}= \left( \begin{array}{ccccc} -n\quad &{1\ \ } &{1\ \ } &\cdots &{1}\\ q_{10}+\sum_{k=n+1}^{\infty}q_{1,k}\quad &{q_{11}\ } &{q_{12} } &\cdots &{q_{1n}}\\ \vdots & \vdots & \vdots &\vdots &\vdots\\ q_{n0}+\sum_{k=n+1}^{\infty}q_{n,k}\quad &{q_{n1}\ } &{q_{n2}\ } &\cdots &{q_{nn}}\\ \end{array} \right)_{(n+1)\times(n+1)}. \end{displaymath} Now by \Cref{mini_cmprs}, $M_n$ is finite. \par b) By \Cref{con_mini_alge}, $\bigl(\E_i\mkern-1.5mu\sigma_0^{\ell+1}\bigr)_{i\geqslant 1}$ is the minimal solution to \begin{displaymath} x_i=\sum_{ \begin{subarray}{c} j\geqslant 1\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j+\frac{(\ell+1)}{q_i}\E_i\mkern-1.5mu\sigma_0^\ell,\qquad i\geqslant 1. \end{displaymath} Exploiting \Cref{mini_apprx_org}, we obtain the second assertion. \par c) Routine manipulations yield the other two assertions. We omit the details. \end{proof} \begin{proof}[Proof of necessity of $\Cref{in_aerg_con}$] Suppose the $Q$-process is not $(\ell+1)$-ergodic.
Set \begin{numcases} {y^{(n)}_i=} \sum_{1\leqslant j\leqslant n}\frac{q_{0j}}{q_0}x^{(n)}_j+\frac{(\ell+1)}{q_0}\E_0\mkern-1.5mu\sigma_0^\ell, \quad\nonumber &$i=0$, \\ x^{(n)}_i, \nonumber &$1\leqslant i \leqslant n$, \\ 0, &$i\geqslant n+1$.\nonumber \end{numcases} By the monotone convergence theorem and \Cref{con_mini_alge}, \begin{displaymath}\begin{split} \lim_{n\to \infty} y^{(n)}_0 &= \lim_{n\to \infty} \sum_{j\geqslant 1}\frac{q_{0j}}{q_0}y^{(n)}_j+\frac{(\ell+1)}{q_0}\E_0\mkern-1.5mu\sigma_0^\ell\\ &= \sum_{j\geqslant 1}\frac{q_{0j}}{q_0}\E_j\mkern-1.5mu\sigma_0^{\ell+1}+\frac{(\ell+1)}{q_0}\E_0\mkern-1.5mu\sigma_0^\ell\\ &=\E_0\mkern-1.5mu\sigma_0^{\ell+1} =\infty. \end{split}\end{displaymath} Now it is easy to check that $\{y^{(n)}\}^{\infty}_{n=1}$ with $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ is a required sequence. Necessity of \Cref{in_aerg_con} is proved. \end{proof} \begin{proof}[Proof of necessity of $\Cref{in_serg_con}$] Assume the $Q$-process is non-strongly ergodic. We pick $\ell=0$ in \Cref{apprxa} and set \begin{numcases} {y^{(n)}_i=} x^{(n)}_i, \quad\nonumber &$1\leqslant i \leqslant n$, \\ 0, &$i\geqslant n+1$.\nonumber \end{numcases} Then $\{y^{(n)}\}^{\infty}_{n=1}$ is a sequence as required in \Cref{in_serg_con}. In fact, we may easily deduce that for each $n\geqslant 1$, $y^{(n)}$ solves \Cref{in_serg_con_eq}. Meanwhile, for each $n\geqslant 1$, we have \begin{displaymath} \sup_{i\geqslant1} y^{(n)}_i=M_n<\infty. \end{displaymath} By the last assertion of \Cref{apprxa}, $\sup_{n\geqslant1} M_n=\infty$. Therefore, \begin{displaymath} \sup_{n\geqslant 1} \sup_{i\geqslant1} y^{(n)}_i =\sup_{n\geqslant 1}M_n=\infty. \end{displaymath} This proves the necessity of \Cref{in_serg_con}. \end{proof} \subsection{Proof of \Cref{in_eerg_con}}\label{sec_eergpf} Now we prove \Cref{in_eerg_con}, the criterion for non-exponential ergodicity.
Since we are discussing exponential ergodicity in this subsection, we assume the process is ergodic without loss of generality. Our idea for the proof of \Cref{in_eerg_con} is similar to that of \Cref{in_aerg_con,in_serg_con}, but the technical details here are different and more involved. Briefly speaking, we first use \Cref{mini_ctrl_in} to obtain a lower bound for the exponential moment of the return time; on the other hand, we use finite approximation to prove the necessity. \par First, using the notation in \Cref{sec_mini_soln}, we have the following two useful results. \begin{theorem}[\mbox{\cite[Theorem 2.10]{chen2004}}]\label{mini_iter} Given an arbitrary non-negative $\widetilde{f}^{(0)}$ satisfying $0\leqslant \widetilde{f}^{(0)} \leqslant pf^*$ for some non-negative number $p$, set \begin{displaymath} \widetilde{f}^{(n+1)} = A\widetilde{f}^{(n)}+g,\qquad n\geqslant 0. \end{displaymath} Then we have $\widetilde{f}^{(n)} \to f^*\,(n\to\infty)$. \end{theorem} \begin{lemma}\label{mini_ctrl_in} Let $f^*$ be the minimal solution to \Cref{mini_eq} and $\widetilde{f}$ be a non-negative function satisfying \begin{equation}\label{mini_ctrl_in_eq} \widetilde{f} \leqslant A\widetilde{f}+ g \qquad \text{on } E. \end{equation} If $\widetilde{f}\leqslant pf^*$ for some non-negative number $p$, then $\widetilde{f}\leqslant f^*$. \end{lemma} \begin{proof} Assume $p>1$ without loss of generality. Define \begin{displaymath}\begin{split} \widetilde{f}^{(0)}&=\widetilde{f},\\ \widetilde{f}^{(n+1)}&=A\widetilde{f}^{(n)}+g,\qquad n\geqslant 0. \end{split}\end{displaymath} We claim \begin{displaymath} \widetilde{f}^{(n)}\uparrow f^*,\qquad \text{as }n\to \infty. \end{displaymath} In fact, by \Cref{mini_iter}, we have \begin{displaymath} \widetilde{f}^{(n)}\to f^*,\qquad \text{as }n\to \infty. \end{displaymath} So we need only show the monotonicity. According to \Cref{mini_ctrl_in_eq}, \begin{displaymath} \widetilde{f}^{(0)} \leqslant A\widetilde{f}^{(0)}+ g = \widetilde{f}^{(1)}.
\end{displaymath} Now if $\widetilde{f}^{(n)} \leqslant \widetilde{f}^{(n+1)}$ for some $n\geqslant 0$, then \begin{displaymath} \widetilde{f}^{(n+1)} = A\widetilde{f}^{(n)}+ g \leqslant A\widetilde{f}^{(n+1)}+ g= \widetilde{f}^{(n+2)}. \end{displaymath} So the monotonicity holds by induction. It follows immediately that \begin{displaymath} \widetilde{f}= \widetilde{f}^{(0)} \leqslant f^*. \end{displaymath} \Cref{mini_ctrl_in} is proved. \end{proof} \par Let $Q$ be a $Q$-matrix on $E$ with $\inf_{i\in E}q_i>0$. Fix an integer $N\geqslant 1$ and consider the following $Q$-matrix on the finite state space $\{0,1,\ldots,N\}$: \begin{displaymath} Q^{(N)}= \left( \begin{array}{ccccc} -N\quad &{1\ } & {1\ } &\cdots &{1}\\ q_{10}+\sum_{k=N+1}^{\infty}q_{1,k}\quad &{q_{11}\ } &{q_{12}\ } &\cdots &{q_{1N}}\\ \vdots & \vdots & \vdots &\vdots &\vdots\\ q_{N0}+\sum_{k=N+1}^{\infty}q_{N,k}\quad &{q_{N1}\ } &{q_{N2}\ } &\cdots &{q_{NN}}\\ \end{array} \right)_{(N+1)\times(N+1)}. \end{displaymath} Meanwhile, we consider the following equation for $\lambda\in\bigl(0, \inf_{i\in E}q_i\bigr)$: \begin{equation}\label{eloc} x_i=\frac{q_i}{q_i-\lambda}\sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j + \frac{1}{q_i-\lambda}, \qquad 0\leqslant i \leqslant N. \end{equation} Denote the minimal solution to \Cref{eloc} as $\bigl(x_i^{(\lambda,N)}, 0\leqslant i \leqslant N\bigr)$. Then by \Cref{mini_apprx_org}, we have \begin{displaymath} x_0^{(\lambda,N)} \uparrow e_{00}(\lambda),\qquad \text{as }N\to \infty. \end{displaymath} Also, we set $\lambda'=\frac{1}{2}\inf_{i\in E}q_i$. \begin{lemma}\label{in_eerg_prep} \begin{enumerate}[\upshape (1)] \item If the $Q$-process is non-exponentially ergodic, then \begin{displaymath} \lim_{N\to\infty} {\hskip -7pt} \uparrow x^{(\lambda', N)}_0=e_{00}(\lambda')=\infty.
\end{displaymath} \item If\/ $x_0^{(\widetilde{\lambda},N)}$ is finite for some $\widetilde{\lambda}\in\bigl(0, \inf_{i\in E}q_i\bigr)$, then for some $\widehat{\lambda}\in\bigl(\widetilde{\lambda}, \inf_{i\in E}q_i\bigr)$, $x_0^{(\widehat{\lambda},N)}$ is finite. \item If\/ $x_0^{(\widetilde{\lambda},N)}<\infty$ for some $\widetilde{\lambda}\in\bigl(0, \inf_{i\in E}q_i\bigr)$, then $x_0^{(\lambda,N)}$ is continuous at $\widetilde{\lambda}$ as a function of $\lambda$. \item If\/ $x_0^{(\widetilde{\lambda},N)}=\infty$ for some $\widetilde{\lambda}\in\bigl(0, \inf_{i\in E}q_i\bigr)$, then \begin{displaymath}\begin{split} &\lim_{\lambda\uparrow \widetilde{\lambda}}x_0^{(\lambda,N)}=\infty,\\ &x_0^{(\lambda,N)}=\infty,\qquad \lambda>\widetilde{\lambda}. \end{split}\end{displaymath} In other words, $x_0^{(\lambda,N)}$ is continuous at $\widetilde{\lambda}$ as an extended real-valued function. \item For any fixed integer $N\geqslant1$, \begin{displaymath} \lim_{\lambda\downarrow 0} x_0^{(\lambda,N)}\leqslant \E_0\mkern-1.5mu\sigma_0<\infty. \end{displaymath} \end{enumerate} \end{lemma} \begin{proof} a) The first assertion follows directly from \Cref{mini_apprx_org} and non-exponential ergodicity. \par b) By \Cref{eloc}, $\bigl(2x_i^{(\widetilde{\lambda},N)}, 0\leqslant i \leqslant N\bigr)$ is a finite solution to \begin{displaymath} x_i=\frac{q_i}{q_i-\widetilde{\lambda}}\sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j + \frac{2}{q_i-\widetilde{\lambda}}, \qquad 0\leqslant i \leqslant N. \end{displaymath} So it satisfies \begin{displaymath} x_i>\frac{q_i}{q_i-\widetilde{\lambda}}\sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j + \frac{1}{q_i-\widetilde{\lambda}}, \qquad 0\leqslant i \leqslant N.
\end{displaymath} Consequently, $\bigl(2x_i^{(\widetilde{\lambda},N)}, 0\leqslant i \leqslant N\bigr)$ also satisfies \begin{displaymath} x_i>\frac{q_i}{q_i-\widehat{\lambda}}\sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j + \frac{1}{q_i-\widehat{\lambda}}, \qquad 0\leqslant i \leqslant N, \end{displaymath} for some $\widehat{\lambda}$ slightly larger than $\widetilde{\lambda}$. Now by \Cref{mini_cmprs}, $x_0^{(\widehat{\lambda},N)}$ is finite. \par c) By the second assertion, to prove the third one, we need only prove that $x_0^{(\lambda,N)}$ is continuous on the interval $(0,\widetilde{\lambda}]$. Because \begin{displaymath} x_i^{(\lambda,N)}=\frac{1}{\lambda}\bigl(\E_i\mkern-1.5mu^{(Q^{(N)})}\mathrm{e}^{\lambda\sigma_0}-1\bigr), \qquad 1\leqslant i \leqslant N, \end{displaymath} $x_i^{(\lambda,N)}\,(i=1,2,\ldots,N)$ is continuous on the interval $(0,\widetilde{\lambda}]$ by the Lebesgue dominated convergence theorem. Furthermore, $x_0^{(\lambda,N)}$ is continuous on the interval according to the equality \begin{displaymath} x_0=\frac{q_0}{q_0-\lambda}\sum_{1\leqslant j \leqslant N}\frac{q_{0j}}{q_0}x_j + \frac{1}{q_0-\lambda}. \end{displaymath} \par d) The fourth assertion is clear from the above discussion. \par e) Now we prove the last assertion. Since the $Q$-process is assumed to be ergodic, $\E_0\mkern-1.5mu\sigma_0<\infty$. We need only show \begin{displaymath} \lim_{\lambda\downarrow 0} x_0^{(\lambda,N)}\leqslant \E_0\mkern-1.5mu\sigma_0. \end{displaymath} By the proof of ``Equivalence of Theorems 4.45 and 4.44'' in \cite[Page 148]{chen2004}, we have \begin{displaymath} x_i^{(\lambda,N)}=\int_{0}^{\infty}\mathrm{e}^{\lambda t}\Psub{i}^{(Q^{(N)})}[\sigma_0>t]\df{t},\qquad i\geqslant 1.
\end{displaymath} Because the $Q^{(N)}$-process, as a process on a finite state space, must be exponentially ergodic, the Lebesgue dominated convergence theorem gives \begin{displaymath} \lim_{\lambda\downarrow 0} x_i^{(\lambda,N)} =\int_{0}^{\infty}\Psub{i}^{(Q^{(N)})}[\sigma_0>t]\df{t} = \E_i^{(Q^{(N)})}\mkern-1.5mu\sigma_0 \leqslant \E_i\mkern-1.5mu\sigma_0,\qquad i\geqslant 1, \end{displaymath} where the last inequality is by \Cref{mini_apprx_org,con_min1}. Furthermore, by \Cref{eloc}, \begin{displaymath}\begin{split} \lim_{\lambda\downarrow 0} x_0^{(\lambda,N)}&= \lim_{\lambda\downarrow 0}\frac{q_0}{q_0-\lambda}\sum_{1\leqslant j \leqslant N}\frac{q_{0j}}{q_0}x^{(\lambda,N)}_j + \lim_{\lambda\downarrow 0}\frac{1}{q_0-\lambda}\\ &=\sum_{1\leqslant j \leqslant N}\frac{q_{0j}}{q_0}\E_j^{(Q^{(N)})}\mkern-1.5mu\sigma_0 + \frac{1}{q_0}\\ &\leqslant \sum_{j \geqslant 1}\frac{q_{0j}}{q_0}\E_j\mkern-1.5mu\sigma_0 + \frac{1}{q_0}=\E_0\mkern-1.5mu\sigma_0. \end{split}\end{displaymath} Therefore, the last assertion holds. \end{proof} \begin{corollary}\label{func} For each $N\geqslant 1$, $x_0^{(\lambda,N)}$ is continuous in $\lambda$ on the interval $(0,\lambda']$ as an extended real-valued function.\qed \end{corollary} \begin{proof}[Proof of necessity of $\Cref{in_eerg_con}$] For each positive integer $n\leqslant\E_0\mkern-1.5mu\sigma_0$, we define $y^{(n)}_i\equiv 0\,(i\in E)$ and $\lambda_n=\lambda'$. For each $n>\E_0\mkern-1.5mu\sigma_0$, we now construct $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\in E}$ and $\lambda_n$ satisfying \begin{displaymath} y^{(n)}_0\geqslant n,\quad\lambda_n\leqslant\frac{1}{n}. \end{displaymath} In fact, by the first assertion of \Cref{in_eerg_prep}, we may pick a large $N_n$ such that \begin{displaymath} x^{(\lambda', N_n)}_0\geqslant n. \end{displaymath} Then for each $N\geqslant N_n$, \begin{displaymath} x^{(\lambda', N)}_0\geqslant n.
\end{displaymath} Furthermore, by \Cref{func} and the last assertion of \Cref{in_eerg_prep}, for each $N\geqslant N_n$, there exists $\lambda(n,N)\in (0,\lambda']$ such that \begin{displaymath} x^{(\lambda(n,N), N)}_0= n. \end{displaymath} For ease of notation, we write $ c=\inf_{N\geqslant N_n}\lambda(n,N)$. We claim that $c=0$. Otherwise, if $c>0$, then since $x_0^{(\lambda,N)}$ is non-decreasing in $\lambda$, we have \begin{displaymath} e_{00}(c)=\lim_{N\to\infty}x_0^{(c, N)}\leqslant n, \end{displaymath} contradicting non-exponential ergodicity. Consequently, we may pick some $\widetilde{N}_n\geqslant N_n$ with $\lambda(n,\widetilde{N}_n)\leqslant\frac{1}{n}$ and denote $\lambda_n=\lambda(n,\widetilde{N}_n)$. Then we have $\lambda_n\leqslant\frac{1}{n}$ and $x^{(\lambda_n, \widetilde{N}_n)}_0=n$. Set \begin{numcases} {y^{(n)}_i=} x^{(\lambda_n,\widetilde{N}_n)}_i, \quad\nonumber &$0\leqslant i \leqslant \widetilde{N}_n$, \\ 0, &$i\geqslant \widetilde{N}_n+1$.\nonumber \end{numcases} It is now straightforward to verify that $\{\lambda_n\}_{n=1}^\infty$ and $\{y^{(n)}\}_{n=1}^\infty$ are the desired sequences. Necessity of our condition follows immediately. \end{proof} \begin{proof}[Proof of sufficiency of $\Cref{in_eerg_con}$] a) We first demonstrate \begin{displaymath} y^{(n)}_0 \leqslant e_{00}(\lambda_n),\qquad n\geqslant 1. \end{displaymath} In fact, since $\bigl(y^{(n)}_i\bigr)_{i\in E}$ is finitely supported for each $n\geqslant 1$, we may pick $N_n$ such that \begin{displaymath} y_i^{(n)}\leqslant \frac{q_i}{q_i-\lambda_n} \sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N_n\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}y^{(n)}_j+\frac{1}{q_i-\lambda_n},\qquad 1\leqslant i\leqslant N_n. \end{displaymath} At the same time, denote the minimal solution of \begin{displaymath} x_i=\frac{q_i}{q_i-\lambda_n}\sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N_n\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j + \frac{1}{q_i-\lambda_n}, \qquad 1\leqslant i \leqslant N_n \end{displaymath} as $\bigl(x_i^{(\lambda_n,N_n)}, 1\leqslant i \leqslant N_n\bigr)$, which is positive.
Then by \Cref{mini_apprx_org} and \Cref{mini_ctrl_in}, \begin{displaymath} y^{(n)}_i \leqslant x_i^{(\lambda_n,N_n)} \leqslant e_{i0}(\lambda_n),\qquad 1\leqslant i\leqslant N_n. \end{displaymath} It follows that \begin{displaymath}\begin{split} y_0^{(n)} &\leqslant \frac{q_0}{q_0-\lambda_n}\sum_{1\leqslant j\leqslant N_n}\frac{q_{0j}}{q_0}y^{(n)}_j+\frac{1}{q_0-\lambda_n}\\ &\leqslant \frac{q_0}{q_0-\lambda_n}\sum_{1\leqslant j\leqslant N_n}\frac{q_{0j}}{q_0}e_{j0}(\lambda_n)+\frac{1}{q_0-\lambda_n}\\ &\leqslant \frac{q_0}{q_0-\lambda_n}\sum_{j\geqslant 1}\frac{q_{0j}}{q_0}e_{j0}(\lambda_n)+\frac{1}{q_0-\lambda_n} =e_{00}(\lambda_n), \end{split}\end{displaymath} where the last equality is by \Cref{con_mini_exp}. This is exactly the desired inequality. \par b) For an arbitrary $\lambda>0$, when $\lambda_n<\lambda$, \begin{displaymath} y^{(n)}_0 \leqslant e_{00}(\lambda_n) \leqslant e_{00}(\lambda). \end{displaymath} Consequently, \begin{displaymath} \infty=\varlimsup_{n\to \infty}y_0^{(n)}\leqslant e_{00}(\lambda). \end{displaymath} It turns out that $\E_0\mkern-1.5mu\mathrm{e}^{\lambda\sigma_0}=\infty\,(\lambda>0)$. So the $Q$-process is non-exponentially ergodic. Sufficiency of \Cref{in_eerg_con} is proved. \end{proof} \section{Applications to Single Birth Processes}\label{chp_sbp} \subsection{Explicit Criteria for Single Birth Processes: Alternative Proofs}\label{sec_explct} Explicit and computable criteria for ergodicity and strong ergodicity of single birth processes have been studied in \cite{yanchen1986,zhang2001}, respectively. In this section, we present alternative proofs (of the necessity parts) for these explicit criteria. \par Let $Q$ be an irreducible regular single birth $Q$-matrix on state space $E=\Z_+=\{0,1,2,\ldots\}$. We have \begin{displaymath}\begin{split} &q_{i,i+1}>0,\\ &q_{i,i+j}=0,\qquad i\geqslant0,\enspace j\geqslant2. 
\end{split}\end{displaymath} Define $q_n^{(k)}=\sum_{j=0}^{k}q_{nj}$ for $0\leqslant k<n$ and \begin{displaymath} F_n^{(n)}=1,\quad F_n^{(i)}=\frac{1}{q_{n,n+1}}\sum_{k=i}^{n-1}q_n^{(k)}F_k^{(i)}\,(0\leqslant i<n), \end{displaymath} \begin{equation}\label{def_d} d_0=0,\quad d_n=\frac{1}{q_{n,n+1}}\Bigl(1+\sum_{k=0}^{n-1}q_n^{(k)}d_k\Bigr) = \sum_{k=1}^n\frac{F_n^{(k)}}{q_{k,k+1}}\,(n\geqslant1). \end{equation} Also, we define \begin{displaymath} d=\sup_{k\geqslant 0}\frac{\sum_{n=0}^k d_n}{\sum_{n=0}^k F_n^{(0)}}. \end{displaymath} It is well known that the $Q$-process is recurrent iff $\sum_{n=0}^\infty F_n^{(0)}=\infty$ (cf.\@ \cite{chen2004, chenzhang2014}). \par To give alternative proofs for explicit ergodicity criteria for single birth processes, we first make some preparations. \begin{lemma}\label{eu_solu} Let $Q$ be an irreducible regular single birth $Q$-matrix and $N$ a positive integer. We investigate the following (truncated) equation: \begin{equation}\label{sglbth_erg_eq} x_i = \sum_{ \begin{subarray}{c} 1\leqslant j\leqslant N\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}x_j+\frac{1}{q_i},\qquad 1\leqslant i\leqslant N. \end{equation} \begin{enumerate}[\upshape (1)] \item \Cref{sglbth_erg_eq} has a unique solution, denoted as $\bigl(x^{(N)}_1,x^{(N)}_2,\ldots,x^{(N)}_N\bigr)$. \item We have the following expression: \begin{displaymath} x^{(N)}_k = x^{(N)}_1 \sum_{n=0}^{k-1}F_n^{(0)}- \sum_{n=0}^{k-1}d_n,\qquad 1\leqslant k\leqslant N. \end{displaymath} \item The unique solution is positive. \item $\varlimsup_{N\to \infty}x^{(N)}_1 \geqslant d$. \end{enumerate} \end{lemma} \begin{proof} a) \Cref{sglbth_erg_eq} has the following equivalent form: \begin{displaymath} \sum_{j=1}^N q_{ij}x_j=-1,\qquad 1\leqslant i\leqslant N.
\end{displaymath} To prove the non-singularity of this linear system, we need only prove that the following homogeneous equation \begin{equation}\label{var_sglbth_erg_eq} \sum_{j=1}^N q_{ij}x_j=0,\qquad 1\leqslant i\leqslant N \end{equation} has only the trivial solution. \par Suppose, to the contrary, that \Cref{var_sglbth_erg_eq} had a non-trivial solution $(\overline{x}_1,\overline{x}_2,\ldots,\overline{x}_N)$; we may assume $\overline{x}_1\geqslant 0$ without loss of generality. We claim $\overline{x}_1\leqslant \overline{x}_2$. Indeed, if $\overline{x}_1> \overline{x}_2$, then \Cref{var_sglbth_erg_eq} with $i=1$ leads to \begin{displaymath} 0=q_{11}\overline{x}_1+q_{12}\overline{x}_2< q_{11}\overline{x}_1+q_{12}\overline{x}_1\leqslant 0, \end{displaymath} a contradiction. So we obtain $\overline{x}_1\leqslant \overline{x}_2$. Furthermore, we may proceed to prove that $\overline{x}_k\leqslant \overline{x}_{k+1}$ using similar arguments for $k=2,3,\ldots,N-1$. That is, \begin{displaymath} \overline{x}_1\leqslant \overline{x}_2\leqslant\cdots\leqslant\overline{x}_N. \end{displaymath} Since the solution is non-trivial, we have $\overline{x}_N>0$. Therefore, \begin{displaymath}\begin{split} 0&=q_{N1}\overline{x}_1+q_{N2}\overline{x}_2+\cdots+q_{N,N-1}\overline{x}_{N-1}+q_{N,N}\overline{x}_N\\ &\leqslant (q_{N1}+q_{N2}+\cdots+q_{N,N-1}+q_{N,N})\overline{x}_N<0, \end{split}\end{displaymath} a contradiction. So \Cref{var_sglbth_erg_eq} has only the trivial solution. In this way, we prove the first assertion. \par b) To prove the second assertion, we mimic the proof of \cite[Lemma 2.1]{zhang2001}. Define \begin{displaymath} v_0=x^{(N)}_1,\ v_n=x^{(N)}_{n+1}-x^{(N)}_n,\qquad 1\leqslant n \leqslant N-1. \end{displaymath} From \Cref{sglbth_erg_eq}, we easily derive that \begin{displaymath} v_n=\frac{1}{q_{n,n+1}}\Bigl(\sum_{k=0}^{n-1}q_n^{(k)}v_k-1\Bigr),\qquad 1\leqslant n\leqslant N-1. \end{displaymath} By induction, $v_n=v_0 F_n^{(0)}-d_n$ for $0\leqslant n\leqslant N-1$. Our assertion follows immediately.
\par c) If $x^{(N)}_i=\min_{1\leqslant k\leqslant N}x^{(N)}_k \leqslant 0$, then \begin{displaymath}\begin{split} -1&=\sum_{j=1}^N q_{ij}x^{(N)}_j=\sum_{j=1}^{i-1} q_{ij}\bigl(x^{(N)}_j-x^{(N)}_i\bigr) -q_{i0}x^{(N)}_i\\ &\ \ \ +(1-\delta_{i,N}) q_{i,i+1}\bigl(x^{(N)}_{i+1}-x^{(N)}_i\bigr)- \delta_{i,N}q_{i,i+1}x^{(N)}_i \geqslant 0, \end{split}\end{displaymath} where $\delta$ is the Kronecker delta. This contradiction implies that the unique solution is positive. \par d) By the second assertion and the positivity of the solution, we have \begin{displaymath} x^{(N)}_1 > \max_{1\leqslant k\leqslant N} \frac{\sum_{n=0}^{k-1} d_n}{\sum_{n=0}^{k-1} F_n^{(0)}}. \end{displaymath} So the last assertion follows immediately. \end{proof} \par We are now in a position to present our alternative proofs for explicit criteria of single birth processes. \par The following ergodicity criterion is due to Shi-Jian Yan and Mu-Fa Chen \cite{yanchen1986}. Here, the proof of sufficiency is taken from \cite{yanchen1986} for completeness. \begin{theorem}\label{explct_erg} Let $Q$ be a regular single birth $Q$-matrix. Then the $Q$-process is ergodic iff\/ $d<\infty$. \end{theorem} \begin{proof} a) When $d<\infty$, we define \begin{displaymath} y_0=0,\ y_k=\sum_{n=0}^{k-1}\bigl(F_n^{(0)}d - d_n\bigr),\qquad k\geqslant 1. \end{displaymath} Then $(y_i)_{i\geqslant0}$ satisfies the condition of \cite[Theorem 4.45(1)]{chen2004} with $H=\{0\}$. So the $Q$-process is ergodic when $d<\infty$. \par b) When $d=\infty$, for each $N\geqslant1$, we define \begin{displaymath} y^{(N)}_0=x^{(N)}_1+\frac{1}{q_0},\quad y^{(N)}_i= x^{(N)}_i\,(1\leqslant i\leqslant N),\quad y^{(N)}_i= 0\,(i\geqslant N+1). \end{displaymath} Because $\varlimsup_{N\to \infty}x^{(N)}_1 \geqslant d=\infty$, it can be easily seen that the conditions of \Cref{in_erg_con} are satisfied by the sequence $\{y^{(N)}\}^{\infty}_{N=1}$ with $H=\{0\}$. So the $Q$-process is non-ergodic if $d=\infty$.
\end{proof} \par The following strong ergodicity criterion is due to Yu-Hui Zhang \cite{zhang2001}. \begin{theorem}\label{explct_serg} Let $Q$ be a regular single birth $Q$-matrix. Then the $Q$-process is strongly ergodic iff\/ $\sup_{k\geqslant0}\sum_{j=0}^k \bigl(F_j^{(0)}d-d_j\bigr)<\infty$. \end{theorem} \begin{proof} We assume the process is ergodic without loss of generality. In light of \Cref{explct_erg}, this amounts to assuming $d<\infty$. \par a) When $\sup_{k\geqslant0}\sum_{j=0}^k \bigl(F_j^{(0)}d-d_j\bigr)<\infty$, we define \begin{displaymath}\begin{split} &y_0=0,\\ &y_k=\sum_{n=0}^{k-1}\bigl(F_n^{(0)}d - d_n\bigr),\qquad k\geqslant 1. \end{split}\end{displaymath} Then $(y_i)_{i\geqslant0}$ satisfies the condition of \cite[Theorem 4.45(3)]{chen2004} with $H=\{0\}$. So the $Q$-process is strongly ergodic. This proof of sufficiency is not original; it is taken from \cite{zhang2001}. \par b) When $\sup_{k\geqslant0}\sum_{j=0}^k \bigl(F_j^{(0)}d-d_j\bigr)=\infty$, for each $N\geqslant1$, we define \begin{displaymath} y^{(N)}_i= x^{(N)}_i\,(1\leqslant i\leqslant N), \quad y^{(N)}_i= 0\,(i\geqslant N+1). \end{displaymath} It is obvious that $ \sup_{i\geqslant 1}y^{(N)}_i<\infty$ for each $N\geqslant1$. We now prove that $ \varlimsup_{N\to \infty}\sup_{i\geqslant 1}y^{(N)}_i=\infty$. In fact, for an arbitrary $k\geqslant1$, \begin{displaymath}\begin{split} \varlimsup_{N\to \infty}\sup_{i\geqslant 1}y^{(N)}_i &\geqslant \varlimsup_{N\to \infty}x^{(N)}_k =\varlimsup_{N\to \infty}\sum_{n=0}^{k-1}\bigl(F_n^{(0)}x^{(N)}_1 - d_n\bigr)\\ &\geqslant \sum_{n=0}^{k-1}\bigl(F_n^{(0)}d - d_n\bigr). \end{split}\end{displaymath} Taking the supremum over $k$ on both sides, we obtain \begin{displaymath} \varlimsup_{N\to \infty}\sup_{i\geqslant 1}y^{(N)}_i=\infty. \end{displaymath} The conditions of \Cref{in_serg_con} are satisfied by the sequence $\{y^{(N)}\}^{\infty}_{N=1}$ with $H=\{0\}$. So the $Q$-process is non-strongly ergodic.
\end{proof} \subsection{A Special Class of Single Birth Processes}\label{sec_catastr} In this subsection, we study the conservative single birth $Q$-matrix $Q=(q_{ij})$ with \begin{numcases} {q_{ij}=} i+1,\quad \nonumber &if\/ $i\geqslant0,\enspace j=i+1$,\\ \alpha_i\geqslant0, &if\/ $i\geqslant1,\enspace j=0$,\nonumber\\ 0, & other $i\neq j$\nonumber. \end{numcases} Assume there are infinitely many non-zero $\alpha_i$, so $Q$ is irreducible. The following illuminating example is a catalyst for this part. \begin{example} It is obvious that the $Q$-process is unique for arbitrary $\{\alpha_i\}_{i=1}^{\infty}$. \begin{enumerate}[\upshape (1)] \item If\/ $\alpha_i=\frac{1}{i^\gamma}$ for sufficiently large $i$, the $Q$-process is transient for $\gamma>0$. \item If\/ $\alpha_i=\frac{1}{\log^{\gamma}i}$ for sufficiently large $i$, \begin{enumerate}[\upshape(a)] \item the $Q$-process is transient for $\gamma>1$; \item the $Q$-process is null recurrent for $\gamma=1$; \item the $Q$-process is ergodic but non-exponentially ergodic for $\gamma \in(0,1)$. \end{enumerate} \item If\/ $\alpha_i=\frac{1}{(\log\log i)^\gamma}$ for sufficiently large $i$, the $Q$-process is ergodic but non-exponentially ergodic for $\gamma>0$. \item If\/ \begin{numcases} {\alpha_i=} \tfrac{1}{i},\quad \nonumber &$i$ is an odd positive integer,\\ 1, \nonumber &$i$ is an even positive integer, \end{numcases} the $Q$-process is strongly ergodic. \item The $Q$-process is strongly ergodic if\/ $\alpha_i\equiv1\,(i\geqslant1)$. \end{enumerate} \end{example} This example will be demonstrated via the following lemmas. \begin{lemma}\label{cata_dis_rec} \begin{enumerate}[\upshape (1)] \item Let $P=(P_{ij})$ be an irreducible transition probability matrix on $\Z_+=\{0,1,2,\ldots\}$ with \begin{numcases} {P_{ij}=} p_i,\quad \nonumber &if\/ $i\geqslant0,\enspace j=i+1$,\\ 1-p_i, &if\/ $i\geqslant0,\enspace j=0$,\nonumber\\ 0, & other $i,j\geqslant 0$\nonumber. 
\end{numcases} Then $P$ is recurrent iff\/ $\prod_{i=0}^{\infty}p_i=0$. \item The $Q$-process mentioned above is recurrent iff\/ $\sum_{i=1}^{\infty}\frac{\alpha_i}{i}=\infty$. \end{enumerate} \end{lemma} \begin{proof} a) By Theorems 4.24 and 4.25 in~\cite{chen2004}, we consider the equation \begin{equation}\label{cata_dis_rec_eq} (1-p_i) y_0+p_i y_{i+1}=y_i,\qquad i\geqslant 1. \end{equation} Setting $y_0=0$, we obtain a recurrence relation: \begin{displaymath} y_{i+1}=\frac{1}{p_i}y_i,\qquad i\geqslant 1. \end{displaymath} So every non-constant solution to \Cref{cata_dis_rec_eq} is unbounded (\Cref{cata_dis_rec_eq} has a non-constant bounded solution, respectively) if $\prod_{i=0}^{\infty}\frac{1}{p_i}=\infty$ ($<\infty$, respectively). This completes our proof. b) By the first assertion, the $Q$-process is recurrent iff $\prod_{i=1}^{\infty}\frac{i+1}{i+1+\alpha_i}=0$. Note that \begin{displaymath} \prod_{i=1}^{\infty}\frac{i+1}{i+1+\alpha_i}=0 \quad\Longleftrightarrow\quad \sum_{i=1}^{\infty}\frac{\alpha_i}{i}=\infty. \end{displaymath} The second assertion follows immediately. \end{proof} \begin{lemma}\label{cata_in_eerg} The $Q$-process is non-exponentially ergodic if\/ $\lim_{i\to\infty}\alpha_i=0$. \end{lemma} \begin{proof} First, we deal with a special case: $\{\alpha_i\}_{i=1}^{\infty}$ is monotonically decreasing. For a fixed $n\geqslant 1$, we set \begin{numcases} {y^{(n)}_i=} 1/\alpha_i,\quad \nonumber &$1\leqslant i \leqslant n$, \\ 1/\alpha_n, &$i\geqslant n+1$.\nonumber \end{numcases} It is straightforward to check that $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\geqslant1}$ satisfies \begin{displaymath} (i+1+\alpha_i)y^{(n)}_i\leqslant(i+1)y^{(n)}_{i+1}+1,\qquad i\geqslant 1. \end{displaymath} So $\{ y^{(n)}\}^{\infty}_{n=1}$ is a sequence satisfying all conditions of \Cref{in_serg_con}. The $Q$-process is non-strongly ergodic.
\par Now if the $Q$-process is exponentially ergodic, then by \Cref{con_mini_exp}, \Cref{exp_rec} below has a finite non-negative solution $(x_i)_{i\geqslant1}$ for some $\lambda\in (0,1)$. \begin{equation}\label{exp_rec} x_i=\frac{i+1}{i+1+\alpha_i-\lambda}x_{i+1}+\frac{1}{i+1+\alpha_i-\lambda},\quad i\geqslant 1. \end{equation} Equivalently, \begin{displaymath} x_{i+1}=\frac{i+1+\alpha_i-\lambda}{i+1}x_i-\frac{1}{i+1},\quad i\geqslant 1. \end{displaymath} Because $\lim_{i\to\infty}\alpha_i=0$, we have $x_{i+1}\leqslant x_i$ for sufficiently large $i$. So $(x_i)_{i\geqslant1}$ is bounded. Consequently, $\bigl(\frac{1}{\lambda}(\E_i\mkern-1.5mu \mathrm{e}^{\lambda\sigma_0}-1)\bigr)_{i\geqslant 1}$ is bounded since it is the minimal non-negative solution to \Cref{exp_rec}. Hence $\bigl(\E_i\mkern-1.5mu \mathrm{e}^{\lambda\sigma_0}\bigr)_{i\geqslant 1}$ is bounded and so is $(\E_i\mkern-1.5mu \sigma_0)_{i\geqslant 1}$. The $Q$-process is thus strongly ergodic, contradicting what we have just proved. The $Q$-process is therefore non-exponentially ergodic. \par In the general case, where $\{\alpha_i\}_{i=1}^{\infty}$ need not be monotonically decreasing, we define a conservative $\widetilde{Q}=(\widetilde{q}_{ij})$: \begin{numcases} {\widetilde{q}_{ij}=} i+1,\quad \nonumber &if\/ $i\geqslant0,\enspace j=i+1$,\\ \sup_{k\geqslant i}\alpha_k, &if\/ $i\geqslant1,\enspace j=0$,\nonumber\\ 0, & other $i\neq j$\nonumber. \end{numcases} Because \begin{displaymath} \lim_{i\to\infty}\bigl(\sup_{k\geqslant i}\alpha_k\bigr)=\varlimsup_{i\to\infty}\alpha_i=\lim_{i\to\infty}\alpha_i=0, \end{displaymath} the $\widetilde{Q}$-process is non-exponentially ergodic by the above discussion. Consequently, the $Q$-process is non-exponentially ergodic by comparison. Our proof is now complete. \end{proof} The above proof is based on \Cref{in_serg_con}; we may also give a more direct proof using \Cref{in_eerg_con}.
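The contradiction argument around \Cref{exp_rec} can also be checked numerically. The sketch below makes the hypothetical choices $\alpha_i=1/\log(i+2)$ (so that $\alpha_i\downarrow 0$) and $\lambda=1/2$; these are illustrative values, not taken from the text. Once $\alpha_i<\lambda$, a non-negative $x_i$ must decrease by at least $1/(i+1)$ per step, and since the harmonic series diverges, the forward recursion started from any non-negative $x_1$ eventually becomes negative; hence no finite non-negative solution of \Cref{exp_rec} exists for this $\lambda$.

```python
import math

def first_negative_index(x1, lam=0.5, max_steps=10**6):
    """Iterate x_{i+1} = ((i+1+a_i-lam)/(i+1)) x_i - 1/(i+1) with the
    illustrative (assumed) catastrophe rates a_i = 1/log(i+2), starting
    from x_1 = x1 >= 0.  Return the first index i with x_i < 0, or None
    if the iterate stays non-negative for max_steps steps."""
    x = x1
    for i in range(1, max_steps):
        a = 1.0 / math.log(i + 2)     # alpha_i, decreasing to zero
        x = (i + 1 + a - lam) / (i + 1) * x - 1.0 / (i + 1)
        if x < 0:
            return i + 1
    return None

# Larger starting values only postpone, but never avoid, the sign change.
print([first_negative_index(x1) for x1 in (0.0, 1.0, 10.0)])
```

Since the recursion map is increasing in $x$, trajectories preserve order, so the crossing index is non-decreasing in the starting value.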
\begin{proof}[Alternative Proof of $\Cref{cata_in_eerg}$] Without loss of generality, we assume that $ \lim_{i\to\infty} {\hskip -5pt}\downarrow\alpha_i = 0$. First, set \begin{displaymath} \lambda_n=\frac{1}{n+1},\qquad n\geqslant 1. \end{displaymath} For each fixed positive integer $n$, by \Cref{in_eerg_con}, we consider \begin{displaymath}\begin{split} y_0^{(n)} &\leqslant \frac{1}{1-\lambda_n}y_1^{(n)} +\frac{1}{1-\lambda_n},\\ y_i^{(n)} &\leqslant \frac{i+1}{i+1+\alpha_i-\lambda_n}y_{i+1}^{(n)} +\frac{1}{i+1+\alpha_i-\lambda_n},\qquad i\geqslant 1. \end{split}\end{displaymath} Introducing a change of variable $d_i^{(n)}=y_{i+1}^{(n)}-y_i^{(n)}\,(i\geqslant 0)$, the above inequalities are transformed into \begin{displaymath}\begin{split} d_0^{(n)} &\geqslant -\lambda_ny_0^{(n)}-1,\\ d_i^{(n)} &\geqslant \frac{1}{i+1}(\alpha_i-\lambda_n)y_i^{(n)}-\frac{1}{i+1},\qquad i\geqslant 1. \end{split}\end{displaymath} Put $y_0^{(n)}=n$. As $\lim_{i\to\infty} {\hskip -5pt}\downarrow\alpha_i = 0$, there exists $M_1$ such that \begin{displaymath}\begin{split} \alpha_i &\geqslant \lambda_n,\qquad 1 \leqslant i \leqslant M_1-1,\\ \alpha_i &< \lambda_n,\qquad i \geqslant M_1. \end{split}\end{displaymath} If we set \begin{displaymath}\begin{split} d_0^{(n)} &= 0,\\ d_i^{(n)} &=\frac{\alpha_i-\lambda_n}{i+1}y_i^{(n)},\qquad 1 \leqslant i \leqslant M_1-1, \end{split}\end{displaymath} then \begin{displaymath} n=y_0^{(n)}=y_1^{(n)}\leqslant y_2^{(n)}\leqslant \cdots \leqslant y_{M_1}^{(n)}. \end{displaymath} Furthermore, we may pick $M_2> M_1$ such that \begin{displaymath} y_{M_1}^{(n)}-\frac{1}{M_1+1}-\cdots-\frac{1}{M_2}\geqslant 0, \end{displaymath} \begin{displaymath} y_{M_1}^{(n)}-\frac{1}{M_1+1}-\cdots-\frac{1}{M_2}-\frac{1}{M_2+1}<0.
\end{displaymath} Meanwhile, let \begin{displaymath}\begin{split} d^{(n)}_k&=-\frac{1}{k+1},\qquad M_1\leqslant k \leqslant M_2-1,\\ d^{(n)}_{M_2}&=-y^{(n)}_{M_2},\\ d^{(n)}_k&=0,\qquad k\geqslant M_2+1.\\ \end{split}\end{displaymath} Thus $y^{(n)}_k=0\,(k>M_2)$. \par Now, one may check that $\{\lambda_n\}_{n=1}^{\infty}$ coupled with $\{y^{(n)}\}^{\infty}_{n=1}$ are sequences satisfying the conditions of \Cref{in_eerg_con}. The $Q$-process is non-exponentially ergodic. \end{proof} \begin{corollary}\label{temp_flag} Let $\alpha_i=\frac{1}{\log^{\gamma}i}\,(i\geqslant3)$. \begin{enumerate}[\upshape (1)] \item The $Q$-process is ergodic for $\gamma\in (0,1)$. \item The $Q$-process is null recurrent for $\gamma=1$. \end{enumerate} \end{corollary} \begin{proof} a) When $\gamma\in (0,1)$, we set $y_i=\log^{2\gamma}i\,(i\geqslant 3)$. Then for sufficiently large $i$, \begin{displaymath} (i+1+\alpha_i)y_i \geqslant (i+1)y_{i+1}+1. \end{displaymath} In fact, for large $i$, by the Lagrange mean value theorem, \begin{displaymath} (i+1)\bigl(\log^{2\gamma}(i+1)-\log^{2\gamma}i\bigr) \leqslant 2\gamma\frac{i+1}{i}\log^{2\gamma-1}(i+1) \leqslant \log^{\gamma}i-1. \end{displaymath} Thus, the $Q$-process is ergodic for $\gamma\in (0,1)$ by \cite[Theorem 4.45(1)]{chen2004}. b) To obtain the second assertion, we exploit \Cref{explct_erg}.
Using the Stolz--Ces\`aro theorem and the explicit expression of $F_i^{(k)}$ in \cite[Example 8.2]{chenzhang2014}, we have \begin{displaymath}\begin{split} d&=\sup_{i\geqslant 0}\frac{\sum_{k=0}^{i}d_k}{\sum_{k=0}^{i}F_k^{(0)}} \geqslant \lim_{i\to \infty}\frac{\sum_{k=0}^{i}d_k}{\sum_{k=0}^{i}F_k^{(0)}} =\lim_{i\to \infty}\frac{d_i}{F_i^{(0)}}\\ &=\lim_{i\to \infty}\frac{\sum_{k=1}^i\frac{F_i^{(k)}}{q_{k,k+1}}}{F_i^{(0)}}\geqslant \lim_{i\to\infty} \sum_{k=1}^{i-1}\frac{1}{(k+1)\prod_{\ell=1}^k(1+\frac{\alpha_\ell}{\ell+1})}\\ &=\sum_{k=1}^{\infty}\frac{1}{(k+1)\prod_{\ell=1}^k(1+\frac{\alpha_\ell}{\ell+1})}.\\ \end{split}\end{displaymath} Now, by Kummer's test, one may see that $d=\infty$ for $\alpha_i=\frac{1}{\log i}\,(i\geqslant3)$. The $Q$-process is therefore non-ergodic. \end{proof} \begin{lemma} Let $Q$ be an irreducible regular $Q$-matrix and assume the $Q$-process is recurrent. If\/ $\inf_{i\geqslant 1} q_{i0}>0$, then the $Q$-process is strongly ergodic. \end{lemma} \begin{proof} Take $c\in (0,\inf_{i\geqslant 1} q_{i0})$; then \begin{displaymath} \frac{1}{c}\geqslant \frac{1}{c}+\frac{1-\frac{q_{i0}}{c}}{q_i}=\frac{1}{c}\Bigl(1-\frac{q_{i0}}{q_i}\Bigr)+\frac{1}{q_i} =\sum_{ \begin{subarray}{c} j\geqslant 1\\ j\neq i \end{subarray}}\frac{q_{ij}}{q_i}\frac{1}{c}+\frac{1}{q_i},\qquad i\geqslant 1. \end{displaymath} So the $Q$-process is strongly ergodic by \cite[Theorem 4.45(3)]{chen2004}. \end{proof} \begin{lemma} Suppose $\{\alpha_i\}_{i=1}^{\infty}$ has a subsequence $\{\alpha_{i_k}\}_{k=1}^{\infty}$ satisfying \begin{displaymath} \inf_{k\geqslant 1}\alpha_{i_k}>0,\quad \sup_{k\geqslant 1}\frac{i_{k+1}}{i_k}<\infty,\quad \sum_{k=1}^\infty \frac{1}{i_k}=\infty. \end{displaymath} Then the $Q$-process is strongly ergodic. \end{lemma} \begin{proof} For ease of notation, we write $i_0=0$.
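Before carrying out the construction, a concrete instance may help fix ideas (this choice is purely illustrative and not part of the hypotheses): if $\alpha_i\geqslant c_0>0$ along the even integers, one may take $i_k=2k\,(k\geqslant1)$, for which
\begin{displaymath}
\inf_{k\geqslant 1}\alpha_{i_k}\geqslant c_0>0,\qquad
\sup_{k\geqslant 1}\frac{i_{k+1}}{i_k}=\sup_{k\geqslant 1}\frac{k+1}{k}=2<\infty,\qquad
\sum_{k=1}^{\infty}\frac{1}{i_k}=\sum_{k=1}^{\infty}\frac{1}{2k}=\infty,
\end{displaymath}
so all three conditions of the lemma hold.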
Define the conservative $Q$-matrix $\widetilde{Q}=(\widetilde{q}_{ij})$: \begin{numcases} {\widetilde{q}_{ij}=} i+1,\qquad &if\/ $i=i_k,\enspace j=i_{k+1}$ for some $k\geqslant0$,\nonumber \\ i+1, &if\/ $i_k<i<i_{k+1}$ for some $k\geqslant0$,\enspace $j=i+1$,\nonumber \\ c\coloneqq \tfrac{1}{2}\inf_{k\geqslant 1}\alpha_{i_k}, &if\/ $i=i_k$ for some $k\geqslant1$,\enspace $j=0$,\nonumber\\ 0, & other $i\neq j$\nonumber. \end{numcases} It is easy to see that $\{i_k\}_{k=0}^{\infty}$ is an irreducible subclass of $\widetilde{Q}$. Note that $\{i_k\}_{k=0}^{\infty}$ is also a recurrent subclass of the $\widetilde{Q}$-process since $\sum_{k=1}^\infty \frac{1}{i_k}=\infty$ (this is easily seen using \Cref{cata_dis_rec}). Because \begin{displaymath} \frac{1}{c}= \frac{i_k+1}{i_k+1+c}\cdot\frac{1}{c}+\frac{1}{i_k+1+c},\qquad k\geqslant1, \end{displaymath} $\{i_k\}_{k=0}^{\infty}$ is furthermore a strongly ergodic subclass according to \cite[Theorem 4.45(3)]{chen2004}. Since $\sup_{k\geqslant 1}\frac{i_{k+1}}{i_k}<\infty$ implies \begin{displaymath} \sup_{k\geqslant 0}\Bigl(\frac{1}{i_k+2}+\frac{1}{i_k+3}+\cdots+\frac{1}{i_{k+1}}\Bigr) \leqslant \sup_{k\geqslant 0}\frac{i_{k+1}-i_k-1}{i_k+1} < \infty, \end{displaymath} exploiting \begin{displaymath} \E_i^{(\widetilde{Q})}\mkern-1.5mu\sigma_0 = \E_{i+1}^{(\widetilde{Q})}\mkern-1.5mu\sigma_0 + \frac{1}{i+1}, \qquad i_k < i < i_{k+1},\enspace k\geqslant0, \end{displaymath} we have $ \sup_{i\geqslant0}\E_i^{(\widetilde{Q})}\mkern-1.5mu\sigma_0<\infty$.
Construct an order-preserving conservative coupling $Q$-matrix $\overline{Q}=\bigl(\overline{q}(i,j;i^\prime,j^\prime)\bigr)$, whose marginals are $Q$ and $\widetilde{Q}$, with non-diagonal entries \begin{numcases} {\overline{q}(i,j;i^\prime,j^\prime)=} (i+1)\land(j+1), \quad&if\/ $i^\prime=i+1,\enspace i\geqslant0$,\nonumber\\ &$j^\prime=j+1,\enspace i_k<j<i_{k+1}$ for some $k\geqslant0$,\nonumber \\ (i+1)\land(j+1), &if\/ $i^\prime=i+1,\enspace i\geqslant0$,\nonumber\\ &$j^\prime=i_{k+1},\enspace j=i_k$ for some $k\geqslant0$,\nonumber \\ (i-j)^+, &if\/ $i^\prime=i+1,\enspace i\geqslant0,\enspace j^\prime=j\geqslant0$,\nonumber\\ c, &if\/ $i^\prime=0,\enspace i\geqslant1,\enspace j^\prime=0,\enspace j=i_k$ for some $k\geqslant1$,\nonumber \\ \alpha_i-c, &if\/ $i^\prime=0,\enspace i\geqslant1,\enspace j^\prime=j=i_k$ for some $k\geqslant1$,\nonumber \\ \alpha_i, &if\/ $i^\prime=0,\enspace i\geqslant1,\enspace i_k<j^\prime=j<i_{k+1}$ for some $k\geqslant0$,\nonumber \\ 0, & other $(i^\prime,j^\prime)\neq(i,j)$\nonumber. \end{numcases} Denote the $\overline{Q}$-process by $\bigl(X(t),Y(t)\bigr)_{t\geqslant0}$; then we easily deduce that \begin{displaymath} \Psub{( i_1, i_2)}^{(\overline{Q})}\mkern-1.5mu\bigl[X(t)\leqslant Y(t)\bigr]=1,\qquad t>0,\enspace i_1\leqslant i_2. \end{displaymath} Hence, \begin{displaymath} \sup_{i\geqslant 1}\E_i^{(Q)}\mkern-1.5mu\sigma_0 \leqslant \sup_{i\geqslant 1}\E_i^{(\widetilde{Q})}\mkern-1.5mu\sigma_0 <\infty, \end{displaymath} so the $Q$-process is strongly ergodic. \end{proof} \section{Applications to multi-dimensional examples}\label{chp_app} In this section, we shall apply our inverse problem criteria to some multi-dimensional models. Brussel's model (see \cite{yanchen1986}) is a typical reaction-diffusion process with several species. \begin{example}\label{brus} Let $S$ be a finite set, $E = (\Z_+^2)^S$ and let $p_k(u, v)$ be a transition probability on $S$, $k = 1,2$.
Denote by $e_{u 1} \in E$ the unit vector whose first component at site $u \in S$ is equal to 1 and whose second component at $u$, as well as all components at $v \neq u$, equal 0. Similarly, one can define $e_{u2}$. The model is described by the conservative $Q$-matrix $Q=\bigl(q(x,y)\bigr)$: \begin{numcases} {q(x, y)=} \lambda_1a(u), \nonumber &if\/ $y=x+e_{u1}$, \\ \lambda_2b(u)x_1(u), \nonumber &if\/ $y=x-e_{u1}+e_{u2}$, \\ \lambda_3\binom{x_1(u)}{2}x_2(u), \nonumber &if\/ $y=x+e_{u1}-e_{u2}$, \\ \lambda_4x_1(u), \nonumber &if\/ $y=x-e_{u1}$, \\ x_k(u)p_k(u,v), \nonumber &if\/ $y=x-e_{uk}+e_{vk},\enspace k=1,2,\enspace v\neq u$,\\ 0, & other $y\neq x$\nonumber, \end{numcases} and $q(x) = -q(x,x) =\sum_{y\neq x}q(x,y)$, where ${x = \Bigl(\bigl(x_1(u),x_2(u)\bigr)\st u\in S\Bigr)\in E}$. Here $a$ and $b$ are positive functions on $S$ and $\lambda_1,\ldots, \lambda_4$ are positive constants. The finite-dimensional Brussel's model is exponentially ergodic (cf.\@ \cite{chenjw1995}). We now demonstrate that it is non-strongly ergodic, which was first proved in \cite{wu2007}; here we adopt different methods. \end{example} \begin{proof} We shall prove our assertion by two approaches. For ease of notation, we write $\widetilde{a}=\sum_{u\in S}a(u)$, $\abs{x}=\sum_{u\in S}\bigl(x_1(u)+x_2(u)\bigr)$ for $x\in E$ and also $E_i=\bigl\{x\in E\st\abs{x}=i\bigr\}$ for $i\geqslant 0$.
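The computations below rest on an elementary observation, which we record here for convenience: among the transitions of the model, only the $\lambda_1$-births and the $\lambda_4$-deaths change $\abs{x}$, while the $\lambda_2$-, $\lambda_3$- and migration transitions preserve it. Hence, for any function of the form $F(x)=f_i$ on $E_i$ $(i\geqslant1)$,
\begin{displaymath}
\sum_{y\neq x}q(x,y)\bigl(F(y)-F(x)\bigr)
=\lambda_1\widetilde{a}\,(f_{i+1}-f_i)+\lambda_4\sum_{u\in S}x_1(u)\,(f_{i-1}-f_i),
\qquad x\in E_i,
\end{displaymath}
so everything reduces to a one-dimensional comparison in the level $i=\abs{x}$.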
\par a) For each fixed $n\geqslant 1$, we construct the function \begin{displaymath} F^{(n)}(x)=f^{(n)}_i,\qquad x\in E_i,\enspace i\geqslant 1, \end{displaymath} with \begin{numcases} {f^{(n)}_i=} \frac{1}{\lambda_4}\log(i+1), \nonumber\quad &$1\leqslant i \leqslant n$, \\ \frac{1}{\lambda_4}\log(n+1), &$i\geqslant n+1$.\nonumber \end{numcases} Because \begin{displaymath}\begin{split} &\Bigl(1+\frac{1}{k}\Bigr)^{\ell}\leqslant \mathrm{e},\qquad 1\leqslant \ell\leqslant k,\\ \Bigl(1+\frac{1}{k}\Bigr)^{\ell}&\Bigl(\frac{k+1}{k+2}\Bigr)^{\frac{\lambda_1\widetilde{a}}{\lambda_4}}\leqslant \mathrm{e},\qquad 1\leqslant \ell\leqslant k, \end{split}\end{displaymath} we have \begin{displaymath}\begin{split} (\lambda_1\widetilde{a}+\lambda_4\ell)\frac{1}{\lambda_4}\log(k+1) &\leqslant\frac{\lambda_1\widetilde{a}}{\lambda_4}\log(k+1)+\ell\log k+1,\qquad 1\leqslant \ell\leqslant k,\\ (\lambda_1\widetilde{a}+\lambda_4\ell)\frac{1}{\lambda_4}\log(k+1) &\leqslant\frac{\lambda_1\widetilde{a}}{\lambda_4}\log(k+2)+\ell\log k+1,\qquad 1\leqslant \ell\leqslant k. \end{split}\end{displaymath} Now it is straightforward to check that \begin{displaymath}\begin{split} \Bigl(\lambda_1\widetilde{a}+\lambda_4\sum_{u\in S}x_1(u)\Bigr)f^{(n)}_i\leqslant \lambda_1\widetilde{a}f^{(n)}_{i+1}+\lambda_4\sum_{u\in S}x_1(u)f^{(n)}_{i-1}+1,&\\ x\in E_i,\enspace i\geqslant 1,\enspace n\geqslant 1,& \end{split}\end{displaymath} where we naturally put $f^{(n)}_0=0\,(n\geqslant1)$. \par It can easily be seen that $F^{(n)}(x)$ satisfies \Cref{in_serg_con_eq} in the current setup and that $\{F^{(n)}\}^{\infty}_{n=1}$ is a sequence satisfying the conditions in \Cref{in_serg_con}. Consequently, we infer that the finite-dimensional Brussel's model is non-strongly ergodic. \par b) We invoke \Cref{in_serg_con} again, with a different testing sequence.
For each fixed $n\geqslant 1$, we construct the function \begin{displaymath} F^{(n)}(x)=\sum^k_{i=1} d^{(n)}_i,\qquad x\in E_k,\enspace k\geqslant 1, \end{displaymath} with \begin{numcases} {d^{(n)}_i=} \frac{1}{\lambda_4(i+1)}, \nonumber &$1\leqslant i \leqslant n$, \\ -\frac{1}{\lambda_1\widetilde{a}(n+1)}, \nonumber &$i=n+1$, \\ -\frac{1}{\lambda_1\widetilde{a}}, &$i\geqslant n+2$.\nonumber \end{numcases} \vskip -0.3 cm Then a trivial calculation shows that $\{F^{(n)}\}^{\infty}_{n=1}$ is a sequence satisfying the conditions in \Cref{in_serg_con}. So the finite-dimensional Brussel's model is non-strongly ergodic. \end{proof} \begin{example} Let $E = \Z_+^2$. The epidemic process is defined by the $Q$-matrix\break $Q = \Bigl(q\bigl((m, n), (m', n')\bigr)\st (m,n), (m', n') \in E\Bigr)$ with \begin{numcases} {q\bigl((m, n), (m', n')\bigr)=} \alpha, \nonumber &if\/ $(m', n')= (m+1, n)$, \\ \gamma m, \nonumber &if\/ $(m', n')= (m-1, n)$, \\ \beta, \nonumber &if\/ $(m', n')= (m, n+1)$, \\ \delta n, \nonumber &if\/ $(m', n')= (m, n-1)$, \\ \varepsilon mn, \nonumber &if\/ $(m', n')= (m-1, n+1)$, \\ 0, & other $(m', n')\neq(m, n)$\nonumber, \end{numcases} and $q(m, n) = -q\bigl((m,n),(m,n)\bigr)= \sum_{(m^\prime,n^\prime)\neq (m,n)}q\bigl((m, n), (m', n')\bigr)$, where $\alpha,\gamma,\beta,\delta$, and $\varepsilon$ are non-negative constants. We assume $\gamma>0$ and $\delta>0$. The $Q$-process is unique and ergodic when $\alpha+\beta>0$ (cf.\@ \cite{anderson}). The epidemic process is non-strongly ergodic if\/ $\alpha+\beta$, $\gamma$, and $\delta$ are strictly positive by \cite{wu2007}. Using a similar argument as in \Cref{brus}, we can recover this result and thereby give a new proof. We will not reproduce the details here. \end{example} \begin{example}\label{gamma} Consider a conservative birth-death $Q$-matrix with birth rates $b_0=1$, $b_i=i^{\gamma}\,(i\geqslant 1)$ and death rates $a_i=i^{\gamma}\,(i\geqslant 1)$.
It is known that this $Q$-matrix is regular for all $\gamma\in\R$ and that the $Q$-process is recurrent. The process is ergodic iff $\gamma >1$ and strongly ergodic iff $\gamma>2$ (cf.\@ \cite{chen2004}). We now use \Cref{in_serg_con} to demonstrate that the process is non-strongly ergodic if $\gamma\leqslant 2$. Also, we use \Cref{in_erg_con} to show that the process is non-ergodic if $\gamma\leqslant 1$. \end{example} \begin{proof} a) First we prove that the process is non-strongly ergodic if $\gamma\leqslant2$, using \Cref{in_serg_con}. For each fixed $n\geqslant 1$, define \begin{displaymath} y^{(n)}_k=\sum^k_{i=1} d^{(n)}_i, \qquad k\geqslant 1, \end{displaymath} with \begin{numcases} {d^{(n)}_i=} \frac{1}{i^{1+\frac{1}{i}}},\quad &$1\leqslant i \leqslant n$, \nonumber\\ \frac{1}{i^{1+\frac{1}{n+1}}}, &$i\geqslant n+1$ \nonumber. \end{numcases} When $\gamma\leqslant 2$, we have the following estimates: \begin{subequations}\label{calc}\begin{align} \frac{1}{i^{1+\frac{1}{i}}}-\frac{1}{(i+1)^{1+\frac{1}{i+1}}}&\leqslant \frac{1}{i^{\gamma}},\qquad i\geqslant 1\label{calc1},\\ \frac{1}{i^{1+\frac{1}{n+1}}}-\frac{1}{(i+1)^{1+\frac{1}{n+1}}}&\leqslant \frac{1}{i^{\gamma}},\qquad i\geqslant n+1\label{calc2}. \end{align}\end{subequations} \par In fact, \Cref{calc1} holds trivially for $i=1,2$. Put \begin{displaymath} g_1(x)=\frac{1}{x^{1+\frac{1}{x}}},\qquad x>0. \end{displaymath} Differentiating $g_1$, we obtain \begin{displaymath} \abs{ g_1^\prime(x)}=\frac{1}{x^{2+\frac{1}{x}}}\Bigl(1+\frac{1-\log x}{x}\Bigr) \leqslant \frac{1}{x^{2}} \leqslant \frac{1}{x^{\gamma}},\qquad \text{if } x\geqslant \mathrm{e}. \end{displaymath} By the Lagrange mean value theorem, \Cref{calc1} holds. \par We turn to \Cref{calc2}.
Denote $\varepsilon=\frac{1}{n+1}$; then we have \begin{displaymath}\begin{split} \frac{1}{i^{1+\varepsilon}}-\frac{1}{(i+1)^{1+\varepsilon}}&=\frac{(i+1)^{1+\varepsilon}-i^{1+\varepsilon}}{i^{1+\varepsilon}(i+1)^{1+\varepsilon}} \leqslant \frac{(1+\varepsilon)(i+1)^{\varepsilon}}{i^{1+\varepsilon}{(i+1)}^{1+\varepsilon}}\\ &=\frac{1+\varepsilon}{i^{1+\varepsilon}(i+1)}= \frac{1}{i^{2}}(1+\varepsilon)\frac{i^{1-\varepsilon}}{i+1}, \end{split}\end{displaymath} where ``$\leqslant$'' is obtained by the mean value theorem.\\ Define \begin{displaymath} g_2(x)=(1+\varepsilon)\frac{x^{1-\varepsilon}}{x+1},\qquad x>0. \end{displaymath} Elementary calculus shows that $g_2$ is decreasing on the interval $[n+1,\infty)$. One can also verify easily that $g_2(n+1)\leqslant 1$. Therefore \begin{displaymath} g_2(i)=(1+\varepsilon)\frac{i^{1-\varepsilon}}{i+1}\leqslant 1,\qquad i\geqslant n+1, \end{displaymath} and \Cref{calc2} follows.\\ By \Cref{calc}, $\bigl(y^{(n)}_i\bigr)_{i\geqslant 1}$ satisfies \Cref{in_serg_con_eq} in the current setup: \begin{equation}\label{gameq} d^{(n)}_i\leqslant d^{(n)}_{i+1}+\frac{1}{i^\gamma},\qquad i\geqslant1, \end{equation} and $\{y^{(n)}\}^{\infty}_{n=1}$ is a sequence satisfying all the conditions in \Cref{in_serg_con}. Consequently, we conclude that the $Q$-process is non-strongly ergodic if $\gamma\leqslant 2$. \par b) We use \Cref{in_serg_con} to deduce non-strong ergodicity with a different testing sequence. Define \begin{numcases} {d^{(n)}_i=} \frac{1}{(i+9)\log(i+9)}, &$1\leqslant i \leqslant n$, \nonumber\\ \frac{1}{(n+9)\log(n+9)}-\sum^{i-1}_{k=n}\frac{1}{k^2}, \quad&$i\geqslant n+1$\nonumber.
\end{numcases} Because \begin{displaymath}\begin{split} &\sum^{\infty}_{k=n}\frac{1}{k^2}>\int^{\infty}_{n}\frac{1}{x^2}\df{x} =\frac{1}{n}>\frac{1}{(n+9)\log(n+9)},\quad n\geqslant 1,\\ &\frac{1}{(i+9)\log(i+9)}-\frac{1}{(i+10)\log(i+10)}\leqslant \frac{1}{i^2},\quad i\geqslant 1, \end{split}\end{displaymath} it is straightforward to verify that $\{y^{(n)}\}^{\infty}_{n=1}$, with $ y^{(n)}_k=\sum^k_{i=1} d^{(n)}_i\,(k\geqslant 1)$, is a sequence satisfying the conditions in \Cref{in_serg_con}. \par c) We now turn to non-ergodicity. For each $n\geqslant1$, we set \begin{displaymath} y^{(n)}_0=n+1,\quad y^{(n)}_i=\sum^i_{k=1}d^{(n)}_k\,(i\geqslant1), \end{displaymath} where $d^{(n)}_k=n-\sum_{j=1}^{k-1}\frac{1}{j}\,(k\geqslant1)$, with the convention that the empty sum is $0$. Hence for each $n\geqslant 1$, since $d^{(n)}_{i+1}=d^{(n)}_i-\frac{1}{i}$ and $\frac{1}{i}\leqslant\frac{1}{i^{\gamma}}$ when $\gamma\leqslant1$, $\bigl(y^{(n)}_i\bigr)_{i\geqslant0}$ satisfies \begin{displaymath}\begin{split} y^{(n)}_0&\leqslant y^{(n)}_1+1,\\ d^{(n)}_i&\leqslant d^{(n)}_{i+1}+\frac{1}{i^\gamma},\qquad i\geqslant1, \end{split}\end{displaymath} which is exactly \Cref{in_erg_con_eq} in the current setup. So $\{y^{(n)}\}^{\infty}_{n=1}$, with $y^{(n)}=\bigl(y^{(n)}_i\bigr)_{i\geqslant0}$, is a sequence satisfying the conditions in \Cref{in_erg_con}. Therefore, the $Q$-process is non-ergodic for $\gamma\leqslant1$. \end{proof} \par We further investigate a multi-dimensional version of \Cref{gamma}. \renewcommand{\thetheorem}{\ref{gamma}$^\prime$} \addtocounter{theorem}{-1} \begin{example}\label{gammamult} Let $S$ be a finite set, $E = (\Z_+)^S$ and $p(u, v)$ a transition probability matrix on $S$. We denote by $\theta \in E$ the element whose components are identically 0 and by $e_u \in E$ the unit vector whose component at site $u \in S$ is equal to 1 and whose other components at $v \neq u$ all equal 0.
Define an irreducible $Q$-matrix $Q=\bigl(q(x,y)\st x,y\in E\bigr)$ as follows: \begin{numcases} {q(x, y)=} x(u)^{\gamma}, \nonumber &if\/ $y=x+e_u$,\enspace $x\neq\theta$, \\ 1, \nonumber &if\/ $x=\theta$,\enspace $y=e_u$, \\ x(u)^{\gamma}, \nonumber &if\/ $y=x-e_u$, \\ x(u)p(u,v), \quad\nonumber &if\/ $y=x-e_u+e_v$,\enspace $v\neq u$,\\ 0, & other $y\neq x$\nonumber, \end{numcases} and $ q(x) = -q(x,x)=\sum_{y\neq x}q(x,y)$, where $x = \bigl(x(u)\st u\in S\bigr) \in E$. It is easy to check by \cite[Theorem 1]{yanchen1986} that the $Q$-process is unique for all $\gamma\in \R$. We now prove the following results: \begin{enumerate}[(1)] \item When $\gamma\leqslant 2$, the $Q$-process is non-strongly ergodic. \item When $\gamma\leqslant 1$, the $Q$-process is non-ergodic. \end{enumerate} \end{example} \renewcommand{\thetheorem}{\arabic{theorem}} \begin{proof} We will reduce the multi-dimensional problem to the 1-dimensional case. We write $\abs{x}=\sum_{u\in S}x(u)$ for $x\in E$ and $E_i=\bigl\{x\in E\st\abs{x}=i\bigr\}$ for $i\geqslant 0$. \par a) Using \Cref{in_serg_con}, to prove that the $Q$-process is non-strongly ergodic for $\gamma\leqslant2$, we need only construct a sequence $\{F^{(n)}\}^{\infty}_{n=1}$ satisfying the conditions there. We guess that $F^{(n)}$ is identically $f^{(n)}_i$ on $E_i$ for each $i\geqslant1$, and set \begin{displaymath} d^{(n)}_1=f^{(n)}_1,\quad d^{(n)}_i=f^{(n)}_i-f^{(n)}_{i-1}\,(i\geqslant 2). \end{displaymath} Now, \Cref{in_serg_con_eq} becomes \begin{displaymath} d^{(n)}_i\leqslant d^{(n)}_{i+1}+\frac{1}{\sum_{u\in S}x(u)^{\gamma}},\qquad x\in E_i,\enspace i\geqslant 1.
\end{displaymath} Because \begin{displaymath} \sum_{u\in S}x(u)^{\gamma}\leqslant \sum_{u\in S}x(u)^2\leqslant \Bigl(\sum_{u\in S}x(u)\Bigr)^2=i^2,\qquad x\in E_i,\enspace i\geqslant 1,\enspace \gamma\leqslant2, \end{displaymath} we need only construct a sequence satisfying \begin{displaymath} d^{(n)}_i\leqslant d^{(n)}_{i+1}+\frac{1}{i^2},\qquad i\geqslant1, \end{displaymath} which is exactly \Cref{gameq} with $\gamma=2$. Now we can proceed as in \Cref{gamma}. The $Q$-process is therefore non-strongly ergodic if $\gamma\leqslant2$. \par b) To deal with non-ergodicity, according to the discussion in a) and using similar notations, we need only consider the system \begin{displaymath}\begin{split} y_0&\leqslant y_1+1,\\ d_i&\leqslant d_{i+1}+\frac{1}{i},\qquad i\geqslant1. \end{split}\end{displaymath} We can then proceed as in part c) of the proof of \Cref{gamma}. Hence the multi-dimensional process is non-ergodic for $\gamma\leqslant1$. \end{proof} \noindent {\bf Acknowledgement:} The author thanks Prof.\@ Mu-Fa Chen for his careful guidance and valuable suggestions. This work is supported by the National Natural Science Foundation of China (Grant No.\@ 11771046). \bibliographystyle{plain}